\section{Introduction}
The use of deep neural networks as function approximators has enabled
reinforcement learning to tackle more complex problems \citep{van2015blocks}.
However, adaptively scaling and generalising trained agents to larger
and more complex environments remains challenging. Object Oriented
Reinforcement Learning (OORL) attempts to improve the scalability of
solutions by learning to behave optimally with respect to classes of
objects in an environment. This knowledge can later be generalised to
other environments in which the same classes of objects are present
\citep{watters2019cobra}.
Most object-oriented reinforcement learning approaches aim to extract
objects from the environment using vision-based or vision-free methods
\citep{watters2019cobra,kansky2017schema,keramati2018strategic}.
An agent learns how the environment ``works'' in a task-free exploration
phase by constructing a global transition model wherein the attributes
of objects from different classes can be manipulated (e.g. moving
objects). These global transition models are amalgamated representations
of objects (e.g. interaction networks and interaction graphs; \citealp{battaglia2016interaction}).
Global transition models enable a single agent to adjust object attributes
to obtain high rewards. However, they fail if objects are dynamically
introduced to or removed from the scene: major re-training of the global
transition model is required to keep it accurate under such changes.
Other methods learn the local transition models of the objects
\citep{watters2019cobra,scholz2014physics} and pass the learnt object
properties to a single agent, which performs an action to maximise its
return. However, in such settings objects are considered neutral and
are not allowed to perform any action or construct their own reward
function. As a result, the global reward model must also be re-trained
after even a slight change in the environment.
These difficulties motivate an alternative approach of \emph{``using
objects in the environment with a plug and play functionality''}.
We define three main requirements for a plug and play environment:
\begin{itemize}
\item Factorising the global transition model of the environment into local
transition models of the classes of objects that are present in the
environment,
\item Factorising the single global reward model into object-specific
reward models; and
\item Allowing adaptation of object specific reward models in the new environment.
\end{itemize}
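The three requirements above can be sketched as a minimal data layout. This is an illustrative sketch, not the paper's implementation; all names (\texttt{ObjectClass}, \texttt{adapt\_reward}, the toy models) are hypothetical:

```python
# Illustrative sketch of the three requirements (all names hypothetical):
# each object class carries its own local transition model and reward model,
# and the reward model can be adapted when plugged into a new environment.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = List[float]

@dataclass
class ObjectClass:
    name: str
    # requirement 1: a local transition model per class, not one global model
    transition_model: Callable[[State], State]
    # requirement 2: an object-specific reward model instead of a global one
    reward_model: Dict[str, float] = field(default_factory=dict)

    def adapt_reward(self, key: str, value: float) -> None:
        # requirement 3: the reward model may be adapted in the new environment
        self.reward_model[key] = value

wall = ObjectClass("rotating_wall", transition_model=lambda s: [-0.8 * x for x in s])
wall.adapt_reward("basket_bonus", 1.0)
```

Because each class bundles its own models, adding or removing an object only adds or removes entries of this form, rather than invalidating a shared global model.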
Following the requirements of a plug and play approach, as the first
step, we eliminate the need for a single agent to adjust all objects.
Instead, independent objects inherit attributes from their class and
maintain their own local transition model and reward model. The global
transition dynamics is represented as a union of local transition
models, each with respect to one class of active objects in the scene.
Transition dynamics from an object class are pre-learnt and ready
for use in a new environment. Scenes can also be dynamically configured
with the addition and removal of objects, and the number of objects can
be arbitrarily large, as objects do not share a common reward model or
transition model. Additionally, we develop a novel \textquoteleft trust
factor\textquoteright{} based transfer mechanism for reward functions
across environments. We term our approach \emph{Plug and Play Reinforcement
Learning} (PaPRL). Figure \ref{fig:motiv} shows the overall structure
of a plug and play approach.
Experiments show that our representation achieves sample-efficiency
in a variety of set-ups, from simple to complex environments. Building
upon the plug and play environments, objects can be arbitrarily added
or removed during run time. To illustrate the effects of the local
transition model of a class of objects, we consider two cases: (1)
learning the local transition models in an inexpensive and fast simulator
and then transferring/plugging them into a new environment (PaPRL-offline),
and (2) learning the local transition model for each object during run
time (PaPRL-online).
\section{Related Work}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{Images/motiv_main}
\end{center}
\caption{A plug and play environment in which objects inherit a reward model
and a transition model from their class.}
\label{fig:motiv}
\end{figure}
This work builds upon three areas of related research:
\textbf{Physics-Based Reinforcement Learning}: In physics-based reinforcement
learning, Deep Lagrangian Networks \citep{lutter2019deep} and Interaction
Networks \citep{battaglia2016interaction} aim to reason about the
way objects in complex systems interact. Deep Lagrangian Networks
(DeLaN) \citep{lutter2019deep} encode the physics prior in the form
of a differential equation in the network topology to achieve lower
sample complexity and better extrapolation. However, these works use
a global transition model. Whilst they can naturally handle an arbitrary
number of objects, they are not flexible in including new classes of
objects. Consequently, a new class of objects requires complete retraining
of the global transition model; hence, this approach may not be feasible
in a plug and play setting, in which new classes of objects may be
introduced at any time.
\textbf{Object-Oriented Reinforcement Learning}: Object-oriented MDPs
\citep{diuk2008object} are an extension to standard MDPs to construct
the global transition dynamics of the environment and develop an object-oriented
non-transferable reinforcement learning. Following this approach, some
studies address the need for an object-centric understanding of the
environment \citep{cobo2013object,garnelo2016towards,keramati2018strategic};
however, they do not naturally transfer to a new scene, as the objects'
properties cannot be reused in a new environment. Keramati et al.
\citep{keramati2018strategic} aim to boost the performance of agents
in a large state space with sparse rewards. Schema networks
\citep{kansky2017schema} use a vision system that detects and tracks
entities in an image. Given the entities, a self-transition variable
is defined to represent the probability that a position attribute
remains active in the next time step. While this self-transition variable
can be sufficient in some video games, it may not be suitable for
more complex physical environments with many object attributes.
Scholz et al. \citep{scholz2014physics} propose a physics-based
model prior for object-oriented reinforcement learning (PBLR) with
a focus on robotics applications. This representation defines a state-space
dynamics function in terms of the agent's beliefs over objects' inertial
parameters and the existence and parametrisation of physical constraints.
PBLR's approach is quite different, and its solutions do not translate
to the setting of a plug and play framework, in which every object
holds its own inherited properties. Model-based reinforcement learning
approaches such as COBRA \citep{watters2019cobra} aim to learn representations
of the world in terms of objects and their interactions, and pass this
knowledge as a transition model to a single agent with a global
reward model. Hence, this approach cannot be used in a plug and play
environment, in which objects act based on their own local
transition model and their own reward model.
\textbf{Multi-Agent Reinforcement Learning}: These approaches may
also be considered related, as in our proposed method objects
can perform actions in the environment. However, the focus of multi-agent
methods is generally on improving cooperation among agents
with the same class of attributes to maximise a common long-term return,
and thus they are not suitable for a plug and play environment, where
objects can come from a variety of classes. MOO-MDP \citep{da2017moo} is
a study that combines the concept of object-oriented representation
with multi-agent reinforcement learning. This method is model-free
but uses a global transition model to solve deterministic
multi-agent, object-oriented problems with discrete actions. The
knowledge extracted from objects is not transferable to a new environment.
Model-based multi-agent reinforcement learning approaches such as
\citet{bargiacchi2020model} aim to learn a model of the environment
and update a single reward model in a sample-efficient manner through
cooperation among agents. We, however, are interested in modelling local
transition dynamics, and we consider the agents to act independently.
\textbf{Relations to Plug and Play Environments}: Based on the three
main requirements of a plug and play environment, physics-based reinforcement
learning approaches fail to fit this setting, as they only develop
a global transition model of the environment. Whilst some of the object-oriented
reinforcement learning approaches construct local transition models,
they develop a single reward model that is incompatible with the
plug and play approach. Additionally, the local transition models of objects
in such methods are environment-specific and thus not reusable in new environments.
Model-based multi-agent reinforcement learning methods generally work
with similar agents sharing the same classes of attributes, and their
focus is on improving the cooperation of agents with the help of a
global transition model to achieve the highest return. Consequently,
such approaches cannot be applied to a plug and play environment.
\section{Plug and Play Markov Decision Processes}
Following standard sequential decision-making formulations, our plug and play
reinforcement learning framework is built on MDPs and object-oriented
MDPs. A Plug and Play Markov Decision Process (PaP-MDP)
is described by $\langle\mathcal{O},\mathcal{C},\mathcal{Z},\mathcal{A},\mathcal{P},\mathcal{R}\rangle$,
where:
\begin{itemize}
\item $\mathcal{O}=\{o_{1},o_{2},\ldots,o_{O}\}$ is the set of objects
present in the environment,
\item $\mathcal{C}=\{c_{1},c_{2},\ldots,c_{C}\}$ is the set of classes
of objects present in the environment,
\item $\mathcal{Z}=\{z_{1},z_{2},\ldots,z_{O}\}$ is the set of booleans
that determine whether an object is allowed to take an action in the
environment: $z_{i}\in\{0,1\}$ and $\sum_{i\in\{1,\ldots,O\}}z_{i}=O^{\prime}$,
$1\leq O^{\prime}\leq O$,
\item $\mathcal{A}=\{a_{1},a_{2},\ldots,a_{O}\}$ is the set of valid
actions the objects $o_{i}\in\mathcal{O}$ may take at every time-step
if $z_{i}=1$,
\item $\mathcal{P}=\{p_{1},p_{2},\ldots,p_{O}\}$ is the set of local
transition models for the objects $o_{i}\in\mathcal{O}$ with $z_{i}=1$; and
\item $\mathcal{{R}}$ is the global reward function.
\end{itemize}
Different classes of objects have distinct attributes: $\mathrm{{att}}(c_{i})=\{e_{1}^{i},\ldots,e_{E_{i}}^{i}\},\forall i\in\{1,\ldots,C\}$,
and $E_{i}$ is the number of attributes for class $c_{i}$. These
attributes are inherited by new objects generated from class $c_{i}$.
To efficiently determine the class of each object, we define $\mathrm{C}(o_{i})=c_{k\in\{1\ldots C\}}$
that returns the class of an object. Attributes of objects may be
changed only by that specific object. If $z_{i}=1$, then $o_{i}$ can
perform an action; we term such objects \textbf{\emph{active objects}}.
On the other hand, if $z_{i}=0$, then $o_{i}$ is not permitted to perform
any action in the environment; we term these objects \textbf{\emph{neutral
objects}}, $\mathcal{N}=\cup o_{i}\ (\forall i\in\{1,\dots,O\}\land z_{i}=0)$,
and assign them the null action $a_{i}=\emptyset$. Active objects can observe
the neutral objects' attributes. However, active objects are unaware
of each others' attributes and they do not interact directly with
each other.
Given the definition of objects and attributes, the state of an object
$o_{i}$ in the environment is defined as:
\begin{equation}
o_{i}.s=[o_{i}.e_{1}^{\mathrm{C}(o_{i})},\ldots,o_{i}.e_{E}^{\mathrm{C}(o_{i})}]\label{eq:o_state}
\end{equation}
where the ``dot'' notation denotes the attribute of an object.
The state of PaP-MDP is defined as the union of all states for different
objects that are present in the environment:
\begin{equation}
\mathcal{S}=\cup_{\forall o\in\mathcal{{O}}}\quad o.s\label{eq:o_state_2}
\end{equation}
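The object-state and global-state definitions in Eqs. \ref{eq:o_state} and \ref{eq:o_state_2} can be illustrated as follows; the attribute schemas are abbreviated and hypothetical stand-ins for the classes used later in the experiments:

```python
# Sketch of the object-state and global-state definitions: an object's state is
# the vector of attributes inherited from its class, and the PaP-MDP state is
# the union of all object states. Attribute schemas are hypothetical.
CLASS_ATTRIBUTES = {
    "neutral_ball": ["vx", "vy", "spin", "angle"],
    "rotating_wall": ["friction", "elasticity", "rotation"],
}

def make_object(obj_id, cls, **attrs):
    # o_i.s = [o_i.e_1, ..., o_i.e_E] in the class's attribute order
    return {"id": obj_id, "class": cls,
            "s": [attrs[a] for a in CLASS_ATTRIBUTES[cls]]}

def global_state(objects):
    # S = union over all objects o of o.s
    return {o["id"]: o["s"] for o in objects}

ball = make_object("o1", "neutral_ball", vx=2.0, vy=-1.0, spin=0.0, angle=0.3)
wall = make_object("o2", "rotating_wall", friction=0.5, elasticity=0.7, rotation=0.1)
S = global_state([ball, wall])
```

Because the global state is just this union, adding or removing an object changes the state's composition without invalidating the per-class models.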
\subsection{Local Transition Models \label{subsec:Local-Transition-Models}}
Local transition models are defined for each class of active objects
in the environment. These local models act as dynamics models that
learn and encode the behaviour of different classes of active objects
as they interact with neutral objects. $p_{c,j}:o_{j}.s\rightarrow o_{j}.s^{\prime},\forall o_{j}\in\mathcal{N}$,
is an approximation of the local transition function for an active object
$o_{i}$, $c=\mathrm{C}(o_{i})$, that learns to predict the next
state of the neutral object $o_{j}$ in the environment after they
interact. Active objects can observe the attributes of neutral objects;
hence, they can simply record the attributes of the neutral object
before and after the interaction, e.g. a collision can be determined
based on the distance of the objects. As an example, Figure \ref{fig:dynamic}
shows an active object $o_{i}$ before and after collision with a
neutral object $o_{j}$. $p_{c,j},c=\mathrm{C}(o_{i})$, the local
transition model, predicts $o_{j}.s^{\prime}$ given $o_{j}.s$ -
that is the state of the neutral object before the collision with
$o_{i}$. Linear function approximation may suffice to describe simple
interactions; for more complex, non-linear behaviours such as collision,
a non-linear function approximator is required. In this case, we use
a neural network $\mathcal{D}$ with weights $\theta$ trained with
backpropagation to learn the interaction in a supervised
manner. The network is trained to minimise the MSE loss, using the
post-interaction state of the neutral object $o_{j}$ as the label:
\[
\mathcal{L}_{j}(\theta)=\mathrm{\mathbb{E}}[(o_{j}.s^{\prime}-\mathcal{D}(o_{j}.s,\theta))^{2}]
\]
The active objects thus predict the post-interaction state of the
neutral object before interaction.
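As a runnable stand-in for training $\mathcal{D}$, the sketch below fits a one-dimensional linear model $p(s)=ws+b$ by gradient descent on the same MSE loss over $\langle$pre-interaction, post-interaction$\rangle$ pairs; the "collision" that flips and damps a velocity component is a hypothetical example, not the paper's network:

```python
# Stand-in for training D_theta: fit a 1-D linear model p(s) = w*s + b by
# gradient descent on the MSE loss over <pre, post> interaction pairs.
# The interaction used to generate the pairs is hypothetical.
def fit_local_transition(pairs, lr=0.05, epochs=500):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for s, s_next in pairs:
            err = (w * s + b) - s_next   # prediction error on this pair
            w -= lr * 2 * err * s        # gradient of the squared error w.r.t. w
            b -= lr * 2 * err            # gradient w.r.t. b
    return w, b

# hypothetical interaction: post-interaction state s' = -0.8 * s
pairs = [(s, -0.8 * s) for s in (-2.0, -1.0, 0.5, 1.0, 2.0)]
w, b = fit_local_transition(pairs)   # converges to w близко -0.8, b близко 0
```

A neural network replaces the linear model when, as the text notes, the interaction is non-linear; the loss and the supervised use of post-interaction labels stay the same.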
\begin{figure}[t]
\centering \includegraphics[width=0.8\columnwidth]{Images/p_1}\caption{The interaction between $o_{i}$ and neutral object $o_{j}$ is modeled
by $p_{c,j}$.}
\label{fig:dynamic}
\end{figure}
\subsection{Reward Learning Algorithm \label{subsec:Reward}}
Following the PHYRE study \citep{bakhtin2019phyre}, we design our main
experimental environment to be \emph{``wait and see''}. Each active
object ($\forall o_{i}\notin\mathcal{N}$) is allowed to perform an
action only when interacting with a neutral object $o_{j}$. If $o_{i}$
and $o_{j}$ interact more than once during an episode, $o_{i}$
is not allowed to perform further actions and is treated as a neutral
object until it receives the reward and the episode completes.
\subsubsection{Reward Model \label{subsec:Reward-Model}}
We use a specific type of DQN to model the reward function in the
environment. Note that in the proposed problem, objects cannot perform
more than one action in a single episode and the time-steps are eliminated.
As a result, the Q-network of an active object represents a model
of its reward function. All active objects have one common goal, yet
they are not aware of each other's attributes. Each object therefore
collects a set of state-action-reward triplets after interacting with
a neutral object. Given the trained local transition model
for each active object, we use triplets of the form \emph{``<post-interaction
state, action, reward>''}. We argue that the learned local transition
model provides critical prior knowledge about the physics of an object
that can improve the sample-efficiency of this approach. In a practical
sense, an active object can adjust itself by performing an action
that results in a post-interaction state which is more favourable
to achieve the highest reward. Consider the example of a ball as a~neutral
object ($o_{j}\in\mathcal{N}$) and an obstacle ($o_{i}\notin\mathcal{N}$)
as an active object. Given the collision as a type of interaction
we are interested in, Figure \ref{fig:dynamic_2} shows that given
$p_{c,j},c=\mathrm{C}(o_{i})$, the local transition model, $o_{i}$
performs an action which results in desirable post-interaction state
that is expected to return the highest global reward:
\begin{equation}
a_{i}=\mathrm{argmax}_{a_{i}}\,o_{i}.\mathrm{Q}(p_{c,j}(o_{j}.s),a_{i}),\quad c=\mathrm{C}(o_{i})\label{eq:action}
\end{equation}
where $\mathrm{Q}$ is a neural network trained by minimising the MSE loss
between the prediction and the observed reward:
\begin{equation}
\mathcal{L}^{\prime}_{j}(\theta)=\mathbb{E}[(r(\mathcal{S},\mathcal{A})-o_{i}.\mathrm{Q}(p_{c,j}(o_{j}.s),a_{i}))^{2}],\quad c=\mathrm{C}(o_{i}).\label{eq:rew_loss}
\end{equation}
Note that $r(\mathcal{S},\mathcal{A})$ is the global reward function
based on the environment's set of states $\mathcal{S}$ and object's
set of actions $\mathcal{A}$.
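A minimal sketch of the action selection in Eq. \ref{eq:action}: the active object feeds the predicted post-interaction state of the neutral object into its reward model and keeps the highest-scoring candidate action. The transition model, reward model, and candidate set below are toy stand-ins:

```python
# Sketch of the action-selection rule: score each candidate action by feeding
# the predicted post-interaction state into the reward model Q and keep the
# argmax. p_c, Q, and the candidate actions are toy stand-ins.
def select_action(s_neutral, candidates, p_c, Q):
    s_post = p_c(s_neutral)                      # predicted post-interaction state
    return max(candidates, key=lambda a: Q(s_post, a))

p_c = lambda s: -0.8 * s                         # toy local transition model
Q = lambda s_post, a: -abs(s_post + a - 1.0)     # toy reward model: prefers 1.0
best = select_action(-1.0, [0.0, 0.1, 0.2, 0.3], p_c, Q)
```

The key design choice mirrored here is that $p_{c,j}$ is queried once per interaction, while the reward model ranks the candidate actions.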
\begin{figure}[t]
\centering{}\includegraphics[width=0.9\columnwidth]{Images/p_2}\caption{An example of interaction between active object $o_{i}$ and neutral
object $o_{j}$. As $o_{i}$ observes the attributes of $o_{j}$,
it performs a rotation action $a_{i}$ that results in the post-interaction
state $o_{j}.s^{\prime}$ that is the most favourable to return the
highest reward based on the trained rewarding model. Solid blue arrows
show the trajectory of $o_{j}$ if no action is taken by $o_{i}$.
Green dashed arrows show the post-interaction trajectory of $o_{j}$
if action $a_{i}$ is performed.}
\label{fig:dynamic_2}
\end{figure}
\subsubsection{Trust Factor}
A key part of our plug and play reinforcement learning framework is
derived from the concept of object oriented environments. As demonstrated
in the reward learning section, each active object $o_{i}\notin\mathcal{N}$
constructs its own reward model $o_{i}.\mathrm{Q}$. However, $o_{i}$
also inherits a reward model from class $\mathrm{C}(o_{i})$ upon
initialisation, as the reward model is also one of the attributes of
that class. We denote the reward model that $o_{i}$ inherits from class
$c=\mathrm{C}(o_{i})$ as $c.\mathrm{Q}$. The inherited reward model
may not be the best choice as a new active object can be initialised
with different attributes such as position, length, weight, etc. We
impose no restrictions on $c.\mathrm{Q}$ and it can be a randomly
initialised neural network or any reward model from one of the previously
trained objects from that class. If the class reward model is selected
from one of the previously trained objects in the environment, a newly
initialised object may find the class reward function as a reasonable
starting model to partially rely on. To leverage the mixed use of
class and object models, we define a trust factor that intuitively
measures how accurately the reward model of an active object performs.
The trust factor of the approximated reward function $\mathrm{Q}$ at
iteration $t$ is defined as:
\begin{equation}
\mathrm{TF}_{t}(\mathrm{Q})=(1+\sum_{k=1}^{K}|r(\mathcal{S},\mathcal{A})-\mathrm{Q}(p_{c,j}(o_{j}.s),a_{i})|)^{-1},\enskip c=\mathrm{C}(o_{i})\label{eq:trust}
\end{equation}
where $K$ is the number of episodes an object waits before calculating
its trust in the current reward model $\mathrm{Q}\in\{o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q}\}$.
If the difference between the predicted and the actual received rewards
is considerable, the trust factor will be low and the object attempts
to use the alternative reward function for the next round. Algorithm
\ref{tab:tf} shows the trust factor calculation.
\begin{algorithm}[t]
\caption{Trust Factor Calculation for Active Objects}
\begin{algorithmic}[1]
\STATE \textbf{Input:}
\STATE Active objects present in the environment: $\forall o_{i}\notin\mathcal{N}$,
\STATE Neutral objects present in the environment: $\forall o_{j}\in\mathcal{N}$,
\STATE Reward models for all active objects $\mathrm{Q}\in\{o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q}\},o_{i}\notin\mathcal{N}$,
\STATE Initialize $K$, the step size and the trust threshold $h$
(see \ref{eq:trust}).
\STATE \textbf{Output: }Trust factors for all active objects at episode
$t$: $\mathrm{TF}_{t}(\mathrm{Q})$.
\FOR {$\forall o_i$}
\STATE $\mathrm{TF}_{t}(\mathrm{Q})=(1+\sum_{k=1}^{K}|r(\mathcal{S},\mathcal{A})-\mathrm{\mathrm{Q}}(p_{c,j}(o_{j}.s),a_{i})|)^{-1}$
\IF {$\mathrm{TF}_t(\mathrm{Q}) \leq h$}
\STATE $\mathrm{\mathrm{Q}=Replace}(o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q})\quad$\emph{//Replace
the current reward model with the other available reward model}
\ENDIF
\ENDFOR
\end{algorithmic}
\label{tab:tf}
\end{algorithm}
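The trust factor of Eq. \ref{eq:trust} and the model switch in Algorithm \ref{tab:tf} can be sketched as follows; the reward traces, threshold $h$, and string model handles are all hypothetical:

```python
# Sketch of the trust factor and the model switch. The reward traces,
# threshold h, and string model handles are hypothetical.
def trust_factor(predicted, received):
    # TF = (1 + sum_k |r_k - Q_k|)^(-1) over the last K episodes
    return 1.0 / (1.0 + sum(abs(r - q) for r, q in zip(received, predicted)))

def pick_model(current, alternative, predicted, received, h=0.5):
    # switch to the alternative reward model when trust drops to h or below
    tf = trust_factor(predicted, received)
    return (alternative if tf <= h else current), tf

model, tf = pick_model("object_Q", "class_Q",
                       predicted=[0.9, 0.1, 0.8], received=[0.2, 0.1, 0.1])
```

A perfectly accurate model yields a trust factor of exactly 1; growing prediction error drives the factor towards 0 and eventually triggers the switch.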
\begin{figure}[h]
\begin{centering}
\includegraphics[scale=0.30]{Images/p_3}
\end{centering}
\caption{Plug and Play Reinforcement Learning}
\label{fig:framework}
\end{figure}
Figure \ref{fig:framework} shows the overview of our proposed plug
and play reinforcement learning framework in which all the objects
inherit their class attributes and attempt to maximise their own constructed
reward. Algorithm \ref{tab:method} outlines the pseudo code of plug
and play reinforcement learning.
\begin{algorithm}[t]
\caption{Plug and Play Reinforcement Learning Algorithm}
\begin{algorithmic}[1]
\STATE \textbf{Input:}
\STATE Active objects present in the environment: $\forall o_{i}\notin\mathcal{N}$,
\STATE Neutral objects present in the environment: $\forall o_{j}\in\mathcal{N}$,
\STATE Reward models for all active objects $\mathrm{Q}\in\{o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q}\},\forall o_{i}\notin\mathcal{N}$,
\STATE Initialize $K$, the step size and the trust threshold $h$
(see \ref{eq:trust}).
\STATE \textbf{Output: }Updated estimates of $\mathrm{Q}\in\{o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q}\},\forall o_{i}\notin\mathcal{N}$.
\FOR {$\forall o_j,\ o_j \in \mathcal{N}$}
\FOR {$\forall o_i,\ o_i \notin \mathcal{N}$}
\IF {$\mathrm{interact}(o_j,o_i)$}
\STATE $a_{i}=\mathrm{argmax}_{a_{i}}\mathrm{Q}(p_{c,j}(o_{j}.s),a_{i}),\quad c=\mathrm{C}(o_{i})$.
\STATE Construct the $<o_{j}.s^{\prime}$, $a_{i}$, $r(\mathcal{S},\mathcal{A})>$
and append to the observation set.
\STATE Sample a random minibatch of triplets and update $\mathrm{Q}$
with minimising Loss function \ref{eq:rew_loss}.
\IF {$\mathrm{TF}_t(\mathrm{Q}) \leq h$}
\STATE $\mathrm{\mathrm{Q}=Replace}(o_{i}.\mathrm{Q},\mathrm{C}(o_{i}).\mathrm{Q})\quad$
\emph{//Replace the current reward (Algorithm 1)}
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{tab:method}
\end{algorithm}
\section{Experiments}
\begin{figure}
\begin{centering}
\begin{minipage}[t]{0.5\columnwidth}%
\noindent \begin{center}
\includegraphics[scale=0.23]{Images/env3_}
\par\end{center}%
\end{minipage}
\par\end{centering}
\begin{raggedright}
(a) Rotating Wall active objects and the neutral-ball.
\par\end{raggedright}
\begin{centering}
\begin{minipage}[t]{0.5\columnwidth}%
\noindent \begin{center}
\includegraphics[scale=0.23]{Images/env5}
\par\end{center}%
\end{minipage}
\par\end{centering}
\begin{raggedright}
(b) Arc Wall active objects (red) and the neutral-ball (blue).
\par\end{raggedright}
\begin{raggedright}
\caption{A sample experiment with active and neutral objects $\{o_{1}\}\in\mathcal{N},\{o_{2},o_{3},o_{4}\}\protect\notin\mathcal{N}$.
Active objects receive the highest reward if the neutral-ball falls
in the green basket.}
\par\end{raggedright}
\centering{}\label{fig:exp1}
\end{figure}
\begin{figure*}[t]
\begin{raggedright}
\begin{minipage}[t]{0.21\paperwidth}
\begin{center}
\includegraphics[width=0.8\textwidth]{Images/env1_}
\end{center}
\begin{center}
(a) An environment with $\{o_{1}\}\in\mathcal{N},\{o_{2}\}\notin\mathcal{N}$
as rotating wall.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.21\paperwidth}%
\begin{center}
\includegraphics[width=0.2\paperwidth]{Images/1D_wall}
\end{center}
\begin{center}
(b) Obtained reward from the environment.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.21\paperwidth}%
\begin{center}
\includegraphics[width=0.2\paperwidth]{Images/env_1_arc_}
\end{center}
\begin{center}
(c) An environment with $\{o_{1}\}\in\mathcal{N},\{o_{2}\}\notin\mathcal{N}$
as the arc wall.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.21\paperwidth}%
\begin{center}
\includegraphics[width=0.2\paperwidth]{Images/1D_arc}
\end{center}
\begin{center}
(d) Obtained reward from the environment.
\end{center}%
\end{minipage}
\end{raggedright}
\begin{raggedright}
\begin{minipage}[t]{0.2\paperwidth}%
\begin{center}
\includegraphics[width=0.8\textwidth]{Images/env2_}
\end{center}
\begin{center}
(e) An environment with $\{o_{1}\}\in\mathcal{N},\{o_{2},o_{3}\}\notin\mathcal{N}$
with two rotating walls.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.23\paperwidth}%
\begin{center}
\includegraphics[width=0.9\textwidth]{Images/2D_wall}
\end{center}
\begin{center}
(f) Obtained reward from the environment.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.2\paperwidth}%
\begin{center}
\includegraphics[width=0.8\textwidth]{Images/env6}
\end{center}
\begin{center}
(g) An environment with $\{o_{1}\}\in\mathcal{N},\{o_{2},o_{3}\}\notin\mathcal{N}$
as arc walls.
\end{center}%
\end{minipage}%
\begin{minipage}[t]{0.23\paperwidth}%
\begin{center}
\includegraphics[width=0.9\textwidth]{Images/2D_arc}
\end{center}
\begin{center}
(h) Obtained reward from the environment.
\end{center}%
\end{minipage}
\end{raggedright}
\caption{Single (first row) and many (second row) active objects (rotating
wall and arc wall) in the environment with a comparison of PaP-RL
and DQN.}
\label{fig:exp2}
\end{figure*}
In this section, we demonstrate the performance of PaP-RL in a simple
plug and play reinforcement learning platform\footnote{Code and videos are available in the supplementary materials.},
following which, we present detailed empirical comparisons against
the baselines.
\subsection{Experimental Setting\label{subsec:Settings-of-the}}
In our experiments, we introduce two types of objects: active objects
(Rotating Wall and Arc Wall) that can perform actions, and a neutral
object (Neutral-ball) that cannot. The two classes of active objects
have their own attributes that can be adjusted by performing the required
action to maximise the expected return.
\subsubsection{Rotating Wall}
The rotating wall class of active objects is constructed with the following
attributes:
\[
\mathrm{att}(C_{w})=[f_{w},e_{w},\theta_{w}],
\]
where $f_{w}\in[0.4,0.9]$ is the friction coefficient of the wall,
$e_{w}\in[0.4,0.9]$ is the elasticity coefficient of the wall,
and $\theta_{w}\in[-\frac{\pi}{10},\frac{\pi}{10}]$ is the rotation
angle of the wall.
\subsubsection{Arc Wall}
The arc wall class of active objects is designed with the following
attributes:
\[
\mathrm{att}(C_{a})=[f_{a},e_{a},\dot{v}_{a}],
\]
where $f_{a}\in[0.4,0.9]$ is the friction coefficient of the wall,
$e_{a}\in[0.4,0.9]$ is the elasticity coefficient of the wall, and
$\dot{v}_{a}\in[-50,50]$ is the angular velocity of the arc wall.
\subsubsection{Neutral-ball}
We assume the Neutral-ball class of objects has the following attributes:
\[
\mathrm{att}(C_{b})=[v_{x},v_{y},\dot{v_{b}},\theta_{b},f_{b},e_{b},m_{b}],
\]
where $v_{x}$ is the velocity along the x-axis, $v_{y}$ the velocity
along the y-axis, $\dot{v_{b}}$ the angular velocity, $\theta_{b}$
the angle of movement, $f_{b}\in[0.4,0.9]$ the friction coefficient
of the ball, $e_{b}\in[0.4,0.9]$ the elasticity coefficient of the
ball, and $m_{b}\in[5,25]$ the mass of the ball. As discussed before,
the post-interaction state of the ball as a neutral object needs to
be recorded in order to train the local transition model of each active
object. Hence, $o_{j}.s^{\prime}$ can be defined as:
\[
o_{j}.s^{\prime}=[v_{x}^{\prime},v_{y}^{\prime},\dot{v}_{b}^{\prime},\theta_{b}^{\prime}],
\]
where $v_{x}^{\prime}$, $v_{y}^{\prime}$, $\dot{v}_{b}^{\prime}$,
and $\theta_{b}^{\prime}$ are, respectively, the velocity along the
x-axis, the velocity along the y-axis, the angular velocity, and the
angle of movement after the interaction.
\subsubsection{Basket-Ball Platform }
We design a simple platform with one neutral object (blue ball) and
several active objects (either rotating or arc walls). The active
objects can perform actions to allow the blue ball to reach the basket.
A smooth reward function returns the maximum reward if the neutral-ball
moves into the desired location (e.g. the basket). The active object
$o_{i}$ receives a reward after interaction with neutral object $o_{j}$
as follows:
\begin{equation}
r(o_{j}.s^{\prime},a_{i})=\begin{cases}
1, & \text{if}\ o_{j}\ \mathrm{in\ Basket}\\{}
[\mathrm{Min}(d)]^{-1}, & \text{otherwise}
\end{cases}
\end{equation}
where $d$ is the distance of the ball from the centre of the basket
at each time-frame, $o_{j}.s^{\prime}$ is the post-interaction state
of $o_{j}$, and $a_{i}$ is the action taken by $o_{i}$ to adjust
its attributes. Figure \ref{fig:exp1} illustrates an example of this
platform with two different classes of active objects. At every episode,
a neutral object is generated at a random position and the active
objects are required to adjust their attributes by performing the
proper action. Note that every active object is only allowed to perform
a single action in an episode and must wait until it receives the global
reward from the environment. Further implementation details and more
experiments are available in the supplementary materials.
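The basket reward above can be sketched directly; the distance trace and the basket test are hypothetical inputs:

```python
# Sketch of the basket reward: maximum reward when the ball lands in the
# basket, otherwise the inverse of the closest approach to the basket centre.
# The distance trace and basket test are hypothetical inputs.
def basket_reward(distances, in_basket):
    # distances: ball-to-basket-centre distance at each time-frame
    return 1.0 if in_basket else 1.0 / min(distances)

r_hit = basket_reward([5.0, 2.0, 0.1], in_basket=True)    # ball reaches the basket
r_miss = basket_reward([5.0, 2.0, 4.0], in_basket=False)  # closest approach is 2.0
```

Using the closest approach over the whole episode makes the miss reward smooth: near misses earn more than distant ones, which gives the active objects a useful learning signal.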
\begin{figure*}[t]
\centering{}\includegraphics[width=0.6\paperwidth]{Images/ENV_Add}\caption{Effects of adding/removing active objects (red arc walls) on the reward
for PaPRL and MO-DQN.}
\label{fig:exp5}
\end{figure*}
\subsection{Single Active Object \label{subsec:Single-Active-Object}}
We start with an environment containing a single active object and
proceed to more complex environments. Our first experiment is based on
a rotating wall active object and a neutral-ball object.
\subsubsection{Baselines}
To compare our proposed method with other related approaches, we show
the results of the following approaches:
\begin{itemize}
\item \textbf{DQN as implemented in PHYRE} \citep{bakhtin2019phyre}: For
this baseline, we construct the Q-network with state-action pairs
and the returned reward as the target. The state in this case is
the state of the neutral object $o_{j}.s$ that is about to interact
with an active object $o_{i}$, which takes the action $a_{i}$
expected to result in the highest return. Similar to the PHYRE framework,
which deals with a continuous action space, we sample $10,000$ actions
at each episode and choose the one with the highest expected reward.
\item \textbf{PaP-Online:} PaP-RL with no pre-trained local transition function,
hence it learns the local transition models during the run time by
observing prior- and post-interaction states.
\item \textbf{PaP-Offline:} PaP-RL with pre-trained local transition function
from an inexpensive and fast simulator with no reward.
\end{itemize}
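The baselines' handling of the continuous action space (sampling candidate actions and keeping the best-scoring one) can be sketched as follows; the Q-function and action bounds are toy stand-ins:

```python
import random

# Sketch of the continuous-action handling used by the baselines: sample
# candidate actions uniformly and keep the one the Q-network scores highest.
# The Q-function and action bounds are toy stand-ins.
def best_sampled_action(q, low, high, n_samples, rng):
    candidates = [rng.uniform(low, high) for _ in range(n_samples)]
    return max(candidates, key=q)

rng = random.Random(0)  # fixed seed for reproducibility
a_star = best_sampled_action(lambda a: -(a - 0.25) ** 2, -1.0, 1.0, 10_000, rng)
```

With 10,000 uniform samples the chosen action lands very close to the true maximiser of the toy Q-function, at the cost of one Q evaluation per candidate.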
Figure \ref{fig:exp2} (first row) shows the results of the single active
object experiments. Our offline and online PaP-RL methods outperform
DQN, with the offline version being more sample-efficient as it relies
on a pre-trained local transition model. In contrast, the DQN approach,
which relies only on the prior-interaction state of the neutral object
and does not benefit from local transition models, improves at a slower rate.
\subsection{Many Active Objects}
We now extend the problem to incorporate many active objects $o_{i}\notin\mathcal{N}$
in the environment.
\subsubsection{Baselines}
To compare our proposed method with other related approaches, we use
two baselines:
\begin{itemize}
\item \textbf{MO-DQN in PHYRE} \citep{bakhtin2019phyre}: We extend the
implemented DQN to every active object in the environment and call
it Multi-Object DQN (MO-DQN). Following this approach, all the
active objects in the environment independently maintain a Q-network
based on state-action pairs and the returned reward, as explained in
the experimental settings. Hence, at every episode, $o_{i}$
selects the action that maximises the expected reward based on its
constructed reward model.
\item \textbf{PaP-Online:} Similar to the Single Active Object experiments,
we use PaP-RL with no pre-trained local transition function; hence
every active object in the environment learns the local transition
models by observing prior- and post-interaction states, and its own
reward model independently.
\end{itemize}
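The MO-DQN baseline can be sketched with a tabular stand-in for the per-object Q-networks: each active object keeps and greedily updates its own value estimates independently. The reward signal below is hypothetical and only illustrates the independent-learner structure.

```python
import numpy as np

class MODQNAgents:
    """Minimal tabular sketch of the MO-DQN baseline: every active object
    keeps an independent action-value estimate and greedily selects its
    own action. The table stands in for per-object Q-networks."""
    def __init__(self, n_objects, n_actions, seed=0):
        self.q = np.zeros((n_objects, n_actions))   # one row per active object
        self.rng = np.random.default_rng(seed)

    def act(self, eps=0.3):
        # Epsilon-greedy: each object independently maximises its estimate.
        greedy = self.q.argmax(axis=1)
        explore = self.rng.random(len(greedy)) < eps
        randoms = self.rng.integers(0, self.q.shape[1], size=len(greedy))
        return np.where(explore, randoms, greedy)

    def update(self, actions, rewards, lr=0.1):
        # Independent running-average update per object.
        for i, (a, r) in enumerate(zip(actions, rewards)):
            self.q[i, a] += lr * (r - self.q[i, a])

agents = MODQNAgents(n_objects=3, n_actions=4)
for _ in range(500):
    acts = agents.act()
    # Hypothetical reward signal: action 2 is best for every object.
    agents.update(acts, [1.0 if a == 2 else 0.0 for a in acts])
```

After training, every object's greedy policy converges to the best action, without any coordination between objects, which is exactly the independence assumption of this baseline.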
Figure \ref{fig:exp2} (second row) shows the results of our experiment
with many active objects in the environment and confirms that PaP-RL
outperforms DQN as the number of active objects grows, with the offline
version being better, as expected.
\subsection{Adding/Removing Objects}
As explained before, PaP-MDP is defined to incorporate the addition/removal of active objects in the environment. Figure \ref{fig:exp5}
shows the effects of adding/removing objects in the environment during
run time. We compare PaP-RL with the MO-DQN method by adding new objects
after 1000, 2000, and 3000 episodes. To illustrate the effects of
removal, two objects are removed at the 4000th episode. Figure \ref{fig:exp5}
shows that both PaP-RL and DQN experience a higher rate of improvement
when a new object is added to the environment (with the exception of
DQN after episode 3000). The reason for this boost is that the newly
added active objects are likely to prevent the neutral ball from leaving
the environment, i.e., from violating the environment boundaries
and receiving a low reward. Hence, the neutral ball is given more
chances for (possible) interactions with new active objects to fall
into the basket. Figure \ref{fig:exp5} also shows that PaP-RL-offline
outperforms the other baselines as it uses the pre-trained local transition
models.
\section{Conclusion}
In this paper, we proposed the Plug and Play Markov Decision Processes
to introduce a plug and play, object-centric reinforcement learning
approach. In our proposed plug and play approach, independent objects
inherit attributes from their class and maintain their own local transition
model and reward model. Accordingly, the global transition dynamics
is represented as a union of local transition models, each with respect
to one class of active objects in the scene. We also showed that in
this framework, scenes can also be dynamically configured through the addition
and removal of objects, and the number of objects can be arbitrarily large.
Our experimental results demonstrate the sample efficiency of our approach
compared with other related methods.
\section{Introduction\label{sec:Introduction}}
In the last decade, there has been a sharp increase in the demand
for data traffic~\cite{index2016global}. To address such massive
consumer demand for data communications, especially from the powerful
user equipment (UEs) such as smartphones and tablets, several noteworthy
technologies have been proposed~\cite{7126919}, such as small cell
networks (SCNs), cognitive radio, device-to-device (D2D) communications,
etc. In particular, D2D communications allow direct data transfer
between a pair of neighboring mobile UEs. Due to the short communication
distance between such pairs of D2D UEs, D2D communications hold great
promise in improving network performance such as coverage, spectral
efficiency, energy efficiency and so on~\cite{6970763}.
In the standardization of the 5th generation (5G) networks, orthogonal
frequency division multiple access (OFDMA) based D2D communications
adopt two types of spectrum sharing methods: (i) inband (e.g., using
the cellular spectrum) or (ii) outband (e.g., the unlicensed spectrum). In
particular, in inband D2D communications, D2D users can set up
their communications in an underlay or overlay manner. More specifically,
in an underlay setting, D2D users access the same spectrum as cellular
users (CUs), whereas in overlay, D2D users access a dedicated portion
of the cellular spectrum~\cite{3gpp}. Recently, D2D underlaying cellular
networks have been standardized by the 3rd Generation Partnership
Project (3GPP)\cite{TR36.814}. For the underlay inband D2D communications,
the most critical issue is to reduce the interference as cellular
links and D2D links share the same radio resources.
Although the reuse of the cellular spectrum via D2D can improve the
area spectral efficiency of the network, such D2D operations also
pose great challenges. The major challenge in the D2D-enabled cellular
network is the existence of inter-tier and intra-tier interference
due to the aggressive frequency reuse, where cellular UEs and D2D
UEs share the same spectrum. It is essential to design an effective
interference management scheme to control the interference generated
by the D2D links to the cellular links, and vice versa. Consequently,
there has been a surge of academic studies in this area. Transmission
power control~\cite{6909030,6928445,7933260,lee2015power}, distance
based mode selection~\cite{7147772} and a guard zone interference
control scheme~\cite{6047553,7147834,7676388} have been proposed
to solve this problem. In this paper, we present a novel mode selection
scheme based on the maximum received signal strength at each user
equipment (UE) to control the interference. In more detail, a UE will
operate in a cellular mode if its received signal strength from the
strongest base station (BS) is larger than a threshold $\beta$; otherwise,
it will operate in a D2D mode. This mitigates large interference from
the D2D links to the cellular links. To analyze the proposed interference
control scheme, we develop a theoretical framework that takes power
control, a practical path loss model and lognormal fading into account.
Based on our analytical results, we find a tradeoff between the maximization
of the ASE performance and the fairness of the D2D links, and we identify
the optimum setting of the threshold $\beta$ that maximizes the ASE.
Moreover, the path loss models of D2D links and cellular links in
a D2D-enabled cellular network are different due to the difference
in the heights and the locations of transmitters~\cite{our_work_TWC2016}.
It is well known that LoS transmission may occur when the distance
between a transmitter and a receiver is small, while NLoS transmission
is common in office environments and in central business districts.
Furthermore, when the distance between a transmitter and a receiver
decreases, the probability that a LoS path exists between them increases,
thereby causing a transition from NLoS transmission to LoS transmission
with a higher probability. Due to the proximity between D2D users,
the physical channels which constitute D2D communications are expected
to be complex in nature, experiencing both LoS and NLoS conditions
across these pairs, which are distinctly different from conventional
cellular environments~\cite{7890358}. Generally speaking, D2D links
are more likely to operate in LoS conditions while the cellular links
are more likely to operate in NLoS conditions. To the best of our
knowledge, there have been no studies that investigate the network
performance of D2D-enhanced cellular networks while adopting different
path loss models for the cellular links and the D2D links. Our analysis
shows a non-trivial difference in the network performance when considering
different path loss models for the cellular links and the D2D links,
which captures the different environmental conditions that cellular
links and D2D links operate in.
Compared with the existing work, the main contributions of this paper
are:
\begin{itemize}
\item We propose a tractable interference management scheme for each user
equipment (UE) to control the co-channel interference. Specifically,
a UE will operate in a cellular mode if its received signal strength
from the strongest base station (BS) is larger than a threshold $\beta$;
otherwise, it will operate in a D2D mode.
\item We present a general analytical framework using stochastic geometry
and the intensity matching approach of~\cite{7482733}. Then, we derive
the coverage probability and the ASE for both the cellular mode
and the D2D mode UEs. Our framework considers interference management,
LoS/NLoS transmission and shadow fading. The accuracy of our analytical
results is validated by Monte Carlo simulations.
\item Different from the existing work that does not differentiate the path
loss models between cellular links and D2D links, our analysis adopts
two different path loss models for cellular links and D2D links, respectively.
Our results demonstrate that the D2D links can provide a considerable
ASE gain when the threshold parameter is appropriately chosen. More
specifically, our analysis shows that the interference from the D2D
tier can be controlled by our mode selection scheme, and that there
is an optimal $\beta$ achieving the maximum ASE while the performance
of the cellular tier is guaranteed.
\end{itemize}
The rest of this paper is structured as follows. Section~\ref{sec:Related-Work}
provides a brief review of related work. Section~\ref{sec:System-Model}
describes the system model. Section~\ref{sec:General-Results} presents
our theoretical analysis on the coverage probability and the ASE with
applications in a 3GPP special case. The numerical and simulations
results are discussed in Section~\ref{sec:Simulation-and-Discussion}.
Our conclusions are drawn in Section~\ref{sec:Conclusion}.
\section{Related Work\label{sec:Related-Work}}
D2D communications underlaying cellular networks are ongoing standardization
topics in LTE-A~\cite{TR36.828}. Meanwhile, stochastic geometry,
which is accurate in modeling the irregular deployment of base stations
(BSs) and mobile user equipment (UEs), has been widely used to analyze
network performance~\cite{6042301,6516885,peng2014device}. Andrews
\emph{et al.} conducted network performance analyses for the downlink
(DL)~\cite{6042301} and the uplink (UL)~\cite{6516885} of SCNs, in
which UEs and/or BSs were assumed to be randomly deployed according
to a homogeneous Poisson point process (HPPP). In~\cite{peng2014device},
Peng developed an analytical framework for the D2D communications
underlaid cellular network in the DL, where a Rician fading channel
model was adopted to model the small-scale fast fading for the D2D
communication links. Some studies assumed that D2D links operate on
the DL spectrum, in which case the interference from BSs to D2D receivers
is severe. In practice, allowing D2D links to access the UL spectrum
is a more realistic assumption, as 3GPP has standardized D2D
communications~\cite{36.877}.
On the other hand, as one of the fundamental performance metrics of
the communication system, D2D transmission capacity has been analyzed
in the literature~\cite{6047553,6909030,6928445,7147772,7147834,7676388}.
In~\cite{6047553}, the author proposed an interference-limited area
control scheme to mitigate the interference from cellular to D2D considering
a single-slope path loss model. In~\cite{lee2015power}, Lee proposed
a power control algorithm to control the co-channel interference, in
which global channel state information is required at the BSs. In~\cite{7147772},
Liu provided a unified framework to analyze the downlink outage probability
in a multi-channel environment with Rayleigh fading, where D2D UEs
were selected based on the average received signal strength from the
nearest BS, which is equivalent to a distance-based selection. The
authors of~\cite{7147834} and~\cite{7676388} proposed novel approaches
to model the interference in the uplink or downlink underlaid/overlaid
with D2D, assuming Rayleigh fading and a single-slope path loss model.
Meanwhile, limited studies have been conducted to consider D2D networks
with general fading channels. For example, in~\cite{7890358} and~\cite{peng2014device},
the authors considered generalized fading conditions and analyzed
the network performance, but they did not differentiate the path
loss models between the D2D links and the cellular links.
Although the existing works have provided precious insights into resource
allocation and capacity enhancement for D2D communications, there
are several remaining problems:
\begin{itemize}
\item The mode selection schemes in the literature were not very practical:
they were mostly based on the UE-to-BS distance, whereas a more practical
scheme based on the maximum received signal strength should be
considered.
\item In some studies, only a single BS with one cellular UE and one D2D
pair was considered, which did not take into account the influence
from other cells. Moreover, in most studies, the authors considered
D2D receiver UEs as an additional tier of nodes, independent of the
cellular UEs and the D2D transmitter UEs. Such a tier of D2D receiver
UEs without cellular capabilities appears from nowhere and is hard
to justify in practice.
\item The path loss model is not practical, e.g., the impact of LoS/NLoS
conditions has not been well studied in the context of D2D, and usually
the same path loss model was used for both the cellular and the D2D
tiers. In addition, shadow fading was widely ignored in the existing
analyses, which does not reflect realistic networks.
\end{itemize}
To sum up, in this paper we propose a more general framework
which takes into account a novel interference management scheme based
on the maximum received signal strength, probabilistic NLoS and LoS
transmissions, and lognormal shadow fading, and which sheds new light on
the interference management of coexisting D2D and cellular transmissions.
\section{System Model\label{sec:System-Model}}
In this section, we first explain the scenario of the D2D communication
coexisting with cellular network. Then, we present the path loss model
and the mode selection scheme.
\subsection{Scenario Description\label{subsec:Scenario-Description}}
We consider a D2D underlaid UL cellular network, where BSs and UEs,
including cellular UL UEs and D2D UEs, are assumed to be distributed
on an infinite two-dimensional plane $\mathbb{R}^{2}$.
We assume that the cellular BSs are spatially distributed according
to a homogeneous PPP of intensity $\lambda_{b}$, i.e., $\varPhi_{b}=\{X_{i}\}$,
where $X_{i}$ denotes the spatial location of the $i$th BS. Moreover,
the UEs are also distributed in the network region according to another
independent homogeneous PPP $\varPhi_{u}$ of intensity $\lambda_{u}$.
\subsection{Path Loss Model\label{subsec:Path-Loss-Model}}
We incorporate both NLoS and LoS transmissions into the path loss
model. Following~\cite{our_GC_paper_2015_HPPP,our_work_TWC2016},
we adopt a very general path loss model, in which the path loss $\zeta\left(r\right)$,
as a function of the distance $r$, is segmented into $N$ pieces
written as
\begin{equation}
\zeta\left(r\right)=\begin{cases}
\zeta_{1}\left(r\right), & \textrm{when }0\leq r\leq d_{1}\\
\zeta_{2}\left(r\right), & \textrm{when }d_{1}<r\leq d_{2}\\
\vdots & \vdots\\
\zeta_{N}\left(r\right), & \textrm{when }r>d_{N-1}
\end{cases},\label{eq:prop_PL_model}
\end{equation}
where each piece $\zeta_{n}\left(r\right),n\in\left\{ 1,2,\ldots,N\right\} $
is modeled as
\begin{equation}
\zeta_{n}\left(r\right)\hspace{-0.1cm}=\hspace{-0.1cm}\begin{cases}
\hspace{-0.2cm}\begin{array}{l}
\zeta_{n}^{\textrm{L}}\left(r\right)=A_{n}^{{\rm {L}}}r^{-\alpha_{n}^{{\rm {L}}}},\\
\zeta_{n}^{\textrm{NL}}\left(r\right)=A_{n}^{{\rm {NL}}}r^{-\alpha_{n}^{{\rm {NL}}}},
\end{array} & \hspace{-0.2cm}\hspace{-0.3cm}\begin{array}{l}
\textrm{LoS Probability:}~\textrm{Pr}_{n}^{\textrm{L}}\left(r\right)\\
\textrm{NLoS Probability:}~1-\textrm{Pr}_{n}^{\textrm{L}}\left(r\right)
\end{array}\hspace{-0.1cm},\end{cases}\label{eq:PL_BS2UE}
\end{equation}
where
\begin{itemize}
\item $\zeta_{n}^{\textrm{L}}\left(r\right)$ and $\zeta_{n}^{\textrm{NL}}\left(r\right),n\in\left\{ 1,2,\ldots,N\right\} $
are the $n$-th piece path loss functions for the LoS transmission
and the NLoS transmission, respectively,
\item $A_{n}^{{\rm {L}}}$ and $A_{n}^{{\rm {NL}}}$ are the path losses
at a reference distance $r=1$ for the LoS and the NLoS cases, respectively,
\item $\alpha_{n}^{{\rm {L}}}$ and $\alpha_{n}^{{\rm {NL}}}$ are the path
loss exponents for the LoS and the NLoS cases, respectively.
\end{itemize}
\noindent In practice, $A_{n}^{{\rm {L}}}$, $A_{n}^{{\rm {NL}}}$,
$\alpha_{n}^{{\rm {L}}}$ and $\alpha_{n}^{{\rm {NL}}}$ are constants
obtainable from field tests and continuity constraints~\cite{SCM_pathloss_model}.
As a special case, we consider a path loss function adopted in the
3GPP~\cite{TR36.828}, and we adopt two different path loss models
for cellular links and D2D links as
\begin{equation}
\zeta_{B}\left(r\right)\hspace{-0.1cm}=\hspace{-0.1cm}\begin{cases}
\hspace{-0.2cm}\begin{array}{l}
A_{B}^{{\rm {L}}}r^{-\alpha_{B}^{{\rm {L}}}},\\
A_{B}^{{\rm {NL}}}r^{-\alpha_{B}^{{\rm {NL}}}},
\end{array} & \hspace{-0.2cm}\hspace{-0.3cm}\begin{array}{l}
\textrm{LoS Probability:}~\textrm{Pr}_{B}^{\textrm{L}}\left(r\right)\\
\textrm{NLoS Probability:}~1-\textrm{Pr}_{B}^{\textrm{L}}\left(r\right)
\end{array}\hspace{-0.1cm},\end{cases}\label{eq:PL_BS2UEspecial case-1}
\end{equation}
and
\begin{equation}
\zeta_{D}\left(r\right)\hspace{-0.1cm}=\hspace{-0.1cm}\begin{cases}
\hspace{-0.2cm}\begin{array}{l}
A_{D}^{{\rm {L}}}r^{-\alpha_{D}^{{\rm {L}}}},\\
A_{D}^{{\rm {NL}}}r^{-\alpha_{D}^{{\rm {NL}}}},
\end{array} & \hspace{-0.2cm}\hspace{-0.3cm}\begin{array}{l}
\textrm{LoS Probability:}~\textrm{Pr}_{D}^{\textrm{L}}\left(r\right)\\
\textrm{NLoS Probability:}~1-\textrm{Pr}_{D}^{\textrm{L}}\left(r\right)
\end{array}\hspace{-0.1cm},\end{cases}\label{eq:PL_BS2UEspecial case-2}
\end{equation}
together with a linear LoS probability function as follows~\cite{TR36.828},
\begin{equation}
\textrm{Pr}_{B}^{\textrm{L}}\left(r\right)=\begin{cases}
1-\frac{r}{d_{B}} & 0<r\leq d_{B}\\
0 & r>d_{B}
\end{cases},\label{eq:LoS probability function-2}
\end{equation}
and
\begin{equation}
\textrm{Pr}_{D}^{\textrm{L}}\left(r\right)=\begin{cases}
1-\frac{r}{d_{D}} & 0<r\leq d_{D}\\
0 & r>d_{D}
\end{cases},\label{eq:LoS probability function-1}
\end{equation}
where $d_{B}$ and $d_{D}$ are the cut-off distances of the LoS link
for UE-to-BS links and UE-to-UE links, respectively. The adopted linear
LoS probability function is very useful because it includes other LoS
probability functions as special cases~\cite{our_work_TWC2016}.
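The piecewise LoS/NLoS path loss with the linear LoS probability above can be sampled per link realisation as follows. The propagation constants in the usage example are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def sample_path_loss(r, A_L, A_NL, alpha_L, alpha_NL, d_cut, rng):
    """Draw one realisation of the piecewise path loss: the link is LoS
    with the linear probability 1 - r/d_cut (zero beyond the cut-off),
    and the chosen branch then fixes the (A, alpha) pair."""
    pr_los = max(0.0, 1.0 - r / d_cut)        # linear LoS probability
    if rng.random() < pr_los:
        return A_L * r ** (-alpha_L)          # LoS branch
    return A_NL * r ** (-alpha_NL)            # NLoS branch

# Illustrative constants (assumed, not from the paper).
rng = np.random.default_rng(1)
pl_near = sample_path_loss(50.0, A_L=10**-10.38, A_NL=10**-14.54,
                           alpha_L=2.09, alpha_NL=3.75, d_cut=300.0, rng=rng)
pl_far = sample_path_loss(400.0, A_L=10**-10.38, A_NL=10**-14.54,
                          alpha_L=2.09, alpha_NL=3.75, d_cut=300.0, rng=rng)
```

Beyond the cut-off distance the LoS probability is zero, so the NLoS branch is taken with certainty, as the linear model requires.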
\subsection{User Mode Selection Scheme\label{subsec:User-Mode-Selection}}
There are two modes for UEs in the considered D2D-enabled UL cellular
network, i.e., the cellular mode and the D2D mode. Each UE is assigned
an operation mode by comparing its maximum received DL power from the
BSs with a threshold. In more detail, the considered user mode selection
criterion is formulated as
\begin{equation}
Mode=\begin{cases}
\textrm{Cellular}, & \textrm{if }P^{\ast}=\underset{b}{\max}\left\{ P_{b}^{\textrm{rx}}\right\} >\beta\\
\textrm{D2D}, & \textrm{otherwise}
\end{cases},\label{eq:modeselction}
\end{equation}
where the string variable $Mode$ takes the value of `Cellular' or
`D2D' to denote the cellular mode and the D2D mode, respectively.
In particular, if $P^{\ast}$ of a tagged UE is larger than a specific
threshold $\beta>0$, this UE is not appropriate to work in the D2D
mode due to its potentially large interference to cellular UEs. Hence,
it should operate in the cellular mode and directly connect with the
strongest BS; otherwise, it should operate in the D2D mode. The UEs
which are associated with cellular BSs are referred to as cellular
UEs (CUs). The distance from a CU to its associated BS is denoted
by $R_{B}$. Following~\cite{6928445}, we assume that the CUs are
distributed according to a non-homogeneous PPP $\varPhi_{c}$. For a
D2D UE, we adopt the same assumption as in~\cite{7147772} that it
randomly decides to be a D2D transmitter or a D2D receiver with equal
probability at the beginning of each time slot, and a D2D receiver UE
selects the strongest D2D transmitter UE for signal reception.
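As a quick illustration, the mode selection rule above reduces to a one-line threshold test on the strongest received power. The powers and thresholds below are hypothetical values chosen only to exercise both branches.

```python
def select_mode(received_powers, beta):
    """Mode selection rule: a UE operates in the cellular mode when the
    strongest received BS power exceeds the threshold beta, and in the
    D2D mode otherwise (all quantities in linear power units)."""
    return "Cellular" if max(received_powers) > beta else "D2D"

# Hypothetical received powers (in watts) from four candidate BSs.
powers = [3.2e-10, 8.9e-9, 1.1e-11, 6.0e-10]
mode_low = select_mode(powers, beta=1e-9)    # strongest power 8.9e-9 > beta
mode_high = select_mode(powers, beta=1e-8)   # strongest power below beta
```

Raising $\beta$ pushes more UEs into the D2D mode, which is exactly the knob the rest of the analysis tunes.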
The received power for a typical UE from a BS $b$ can be written
as
\begin{equation}
P_{b}^{\textrm{rx}}=\begin{cases}
A_{BL}P_{B}\mathrm{\mathcal{H}_{B}}\left(b\right)R_{B}^{-\alpha_{BL}} & \mathtt{\text{LoS}}\\
A_{BN}P_{B}\mathrm{\mathcal{H}_{B}}\left(b\right)R_{B}^{-\alpha_{BN}} & \textrm{otherwise}
\end{cases},\label{eq:maximumreceivedpower}
\end{equation}
where $A_{BL}=10^{\frac{1}{10}A_{BL}^{\textrm{dB}}}$ and $A_{BN}=10^{\frac{1}{10}A_{BN}^{\textrm{dB}}}$
denote the constants determined by the transmission frequency for BS-to-UE
links in LoS and NLoS conditions, respectively. $P_{B}$ is the transmission
power of a BS, $\mathrm{\mathcal{H}_{B}}\left(b\right)$ is the lognormal
shadowing from a BS $b$ to the typical UE. $\alpha_{BL}$ and $\alpha_{BN}$
denote the path loss exponents for BS-to-UE links with LoS and NLoS,
respectively. Based on the above system model, we can obtain the intensity
of CUs as $\lambda_{c}=q\lambda_{u}$, where $q$ denotes the probability
of $P^{\ast}>\beta$ and will be derived in a closed-form expression
in Section~\ref{sec:General-Results}. It is apparent that the D2D
UEs are distributed following another non-homogeneous PPP $\varPhi_{d}$,
the intensity of which is $\lambda_{d}=\left(1-q\right)\lambda_{u}$.
Considering that a required content file might not exist at a D2D
transmitter in reality, we assume that $\rho\%$ of the D2D transmitters
possess the required content files and deliver them to D2D receivers.
In other words, $\rho\%$ of the D2D links will actually work in
one time slot.
We assume an underlaid D2D model. That is, each D2D transmitter reuses
the frequency resources of the cellular UEs, which incurs inter-tier
interference from the D2D tier to the cellular tier. However, there
is no intra-cell interference among cellular UEs since we assume an
orthogonal multiple access technique in each BS. It follows that there
is only one uplink transmitter in each cell. Here, we consider a fully
loaded network with $\lambda_{u}\gg\lambda_{b}$, so that on each
time-frequency resource block, each BS has at least one active UE to
serve in its coverage area. Note that the case of $\lambda_{u}<\lambda_{b}$
is not trivial, as it even changes the capacity scaling law~\cite{Ding2017capScaling}.
In this paper, we focus on the former case, and leave the study of
$\lambda_{u}<\lambda_{b}$ to future work. Generally speaking, the
active CUs can be treated as a thinned PPP $\varPhi_{c}$ with intensity
$\lambda_{b}$, the same as that of the cellular BSs.
Moreover, we assume a channel inversion strategy for the power control
of cellular UEs, i.e.,
\begin{equation}
P_{c_{i}}=\begin{cases}
P_{0}\mathcal{\mathrm{\left(\frac{R_{i}^{\alpha_{BL}}}{\mathcal{H_{\mathrm{c_{i}}}}A_{BL}}\right)^{\varepsilon}}} & \mathtt{\text{LoS}}\\
P_{0}\mathcal{\mathrm{\left(\frac{R_{i}^{\alpha_{BN}}}{\mathcal{H_{\mathrm{c_{i}}}}A_{BN}}\right)^{\varepsilon}}} & \text{otherwise}
\end{cases},\label{eq:cupowercontrol}
\end{equation}
where $P_{c_{i}}$ is the transmission power of the $i$-th cellular UE,
$R_{i}$ is the distance of the $i$-th link from a CU to its target BS,
$\mathcal{H_{\mathrm{c_{i}}}}$ is the lognormal shadowing between the
target BS and the cellular UE, $\varepsilon\in(0,1]$ is the fractional
path loss compensation factor, and $P_{0}$ is the receiver sensitivity.
The BSs in the DL and the D2D transmitters use constant transmit powers
$P_{B}$ and $P_{d}$, respectively. Besides, we denote the additive
white Gaussian noise (AWGN) power by $\sigma^{2}$.
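A small sketch of the fractional channel inversion above, with illustrative propagation constants (not taken from the paper); with full compensation the received power at the BS equals the target $P_{0}$ exactly.

```python
def tx_power(R, H, los, P0=1e-9, eps=1.0,
             A_BL=1e-10, A_BN=1e-14, alpha_BL=2.09, alpha_BN=3.75):
    """Sketch of fractional channel inversion: a cellular UE scales its
    transmit power so that the path loss (including the shadowing
    realisation H) is compensated by the exponent eps in (0, 1].
    The propagation constants are illustrative, not from the paper."""
    A, alpha = (A_BL, alpha_BL) if los else (A_BN, alpha_BN)
    return P0 * (R ** alpha / (H * A)) ** eps

# With full compensation (eps = 1) the received power is exactly P0.
p = tx_power(R=120.0, H=0.7, los=True, P0=1e-9, eps=1.0)
received = p * 0.7 * 1e-10 * 120.0 ** -2.09   # H * A_BL * R^{-alpha_BL}
```

Partial compensation ($\varepsilon<1$) trades lower transmit power against residual path-loss-dependent received power, which is the usual uplink fairness/interference tradeoff.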
\subsection{Performance Metrics\label{subsec:The-Performance-Metrics}}
According to~\cite{6042301}, the coverage probability is defined
as
\begin{equation}
P_{Mode}\left(\gamma,\lambda_{u},\alpha_{B,D}\right)=\Pr\left[\textrm{SINR}>\gamma\right],\label{eq:definesinr}
\end{equation}
where $\gamma$ is the SINR threshold, the subscript string variable
$Mode$ takes the value of `Cellular' or `D2D', and the interference
in this paper consists of the interference from both cellular UEs and
D2D transmitters.
Furthermore, the area spectral efficiency (ASE) in $\textrm{bps/Hz/k\ensuremath{m^{2}}}$
can be formulated as
\begin{align}
 & A_{Mode}^{\textrm{ASE}}\left(\lambda_{Mode},\gamma_{0}\right)\label{eq:ase}\\
 & =\lambda_{Mode}\int_{\gamma_{0}}^{\infty}\log_{2}\left(1+x\right)f_{X}\left(\lambda_{Mode},x\right)dx,\nonumber
\end{align}
where $\gamma_{0}$ is the minimum working SINR for the considered
network, and $f_{X}\left(\lambda_{Mode},x\right)$ is the PDF of the
SINR observed at the typical receiver for a particular value of
$\lambda_{Mode}$.
For the whole network consisting of both cellular UEs and D2D UEs,
the sum ASE can be written as
\begin{equation}
A^{\textrm{ASE}}=A_{\textrm{Cellular}}^{\textrm{ASE}}+A_{\textrm{D2D}}^{\textrm{ASE}}.\label{eq:totalase}
\end{equation}
\section{Main Results\label{sec:General-Results}}
In this section, the performance of the UEs is characterized in terms
of the coverage probability and the ASE for both the cellular tier and
the D2D tier. The probability that a UE operates in the cellular mode
is derived in Section \ref{subsec:Probability-Operating-In}, and the
coverage probabilities of the cellular UEs and the D2D UEs are derived
in Section \ref{subsec:Cellular-mode} and Section \ref{subsec:Coverage-Probability-of},
respectively.
\subsection{Probability of Operating in the Cellular Mode\label{subsec:Probability-Operating-In}}
Due to the consideration of lognormal shadowing in our model, we use
the intensity measure matching method in~\cite{7482733} to first obtain
an equivalent network for further analysis. In particular, we transform
the original PPP with lognormal shadowing into an equivalent PPP with
the matched intensity measure and intensity. More specifically, define
$\overline{R}_{i}^{BL}=\mathrm{\mathcal{H}_{B}^{-1/\alpha_{BL}}}R_{i}^{BL}$
and $\overline{R}_{i}^{BN}=\mathrm{\mathcal{H}_{B}^{-1/\alpha_{BN}}}R_{i}^{BN}$,
where $R_{i}^{BL}$ and $R_{i}^{BN}$ are the distances separating
a typical user from its tagged strongest base station with LoS and
NLoS, respectively, while $\overline{R}_{i}^{BL}$ and $\overline{R}_{i}^{BN}$
are the equivalent distances separating a typical user from its tagged
nearest base station in the new PPP.
The transformed network consists of two non-homogeneous PPPs with
intensities $\lambda p^{NL}(R_{i})$ and $\lambda p^{L}(R_{i})$, which
represent the sets of NLoS and LoS links, respectively. Each UE is
associated with the strongest transmitter. Moreover, the intensities
$\lambda{}^{NL}(\cdot)$ and $\lambda{}^{L}(\cdot)$ are given by
\begin{equation}
\lambda{}^{NL}(t)=\frac{d}{dt}\varLambda^{NL}\left(\left[0,t\right]\right)
\end{equation}
and
\begin{equation}
\lambda{}^{L}(t)=\frac{d}{dt}\varLambda^{L}\left(\left[0,t\right]\right)
\end{equation}
respectively, where
\begin{equation}
\varLambda^{NL}\left(\left[0,t\right]\right)=\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{NL}}}p^{NL}(r)rdr\right]\label{eq:intensity nlos}
\end{equation}
and
\begin{equation}
\varLambda^{L}\left(\left[0,t\right]\right)=\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{L}}}p^{L}(r)rdr\right].\label{eq:intensity los}
\end{equation}
Similar definitions are adopted for the D2D tier as well. For the typical
receiver (BS or D2D RU), the transformed network yields exactly the same
coverage probability as the original network.
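As a minimal numerical sketch of the intensity measures in Eq.(\ref{eq:intensity los}) and Eq.(\ref{eq:intensity nlos}), the snippet below estimates $\varLambda^{L}\left(\left[0,t\right]\right)$ and $\varLambda^{NL}\left(\left[0,t\right]\right)$ by averaging over lognormal shadowing samples. The exponential LoS-probability model $p^{L}(r)=e^{-r/d_{B}}$ and the parameter values are assumptions for illustration only; the paper uses its own LoS probability function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters (illustration only): BS density expressed per m^2,
# 8 dB lognormal shadowing, LoS decay distance d_B = 300 m.
lam_B = 5e-6                  # 5 BS/km^2
sigma_dB, d_B = 8.0, 300.0
alpha_L, alpha_NL = 2.42, 4.28
H = 10.0 ** (rng.normal(0.0, sigma_dB, 20000) / 10.0)  # shadowing samples

def int_pL_r(u):
    """Closed form of int_0^u exp(-r/d_B) r dr."""
    return d_B ** 2 * (1.0 - (1.0 + u / d_B) * np.exp(-u / d_B))

def Lambda_L(t):
    # E_H[ 2 pi lam_B int_0^{t H^{1/alpha_L}} p_L(r) r dr ]
    upper = t * H ** (1.0 / alpha_L)
    return 2.0 * np.pi * lam_B * np.mean(int_pL_r(upper))

def Lambda_NL(t):
    # E_H[ 2 pi lam_B int_0^{t H^{1/alpha_NL}} (1 - p_L(r)) r dr ]
    upper = t * H ** (1.0 / alpha_NL)
    return 2.0 * np.pi * lam_B * np.mean(upper ** 2 / 2.0 - int_pL_r(upper))
```

Both measures start at zero and grow monotonically in $t$, as an intensity measure must.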
In this subsection, we present our results on the probability that
the UE operates in the cellular mode and on the equivalent distance
distributions in the cellular mode and the D2D mode. In
the following, we present our first result in Lemma~\ref{lem:When-operating-under},
which will be used in the later analysis of the coverage probability.
\begin{lem}
\label{lem:When-operating-under}The probability that a typical UE
connects to the strongest BS and operates in the cellular mode $q$
is given by
\begin{align}
q & =1-\exp\left[-\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{B}\int_{0}^{\left(\frac{P_{b}\textrm{A}_{BL}\mathcal{H}}{\beta}\right)^{1/\alpha_{BL}}}p^{L}(r)rdr\right]\right.\nonumber \\
& -\left.\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{B}\int_{0}^{\left(\frac{P_{b}\textrm{A}_{BN}\mathcal{H}}{\beta}\right)^{1/\alpha_{BN}}}p^{NL}(r)rdr\right]\right],\label{eq:q}
\end{align}
and the probability that the UE operates in the D2D mode is $\left(1-q\right)$.
\end{lem}
\begin{IEEEproof}
See Appendix A.
\end{IEEEproof}
Note that Eq.(\ref{eq:q}) explicitly accounts for the effects of shadow
fading, pathloss, transmit power, the spatial distribution of BSs and
the mode selection threshold $\beta$. From this result, one can see that
the HPPP $\phi_{u}$ can be divided into two PPPs: one with intensity
$q\lambda_{u}$ and one with intensity $(1-q)\lambda_{u}$,
representing cellular UEs and D2D UEs, respectively. As in~\cite{6928445},
we assume these two PPPs are independent.
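A hedged numerical sketch of Lemma~\ref{lem:When-operating-under}: evaluating Eq.(\ref{eq:q}) for a given $\beta$ reduces to two shadowing-averaged integrals. The LoS model $p^{L}(r)=e^{-r/d_{B}}$ and the unit choices below (distances in metres, powers in watts) are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parameters loosely mirroring Table 1 (assumed, for illustration).
lam_B = 5e-6                               # 5 BS/km^2 expressed per m^2
P_b = 10.0 ** (46.0 / 10.0) / 1e3          # 46 dBm -> W
A_BL, A_BN = 10.0 ** -3.08, 10.0 ** -0.27
alpha_BL, alpha_BN = 2.42, 4.28
d_B, sigma_dB = 300.0, 8.0
H = 10.0 ** (rng.normal(0.0, sigma_dB, 20000) / 10.0)  # lognormal shadowing

def int_pL_r(u):
    """Closed form of int_0^u exp(-r/d_B) r dr."""
    return d_B ** 2 * (1.0 - (1.0 + u / d_B) * np.exp(-u / d_B))

def q(beta_dBm):
    """Eq. (q): probability that a typical UE operates in the cellular mode."""
    beta = 10.0 ** (beta_dBm / 10.0) / 1e3           # dBm -> W
    uL = (P_b * A_BL * H / beta) ** (1.0 / alpha_BL)  # LoS integration bound
    uN = (P_b * A_BN * H / beta) ** (1.0 / alpha_BN)  # NLoS integration bound
    lam_L = 2.0 * np.pi * lam_B * np.mean(int_pL_r(uL))
    lam_NL = 2.0 * np.pi * lam_B * np.mean(uN ** 2 / 2.0 - int_pL_r(uN))
    return 1.0 - np.exp(-lam_L - lam_NL)
```

As expected from the lemma, `q` is decreasing in $\beta$: raising the RSS threshold shrinks both integration bounds and pushes more UEs into the D2D mode.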
\begin{figure}
\begin{centering}
\includegraphics[width=8.8cm]{cuprobablity}
\par\end{centering}
\caption{\label{fig1}The probability for a UE to operate in the cellular mode
versus the RSS threshold $\beta$, with $\textrm{P}_{B}=46\,\textrm{dBm}$ and
log-normal shadowing with zero mean, $\sigma_{B}^{2}=8\,\text{dB}$
and $\sigma_{D}^{2}=7\,\text{dB}$.}
\end{figure}
Fig.~\ref{fig1} illustrates the probability for a UE to operate in the cellular
mode based on Eq.(\ref{eq:q}). It can be seen that the simulation
results match the analytical results very well. From Fig.~\ref{fig1}, we can
see that over 50\% of UEs operate in the cellular mode when $\beta$
is smaller than $-55$\,dBm for a BS intensity of 5$\,\text{BS/k\ensuremath{m^{2}}}$.
This threshold increases to approximately $-37$\,dBm and $-35$\,dBm when
the BS intensity is 10$\,\text{BS/k\ensuremath{m^{2}}}$ and
15$\,\text{BS/k\ensuremath{m^{2}}}$, respectively. This indicates that
the percentage of CUs increases as the BS intensity grows.
\subsection{Coverage probability \label{subsec:Coverage-Probability}}
In this subsection, we investigate the coverage probability, i.e., the
probability that a receiver's signal-to-interference-plus-noise ratio
(SINR) is above a pre-designated threshold $\gamma$:
\begin{equation}
P_{Mode}\left(\gamma,\lambda_{u},\alpha_{B,D}\right)=\Pr\left[\textrm{SINR}>\gamma\right]\label{eq:definesinr-1}
\end{equation}
\noindent where $\gamma$ is the SINR threshold, the subscript string
variable $Mode$ takes the value of 'Cellular' or 'D2D', and the SINR
is calculated as
\begin{equation}
\mathrm{SINR}=\frac{P_{Mode}\zeta_{Mode}\left(r\right)\mathcal{H_{\mathit{Mode}}}}{I_{cellular}+I_{d2d}+N_{0}},\label{eq:SINR defined}
\end{equation}
where $\mathcal{H_{\mathit{Mode}}}$ is the lognormal shadowing between
the transmitter and the receiver in the cellular mode or the D2D mode,
and $P_{B}$, $P_{D}$ and $N_{0}$ are the transmit power of each BS, the
transmit power of each D2D UE transmitter and the additive white Gaussian
noise (AWGN) power at each receiver, respectively. $I_{cellular}$ and
$I_{d2d}$ are the cumulative interference terms given by
$I_{cellular}=\sum_{i:\,c_{i}\in\Phi_{c}\setminus signal}P_{c,i}\beta_{i}\mathcal{H}_{i}$
and $I_{d2d}=\sum_{j:\,d_{j}\in\Phi_{d2d}\setminus signal}P_{D}\beta_{j}\mathcal{H}_{j}$,
where $c_{i}$ and $d_{j}$ are the $i$-th interfering CU and the $j$-th
interfering TU, $P_{c,i}$ is the transmit power of the $i$-th interfering
CU, and $\beta_{i}$, $\beta_{j}$ and $\mathcal{H_{\mathrm{i}}}$, $\mathcal{H_{\mathrm{j}}}$
are the path losses associated with $c_{i}$ and $d_{j}$ and the lognormal
fading associated with $c_{i}$ and $d_{j}$, respectively.
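The coverage definition in Eq.(\ref{eq:definesinr-1}) can also be estimated directly by Monte Carlo. The sketch below is a deliberately simplified single-tier version of the SINR model in Eq.(\ref{eq:SINR defined}): one pathloss exponent, no LoS/NLoS split, and illustrative densities and powers (all assumptions, not the paper's calibrated 3GPP values).

```python
import numpy as np

rng = np.random.default_rng(2)

def coverage(gamma_dB, trials=4000):
    """Toy Monte-Carlo estimate of Pr[SINR > gamma] for a single tier."""
    gamma = 10.0 ** (gamma_dB / 10.0)
    P_tx, alpha, N0 = 1.0, 3.5, 1e-10    # tx power (W), pathloss exp., noise (W)
    lam_I, R, r0 = 1e-5, 2000.0, 50.0    # interferer density /m^2, sim radius, link (m)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam_I * np.pi * R ** 2)
        r = R * np.sqrt(rng.random(n))   # interferers uniform in the disc
        r = r[r > r0]                    # keep interferers beyond the signal link
        H = 10.0 ** (rng.normal(0.0, 8.0, r.size) / 10.0)  # lognormal shadowing
        S = P_tx * r0 ** (-alpha) * 10.0 ** (rng.normal(0.0, 8.0) / 10.0)
        I = np.sum(P_tx * r ** (-alpha) * H)
        hits += S / (I + N0) > gamma
    return hits / trials
```

As the analysis predicts, the empirical coverage is monotonically decreasing in the SINR threshold $\gamma$.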
\subsubsection{Coverage probability of cellular mode\label{subsec:Cellular-mode}}
Based on the path loss model in Eq.(\ref{eq:PL_BS2UEspecial case-1},\ref{eq:LoS probability function-2})
and the equivalence method in subsection~\ref{subsec:Probability-Operating-In},
we present our main result on $p_{c}^{\textrm{cov}}\left(\lambda,\gamma\right)$
in Theorem~\ref{thm:coverage of cellular mode}.
\begin{thm}
\noindent {\small{}\label{thm:coverage of cellular mode}}For the
typical BS which is located at the origin, considering the path loss
model in Eq.(\ref{eq:PL_BS2UEspecial case-1}) and the equivalence
method, the coverage probability $p_{c}^{{\rm {cov}}}\left(\lambda,\gamma\right)$
can be derived as
\begin{equation}
p_{c}^{{\rm {cov}}}\left(\lambda,\gamma\right)=T_{c}^{{\rm {L}}}+T_{c}^{{\rm {NL}}},\label{eq:Theorem_1_p_cov}
\end{equation}
where $T_{c}^{{\rm {L}}}=\int_{0}^{t_{LoS}}\left(\mathcal{\int_{\mathrm{-}\infty}^{\infty}\mathrm{\left[\frac{1-e^{-i\omega/\gamma}}{2\pi i\omega}\right]\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)}}d\omega\right)f_{\overline{R_{LCU}}}(r)dr$
and
\noindent $T_{c}^{{\rm {NL}}}=\int_{0}^{t_{NLoS}}\left(\mathcal{\int_{\mathrm{-}\infty}^{\infty}\mathrm{\left[\frac{1-e^{-i\omega/\gamma}}{2\pi i\omega}\right]\mathcal{F}_{\frac{\textrm{1}}{SINR^{NL}}}(\omega)}}d\omega\right)f_{\overline{R_{NLCU}}}(r)dr$,

\noindent $t_{LoS}=\left(\frac{\beta}{\textrm{\ensuremath{P_{B}A^{L}}}}\right){}^{-1/\alpha_{BL}}$, $t_{NLoS}=\left(\frac{\beta}{\textrm{\ensuremath{P_{B}A^{NL}}}}\right){}^{-1/\alpha_{BN}}$,

\noindent and $f_{\overline{R_{LCU}}}(r)$ and $f_{\overline{R_{NLCU}}}(r)$
are represented by
\begin{equation}
f_{\overline{R_{LCU}}}^{{\rm {L}}}\left(r\right)=\frac{\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{\overline{r_{1}}}\left({\rm {Pr}}^{{\rm {NL}}}\left(u\right)\right)\lambda_{B}^{NL}(u)du\right)\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{r}{\rm {Pr}}^{{\rm {L}}}\left(u\right)\lambda_{B}^{L}(u)du\right){\rm {Pr}}^{{\rm {L}}}\left(r\right)\lambda_{B}^{L}(r)}{q},\label{eq:geom_dis_PDF_UAS1_LoS_thm}
\end{equation}
and
\begin{equation}
f_{\overline{R_{NLCU}}}^{{\rm {NL}}}\left(r\right)=\frac{\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{\overline{r_{2}}}{\rm {Pr}}^{{\rm {L}}}\left(u\right)\lambda(u)du\right)\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{r}\left({\rm {Pr}}^{{\rm {NL}}}\left(u\right)\right)\lambda_{B}^{NL}(u)du\right){\rm {Pr}}^{{\rm {NL}}}\left(r\right)\lambda_{B}^{NL}(r)}{q},\label{eq:geom_dis_PDF_UAS1_NLoS_thm}
\end{equation}
\end{thm}
\noindent where $\overline{r_{1}}$ and $\overline{r_{2}}$ are given
implicitly by the following equations as
\begin{equation}
\overline{r_{1}}=\underset{\overline{r_{1}}}{\arg}\left\{ \zeta^{{\rm {NL}}}\left(\overline{r_{1}}\right)=\zeta_{n}^{{\rm {L}}}\left(\overline{r}\right)\right\} ,\label{eq:def_r_1}
\end{equation}
\noindent and
\begin{equation}
\overline{r_{2}}=\underset{\overline{r_{2}}}{\arg}\left\{ \zeta^{{\rm {L}}}\left(\overline{r_{2}}\right)=\zeta_{n}^{{\rm {NL}}}\left(\overline{r}\right)\right\} .\label{eq:def_r_2}
\end{equation}
\noindent In addition, $\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)$
and $\mathcal{F}_{\frac{\textrm{1}}{SINR^{NL}}}(\omega)$ are respectively
computed by
\begin{align}
\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega) & =\exp\left(-\int_{r}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathrm{\left(z^{\alpha_{BL}}\right)^{\varepsilon}}v^{-\alpha_{BL}}}{A_{BL}^{2\epsilon}\left(r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{LCU}}}(z)dz\right)\lambda_{B}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathcal{\mathrm{\left(\frac{z^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}}A_{BN}v^{-\alpha_{BN}}}{\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{LCU}}}(z)dz\right)\lambda_{B}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{t_{LoS}}^{\infty}\left(1-\exp\left(\mathrm{\mathrm{i\omega\frac{P_{d}A_{BL}v^{-\alpha_{BL}}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}}\right)\right)\lambda_{tu}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{t_{LoS}}^{\infty}\left(1-\exp\left(\mathrm{i\omega\frac{P_{d}A_{BN}v^{-\alpha_{BN}}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right)\lambda_{tu}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(\mathrm{i\omega\frac{\sigma_{c}^{2}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right),\label{eq:cellular}
\end{align}
and
\begin{align}
\mathcal{F}_{\frac{\textrm{1}}{SINR^{NL}}}(\omega) & =\exp\left(-\int_{r}^{\infty}\left(1-\int_{0}^{t_{NLoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathrm{\left(\frac{z^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}A_{BL}v^{-\alpha_{BL}}}{\left(A_{BN}r^{-\alpha^{BN}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{NLCU}}}(z)dz\right)\lambda_{B}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\int_{0}^{t_{NLoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathrm{\left(\frac{z^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}A_{BN}v^{-\alpha_{BN}}}{\left(A_{BN}r^{-\alpha^{BN}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{NLCU}}}(z)dz\right)\lambda_{B}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{t_{NLoS}}^{\infty}\left(1-\exp\left(\mathrm{\mathrm{i\omega\frac{P_{d}A_{BL}v^{-\alpha_{BL}}}{P_{0}\left(A_{BN}r^{-\alpha^{BN}}\right)^{1-\varepsilon}}}}\right)\right)\lambda_{tu}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{t_{NLoS}}^{\infty}\left(1-\exp\left(\mathrm{i\omega\frac{P_{d}A_{BN}v^{-\alpha_{BN}}}{P_{0}\left(A_{BN}r^{-\alpha^{BN}}\right)^{1-\varepsilon}}}\right)\right)\lambda_{tu}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(\mathrm{i\omega\frac{\sigma_{c}^{2}}{P_{0}\left(A_{BN}r^{-\alpha^{BN}}\right)^{1-\varepsilon}}}\right)\label{eq:FUNCTIION SINRNLOS}
\end{align}
\begin{IEEEproof}
See Appendix B.
\end{IEEEproof}
From~\cite{our_work_TWC2016}, $T_{c}^{{\rm {L}}}$ and $T_{c}^{{\rm {NL}}}$
are independent of each other. The coverage probability evaluated
by Eq.(\ref{eq:Theorem_1_p_cov}) is at least a 4-fold integral, which
is complicated to compute numerically. However, it gives general
results that can be applied to various multi-path fading or shadowing
models, e.g., Rayleigh fading, Nakagami-$m$ fading, etc., as well as to
various NLoS/LoS transmission models.
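The inner $\omega$-integral in Theorem~\ref{thm:coverage of cellular mode} recovers a CDF from a characteristic function. The snippet below checks this inversion kernel numerically on $X\sim\mathrm{Exp}(1)$, whose characteristic function $F(\omega)=1/(1-i\omega)$ and CDF are known in closed form; it is a sanity check of the kernel only, not an evaluation of the paper's SINR characteristic functions.

```python
import numpy as np

def inversion_cdf(char_fn, a, w_max=500.0, n=400000):
    """Numerically evaluate Pr[X < a] via
    int_{-inf}^{inf} [(1 - e^{-i w a}) / (2 pi i w)] F_X(w) dw,
    the same kernel as in the theorem (principal-value integral)."""
    w = np.linspace(-w_max, w_max, n)  # n chosen so the grid never hits w = 0
    kernel = (1.0 - np.exp(-1j * w * a)) / (2j * np.pi * w)
    dw = w[1] - w[0]
    # symmetric grid => imaginary parts cancel; a plain Riemann sum suffices
    return float(np.sum(kernel * char_fn(w)).real * dw)

phi = lambda w: 1.0 / (1.0 - 1j * w)   # characteristic function of Exp(1)
est = inversion_cdf(phi, 1.0)          # should approximate 1 - exp(-1)
```

The estimate agrees with the exact CDF value $1-e^{-1}\approx 0.632$ to within the truncation error of the tails, which supports the inversion step used in the proof.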
The third and fourth rows in Eq.(\ref{eq:cellular}) and Eq.(\ref{eq:FUNCTIION SINRNLOS})
capture the aggregate interference from the D2D tier. When the mode selection
threshold $\beta$ increases, the intensity of D2D transmitters
also increases. This degrades the coverage probability
of the cellular tier, so we impose $p_{c}^{{\rm {cov}}}>\varepsilon$ as
a condition to guarantee the cellular-mode performance when choosing
$\beta$.
\subsubsection{Coverage probability of the typical UE in the D2D mode\label{subsec:Coverage-Probability-of}}
From~\cite{7147772}, in order to derive the coverage
probability of a generic D2D UE, we only need to derive the coverage
probability for a typical D2D receiver UE. Similar to the analysis
in subsection~\ref{subsec:Cellular-mode}, we focus on a typical
D2D UE located at the origin $o$ and scheduled to receive
data from another D2D UE. By Slivnyak's theorem for the PPP, the
coverage probability derived for the typical D2D UE also holds
for any generic D2D UE at any location. In the following,
we present the coverage probability for a typical D2D UE in Theorem~\ref{thm:We-focus-on}.
\begin{thm}
\label{thm:We-focus-on}We focus on a typical D2D UE which is located
at the origin $o$ and scheduled to receive data from another D2D
UE, the probability of coverage $p_{D2D}^{{\rm {cov}}}\left(\lambda,\gamma\right)$
can be derived as
\noindent
\begin{equation}
p_{D2D}^{{\rm {cov}}}\left(\lambda,\gamma\right)=T_{D2D}^{{\rm {L}}}+T_{D2D}^{{\rm {NL}}},\label{eq:Theorem_1_p_cov-1}
\end{equation}
where $T_{D2D}^{{\rm {L}}}=\int_{0}^{\infty}\left(\mathcal{\int_{\mathrm{-}\infty}^{\infty}\mathrm{\left[\frac{1-e^{-i\omega/\gamma}}{2\pi i\omega}\right]\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{L}}}(\omega)}}d\omega\right)f_{\overline{R_{LD2D}}}(\overline{R_{d,0}})d\overline{R_{d,0}}$,
\noindent $T_{D2D}^{{\rm {NL}}}=\int_{0}^{\infty}\left(\mathcal{\int_{\mathrm{-}\infty}^{\infty}\mathrm{\left[\frac{1-e^{-i\omega/\gamma}}{2\pi i\omega}\right]\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{NL}}}(\omega)}}d\omega\right)f_{\overline{R_{NLD2D}}}(\overline{R_{d,0}})d\overline{R_{d,0}}$,\\
and $f_{\overline{R_{LD2D}}}(r)$ and $f_{\overline{R_{NLD2D}}}(r)$
can be calculated from the cumulative distribution functions (CDFs) of $\overline{R}_{d}^{LOS}$
and $\overline{R}_{d}^{NLOS}$ in Appendix C. In addition, $\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{L}}}(\omega)$
and $\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{NL}}}(\omega)$ are
respectively computed by
\begin{align}
\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{L}}}(\omega) & =\exp\left(-\int_{0}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{P_{0}\mathcal{\mathrm{\left(\frac{\overline{R}_{i}^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}}v^{-\alpha_{dL}}}{P_{d}(\overline{R_{d,0}})^{-\alpha_{dL}}}}\right)\right]f_{\overline{R_{LCU}}}(\overline{R}_{i})d\overline{R}_{i}\right)\lambda_{B}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{0}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{P_{0}\mathcal{\mathrm{\left(\frac{\overline{R}_{i}^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}}A_{DN}v^{-\alpha_{dN}}}{P_{d}A_{DL}(\overline{R_{d,0}})^{-\alpha_{dL}}}}\right)\right]f_{\overline{R_{LCU}}}(\overline{R}_{i})d\overline{R}_{i}\right)\lambda_{B}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\exp\left(\mathrm{\mathrm{i\omega\frac{v^{-\alpha_{dL}}}{(\overline{R_{d,0}})^{-\alpha_{dL}}}}}\right)\right)\lambda_{tu}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\exp\left(\mathrm{i\omega\frac{A_{DN}v^{-\alpha_{dN}}}{A_{DL}(\overline{R_{d,0}})^{-\alpha_{dL}}}}\right)\right)\lambda_{tu}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(\mathrm{i\omega\frac{\sigma_{d}^{2}}{P_{d}A_{DL}(\overline{R_{d,0}})^{-\alpha_{dL}}}}\right),
\end{align}
and
\begin{align}
\mathcal{F}_{\frac{\textrm{1}}{SINR_{D2D}^{NL}}}(\omega) & =\exp\left(-\int_{0}^{\infty}\left(1-\int_{0}^{t_{NLoS}}\left[\exp\left(\mathrm{i\omega\frac{P_{0}\mathcal{\mathrm{\left(\frac{\overline{R}_{i}^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}}A_{DL}v^{-\alpha_{dL}}}{P_{d}A_{DN}(\overline{R_{d,0}})^{-\alpha_{dN}}}}\right)\right]f_{\overline{R_{NLCU}}}(\overline{R}_{i})d\overline{R}_{i}\right)\lambda_{B}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{0}^{\infty}\left(1-\int_{0}^{t_{NLoS}}\left[\exp\left(\mathrm{i\omega\frac{P_{0}\mathcal{\mathrm{\left(\frac{\overline{R}_{i}^{\alpha_{BL}}}{A_{BL}}\right)^{\varepsilon}}}v^{-\alpha_{dN}}}{P_{d}(\overline{R_{d,0}})^{-\alpha_{dN}}}}\right)\right]f_{\overline{R_{NLCU}}}(\overline{R}_{i})d\overline{R}_{i}\right)\lambda_{B}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\exp\left(\mathrm{\mathrm{i\omega\frac{A_{DL}v^{-\alpha_{dL}}}{A_{DN}(\overline{R_{d,0}})^{-\alpha_{dN}}}}}\right)\right)\lambda_{tu}^{L}(v)dv\right)\nonumber \\
\times & \exp\left(-\int_{r}^{\infty}\left(1-\exp\left(\mathrm{i\omega\frac{v^{-\alpha_{dN}}}{(\overline{R_{d,0}})^{-\alpha_{dN}}}}\right)\right)\lambda_{tu}^{NL}(v)dv\right)\nonumber \\
\times & \exp\left(\mathrm{i\omega\frac{\sigma_{d}^{2}}{P_{d}A_{DN}(\overline{R_{d,0}})^{-\alpha_{dN}}}}\right),\label{eq:FUNCTIION SINRNLOS-1}
\end{align}
where $A_{DL}=10^{\frac{1}{10}A_{DL}^{\textrm{dB}}}$ and $A_{DN}=10^{\frac{1}{10}A_{DN}^{\textrm{dB}}}$
denote constants determined by the transmission frequency for UE-to-UE
links in LoS and NLoS, respectively.
\end{thm}
\begin{IEEEproof}
See Appendix C.
\end{IEEEproof}
The coverage probability of D2D users is evaluated by Eq.(\ref{eq:Theorem_1_p_cov-1}).
Here, we assume that D2D users are distributed independently of
cellular users~\cite{7147772}, so the D2D users follow a Poisson
point process. Although the analytical results are complicated, they
are general and can be applied to various multi-path fading or shadowing
models in D2D-enhanced networks.
\section{Simulation and Discussion\label{sec:Simulation-and-Discussion}}
In this section, we use numerical results to validate our analysis
and study the performance of the D2D-enabled UL cellular network.
To this end, we present the simulation parameters, the results for
the coverage probability and the results for the area spectral efficiency
in Sections~\ref{subsec:Simulation-Setup},~\ref{subsec:The-Results-on}
and~\ref{subsec:The-Results-on-1}, respectively.
\subsection{Simulation setup\label{subsec:Simulation-Setup}}
According to the 3GPP LTE specifications~\cite{TR36.872}, we set
the system bandwidth to 10\,MHz, the carrier frequency $f_{c}$ to 2\,GHz
and the BS intensity to $\lambda_{B}=5\,\textrm{BSs/km}^{2}$, which results
in an average inter-site distance of about 500$\,$m. The UE intensity
is chosen as $\lambda=200\,\textrm{UEs/km}^{2}$, which is a typical
value in 5G~\cite{our_work_TWC2016}. The transmit powers of each
BS and each D2D transmitter are set to $P_{B}=46\,\textrm{dBm}$ and
$P_{D}=10\,\textrm{dBm}$, respectively. Moreover, the threshold for
selecting cellular-mode communication ranges over $\beta=-70\sim-30\,\textrm{dBm}$.
The standard deviation of the lognormal shadowing is $8\,\textrm{dB}$
for UE-to-BS links and $7\,\textrm{dB}$ for UE-to-UE links. The noise
powers are set to $-95\,\textrm{dBm}$ for a UE receiver and $-114\,\textrm{dBm}$
for a BS receiver, respectively. The simulation parameters are summarized
in Table~\ref{table1}.
\begin{table}
\caption{Simulation Parameters}
\label{table1}
\centering{
\begin{tabular}{|c|c|c|c|}
\hline
Parameters & Values & Parameters & Values\tabularnewline
\hline
\hline
$\mathtt{BW}$ & 10MHz & $f_{c}$ & 2GHz\tabularnewline
\hline
$\lambda_{B}$ & {\small{}5 BSs/$km^{2}$} & $\sigma_{c}^{2}$ & {\small{}-95 dBm}\tabularnewline
\hline
$\lambda_{u}$ & {\small{}200 UEs/$km^{2}$} & $\sigma_{d}^{2}$ & {\small{}-114 dBm}\tabularnewline
\hline
$\varepsilon$ & {\small{}0.8} & $P_{0}$ & {\small{}-70 dBm}\tabularnewline
\hline
$\alpha_{BL}$ & {\small{}2.42} & $A_{BL}$ & {\small{}$10^{-3.08}$}\tabularnewline
\hline
$\alpha_{BN}$ & {\small{}4.28} & $A_{BN}$ & {\small{}$10^{-0.27}$}\tabularnewline
\hline
$\alpha_{dL}$ & {\small{}2} & $A_{DL}$ & {\small{}$10^{-3.845}$}\tabularnewline
\hline
$\alpha_{dN}$ & {\small{}4} & $A_{DN}$ & {\small{}$10^{-5.578}$}\tabularnewline
\hline
$P_{b}$ & {\small{}46 dBm} & $P_{d}$ & {\small{}10 dBm}\tabularnewline
\hline
$d_{B}$ & 0.3km & $d_{D}$ & 0.1km\tabularnewline
\hline
\end{tabular}
\end{table}
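Since Table~\ref{table1} mixes dBm and linear units, a small conversion helper (an illustrative convenience sketch, not part of the analysis) avoids unit mistakes when reproducing the simulations in watts.

```python
from math import log10

def dbm_to_watt(p_dbm):
    """Convert a power in dBm to watts."""
    return 10.0 ** (p_dbm / 10.0) / 1000.0

def watt_to_dbm(p_w):
    """Convert a power in watts to dBm."""
    return 10.0 * log10(p_w * 1000.0)

P_B = dbm_to_watt(46.0)    # BS transmit power, about 39.81 W
P_D = dbm_to_watt(10.0)    # D2D transmit power, 10 mW
N_UE = dbm_to_watt(-95.0)  # noise power at a UE receiver
```

For example, the 46\,dBm BS power in Table~\ref{table1} corresponds to roughly 39.8\,W.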
\subsection{Validation of analytical results of $p^{{\rm {cov}}}\left(\lambda,\gamma\right)$\label{subsec:The-Results-on}}
\begin{figure}
\begin{centering}
\includegraphics[width=12cm]{40dbm}\caption{\label{fig:The-Coverage-Probability}The coverage probability $p^{{\rm {cov}}}\left(\lambda,\gamma\right)$
vs. the SINR threshold ($\lambda_{UE}=200\,\textrm{UEs/km}^{2}$, $\lambda_{BS}=5\,\textrm{BSs/km}^{2}$
and $\rho=10\%$). The mode selection threshold is $\beta=-50\,\text{dBm}$.}
\par\end{centering}
\end{figure}
In Fig.\ \ref{fig:The-Coverage-Probability}, we plot the coverage
probabilities of the cellular tier and the D2D tier, from which we can
draw the following observations:
\begin{itemize}
\item The analytical results of the coverage probability from Eq.(\ref{eq:Theorem_1_p_cov})
and Eq.(\ref{eq:Theorem_1_p_cov-1}) match well with the simulation
results, which validates our analysis and shows that the adopted model
accurately captures the features of D2D communications.
\item The coverage probability decreases with the increase of SINR threshold,
because a higher SINR requirement makes it more difficult to satisfy
the coverage criterion in Eq.(\ref{eq:definesinr-1}).
\item For D2D tier, the coverage probability reduces very slowly because
the signals in most of the successful links are LoS while the interference
is most likely NLoS, hence the SINR is relatively large, e.g., well
above 15 dB.
\end{itemize}
\begin{figure}[h]
\begin{centering}
\includegraphics[width=12cm]{9YUE5}
\par\end{centering}
\caption{\label{fig:The-Coverage-Probability-1}The Coverage Probability $p^{{\rm {cov}}}\left(\lambda,\gamma\right)$
vs. $\beta$ for 3GPP Case~1 ($\gamma_{0}=0\,\textrm{dB}$, $\lambda_{UE}=200\,\textrm{UEs/km}^{2}$,
$\lambda_{BS}=5\,\textrm{BSs/km}^{2}$ and $\rho=10\%$).}
\end{figure}
To fully study the SINR coverage probability with respect to the value
of $\beta$, the coverage probability for various $\beta$
and $\gamma_{0}=0$\,dB is plotted in Fig.~\ref{fig:The-Coverage-Probability-1}.
From this figure, we can draw the following observations:
\begin{itemize}
\item The coverage probability of cellular users increases as $\beta$ grows
from $-70$\,dBm to $-57$\,dBm, because a larger $\beta$ reduces
the distance between the typical CU and its serving BS, so that the
LoS probability of the signal link increases. Beyond that point, the coverage
probability decreases because the interference from the D2D tier grows.
When we set $\varepsilon=0.9$, we should choose $\beta$ no larger
than $-45$\,dBm to guarantee the cellular performance.
\item In the D2D mode, the coverage probability also increases as $\beta$
increases from $-70$\,dBm to $-60$\,dBm, because the distance between
the UEs of the typical D2D pair decreases while the transmit power stays
constant. From $\beta=-60$\,dBm to $\beta=-45$\,dBm, the coverage probability
decreases because the interference from the D2D tier increases. The
coverage probability then increases again when $\beta$ is larger than
$-45$\,dBm, because the signal power experiences the NLoS-to-LoS transition
while the aggregate interference remains mostly NLoS.
\end{itemize}
\subsection{Discussion on the analytical results of ASE\label{subsec:The-Results-on-1}}
\begin{figure}[h]
\begin{centering}
\includegraphics[width=12cm]{ASE}\caption{\label{fig:The-ASE-}The ASE $A^{\textrm{ASE}}\left(\lambda,\gamma_{0}\right)$
vs. $\beta$ for 3GPP Case~1 ($\gamma_{0}=0\,\textrm{dB}$, $\lambda_{UE}=200\,\textrm{UEs/km}^{2}$,
$\lambda_{BS}=5\,\textrm{BSs/km}^{2}$ and $\rho=10\%$).}
\par\end{centering}
\end{figure}
The analytical results of the ASE with $\gamma_{0}=0$\,dB for various $\beta$
values are computed from Eq.(\ref{eq:ase}). Fig.~\ref{fig:The-ASE-} illustrates
the ASEs of the cellular links, the D2D links and the whole network with
respect to different mode selection thresholds $\beta$. From this
figure, we can draw the following observations:
\begin{itemize}
\item The total ASE increases when $\beta\in[-70\,\textrm{dBm},-55\,\textrm{dBm}]$
as the number of D2D links increases, because these links do not yet
generate much interference to the cellular tier.
\item An optimal $\beta$ around $-55$\,dBm achieves the maximum ASE while
the coverage probability of the cellular tier remains above 0.9.
\item When $\beta\in[-55\,\textrm{dBm},-42\,\textrm{dBm}]$, the total ASE decreases
because the D2D links generate more interference, which degrades the coverage
probability of cellular UEs. The ASE and the coverage probability of the
cellular links also decrease because the aggregate interference is now mostly
LoS interference.
\item When $\beta\in[-42\,\textrm{dBm},-30\,\textrm{dBm}]$, the additional D2D links
make a significant contribution to the ASE so that the total ASE grows again.
The total ASE then approaches the D2D ASE because the percentage
of D2D UEs approaches 100\%, as analyzed via Eq.(\ref{eq:q}).
Although the total ASE grows very quickly when $\beta\in[-42\,\textrm{dBm},-30\,\textrm{dBm}]$,
the interference from the D2D links to the cellular tier remains
large, so that the performance of the cellular tier is poor. Hence,
we do not recommend operating the network in this range of $\beta$.
\end{itemize}
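The selection rule discussed above, i.e., maximizing the total ASE subject to the cellular coverage constraint $p_{c}^{{\rm {cov}}}>\varepsilon$, can be sketched as a constrained argmax over a grid of candidate thresholds. The numbers below are illustrative toy values, not the data behind Fig.~\ref{fig:The-ASE-}.

```python
import numpy as np

def select_beta(beta_dBm, ase_total, p_cov_cell, eps=0.9):
    """Pick the beta maximising total ASE subject to cellular coverage > eps."""
    feasible = p_cov_cell > eps
    if not np.any(feasible):
        raise ValueError("no beta satisfies the cellular coverage constraint")
    # mask out infeasible candidates before taking the argmax
    idx = int(np.argmax(np.where(feasible, ase_total, -np.inf)))
    return beta_dBm[idx]

# Illustrative toy values (assumed): candidate thresholds with their total ASE
# and cellular coverage probability.
beta = np.array([-70, -60, -55, -50, -45, -40])
ase = np.array([40.0, 70.0, 95.0, 80.0, 60.0, 110.0])
pcov = np.array([0.97, 0.95, 0.92, 0.90, 0.88, 0.60])
best = select_beta(beta, ase, pcov)
```

Note how the last candidate has the largest raw ASE but violates the coverage constraint, so the rule returns the best feasible threshold instead.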
From Fig.~\ref{fig1}, we can see that the number of D2D links increases
with $\beta$ for all BS densities. At first, the D2D links
enhance the ASE performance because they do not generate much interference
to the cellular tier. Then, the growing number of D2D transmitters generates
more interference, which degrades the coverage probability of cellular
UEs; the optimal $\beta$ for each BS density is found in this stage.
Finally, the total ASE approaches the D2D ASE because the percentage
of D2D UEs approaches 100\%. In summary, there exists an optimal $\beta$
that achieves the maximum ASE of the D2D-enabled cellular network while the
coverage probability of the cellular tier is guaranteed. The mode selection
threshold can control the interference from both the cellular tier and the
D2D tier. In addition, the D2D tier can nearly double the ASE of the network
when the mode selection threshold is chosen appropriately.
\section{Conclusion\label{sec:Conclusion}}
In this paper, we proposed an interference management method for a
D2D-enhanced uplink cellular network, where the locations of the mobile
UEs and the BSs are modeled as PPPs. In particular, each UE selects
its operation mode based on its downlink received power and a mode
selection threshold $\beta$. Practical pathloss and slow shadow fading are
considered in modeling the power attenuation. This mode selection method
mitigates the large interference from D2D transmitters to the cellular network.
Using a stochastic geometry approach, we analytically evaluated the
coverage probability and the ASE for various values of the mode selection
threshold $\beta$. Our results showed that the D2D links can provide
a high ASE when the threshold is appropriately chosen. More
importantly, we concluded that there exists an optimal $\beta$ that
achieves the maximum ASE while guaranteeing the coverage probability
performance of the cellular network.
As our future work, we will consider other factors of realistic networks
in the theoretical analysis for SCNs, such as practical directional
antennas~\cite{7126919} and non-HPPP deployments of BSs~\cite{7959926}.
\section*{Appendix A: Proof of Lemma~\ref{lem:When-operating-under}\label{sec:AppendixA:Proof-of-Lemma}}
\begin{IEEEproof}
\noindent The probability that the RSS is larger than the threshold
is given by
\begin{equation}
P=\Pr\left[\underset{b}{\max}\left\{ P_{b}^{\textrm{rx}}\right\} >\beta\right],\label{eq:receivepower}
\end{equation}
where we use the standard power-loss propagation model with a path
loss exponent $\alpha_{BL}$ (for LoS UE-BS links) and $\alpha_{BN}$
(for NLoS UE-BS links). The probability that a generic mobile UE operates
in the cellular mode is
\begin{eqnarray}
q & = & \mathsf{\mathit{\mathrm{1}-\Pr\left[\underset{b}{\max}\left\{ P_{b}^{\textrm{rx}}\right\} \leq\beta\right]}}\nonumber \\
& = & 1-\Pr\left[\max\left\{ P_{LOS}^{rx}\right\} \leq\beta\cap\max\left\{ P_{NLOS}^{rx}\right\} \leq\beta\right]\nonumber \\
& = & 1-\Pr\left[\min\overline{R}_{i}^{BL}\geq\left(\frac{P_{b}\textrm{A}_{BL}}{\beta}\right)^{1/\alpha_{BL}}\cap\min\overline{R}_{i}^{BN}\geq\left(\frac{P_{b}\textrm{A}_{BN}}{\beta}\right)^{1/\alpha_{BN}}\right]\nonumber \\
& = & 1-\Pr\left[\textrm{no nodes within \ensuremath{\left(\frac{P_{b}\textrm{A}_{BL}}{\beta}\right)^{1/\alpha_{BL}}}}\cap\textrm{no nodes within \ensuremath{\left(\frac{P_{b}\textrm{A}_{BN}}{\beta}\right)^{1/\alpha_{BN}}}}\right]\nonumber \\
& = & 1-\exp\left[-\varLambda^{\textrm{L}}\left(\left[0,\left(\frac{P_{b}\textrm{A}_{BL}}{\beta}\right)^{1/\alpha_{BL}}\right]\right)\right]\cdot\exp\left[-\varLambda^{\textrm{NL}}\left(\left[0,\left(\frac{P_{b}\textrm{A}_{BN}}{\beta}\right)^{1/\alpha_{BN}}\right]\right)\right]\nonumber \\
& = & 1-\exp\left[-\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda\int_{0}^{\left(\frac{P_{b}\textrm{A}_{BL}\mathcal{H}}{\beta}\right)^{1/\alpha_{BL}}}p^{L}(r)rdr\right]\right]\nonumber \\
& & \cdot\exp\left[-\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda\int_{0}^{\left(\frac{P_{b}\textrm{A}_{BN}\mathcal{H}}{\beta}\right)^{1/\alpha_{BN}}}p^{NL}(r)rdr\right]\right],
\end{eqnarray}
which concludes our proof.
\end{IEEEproof}
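The void-probability argument above can be cross-checked by simulation: drop BSs as a PPP with i.i.d. LoS/NLoS states and lognormal shadowing, measure how often the strongest RSS exceeds $\beta$, and compare with Eq.(\ref{eq:q}). The LoS model $p^{L}(r)=e^{-r/d_{B}}$ is an assumption for illustration; the remaining parameters loosely mirror Table~\ref{table1} (distances in metres, powers in watts).

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed parameters (illustration only).
lam_B, d_B, sigma_dB = 5e-6, 300.0, 8.0
P_b = 10.0 ** 4.6 / 1e3                    # 46 dBm -> W
A_BL, A_BN = 10.0 ** -3.08, 10.0 ** -0.27
a_L, a_NL = 2.42, 4.28
beta = 10.0 ** -5.0 / 1e3                  # -50 dBm -> W

def q_analytic(n_h=20000):
    """Evaluate Eq. (q) with a Monte-Carlo expectation over the shadowing H."""
    H = 10.0 ** (rng.normal(0.0, sigma_dB, n_h) / 10.0)
    uL = (P_b * A_BL * H / beta) ** (1.0 / a_L)
    uN = (P_b * A_BN * H / beta) ** (1.0 / a_NL)
    iL = d_B ** 2 * (1.0 - (1.0 + uL / d_B) * np.exp(-uL / d_B))   # int p_L r dr
    iN = uN ** 2 / 2.0 - d_B ** 2 * (1.0 - (1.0 + uN / d_B) * np.exp(-uN / d_B))
    return 1.0 - np.exp(-2.0 * np.pi * lam_B * (np.mean(iL) + np.mean(iN)))

def q_montecarlo(trials=2000, R=4000.0):
    """Direct simulation of Pr[max RSS > beta] over PPP realizations."""
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam_B * np.pi * R ** 2)
        r = R * np.sqrt(rng.random(n))                 # BSs uniform in the disc
        los = rng.random(n) < np.exp(-r / d_B)         # i.i.d. LoS states
        H = 10.0 ** (rng.normal(0.0, sigma_dB, n) / 10.0)
        rss = np.where(los, A_BL * r ** -a_L, A_BN * r ** -a_NL) * P_b * H
        hits += bool(rss.size) and bool(rss.max() > beta)
    return hits / trials
```

The two estimates agree up to Monte-Carlo noise, mirroring the match between analysis and simulation reported in Fig.~\ref{fig1}.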
\section*{Appendix B: Proof of Theorem~\ref{thm:coverage of cellular mode}}
\begin{IEEEproof}
By invoking the law of total probability, the coverage probability
of the cellular links can be divided into two parts, i.e., $T_{c}^{{\rm {L}}}+T_{c}^{{\rm {NL}}}$,
which denote the conditional coverage probabilities given that the
typical BS is served through a LoS link and an NLoS link, respectively.
First, we derive the coverage probability for a LoS link in the cellular tier.
Conditioned on the strongest BS being at a distance $R_{B,0}$ from
the typical CU, with the equivalent distance $\overline{R_{LOSCU}}=\mathrm{\mathcal{H}_{B}^{-1/\alpha_{BL}}R_{B,0}}$
$\left(\overline{R_{LOSCU}}\leq\left(\frac{\beta}{\textrm{\ensuremath{P_{B}A^{L}}}}\right){}^{-1/\alpha_{BL}}\right)$,
the probability of coverage is given by
\begin{align}
T^{{\rm {L}}} & =\Pr\left[\frac{1}{SINR^{L}}<\frac{1}{\gamma}\left|\textrm{LOS}\right.\right]\nonumber \\
= & \int_{0}^{t_{LoS}}\left(\int_{0}^{\frac{1}{\gamma}}f_{\frac{\textrm{1}}{SINR^{L}}}\left(x\right)dx\right)f_{\overline{R_{LCU}}}(r)dr\nonumber \\
= & \int_{0}^{t_{LoS}}\left(\int_{0}^{\frac{1}{\gamma}}\frac{1}{2\pi}\mathcal{\int_{\mathrm{-}\infty}^{\infty}F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)\cdot e^{-iwx}\cdot d\omega dx\right)f_{\overline{R_{LCU}}}(r)dr\nonumber \\
= & \int_{0}^{t_{LoS}}\left(\mathcal{\int_{\mathrm{-}\infty}^{\infty}\mathrm{\left[\frac{1-e^{-i\omega/\gamma}}{2\pi i\omega}\right]\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)}}d\omega\right)f_{\overline{R_{LCU}}}(r)dr\text{.}
\end{align}
where $i=\sqrt{-1}$ is the imaginary unit and the inner integral is
the conditional CDF of $\frac{1}{SINR}$ evaluated at $\frac{1}{\gamma}$. The
equivalent intensities of the BSs and of the D2D transmitters can be calculated as
\begin{align}
\lambda_{B}^{L}(t) & =2\pi\lambda_{b}\frac{d}{dt}\left(\int_{0}^{\infty}\left[\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{L}}}\Pr^{L}(r)rdr\right]f\left(H\right)dH\right),
\end{align}
\begin{align}
\lambda_{B}^{NL}(t) & =\frac{d}{dt}\left(\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{b}\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{NL}}}\Pr^{NL}(r)rdr\right]\right),
\end{align}
\begin{align}
\lambda_{tu}^{L}(t) & =\frac{d}{dt}\left(\mathbb{E}_{\mathcal{H}}\left[\pi\left(1-q\right)\lambda_{u}\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{L}}}\Pr^{L}(r)rdr\right]\right),
\end{align}
\begin{align}
\lambda_{tu}^{NL}(t) & =\frac{d}{dt}\left(\mathbb{E}_{\mathcal{H}}\left[\pi\left(1-q\right)\lambda_{u}\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{NL}}}\Pr^{NL}(r)rdr\right]\right),
\end{align}
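These equivalent intensities follow from the Leibniz rule, e.g. $\lambda_{B}^{L}(t)=\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{b}\Pr^{L}\!\left(t\mathcal{H}^{1/\alpha^{L}}\right)t\mathcal{H}^{2/\alpha^{L}}\right]$. The sketch below checks this closed form against a finite difference of the expected count; the exponential fading variable, the exponential LoS-probability model and all parameter values are purely illustrative assumptions, not the system model above.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_b, alpha, d0 = 1e-2, 3.0, 100.0     # illustrative values only
H = rng.exponential(1.0, 20000)         # fading surrogate (an assumption)
p_los = lambda r: np.exp(-r / d0)       # illustrative LoS probability model
u = np.linspace(0.0, 1.0, 201)          # quadrature grid for the inner integral

def mean_count(t):
    # E_H[2 pi lam_b int_0^{t H^(1/alpha)} p_los(r) r dr]: Monte Carlo over H,
    # per-sample trapezoidal rule after substituting r = up * u.
    up = t * H[:, None] ** (1.0 / alpha)
    f = p_los(up * u) * (up * u)
    inner = (f[:, :-1] + f[:, 1:]).sum(1) * 0.5 * (u[1] - u[0]) * up[:, 0]
    return 2.0 * np.pi * lam_b * inner.mean()

t, h = 50.0, 0.5
fd = (mean_count(t + h) - mean_count(t - h)) / (2.0 * h)   # finite difference
leibniz = 2.0 * np.pi * lam_b * np.mean(
    p_los(t * H ** (1.0 / alpha)) * t * H ** (2.0 / alpha))
print(fd, leibniz)                      # the two should agree closely
```

Because both estimates reuse the same fading samples, the Monte Carlo noise largely cancels and the residual difference is dominated by the quadrature and finite-difference errors.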
Here, $\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)$ denotes the conditional characteristic
function of $\frac{1}{SINR^{L}}$, which can be written as
$\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)$
\begin{eqnarray}
& = & \int_{R^{2}}f_{\frac{\textrm{1}}{SINR^{L}}}\left(x\right)e^{i\omega x}dx\nonumber \\
& = & E_{\Phi}\left[\exp\left(\mathrm{i\omega\frac{1}{SINR^{L}}}\right)\left|R_{typicalcu}=\overline{r}\right.\right]\nonumber \\
& = & \mathbb{E_{\Phi}\left[\exp\left(\mathrm{i\omega\frac{I_{c}+I_{d}+\sigma^{2}}{S^{L}}}\right)\mathrm{\left|R_{typicalcu}=\overline{r}\right.}\right]}\nonumber \\
& = & \mathbb{E}_{\Phi}\left[\left.\exp\left(\mathrm{i\omega\frac{I_{c}}{S^{L}}}\right)\exp\left(\mathrm{i\omega\frac{I_{d}}{S^{L}}}\right)\exp\left(\mathrm{i\omega\frac{\sigma^{2}}{S^{L}}}\right)\right|R_{typicalcu}=\overline{r}\right].
\end{eqnarray}
By applying stochastic geometry and the probability generating functional
(PGFL) of the PPP, $\mathcal{F}_{\frac{\textrm{1}}{SINR^{L}}}(\omega)$ can
be written as the product of three parts, namely $\mathcal{L_{\mathrm{I_{c}}}\mathrm{(\omega)}}$, $\mathcal{L}_{\mathrm{I_{d}}}\mathrm{(\omega)}$
and $\mathcal{L_{\mathrm{n}}\mathrm{(\omega)}}$:
\begin{align}
\mathcal{L}_{\mathrm{I_{c}}}\mathrm{(\omega)=} & \exp\left(\mathrm{i\omega\frac{I_{CL}+I_{CN}}{S^{L}}}\right)\nonumber \\
= & \exp\left(-\int_{r}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathrm{\left(z^{\alpha_{BL}}\right)^{\varepsilon}}v^{-\alpha_{BL}}}{A_{BL}^{2\epsilon}\left(r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{LCU}}}(z)dz\right)\lambda_{B}^{L}(v)dv\right.\nonumber \\
& \left.-\int_{r}^{\infty}\left(1-\int_{0}^{t_{LoS}}\left[\exp\left(\mathrm{i\omega\frac{\mathcal{\mathrm{\left(z^{\alpha_{BL}}\right)^{\varepsilon}}}v^{-\alpha_{BN}}}{A_{BL}^{2\epsilon}\left(r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right]f_{\overline{R_{LCU}}}(z)dz\right)\lambda_{B}^{NL}(v)dv\right),
\end{align}
and
\begin{align}
\mathcal{L}_{\mathrm{I_{d}}}\mathrm{(\omega)=} & \exp\left(\mathrm{i\omega\frac{I_{DL}+I_{DN}}{S^{L}}}\right)\nonumber \\
= & \exp\left(-\int_{t_{LoS}}^{\infty}\left(1-\exp\left(\mathrm{\mathrm{i\omega\frac{P_{d}A_{BL}v^{-\alpha_{BL}}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}}\right)\right)\lambda_{tu}^{L}(v)dv\right.\nonumber \\
& \left.-\int_{t_{LoS}}^{\infty}\left(1-\exp\left(\mathrm{i\omega\frac{P_{d}A_{BN}v^{-\alpha_{BN}}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)\right)\lambda_{tu}^{NL}(v)dv\right),
\end{align}
and $\mathcal{L_{\mathrm{n}}\mathrm{(\omega)}}=\exp\left(\mathrm{i\omega\frac{\sigma^{2}}{P_{0}\left(A_{BL}r^{-\alpha^{BL}}\right)^{1-\varepsilon}}}\right)$,
which are the cellular-interference, D2D-interference and noise parts
of the characteristic function, respectively.
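The characteristic-function inversion used above can be sanity-checked numerically on a toy case. The sketch below applies the truncated inversion integral $\int\left[\frac{1-e^{-i\omega x}}{2\pi i\omega}\right]\mathcal{F}(\omega)d\omega$ to an exponential surrogate for $1/SINR^{L}$ (an illustrative assumption, not the interference model above), for which the CDF is known in closed form.

```python
import numpy as np

def cdf_via_cf(cf, x, w_max=500.0, n=1000001):
    # Pr[X < x] = int (1 - e^{-i w x}) / (2 pi i w) * F_X(w) dw over the real
    # line, truncated to [-w_max, w_max], evaluated by the trapezoidal rule.
    w = np.linspace(-w_max, w_max, n)
    w[w == 0.0] = 1e-12                 # sidestep the removable singularity
    g = ((1.0 - np.exp(-1j * w * x)) / (2j * np.pi * w) * cf(w)).real
    return float((g[:-1] + g[1:]).sum() * 0.5 * (w[1] - w[0]))

# Exponential(1) surrogate: F_X(w) = 1 / (1 - i w), so Pr[X < x] = 1 - e^{-x}.
cf_exp = lambda w: 1.0 / (1.0 - 1j * w)
gamma = 2.0                             # SINR threshold: Pr[1/SINR < 1/gamma]
p_num = cdf_via_cf(cf_exp, 1.0 / gamma)
p_ref = 1.0 - np.exp(-1.0 / gamma)
print(p_num, p_ref)                     # agree up to the truncation error
```

The truncation at $\pm\omega_{\max}$ introduces an error of order $1/(\pi\omega_{\max})$ here, well below the tolerance of interest.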
Finally, note that the value of $p_{c}^{{\rm {cov}}}\left(\lambda,\gamma\right)$
in Eq. (\ref{eq:Theorem_1_p_cov}) should be calculated by taking
the expectation with respect to $f_{\overline{R_{LCU}}}(r)$ and $f_{\overline{R_{NLCU}}}(r)$,
the former of which is given by
\begin{align}
f_{\overline{R_{LCU}}}(r) & =\left(\frac{d}{dr}\left\{ 1-\exp\left[-\varLambda^{L}\left(\left[0,r\right]\right)\right]\cdot\exp\left[-\varLambda^{NL}\left(\left[0,\overline{r_{1}}\right]\right)\right]\right\} \left|CU\right.\right)\nonumber \\
& =\frac{\exp\left[-\varLambda^{L}\left(\left[0,r\right]\right)\right]\cdot\exp\left[-\varLambda^{NL}\left(\left[0,\overline{r_{1}}\right]\right)\right]{\rm {Pr}}^{{\rm {L}}}\left(r\right)\lambda_{B}^{L}(r)}{q},
\end{align}
where the typical UE should guarantee that there is no NLoS BS within
$\overline{r_{1}}$ when the signal is LoS. Given that the typical
CU is associated with an NLoS BS, the conditional coverage probability
$T^{{\rm {N}}}$ can be derived in a similar way as above. The
coverage probability is then obtained as $T_{c}^{{\rm {L}}}+T_{c}^{{\rm {NL}}}$,
which concludes our proof.
\end{IEEEproof}
\section*{Appendix C: Proof of Theorem 3}
\begin{IEEEproof}
\label{lem:The-typical-D2D cdf of d2d distance}The typical D2D receiver
selects the equivalent nearest UE as a potential transmitter. If the
selected UE is operating in cellular mode, the D2D RU must search for
another transmitter; in this situation we approximate the transmitter
by the second-nearest neighbor, for both LoS and NLoS links.
The approximate cumulative distribution function (CDF)
of $\overline{R}_{d}^{LOS}$ can be written as
\begin{align}
\Pr\left[\overline{R}_{d}^{LOS}<R\right]\nonumber \\
\approx & \int_{R+t_{LoS}}^{\infty}\left(\int_{0}^{R}f_{R_{d}^{LOS}}(\overline{R}_{d})d\overline{R}_{d}\right)f_{r_{1}^{LOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{t_{LoS}}^{R+t_{LoS}}\left(\int_{0}^{r_{1}-t_{LoS}}f_{R_{d}}(\overline{R}_{d})d\overline{R}_{d}\right.\nonumber \\
+ & \int_{r_{1}-t_{LoS}}^{R}(1-P_{c}^{L})\cdot f_{R_{d}^{LOS}}(\overline{R}_{d})d\overline{R}_{d}\nonumber \\
+ & \left.\int_{r_{1}-t_{LoS}}^{R}P_{c}^{L}\cdot f_{R_{d_{2}}^{LOS}}\left(\overline{R}_{d}\right)d\overline{R}_{d}\right)f_{r_{1}^{LOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{R+t_{NLoS}}^{\infty}\left(\int_{0}^{R}f_{R_{d}^{LOS}}(\overline{R}_{d})d\overline{R}_{d}\right)f_{r_{1}^{NLOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{t_{NLoS}}^{R+t_{NLoS}}\left(\int_{0}^{r_{1}-t}f_{R_{d}^{LOS}}(\overline{R}_{d})d\overline{R}_{d}\right.\nonumber \\
+ & \int_{r_{1}-t_{NLoS}}^{R}(1-P_{c}^{L})\cdot f_{R_{d}^{LOS}}(\overline{R}_{d})d\overline{R}_{d}\nonumber \\
+ & \left.\int_{r_{1}-t_{NLoS}}^{R}P_{c}^{L}\cdot f_{R_{d_{2}}^{LOS}}\left(\overline{R}_{d}\right)d\overline{R}_{d}\right)f_{r_{1}^{NLOS}}(r_{1})dr_{1},\label{eq:pdf of los d2dlink}
\end{align}
where $r_{1}$ is the equivalent distance from the TU to the strongest
LoS/NLoS BS, $t_{LoS}=\left(\frac{\beta}{\textrm{\ensuremath{P_{B}A^{L}}}}\right){}^{-1/\alpha_{BL}}$, $t_{NLoS}=\left(\frac{\beta}{\textrm{\ensuremath{P_{B}A^{NL}}}}\right){}^{-1/\alpha_{BN}}$,
and $P_{c}^{L/NL}$ is the probability that the selected UE is a CU. The equivalent distance $r_{1}$ is distributed as
\begin{equation}
f_{r_{1}^{LOS}}(r)=\frac{\exp\left[-\varLambda^{L}\left(\left[0,r\right]\right)\right]\cdot\exp\left[-\varLambda^{NL}\left(\left[0,\overline{r_{1}}\right]\right)\right]{\rm {Pr}}_{{\rm {B}}}^{{\rm {L}}}\left(r\right)\lambda_{B}^{L}(r)}{1-q}\label{eq:distane D2D to LOS bs}
\end{equation}
and
\begin{equation}
f_{r_{1}^{NLOS}}(r)=\frac{\exp\left[-\varLambda^{NL}\left(\left[0,r\right]\right)\right]\cdot\exp\left[-\varLambda^{L}\left(\left[0,\overline{r_{1}}\right]\right)\right]{\rm {Pr}}_{{\rm {B}}}^{{\rm {NL}}}\left(r\right)\lambda_{B}^{NL}(r)}{1-q}\label{eq:distane D2D to NLOS bs}
\end{equation}
According to~\cite{our_work_TWC2016}, if there is no difference
between CUs and D2D UEs, the pdf of the distance for a tier of PPP
LoS UEs is
\begin{equation}
f_{R_{d}^{LOS}}(r)=\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{\overline{r_{1}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {NL}}}\left(u\right)\lambda_{u}^{NL}(u)du\right)\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{r}{\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}\left(u\right)\lambda_{u}^{L}(u)du\right){\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}\left(r\right)\lambda_{u}^{L}(r)
\end{equation}
and, under the same assumption, the pdf of
the distance for a tier of PPP NLoS UEs is
\begin{equation}
f_{R_{d}^{NLOS}}(r)=\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{\overline{r_{2}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}\left(u\right)\lambda_{u}^{L}(u)du\right)\exp\left(\hspace{-0.1cm}-\hspace{-0.1cm}\int_{0}^{r}{\rm {Pr}}_{{\rm {D}}}^{{\rm {NL}}}\left(u\right)\lambda_{u}^{NL}(u)du\right){\rm {Pr}}_{{\rm {D}}}^{{\rm {NL}}}\left(r\right)\lambda_{u}^{NL}(r),
\end{equation}
where
\begin{equation}
\lambda_{u}^{L}(r)=\frac{d}{dt}\left(\mathbb{E}_{\mathcal{H}}\left[2\pi\left(1-q\right)\lambda_{u}\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{L}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}(r)rdr\right]\right),
\end{equation}
and
\begin{equation}
\lambda_{u}^{NL}(r)=\frac{d}{dt}\left(\mathbb{E}_{\mathcal{H}}\left[2\pi\left(1-q\right)\lambda_{u}\int_{0}^{t\left(\mathcal{H}\right)^{1/\alpha^{NL}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {NL}}}(r)rdr\right]\right),
\end{equation}
According to~\cite{1512427}, the second neighbor point is distributed
as
\begin{equation}
f_{R_{d_{2}}^{LOS}}(r)=2\pi^{2}r^{3}\lambda_{u}^{L}(t)^{2}\cdotp\exp\left[-\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{u}\int_{0}^{r\left(\mathcal{H}\right)^{1/\alpha^{L}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}rdr\right]\right],\label{eq:distance distribution-los}
\end{equation}
and
\begin{equation}
f_{R_{d_{2}}^{NLOS}}(r)=2\pi^{2}r^{3}\lambda_{u}^{NL}(t)^{2}\cdotp\exp\left[-\mathbb{E}_{\mathcal{H}}\left[2\pi\lambda_{u}\int_{0}^{r\left(\mathcal{H}\right)^{1/\alpha^{NL}}}{\rm {Pr}}_{{\rm {D}}}^{{\rm {NL}}}rdr\right]\right],\label{eq:distance distribution-NLOS}
\end{equation}
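For a homogeneous PPP (i.e., ${\rm {Pr}}_{{\rm {D}}}^{{\rm {L}}}\equiv1$ and no fading), the second-neighbor density above reduces to the classical form $f_{R_{2}}(r)=2\pi^{2}\lambda^{2}r^{3}e^{-\pi\lambda r^{2}}$, whose mean is $0.75/\sqrt{\lambda}$. The Monte Carlo sketch below reproduces this mean; the window size, intensity and trial count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, R_win, trials = 1.0, 4.0, 20000          # arbitrary illustrative choices
d2 = np.empty(trials)
for k in range(trials):
    n = rng.poisson(lam * np.pi * R_win**2)   # Poisson point count in the disk
    r = R_win * np.sqrt(rng.random(n))        # radii of uniform points in a disk
    d2[k] = np.sort(r)[1]                     # distance to the 2nd neighbour
print(d2.mean(), 0.75 / np.sqrt(lam))         # sample mean vs analytic mean
```

The window radius is chosen large enough that the second neighbor essentially never falls outside it, so boundary effects are negligible.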
Similarly, the CDF of the distance of the NLoS D2D signal can be written
as
\begin{align}
\Pr\left[\overline{R}_{d}^{NLOS}<R\right]\nonumber \\
\approx & \int_{R+t_{LoS}}^{\infty}\left(\int_{0}^{R}f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\right)f_{r_{1}^{LOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{t_{LoS}}^{R+t_{LoS}}\left(\int_{0}^{r_{1}-t_{LoS}}f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\right.\nonumber \\
+ & \int_{r_{1}-t_{LoS}}^{R}(1-P_{c}^{NL})\cdot f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\nonumber \\
+ & \left.\int_{r_{1}-t_{LoS}}^{R}P_{c}^{NL}\cdot f_{R_{d_{2}}^{NLOS}}\left(\overline{R}_{d}\right)d\overline{R}_{d}\right)f_{r_{1}^{LOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{R+t_{NLoS}}^{\infty}\left(\int_{0}^{R}f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\right)f_{r_{1}^{NLOS}}(r_{1})dr_{1}\nonumber \\
+ & \int_{t_{NLoS}}^{R+t_{NLoS}}\left(\int_{0}^{r_{1}-t}f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\right.\nonumber \\
+ & \int_{r_{1}-t_{NLoS}}^{R}(1-P_{c}^{NL})\cdot f_{R_{d}^{NLOS}}(\overline{R}_{d})d\overline{R}_{d}\nonumber \\
+ & \left.\int_{r_{1}-t_{NLoS}}^{R}P_{c}^{NL}\cdot f_{R_{d_{2}}^{NLOS}}\left(\overline{R}_{d}\right)d\overline{R}_{d}\right)f_{r_{1}^{NLOS}}(r_{1})dr_{1},\label{eq:pdf of los d2dlink-1}
\end{align}
The pdf of $\overline{R_{d}}^{L(NL)}$ can then be obtained as
\begin{equation}
f_{\overline{R_{d}}^{L(NL)}}(r)=\frac{\partial\Pr\left[\overline{R}_{d}^{L(NL)}<r\right]}{\partial r},
\end{equation}
where $P_{c}$ is the probability of the selected UE operating
in the cellular mode, and by the law of cosines it can be calculated as
\begin{equation}
P_{c}^{LOS/NLOS}=\arccos\left(\frac{\overline{R}_{d}^{2}+r_{1}^{2}-t_{LOS/NLOS}^{2}}{2\overline{R}_{d}r_{1}}\right)/\pi,\label{eq:circle}
\end{equation}
which concludes our proof.
\end{IEEEproof}
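The chord-angle expression for $P_{c}$ is pure law-of-cosines geometry: a point uniformly distributed on the circle of radius $\overline{R}_{d}$ around the receiver lies within distance $t$ of a BS at distance $r_{1}$ with probability $\theta/\pi$, where $\cos\theta=(\overline{R}_{d}^{2}+r_{1}^{2}-t^{2})/(2\overline{R}_{d}r_{1})$, valid for $|r_{1}-t|<\overline{R}_{d}<r_{1}+t$. The sketch below verifies this identity by Monte Carlo with arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
Rd, r1, t = 1.0, 1.5, 1.2               # arbitrary, with |r1-t| < Rd < r1+t
phi = rng.uniform(0.0, 2.0 * np.pi, 500000)
x, y = Rd * np.cos(phi), Rd * np.sin(phi)    # candidate points on the circle
inside = np.hypot(x - r1, y) < t             # within distance t of the BS?
p_mc = inside.mean()
p_an = np.arccos((Rd**2 + r1**2 - t**2) / (2.0 * Rd * r1)) / np.pi
print(p_mc, p_an)                            # Monte Carlo vs closed form
```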
\bibliographystyle{unsrt-fr}
\section{Introduction}\label{sec:introduction}
Active and emerging flux regions host an abundance of compact transient
brightenings, particularly in their early evolving stages, that are the
likely heating signatures of reconnection as emerging fields reconfigure.
Examples of such transients range from Ellerman bombs\ in the lower atmosphere to
microflares in the upper chromosphere and understanding their formation may
therefore provide essential information in understanding the evolution of active
regions as a whole.
First observed in 1915, Ellerman bombs\ were described in a publication two years later
\citepads{1917ApJ....46..298E}
and have been subject of renewed interest since the early 2000s with the
observations from the Flare Genesis balloon mission
\citepads{2002ApJ...575..506G},
but in particular since the high-resolution imaging
with the Solar Optical Telescope (SOT) aboard {\it Hinode\/}\ and imaging spectroscopy
with the CRisp Imaging SpectroPolarimeter\ (CRISP;
\citeads{2008ApJ...689L..69S})
at the
Swedish 1-m Solar Telescope\ (SST;
\citeads{2003SPIE.4853..341S}).
Both {\it Hinode\/}\ observations in \mbox{Ca\,\specchar{ii}\,\,H}\
\citepads{2010PASJ...62..879H}
and CRISP observations in \Halpha\
\citepads{2011ApJ...736...71W}
clearly demonstrated Ellerman bomb\ sub-arcsecond fine-structure and rapid
variability on a timescale of seconds.
Furthermore, imaging spectroscopy with the SST revealed that these are
sub-canopy events: while they are clearly visible in the wings of \Halpha\ they
get more and more obscured as one observes closer to line centre, to the point
that they become invisible in line core images (%
\citeads{2011ApJ...736...71W},
\citeads{2013JPhCS.440a2007R}).
A phenomenon with similar morphology and dynamics was identified in early
{\it Interface Region Imaging Spectrograph\/}\ (IRIS;
\citeads{2014SoPh..289.2733D})
observations by
\citetads{2014Sci...346C.315P}
and described as ``hot bombs'' which were suggested to be located in the cool
lower atmosphere.
These UV bursts\ are characterised by strongly broadened and enhanced \mbox{Si\,\specchar{iv}},
\CII\ and \mbox{Mg\,\specchar{ii}{\specand}k}\ lines, often with absorption blends from neutral species
superimposed.
While the latter already suggests sub-canopy formation of the emission, these
events share further characteristics with Ellerman bombs: they tend to occur on polarity
inversion lines, have a signature in the UV continua at 1600\,\AA\ and
1700\,\AA\ observed by the {\it Solar Dynamics Observatory\/}'s (SDO)
{\it Atmospheric Imaging Assembly\/}\ (AIA;
\citeads{2012SoPh..275...17L}),
but remain invisible in its
\HeII\ and higher-temperature coronal channels.
Without co-temporal \Halpha\ data a connection to Ellerman bombs\ could not be made at the
time, but later studies (\eg\
\citeads{2015ApJ...812...11V},
\mbox{\citeads{2015ApJ...810...38K}},
\citeads{2016ApJ...824...96T},
\citeads{2017A&A...598A..33L}),
have shown that there is indeed overlap between the Ellerman bomb\ and UV burst\ populations,
however not one-to-one.
Recent 3D magneto-hydrodynamic numerical experiments have reproduced the typical
\Halpha\ wing enhancements observed in Quiet Sun Ellerman bomb-like events
(\citeads{2017A&A...601A.122D};
these are the Quiet Sun counterparts of the ``classical'' Ellerman bombs\ and were first
reported by
\citeads{2016A&A...592A.100R})
and in stronger-field Ellerman bombs\
\citepads{2017ApJ...839...22H}.
The latter study was also able to reproduce the \mbox{Si\,\specchar{iv}}\ enhancements that
characterise UV bursts, albeit not simultaneously with the Ellerman bomb\ signatures;
\ie\ the events with Ellerman bomb\ signature did not show enhanced \mbox{Si\,\specchar{iv}}\ emission,
while the UV bursts\ had enhanced \Halpha\ core intensity, unlike observational
Ellerman bombs.
The reconnection height appears to be key: where Ellerman bombs\ resulted from
reconnection in the first few hundred kilometers of the atmosphere, UV bursts\ were
due to reconnection up at some 2\,Mm.
This may help explain the observational characteristic that not all Ellerman bombs\ have a
UV burst\ counterpart signature (cf.~\eg\
\citeads{2015ApJ...812...11V},
\citeads{2016ApJ...824...96T},
\citeads{2016A&A...593A..32G}),
as the absence of a one-to-one correlation suggests differences in the
atmospheric conditions between events that show either signature in isolation.
It does, however, not explain those events where \mbox{Si\,\specchar{iv}}\ and \Halpha\ appear
co-spatially even at more slanted lines-of-sight.
The \mbox{Si\,\specchar{iv}}\ visibility poses additional problems, as this would seem to require
excessive temperatures in the lower solar atmosphere compared to what has so far
been suggested based on semi-empirical modeling of \Halpha\ and \CaII\
diagnostics (\eg\
\citeads{1983SoPh...87..135K},
\citeads{2010MmSAI..81..646B},
\citeads{2013A&A...557A.102B},
\citeads{2014A&A...567A.110B},
\citeads{2017RAA....17...31F}),
and more recently including IRIS \mbox{Mg\,\specchar{ii}\,\,h}\
\citepads{2016A&A...593A..32G}.
On the other hand, analysis of \mbox{He\,\specchar{i}\,\,D$_{3}$}\ observations with the TRIPPEL
spectrograph at the SST suggests temperatures of order a few ten thousand kelvin
could be reached
\citepads{2017A&A...598A..33L}.
Furthermore,
\citetads{2016A&A...590A.124R}
argues that temperatures of order 1--2\,\mbox{$\times10^{4}$\;K}\ may be sufficient to result in \mbox{Si\,\specchar{iv}}\
emission, provided one assumes LTE in the Ellerman bomb\ onset and non-equilibrium
conditions in the subsequent dynamical evolution.
Now, commissioning observations with CHROMIS in \CaIIK\ uncover a whole new level
of fine structure, with highly dynamic blob-like substructure evolving on the
time scale of seconds.
In a recent paper,
\citetads{2017ApJ...851L...6R}
argue that these observations suggest plasmoid-driven reconnection in UV bursts.
This appears to be supported by 2.5D numerical experiments, where the
superposition of plasmoids at different Doppler shifts could explain
multi-peaked and triangular \mbox{Si\,\specchar{iv}}\ profiles that are sometimes observed in
UV bursts.
This study aims at inferring the atmospheric stratification of Ellerman bombs\ with UV burst\
signature by combining the wealth of information that the SST and IRIS provide.
The remainder of this paper is structured as follows.
Section~\ref{sec:observations} details the IRIS and SST observations,
including the alignment procedure and event identification and selection.
Section~\ref{sec:stic} describes the inversion code and setup, while the
inversion results are presented in Section~\ref{sec:results}.
Section~\ref{sec:discussion} offers a discussion of these results and, finally,
in Section~\ref{sec:conclusion} we summarise our conclusions.
\begin{figure*}[ht]
\centerline{\includegraphics[width=\textwidth]{fig1a}}
\vspace{-5ex}
\centerline{\includegraphics[width=\textwidth]{fig1b}}
\vspace{-2ex}
\caption[]{\label{fig:fov} %
Overview images of the 3 and 5 September, 2016 data sets in the red wing of
\CaIR\ at +0.35\,\AA\ ({\it left panels\/}) and \CaIIK\ at +0.39\,\AA\ ({\it
right panels\/}).
The location of events selected for further study are highlighted with
labelled white boxes in the panels.
The slanted dashed lines indicate the extent of the IRIS raster (which
extends beyond the upper and lower boundaries of this field-of-view).
The red arrows point to Solar North and West, while the white arrows
indicate the direction to the closest limb.
}
\end{figure*}
\section{Observations and data reduction}\label{sec:observations}
\subsection{Acquisition and data properties}
For this study we employ two data sets obtained on September 3 and 5,
2016, respectively.
On both days the target was active region NOAA 12585, with the SST pointing at
($X$,$Y$)=($-$561\hbox{$^{\prime\prime}$},44\hbox{$^{\prime\prime}$}) on the 3rd and at
($X$,$Y$)=($-$161\hbox{$^{\prime\prime}$},24\hbox{$^{\prime\prime}$}) on the
5th, corresponding to a viewing angle of $\mu$=0.81 and 0.99, respectively.
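The quoted $\mu$ values follow directly from the pointing offsets through $\mu=\cos\theta=\sqrt{1-(d/R_{\odot})^{2}}$, with $d$ the angular distance from disc centre; the small check below assumes an apparent solar radius of $\approx$951\hbox{$^{\prime\prime}$}\ for early September.

```python
import math

def mu_from_pointing(x_arcsec, y_arcsec, r_sun_arcsec=951.0):
    """Heliocentric mu = cos(theta) from helioprojective pointing offsets;
    r_sun_arcsec is the apparent solar radius (about 951" in early September)."""
    d = math.hypot(x_arcsec, y_arcsec)   # angular distance from disc centre
    return math.sqrt(1.0 - (d / r_sun_arcsec) ** 2)

print(round(mu_from_pointing(-561.0, 44.0), 2))   # 0.81, September 3 pointing
print(round(mu_from_pointing(-161.0, 24.0), 2))   # 0.99, September 5 pointing
```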
At the SST observations were performed with the CRISP and CHROMIS instruments.
Both are dual Fabry-P\'erot tunable filter instruments, where CRISP has
additional polarimetric capabilities.
On both days, CRISP provided imaging spectroscopy in the \Halpha\ line in 15
positions out to $\pm$1.5\,\AA\ at 200\,m\AA\ steps and full Stokes
imaging spectropolarimetry in the \CaIR\ line in 21 positions out to
$\pm$1.75\,\AA\ at 70\,m\AA\ steps in the inner wings and increasing steps of up
to 800\,m\AA\ in the outer wings. The cadence of these observations is 20\,s.
On September 5 this sequence was extended to include full Stokes
spectropolarimetry in the \FeI~6301 and 6302\,\AA\ lines in 16 wavelength
positions, resulting in an overall cadence of 32\,s.
On both days CHROMIS recorded \CaIIK\ and \mbox{H\hspace{0.2ex}$\beta$}\ imaging spectroscopy, but for
our analysis we focus only on the former.
The \CaIIK\ line was sampled out to $\pm$0.7\,\AA\ at 78\,m\AA\ spacing, with
additional samplings at $\pm$1.33\,\AA\ as well as a continuum point at
3999.7\,\AA.
The cadence of the CHROMIS data is 13\,s and 12\,s for the respective data sets.
On both days these observations were supported by IRIS with a medium dense
16-step raster (OBSID 3625503135).
This program covers about 5\hbox{$^{\prime\prime}$}\,$\times$\,60\hbox{$^{\prime\prime}$}\ with continuous
0\farcs{33} steps at 0.5\,s exposure time per slit position, resulting in an
overall raster cadence of 20.8\,s.
As part of the program, the far-UV (FUV) data were spectrally rebinned
on-board by 4 to increase the signal-to-noise ratio.
Context slit-jaw imaging in the \CII, \mbox{Si\,\specchar{iv}}, and \mbox{Mg\,\specchar{ii}\,\,k}\ bands was recorded at
10.4\,s cadence.
Absolute wavelength and intensity calibrations were performed for all data.
For CRISP and CHROMIS data the atlas profile by
\citetads{1984SoPh...90..205N}
was used, taking into account limb darkening due to the non-vertical viewing
angles.
For IRIS spectra we followed the standard procedure, with wavelength calibration
to the \OI~1355.5977\,\AA\ line for FUV1 (containing \CII), to
\FeII~1392.817\,\AA\ for FUV2 (containing the \mbox{Si\,\specchar{iv}}\ lines) and to the
\NiI~2799.474\,\AA\ line in the near-UV (NUV), while using the
wavelength-dependent response functions for the intensity calibration.
All resulting intensities are expressed in CGS units as function of frequency
(\ie\ $I_{\nu}$ [\hbox{erg\;s$^{-1}$\;cm$^{-2}$\;Hz$^{-1}$\;sr$^{-1}$}]).
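For reference, a per-wavelength calibrated intensity converts to this per-frequency form through $I_{\nu}=I_{\lambda}\lambda^{2}/c$; the snippet below sketches the unit handling (the sample value is arbitrary and purely illustrative).

```python
C_CGS = 2.99792458e10                   # speed of light [cm s^-1]

def i_lambda_to_i_nu(i_lam_per_AA, wav_AA):
    """Convert I_lambda [erg s^-1 cm^-2 AA^-1 sr^-1] at wavelength wav_AA
    to I_nu [erg s^-1 cm^-2 Hz^-1 sr^-1] via I_nu = I_lambda lambda^2 / c."""
    i_lam_cgs = i_lam_per_AA * 1e8      # per AA -> per cm
    wav_cm = wav_AA * 1e-8              # AA -> cm
    return i_lam_cgs * wav_cm**2 / C_CGS

inu = i_lambda_to_i_nu(1.0e6, 3933.7)   # arbitrary sample near the Ca II K core
print(inu)                              # of order 5e-6 in these units
```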
\subsection{Data reduction and alignment}
The CRISP data were reduced using the CRISPRED
\citepads{2015A&A...573A..40D}
processing pipeline, which includes image reconstruction through Multi-Object
Multi-Frame Blind Deconvolution (MOMFBD;
\citeads{2005SoPh..228..191V}).
CHROMIS data were reduced using similar procedures, modified from CRISPRED to
accommodate the CHROMIS data format and bundled into the CHROMISRED pipeline
\citepads{2018arXiv180403030L}.
The CRISP data were then scaled up to the native CHROMIS pixel scale (from
0\farcs{0592} to 0\farcs{0376}) and subsequently aligned to the CHROMIS images
by iteratively cross-correlating a wavelength-integrated image for every time
step in \CaIIK\ ($\pm$0.47\,\AA\ around the core) with the nearest-neighbour one
in time in \CaIR\ ($\pm$0.45\,\AA\ around the core).
Similarly, the IRIS to SST alignment was performed using the \mbox{Mg\,\specchar{ii}\,\,k}\ slit-jaw
images (also scaled up to CHROMIS pixel size), with wavelength-integrated \CaII\
images as anchor.
After initial guess alignment based on pointing coordinates and FOV rotation,
further fine-alignment was achieved through iterative shift and
cross-correlation of the images until the correction shift fell below 0.1
(CHROMIS) pixel.
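The iterative fine-alignment can be sketched as follows: measure the residual shift from the peak of the FFT-based cross-correlation, apply it, and repeat until it falls below a threshold. This is a minimal integer-pixel sketch (the actual alignment works to subpixel accuracy on wavelength-integrated images); the function names and the synthetic test image are illustrative.

```python
import numpy as np

def xcorr_shift(a, b):
    """Integer-pixel shift (dy, dx) of b relative to a, from the peak of the
    FFT-based circular cross-correlation of mean-subtracted images."""
    cc = np.fft.ifft2(np.fft.fft2(a - a.mean()) *
                      np.conj(np.fft.fft2(b - b.mean()))).real
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    ny, nx = a.shape
    return dy - ny if dy > ny // 2 else dy, dx - nx if dx > nx // 2 else dx

def align(a, b, max_iter=10):
    """Iteratively roll b onto a until the measured residual shift vanishes."""
    total = [0, 0]
    for _ in range(max_iter):
        dy, dx = xcorr_shift(a, b)
        if dy == 0 and dx == 0:
            break
        b = np.roll(b, (dy, dx), axis=(0, 1))
        total[0] += int(dy)
        total[1] += int(dx)
    return b, tuple(total)

# Synthetic demonstration: recover a known (5, -3) pixel offset.
rng = np.random.default_rng(3)
img = rng.random((64, 64))
img = np.convolve(img.ravel(), np.ones(5) / 5, mode="same").reshape(64, 64)
shifted = np.roll(img, (-5, 3), axis=(0, 1))
aligned, found = align(img, shifted)
print(found)   # the applied correction, (5, -3)
```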
\subsection{Event identification and selection}
We used the output from an Ellerman bomb\ detection code
{\tt EBDETECT}\
\citepads{2019arXiv190107975V}
as a basis for event selection, as comparison with the intensity images then
allows us to identify those Ellerman bombs\ that also display UV burst\ signatures.
CRISPEX (%
\citeads{2012ApJ...750...22V};
\citeads{2018arXiv180403030L})
was used for data browsing, event and snapshot selection, as well as
verification of the automated detection.
Ideally, we would select events that show both Ellerman bomb\ and UV burst\ signatures, as
well as events that show only one of those characteristics in isolation, but
unfortunately the data at hand only provided examples of the former.
Although comparison of the SST and IRIS slit-jaw image fields-of-view indicates
that a number of events with only one of the signatures were observed, these were
not covered by the IRIS raster.
Hence, we selected two events---A and B---for detailed study, observed on
September 3 and 5, respectively.
The spectral criterion for an event to classify as a UV burst\ is to display
profiles as described in
\citetads{2018SSRv..214..120Y},
\ie\ substantially enhanced and broadened \mbox{Si\,\specchar{iv}}\ lines, although
the \CII\ and \mbox{Mg\,\specchar{ii}{\specand}k}\ lines are commonly also enhanced.
An Ellerman bomb\ with UV burst\ signature requires the same IRIS line enhancements in
addition to the regular Ellerman bomb\ signature of bright \Halpha\ wings and dark core.
Both the events under scrutiny have previously been analysed in
\citetads{2017ApJ...851L...6R}.
For both events \CaIR, \CaIIK\ and IRIS data are available, while Event B was
also covered by additional \FeI\ spectropolarimetry.
The results we present and discuss in the following sections are from inversions
of selected snapshots of both events, as well as (temporally downsampled)
time sequence of Event A. Before presenting the inversion results, we first
discuss the inversion code and setup in the following Section~\ref{sec:stic}.
\section{Inversions with the STockholm Inversion Code} \label{sec:stic}
We use the MPI-parallel non-LTE STockholm Inversion Code\ (STiC;
\citeads{2016ApJ...830L..30D},
\citeads{2019A&A...623A..74D})
to invert the SST and IRIS line profiles in order to infer the possible
atmospheric conditions.
The code builds on an optimised version of RH
\citepads{2001ApJ...557..389U}
to solve the atom population densities
and in each iteration the pressure scale is
computed assuming hydrostatic equilibrium, from which in turn the hydrogen and
electron densities are derived using an LTE equation of state (from
\citeads{2017A&A...597A..16P}).
The electron densities can also be derived assuming non-LTE hydrogen ionisation,
by iteratively solving the statistical equilibrium equations while imposing
charge conservation (similar to
\citeads{2007A&A...473..625L}).
We found, however, that the latter did not significantly change our inversion
results (we refer the reader to Appendix~\ref{sec:appendix} for a results
comparison between the two approaches) and therefore decided to assume LTE
electron densities instead, with the added benefit of faster and more stable
inversions.
The inversions are performed pixel-by-pixel, \ie\ assuming 1.5D plane-parallel
atmospheres.
This means that 3D radiative transfer effects, which are important for \CaII\
line cores
(\citeads{2009ApJ...694L.128L},
\citeads{2009ASPC..415...87L},
\citeads{2018A&A...612A..28L})
and \MgII\ lines
(\citeads{2013ApJ...772...89L},
\citeads{2013ApJ...772...90L},
\citeads{2015ApJ...806...14P})
cannot be taken into account by the code, however, these should not affect the
line wings as much, where the emission of interest is observed.
Also for \mbox{Si\,\specchar{iv}}\ (which is generally formed under optically thin conditions) this
is likely a minor effect.
STiC does allow including partial frequency redistribution (PRD) effects of
scattered photons
\citepads{2012A&A...543A.109L}.
We initialise the model atmosphere from FAL-C by interpolating to 44 depth points.
While this is unlikely to be close to the solar burst atmospheres that we are
interested in, the inversion code already roughly approaches the final results
after the first cycle.
Modification of the initial atmosphere by moving the transition region to lower
heights or by raising the chromospheric temperature plateau did not
significantly affect the inversion outcomes.
The inversions are run in multiple cycles, with the general approach being to
use fewer nodes in the first cycle to obtain the large scale trends and more
nodes in the subsequent cycles to get a more detailed atmospheric structure (as
suggested by
\citeads{1992ApJ...398..375R}).
In between the cycles the model atmosphere is smoothed horizontally using a
Gaussian with a 2\,pixel full-width-at-half-maximum, to decrease the
effects of pixels where inversions failed.
This smoothing is applied at twice the node resolution, \ie\ at every node point
as well as at a point equidistantly between each node, followed by
interpolation to all other depth points.
The smoothed atmosphere is then used as input atmosphere for the subsequent
cycle.
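The inter-cycle smoothing step can be sketched as below: each node plane of a parameter cube is convolved with a Gaussian of 2-pixel FWHM, with the conversion $\sigma={\rm FWHM}/(2\sqrt{2\ln2})$. This is a simplified illustration (the resampling at twice the node resolution described above is omitted) and the toy values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_nodes(param, fwhm_pix=2.0):
    """Horizontally smooth a (ny, nx, n_nodes) node-parameter cube with a
    Gaussian of the given FWHM in pixels (no smoothing along the node axis)."""
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return gaussian_filter(param, sigma=(sigma, sigma, 0.0))

# Toy cube: one diverged pixel (spike) in an otherwise flat temperature node.
T = np.full((16, 16, 3), 5000.0)
T[8, 8, 1] = 50000.0
Ts = smooth_nodes(T)
print(T[8, 8, 1], Ts[8, 8, 1])          # the spike is strongly damped
```

Smoothing like this damps isolated diverged pixels while leaving the smooth background, and hence the mean of the cube, essentially unchanged.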
While inverting the \CaII\ data in isolation (or even with \FeI) is a
straightforward and relatively quick process, obtaining reasonable fits when
including UV lines is non-trivial as we show in the
following.
Instead of attempting direct inversion of all available diagnostics, we first
perform two cycles with \CaII\ (and for September 5 also \FeI) data, using the
output atmosphere thereof as input atmosphere for the inversions including IRIS
diagnostics.
Table~\ref{tab:cycles} summarises the number of nodes used in each cycle for
temperature $T$, line-of-sight velocity \hbox{$v_{\rm{LOS}}$}, microturbulence (or
non-thermal velocity) \hbox{$v_{\rm{micro}}$}, the longitudinal and horizontal
components of the magnetic field \hbox{$B_{\rm{lon}}$}\ and \hbox{$B_{\rm{hor}}$}\ (in the frame of the
observer, \ie\ \hbox{$B_{\rm{lon}}$}\ is the line-of-sight component, while \hbox{$B_{\rm{hor}}$}\ is that in
the plane-of-the-sky),
and azimuth $\alpha$.
The nodes are by default distributed equidistantly between $\hbox{log $\tau_{500}$}\tis0.1$ and $-$8.
The exception to this is when we include \FeI, in which case the nodes for
\hbox{$B_{\rm{lon}}$}\ and \hbox{$B_{\rm{hor}}$}\ are placed at specific locations:
$\hbox{log $\tau_{500}$}\tis[-0.5,-2.0,-5.0]$ and $\hbox{log $\tau_{500}$}\tis[-0.5,-5.0]$, respectively.
The third cycle applies only for runs that include IRIS data, \ie\ cycles 1 and
2 are run with \CaII\ (and if available \FeI) only and the output atmosphere
from that second cycle is used as input atmosphere for the third cycle when
\MgII\ and \mbox{Si\,\specchar{iv}}\ are included.
In principle, including more diagnostics formed at different heights would warrant
increasing the number of velocity nodes; however, tests showed this did not
generally yield better fits, hence we retained the number of velocity nodes from
the second cycle in the subsequent runs.
STiC offers four choices in depth interpolation of the parameters: linear,
quadratic and cubic Bezier splines
\citepads{2013ApJ...764...33D},
and discontinuous
\citepads{2016A&A...586A..42S}.
Tests with single pixels and small patches indicated best results were obtained
with linear interpolation when considering only SST data, but that allowing for
discontinuities was necessary to better fit \MgII.
When including \mbox{Si\,\specchar{iv}}\ best results were again obtained with linear depth
interpolation.
The model atoms used for \CaII, \MgII\ and \mbox{Si\,\specchar{iv}}\ have 6, 11 and 9 levels,
respectively.
\CaIIK\ and \mbox{Mg\,\specchar{ii}{\specand}k}\ are computed with PRD,
while for \CaIR\ and \mbox{Si\,\specchar{iv}}\ complete frequency redistribution (CRD) is assumed.
\begin{table}[h]
\caption{Number of nodes used in each inversion cycle.}
\begin{center}
\begin{tabular}{l|cc|cc|cc}%
\hline \hline
Parameter & \multicolumn{2}{c|}{\CaIR} & \multicolumn{2}{c|}{\CaII\ (+\FeI)} &
\multicolumn{2}{c}{\CaII\ (+\FeI) + IRIS} \\
\cline{2-3}\cline{4-5}\cline{6-7} & 1 & 2 & 1 & 2 & 3A & 3B \\
\hline
$T$ & 4 & 9 & 4 & 9 & 9 & 13 \\
$v_{\rm{LOS}}$ & 1 & 3 & 1 & 4 & 4 & 4 \\
$v_{\rm{micro}}$ & 0 & 2 & 1 & 5 & 5 & 5 \\
$B_{\rm{lon}}$ & 1 & 2 & 1 & 2 (3)$^{a}$ & 2 (3)$^{a}$ & 2 \\
$B_{\rm{hor}}$ & 1 & 2 & 1 & 2 & 2 & 2 \\
$\alpha$ & 1 & 1 & 1 & 1 & 1 & 1 \\
\hline\hline
\end{tabular}
\tablefoot{
The first two sets of columns specify the nodes for the two inversion cycles
considering only SST spectral diagnostics, while the last two provide
the details for the third cycle that includes IRIS diagnostics,
differentiating between the run adding only \MgII\ (3A) and the one including
both \MgII\ and \mbox{Si\,\specchar{iv}}\ (3B) on top of all other diagnostics. \\
\tablefoottext{a}{The number of nodes for \hbox{$B_{\rm{lon}}$}\ in cycles 2 and 3 depends on
whether \FeI\ is included or not; three nodes are used when it is. Both
\hbox{$B_{\rm{lon}}$}\ and \hbox{$B_{\rm{hor}}$}\ nodes are in that case placed at particular values of \hbox{log $\tau_{500}$}\
(see main text).}
}
\end{center}
\label{tab:cycles}
\end{table}
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig2a}}
\vspace{-3.5ex}
\centerline{\includegraphics[width=\textwidth]{fig2b}}
\vspace{-2ex}
\caption[]{\label{fig:subfov} %
Close-up images of the selected events in the various diagnostics in which
they are observed.
{\it From left to right\/}:
\Halpha\ summed wings ($\pm$1\,\AA), \CaIR\ summed wings
($\pm$0.59\,\AA), composite \CaIIK\ images at
the indicated wavelength offsets (with blue wing in blue, red wing in red),
\mbox{Mg\,\specchar{ii}\,\,k}\ blue wing, \MgII\ triplet red wing, \mbox{Si\,\specchar{iv}}\ (nominal rest wavelength)
and a Stokes $V/I$ magnetogram proxy (from \CaIR\ for September 3 and
\FeI\,6301.5\,\AA\ at $-$0.07\,\AA\ for September 5), with positive and negative polarities
in black and red, respectively.
For each event---labelled in the top left corner of the \Halpha\ panel---two
snapshots are shown at the times specified in the top right corner of the
same panel.
The white arrows below the event labels indicate the direction to the
closest limb.
The dashed cross-hairs do not highlight any particular feature, but are
meant to aid in comparing the substructure in various diagnostics.
The coloured diamonds in the \mbox{Si\,\specchar{iv}}\ panels of Event A (second-to-last
panels in the top two rows) indicate locations for which spectra and
inversion results are shown in Fig.~\ref{fig:siiv_invprof}.
Panels have been bytescaled individually to better highlight relevant
substructure.
}
\end{figure*}
\section{Results} \label{sec:results}
Figure~\ref{fig:subfov} shows two snapshots of Event A in various diagnostics
in its top two rows; the next two rows offer a similar display for Event B.
The panels are selected through nearest neighbour interpolation in time with
\CaIIK\ as leading diagnostic, resulting in a difference of up to 2.9 and 9.2\,s
between the panels for the two snapshots of Event A and up to 2.3 and 5.3\,s for
those of Event B.
Depending on the frames chosen these timing differences can be much
larger, though: for CRISP and CHROMIS the scan-averaged times can differ by
nearly 20\,s and 6\,s for September 3 and 5, respectively, while the offset with
IRIS can run up to about 11\,s on both days.
This can have an appreciable impact on the ability of STiC to obtain agreement
between the different diagnostics, in particular for those cases where
fast-evolving fine structure is considered (cf.~\eg\
\citetads{2018A&A...614A..73F},
although the effects on the Stokes profiles are not as extreme in our case).
The frames displayed here were chosen for their relatively high contrast and the
amount of substructure visible in \CaIIK\ images.
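The nearest-neighbour frame selection described above can be sketched as follows; the helper function and cadences are illustrative only and not part of our reduction pipeline.

```python
# Illustrative sketch (not our actual pipeline): for each frame of the
# leading diagnostic (Ca II K), pick the nearest-in-time frame of another
# instrument and record the resulting time offset. Cadences are made up.

def nearest_in_time(lead_times, other_times):
    """Return, per leading timestamp, the index of the closest timestamp
    in the other series and the absolute time offset in seconds."""
    matches = []
    for t in lead_times:
        i = min(range(len(other_times)), key=lambda k: abs(other_times[k] - t))
        matches.append((i, abs(other_times[i] - t)))
    return matches

caiik_times = [0.0, 13.0, 26.0, 39.0]   # hypothetical ~13 s cadence
iris_times = [0.0, 20.0, 40.0]          # hypothetical ~20 s raster cadence
print(nearest_in_time(caiik_times, iris_times))
# → [(0, 0.0), (1, 7.0), (1, 6.0), (2, 1.0)]
```

With this selection the offsets stay below half the slower cadence, but they do not vanish, which is the origin of the timing differences quoted above.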
For Event A \Halpha\ and \CaIR\ wing images show much the same morphology,
although the \CaIR\ wing emission is stronger in the top parts of the Ellerman bomb.
Comparing the \CaIR\ and \CaIIK\ panels---and as already noted in
\citetads{2017ApJ...851L...6R}---\CaIIK\
reveals additional substructure compared to the CRISP images.
Particularly the second snapshot, at 10:02\,UT, shows that the more or less
monolithic \Halpha\ and \CaIR\ brightening at the geometric base of the Ellerman bomb\
(\ie\ crossed by the vertical dashed line at
$(x,y)\simeq(45\hbox{$^{\prime\prime}$},25\hbox{$^{\prime\prime}$})$) is composed of at least three
individual, thin jet-like structures.
Also striking is the spatial offset between the locations of red and blue \mbox{K$_2$}\
peak emission (third column): in both snapshots the \mbox{K$_{2R}$}\
enhancement is located at the Ellerman bomb\ base, while the \mbox{K$_{2V}$}\ enhancement is
predominantly observed at the jet tops (cf.~the offset between red and
blue).
Considering the next three panels showing IRIS \MgII\ and \mbox{Si\,\specchar{iv}}\ this event
classifies as an Ellerman bomb\ with UV burst\ signature.
The fine structure so well-observed with CHROMIS is unsurprisingly lost, but
the event confirms earlier reports (\eg\
\citeads{2015ApJ...812...11V})
that the \mbox{Mg\,\specchar{ii}\,\,k}\ wing emission appears co-spatial with the body of the
\Halpha\ brightening, while \mbox{Si\,\specchar{iv}}\ is offset with respect to the bulk of both
the \Halpha\ and \MgII\ emission and is mostly observed towards the geometric top
of the event.
Finally, comparison with the \CaIR\ Stokes $V/I$ panel suggests opposite
polarity reconnection as the driver of this event; guided by the cross-hairs we
can readily trace the base of the \Halpha\ brightenings to the polarity
inversion line.
A similar picture emerges for Event B, although geometric effects separating
both the bi-directional red- and blue-shifts and the emission at the different
IRIS wavelengths are much smaller, as one would expect given the near-vertical
viewing angle.
Individual jets that can be seen in particularly the second \Halpha\ snapshot
are not that well visible in the \CaIR\ and \CaIIK\ panels, however both \CaIIK\
panels show clearly the presence of blob-shaped substructure that the study by
\citetads{2017ApJ...851L...6R}
suggested to be signatures of plasmoids.
The \MgII\ enhancements overlap largely with the \Halpha\ and \CaII\ emission,
while \mbox{Si\,\specchar{iv}}\ appears somewhat more offset; however, this is difficult to
establish conclusively given that the raster does not cover the entire event as
observed with the SST.
The underlying magnetogram shown in the right-most panels, in this case derived
from the blue wing of \FeI\,6301.5\,\AA, again points to opposite polarity
reconnection.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig3}}
\vspace{-2ex}
\caption[]{\label{fig:eventA_ca8_maps} %
Inversion maps of the second snapshot of Event A
(cf.~Fig.~\ref{fig:subfov}, second row) from the \CaIR-only run at
several heights in the model atmosphere ({\it three left-hand
columns\/}), as well as synthetic and observed intensity images for
comparison ({\it three right-hand columns\/}).
{\it From left to right\/}: temperature, the temperature difference with
respect to the initial input model, line-of-sight velocity, synthetic
({\it first and third rows\/}) and
observed ({\it second and fourth rows\/}) intensity images in the
wings of \CaIR, at the wavelength offsets indicated in the synthetic image
panels.
The \hbox{log $\tau_{500}$}\ height for each row is indicated in the lower left of the
left-most panels.
Panels are scaled by column for the three left-hand columns, while all
panels in the three right-hand columns are scaled to the same values, \ie\
similar colours in different panels means similar values at different
heights or wavelength offsets. The intensity panels have been
multiplied by 1.25 to offset absolute intensity differences with \CaIIK,
allowing direct comparison with the \CaIR\ panels in
Figs.~\ref{fig:eventA_maps} and \ref{fig:eventA_mg_maps}.
}
\end{figure*}
\subsection{Inversions of CRISP and CHROMIS data}
\label{sec:invAB_sst}
We first consider inversions of the CRISP and CHROMIS data of both Events A and
B.
Figures~\ref{fig:eventA_ca8_maps} and \ref{fig:eventA_maps} show inversion maps of
the second snapshot of Event A at four heights in the atmosphere
($\hbox{log $\tau_{500}$}\simeq[-1.1,-2.1,-3.1,-4.1]$, averaged over three nodes around each)
from, respectively, the \CaIR-only inversion and the inversion including both
the \CaIR\ and \CaIIK\ lines.
Figure~\ref{fig:eventB_maps} shows similar maps of the combined \CaII\
inversions for both Event B snapshots of Fig.~\ref{fig:subfov}.
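The height averaging used for these maps (three depth points around each target \hbox{log $\tau_{500}$}) can be illustrated with a minimal sketch; the depth grid and temperature values below are hypothetical.

```python
# Minimal sketch (hypothetical grid and values, not our inversion output):
# average an inverted quantity over three depth points centred on the grid
# point closest to each target log(tau_500) height.

def average_around(ltau, values, targets, half_width=1):
    """Average `values` over (2*half_width + 1) depth points centred on the
    grid point closest to each target log(tau_500) height."""
    out = []
    for t in targets:
        i = min(range(len(ltau)), key=lambda k: abs(ltau[k] - t))
        lo, hi = max(0, i - half_width), min(len(ltau), i + half_width + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

ltau = [-0.9, -1.0, -1.1, -1.2, -1.3]          # made-up depth grid
temp = [6000.0, 6100.0, 6200.0, 6300.0, 6400.0]  # made-up temperatures [K]
print(average_around(ltau, temp, [-1.1]))  # → [6200.0]
```

Averaging in this way reduces pixel-to-pixel noise in the displayed maps without mixing in heights far from the targeted one.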
\paragraph{Event A: \CaIR-only results.}
The right-hand columns in Fig.~\ref{fig:eventA_ca8_maps} show that the general
intensity patterns are well-recovered by the inversion code, including the range
of intensity values (the difference in dynamic range between each set of
synthetic and observed panels is negligible), although the darker patch to
the left of the bright event in the fourth and fifth panels of the
third row clearly shows the code has trouble in some parts of the sub-FOV.
This feature corresponds morphologically well with the bright event top in
the $-$0.90\,\AA\ panels (first and second panels in the fourth column), but
unexpectedly shows a redshift, suggesting the code has difficulties
there---likely because of the contribution from the dark fibrils visible in the
$-$0.30\,\AA\ panels (first and second panels in the last column).
The general shape of the Ellerman bomb\ is also recovered in the temperature maps at
$\hbox{log $\tau_{500}$}\tis-$2, albeit with less of the spatial structure than is visible in the
intensity panels.
At higher heights the event is practically lost, but at $\hbox{log $\tau_{500}$}\tis-$1 the compact
brightening in the top intensity panels is recovered as an enhanced temperature
of some \hbox{$\Delta T$}=1000--2000\,K (at $(x,y)\simeq(44\farcs{2},24\farcs{5})$), with a
hint of enhanced temperature in a semi-circular arc to the left of the main
brightening.
Overall the total temperature reaches order 7500--8500\,K, corresponding to some
\hbox{$\Delta T$}=3000--4500\,K over the local ambient temperature.
The highest temperature rise, of roughly \hbox{$\Delta T$}=4000\,K, is found close to
$\hbox{log $\tau_{500}$}\tis-$3 and its location in the observed plane corresponds to the stronger
brightening at $(x,y)\simeq(44\farcs{3},24\farcs{0})$ in the extended jet
that is visible in all intensity panels.
The cooler temperatures that cross the event at $\hbox{log $\tau_{500}$}\simeq-$3 and the noisy
temperature maps at $\hbox{log $\tau_{500}$}\tis-$4 are most likely due to the dark canopy fibrils that
are evident in the blue-wing images (in particular those near $-$0.3\,\AA), but
are ill-recovered due to the reduced temperature sensitivity of \CaIR\ at
those heights.
The line-of-sight velocity maps are even more confused.
Disregarding the earlier mentioned artefact, there is still a mix of up- and
downflows, both at the base of the event (at
$(x,y)\simeq(45\farcs{0},24\farcs{5})$) and what would correspond to the
jet-like extension towards the lower left.
The latter is distinguishable to some extent as a blue-shift protrusion flanked
by a small red-shifted feature to its right up to $\hbox{log $\tau_{500}$}\simeq-$3, at
$(x,y)\simeq(44\farcs{5},24\farcs{0})$, which also coincides spatially with
the location of the largest temperature enhancement.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig4}}
\vspace{-2ex}
\caption[]{\label{fig:eventA_maps} %
Inversion maps of Event A considering both \CaIR\ and \CaIIK.
Format as for Fig.~\ref{fig:eventA_ca8_maps}, except that the right-hand panels in
the lower two rows now show the synthetic and observed \CaIIK\ intensity
images (at the specified wavelength offsets) and those in the upper two rows
(\ie\ \CaIR) have been multiplied by a factor 1.25 to offset the intrinsic
intensity difference between both calcium lines.
The coloured plus markers in the right-hand column indicate the locations
for which similarly-coloured profiles are shown in
Fig.~\ref{fig:eventAB_profs}.
For reference, the same markers are overplotted on the third temperature
difference and line-of-sight velocity maps (\ie\ around $\hbox{log $\tau_{500}$}\tis-3.1$),
albeit in black for better visibility.
}
\end{figure*}
\paragraph{Event A: \CaIR\ and \CaIIK\ results.}
Comparison with Fig.~\ref{fig:eventA_maps} evidences the advantage of
considering multiple diagnostics simultaneously.
In particular the panels at $\hbox{log $\tau_{500}$}\simeq-$3 and $-$4 show much better
defined structures than with \CaIR\ alone.
In part this is because the high-resolution CHROMIS \CaIIK\ data display
more fine structure than the CRISP \CaIR\ images; however, the model is also
better constrained by including lines formed at somewhat different heights.
The added continuum point from the \CaIIK\ observations further constrains
the temperature at the lowest heights, resulting in a better fit to the lines
and consequently a better constraint of the line-of-sight velocity gradients.
Again, the synthetic images in the first and third rows coincide well with the observed
ones in the second and fourth, both in terms of feature shapes and dynamic range, suggesting
that here too most profiles are well-fitted.
The top two rows of Fig.~\ref{fig:eventAB_profs} highlight this by comparing
single-pixel fits to observed profiles for a number of sampling locations in
Event A (indicated with identically coloured plus markers in Fig.~\ref{fig:eventA_maps}).
While the fitted profiles (solid lines) are not perfect everywhere, generally
good agreement is obtained for both lines, albeit more so for \CaIIK\ than for
\CaIR---likely an effect of assigning more weight to the former.
Of those shown, the orange and cyan profiles have the largest mismatch issues in
one (or both) of the lines, which is likely related to wing asymmetries.
For instance, the cyan \CaIIK\ profile exhibits a strong blue-over-red
asymmetry, which presumably drives the solution to have a similar asymmetry in the
8542\,\AA\ line; for the orange profile the absence of such asymmetry in the
8542\,\AA\ line has led to an underestimated \CaIIK\ red wing and an overestimated
blue wing and \mbox{K$_2$}\ peaks.
Another issue may be the time difference (in this case of 8.8\,s) between the
\CaIIK\ and the \CaIR\ scans, \ie\ the observed profiles are not perfectly
co-temporal even though STiC assumes they are.
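The effect of assigning more weight to one diagnostic than another can be pictured with a minimal, hypothetical merit function; the weights, profiles, and function below are invented for illustration and do not reproduce the actual STiC implementation.

```python
# Hypothetical sketch of a weighted chi-square-like merit function: a
# diagnostic with a larger weight (here "CaIIK") contributes more to the
# total misfit, so the solution tends to fit it better. All numbers are
# made up.

def weighted_chi2(obs, syn, weights):
    """Sum of squared residuals per diagnostic, scaled by its weight."""
    total = 0.0
    for name in obs:
        w = weights[name]
        total += w * sum((o - s) ** 2 for o, s in zip(obs[name], syn[name]))
    return total

obs = {'CaIIK': [1.0, 0.8], 'CaIR': [0.9, 0.7]}
syn = {'CaIIK': [0.9, 0.8], 'CaIR': [0.8, 0.8]}
print(weighted_chi2(obs, syn, {'CaIIK': 2.0, 'CaIR': 1.0}))
```

The same residual in the higher-weighted line is thus penalised twice as strongly, which is why the fits above are generally better for \CaIIK\ than for \CaIR.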
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig5}}
\vspace{-2ex}
\caption[]{\label{fig:eventAB_profs} %
\CaIIK, \CaIR\ and temperature profiles for six selected pixels in Events A
({\it top two rows\/}) and B ({\it bottom two rows\/}) at the identically coloured
locations marked in Figs.~\ref{fig:eventA_maps} and \ref{fig:eventB_maps},
respectively.
The left-hand and middle panels show fits ({\it solid lines\/}) to the observed profiles
({\it filled circles\/}) of the \CaIIK\ ({\it left panel\/}) and \CaIR\ ({\it middle
panel\/}) lines, respectively.
The corresponding temperature stratification ({\it right-hand panels\/}) is shown
using the same colour coding.
In the latter, the black dashed curve represents the input temperature
stratification (initialised from FAL-C, for which the \CaII\ panels
show the profiles with similar line style for reference),
while the vertical dash-dotted lines
indicate the \hbox{log $\tau_{500}$}\ heights at which the maps in Figs.~\ref{fig:eventA_maps}
and \ref{fig:eventB_maps} are shown.
}
\end{figure*}
While in the \CaIR-only inversion Event A is most clearly visible in the
temperature map at $\hbox{log $\tau_{500}$}\tis-$2, it is not as clear at that height when combining both
calcium lines.
At this height only a narrow, zig-zag-shaped temperature enhancement is evident,
coinciding spatially with a brightening of similar morphology in the \CaIIK\ blue
wing at $-$0.55\,\AA\ (third and fourth panels in the fourth column).
Interestingly, this enhanced brightness and temperature coincides with the
boundary of the red-shift signature to the right and blue-shifts to the left.
On the other hand, the strongest temperature enhancements in the combined \CaII\
inversion are reached in similar locations as in the \CaIR-only run.
The green and purple crosses mark two such locations and the corresponding
temperature stratifications indicate that some 9500--10,000\,K (up to \hbox{$\Delta T$}=5500\,K
over the ambient input temperature) is reached around $\hbox{log $\tau_{500}$}\tis-$3.
In addition, for most samplings a sharp temperature increase is found around
$\hbox{log $\tau_{500}$}\simeq-$5.5, which for some (\eg\ the blue and cyan profiles) represents a
considerable lowering of the transition region, while for others (the red,
purple, orange and green profiles) it seems to be the increase to a
``chromospheric'' temperature plateau at some 2.5--3.0\,\mbox{$\times10^{4}$\;K}.
Including \CaIIK\ has also considerably altered the line-of-sight
velocity maps.
The extended jet visible in the intensity images is now clearly recovered as an
elongated blue-shifted feature of some 15--20\,\hbox{km\;s$^{-1}$}\ towards the observer, while
strong red-shifts of similar magnitude are found at the base of the event.
Such a bi-directional jet signature was already implied in the composite \CaIIK\
image of Fig.~\ref{fig:subfov} (second row, third panel), but is now also
confirmed from the inversions.
\begin{figure*}[h]
\centerline{\includegraphics[width=\textwidth]{fig6}}
\vspace{-2ex}
\caption[]{\label{fig:eventB_maps} %
Inversion maps of Event B based on \CaIR, \CaIIK\ and \FeI\ data, in format
similar to Fig.~\ref{fig:eventA_maps}, but only showing maps at two
heights for each Event B1 and B2.
The top two rows (B1) show maps for the first snapshot of Event B in
Fig.~\ref{fig:subfov}; the bottom rows (B2) show those for the second
snapshot. The coloured plus signs mark locations for which \CaII\ profiles are
shown in the bottom two rows of Fig.~\ref{fig:eventAB_profs}. For the
orange location in B1 Fig.~\ref{fig:eventAB_mg_profs} also shows
\MgII\ profile fits.
}
\end{figure*}
\paragraph{Event B: \CaII\ and \FeI\ results.}
Figure~\ref{fig:eventB_maps} shows the inversion maps for both snapshots of
Event B, from including both \CaII\ lines and \FeI\,6301.5\,\AA\ (latter not
shown), in similar format as Fig.~\ref{fig:eventA_maps}.
The upper two rows correspond to the first snapshot (marked B1) and the lower
two rows to the second one (B2).
Again STiC is able to recover the fine structure in the intensity images (but
also the \hbox{$v_{\rm{LOS}}$}\ maps), doing slightly better for snapshot B1 than B2, though
with very similar results for both \CaII\ lines.
Overall, for both snapshots the discrepancies are mostly in the \CaIR\ panels:
in case B1 an imprint of the \CaIIK\ brightenings is visible that is
not there in the observations, while for both
B1 and B2 the dark fibrillar structure overlying the event is relatively
well-reproduced in intensity at the wavelengths shown.
The blob-like substructure is evident for the first snapshot (B1), in particular in
the \CaIIK\ intensity panels (two right-most panels in the first and second
rows).
The difference in dynamic range is 1--2$\times10^{-6}$\,\hbox{erg\;s$^{-1}$\;cm$^{-2}$\;Hz$^{-1}$\;sr$^{-1}$}\ at most for all
panels shown, except the red-wing \CaIIK\ images of frame B2 (third panel
in the fifth column), where
the maximum synthetic intensity falls some 4$\times10^{-6}$\,\hbox{erg\;s$^{-1}$\;cm$^{-2}$\;Hz$^{-1}$\;sr$^{-1}$}\ below the
observed values.
This assessment is supported by the profile fits shown in the lower two rows of
Fig.~\ref{fig:eventAB_profs}, highlighting four pixels from B1 and two from B2.
In some cases the large \mbox{K$_{2R}$}\ peak appears to drive a stronger decrease in the
\CaIR\ red wing (\eg\ the green profiles) or both lines are overestimated in the
wing emission (\eg\ red and blue, in particular for \CaIIK); in others (\eg\
purple and orange) both lines are well-fitted simultaneously.
The velocity maps for both frames B1 and B2 do not show as clear a
spatially resolved bi-directional jet structure as for Event A.
Rather, the majority of blobs in B1 show either a clear red-shift or blue-shift
of order 10--12\,\hbox{km\;s$^{-1}$}, away from or towards the observer, respectively.
The line-of-sight velocities are strongest around the height where the
temperature peaks, $\hbox{log $\tau_{500}$}\tis-$3.
The dark fibrils seen in the blue wing of \CaIR\ are well-recovered in the
synthetic intensity images and the $\hbox{log $\tau_{500}$}\tis-$4 panel for Event B1 (third
panel in the second row) displays a moderate blue-shift with similar morphology
at that same location.
In terms of temperatures, the inversions suggest even higher values are reached
in Event B than for Event A.
Both snapshots B1 and B2 show hot patches of up to 15,000\,K total
temperature, peaking between
$\hbox{log $\tau_{500}$}\tis-2$ and $-$4 (cf.~Fig.~\ref{fig:eventAB_profs}), and while one would be
hard-pressed to recognise the blob-like morphology from these temperature maps,
the largest temperature enhancements are found at the locations where the
brightest blobs are visible in the \CaIIK\ images.
The temperature stratification shown in the last panel of the lower two rows of
Fig.~\ref{fig:eventAB_profs} is similar to that for Event A: a temperature
increase around $\hbox{log $\tau_{500}$}\tis-$3 and a rise to the transition region or enhanced
chromospheric plateau close to $\hbox{log $\tau_{500}$}\tis-$5, even though for half of the samplings
the transition region rise lies close to that of the input model.
The apparent decrease above $\hbox{log $\tau_{500}$}\tis-$7 for the orange sampling is an effect of
extrapolation beyond the last node with the slope at the last node.
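The extrapolation effect just mentioned can be sketched generically: beyond the last node the stratification is continued linearly with the slope of the final node interval, so an apparent decrease can appear at heights where nothing constrains the model. The node positions and values below are arbitrary.

```python
# Generic sketch (arbitrary nodes, not STiC source code) of piecewise-linear
# node interpolation that, outside the node range, extrapolates with the
# slope of the nearest segment. A downward final segment then produces an
# apparent decrease beyond the last node.

def interp_slope_extrap(xn, yn, x):
    """Piecewise-linear interpolation over nodes (xn strictly increasing);
    outside the node range, continue with the slope of the end segment."""
    if x <= xn[0]:
        i = 0
    elif x >= xn[-1]:
        i = len(xn) - 2
    else:
        i = max(k for k in range(len(xn) - 1) if xn[k] <= x)
    slope = (yn[i + 1] - yn[i]) / (xn[i + 1] - xn[i])
    return yn[i] + slope * (x - xn[i])

nodes_x = [0.0, 1.0, 2.0]   # arbitrary node coordinates
nodes_y = [0.0, 1.0, 0.0]   # last segment slopes downward
print(interp_slope_extrap(nodes_x, nodes_y, 3.0))  # → -1.0, a spurious dip
```

In the inversions this is why the temperature stratification can appear to drop in regions above the last node, without any physical meaning.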
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig7}}
\vspace{-2ex}
\caption[]{\label{fig:eventB_Bmaps} %
Inversion maps of magnetic field quantities for both Event B snapshots.
{\it From left to right\/}: line-of-sight magnetic field strength (with
positive/negative (\ie\ red/black) corresponding to field oriented
towards/away from the observer),
horizontal magnetic field strength and azimuth for Event B1, followed by
similar maps for B2.
As only one node was used in azimuth, the corresponding maps
are identical for the three heights shown.
The panels have been scaled by column to the same values for both B1 and B2
to facilitate comparison of the time evolution between them.
}
\end{figure*}
\subsection{Magnetic fields}
As only \CaIR\ polarimetry was available for Event A, we allowed fewer degrees
of freedom and the inverted field was consequently not well-defined with height.
However, with the availability of \FeI\ for Event B we added a third node in
line-of-sight magnetic field strength and obtained more reasonable results.
Figure~\ref{fig:eventB_Bmaps} shows the magnetic field inversion maps at the
same \hbox{log $\tau_{500}$}\ heights as Fig.~\ref{fig:eventB_maps} shows the other inversion
parameters, as well as around $\hbox{log $\tau_{500}$}\tis-$1.
While obviously noisy, clear signal is obtained for the longitudinal and
horizontal magnetic fields.
As only one node was used to fit the azimuth, the maps look the same at all
shown heights.
Especially at the lowest heights the photospheric magnetic field pattern visible
in \FeI\ Stokes $V/I$ (right-hand panels in the lower two rows of
Fig.~\ref{fig:subfov}) is also clearly recovered in the \hbox{$B_{\rm{lon}}$}\ panels.
Going to higher heights the signal gets weaker and more homogeneous, as one would
expect from the expansion of the field.
The horizontal magnetic field strength maps are largely devoid of signal and
generally noisy, yet do show enhanced signal at the location where the
brightenings are visible at $\hbox{log $\tau_{500}$}\tis-$3 and $-$4.
For B1 this enhanced horizontal field interestingly overlaps with the location
where the \hbox{$v_{\rm{LOS}}$}\ signature flips sign in the top row of
Fig.~\ref{fig:eventB_maps}.
The azimuth maps are noisy at best and do not show a clearly defined structure
coinciding with the event brightening, temperature or line-of-sight velocity
structures, although the values are typically low (below about 40\deg) and
spatially smoother at the locations of enhanced \hbox{$B_{\rm{hor}}$}.
Considering the changes with time going from B1 to B2 we observe an increase of
the longitudinal magnetic field, mostly at the lower heights \ie\ $\hbox{log $\tau_{500}$}\tis-$2 and
$-$3, while the horizontal fields at the location of the event decrease by
nearly 500\,G (the maximum \hbox{$B_{\rm{hor}}$}\ values are slightly over 1.1\,kG for B1).
In addition, the B2 maps for \hbox{$B_{\rm{lon}}$}\ at $\hbox{log $\tau_{500}$}\tis-$2 (displaying clear opposite
polarity footpoints) and \hbox{$B_{\rm{hor}}$}\ at $\hbox{log $\tau_{500}$}\tis-$4 (with enhanced horizontal fields)
are consistent with a $\cap$-configuration or possibly the shared
horizontal part of a post-reconnection $\cap$- below $\cup$-configuration.
\begin{figure*}[h]
\centerline{\includegraphics[width=\textwidth]{fig8}}
\vspace{-2ex}
\caption[]{\label{fig:eventA_tmaps} %
Time sequence of inversion maps of Event A from combining \CaIR\ and
\CaIIK. From left to right the columns
show temperature ({\it left-most three columns\/}), the temperature difference
with the input model ({\it middle three columns\/}) and line-of-sight velocity ({\it right-most
three columns\/}) at the \hbox{log $\tau_{500}$}\ heights specified in the first row panels.
The time in UT is indicated in the top left of the first panel of each row.
The panels have been bytescaled by column (ranges indicated by the colour
bars at the top of each column).
The dashed lines in rows 5, 8 and 9 indicate the lines along which
Fig.~\ref{fig:eventA_tcrossmaps} shows cross-cut inversion maps.
}
\end{figure*}
\subsection{Time evolution} \label{sec:time}
Let us now consider the time evolution of Event A.
Figure~\ref{fig:eventA_tmaps} shows the top-down inversion maps based on
\CaIR\ and \CaIIK for temperature,
temperature difference and line-of-sight velocity for this event as function of
time at roughly 50\,s intervals.
Throughout this sequence the event stands out in the temperature maps as an
enhancement of a few thousand kelvin up to a total temperature of
8000--10,000\,K, in particular around $\hbox{log $\tau_{500}$}\tis-$3.
At $\hbox{log $\tau_{500}$}\tis-$2 and $-$3 it is clearly hotter than its surroundings, while at
$\hbox{log $\tau_{500}$}\tis-$4 the whole sub-field-of-view appears enhanced in temperature by
\hbox{$\Delta T$}=2000--2500\,K and the event starts to blend in.
The bi-directional jet signature so clearly visible in the middle column of
Fig.~\ref{fig:eventA_maps} (and second-to-last row of this figure, at
10:02:42\,UT) can be
observed at several stages during the time evolution, although a line-of-sight
blue-shift of up to $-$25\,\hbox{km\;s$^{-1}$}\ appears to be the persistent velocity
signature.
In addition, both the top few rows (09:56:41--09:58:25\,UT) and bottom few ones
(10:01:51--10:03:34\,UT) appear to show the blue-shift velocity imprint from
overlying canopy fibrils to the left of the event.
Similar to some of its preceding frames (not shown here), the top row shows
red-shift artefacts embedded in otherwise smooth blue-shifts in the \hbox{$v_{\rm{LOS}}$}\ maps
or enhanced and decreased temperatures at $\hbox{log $\tau_{500}$}\tis-$2 and $-$3, respectively,
which correspond to locations where the maps suffer from worse line fits.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig9}}
\vspace{-2ex}
\caption[]{\label{fig:eventA_tcrossmaps} %
Cross-cut inversion maps of Event A from selected time steps
highlighted in Fig.~\ref{fig:eventA_tmaps}.
{\it From left to right\/}: $x$-\hbox{log $\tau_{500}$}\ cut in temperature, temperature
difference and line-of-sight velocity, followed by the $y$-\hbox{log $\tau_{500}$}\ cuts in the
same quantities. The panels have been scaled by column, where the
quantities have been clipped to the respective colour bar values.
The vertical markers in the $x$-\hbox{log $\tau_{500}$}\ ($y$-\hbox{log $\tau_{500}$}) panels indicate the
location where they are intersected by the $y$-\hbox{log $\tau_{500}$}\ ($x$-\hbox{log $\tau_{500}$}) cut.
The lower row displays the cross-cuts of the same frame as in
the third row, but now from the inversions including \MgII.
}
\end{figure*}
Figure~\ref{fig:eventA_tcrossmaps} shows cross-sectional inversion maps
for three selected frames from the time sequence in Fig.~\ref{fig:eventA_tmaps}
along the lines indicated in the respective panels; the third and
fourth rows of
maps correspond to the second Event A snapshot of Fig.~\ref{fig:subfov},
where the fourth row is from the inversions including \MgII.
As was already implied by the temperature maps in Fig.~\ref{fig:eventA_tmaps}
and the temperature profiles in Fig.~\ref{fig:eventAB_profs}, the temperature
enhancements related to the Ellerman bomb/UV burst\ are generally located between $\hbox{log $\tau_{500}$}\tis-$2
and $-$4, peaking around $\hbox{log $\tau_{500}$}\tis-$3.
As indicated before, between $\hbox{log $\tau_{500}$}\tis-$5 and $-$6 we find a sharp temperature
increase, either to an enhanced chromospheric plateau or because the actual
transition region overlying the event has come down compared to the FAL-C input
(where it lies close to $\hbox{log $\tau_{500}$}\tis-$8).
The line-of-sight velocity cross-cuts suggest bi-directional flows with the
expected red-shifts below blue-shifts within some of the pixels (cf.~\eg\ the
$x$-\hbox{log $\tau_{500}$}\ cuts for all three frames or the $y$-\hbox{log $\tau_{500}$}\ for the top two).
Interpreting these as the bi-directional reconnection jet within the pixel (\ie\
within the $x$-\hbox{log $\tau_{500}$}\ or $y$-\hbox{log $\tau_{500}$}\ column) would, however, imply the
reconnection takes place above the main temperature increase, as the \hbox{$v_{\rm{LOS}}$}\
divergence point is located around $\hbox{log $\tau_{500}$}\tis-$5 in these examples.
Given that the bi-directional signature is clearly separated spatially in the
top-down view, one could imagine the blue-shift should be observed in adjacent
pixels rather than in the same (\ie\ to the top left in these $x$-\hbox{log $\tau_{500}$}\ cuts).
This is the case for all three examples shown, but the extension of both
signatures over $\hbox{log $\tau_{500}$}\tis[0,-4]$ is somewhat puzzling, \ie\ even though opposite
signatures in adjacent pixels are expected, we would also expect the blue-shifts
to be found predominantly at comparatively higher heights than the red-shifts.
To some extent this is the case in the $x$-\hbox{log $\tau_{500}$}\ \hbox{$v_{\rm{LOS}}$}\ panel of the
first frame (top row, third panel), where a blue-shift can be found to the top
left (around $\hbox{log $\tau_{500}$}\tis-$4.5) of the stronger red-shift, although this again places
the outflow point above the main heating location.
The same cut for the next two frames shows this much less clearly (if at all); the
blue-shifts left of the red-shifts extend only marginally higher than the
red-shifts.
An issue that could play a role here is the limited number of \hbox{$v_{\rm{LOS}}$}\ nodes used
in the inversions, thereby preventing STiC from finding a consistent solution
that places the outflow point lower down.
Increasing the number of nodes might have alleviated this; however, as pointed out
previously, increasing the number generally resulted in worse fits to the \CaII\
lines, hence our choice not to do so.
The fourth row, from inversions including \MgII, is further discussed in the
following section, but we note here that the general temperature and velocity
patterns are largely the same between the inversions with and without the
\MgII\ lines.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig10}}
\vspace{-2ex}
\caption[]{\label{fig:eventA_mg_maps} %
Inversion maps of Event A considering both \CaII\ lines and \mbox{Mg\,\specchar{ii}{\specand}k}.
Format as for Fig.~\ref{fig:eventA_maps}, except that the six panels in
the lower right now show (from left to right) images of the
\MgII\,\mbox{k$_{2V}$}, \mbox{k$_{2R}$}\ and the red wing of the subordinate \MgII\ triplet.
The \CaIR, \mbox{Mg\,\specchar{ii}\,\,k}\ and \MgII\ triplet images have been multiplied by 1.25,
7.5 and 10, respectively, to roughly offset the intrinsic intensity
difference with the \CaIIK\ line and all panels in the three right-hand
columns have been scaled to the same values subsequently
(these are the same values as in Figs.~\ref{fig:eventA_ca8_maps} and
\ref{fig:eventA_maps}).
The triangular area to the (upper) right of the inversion maps has values
set to zero as these fall outside the IRIS raster.
The coloured plus markers in the second and fourth rows mark the
locations for which profiles are shown in the top two rows of
Fig.~\ref{fig:eventAB_mg_profs}; the coloured diamond markers those
locations for which Fig.~\ref{fig:siiv_invprof} shows profile fits including
\mbox{Si\,\specchar{iv}}.
The purple cross marks the same location as the purple cross
in Fig.~\ref{fig:eventA_maps}.
}
\end{figure*}
\subsection{Inversions of combined SST and IRIS data}
\label{sec:invAB_sst_iris}
We have seen earlier that including additional diagnostics generally constrains
the model atmospheres better and given that \mbox{Mg\,\specchar{ii}{\specand}k}\ are typically formed
somewhat higher than \CaII\ (cf.~\eg\ Fig.~1 from
\citetads{2013ApJ...772...90L}
or Fig.~15 from
\citetads{2018A&A...611A..62B}),
this should help the inversions.
Moreover, finding an atmospheric model that can explain also the UV part of an
Ellerman bomb\ is of great interest.
When including IRIS, the instrumental resolution differences are, however,
non-negligible and a likely source of errors: already between CRISP \CaIR\
and CHROMIS \CaIIK\ there is a factor 2 difference in resolution, going up
to over a factor 8 when comparing CHROMIS and these IRIS raster observations.
As we chose not to sacrifice resolution, the IRIS profiles have been
interpolated to the CHROMIS pixel scale, \ie\ a single IRIS spectral profile is
spread out over many SST pixels.
Consequently, STiC considers these one-to-one, while in reality many SST
profiles should be contributing to the atmosphere that explains the single IRIS
profile.
Without a well-defined spatial point spread function for either the SST
or the IRIS spectrograph it is impossible for STiC to take the resolution
difference into proper account.
Also, as with the previous inversions, additional uncertainty arises from the
acquisition time difference between the different instruments.
On both days the time difference with IRIS can be as large as nearly 11\,s.
While this effect can be minimised, it may still play an important role,
particularly when considering the fast evolution of the \CaIIK\ substructure
where some plasmoid blobs are sometimes only visible for a single frame.
Notwithstanding these issues, the solution to which the inversions converge is
not significantly different from the runs without \MgII, as we show below.
\begin{figure*}[h]
\centerline{\includegraphics[width=\textwidth]{fig11}}
\vspace{-2ex}
\caption[]{\label{fig:eventAB_mg_profs} %
\CaIIK, \CaIR, \MgII\ and temperature profiles for selected pixels in Event
A ({\it red, blue and purple\/}) and Event B ({\it
orange\/})
at the similarly-coloured locations marked in the right-hand panels of
Figs.~\ref{fig:eventA_mg_maps} and \ref{fig:eventB_maps}, respectively.
Format similar to that for Fig.~\ref{fig:eventAB_profs}, except that the
second row now shows the \mbox{Mg\,\specchar{ii}\,\,k}\ and \MgII\ triplet lines (the zoom-in has
been chosen such to allow easier comparison of the observed and inverted
profiles, omitting \mbox{Mg\,\specchar{ii}\,\,h}\ because it behaves similarly to \mbox{Mg\,\specchar{ii}\,\,k}).
The input observed \MgII\ profiles are shown with every other point to
avoid cluttering the plot.
}
\end{figure*}
\subsubsection{Combining \CaII\ and \MgII}
\label{sec:invAB_sst_iris_mg}
Figure~\ref{fig:eventA_mg_maps} presents in similar format as before the
inversion results from including \MgII\ along with the \CaII\ lines for Event A.
Comparing Figs.~\ref{fig:eventA_maps} and \ref{fig:eventA_mg_maps} for Event A
we see that at lower heights ($\hbox{log $\tau_{500}$}\tis-$1 and $-$2) the temperature maps look
very similar between the runs without and with \MgII\ (a hint of the fine
structure imprint from \CaIIK\ remains visible in the \MgII\ results).
In fact, at both heights the range of temperatures is similar between the runs
and also the average temperatures fall within 50--250\,K of each other.
Also for both, the line-of-sight velocity maps show clearly the bi-directional
jet in most panels of the middle column, most clearly so at $\hbox{log $\tau_{500}$}\tis-$2.
However, when including \MgII\ the bi-directional pattern is much more noisy at
$\hbox{log $\tau_{500}$}\tis-$1.
The differences are more pronounced higher up.
On one hand, the contribution from the IRIS-observed \MgII\ is evident in the
temperature maps around $\hbox{log $\tau_{500}$}\tis-$3 through a more dispersed temperature
enhancement, devoid of much of the substructure that was visible when
considering only SST data.
At $\hbox{log $\tau_{500}$}\tis-$4 the event nearly blends into the background in the
temperature maps, with the whole sub-FOV displaying a more or less homogeneous
temperature enhancement of roughly \hbox{$\Delta T$}=2000\,K over the input temperature.
Also, while in the \CaII\ run the bi-directional velocity signature remains
clearly visible throughout the inverted atmosphere, when including \MgII\ this
signature is more pronounced in the sense that the redshifts are stronger at
lower heights, likely due to the contribution from the \MgII\ triplet (cf.~the
lower right intensity panels), and disappear almost entirely at $\hbox{log $\tau_{500}$}\tis-$4.
This is also reflected in the line-of-sight velocity cross-cut pattern
differences between the third and fourth rows of
Fig.~\ref{fig:eventA_tcrossmaps}, the latter from inversions including \MgII.
For instance, the strong red-shifts in the $x-\hbox{log $\tau_{500}$}$ cut (third panel of the
third row) do not extend as high when including \MgII\ (fourth row) and in the
latter the adjacent blue-shift (at $x\simeq44\farcs{2}$) is also largely
concentrated between $\hbox{log $\tau_{500}$}\tis-$2 and $-$4, rather than extending all the way from
\hbox{log $\tau_{500}$}\tis0 to $-$4.
This fits with the expectation that the blue-shifts should be found at
comparatively higher heights than the red-shifts.
Figure~\ref{fig:eventAB_mg_profs} shows the \CaII\ and \MgII\ fits with
corresponding temperature profiles for identically coloured selected pixels in
Figs.~\ref{fig:eventA_mg_maps} (red, blue and purple; Event A) and
\ref{fig:eventB_maps} (orange; Event B).
The fits are generally good, although getting agreement in these three lines
simultaneously is more challenging than for the two \CaII\ lines alone.
For the examples from Event A, the profile asymmetries are strongest in \CaIIK\
and the brighter \mbox{K$_2$}\ peak is also typically the one that is better fitted
(cf.~\eg\ the red and blue profiles), while the \Kthree\ core is sometimes too
dark (\eg\ purple and orange profiles), and in some cases the line wings
are not bright enough (orange profile).
By comparison, the fits to \CaIR\ generally show fewer discrepancies with the
observations than those to \CaIIK\ (except for the orange sampling).
As before, it is however possible that differences in the magnitude of the
asymmetries between these lines may affect the fitting of one of the line wings.
For \mbox{Mg\,\specchar{ii}\,\,k}\ the \mbox{k$_2$}\ peaks and inner wings are typically well-fitted,
while showing more (though still minor) issues in the \kthree\ core and further
out in the wings, beyond $\pm$0.75\,\AA, the latter being usually too bright
compared to the observations.
The \MgII\ triplet and its asymmetries are also generally well-reproduced.
Considering the temperature stratification, the peak temperatures of order
1--1.5$\times$10$^{4}$\,K that resulted from the \CaII\ inversions are
interestingly not found when including \MgII.
Rather the temperature enhancement is a moderate \hbox{$\Delta T$}=2500\,K over the ambient
temperature at $\hbox{log $\tau_{500}$}\tis-$3 and for both events the stratification shows an
extended plateau over a range of $\Delta \hbox{log $\tau_{500}$} \simeq\ 2.5$ upward from there.
This is also evident from the temperature cross-cuts
(cf.~the lower row of Fig.~\ref{fig:eventA_tcrossmaps}).
In particular the purple profile for Event A (or the orange one for Event B),
corresponding to the same sampling location for which the purple (orange) profile
in the first (fourth) row of Fig.~\ref{fig:eventAB_profs} is shown, now only
reaches a total temperature of some 6500\,K.
We discuss these differences further in Section~\ref{sec:discussion}, but note
already here that this is likely an effect of the pixel size difference between
the IRIS and SST data.
\subsubsection{Challenges posed by \mbox{Si\,\specchar{iv}}}
\label{sec:invAB_sst_iris_si}
As defined in
\citetads{2018SSRv..214..120Y},
the primary identification of UVBs is through their enhanced and broadened
\mbox{Si\,\specchar{iv}}\ lines and properly reproducing this diagnostic is therefore important for
a complete description of these events.
Unfortunately, at this point this is easier said than done, and after several
tests we decided to refrain from inverting maps including the \mbox{Si\,\specchar{iv}}\ lines.
Figure~\ref{fig:siiv_invprof} illustrates why.
This figure shows the profile fitting results of single pixel inversions at the
locations marked with diamonds in the top rows of Fig.~\ref{fig:subfov}
(two of which are also overplotted in Fig.~\ref{fig:eventA_mg_maps}).
Apart from the \CaII\ and \MgII\ lines as shown before, a fourth spectral panel
now displays the \mbox{Si\,\specchar{iv}}\ lines with the 1394\,\AA\ (1403\,\AA) observations and
fits as dots and solid lines (plus markers and dashed lines), respectively.
The \mbox{Si\,\specchar{iv}}\,1403\,\AA\ profiles have been multiplied by 2 to account for the offset
between the two \mbox{Si\,\specchar{iv}}\ lines when formed under optically thin conditions.
Comparison of the dots and plus markers for each sampling pair indeed suggests
that most are formed under such conditions: the \mbox{Si\,\specchar{iv}}\,1394\,\AA\ to 1403\,\AA\
ratio is close to 2 for the purple and blue samplings, while the red sampling is
clearly non-thin given a line ratio of 1.7.
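This factor of 2 reflects the relative line strengths of the doublet: under
optically thin formation the emergent intensity ratio of the two resonance
lines reduces to the ratio of their oscillator strengths,
\begin{equation}
  \frac{I_{1394}}{I_{1403}} \simeq \frac{(gf)_{1394}}{(gf)_{1403}} \simeq 2,
\end{equation}
so that a significant departure below this value, as for the red sampling,
indicates non-negligible opacity in the stronger line.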
The samplings shown were selected to test fitting of \mbox{Si\,\specchar{iv}}\ profiles with
varying degrees of complexity (blue-asymmetry for both blue and red
samplings versus more ragged-top purple sampling profiles).
While STiC may be able to reproduce the general broadening, enhancement and
asymmetry of the \mbox{Si\,\specchar{iv}}\ lines, this results in completely off profile fits for
\MgII\ in particular.
Of both \mbox{Si\,\specchar{iv}}\ lines, the 1403\,\AA\ line (plus markers and dashed lines) is
sometimes considerably better fitted, \eg\ the red sampling.
Even though not perfect, the red-shift side ``plateau'' mimics the
observations better for 1403\,\AA\ than the solid line follows the 1394\,\AA\ dots.
This further supports the non-thin formation already implied by the line
ratio departure from 2.
For the blue sampling both lines are fitted equally badly, in the sense that the
hump on the red-shift side is not fitted at all, while the fit retains a rather
Gaussian shape and recovers the peak intensity.
In contrast, the purple sampling shows the best fits for both lines of these three
examples, even though it does not fully reproduce the observed intensities
between about $-$50\,\hbox{km\;s$^{-1}$}\ and the nominal line centre.
The temperature profiles (top right panel, solid coloured lines) do show some
changes with respect to the inversion results when only SST diagnostics were
considered (dash-dotted coloured lines).
The temperature peaks that were already present after cycle 2 close to the input
temperature minimum (\eg\ blue and red) are shifted to lower heights by
about 0.5 in \hbox{log $\tau_{500}$}, while also increasing by a few thousand kelvin.
For the purple sampling the temperature enhancement close to the temperature
minimum was less pronounced when considering only SST data, but now shows a
noticeable peak just below $\hbox{log $\tau_{500}$}\tis-$3 when including both \MgII\ and \mbox{Si\,\specchar{iv}}.
The behaviour at higher heights is also changed.
For instance, where the red and purple samplings previously showed an increase to
a chromospheric plateau of about 3\,\mbox{$\times10^{4}$\;K}\ and 2.5\,\mbox{$\times10^{4}$\;K}, respectively, the
red temperature profile now shoots up to a plateau over 3.5\,\mbox{$\times10^{4}$\;K}\ and the
purple profile shows a pronounced peak up to 4.3\,\mbox{$\times10^{4}$\;K}\ at $\hbox{log $\tau_{500}$}\simeq\,-6.3$.
The blue sampling does not show such enhanced temperature between $\hbox{log $\tau_{500}$}\tis-$7
and $-$6, but the transition region temperature rise starts about 0.5 lower in
\hbox{log $\tau_{500}$}\ when including both \MgII\ and \mbox{Si\,\specchar{iv}}.
It should be noted, however, that the inversions with \mbox{Si\,\specchar{iv}}\ were run with 13
nodes in temperature, two more than the second cycle inversions, which may
explain both the slight shift of the low temperature peaks (since the nodes are
slightly shifted), and the ability to resolve the pronounced temperature
peak at $\hbox{log $\tau_{500}$}\simeq\,-6.3$ for the purple sampling (considering that the purple
dash-dotted temperature profile already showed a high-temperature chromospheric
plateau).
All in all, while both \CaII\ lines are also well-fitted for all three samplings
(and better so than \mbox{Si\,\specchar{iv}}), it is clear that in general agreement cannot be
reached for all diagnostics simultaneously with the proposed node models (\MgII\
being the most difficult to reconcile).
Considering the temperature profiles this is likely due to the temperature peaks
to some 10$^{4}$\,K between $\hbox{log $\tau_{500}$}\tis-$3 and $-$4, which (as shown in the first
part of this section) were suppressed when considering \CaII\ and \MgII\
together.
Other issues that may play a role include the limitation to pixel-by-pixel
atmospheres (\ie\ restriction to 1.5D), the difference in data resolution,
non-zero acquisition time differences for the spectra and the limited number
and/or placement of temperature inversion nodes.
We discuss these further in Section~\ref{sec:discussion}, but address the latter
shortly here.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig12}}
\vspace{-2ex}
\caption[]{\label{fig:siiv_invprof} %
Observed and inverted \CaIR, \CaIIK, \MgII, \mbox{Si\,\specchar{iv}}\ profiles, as well as the
corresponding temperature stratification, for selected
pixels in Event A (highlighted with diamond markers in
Fig.~\ref{fig:eventA_mg_maps} for the red and blue profiles; the purple
profile corresponds to a pixel in the middle of the \mbox{Si\,\specchar{iv}}\ brightening in the
earlier Event A snapshot shown in the top row of Fig.~\ref{fig:subfov}).
Format similar to Fig.~\ref{fig:eventAB_mg_profs}, except that the lower row
contains an additional panel with the \mbox{Si\,\specchar{iv}}\,1394\,\AA\ (1403\,\AA) line,
where the observed profiles are shown as dots (plus markers) and the fits as
solid (dashed) lines.
The observational and synthetic \mbox{Si\,\specchar{iv}}\,1403\,\AA\ data have been multiplied
by 2 to offset the intrinsic intensity difference with 1394\,\AA\ under
optically thin conditions.
The temperature profile panel also includes dash-dotted lines for each
sampling indicating the result from the previous inversion cycle (\ie\ SST data only).
}
\end{figure*}
\subsubsection{Effects of sharp temperature enhancements}
\label{sec:siiv_spikes}
Certain temperatures need to be reached in order to yield any measurable
intensity increase in \mbox{Si\,\specchar{iv}}, but because of the node distribution such a
temperature bump will typically be broad in \hbox{log $\tau_{500}$}, corresponding to significant
heating over a large height range.
This is probably one of the reasons for the overestimated enhancement of \MgII\
seen in Fig.~\ref{fig:siiv_invprof}.
Considering the observation of plasmoid-like blobs and indications from
numerical studies (\eg\
\citeads{2018ApJ...852...95N};
\citeads{2018arXiv180405631N})
that in some configurations the plasmoids may have associated confined slow- and
fast-mode shocks reaching sufficiently high temperatures to explain \mbox{Si\,\specchar{iv}}\
emission, it may be that this emission is very localised and hence impossible to
resolve with the current node representation.
As a first step towards further investigating this possibility, we
forward-modeled the emergent intensity in \mbox{Si\,\specchar{iv}}, \MgII\ and \CaII\ assuming a
sharp temperature enhancement at specific heights, sharper than our inversion
node placement would allow to recover.
We tested a grid of temperatures (in steps of \hbox{$\Delta T$}=2500\,K), peak locations (at
intervals of $\Delta \hbox{log $\tau_{500}$}\tis0.2$) and base widths ($\hbox{log $\tau_{500}$}\tis0.05$, 0.1, 0.2 and 0.3), but here
only show a sub-selection to highlight certain effects of varying these three
parameters.
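While we do not detail the exact functional form of these spikes here, they can
be thought of as local perturbations on the inverted stratification; for a
(purely illustrative) triangular shape with peak amplitude $\Delta T_{\rm peak}$,
central position $\log\tau_{0}$ and base width $w$, one would have
\begin{equation}
  T(\log\tau_{500}) = T_{\rm inv}(\log\tau_{500})
  + \Delta T_{\rm peak}\,
    \max\!\left(0,\; 1 - \frac{2\left|\log\tau_{500} - \log\tau_{0}\right|}{w}\right),
\end{equation}
which vanishes outside the interval $\log\tau_{0} \pm w/2$.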
Figure~\ref{fig:siiv_synthprof} presents the results of this spectral synthesis
for several examples of single temperature spikes, added on top of the inverted
temperature profiles based on \CaII\ and \MgII\ data, that produce \mbox{Si\,\specchar{iv}}\
emission within an order of magnitude of the observed profile in terms of peak
intensity.
The top (bottom) subfigure shows the effect of localised temperature
enhancements between $\hbox{log $\tau_{500}$}\tis-$5.9 and $-$4.9 ($-$4.1 and $-$3.1).
It is clear that regardless of the input temperature profile, the broadening of
the \mbox{Si\,\specchar{iv}}\ lines is not well-reproduced, yet this is not surprising: for one,
the velocity components (both line-of-sight and non-thermal) were not modified
from the preceding inversion output.
More importantly, in optically thin formation the line width is only influenced
by microturbulence and velocity gradients over the formation region, and since
the latter is purposefully narrow, the profile width will be relatively small as
well.
Considering the higher-altitude perturbations (\ie\ top subfigure) first, adding
the shown temperature spikes does not considerably change the \CaII\ or \MgII\
profiles, except for the \kthree\ core of the latter when the spike is located
below about $\hbox{log $\tau_{500}$}\tis-$5.5 (orange-red and purple profiles).
Comparing identically coloured temperature and \mbox{Si\,\specchar{iv}}\ profiles, we see that an
increased base width at the same height and peak temperature (\eg\ full and dark
red, as well as all three purple profiles) causes an increased \mbox{Si\,\specchar{iv}}\ peak
intensity.
This is to be expected as the volume that is exposed to the heating is larger,
thus resulting in a stronger \mbox{Si\,\specchar{iv}}\ emission.
At the same time this does not appear to affect its line width, \ie\ the
full-width-at-half-maximum remains essentially unchanged.
The low-altitude perturbations (bottom subfigure) have an effect on all lines,
but most conspicuously on the \CaIR\ core, \MgII\ peaks and wings and the \mbox{Si\,\specchar{iv}}\
line.
\CaIIK\ shows similar profiles regardless of the temperature spike location,
height and width, although the \Kthree\ core is darker for the lower location
temperature spikes and the \mbox{K$_2$}\ peaks are more enhanced for the higher located
spikes.
The higher location spikes (\ie\ close to $\hbox{log $\tau_{500}$}\tis-$4) affect both the \CaIR\
core---which is considerably enhanced compared to the observations---and the
\mbox{Mg\,\specchar{ii}\,\,k}\ and \MgII\ triplet peaks.
On the other hand, the lower location temperature spikes (purple profiles) show
a larger influence on the (quasi-)continua outside the \MgII\ and \mbox{Si\,\specchar{iv}}\ lines,
overestimating their intensity, sometimes by more than an order of magnitude.
As one would expect, at higher heights one generally requires higher
temperatures than at lower heights to get similar \mbox{Si\,\specchar{iv}}\ emission, but in both
cases similar profiles can be obtained from either a narrow but tall or a
broader but lower temperature spike.
The narrowest of temperature spikes can indeed reproduce specific \mbox{Si\,\specchar{iv}}\
intensities at (or close to) the nominal line core, but better profile
correspondence likely requires modifying other parameters, such as the
line-of-sight velocity.
In addition, the temperature spikes at lower heights pose problems for the
\MgII\ emission in particular, but depending on the height also for the \CaIR\
core intensity; this may explain why the inversions favoured a solution where
the temperatures were enhanced closer to $\hbox{log $\tau_{500}$}\tis-$6, rather than around the
heights where \CaII\ inversions suggested the temperature increase to be.
Hence, it is clear that this requires further study, yet as this likely demands
a different approach we defer investigation of the UV burst\ \mbox{Si\,\specchar{iv}}\ line
formation to a future study.
\begin{figure*}[bht]
\centerline{\includegraphics[width=\textwidth]{fig13}}
\vspace{-2ex}
\caption[]{\label{fig:siiv_synthprof} %
Spectral synthesis of \CaII, \MgII\ and \mbox{Si\,\specchar{iv}}\ based on modified inversion
atmospheres of two well-fitted pixels of Event A.
Temperature peaks have been added at three \hbox{log $\tau_{500}$}\ heights
(differentiated by colour) with various peak heights and base widths
(differentiated by colour shades, increasingly darker for increasing width
and/or decreasing peak height).
Subfigure A ({\it top two rows}) shows the results for placing such
enhancement of \hbox{log $\tau_{500}$}-width 0.1--0.3 ({\it from light to dark shade\/})
somewhere between $\hbox{log $\tau_{500}$}\tis-$5.9 and $-$4.9.
Subfigure B ({\it bottom two rows}) offers a similar display for spikes of
\hbox{log $\tau_{500}$}-width 0.05, 0.1 and 0.3 ({\it from light to dark\/}) at heights
between $\hbox{log $\tau_{500}$}\tis-$4.1 and $-$3.1.
The underlying observations are from different pixels for the two
subfigures.
Each subfigure has similar format as Fig.~\ref{fig:siiv_invprof}, with the
spectral panels showing the observations as black dotted profiles and
synthesis output as coloured solid lines.
}
\end{figure*}
\section{Discussion}\label{sec:discussion}
\subsection{Temperature stratification}
\label{sec:discussion_tstrat}
The observation of \mbox{Si\,\specchar{iv}}\ and \CII\ emission sometimes correlated and co-spatial
with typical \Halpha-observed Ellerman bombs\ has added a layer of complexity
requiring explanation, and over the past few years has led to a heated debate on
the temperatures these events may reach.
Prior to the IRIS launch, Ellerman bomb\ temperature estimates ranged anywhere between
a few hundred to a few thousand kelvin over the ambient temperature at close to
temperature minimum heights, generally based on 1D semi-empirical
modelling
(\citeads{1983SoPh...87..135K};
\citeads{2010MmSAI..81..646B};
\citeads{2013A&A...557A.102B};
\citeads{2014A&A...567A.110B})
or two-cloud modelling
\citepads{2014ApJ...792...13H}.
On the other hand,
\citetads{2006SoPh..235...75S}
obtained temperatures of 1--1.2\,\mbox{$\times10^{4}$\;K}\ between roughly
$\hbox{log $\tau_{500}$}\tis-$3.5 and $-$5 from inversions of the \CaII\,8498\,\AA\ and 8542\,\AA\ lines
using a predecessor of the inversion code NICOLE.
A recent study by
\citetads{2017A&A...598A..33L},
considering \mbox{He\,\specchar{i}\,\,D$_{3}$}\ and 10830\,\AA, suggests temperatures of order
15,000\,K would be consistent with their observations.
Profile fitting efforts by
\citetads{2016A&A...593A..32G}
were able to reproduce the observed IRIS \mbox{Mg\,\specchar{ii}\,\,h}\ profiles with temperature
enhancements of \hbox{$\Delta T$}=1100--3350\,K (up to roughly 8000\,K total
temperature), while also yielding
Ellerman bomb-like \Halpha\ profiles.
More recently,
\citetads{2017ApJ...835L..37R}
and
\citetads{2017ApJ...845..144H}
used the radiative hydrodynamics code RADYN
to model Ellerman bombs\ and found that temperature enhancements of up to \hbox{$\Delta T$}=3000\,K yielded
\Halpha\ and \CaIR\ profiles similar to observations.
Both studies also considered \mbox{Mg\,\specchar{ii}\,\,k}, where a larger energy deposition was
required to obtain profile shapes similar to the observations, however,
simultaneous agreement between all three diagnostics could not be achieved in
either of those studies.
They also noted---as did
\citetads{2017RAA....17...31F}---that
temperatures in excess of 10,000\,K caused the computed \Halpha, \CaIR\ and
continuum emission to be overestimated compared to observations.
Our results largely agree with these previous findings, yet with some notable
exceptions.
As shown in the temperature maps (\eg\ Figs.~\ref{fig:eventA_maps},
\ref{fig:eventB_maps} or \ref{fig:eventA_tmaps}), combined \CaIR\ and \CaIIK\
inversions yield total temperatures of some 7000--9000\,K close to the
temperature minimum throughout most of the events, however, localised
temperatures of 1.5\,\mbox{$\times10^{4}$\;K}\ are found as well.
The highest temperatures are typically associated with the blue-shifted parts
of the events (cf.~\eg\ the second and third panels of the third row in
Fig.~\ref{fig:eventA_maps} or in the first row of Fig.~\ref{fig:eventB_maps}).
Another difference with previous studies is the occurrence of marked temperature
plateaus at 2--3\,\mbox{$\times10^{4}$\;K}\ above $\hbox{log $\tau_{500}$}\tis-6$ for some pixels (\eg\ purple and
red samplings in the top row of Fig.~\ref{fig:eventAB_profs} or the green
sampling in the bottom row of the same figure).
However, the sensitivity of the \CaII\ (and \MgII) lines is very limited above
$\hbox{log $\tau_{500}$}\simeq-6$ (cf.~also the response functions to temperature in appendix
Fig.~\ref{fig:lte_nlte_rf}) and while in some cases
the line cores may sense (and thus require) the lower part of this temperature
rise around $\hbox{log $\tau_{500}$}\simeq-$5.5, the presence or absence of the higher-located
plateaus should not be overinterpreted.
The case is clearly different when considering \mbox{Si\,\specchar{iv}}, as both
Figs.~\ref{fig:siiv_invprof} and \ref{fig:siiv_synthprof} suggest that
temperature modifications at these heights may have an appreciable effect on
the \mbox{Si\,\specchar{iv}}\ line intensity.
A notable effect of taking \mbox{Mg\,\specchar{ii}{\specand}k}\ into consideration is that the peak
temperatures around $\hbox{log $\tau_{500}$}\tis-3$ are reduced compared to the \CaII\ inversion
results.
Intuitively, as \MgII\ is formed higher than either of the \CaII\ lines, one
would expect higher temperatures to be recovered when including the \mbox{Mg\,\specchar{ii}{\specand}k}\
lines (or at the very least not as strong a reduction as we find).
However, this is likely not a physical effect, but rather because of the
difference in resolution between SST/CHROMIS and IRIS (and possibly timing
differences as well), as pointed out previously in
Section~\ref{sec:invAB_sst_iris_mg} and discussed in Section~\ref{sec:limitations}.
Both the selected pixel examples (Figs.~\ref{fig:eventAB_profs} and
\ref{fig:eventAB_mg_profs}) and the cross-cuts
(Fig.~\ref{fig:eventA_tcrossmaps}) generally show that the transition region
comes down to somewhere between $\hbox{log $\tau_{500}$}\tis-$5.5 and $-$6.
Also the temperature (difference) maps at $\hbox{log $\tau_{500}$}\tis-$4 indicate that the whole
sub-FOV is enhanced in temperature by some \hbox{$\Delta T$}=2000--2500\,K over the input
temperature at that height, which could be interpreted as a heating of the
chromosphere above the event.
On the other hand, recent larger field-of-view inversions by
\citetads{2018A&A...612A..28L}
indicate persistent and space-filling heating above $\hbox{log $\tau_{500}$}\tis-$4 during flux
emergence, wherein localised brightenings (\eg\ Ellerman bombs) only take up a fraction of
the flux emergence brightenings and may therefore play only a minor role in the
overall heating of the chromosphere.
\subsection{Reproducing UV burst\ \mbox{Si\,\specchar{iv}}\ emission}
Many studies have quoted the transition region equilibrium temperature of
order 8\,\mbox{$\times10^{4}$\;K}\ as a requirement to explain the observed \mbox{Si\,\specchar{iv}}\ emission in UV bursts.
As discussed above, for those events that show Ellerman bomb\ properties this
poses the obvious problems of producing too strongly enhanced \Halpha\ and
\CaII\ lines or continua.
Under certain assumptions, such as LTE at the onset of the events
\citepads{2016A&A...590A.124R}
or if heating were to happen in the photosphere where densities are high
\citepads{2017ApJ...839...22H},
lower temperatures of order 1--2\,\mbox{$\times10^{4}$\;K}\ may suffice to cause \mbox{Si\,\specchar{iv}}\ emission,
yet this certainly appears to be the lower limit.
Considering the localised temperatures of order 1--1.5$\times 10^{4}$\,K that we
obtained from the \CaII\ inversions, these may in fact be marginally sufficient
to cause \mbox{Si\,\specchar{iv}}\ emission.
Indeed the inversions including \mbox{Si\,\specchar{iv}}\ retain a temperature enhancement of that
order between $\hbox{log $\tau_{500}$}\tis-$2 and $-$4 (cf.~Fig.~\ref{fig:siiv_invprof}) and show
good fits to both \CaIR\ and \CaIIK, yet suffer notably from ill-fitted \MgII\
lines.
On the other hand, it also remains to be seen whether these temperatures are
needed as low down as Ellerman bombs\ are typically believed or inferred to occur.
From their {\it Bifrost\/}\ simulations
\citetads{2017ApJ...839...22H}
suggest that in a different magnetic topology, and considering that the Ellerman bomb\ jets
have a sizeable vertical extent in Joule heating, their impact may reach
sufficiently high to produce \mbox{Si\,\specchar{iv}}\ emission at the Ellerman bomb\ tops
(analogous results of extended heating were found by
\citetads{2017ApJS..229....5D}
for weaker-field Ellerman bomb-like events in MURaM simulations).
One could envision this as shock heating at the pertinent heights, but a similar
effect could be reached if reconnection cascades further upwards, in which case
the actual reconnection at $\sim$2\,Mm heights would result in the typical UV burst\
emission.
Such offsets between the main \Halpha\ and \mbox{Si\,\specchar{iv}}\ emission are not evident
in our data (likely because the viewing angle is too close to the vertical
in both data sets), but small offsets were already reported by
\citetads{2015ApJ...812...11V}
and should be more easily disentangled closer to the limb.
On the other hand, this should not be an issue {\it a priori} for the inversions
with STiC, as it could adapt the temperature profile to such a scenario,
provided sufficient depth resolution in the temperature node specification.
The purple profiles in Fig.~\ref{fig:siiv_invprof} are a clear example of such
a case with two separate heating sources, the lower one around $\hbox{log $\tau_{500}$}\tis-$3
explaining the \CaII\ profiles, while the upper one around $\hbox{log $\tau_{500}$}\tis-$6 is
likely responsible for the \mbox{Si\,\specchar{iv}}\ emission (inspection of its response
function to temperature perturbations shows maximum response between
$\hbox{log $\tau_{500}$}\tis-6$ and $-$6.5 around the \mbox{Si\,\specchar{iv}}\,\,1394\,\AA\ and 1403\,\AA\ rest wavelengths).
In fact, comparing with the other two samplings, for that case both \CaII\ lines
and \mbox{Si\,\specchar{iv}}\ show better fits and also the \MgII\,\kthree\ core and both \mbox{k$_2$}\
peaks are well-reproduced in absolute intensity (and even shape, out to about
$\pm$0.25\,\AA).
Even though it does not resolve the general overestimation of \MgII, considering
the synthesis results of high- versus low-atmosphere temperature peaks, this may
indeed be a direction in which to seek the solution.
In addition, while the offset between the temperature peak locations appears
large in \hbox{log $\tau_{500}$}, in terms of physical height this may in fact be squeezed closer
together in the presence of strong magnetic fields---not a far-fetched
assumption under these circumstances.
The synthesis results (Section~\ref{sec:siiv_spikes}) indicate that
the addition of sharp temperature enhancements above $\hbox{log $\tau_{500}$}\simeq-$6 would
not strongly affect the \CaII\ and \MgII\ lines while providing sufficient \mbox{Si\,\specchar{iv}}\
emission to at least reach the observed intensities (though not the broadening).
At heights below about $\hbox{log $\tau_{500}$}\tis-$5 even the sharpest enhancements
considered (of base width $\hbox{log $\tau_{500}$}\tis0.05$) with sufficient temperature to yield
\mbox{Si\,\specchar{iv}}\ emission are incompatible with the \MgII\ profiles.
Implementing ``temperature spike''-fitting capability may be worthwhile to
explore further, even though likely not sufficient in and of itself to attain
agreement with all observables.
All in all, the \mbox{Si\,\specchar{iv}}\ results indicate that if its emission indeed
primarily originates around $\hbox{log $\tau_{500}$}\tis-$6, temperatures of order 3.5--6.0\,\mbox{$\times10^{4}$\;K}\
could suffice, rather than the often-quoted 8\,\mbox{$\times10^{4}$\;K}.
We also investigated whether the assumption of LTE versus non-LTE electron
densities could have made a difference.
The comparison in Appendix~\ref{sec:appendix} shows that while the overall
temperature and velocity stratification are similar for both inversions
combining \CaIIK\ and \CaIR, and those including \MgII\ as well, individual
pixels may show more pronounced effects.
Moreover, Fig.~\ref{fig:lte_nlte_nne} suggests larger deviations between the LTE
and non-LTE electron densities above \hbox{log $\tau_{500}$}$\simeq-$4 (with typically lower
values for the latter), a height above which the temperature profiles in
Fig.~\ref{fig:siiv_invprof} also show the largest change with respect to the
inversions without \mbox{Si\,\specchar{iv}}.
Hence, the results with \mbox{Si\,\specchar{iv}}\ may be more strongly affected by the choice for
LTE or non-LTE electron density.
Unfortunately, stability of the inversions was a limiting issue here and while
we attempted different approaches (including use of a more extended 23-level
\mbox{Si\,\specchar{iv}}\ model atom) we could converge atom populations for only two samplings
(blue and purple), in both cases reproducing the \mbox{Si\,\specchar{iv}}\ peak intensities but not
their asymmetries, and failing altogether for the third (red) sampling.
However, when running our synthesis tests assuming non-LTE electron densities,
the sharp temperature enhancements had to be placed at smaller \hbox{log $\tau_{500}$}\ values
(\ie\ typically higher electron densities) to achieve similar response in the
\mbox{Si\,\specchar{iv}}\ lines, suggesting that in the inversions the temperature enhancement may
move to somewhat lower heights compared to the results presented in
Fig.~\ref{fig:siiv_invprof}.
Alternatively, the solution should perhaps not be sought thermally alone.
Another way of producing \mbox{Si\,\specchar{iv}}\ without the need for excessively high
temperatures in the lower atmosphere is through high-energy particles
produced during the reconnection.
For instance,
\citetads{2017A&A...603A..14D}
showed that the ionisation rates increase dramatically at low temperatures in
the presence of accelerated particles (even if they only make up a small
fraction) and may extend the formation temperatures of \mbox{Si\,\specchar{iv}}\ down to
1--1.5\,\mbox{$\times10^{4}$\;K}.
Our \CaII\ (and \mbox{Si\,\specchar{iv}}) inversions yield very similar temperatures in localised
hot pockets and may thus be compatible with this picture.
\subsection{Line-of-sight velocity patterns}
In the reconnection scenario, outflows from the reconnection point---\ie\
bi-directional jets---are to be expected and this has been corroborated for both
Ellerman bombs\ and UV bursts\ from previous observations and numerical experiments.
Of the events we investigated, Event A shows the clearest spatially separated
bi-directional jet signature with line-of-sight velocities of order 15--25\,\hbox{km\;s$^{-1}$}\
both towards and away from the observer, though with slightly higher blue
shifts.
The time evolution of this event (Fig.~\ref{fig:eventA_tmaps}) shows this is a
persistent signature with similar velocities throughout the event's lifetime.
Ellerman bomb\ velocities quoted in the past are typically much lower, a few \hbox{km\;s$^{-1}$}\ at most
both from observations
(\eg\
\citeads{2008PASJ...60...95M})
and simulations
\citepads{2009A&A...508.1469A},
with somewhat higher values from inversions
(\eg\
\citeads{2006SoPh..235...75S};
\citeads{2017A&A...598A..33L});
however, our highest values are similar to those obtained from the {\it Bifrost\/}\
numerical experiments of Ellerman bombs\ by
\citetads{2017ApJ...839...22H}.
When including \MgII\ (and in particular due to the contribution from the \MgII\
triplet lines) the red-shifts are more concentrated at lower heights and not as
pronounced above $\hbox{log $\tau_{500}$}\tis-$3 as they were when considering \CaII\ only
(cf.~Figs.~\ref{fig:eventA_maps} and \ref{fig:eventA_mg_maps}).
Event B shows somewhat lower velocities, in particular in its second (B2)
snapshot.
The first snapshot exhibits the blob-like substructure that
\citetads{2017ApJ...851L...6R}
interpreted as plasmoids, an important ingredient in their argument that the
broadening and non-Gaussian shapes commonly observed in UV burst\ \mbox{Si\,\specchar{iv}}\ spectra may
result, at least in part, from a superposition of plasmoid blobs of different
line-of-sight velocities within the IRIS resolution element.
Assuming that the \mbox{Si\,\specchar{iv}}\ emission would originate in the same structures, the
line-of-sight velocities inferred from \CaII\ are insufficient to
explain the broadening observed in \mbox{Si\,\specchar{iv}}.
However, if the latter emission would originate in plasmoid-connected shocks
(Ni et al.~\citeyearads{2018ApJ...852...95N},
\citeyearads{2018arXiv180405631N})
these may in fact attain sufficiently high velocities to explain broadening at
least out to some 50\,\hbox{km\;s$^{-1}$}.
\subsection{Magnetic fields}
One of our aims was to characterise the magnetic field configuration of Ellerman bombs\
with UV burst\ signature, but due to the limitations of the data (\ie\ no
well-constrained photospheric fields for September 3 and in general limited
chromospheric field sensitivity due to observing program choices),
reconstructing the topology through the atmosphere is a challenging task
at best.
We do, however, find increased horizontal fields co-spatial with the stronger
intensity enhancements (cf.~Figs.~\ref{fig:eventB_maps} and
\ref{fig:eventB_Bmaps})
which is consistent with the $\cup$-loop reconnection scenario suggested for
both Ellerman bombs\ and UV bursts\ in many observational studies (\eg\
\citeads{2002ApJ...575..506G},
\citeads{2004ApJ...614.1099P},
\citeads{2008PASJ...60..577M},
\citeads{2009ApJ...701.1911P},
\citeads{2010PASJ...62..879H})
and established in a number of numerical studies as well (\eg\
\citeads{2009A&A...508.1469A},
\citeads{2017A&A...601A.122D},
\citeads{2017ApJ...839...22H}).
Furthermore, the changes from snapshot B1 to B2 at 1.5\,min interval suggest that
the horizontal fields decrease and become nearly invisible at lower heights,
while some signal is retained higher up; at the same time, an opposite-polarity
signature remains visible in the line-of-sight component.
This could be interpreted as seeing the imprint of a $\cap$-loop topology, or
possibly as the post-reconnection $\cup$-shaped fields rising further through
the atmosphere while the $\cap$-shaped fields below the reconnection point sink
further down.
However, observations with higher polarimetric sensitivity are required to
better constrain the inference of (low-)chromospheric fields and their evolution.
\subsection{Limitations of the inversion approach}
\label{sec:limitations}
We have obtained results that are consistent with the observations (in
terms of intensities and profile asymmetries) and theoretical and numerical
studies (\eg\ the presence of bi-directional flow signatures, temperature
enhancements close to the temperature minimum, enhanced temperatures at
locations of enhanced horizontal fields, etc.), in particular when
combining CRISP and CHROMIS observations.
However, when including IRIS diagnostics we are unable to fit
all observables simultaneously with the proposed models.
Particularly challenging appears to be the reconciliation of \MgII\ with the
other diagnostics in the presence of \mbox{Si\,\specchar{iv}}.
This suggests we reached certain limitations of our approach, which we have
already largely discussed before.
We shortly summarise them here:
\begin{enumerate}
\item The spectra are not strictly co-temporal, even though STiC works under
the assumption that they are.
For the cases presented this can range anywhere between 2.3 and 9.2\,s, meaning
the effects on following fast-evolving substructure can be substantial.
While this could in principle be minimised further, this is not always
feasible as seeing effects also play a role.
\item The instrumental resolution differences are large, in
particular between CHROMIS and IRIS.
The choice not to sacrifice high-resolution means a single IRIS spectral
profile is spread over many SST pixels and thus compared with profiles that
are not strictly co-spatial at the CHROMIS pixel level.
Test inversions of combined SST and IRIS observations, where SST data were
downsampled to IRIS resolution (so as to mimic taking the resolution
differences into account), yielded similar results as the high-resolution
inversions, but with typically lower $\chi^{2}$ values for the profile fits
(and, albeit to a lesser extent, \MgII\ remained overestimated in the presence
of \mbox{Si\,\specchar{iv}}).
This suggests that proper accounting for the resolution difference would
indeed improve results, but may not be sufficient in itself to reconcile all
observables.
\item While 3D radiative transfer effects are important for the \CaII\ and
\MgII\ line cores, they likely do not play a determining role in the \mbox{Si\,\specchar{iv}}\
formation.
Nonetheless, the pixel-by-pixel inversion may be too restrictive, \eg\ if the
\mbox{Si\,\specchar{iv}}\ emission were to originate from the low-altitude temperature enhancement,
radiation has no means to escape sideways and will heat up the entire pixel
atmosphere, which may explain part of the \MgII\ overestimation found.
On the other hand, the densities are much higher close to the temperature
minimum than in the upper chromosphere and the effects of horizontal
scattering therefore expectedly smaller.
\item The node-based inversions are limited in resolving what in reality must
be a continuous atmosphere.
While we did not observe strong effects on the line-of-sight velocity this
may play an important role for the temperature stratification, in particular
if \mbox{Si\,\specchar{iv}}\ originates in temperature enhancements that are very localised
with height.
\end{enumerate}
Spatially-coupled (\ie\ two-dimensional) inversions could provide a
solution to---or at the very least alleviate---some of these problems (notably
(3) and likely also (2) in the list above).
The idea behind this is that the spectra in adjacent pixels are not independent,
both due to observational effects (\eg\ smearing by the telescope point spread
function (PSF)) and physical ones (\eg\ horizontal radiative transfer), and an
evident improvement over the 1.5D inversions performed here would thus be to
account for this (horizontal) spatial coupling.
A promising two-dimensional inversion approach is the one proposed by
\citetads{2015A&A...577A.140A}
which builds on the concept of sparsity, providing dual gains as it not only
reduces the number of unknowns, but also ensures spatial coupling given that a
reduced fraction of elements describes the behaviour of all pixels.
In addition, taking the PSFs of the different instruments into account
(similar to the approach for {\it Hinode\/}\ alone in
\citetads{2012A&A...548A...5V})
would solve the previously stated resolution difference issues.
Unfortunately, at this point we are not able to explore this further, since the
current STiC code design does not allow for implementation of such a
two-dimensional approach.
Finally, the assumption of hydrostatic equilibrium is a counterintuitive one
considering the dynamics of Ellerman bombs\ and UV bursts, but since the quantities are
derived with respect to column mass rather than in a physical height scale,
this is in fact not an unreasonable simplification for radiative transfer
calculations.
We do note that hydrostatic equilibrium prescribes a monotonic increase of gas
pressure inward, meaning that bumps and discontinuities in pressure (\eg\ as a
result of (bi-directional) jets or shocks) are impossible to reproduce with this
code and may in turn lead to locally over/underestimated temperatures and
densities.
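To make this point explicit (a standard textbook relation, added here for reference, not taken from the code itself): with column mass $m$ defined by $\mathrm{d}m = -\rho\,\mathrm{d}z$, hydrostatic equilibrium integrates trivially,

```latex
\frac{\mathrm{d}P}{\mathrm{d}m} = g
\quad\Longrightarrow\quad
P(m) = P_{\mathrm{top}} + g\,m ,
```

so the gas pressure is a strictly monotonic function of column mass and any localised pressure bump is excluded by construction.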
However, we believe other factors play a larger role and that the uncertainties
are primarily set by the limited number of nodes, the absence of spatial
coupling between the solutions and the effects of greatly different instrumental
resolution.
\section{Conclusion} \label{sec:conclusion}
We have presented first-time non-LTE inversions of \CaIR, \CaIIK, \MgII\ and
\mbox{Si\,\specchar{iv}}\ in Ellerman bombs\ with UV burst\ signatures, using the STockholm Inversion Code---a
powerful tool allowing multi-line, multi-species non-LTE inversions.
The revived interest in Ellerman bombs\ over the past decade-and-a-half has led to a
better understanding of the phenomenon, but also added to the diagnostic
visibilities that require explanation, in particular since the launch of IRIS.
As such STiC is particularly well-suited to address the pressing issue of
explaining this wide range in diagnostic formation temperatures from a seemingly
limited atmospheric volume.
We have found that we can largely reproduce the observational properties of the
events (\eg\ specific intensities, profile asymmetries and morphology) with
temperature stratifications that typically peak close to the classical
temperature minimum and velocity profiles that suggest bi-directional jet flows.
The inferred temperatures fall partly in the range of earlier expectations with
enhancement of a few thousand kelvin, yet we also find localised hot pockets of
up to 15,000\,K when considering SST diagnostics only.
In general the atmospheric parameters are better constrained when including more
diagnostics and the addition of \MgII\ appears to yield more moderate
temperature enhancements, while the \MgII\ triplet lines help constrain the
low-atmosphere velocity gradients.
The assumption of LTE versus non-LTE hydrogen ionisation appears to have
little effect on the spatial distribution of the event heating, both in the
observed plane and in \hbox{log $\tau_{500}$}\ height, but likely plays a more prominent
role for \mbox{Si\,\specchar{iv}}.
The latter's intensities can be reproduced in double-peaked temperature
stratifications with enhancements of 35,000--60,000\,K around
$\hbox{log $\tau_{500}$}\tis-$6, while requiring in excess of 10,000\,K if the \mbox{Si\,\specchar{iv}}\ emission
should originate close to the temperature minimum.
At the same time it is also clear that we run into certain limitations of our
approach, as with the current setup and inferred model atmospheres we are unable
to reproduce all UV burst\ and Ellerman bomb\ signatures in full agreement simultaneously.
This is likely a combined effect of the difference in instrument resolution,
non-zero time difference between the acquisition of the spectra (which given the
fast evolution of the substructure may represent a significant effect) and also
the limitation to pixel-by-pixel plane-parallel atmospheres.
For the case of \mbox{Si\,\specchar{iv}}\ emission, our study suggests that considering
double-peaked temperature solutions and allowing sharp temperature enhancements
may be worth exploring further.
Ultimately, though, dealing differently with the instrument pixel size
differences---including
moving to spatially-coupled inversions---is
likely a necessary step to reach better diagnostic agreement and by extension a
more complete picture.
\begin{acknowledgements}
This work was supported under the CHROMOBS grant by the Knut and Alice
Wallenberg Foundation.
JdlCR is supported by grants from the Swedish Research Council (2015-03994), the
Swedish National Space Board (128/15) and the Swedish Civil Contingencies Agency
(MSB). This project has received funding from the European Research Council
(ERC) under the European Union's Horizon 2020 research and innovation programme
(SUNMAG, grant agreement 759548).
This research is also supported by the Research Council of Norway, project number
250810, and through its Centres of Excellence scheme, project number 262622.
The Swedish 1-m Solar Telescope is operated on the island of La Palma by the
Institute for Solar Physics of Stockholm University in the Spanish Observatorio
del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias.
IRIS is a NASA small explorer mission developed and operated by LMSAL with
mission operations executed at NASA Ames Research center and major contributions
to downlink communications funded by ESA and the Norwegian Space Centre.
We are grateful to Shahin Jafarzadeh, Tomas Hillberg and Pit S\"utterlin for
participating in the observations at the SST and to Paul Bryans as IRIS planner
for the IRIS--SST coordination.
The inversions were performed on resources provided by the Swedish National
Infrastructure for Computing (SNIC) at the High Performance Computing Center
North at Ume\aa\ University.
This work profited from discussions at the meetings ``Solar UV bursts -- a new
insight to magnetic reconnection'' (International Team 360) and ``Studying
magnetic-field-regulated heating in the solar chromosphere'' (International Team
399) at the International Space Science Institute (ISSI) in Bern.
We made much use of NASA's Astrophysics Data System Bibliographic Services.
We also acknowledge the community effort to develop open-source
packages used here: \texttt{numpy} (\url{numpy.org}), \texttt{matplotlib}
(\url{matplotlib.org}), \texttt{sunpy} (\url{sunpy.org}).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Model Architecture Change For Layer-Drop}
One natural way to increase the model capacity is simply to increase the layer number $L$. Unfortunately, Figure~\ref{fig:vanishing-gradients} shows that this raises a vanishing-gradient issue at the lower layers. When $L=12$, which is the default depth in the original BERT paper, the vanishing-gradient issue exists but is not severe. However, as we stack more Transformer encoder blocks, the issue becomes significantly worse. The gradients at the bottom layers are so close to 0 that learning at those layers nearly comes to a halt.
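As a toy numerical illustration of the underlying mechanism (unrelated to the actual BERT code; the dimensions and scale are chosen arbitrarily): without skip connections, the gradient reaching a bottom layer is a product of per-layer Jacobians, and when their typical gain is below one the product decays geometrically with depth.

```python
import numpy as np

# Toy illustration: the gradient that reaches the bottom of an L-layer stack
# without skip connections is a product of L Jacobians.  When each Jacobian's
# typical gain is below 1, that product shrinks geometrically with depth.
rng = np.random.default_rng(0)

def grad_norm_at_bottom(num_layers, d=64, scale=0.9):
    """Norm of a unit upstream gradient after backpropagating through
    `num_layers` random linear Jacobians with typical gain ~`scale`."""
    g = np.ones(d) / np.sqrt(d)          # unit-norm upstream gradient
    for _ in range(num_layers):
        J = rng.standard_normal((d, d)) * (scale / np.sqrt(d))
        g = J @ g                        # chain rule: multiply by the Jacobian
    return float(np.linalg.norm(g))

shallow = grad_norm_at_bottom(12)
deep = grad_norm_at_bottom(48)
print(shallow, deep)                     # the 48-layer product is far smaller
```

This mirrors the qualitative trend in Figure~\ref{fig:vanishing-gradients}: the deeper the stack, the smaller the gradient signal surviving at the bottom layers.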
\subsection{Revisiting the Transformer networks}
We define Transformer networks as the composition of Transformer blocks, where all Transformer blocks have the same architecture. In particular, a Transformer block consists of two sublayers: a dot-product self-attention layer and a feed-forward layer, with both layers having a skip connection.
More concretely, for an input $X \in \mathds{R}^{d \times n}$ consisting of d-dimensional embeddings of $n$ tokens, a Transformer block consists of the following two sublayers:
\begin{equation}
\label{eqn:transformer-self-attention}
\begin{split}
SL_1(X) &= LayerNorm(X + Attn(X)) \\
&= LayerNorm(X + {W_O}{W_V}X\cdot softmax[(W_KX)^TW_QX])
\end{split}
\end{equation}
\begin{equation}
\label{eqn:feed-forward}
\begin{split}
SL_2(X) &= LayerNorm(SL_1(X) + FF(SL_1(X))) \\
&= LayerNorm(SL_1(X) + W_2\cdot gelu(W_1\cdot SL_1(X)))
\end{split}
\end{equation}
Based on Eqn.~\ref{eqn:transformer-self-attention}--\ref{eqn:feed-forward}, we have for each layer $l$:
\begin{equation}
\label{eqn:analysis-1}
\begin{split}
X_{l+1} &= LayerNorm(SL_1(X_l) + FF(SL_1(X_l))) \\
&= LayerNorm(LayerNorm(X_{l} + Attn(X_{l})) + FF(LayerNorm(X_{l} + Attn(X_{l})))) \\
\end{split}
\end{equation}
The layer normalization as in \cite{layer-norm} keeps the magnitude of the hidden layers from growing large. As can be seen, the LayerNorm layer alters the signal that passes through the skip connection and impedes information propagation, as reflected by the difficulties in reducing training loss after we increase the model depth (Fig.~\ref{}). Furthermore, SGD and dropout perturb the normalization, leading to high variance in training error. The effect gets worse with depth, so simply stacking more Transformer blocks tends to perform poorly.
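As a concrete (toy) rendering of Eqn.~\ref{eqn:transformer-self-attention}--\ref{eqn:feed-forward}, the following numpy sketch implements one PostLN block with randomly initialised weights. It is illustrative only, not the training code, and the helper names (`postln_block`, etc.) are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 5                               # embedding dim, number of tokens

def layer_norm(X, eps=1e-5):
    # Normalize each token (column) to zero mean / unit variance.
    mu, var = X.mean(axis=0, keepdims=True), X.var(axis=0, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def softmax(A, axis=0):
    A = A - A.max(axis=axis, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def gelu(X):
    # tanh approximation of GELU
    return 0.5 * X * (1 + np.tanh(np.sqrt(2 / np.pi) * (X + 0.044715 * X**3)))

# Random weights standing in for W_Q, W_K, W_V, W_O, W_1, W_2.
W_Q, W_K, W_V, W_O = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
W_1 = rng.standard_normal((4 * d, d)) / np.sqrt(d)
W_2 = rng.standard_normal((d, 4 * d)) / np.sqrt(4 * d)

def postln_block(X):
    # Eqn (1): self-attention sublayer, LayerNorm *after* the residual addition.
    attn = W_O @ (W_V @ X) @ softmax((W_K @ X).T @ (W_Q @ X), axis=0)
    H = layer_norm(X + attn)
    # Eqn (2): feed-forward sublayer, again PostLN.
    return layer_norm(H + W_2 @ gelu(W_1 @ H))

Y = postln_block(rng.standard_normal((d, n)))
print(Y.shape)
```

Note that the final operation on both sublayers is a LayerNorm, so the skip path itself is renormalised at every block, which is the property discussed above.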
Explain why LayerNorm causes an issue.
\href{https://arxiv.org/pdf/1911.03179.pdf}{Why Deep Transformers are Difficult to Converge?
From Computation Order to Lipschitz Restricted Parameter Initialization}
"We empirically show
that with proper parameter initialization, deep
Transformers with the original computation order can converge, which is quite in contrast
to all previous works, and obtain significant
improvements with up to 24 layers."
Plot: Gradient norm (y-axis) of each encoder layer (top)
and decoder layer (bottom) in Transformer with respect to layer depth (x-axis).
Show the difference between L12, L24, L48, L72, L101
\begin{enumerate}
\item By adding an identity mapping, the output $x$ can be thought of as a recursive summation of the outputs from all previous layers. In contrast, without skip connections, the output is the product of a series of matrix--vector multiplications.
\item The gradient at a skip-connection boundary can be decomposed into two additive terms: one that propagates directly without passing through any weight layers, and one that propagates through the weight layers. The first term ensures that information is directly propagated back to any shallower unit.
Another hypothesis is that the second term makes vanishing gradients less likely (it remains unclear whether gradients indeed vanish in the early layers of BERT; this should be verified and could potentially show whether skip connections help).
\item Even though additional gating and convolutional shortcuts introduce more parameters and should have stronger representational abilities than identity shortcuts, representational ability is not the only factor that determines whether a model behaves well; there are at least two aspects: optimization issues and representational abilities.
\item If additional components such as BN are added along the shortcut link, signals are altered before passing to previous layers (which indicates that we should keep the highway clean).
\item Using an asymmetric after-addition activation is equivalent to constructing a pre-activation Residual Unit.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[scale=0.4]{figs/preln-improve-lr}
\caption{Identity mapping reordering makes training more stable and helps the model to learn with a larger learning rate.}
\minjia{Add learning rate schedules.}
\minjia{Add changes of gradient norm before and after identity mapping and with different learning rate. We need fine-grained features to show that "identity mapping has an effect on stabilizing network parameters". Another way is to show the L2 distance and cosine similarity of the input and output embeddings for each layer. Check how Albert shows it with weight sharing.}
\label{fig:preln-improve-lr}
\end{figure}
\section{Preliminary Analysis}
\label{sec:analysis}
This section presents several studies that guided the design of the approach introduced in Section~\ref{sec:method}. We used BERT trained on the Bookcorpus and Wikipedia datasets from Devlin et al.\ with standard settings as the baseline~\footnote{Appendix~\ref{sec:hyperparameters} provides detailed training hyperparameters.}. First, we carry out a comparison between BERT with PostLN and PreLN. Our goal is to measure how effective these two methods are at stabilizing BERT training.
Our second analysis considers measuring the dynamics of BERT pre-training, including both spatial and temporal dimensions. Finally, we analyze the effect of the removal of the Transformer layers. This leads us to identify appealing choices for our target operating points.
\subsection{Training Stability: PostLN or PreLN?}
\label{subsec:training-stability}
We consider two variants of BERT, namely the PostLN and PreLN. The default BERT employs PostLN, with layer normalization applied after the addition in Transformer blocks. The PreLN changes the placement of the location of $f_{LN}$ by placing it only on the input stream of the sublayers so that
${h}_i = x_i + f_{S-ATTN}(f_{LN}(x_i))$ and then $x_{i+1} = h_i + f_{FFN}(f_{LN}(h_i))$, which is a modification described by several recent works to establish identity mapping for neural machine translation~\cite{deep-Transformer,on-layer-norm,adaptive-inputs,sparse-Transformers,Transformer-without-tears}. Fig.~\ref{fig:stability-gradient-norm} reports the norm of gradients with respect to weights in backward propagation for both methods, varying the depth $L$ (e.g., 12, 24, 48). The plot shows that while PostLN suffers from unbalanced gradients (e.g., vanishing gradients as the layer ID decreases), PreLN eliminates the unbalanced gradient problem (solid green lines) and the gradient norm stays almost the same for any layer. Furthermore, Fig.~\ref{fig:grad-norm-preserving} shows that for PreLN the gradients with respect to input $x_i$ have very
similar magnitudes (norm preserving ratio close to 1) at different layers, which is consistent with prior findings that a neural model should preserve the gradient norm between layers so as to have well-conditioning and faster convergence~\cite{understanding-difficulty-of-training-dnn,norm-preservation}.
Indeed, we find that PostLN is more sensitive to the choice of hyperparameters, and training often diverges with more aggressive learning rates (more results in Section~\ref{sec:eval}), whereas PreLN avoids vanishing gradients and leads to more stable optimization. We also provide preliminary theoretical results in Appendix~\ref{sec:preln-analysis} on why PreLN is beneficial.
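To make the identity-mapping property of PreLN concrete, here is a minimal numpy sketch (random linear maps stand in for $f_{S-ATTN}$ and $f_{FFN}$; the helper names are ours and this is not the model implementation). Because $f_{LN}$ sits only on the sublayer inputs, the skip path is never renormalised, and the final representation is exactly the input plus the sum of all branch outputs:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, L = 8, 5, 12

def layer_norm(X, eps=1e-5):
    mu, var = X.mean(axis=0, keepdims=True), X.var(axis=0, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

# Random linear maps standing in for the attention and feed-forward sublayers.
attn_W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(L)]
ffn_W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(L)]

def preln_stack(X):
    branches = []
    for Wa, Wf in zip(attn_W, ffn_W):
        a = Wa @ layer_norm(X)     # h_i = x_i + f_S-ATTN(f_LN(x_i))
        X = X + a
        f = Wf @ layer_norm(X)     # x_{i+1} = h_i + f_FFN(f_LN(h_i))
        X = X + f
        branches.append(a + f)
    return X, branches

X0 = rng.standard_normal((d, n))
XL, branches = preln_stack(X0)
# The skip path is never renormalised, so the output decomposes as the input
# plus a recursive summation of all branch outputs -- the identity mapping
# that PostLN lacks.
print(np.allclose(XL, X0 + sum(branches)))
```

This additive decomposition is precisely why gradients can flow directly back to any shallower layer in PreLN.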
\minjia{This is superficial. Need math to support. Also, having one result figure here is good. Having the similarity results here is too much for one design component. The L2/Cosine results can be in the eval for robustness analysis.}
\later{
This is important for our approach, since our approach reduces BERT depth, we want to increase the learning rate as a shallower and less complex model learns faster and suffers less from the effect of large variance of gradients in its deeper counterpart~\cite{}.
}
\minjia{Layer drop is type of pruning. Without stabilizing the training, it can be destructive.}
\href{https://arxiv.org/pdf/1706.02515.pdf}{Self-Normalizing Neural Networks}
\href{https://towardsdatascience.com/what-is-weight-initialization-in-neural-nets-and-why-it-matters-ec45398f99fa}{Weight initialization}
\href{https://leimao.github.io/blog/Layer-Normalization/}{Layer Normalization Explained}
\href{https://stats.stackexchange.com/questions/304755/pros-and-cons-of-weight-normalization-vs-batch-normalization}{Pros and cons of weight normalization vs batch normalization}
\href{https://papers.nips.cc/paper/8689-understanding-and-improving-layer-normalization.pdf}{Understanding and Improving Layer Normalization}
\minjia{Can we simply the second and third term? \minjia{Done}}
\subsection{Corroboration of Training Dynamics}
\label{subsec:training-dynamics}
Hereafter we investigate the representation $x_i$ learned at different phases of BERT pre-training and at different layers. Fig.~\ref{fig:similarity-analysis} shows the L2 distance and the cosine similarity (which measures the angle between two vectors and ignores their norms) between the input and output embeddings, for PostLN and PreLN, respectively. We draw several observations.
First, the dissimilarity (Fig.~\ref{fig:similarity-l2-norm-step300} and Fig.~\ref{fig:similarity-cosine-step300}) stays high for both PostLN and PreLN at the higher layers in the beginning, and the L2 and cosine similarities seem to be less correlated (e.g., step = 300).
This is presumably because, at the beginning of the training, the model weights are randomly initialized, and the network is still actively adjusting weights to derive richer features from input data. Since the model is still positively self-organizing on the network parameters toward their optimal configuration,
dropping layers at this stage is not an interesting strategy, because it can create inputs with large noise and disturb the positive co-adaption process.
Second, as the training proceeds (Fig.~\ref{fig:similarity-l2-norm-step2000} and Fig.~\ref{fig:similarity-cosine-step2000}), the dissimilarity remains relatively high and bumpy for PostLN, indicating that PostLN is still trying to produce new representations that are very different across layers. In contrast, the dissimilarity from PreLN approaches zero for the upper layers, indicating that those layers are converging on similar estimations. This can be viewed as performing an unrolled iterative refinement~\cite{iterative-estimation}, where a group of successive layers iteratively refine their estimates of the same representations instead of computing an entirely new representation.
Although the viewpoint was originally proposed to explain ResNet, we demonstrate that it is also true for language modeling and Transformer-based networks. Appendix~\ref{sec:unrolled-analysis} provides additional analysis on how PreLN provides extra preservation of feature identity through unrolled iterative refinement.
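The two metrics plotted in Fig.~\ref{fig:similarity-analysis} can be sketched as follows (our hypothetical helper function and toy shapes, in numpy rather than the actual analysis code); a layer that merely refines its input yields small values of both metrics, whereas a layer that rewrites the representation yields large ones:

```python
import numpy as np

def layer_similarity(X_in, X_out):
    """Mean per-token L2 distance and angular (arccosine) dissimilarity
    between a layer's input and output embeddings; lower = more similar."""
    l2 = np.linalg.norm(X_out - X_in, axis=0)
    cos = np.sum(X_in * X_out, axis=0) / (
        np.linalg.norm(X_in, axis=0) * np.linalg.norm(X_out, axis=0))
    angle_deg = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return float(l2.mean()), float(angle_deg.mean())

rng = np.random.default_rng(3)
X = rng.standard_normal((768, 128))       # (hidden size, tokens)

# A refining layer perturbs its input slightly; a rewriting layer replaces it.
refine = layer_similarity(X, X + 0.05 * rng.standard_normal(X.shape))
rewrite = layer_similarity(X, rng.standard_normal(X.shape))
print(refine, rewrite)                    # refinement => much lower values
```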
\begin{figure}[ht!]
\centering
\small
\subfloat[]{\includegraphics[scale=0.29, keepaspectratio=true]{figs/similarity/similarity-l2-norm-step300}\label{fig:similarity-l2-norm-step300}}
\subfloat[]{\includegraphics[scale=0.29, keepaspectratio=true]{figs/similarity/similarity-cosine-step300}\label{fig:similarity-cosine-step300}}
\subfloat[]{\includegraphics[scale=0.29, keepaspectratio=true]{figs/similarity/similarity-l2-norm-step2000}\label{fig:similarity-l2-norm-step2000}}
\subfloat[]{\includegraphics[scale=0.29, keepaspectratio=true]{figs/similarity/similarity-cosine-step2000}\label{fig:similarity-cosine-step2000}}
\caption{The L2 distance and cosine similarity of the input and output embeddings for BERT with PostLN and PreLN, at different layers and different steps. We plot the inverse of cosine similarity (arccosine) in degrees, so that for both L2 and arccosine, the lower the more similar.}
\minjia{One experiment we could add: PreLN stableness. I'm guessing it is even more stable.}
\minjia{Measure again the gradient norm. This time, measure the aggregated norms for the entire layer and then get the mean.}
\minjia{TODO: Also show that outputs are norm-preserving: the norm of the gradient with respect to the input is close to the norm of gradient with respect to the output.}
\label{fig:similarity-analysis}
\end{figure}
\later{
These results indicate that for PostLN, even in later phase of the training, the network is still trying to produce new representations that are very different across layers. In contrast, for PreLN, the lower layers (e.g., 1 to 3) actively compute new level of representations to provide a good estimate for the final representation. At these layers, the network is probably learning low level representations that tend to be relatively simple and need little iterative refinement. Subsequent layers on the other likely need to handle more complex representations with numerous dependencies and therefore need more iterative refinement.}
\subsection{Effect of Lesioning}
We randomly drop layers with a keep ratio $\theta = 0.5$ to test whether dropping layers would break the training, since dropping any layer changes the input distribution of all subsequent layers. The results are shown in Fig.~\ref{fig:leisioning-analysis}. As shown, removing layers in PostLN significantly reduces performance. Moreover, when the learning rate is increased, training diverges.
In contrast, this is not the case for PreLN. Because later layers in PreLN tend to refine an estimate of the representation, the model depends less on any individual layer. As a result, removing Transformer layers with PreLN has only a modest impact on performance (slightly worse validation loss at the same number of training samples), and the degradation is much smaller than with PostLN. This further indicates that removing layers, especially higher ones, should have only a mild effect on the final result: doing so does not change the overall estimate that the next layer receives, only its quality, and the following layers can still perform largely the same operation even with a slightly noisy input. Furthermore, as Fig.~\ref{fig:similarity-analysis} indicates, the lower layers retain a relatively high dissimilarity (they derive new features), so they should be dropped less frequently. Overall, these results show that, to some extent, the structure of a Transformer network with PreLN can be changed at runtime without significantly affecting performance.
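The lesioning experiment above can be sketched in a few lines. The following is a toy illustration, not the actual BERT code: small linear residual maps stand in for PreLN Transformer layers, and a Bernoulli mask with keep ratio 0.5 bypasses layers through their identity connections.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the feature dimension.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def make_residual_layer(rng, d):
    # Toy stand-in for one PreLN Transformer layer: x + W @ LN(x).
    W = 0.1 * rng.standard_normal((d, d))
    return lambda x: x + layer_norm(x) @ W.T

def forward(layers, x, keep_mask=None):
    # keep_mask[i] == False bypasses layer i via its identity connection.
    for i, layer in enumerate(layers):
        if keep_mask is None or keep_mask[i]:
            x = layer(x)
    return x

rng = np.random.default_rng(0)
d, L = 16, 12
layers = [make_residual_layer(rng, d) for _ in range(L)]
x = rng.standard_normal(d)

full = forward(layers, x)            # all layers kept
mask = rng.random(L) < 0.5           # lesion with keep ratio 0.5
lesioned = forward(layers, x, mask)  # still a valid forward pass
```

Because each bypassed layer leaves the identity path intact, the lesioned pass remains well defined; only the quality of the representation changes.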
\section{Pre-training Hyperparameters}
\label{sec:hyperparameters}
Table~\ref{tbl:pretraining-hyperparameters} describes the hyperparameters for pre-training the baseline and PLD\xspace.
\begin{table}[!ht]
\centering
\caption{Hyperparameters for pre-training the baseline and PLD\xspace.}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Hyperparameter} & \textbf{Baseline} & \textbf{PLD\xspace} \\ \hline
Number of Layers & 12 & 12 \\ \hline
Hidden size & 768 & 768 \\ \hline
Attention heads & 12 & 12 \\ \hline
Dropout & 0.1 & 0.1 \\ \hline
Attention dropout & 0.1 & 0.1 \\ \hline
Total batch size & 4K & 4K \\ \hline
Train micro-batch size per GPU & 16 & 16 \\ \hline
Optimizer & Adam & Adam \\ \hline
Peak learning rate & 1e-04 & 1e-03 \\ \hline
Learning rate scheduler & warmup\_linear\_decay\_exp & warmup\_linear\_decay\_exp \\ \hline
Warmup ratio & 0.02 & 0.02 \\ \hline
Decay rate & 0.99 & 0.99 \\ \hline
Decay step & 1000 & 1000 \\ \hline
Max Training steps & 200000 & 200000 \\ \hline
Weight decay & 0.01 & 0.01 \\ \hline
Gradient clipping & 1 & 1 \\ \hline
\end{tabular}
\label{tbl:pretraining-hyperparameters}
\end{table}
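The \texttt{warmup\_linear\_decay\_exp} scheduler in Table~\ref{tbl:pretraining-hyperparameters} is internal to the training code; the sketch below is one plausible reading (an assumption, not the verified implementation): linear warmup over the first 2\% of steps to the peak learning rate, followed by exponential decay at rate 0.99 per 1{,}000 steps.

```python
def lr_at_step(step, peak_lr=1e-3, total_steps=200_000,
               warmup_ratio=0.02, decay_rate=0.99, decay_step=1000):
    """Hypothetical reading of warmup_linear_decay_exp:
    linear warmup to peak_lr, then exponential decay."""
    warmup_steps = int(total_steps * warmup_ratio)  # 4,000 steps here
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * decay_rate ** ((step - warmup_steps) / decay_step)
```

With the table's values, the learning rate reaches its peak at step 4{,}000 and then decays by roughly 1\% every 1{,}000 steps.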
\section{Establishing Identity Mapping with PreLN}
\label{sec:preln-analysis}
Prior studies~\cite{resnet,identity-mapping} suggest that establishing an \emph{identity mapping}, which keeps a \emph{clean} information path (no operations except addition), eases the optimization of networks with residual connections. With PreLN, we can express the output of the $i$-th Transformer layer as the input $x_i$ of that layer plus a residual transformation function $f_{RT}(x_i) = f_{S-ATTN}(f_{LN}(x_i)) + f_{FFN}(f_{LN}(x_i^{'}))$, where $x_i^{'}$ is the intermediate output after the attention sub-layer, and the output of the final layer $x_L = x_l + \sum_{i=l}^{L-1} f_{RT}(x_i)$ as the recursive summation of the $f_{RT}$ functions of the shallower layers (plus $x_l$). If we denote the loss function as $\mathcal{E}$, the chain rule of backpropagation~\cite{auto-diff-survey} gives:
\begin{equation}
\label{eqn:analysis-3}
\frac{\partial\mathcal{E}}{\partial x_l} = \frac{\partial\mathcal{E}}{\partial x_L}\frac{\partial x_L}{\partial x_l} =
\frac{\partial\mathcal{E}}{\partial x_L}(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} f_{RT}(x_i))
\end{equation}
Eqn.~\ref{eqn:analysis-3} indicates that the gradient $\frac{\partial\mathcal{E}}{\partial x_l}$ decomposes into two additive terms: a term $\frac{\partial\mathcal{E}}{\partial x_L}$ that propagates information directly back to any shallower layer $l$, regardless of how complex $\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} f_{RT}(x_i)$ is, and a term $\frac{\partial\mathcal{E}}{\partial x_L}\big(\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} f_{RT}(x_i)\big)$ that propagates through the Transformer blocks.
The equation also suggests that the gradient $\frac{\partial\mathcal{E}}{\partial x_l}$ is unlikely to be canceled out for a mini-batch: the term $\frac{\partial}{\partial x_l}\sum_{i=l}^{L-1} f_{RT}(x_i)$ cannot always be $-1$ for all samples in a mini-batch.
This explains why the gradients of Transformer layers in Fig.~\ref{fig:stability-gradient-norm} become more balanced and do not vanish after identity mapping reordering. In contrast, the PostLN architecture has a series of layer normalization operations that constantly alter the signal that passes through the skip connection and impedes information propagation, causing both vanishing gradients and training instability.
Overall, PreLN results in several useful characteristics such as avoiding vanishing/exploding gradient, stable optimization, and performance gain.
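As a numerical sanity check of Eqn.~\ref{eqn:analysis-3}, the sketch below builds a scalar residual chain with toy $\tanh$ residual branches standing in for $f_{RT}$ (an illustrative assumption, not an actual Transformer) and confirms by finite differences that the end-to-end gradient equals the direct term $1$ plus the gradient of the summed residuals.

```python
import math

def forward(x0, coeffs):
    # Scalar residual chain: x_{i+1} = x_i + a_i * tanh(x_i).
    # Returns the final output x_L and the summed residuals S = sum_i f_RT(x_i).
    x, resid_sum = x0, 0.0
    for a in coeffs:
        r = a * math.tanh(x)
        resid_sum += r
        x = x + r
    return x, resid_sum

coeffs = [0.3, -0.2, 0.5, 0.1]
x0, h = 0.7, 1e-6

xL_plus, S_plus = forward(x0 + h, coeffs)
xL_minus, S_minus = forward(x0 - h, coeffs)
dxL = (xL_plus - xL_minus) / (2 * h)  # d x_L / d x_0, by central differences
dS = (S_plus - S_minus) / (2 * h)     # d/dx_0 of the summed residuals
```

Here `dxL` matches `1 + dS` to numerical precision, and the direct "$1$" term guarantees the gradient cannot vanish even if the residual branches contribute little.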
\section{PreLN From the View of Unrolled Iterative Refinement}
\label{sec:unrolled-analysis}
From a theoretical point of view~\cite{iterative-estimation}, a noisy estimate of a representation produced by the first Transformer layer should, on average, be correct even though it may have high variance. The unrolled iterative refinement view says that if we treat the identity mapping (as in PreLN) as an unbiased estimator for the target representation, then beyond the first layer the subsequent Transformer layer outputs $x_i^n$ (for $i \in \{2, \dots, L\}$) are all estimators of the same latent representation $H^n$, where $H^n$ refers to the (unknown) value towards which the $n$-th representation is converging. The unbiased estimator condition can then be written as the expected difference between the estimator and the final representation:
\begin{equation}
\underset{x \in X}{\mathds{E}}[x_i^n - H^n] = 0
\end{equation}
With the PreLN equation, it follows that the expected difference between outputs of two consecutive layers is zero, because
\begin{equation}
{\mathds{E}}[x_i^n - H^n] - {\mathds{E}}[x_{i-1}^n - H^n] = 0 \Rightarrow {\mathds{E}}[x_i^n - x_{i-1}^n] = 0
\end{equation}
If we write the representation $x_i^n$ as a combination of $x_{i-1}^n$ and a residual ${f_{RT}}^n$, it follows from the above equation that the residual must be zero-mean:
\begin{equation}
x_i^n = x_{i-1}^n + {f_{RT}}^n \Rightarrow {\mathds{E}}[{f_{RT}}^n] = 0
\end{equation}
which we have empirically verified, as shown in Figure~\ref{fig:grad-norm-preserving}. Therefore, PreLN ensures that the expectation of the new estimate is correct, and the iterative summation of the residual functions in the remaining layers determines the variance of the new estimate.
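A toy Monte Carlo sketch makes the zero-mean-residual argument concrete. The model below is an illustrative assumption, not actual BERT activations: each layer's output is $H$ plus independent zero-mean noise of shrinking variance, mimicking iterative refinement of a shared representation.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 2.0                        # the (unknown) target representation value
n_samples, n_layers = 50_000, 6

# Each layer's output is an unbiased estimate of H with shrinking variance.
scales = np.linspace(1.0, 0.1, n_layers)
estimates = H + rng.standard_normal((n_layers, n_samples)) * scales[:, None]

residuals = np.diff(estimates, axis=0)  # f_RT at each layer, per sample
mean_residual = residuals.mean(axis=1)  # E[x_i - x_{i-1}] per layer
```

The per-layer mean residual is close to zero even though individual residuals are large, matching the condition $\mathds{E}[{f_{RT}}^n] = 0$.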
\paragraph{The effect of learning rates on downstream tasks.}
We focus on evaluating larger datasets and exclude very small ones, as we find that validation scores on those datasets have a large variance across different random seeds.
For fine-tuning models on downstream tasks,
we consider training with batch size 32 and performing a
linear warmup for the first 10\% of steps followed by
a linear decay to 0. We fine-tune for 5 epochs and
perform the evaluation on the development set.
We report the median development
set results for each task over five random initializations, without model ensemble.
Results are visualized in Fig.~\ref{fig:fine-tune-heatmap}, which shows that the baseline is less robust to the choice of learning rate: fine-tuning results are often much worse with a large learning rate. In comparison, PLD\xspace is more robust and often achieves better results with large learning rates.
\section{Background and Related Work}
\label{sec:background}
Pre-training with Transformer-based architectures like BERT~\cite{bert} has been demonstrated as an effective strategy for language representation learning~\cite{roberta,xlnet,albert,megatron-lm}. The approach provides a better model initialization for downstream tasks by training on large-scale unlabeled corpora, which often leads to a better generalization performance on the target task through fine-tuning on small data.
Consider BERT, which consists of a stack of $L$ Transformer layers~\cite{transformer}.
Each Transformer layer encodes the input $x_i$ of the $i$-th layer with $h_i = f_{LN}(x_i + f_{S-ATTN}(x_i))$, where $f_{S-ATTN}$ is a multi-head self-attention sub-layer, and then computes $x_{i+1} = f_{LN}(h_i + f_{FFN}(h_i))$, where $f_{FFN}$ is a feed-forward network and $x_{i+1}$ is the output of the $i$-th Transformer layer. Both sub-layers have an AddNorm operation that consists of a residual connection~\cite{resnet} and layer normalization ($f_{LN}$)~\cite{layer-norm}. The BERT model recursively applies this Transformer block to the input to produce the output.
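A minimal numpy sketch may make the two sub-layer orderings concrete; it covers both the PostLN equations above and the PreLN reordering used later in the paper. The linear maps standing in for $f_{S-ATTN}$ and $f_{FFN}$ are illustrative assumptions, not real attention or feed-forward sub-layers.

```python
import numpy as np

def f_LN(x, eps=1e-5):
    # Layer normalization over the feature dimension.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

rng = np.random.default_rng(0)
d = 8
W_attn = 0.1 * rng.standard_normal((d, d))  # stand-in for f_S-ATTN
W_ffn = 0.1 * rng.standard_normal((d, d))   # stand-in for f_FFN

def post_ln_layer(x):
    # PostLN: h_i = LN(x_i + ATTN(x_i)); x_{i+1} = LN(h_i + FFN(h_i))
    h = f_LN(x + x @ W_attn.T)
    return f_LN(h + h @ W_ffn.T)

def pre_ln_layer(x):
    # PreLN: normalization moves onto the sub-layer inputs, leaving a
    # clean identity path from x to the output.
    xp = x + f_LN(x) @ W_attn.T
    return xp + f_LN(xp) @ W_ffn.T

x = rng.standard_normal(d)
y_post, y_pre = post_ln_layer(x), pre_ln_layer(x)
```

Note that the PostLN output always passes through a final normalization, whereas the PreLN output is the input plus unnormalized residual contributions.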
\section*{Broader Impact}
Large-scale pre-trained language models like BERT have an impressive ability to extract textual information and transfer to a variety of NLP tasks, but pre-training requires significant compute and time. Pre-training the BERT baseline model is typically done through hardware acceleration and by scaling the training across hundreds to thousands of GPUs on multiple nodes. However, such a method is very costly and consumes orders of magnitude more energy.
The proposed solution achieves similar or better quality with shorter training time. It improves robustness, further reducing the hyperparameter tuning required and improving the productivity of AI scientists. It also saves hardware resources and trims down the total energy cost of in-situ, resource-constrained training, reducing the carbon footprint produced. Furthermore, the optimizations not only benefit BERT; they are also applicable to many other recent models such as RoBERTa~\cite{roberta}, GPT-2~\cite{gpt-2}, XLNet~\cite{xlnet}, and UniLM~\cite{unilm}, which all adopt the Transformer as their backbone. Finally, our techniques can also help advance language understanding and inference, enabling enterprise or consumer-facing applications such as conversational AI. We will open-source the code so that other practitioners and researchers can reproduce our results or re-use the code in their own work.
There are no apparent negative outcomes. However, as with other AI technology, we should be mindful to apply it toward beneficial ends.
\section{Conclusion}
\label{sec:conclusion}
Unsupervised language model pre-training is a crucial step for obtaining state-of-the-art performance on NLP tasks. The current training time for such models is excruciatingly long, and reducing the turnaround time is highly desirable. In this paper, we study efficient training algorithms for pre-training the BERT model for NLP tasks. We conduct extensive analysis and find that model architecture is important when training Transformer-based models with stochastic depth. Using this insight, we propose the Switchable-Transformer\xspace block and a progressive layer-wise drop schedule. Our experimental results show that our training strategy achieves competitive performance to training a deep model from scratch, at a faster rate.
\section{Our Approach: Progressive Layer Dropping}
\label{sec:method}
This section describes our approach, namely progressive layer dropping (PLD\xspace), to accelerate the pre-training of Transformer-based models. We first present the Switchable-Transformer blocks, a new unit that allows us to train models like BERT with layer drop and improved stability. Then we introduce the progressive layer drop procedure.
\subsection{Switchable-Transformer\xspace Blocks}
In this work, we propose a novel transformer unit, which we call "Switchable-Transformer\xspace" (ST\xspace) block. Compared with the original Transformer block (Fig.~\ref{fig:gated-transformer-v2a}), it contains two changes.
\paragraph{Identity mapping reordering.} The first change is to establish identity mapping within a transformer block by placing the layer normalization only on the input stream of the sublayers (i.e., use PreLN to replace PostLN) (Fig.~\ref{fig:gated-transformer-v2b}) for the stability reason described in Section~\ref{subsec:training-stability}.
\begin{figure}[h!]
\centering
\begin{minipage}[c]{0.60\textwidth}
\subfloat[Original]{\includegraphics[scale=0.5, keepaspectratio=true]{figs/architecture/gated-transformer-v2a}\label{fig:gated-transformer-v2a}}
\hspace{1.00em}
\subfloat[Identity mapping reordering]{\includegraphics[scale=0.5, keepaspectratio=true]{figs/architecture/gated-transformer-v2b}\label{fig:gated-transformer-v2b}}
\hspace{1.00em}
\subfloat[Switchable Transformer]{\includegraphics[scale=0.5, keepaspectratio=true]{figs/architecture/gated-transformer-v2c}\label{fig:gated-transformer-v2c}}
\caption{Transformer variants, showing a single layer block.}\label{fig:gated-transformer-v2}
\end{minipage}%
\hspace{0.50em}
\begin{minipage}[c]{0.34\textwidth}
\includegraphics[scale=0.38, keepaspectratio=true]{figs/expected-GFLOPS-curve-graph}
\caption{FLOPS per training iteration normalized to the baseline.}
\label{fig:expected-GFLOPS-curve-graph}
\end{minipage}
\end{figure}
\paragraph{Switchable gates.}
Next, we extend the architecture to include a gate for each sub-layer (Fig.~\ref{fig:gated-transformer-v2c}), which controls whether the sub-layer is disabled during training. In particular, for each mini-batch, the two gates of the two sub-layers decide whether to remove their corresponding transformation functions and keep only the identity mapping connection, which is equivalent to applying a conditional gate function $G$ to each sub-layer as follows:
\begin{equation}
\begin{split}
h_{i} &= x_{i} + G_{i} \times f_{S-ATTN}(f_{LN}(x_{i})) \times \frac{1}{p_i} \\
x_{i+1} &= h_{i}+ G_{i} \times f_{FFN}(f_{LN}(h_{i})) \times \frac{1}{p_i} \\
\end{split}
\end{equation}
In our design, the function $G_i$ takes only the values 0 or 1, chosen randomly from a Bernoulli distribution, $G_i \sim B(1, p_i)$, where $p_i$ is the probability of choosing 1. Because the blocks are selected with probability $p_i$ during training but are always present during inference, we re-calibrate each layer's output by a scaling factor of $\frac{1}{p_i}$ whenever it is selected.
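The $\frac{1}{p_i}$ re-calibration can be checked with a quick Monte Carlo sketch: since $\mathds{E}[G_i/p_i] = 1$, the expected gated contribution during training matches the ungated contribution seen at inference. The scalar stand-in for a sub-layer's output is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.75      # keep probability for this gate
f_out = 2.0   # stand-in for a sub-layer's output contribution

# Sample the gate many times: G ~ Bernoulli(p), contribution = G * f_out / p.
G = rng.random(100_000) < p
contrib = np.where(G, f_out / p, 0.0)
```

The empirical mean of `contrib` is close to `f_out`, confirming that the scaling keeps the training-time expectation aligned with the inference-time output.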
\subsection{A Progressive\xspace Layer Dropping Schedule}
\label{subsec:curriculum-schedule}
Based on the insights from Section~\ref{subsec:training-dynamics}, and inspired by prior work on curriculum learning~\cite{curriculum-drop,curriculum-learning}
we propose a progressive schedule\xspace $\theta(t)$ -- a temporal schedule for the expected number of ST\xspace blocks that are retained.
We limit ourselves to monotonically decreasing functions so that the likelihood of layer dropping can only increase along the temporal dimension.
We constrain ${\theta}(t) \ge \bar{\theta}$ for all $t$, where $\bar{\theta}$ is a limit value taken as $0.5 \le \bar{\theta} \le 0.9$ (Section~\ref{sec:eval}).
Based on this, we define the progressive schedule $\theta(t)$ as:
\begin{defn}
A progressive schedule is a function $t \rightarrow \theta(t)$ such that $\theta(0)$ = 1 and $\lim_{t\rightarrow \infty}{\theta(t)} = \bar{\theta}$, where $\bar{\theta} \in (0,1]$.
\label{defn:schedule}
\end{defn}
\paragraph{Progress along the time dimension.} Starting from the initial condition $\theta(0) = 1$ where no layer drop is performed, layer drop is gradually introduced. Eventually (i.e., when $t$ is sufficiently large), $\theta(t) \rightarrow \bar{\theta}$.
According to Def.~\ref{defn:schedule}, in our work, we use the following schedule function:
\begin{equation}
\label{eqn:curriculum-schedule}
\theta(t) = (1 - \bar{\theta})\exp(-\gamma t) + \bar{\theta}, \quad \gamma > 0
\end{equation}
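The schedule in Eqn.~\ref{eqn:curriculum-schedule} is easy to sanity-check numerically. The sketch below uses $\bar{\theta}=0.5$ and the heuristic $\gamma = 100/T$ adopted later in this section.

```python
import math

def theta(t, theta_bar=0.5, gamma=100 / 200_000):
    # Progressive schedule: starts at 1 (keep everything), decays to theta_bar.
    return (1 - theta_bar) * math.exp(-gamma * t) + theta_bar
```

The schedule satisfies $\theta(0) = 1$, decreases monotonically, and is within $10^{-5}$ of $\bar{\theta}$ by step $T$.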
When $t$ is small, the drop rate is zero ($\theta(0) = 1$) and we do not drop any ST\xspace block. There are two reasons for this: (1) compared to other neural architectures, BERT pre-training is sensitive in the beginning phase, and removing the warmup stage has serious consequences such as model divergence~\cite{transformer-training-tips,radam}; (2) the network weights are still close to their random and statistically independent initialization, and dropping blocks at early training steps is likely to introduce additional perturbation that makes training difficult.
Fig.~\ref{fig:expected-GFLOPS-curve-graph} provides intuitive motivation for our choice. The blue curves in Fig.~\ref{fig:expected-GFLOPS-curve-graph} are polynomials of increasing degree $\delta=\{1,..,8\}$ (left to right). Despite fulfilling the initial constraint $\theta(0)=1$, they must be manually thresholded to impose $\theta(t)\rightarrow \bar{\theta}$ as $t \rightarrow \infty$, which introduces two more parameters ($\delta$ and the threshold).
The yellow curve is the inverse of our proposed schedule, but it does not satisfy our initial and convergence constraints. Moreover, by evaluating the area under each curve, we can intuitively measure how much FLOPS saving it yields, which is much smaller than for the proposed schedule.
In contrast, in our schedule we fix $\gamma$ with the simple heuristic $\gamma = \frac{100}{T}$: it implies $|\theta(T) - \bar{\theta}| < 10^{-5}$, so $\theta(t) \approx \bar{\theta}$ when $t \approx T$, and it is reasonable to assume that $T$ is on the order of $10^4$ or $10^5$ when training Transformer networks. In other words, for a large portion of the training we drop $(1 - \bar{\theta})$ of the ST\xspace blocks, improving training efficiency.
\paragraph{Distribution along the depth dimension.} The above progressive schedule\xspace assumes all gates in ST\xspace blocks take the same $p$ value at each step $t$. However, as shown in Fig.~\ref{fig:similarity-analysis}, the lower layers of the network should be more reliably present. Therefore, we distribute the global $\bar{\theta}$ across the entire stack so that lower layers have lower drop probability, linearly scaled by their depth according to Eqn.~\ref{eqn:linear-scale-depth}.
Furthermore, we let the sub-layers inside each block share the same schedule, so that when $G_i$ = 1, both inner functions $f_{S-ATTN}$ and $f_{FFN}$ are activated, while both are skipped when $G_i$ = 0.
Therefore, each gate has the following form during training:
\begin{equation}
\label{eqn:linear-scale-depth}
p_i(t) = 1 - \frac{i}{L}\left(1 - \theta(t)\right)
\end{equation}
Combining Eqn.~\ref{eqn:linear-scale-depth} and Eqn.~\ref{eqn:curriculum-schedule}, we obtain the progressive schedule\xspace for the $i$-th ST\xspace block:
\begin{equation}
\label{eqn:combined-schedule}
\theta_i(t) = 1 - \frac{i}{L}\left(1 - \bar{\theta}\right)\left(1 - \exp(-\gamma t)\right)
\end{equation}
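The depth-wise distribution mirrors the step update in Alg.~\ref{algo:dropping-algorithm}: the keep probability decreases linearly with depth, dropping by $(1-\theta(t))/L$ per layer. A small sketch computes the per-layer keep probabilities late in training and checks the $(3L-1)/4$ expected-depth figure discussed in the text.

```python
import math

def theta_t(t, theta_bar=0.5, gamma=100 / 200_000):
    # Temporal schedule: decays from 1 to theta_bar.
    return (1 - theta_bar) * math.exp(-gamma * t) + theta_bar

def keep_probs(t, L=12, theta_bar=0.5):
    # Per-layer keep probabilities: probability decreases linearly with
    # depth, dropping by (1 - theta_t)/L per layer.
    th = theta_t(t, theta_bar)
    return [1 - i * (1 - th) / L for i in range(1, L + 1)]

probs = keep_probs(t=200_000)   # late in training, theta_t ~ theta_bar = 0.5
expected_depth = sum(probs)     # expected number of retained ST blocks
```

For $L=12$ and $\bar{\theta}=0.5$, the expected depth converges to $(3L-1)/4 = 8.75 \approx 9$ retained blocks.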
\setlength{\intextsep}{0pt}%
\setlength{\columnsep}{5pt}%
\begin{wrapfigure}{R}{0.5\textwidth}
\begin{minipage}{0.5\textwidth}
\begin{algorithm}[H]
\centering
\vspace{0em}
\caption{\hfill \textbf{Progressive\_Layer\_Dropping}}
\begin{algorithmic}[1]
\State \textbf{Input:} keep probability $\bar{\theta}$, number of layers $L$, total steps $T$
\label{lst:subgraph-sampling:input}
\State InitBERT($switchable\_transformer\_block$)
\State $\gamma\ \leftarrow \frac{100}{T}$
\For {t\ $\leftarrow$ 1 to T}
\State $\theta_t \leftarrow (1 - \bar{\theta})\exp(-\gamma \cdot t) + \bar{\theta}$
\State step $\leftarrow \frac{1 - \theta_t}{L}$
\State $p \leftarrow 1$
\For {i\ $\leftarrow$ 1 to L}
\State action $\sim$ Bernoulli($p$)
\If {action == 0} \Comment{skip the block via the identity mapping}
\State $x_{i+1} \leftarrow x_i$
\Else \Comment{execute the block, rescaled by $\frac{1}{p}$}
\State $x_{i}^{'} \leftarrow x_{i} + f_{ATTN}(f_{LN}(x_{i})) \times \frac{1}{p}$
\State $x_{i+1} \leftarrow x_{i}^{'} + f_{FFN}(f_{LN}(x_{i}^{'})) \times \frac{1}{p}$
\EndIf
\State p $\leftarrow$ p - step
\EndFor
\State Y $\leftarrow$ $output\_layer(x_{L+1})$
\State loss $\leftarrow loss\_fn(\bar{Y}, Y)$
\State backward(loss)
\EndFor
\end{algorithmic}
\label{algo:dropping-algorithm}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
\paragraph{Putting it together.} Note that because of the identity mapping, when an ST\xspace block is bypassed in a given iteration,
there is no need to perform its forward-backward computation or gradient updates; the effective network for that iteration is significantly shorter, with more direct paths to individual layers.
Based on this, we design a stochastic training algorithm based on ST\xspace blocks and the progressive\xspace layer-drop schedule to train models like BERT faster, which we call \emph{progressive layer dropping\xspace} (Alg.~\ref{algo:dropping-algorithm}).
The expected network depth, denoted as $\bar{L}$, becomes a random variable. Its expectation is given by $E(\bar{L}) = \frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{L}(1 - \theta_i(t))$, where $\theta_i(t)$ is the drop probability from Eqn.~\ref{eqn:combined-schedule}.
With $\bar{\theta} = 0.5$, the expected number of ST\xspace blocks during training approaches $E(\bar{L}) = (3L - 1)/4$, or $E(\bar{L}) \approx 3L/4$, when T is large. For the BERT-base model with L=12 used in our experiments, we have $E(\bar{L}) \approx 9$. In other words, with progressive layer dropping\xspace, we train BERT with an average of 9 layers. This significantly improves the pre-training speed of the BERT model. Following the calculations
above, approximately 25\% of FLOPS can be saved under the drop schedule with $\bar{\theta}$ = 0.5. We recover the model with full-depth blocks at fine-tuning and testing time.
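As a quick sanity check on the calculation above (a standalone sketch, not part of the training code), summing the steady-state keep probabilities $1 - \frac{i}{L}(1 - \bar{\theta})$ over the $L$ blocks recovers $(3L-1)/4$:

```python
def expected_depth(L, theta_bar):
    # Steady state (large t): block i is kept with probability 1 - (i/L)(1 - theta_bar).
    return sum(1 - (i / L) * (1 - theta_bar) for i in range(1, L + 1))

L = 12
print(expected_depth(L, 0.5))   # 8.75, i.e., (3L - 1)/4
```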
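To make the control flow of Alg.~\ref{algo:dropping-algorithm} concrete, the following toy sketch (our own illustration; random linear maps stand in for $f_{ATTN}$ and $f_{FFN}$, so this is not the actual BERT implementation) runs the gated forward pass for one mini-batch:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 12, 16
# Random linear maps standing in for the attention and FFN sub-layers.
W_attn = [rng.normal(0, 0.01, (d, d)) for _ in range(L)]
W_ffn = [rng.normal(0, 0.01, (d, d)) for _ in range(L)]

def forward(x, theta_t, training=True):
    """Gated forward pass of Alg. 1; at inference all blocks run, unscaled."""
    step = (1 - theta_t) / L
    p = 1.0          # keep probability, highest for the lowest layer
    kept = 0
    for i in range(L):
        if training and rng.random() >= p:
            p -= step            # block dropped: identity mapping, no compute
            continue
        scale = 1.0 / p if training else 1.0
        x = x + (x @ W_attn[i]) * scale   # attention residual branch
        x = x + (x @ W_ffn[i]) * scale    # FFN residual branch
        kept += 1
        p -= step
    return x, kept

x = rng.normal(size=(4, d))
_, kept_full = forward(x, theta_t=0.5, training=False)   # all 12 blocks run
avg = np.mean([forward(x, 0.5)[1] for _ in range(500)])  # roughly 9 on average
```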
\section{Evaluation}
\label{sec:eval}
We show that our method improves the pre-training efficiency of Transformer networks, and the trained models achieve competitive or even better performance compared to the baseline on transfer learning downstream tasks. We also show ablation studies to analyze the proposed training techniques.
\paragraph{Datasets.} We follow Devlin et al.~\cite{bert} to use English Wikipedia corpus and BookCorpus for pre-training. By concatenating the two datasets, we obtain our corpus with roughly 2.8B word tokens in total, which is comparable with the data corpus used in Devlin et al.~\cite{bert}.
We segment documents into sentences of 128 tokens, and we normalize, lower-case, and tokenize the text using the WordPiece tokenizer~\cite{bert}. The
final vocabulary contains 30,528 word pieces.
We split the documents into one training set and one validation set (300:1).
For fine-tuning, we use
GLUE (General Language Understanding Evaluation)~\cite{glue}, a collection of 9 sentence- or sentence-pair natural language understanding tasks, including question answering, sentiment analysis, and textual entailment. It is designed to favor sample-efficient learning and knowledge transfer across a range of linguistic tasks in different domains.
\paragraph{Training details.}
We use our own implementation of the BERT model~\cite{bert} based on the HuggingFace PyTorch implementation. All experiments are performed on 4$\times$DGX-2 boxes with 64$\times$V100 GPUs. Data parallelism is handled via the PyTorch DDP (Distributed Data Parallel) library~\cite{ddp}. We identify and eliminate additional computation overheads: we overlap data loading with computation through an asynchronous prefetching queue, and we optimize the BERT output processing through sparse computation on masked tokens. Using our pre-processed data, we train a 12-layer BERT-base model from scratch as the baseline. We use a warm-up ratio of 0.02 with lr$_{max}$=1e$^{-4}$. Following \cite{bert}, we use Adam as the optimizer. We train with batch size 4K for 200K steps, which is approximately 186 epochs. The detailed parameter settings are listed in Appendix~\ref{sec:hyperparameters}. We fine-tune GLUE tasks for 5 epochs and report the median development
set results for each task over five random initializations.
\subsection{Experimental Results}
\paragraph{Pre-training convergence comparisons.}
Fig.~\ref{fig:different-lr} visualizes the convergence of validation loss with respect to wall-clock training time. We make the following observations. First, with lr$_{max}$=1e$^{-4}$, the convergence rate of our algorithm and the baseline is very close. This verifies empirically that our progressive layer dropping method does not hurt model convergence. Second, when using a larger learning rate lr$_{max}$=1e$^{-3}$, the baseline diverges. In contrast, our method shows a healthy convergence curve and is much faster. This confirms that our architectural changes stabilize training and allow training BERT with more aggressive learning rates.
\begin{figure}[t]
\centering
\begin{minipage}[c]{0.66\textwidth}
\subfloat[][\label{fig:different-lr}]{\includegraphics[scale=0.38]{figs/different-lr.png}}
\subfloat[][\label{fig:baseline-vs-pst-main}]{\includegraphics[scale=0.38]{figs/baseline-vs-pst-main.png}}
\caption{The convergence curve of the baseline and our proposed method regarding the wall-clock time. }\label{fig:convergence-comparison}
\end{minipage}%
\hfill
\begin{minipage}[c]{0.33\textwidth}
\newcommand{\hspace*{0.15em}}{\hspace*{0.1em}}
\newcommand{\hspace*{0.28em}}{\hspace*{0.28em}}
\small
\centering
\tabcolsep=0.10cm
\captionof{table}{Training time comparison. Sample RD stands for sample reduction. SPD stands for speedup.}
\begin{tabular}{|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|}
\hline
& \textbf{\begin{tabular}[c]{@{}c@{}}Training\\ Time\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Sample\\ RD\end{tabular}} & \textbf{SPD} \\ \hline
\begin{tabular}[c]{@{}c@{}}Baseline \\ ckp186\end{tabular} & 38.45h & 0 & 1 \\ \hline
\begin{tabular}[c]{@{}c@{}}PLD\xspace \\ ckp186\end{tabular} & 29.22h & 0 & 1.3$\times$ \\ \hline
\begin{tabular}[c]{@{}c@{}}PLD\xspace \\ ckp100\end{tabular} & 15.56h & 46\% & 2.5$\times$ \\ \hline
\begin{tabular}[c]{@{}c@{}}PLD\xspace \\ ckp87\end{tabular} & 13.53h & 53\% & 2.8$\times$ \\ \hline
\end{tabular}
\label{tbl:training-time-comparison}
\end{minipage}
\end{figure}
\paragraph{Speedup.}
Fig.~\ref{fig:baseline-vs-pst-main} shows both the training curve (dotted) and the validation curve (solid) of the baseline and PLD\xspace with a zoomed-in view. The baseline curve becomes almost flat at epoch 186, reaching a validation loss of 1.75. In contrast, PLD\xspace reaches the same validation loss at epoch 87, with 53\% fewer training samples. Furthermore, PLD\xspace achieves a 24\% time reduction when training the same number of samples. This is because our approach trains the model with a smaller expected depth for the same number of steps. The reduction is slightly lower than the 25\% GFLOPS reduction in the analysis because the output layer still takes a small amount of computation even after optimizations. The combination of these two factors yields a 2.8$\times$ speedup in end-to-end wall-clock training time over the baseline, as shown in Table~\ref{tbl:training-time-comparison}.
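The two effects compose multiplicatively; a back-of-the-envelope check with the numbers above recovers the reported end-to-end speedup:

```python
samples_needed = 1 - 0.53    # 53% fewer training samples to reach loss 1.75
time_per_sample = 1 - 0.24   # 24% less wall-clock time per sample
speedup = 1 / (samples_needed * time_per_sample)
print(round(speedup, 1))     # 2.8
```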
\paragraph{Downstream task accuracy.}
Despite improved training speed, one may still wonder whether such a method is as effective as the baseline model on downstream tasks.
Table~\ref{tbl:downstream-accuracy} shows our results on the GLUE benchmark compared to the baseline. Our baseline is comparable with the original BERT-Base (on the test set), and our PLD\xspace method achieves a higher GLUE score than our baseline (83.2 vs. 82.1) when fine-tuning from checkpoint 186. We also dump model checkpoints from different epochs during pre-training and fine-tune these models.
Checkpoint 87 corresponds to the validation loss of 1.75 achieved by PLD\xspace. Its GLUE score is slightly worse than the baseline (81.6 vs. 82.1). However, by fine-tuning at checkpoint 100, PLD\xspace achieves a higher score than the baseline at checkpoint 186 (82.3 vs. 82.1). In terms of pre-training wall-clock time, PLD\xspace requires 15.56h vs. 38.45h for the baseline to obtain similar accuracy on downstream tasks, which corresponds to a 2.5$\times$ speedup.
\begin{table}[!ht]
\newcommand{\hspace*{0.15em}}{\hspace*{0.06em}}
\newcommand{\hspace*{0.28em}}{\hspace*{0.28em}}
\small
\centering
\tabcolsep=0.10cm
\caption{The results on the GLUE benchmark. The number below each task denotes the number of training examples.
The metrics for these tasks can be found in the GLUE paper~\cite{glue}. We compute the geometric mean of the metrics as the GLUE score.}
\begin{tabular}{|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|@{\hspace*{0.15em}}c@{\hspace*{0.15em}}|l|}
\hline
\multirow{2}{*}{Model} & \begin{tabular}[c]{@{}l@{}}RTE\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}MRPC\\ (F1/Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}STS-B\\ (PCC/SCC)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CoLA \\ (MCC)\end{tabular} & \begin{tabular}[c]{@{}l@{}}SST-2\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}QNLI\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}QQP\\ (F1/Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}MNLI-m\\/mm (Acc.)\end{tabular} & \multicolumn{1}{c|}{\multirow{2}{*}{GLUE}} \\ \cline{2-9}
& 2.5K & 3.7K & 5.7K & 8.5K & 67K & 108K & 368K & 393K & \multicolumn{1}{c|}{} \\ \hline
BERT$_{base}$ (original) & 66.4 & 88.9/84.8 & 87.1/89.2 & 52.1 & \textbf{93.5} & \textbf{90.5} & 71.2/89.2 & \textbf{84.6}/83.4 & 80.7 \\ \hline
BERT$_{base}$ (Baseline, ckp186) & 67.8 & 88.0/86.0 & 89.5/\textbf{89.2} & 52.5 & 91.2 & 87.1 & 89.0/90.6 & 82.5/83.4 & 82.1 \\ \hline
BERT$_{base}$ (PLD\xspace, ckp87) & 66.0 & 88.2/85.6 & 88.9/88.4 & 54.5 & 91.0 & 86.3 & 87.4/89.1 & 81.6/82.4 & 81.6 \\ \hline
BERT$_{base}$ (PLD\xspace, ckp100) & 68.2 & 88.2/85.8 & 89.3/88.9 & 56.1 & 91.5 & 86.9 & 87.7/89.3 & 82.4/82.6 & 82.3 \\ \hline
BERT$_{base}$ (PLD\xspace, ckp186) & \textbf{69.0} & \textbf{88.9/86.5} & \textbf{89.6}/89.1 & \textbf{59.4} & 91.8 & 88.0 & \textbf{89.4/90.9} & 83.1/\textbf{83.5} & \textbf{83.2} \\ \hline
\end{tabular}
\label{tbl:downstream-accuracy}
\end{table}
Fig.~\ref{fig:fine-tune-comparison} illustrates the fine-tuning results of the baseline and PLD\xspace on GLUE tasks over different checkpoints.
Overall, we observe that PLD\xspace not only trains BERT faster in pre-training but also preserves the performance on downstream tasks. In each figure, both curves have a similar shape at the beginning because no layer drop has been added yet. For later checkpoints, PLD\xspace smoothly adds layer drop. Interestingly, we note that the baseline model shows fluctuations in downstream accuracy, whereas the accuracy from PLD\xspace increases consistently as the
number of training epochs increases. This indicates that PLD\xspace takes a more robust optimization path toward the optimum. We also observe that our model achieves higher performance on MNLI, QNLI, QQP, RTE, SST-2, and CoLA at later checkpoints, indicating that the model trained with our approach also generalizes better on downstream tasks than our baseline does.
From a knowledge transferability perspective, the goal of training a language model is to learn a representation of natural language that ideally ignores data-dependent noise and generalizes well to downstream tasks. Training a model with a constant depth can bias the model to prefer certain representations, whereas PLD\xspace exposes many more sub-network configurations while training Transformer networks.
Each of the $L$ ST\xspace blocks is either active or inactive, resulting in $2^L$ possible network combinations. By selecting a different sub-network in each mini-batch, PLD\xspace encourages each sub-network to produce good results independently. This allows the unsupervised pre-training model to obtain a more general representation by averaging out noise patterns, which helps the model generalize better to new tasks.
During inference, the full network is present, yielding an effect similar to ensembling the different sub-networks.
\begin{figure}[ht!]
\centering
\small
\subfloat[MNLI-m]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-MNLI-m-ci}\label{fig:MNLI-m-comparison}}
\subfloat[MNLI-mm]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-MNLI-mm-ci}\label{fig:MNLI-mm-comparison}}
\subfloat[QNLI]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-QNLI-ci}\label{fig:QNLI-comparison}} \\
\subfloat[QQP]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-QQP-acc-ci}\label{fig:QQP-comparison}}
\subfloat[RTE]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-RTE-acc-ci}\label{fig:RTE-comparison}}
\subfloat[SST-2]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-SST-2-ci}\label{fig:SST-2-comparison}} \\
\subfloat[WNLI]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-WNLI-ci}\label{fig:WNLI-comparison}}
\subfloat[CoLA]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-CoLA-ci}\label{fig:CoLA-comparison}}
\subfloat[MRPC (acc.)]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-MRPC-acc-ci}\label{fig:MRPC-acc-comparison}} \\
\subfloat[MRPC (F1.)]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-MRPC-f1-ci}\label{fig:MRPC-f1-comparison}}
\subfloat[SST-B (PCC)]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-SST-B-Pearson-ci}\label{fig:SST-B-pearson-comparison}}
\subfloat[SST-B (SCC)]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/fine-tune/fine-tune-SST-B-Spearman-ci}\label{fig:SST-B-spearman-comparison}}
\caption{The fine-tuning results at different checkpoints.}
\label{fig:fine-tune-comparison}
\end{figure}
\subsection{Ablation Studies}
\label{subsec:ablation}
\paragraph{Downstream task fine-tuning sensitivity.}
To further verify that our approach not only stabilizes training but also improves downstream task performance, we perform a grid search on learning rates
\{1e-5, 3e-5, 5e-5, 7e-5, 9e-5, 1e-4\}. As illustrated in Fig.~\ref{fig:fine-tune-heatmap}, the baseline is vulnerable to the choice of learning rate: the fine-tuning results are often much worse with a large learning rate, while PLD\xspace is more robust and often achieves better results with large learning rates.
\begin{figure}[ht!]
\centering
\small
\subfloat[MNLI-m]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-MNLI-m}\label{fig:heatmap-MNLI-m}}
\subfloat[MNLI-mm]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-MNLI-mm}\label{fig:heatmap/heatmap-MNLI-mm}}
\subfloat[QNLI]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-QNLI}\label{fig:heatmap/heatmap-QNLI}} \\
\subfloat[QQP]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-QQP}\label{fig:heatmap/heatmap-QQP}}
\subfloat[RTE]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-RTE}\label{fig:heatmap/heatmap-RTE}}
\subfloat[SST-2]{\includegraphics[scale=0.33, keepaspectratio=true]{figs/heatmap/heatmap-SST-2}\label{fig:heatmap/heatmap-SST-2}}
\caption{The fine-tuning results under different learning rates.}
\label{fig:fine-tune-heatmap}
\end{figure}
\paragraph{The Effect of $\bar{\theta}$.} We test different values of the keep ratio $\bar{\theta}$ and identify $0.5 \le \bar{\theta} \le 0.9$ as a good range, as shown in Fig.~\ref{fig:varying-keep-ratio} in the Appendix. We observe that the algorithm may diverge if $\bar{\theta}$ is too small (e.g., 0.3).
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs/varying-keep-ratio.png}
\caption{Convergence curves varying the keep ratio $\bar{\theta}$.}
\label{fig:varying-keep-ratio}
\end{figure}
\paragraph{PLD\xspace vs. PreLN.} To investigate how PLD\xspace compares with PreLN, we run PreLN both with the hyperparameters used for training PostLN (lr=1e-4) and with those used for PLD\xspace (lr=1e-3), to isolate the effect of the hyperparameter choice. We train all configurations for the same number of epochs and fine-tune following the standard procedure. In both cases, PreLN is 24\% slower than PLD\xspace, because PreLN still performs the full forward and backward propagation in each iteration.
Table~\ref{tbl:downstream-accuracy-ablation} shows the fine-tuning results on GLUE tasks. When trained with the same hyperparameters as PostLN, PreLN has a much worse GLUE score (80.2) than PostLN (82.1) on downstream tasks. This is because PreLN restricts layer outputs from depending too much on their own residual branches and inhibits the network from reaching its full potential, as recently studied in \cite{understanding-transformer-difficulty}. When trained with the large learning rate as PLD\xspace, PreLN's result improves to 82.6 but is still 0.6 points worse than PLD\xspace (83.2), despite using 24\% more compute resources. PLD\xspace achieves better accuracy than PreLN because it encourages each residual branch to produce good results independently.
\begin{table}[!ht]
\newcommand{\hspace*{0.15em}}{\hspace*{0.05em}}
\newcommand{\hspace*{0.28em}}{\hspace*{0.28em}}
\small
\centering
\tabcolsep=0.10cm
\caption{Ablation studies of the fine-tuning results on the GLUE benchmark.}
\begin{tabular}{|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|@{\hspace*{0.15em}}l@{\hspace*{0.15em}}|}
\hline
Model & \begin{tabular}[c]{@{}l@{}}RTE\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}MRPC\\ (F1/Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}STS-B\\ (PCC/SCC)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CoLA \\ (MCC)\end{tabular} & \begin{tabular}[c]{@{}l@{}}SST-2\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}QNLI\\ (Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}QQP\\ (F1/Acc.)\end{tabular} & \begin{tabular}[c]{@{}l@{}}MNLI-m/mm\\ (Acc.)\end{tabular} & GLUE \\ \hline
BERT (Original) & 66.4 & \textbf{88.9}/84.8 & 87.1/89.2 & 52.1 & \textbf{93.5} & \textbf{90.5} & 71.2/89.2 & \textbf{84.6/83.4} & 80.7 \\ \hline
BERT + PostLN & 67.8 & 88.0/86.0 & 89.5/89.2 & 52.5 & 91.2 & 87.1 & 89.0/90.6 & 82.5/83.4 & 82.1 \\ \hline
BERT + PreLN + Same lr &66.0 & 85.9/83.3 & 88.2/87.9 & 46.4 & 90.5 & 85.5 & 89.0/90.6 & 81.6/81.6
& 80.2 \\ \hline
BERT + PreLN + lr$\uparrow$ &67.8 & 86.7/84.5 & \textbf{89.6/89.1} & 54.6 & 91.9 & 88.1 & 89.3/\textbf{90.9} & \textbf{83.6/83.7}
& 82.6 \\ \hline
Shallow BERT + PreLN + lr$\uparrow$ &66.0 & 85.9/83.5 & 89.5/88.9 & 54.7 & 91.8 & 86.1 & 89.0/90.6 & 82.7/82.9
& 81.8 \\ \hline
BERT + PreLN + lr$\uparrow$ + Rand.
&68.2 & 88.2/86.2 & 89.3/88.8 & 56.8 & 91.5 & 87.2 & 88.6/90.3 & 82.9/83.3
& 82.7 \\ \hline
BERT + PreLN + lr$\uparrow$ + TD
&68.2 & 88.6/\textbf{86.7} & 89.4/88.9 & 55.9 & 91.3 & 86.8 & 89.1/90.7 & 82.7/83.1
& 82.7 \\ \hline
BERT + PreLN + lr$\uparrow$ + PLD & \textbf{69.0} & \textbf{88.9}/86.5 & \textbf{89.6/89.1} & \textbf{59.4} & 91.8 & 88.0 & \textbf{89.4/90.9} & 83.1/83.5 & \textbf{83.2} \\
\hline
\end{tabular}
\label{tbl:downstream-accuracy-ablation}
\end{table}
\paragraph{PLD\xspace vs. Shallow network.}
\emph{Shallow BERT + PreLN + lr$\uparrow$} in Table~\ref{tbl:downstream-accuracy-ablation} shows the downstream task accuracy of the 9-layer BERT. Although it has the same number of training GFLOPS as ours, the shallow BERT
underperforms PreLN by 0.8 points and is 1.4 points worse than PLD\xspace, likely because the loss of parameters reduces model capacity.
\paragraph{PLD\xspace vs. Random drop.}
\emph{BERT + PreLN + Large lr + Random} drops layers randomly with a fixed ratio (i.e., it has the same compute cost but without any schedule), similar to Stochastic Depth~\cite{stochastic-depth}. The GLUE score is 0.9 points better than shallow BERT under the same compute cost and 0.1 point better than PreLN while being 24\% faster, indicating the strong regularization effect from stochastic depth. It is 0.5 points worse than PLD\xspace, presumably because a fixed ratio does not take into account the training dynamics of Transformer networks.
\paragraph{Schedule impact analysis.} \emph{BERT + PreLN + lr$\uparrow$ + TD} disables the schedule along the depth dimension (DD) and enables only the schedule along the temporal dimension (TD) during training. Its GLUE score matches ``Rand.'', suggesting that the temporal schedule performs similarly to a fixed constant schedule along the time dimension and that the accuracy gains of PLD\xspace come mostly from the depth dimension. However, without the temporal schedule enabled, the model diverges with \emph{NaN} in the middle of half-precision (16-bit) training and has to switch to full-precision (32-bit) training, slowing down training. Furthermore, this concept of starting easy and gradually increasing the difficulty of the learning problem has its roots in curriculum learning and often makes optimization easier. We adopt the temporal schedule since it is robust and helpful for training stability, retaining similar accuracy while reducing training cost considerably.
\section{Introduction}
\label{sec:intro}
Natural language processing (NLP) tasks, such as natural language inference~\cite{xlnet,roberta} and question answering~\cite{bert,commonsenseqa,bertserini}, have achieved great success with the development of neural networks. It has been demonstrated recently that Transformer-based networks obtain superior performance to recurrent or convolutional neural networks in many NLP tasks (e.g., the GLUE benchmark~\cite{glue} and the challenging multi-hop reasoning task~\cite{multi-hop-reasoning}). BERT trains a deep bidirectional Transformer and obtains outstanding results with transfer learning~\cite{bert}. RoBERTa~\cite{roberta}, a robustly optimized version of BERT trained with more steps and larger corpora, achieves state-of-the-art results on 9 GLUE tasks. Megatron-LM~\cite{megatron-lm} further advances the state of the art in NLP by significantly increasing the size of the BERT model. Finally, multiple lines of research propose enhanced versions of Transformer-based networks, such as GPT-2/3~\cite{gpt-2,gpt-3}, XLNet~\cite{xlnet}, SpanBERT~\cite{span-bert}, BioBERT~\cite{biobert}, UniLM~\cite{unilm}, Turing-NLG~\cite{turing-nlg}, and T5~\cite{T5}.
Due to the exciting prospect, pre-training Transformer networks with a large corpus of text followed by fine-tuning on specific tasks has become a new paradigm for natural language processing.
Despite great success, a big challenge of Transformer networks comes from training efficiency -- even with self-attention and parallelizable recurrence~\cite{transformer} and extremely high-performance hardware~\cite{tpu}, the pre-training step still takes a significant amount of time.
To address this challenge, mixed-precision training has been explored~\cite{megatron-lm,mixed-precision-training}, where the forward and backward passes are computed in half precision and the parameter update in single precision. However, it requires Tensor Cores~\cite{tensorcore}, which do not exist in all hardware. Other works resort to distributed training~\cite{gpipe,mesh-tensorflow,megatron-lm}. However, distributed training uses large mini-batch sizes to increase parallelism, where training often converges to sharp local minima with poor generalizability even with significant hyperparameter tuning~\cite{on-large-batch-training}. Subsequently, You et al. propose a layer-wise adaptive large-batch optimizer called LAMB~\cite{lamb}, which allows training BERT with a 32K batch size on 1024 TPU chips. However, this type of approach often requires dedicated clusters with hundreds or even thousands of GPUs and sophisticated system techniques for managing and tuning distributed training, not to mention that this amount of computational resources is intractable for most research labs or individual practitioners.
In this paper, we speed up the pre-training of Transformer networks by
exploring architectural changes and training techniques, not at the cost of excessive hardware resources.
Given that the training cost grows linearly with the number of Transformer layers, one straightforward idea to reduce the computation cost is to reduce the depth of the Transformer networks. However, this is restrictive as it often results in lower accuracy on downstream tasks compared to full model pre-training, presumably because of the smaller model capacity~\cite{distill-bert,tiny-bert}. Techniques such as Stochastic Depth have been demonstrated to be useful in accelerating supervised training in the image recognition domain~\cite{stochastic-depth}. However, we observe that stochastically removing Transformer layers destabilizes training and
easily results in severe consequences such as model divergence or convergence to bad local optima. Why are Transformer networks difficult to train with stochastic depth? Moreover, can we speed up the (unsupervised) pre-training of Transformer networks without hurting downstream performance?
To address the above challenges,
we propose to accelerate the pre-training of Transformer networks by
making the following contributions. (i) We conduct a comprehensive analysis to answer the
question: what makes Transformer networks difficult to train with stochastic depth? We find that both the choice of Transformer architecture and the training dynamics have a big impact on layer dropping. (ii) We propose a new architecture unit, called the \emph{Switchable-Transformer\xspace} (ST\xspace) block, that
not only allows switching a Transformer layer on/off for a set portion of the training schedule, excluding it from both the forward and the backward pass, but also stabilizes Transformer network training. (iii) We
further propose a \emph{progressive schedule\xspace} to add extra stability when pre-training Transformer networks with layer dropping --
our schedule smoothly increases the layer dropping rate for each mini-batch as training evolves by adapting in time the parameter of the Bernoulli distribution used for sampling. Within each gradient update, we distribute a global layer dropping\xspace rate across all the ST\xspace blocks to favor different layers. (iv) Using BERT as an example, we conduct extensive experiments
showing that
the proposed method not only allows BERT to be trained 24\% faster
than the baseline under the same number of samples but also makes pre-training 2.5$\times$ faster to reach similar accuracy on downstream tasks.
Furthermore, we evaluate the generalizability of models pre-trained with the same number of samples as the baseline and observe
that, while faster to train,
our approach achieves a 1.1\% higher GLUE score than the baseline, indicating strong knowledge transferability.
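To make the mechanism concrete, the following sketch illustrates an ST\xspace-style gate and a progressive keep-probability schedule in plain Python. The exponential decay form, the linear per-layer distribution, and all constants here are illustrative assumptions, not the exact configuration used in our experiments.

```python
import math
import random

def keep_probability(step, total_steps, theta_bar=0.5, gamma=100.0):
    """Progressive schedule (illustrative): the keep probability decays
    smoothly from 1.0 at the start of training toward a floor theta_bar,
    so early steps see (almost) the full network while later steps drop
    layers aggressively."""
    t = step / total_steps
    return (1.0 - theta_bar) * math.exp(-gamma * t) + theta_bar

def layer_keep_probabilities(global_theta, num_layers):
    """Distribute a global keep rate across layers (illustrative):
    lower layers are kept more often than upper layers, linearly."""
    return [1.0 - (l / num_layers) * (1.0 - global_theta)
            for l in range(num_layers)]

def switchable_block(x, sublayer, keep_prob, training=True):
    """A switchable residual block: with probability 1 - keep_prob the
    sublayer is skipped entirely (identity), removing it from both the
    forward and the backward pass; at inference the sublayer output is
    scaled by keep_prob, as in standard stochastic depth."""
    if not training:
        return x + keep_prob * sublayer(x)
    if random.random() < keep_prob:
        return x + sublayer(x)
    return x
```

With these illustrative choices, training starts with every layer kept and ends with upper layers dropped roughly half of the time.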
\section{Motivation and Challenges}
While the Transformer-based architecture has achieved breakthrough results in modeling sequences for unsupervised language modeling~\cite{bert,gpt-2}, previous work has also highlighted the training difficulties and excessively long training time~\cite{roberta}.
To speed up pre-training, ELECTRA~\cite{electra} explores an adversarial training scheme: masked tokens are replaced with alternatives sampled from a generator network, and a discriminator is trained to predict which tokens were replaced. This increases the relative per-step cost but requires fewer steps, reducing the overall cost.
Another line of work focuses on reducing the per-step cost.
Since the total number of floating-point operations (FLOPS) of the forward and backward passes in BERT pre-training is linearly proportional to the depth of the Transformer stack, reducing the number of Transformer layers brings opportunities to significantly speed up BERT pre-training. To show this, we plot the FLOPS per training iteration in Fig.~\ref{fig:expected-GFLOPS-curve-graph}, assuming we can remove a fraction of layers at each step. Each line in the figure shows the FLOPS under a different layer removal schedule. Regardless of which schedule is chosen, the majority of FLOPS are saved in the later steps, with the keep probability saturating to a fixed value $\bar{\theta}$ (e.g., 0.5). We describe our schedule in Section~\ref{subsec:curriculum-schedule}.
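Because the per-step cost is proportional to the fraction of layers kept, the overall compute saving of a schedule equals its time-averaged keep probability. The following sketch makes this explicit for an illustrative exponential-decay schedule saturating at $\bar{\theta} = 0.5$; the decay constant is an assumption for demonstration only.

```python
import math

def keep_probability(step, total_steps, theta_bar=0.5, gamma=100.0):
    # Illustrative smoothly decaying keep probability, saturating at theta_bar.
    t = step / total_steps
    return (1.0 - theta_bar) * math.exp(-gamma * t) + theta_bar

def expected_relative_flops(total_steps=10000, theta_bar=0.5):
    # Per-step FLOPS are proportional to the fraction of layers kept,
    # so the expected training cost relative to the full model is the
    # mean keep probability over the whole schedule.
    probs = [keep_probability(s, total_steps, theta_bar)
             for s in range(total_steps)]
    return sum(probs) / total_steps
```

With these constants the expected relative cost is only slightly above $\bar{\theta}$ itself, since the schedule saturates early.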
\begin{figure}[h!]
\centering
\begin{minipage}[c]{0.32\textwidth}
\includegraphics[scale=0.38, keepaspectratio=true]{figs/stability-gradient-norm}
\caption{The norm of the gradient with respect to the weights, with PostLN and PreLN.}
\label{fig:stability-gradient-norm}
\end{minipage}%
\hfill
\begin{minipage}[c]{0.32\textwidth}
\includegraphics[scale=0.38, keepaspectratio=true]{figs/grad-norm-preserving.png}
\caption{The norm preserving ratio with respect to the inputs, with PostLN and PreLN.}
\label{fig:grad-norm-preserving}
\end{minipage}%
\hfill
\begin{minipage}[c]{0.32\textwidth}
\includegraphics[scale=0.38, keepaspectratio=true]{figs/leisioning-analysis.png}
\caption{Lesioning analysis with PostLN and PreLN.}
\label{fig:leisioning-analysis}
\end{minipage}
\end{figure}
Despite the FLOPS reduction, directly training models like BERT with a smaller depth incurs a significant loss in accuracy even with knowledge distillation~\cite{distill-bert,tiny-bert}. Prior work~\cite{progressive-stacking} proposes to accelerate pre-training by first training a 3-layer BERT model and then growing the network depth to 6-layer and subsequently 12-layer.
However, the number of steps required at each depth before the network growth is not known a priori, making this approach challenging to apply in practice. On the other hand, stochastic depth has been successfully used to train deep models with reduced expected depth~\cite{stochastic-depth,reduce-depth-on-demand}. However, we observe that directly pre-training BERT while randomly dropping $f_{ATTN}$ and $f_{FFN}$ converges to bad/suspicious local optima under the same hyperparameter setting. When increasing the learning rate, the training often diverges even after tuning the warmup ratio.
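In residual form, vanilla stochastic depth replaces each sublayer update $x \leftarrow x + f(x)$ with a Bernoulli-gated one. A minimal sketch of this baseline scheme, with $f_{ATTN}$ and $f_{FFN}$ standing in for arbitrary sublayer functions, is:

```python
import random

def stochastic_depth_layer(x, f_attn, f_ffn, keep_prob, training=True):
    """Vanilla stochastic depth: during training each sublayer survives
    an independent Bernoulli(keep_prob) draw; at inference its output is
    scaled by keep_prob to match the expected training-time activation."""
    for f in (f_attn, f_ffn):
        if not training:
            x = x + keep_prob * f(x)
        elif random.random() < keep_prob:
            x = x + f(x)
    return x
```

It is exactly this unmodified gating, applied directly to PostLN Transformer layers, that we observe to be unstable.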
What causes the instability of BERT pre-training with layer drop?
\section{Related Work}
\label{sec:related}
Our approach is a form of Dropout~\cite{dropout} applied to model layers instead of activations. Closest to our work, the Stochastic Depth approach drops layers randomly during training~\cite{stochastic-depth}. As opposed to our work, it aims at accelerating the training of very deep ResNets in the image recognition domain~\cite{resnet} and uses a fixed dropping schedule for that goal. In contrast, we fill the gap by first applying an architectural change to the Transformer blocks to stabilize training and then introducing a progressive layer-drop schedule that takes BERT training characteristics into account, shedding light on this particular setting. Recent studies have applied stochastic depth to train very deep Transformers for speech, but mostly to demonstrate the benefits of its regularization effect.
More generally, our method is a form of structured pruning~\cite{structure-pruning}, which removes coherent groups of weights to preserve the original structure of the network. However, it is mostly adopted for compressing the model for efficient inference, whereas we focus on accelerating the training.
\section{Introduction}
\label{i}
The method of \emph{relative entropy} has been successfully applied to partial differential equations of different types. \emph{Relative entropies}
are non-negative quantities that provide a kind of distance between two solutions of the same problem, one of which typically enjoys some extra
regularity properties. Carrillo et al. \cite{CaJuMaToUn} exploited entropy dissipation, expressed by means of the relative entropy with respect to
a stationary solution, in order to analyze the long-time behavior of certain
quasilinear parabolic equations. Saint-Raymond \cite{SaRay} used the relative entropy method to study the incompressible Euler limit of the Boltzmann
equation. Other applications of the method can be found in Grenier \cite{Greni}, Masmoudi \cite{MAS5}, Ukai \cite{Uka}, Wang and Jiang \cite{WanJia},
among others.
Germain \cite{Ger} introduced a class of (weak) solutions to the compressible Navier-Stokes system satisfying a relative entropy inequality with
respect to a (hypothetical) strong solution of the same problem, and established the weak-strong uniqueness property within this class. Unfortunately,
\emph{existence} of solutions belonging to this class, where, in particular, the density possesses a spatial gradient in a suitable Lebesgue space, is
not known. In \cite{FENOSU}, we introduced
the concept of \emph{suitable weak solution} for the compressible Navier-Stokes system, satisfying a general relative entropy inequality with respect
to any sufficiently regular pair of functions. To be more specific,
consider the fluid density $\vr = \vr(t,x)$, together with the velocity field
$\vu = \vu(t,x)$, $t \in R$, $x \in \Omega \subset R^3$, the time evolution of which is governed by the
\emph{Navier-Stokes system}:
\bFormula{i1}
\partial_t \vr + \Div (\vr \vu) = 0,
\eF
\bFormula{i2}
\partial_t (\vr \vu) + \Div (\vr \vu \otimes \vu) + \Grad p(\vr) =
\Div \tn{S}(\Grad \vu) + \vr \vc{f}, \eF \bFormula{i3} \tn{S} =
\mu \Big( \Grad \vu + \Grad^t \vu - \frac{2}{3} \Div \vu \tn{I}
\Big) + \eta \Div \vu \tn{I} , \ \mu > 0, \ \eta \geq 0, \eF
supplemented with suitable boundary conditions, say, \bFormula{i4}
\vu|_{\partial \Omega} = 0. \eF
If the domain $\Omega$ is unbounded, we prescribe the
far-field behavior: \bFormula{i4a} \vr \to \Ov{\vr}, \ \vu \to 0 \
\mbox{as}\ |x| \to \infty, \eF where $\overline\vr\ge 0$.
\emph{Relative entropy} ${\cal E}\Big( [\vr, \vu] \Big| [r ,
\vc{U}] \Big)$ with respect to $[r, \vc{U}]$ is defined as
\bFormula{i5} {\cal E} \Big( [\vr, \vu] \Big| [r, \vc{U}] \Big) =
\intO{ \left( \frac{1}{2} \vr |\vu - \vc{U}|^2 + H(\vr) - H'(r)
(\vr - r) - H(r) \right)}, \eF where \bFormula{i5a} H(\vr) = \vr
\int_{\Ov{\vr}}^\vr \frac{p(z)}{z^2} \ {\rm d}z.
\eF
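For the prototypical isentropic pressure $p(\vr) = a \vr^\gamma$, $\gamma > 1$, $\Ov{\vr} > 0$, the function $H$ and the identities underlying the relative entropy can be written out explicitly as a direct consequence of (\ref{i5a}):

```latex
% Closed form of H for p(\vr) = a \vr^\gamma with \gamma > 1:
\[
H(\vr) = \vr \int_{\Ov{\vr}}^{\vr} \frac{a z^{\gamma}}{z^2} \ {\rm d}z
       = \frac{a}{\gamma - 1} \left( \vr^{\gamma} - \vr \, \Ov{\vr}^{\,\gamma - 1} \right),
\qquad
H'(\vr) = \frac{a}{\gamma - 1} \left( \gamma \vr^{\gamma - 1} - \Ov{\vr}^{\,\gamma - 1} \right).
\]
% Consequently,
\[
H''(\vr) = a \gamma \vr^{\gamma - 2} = \frac{p'(\vr)}{\vr} > 0,
\qquad
H'(\vr) \vr - H(\vr) = a \vr^{\gamma} = p(\vr),
\]
% so H is strictly convex and the integrand in (\ref{i5}) is non-negative,
% vanishing only for \vr = r, \vu = \vc{U}.
```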
Following \cite{FENOSU}, we say that $\vr$, $\vc{u}$ is a
\emph{suitable weak solution} to problem (\ref{i1} - \ref{i4a}) if
equations (\ref{i1}--\ref{i3}) are satisfied in a weak sense, and,
in addition to (\ref{i1} - \ref{i4a}), the following (relative)
energy inequality \bFormula{i6} {\cal E} \Big( [\vr, \vu] \Big|
[r, \vc{U}] \Big) (\tau) + \int_0^\tau \intO{ \Big( \tn{S} (\Grad
\vu) - \tn{S} (\Grad \vc{U}) \Big): \Big( \Grad \vu - \Grad \vc{U}
\Big) } \ \dt \eF
\[
\leq {\cal E} \Big([\vr_0, \vu_0] \Big| [r(0, \cdot), \vc{U}(0, \cdot)] \Big) +
\int_0^\tau {\cal R}(\vr, \vu, r, \vc{U} ) \ \dt
\]
holds for a.a. $\tau > 0$, where
\[
\vr_0 = \vr(0, \cdot), \ \vu_0 = \vu(0, \cdot),
\]
and the remainder ${\cal R}$ reads
\bFormula{i7} {\cal R}\left( \vr, \vu, r, \vc{U} \right) \equiv
\intO{ \vr \Big( \partial_t \vc{U} + \vu \Grad \vc{U} \Big) \cdot
(\vc{U} - \vu )}
\eF
\[
+ \intO{\tn{S}(\Grad \vc{U}):\Grad (\vc U- \vc{u}) }
+\intO{\vr\vc f \cdot(\vc u-\vc U)}
\]
\[
+ \intO{ \left( (r - \vr) \partial_t H'(r) + \Grad H'(r) \cdot
\left( r \vc{U} - \vr \vu \right) \right) }
- \intO{
\Div \vc{U} \Big( p(\vr) - p(r) \Big) }.
\]
Here, the functions $r$, $\vc{U}$ are arbitrary smooth, $r$
strictly positive, and $\vc{U}$ satisfying the no-slip boundary
conditions (\ref{i4}). It is easy to check that (\ref{i6}) is
satisfied as an equality as soon as the solution $\vr$, $\vu$ is
smooth enough.
As shown in \cite[Theorem 3.1]{FENOSU}, the Navier-Stokes system
(\ref{i1} - \ref{i4a}) admits global-in-time suitable weak
solutions for any finite energy initial data. Moreover, the
relative energy inequality (\ref{i6}) can be used to show that
suitable weak solutions comply with the weak-strong uniqueness
principle, meaning, a weak and strong solution emanating from the
same initial data coincide as long as the latter exists. This can
be seen by taking the strong solution as the ``test'' functions
$r$, $\vc{U}$ in the relative entropy inequality (\ref{i6}).
Besides, a number of other interesting properties of the suitable
weak solutions can be deduced, see \cite[Section 4]{FENOSU}.
For the particular choice $r = \Ov{\vr}$, $\vc{U} = 0$, the relative energy inequality (\ref{i6}) reduces to the standard \emph{energy inequality}
\bFormula{i8}
{\cal E} [\vr, \vu] (\tau)
+ \int_0^\tau \intO{ \tn{S} (\Grad \vu):
\Grad \vu } \ \dt \leq {\cal E} [\vr_0, \vu_0] +
\int_0^\tau \intO{ \vr \vc{f} \cdot \vu } \ \dt \ \mbox{for a.a.}\
\tau > 0,
\eF
\[
{\cal E}[\vr, \vu] = \intO{ \left( \frac{1}{2} \vr |\vu|^2 +
H(\vr) - H'(\Ov{\vr}) \Big( \vr - \Ov{\vr}\Big) -H(\overline\vr)
\right) }.
\]
The weak solutions of the Navier-Stokes system satisfying, in addition, the energy inequality
(\ref{i8}) are usually termed \emph{finite energy weak solutions}, or, rather incorrectly,
turbulent solutions in the sense of Leray's original work \cite{LER}.
Our goal in this paper is to show that any finite energy weak
solution is in fact a suitable weak solution, in other words, the
standard energy inequality (\ref{i8}) implies the relative energy
inequality (\ref{i6}). In particular, the weak-strong uniqueness
property as well as other results shown in \cite{FENOSU} hold for
the seemingly larger class of finite energy solutions. This
observation extends easily to other types of boundary conditions
and to a large class of domains. This kind of result can be viewed
as an extension of the seminal work of Prodi \cite{PR} and Serrin
\cite{SERRIN1} (see also Germain \cite{Ger2} for more recent
results) to the compressible Navier-Stokes system. We provide an
ultimate answer to the weak-strong uniqueness problem intimately
related to the fundamental questions of the well-posedness for the
compressible Navier-Stokes equations addressed by several authors,
Desjardin \cite{DES2}, Germain \cite{Ger}, Hoff \cite{Hoff10},
\cite{Hoff9}, among others.
The paper is organized as follows. In Section \ref{w}, we provide an exact definition of finite
energy weak solutions and state the main result. Section
\ref{p} is devoted to the proof of the main theorem and
to possible extensions. Applications are discussed in Section \ref{a}.
\section{Main results}
\label{w} For the sake of simplicity, we assume that the pressure
$p = p(\vr)$ is a continuously differentiable function of the
density such that \bFormula{w1} p \in C[0,\infty) \cap
C^2(0,\infty), \ p(0) = 0,\ p'(\vr) > 0 \ \mbox{for all}\ \vr > 0,
\ \lim_{\vr \to \infty} \frac{ p'(\vr) } {\vr^{\gamma - 1}} = a >
0 \ \mbox{for a certain}\ \gamma > 3/2. \eF
Moreover, if $\overline\vr=0$, we suppose that $p$ becomes asymptotically small for
$\vr \to 0$ so that the function $H$ defined
in (\ref{i5a}) is finite for any $\vr>0$.
\subsection{Finite energy weak solutions to the Navier-Stokes system}
\label{fws}
{\bf Definition \ref{w}.1} \\
{\it We shall say that $\vr$, $\vu$ is a \emph{finite energy weak
solution} to the Navier-Stokes system (\ref{i1} - \ref{i4a})
emanating from the initial data $\vr_0$, $\vu_0$ if
\begin{itemize}
\item
\begin{equation}\label{d1}
\vr - \Ov{\vr} \in L^\infty(0,T;L^2 + L^\gamma(\Omega)),\; \vr
\geq 0\;\mbox{ a.a. in $(0,T) \times \Omega$};
\end{equation}
\begin{equation}\label{d2}
\vu \in L^2(0,T;D^{1,2}_0(\Omega;R^3));
\end{equation}
\begin{equation}\label{d3}
\vr \vu \in L^\infty(0,T;L^2 + L^{2\gamma/(\gamma +
1)}(\Omega;R^3));
\end{equation}
\begin{equation}\label{d4}
p \in L^1_{\rm loc}([0,T] \times \Omega);
\end{equation}
\item $(\vr - \Ov{\vr}) \in C_{\rm weak}([0,T]; L^2 + L^\gamma(\Omega))$ and the integral identity
\bFormula{w2}
\intO{ \vr (\tau, \cdot) \varphi (\tau, \cdot) } -
\intO{ \vr_0 \varphi(0, \cdot) } =
\int_0^T \intO{ \Big( \vr \partial_t \varphi + \vr \vu \cdot \Grad \varphi \Big) } \ \dt
\eF
holds for any $\varphi \in \DC([0,T] \times \Ov{\Omega})$;
\item $\vr \vu \in C_{\rm weak}([0,T]; L^2 + L^{2\gamma/(\gamma + 1)}(\Omega;R^3))$
and the integral identity
\bFormula{w3}
\intO{ \vr \vu (\tau, \cdot) \cdot \varphi (\tau, \cdot) } -
\intO{ \vr_0 \vu_0 \cdot \varphi(0, \cdot) }
\eF
\[
= \int_0^T \intO{ \Big( \vr \vu \cdot \partial_t \varphi + (\vr \vu \otimes \vu): \Grad \varphi + p(\vr)
\Div \varphi - \tn{S} (\Grad \vu) : \Grad \varphi + \vr \vc{f} \cdot \varphi \Big) } \ \dt
\]
is satisfied for any
$\varphi \in \DC([0,T] \times \Omega ; R^3)$;
\item
the energy inequality \bFormula{w4} \intO{ \Big( \frac{1}{2} \vr
|\vu|^2 + H(\vr) - H'(\Ov{\vr}) (\vr - \Ov{\vr})-H(\overline\vr)
\Big) (\tau, \cdot) } + \int_0^\tau \intO{ \tn{S}(\Grad \vu) :
\Grad \vu } \ \dt \eF
\[
\leq \intO{ \Big( \frac{1}{2} \vr_0 |\vu_0|^2 + H(\vr_0) -
H'(\Ov{\vr}) (\vr_0 - \Ov{\vr}) -H(\overline\vr) \Big) } +
\int_0^T \intO{ \vr \vc{f} \cdot \vu } \ \dt
\]
holds for a.a. $\tau \in [0,T]$.
\end{itemize}
}
{\bf Remark \ref{w}.1} {\it We recall that the space
$D^{1,2}_0(\Omega)$ is defined as a completion of
$\DC(\Omega)$ with respect to the $L^2-$norm of the
gradient. In accordance with Sobolev's inequality,
\[
D^{1,2}_0(\Omega) \subset L^6(\Omega),
\]
see Galdi \cite{GAL}. }
\medskip
{\bf Remark \ref{w}.2} {\it In (\ref{w4}), we tacitly assume that
the initial data are chosen in such a way that the first integral
on the right hand side is finite.
}
\medskip
\subsection{Finite energy weak solutions satisfy the relative energy inequality}
Our main result reads as follows:
\bTheorem{w1} Let $\Omega \subset R^3$ be a domain.
Suppose that the pressure $p$ satisfies hypothesis (\ref{w1}),
\[
\vc{f} \in L^\infty(0,T; L^1 \cap L^\infty (\Omega; R^3)),
\]
and that $\Ov{\vr} \ge 0$. Let $\vr$, $\vu$ be a finite energy
weak solution to the Navier-Stokes system (\ref{i1} - \ref{i4a})
in the sense specified in Section \ref{fws}.
Then $\vr$, $\vu$ satisfy the relative energy inequality (\ref{i6}) for any
$\vc{U} \in \DC([0,T] \times \Omega;R^3)$, and $r > 0$,
$r - \Ov{\vr} \in \DC([0,T] \times \Ov{\Omega})$.
\eT
The proof and several extensions of Theorem \ref{Tw1} are presented in Section \ref{p}. Applications will
be discussed in Section \ref{a}.
\section{Proof of the main result}
\label{p}
\subsection{Proof of Theorem \ref{Tw1}}
Take $\vc{U}$ as a test function in the momentum equation (\ref{w3}) to obtain
\bFormula{p1}
\intO{ \vr \vu (\tau, \cdot) \cdot \vc{U} (\tau, \cdot) } =
\intO{ \vr_0 \vu_0 \cdot \vc{U} (0, \cdot) }
\eF
\[
+ \int_0^\tau \intO{ \Big( \vr \vu \cdot \partial_t \vc{U} + (\vr \vu \otimes \vu): \Grad \vc{U} + p(\vr)
\Div \vc{U} - \tn{S} (\Grad \vu) : \Grad \vc{U} + \vr \vc{f} \cdot \vc{U} \Big) } \ \dt
\]
Similarly, we can use the scalar quantity $\frac{1}{2} |\vc{U}|^2$ as a test function in (\ref{w2}):
\bFormula{p2}
\intO{ \frac{1}{2} \vr (\tau, \cdot) |\vc{U}|^2 (\tau, \cdot) } =
\intO{ \frac{1}{2} \vr_0 |\vc{U} (0, \cdot)|^2 } +
\int_0^\tau \intO{ \Big( \vr \vc{U} \cdot \partial_t \vc{U} + \vr \vu \cdot \Grad \vc{U} \cdot \vc{U} \Big) } \ \dt .
\eF
Finally, we test (\ref{w2}) on $H'(r) - H'(\Ov{\vr})$ to get
\bFormula{p3}
\intO{ \vr (\tau, \cdot) \Big( H'(r) (\tau, \cdot) - H'(\Ov{\vr}) \Big) } =
\intO{ \vr_0 \Big( H'(r) (0, \cdot) - H'(\Ov{\vr}) \Big) }
\eF
\[
+
\int_0^\tau \intO{ \Big( \vr \partial_t H'(r) + \vr \vu \cdot \Grad H'(r) \Big) } \ \dt .
\]
Summing up relations (\ref{p1} - \ref{p3}) with the energy inequality
(\ref{w4}), we infer that
\bFormula{p4}
\intO{ \left( \frac{1}{2} \vr |\vu - \vc{U}|^2 + H(\vr) -
\Big( H'(r) \vr - H'(\Ov{\vr}) \Ov{\vr} \Big) \right) (\tau, \cdot) }
\eF
\[
+ \int_0^\tau \intO{ \Big( \tn{S} (\Grad \vu) - \tn{S} (\Grad \vc{U}) \Big)
: \Big( \Grad \vu - \Grad \vc{U} \Big) } \ \dt
\]
\[
=\intO{ \left( \frac{1}{2} \vr_0 |\vu_0 - \vc{U} (0, \cdot) |^2 + H(\vr_0) -
\Big( H'(r(0, \cdot)) \vr_0 - H'(\Ov{\vr}) \Ov{\vr} \Big) \right) }
\]
\[
+ \int_0^\tau \intO{ \vr \Big(\partial_t \vc{U} + \vu \cdot \Grad \vc{U} \Big)\cdot (\vc{U} - \vu) } \ \dt
\]
\[
+ \int_0^\tau \intO{\tn{S}(\Grad \vc{U}):\Grad (\vc U- \vc{u}) }
+\intO{\vr\vc f \cdot(\vc u-\vc U)} \ \dt
\]
\[
-\int_0^\tau \intO{ \Big( \vr \partial_t H'(r) + \vr \vu \cdot \Grad H'(r) \Big) } \ \dt - \int_0^\tau \intO{ p(\vr) \Div \vc{U} } \ \dt.
\]
Realizing that
\[
H'(r)r - H(r) - H'(\Ov{\vr}) \Ov{\vr} = p(r) - p(\Ov{\vr}),
\]
we compute
\[
\intO{ \Big( p(r) - p(\Ov{\vr}) \Big) (\tau, \cdot) } -
\intO{ \Big( p(r) - p(\Ov{\vr}) \Big) (0, \cdot) }
= \int_0^\tau \intO{ \partial_t p(r) } \ \dt ;
\]
whence, by virtue of the identity
\bFormula{p5}
\intO{ \left( r \partial_t H'(r) + r \Grad H'(r) \cdot \vc{U} + p(r)
\Div \vc{U} \right) } = \intO{ \partial_t p(r) },
\eF
relation (\ref{p4}) implies (\ref{i6}). Theorem \ref{Tw1} has been proved.
Note that (\ref{p5}) relies on the fact that $\vc{U} \cdot \vc{n}|_{\partial \Omega} = 0$.
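For the reader's convenience, we recall how (\ref{p5}) is obtained: by virtue of (\ref{i5a}), $H''(r) = p'(r)/r$, whence

```latex
% Since H''(r) = p'(r)/r, we have
\[
r \, \partial_t H'(r) = p'(r) \, \partial_t r = \partial_t p(r),
\qquad
r \, \Grad H'(r) = p'(r) \Grad r = \Grad p(r);
\]
% therefore, by Gauss' theorem,
\[
\intO{ \left( r \partial_t H'(r) + r \Grad H'(r) \cdot \vc{U} + p(r) \Div \vc{U} \right) }
= \intO{ \partial_t p(r) } + \int_{\partial \Omega} p(r) \, \vc{U} \cdot \vc{n} \ {\rm d}S,
\]
% and the boundary integral vanishes whenever \vc{U} \cdot \vc{n}|_{\partial \Omega} = 0.
```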
\subsection{Possible extensions}
The conclusion of Theorem \ref{Tw1} can be extended in several
directions. Here, we shortly discuss the problem of an alternative choice of boundary conditions as well
as the class of admissible test functions $r$, $\vc{U}$.
\subsubsection{General slip boundary conditions with friction}
Similar result can be obtained provided the no-slip boundary
condition (\ref{i4}) is replaced by the slip boundary
conditions with friction (Navier's boundary condition) \bFormula{p6} \vu \cdot\vc n= 0 ,\;(\tn
S(\Grad \vc u)\vc n)_{\rm tan} +\beta \vc u_{\rm tan}=0\
\mbox{on} \ (0,T)\times
\partial \Omega, \eF
where $\beta\ge 0$ and $\vc v_{\rm tan}|_{\partial\Omega}=(\vc
v-(\vc v\cdot\vc n)\vc n)|_{\partial\Omega}$ denotes the tangential
component of a vector field $\vc{v}$ at the boundary. Note that the
so-called complete slip boundary conditions correspond to the particular
situation $\beta=0$.
The definition of finite energy weak solutions is similar
to Section \ref{fws} with the following modifications:
\begin{itemize}
\item the spatial domain $\Omega$ possesses a Lipschitz boundary, and (\ref{d2}) is replaced by the requirement
$\vu \in L^2(0,T; D^{1,2}_n(\Omega;R^3))$, with
\[
D^{1,2}_n (\Omega;R^3) = \left\{ \vc{v} \in L^6_{\rm loc}(\Ov{\Omega};R^3) \ \Big| \
\Grad \vc{v} \in L^2(\Omega; R^{3 \times 3}), \ \vc{v} \cdot \vc{n}|_{\partial \Omega} = 0 \right\};
\]
\item the pressure satisfies
\begin{equation}\label{d4+}
p(\vr) \in L^1_{\rm loc}([0,T] \times \Ov{\Omega})
\end{equation}
instead of (\ref{d4});
\item the weak formulation of the momentum equation (\ref{w3}) has
to be replaced by
\bFormula{w2+} \int_0^\tau \intO{ \Big( \vr \vu \cdot \partial_t
\varphi + \vr (\vu \otimes \vu) : \Grad \varphi + p(\vr) \Div
\varphi \Big) } \ \dt \eF
\[
- \int_0^\tau \intO{ \tn{S}(\Grad \vu) : \Grad \varphi } \
\dt -\beta\int_0^\tau\int_{\partial\Omega}\vc u\cdot\varphi{\rm d S}\ \dt
\]
\[
=-\int_0^\tau\intO{\vr\vc f\cdot\varphi}\ \dt+ \intO{
(\vr\vc{u})(\tau) \cdot \varphi (\tau, \cdot) } - \intO{ \vr_0
\vc{u}_0 \cdot \varphi (0, \cdot) }
\]
for all $\tau\in [0,T]$, for any test function $\varphi \in
\DC([0,T] \times \overline\Omega; R^3)$, $\vc \varphi\cdot\vc n=0$
on $[0,T]\times\partial\Omega$;
\item energy inequality (\ref{w4}) is replaced by
\bFormula{w3+} \intO{ \left( \frac{1}{2} \vr |\vu|^2 + H(\vr) -
H'(\Ov{\vr}) (\vr - \Ov{\vr})-H(\overline\vr)
\right) (\tau, \cdot) } \eF \[
+ \int_0^\tau \intO{ \tn{S}(\Grad \vu) :
\Grad \vu } \ \dt +\beta\int_0^\tau\int_{\partial\Omega}|\vc
u |^2{\rm d S}\ \dt \]
\[
\leq\int_0^\tau\intO{\vr\vc f\cdot\vc u}\ \dt+ \intO{ \left(
\frac{1}{2} \vr_0 |\vc{u}_0|^2 + H(\vr_0) - H'(\Ov{\vr}) (\vr_0 -
\Ov{\vr})-H(\overline\vr) \right) } \ \mbox{for
a.a.}\ \tau \in (0,T).
\]
\end{itemize}
In this case, the conclusion of Theorem \ref{Tw1} remains valid
for any couple $(r,\vc U)$ such that \bFormula{p6+}r-\overline\vr
\in C^\infty_c([0,T]\times\overline\Omega),\quad\vc{U} \in \DC
([0,T] \times \Ov{\Omega};R^3), \ \vc{U} \cdot \vc{n}|_{\partial
\Omega} = 0 \eF with the relative entropy inequality that reads
\bFormula{w5+} {\cal E} \Big( [\vr,\vu] \Big| [r,\vc U] \Big) (\tau)
\eF
\[
+ \int_0^\tau \intO{ \left[
\tn{S} (\Grad \vu -\Grad \vc{U}) \right] : \Grad (\vc{u} - \vc{U})
} \ \dt + \beta\int_0^\tau\int_{\partial\Omega}|\vc u -\vc
U |^2{\rm d} S{\rm d}t
\]
\[
\leq {\cal E} \Big( [\vr_0,\vu_0] \Big| [r(0),\vc U(0)] \Big) +
\int_0^\tau {\cal R} \left( \vr, \vu, r, \vc{U} \right) \ \dt \
\mbox{for a.a.} \ \tau \in (0,T),
\]
where \bFormula{w6+} {\cal R}\left( \vr, \vu, r, \vc{U} \right)
=\intO{\vr\vc f\cdot(\vc u-\vc U)}
-\beta\int_{\partial\Omega}\vc U \cdot(\vc
u -\vc U ){\rm d}S \eF
\[
+ \intO{ \left( \vr \Big(
\partial_t \vc{U} + \vu\cdot \Grad \vc{U} \Big) \cdot (\vc{U} -
\vu ) -\tn{S}(\Grad \vc{U}) :\Grad (\vu - \vc{U}) \right) } \]
\[
+ \intO{ \left( (r - \vr) \partial_t H'(r) + \Grad H'(r) \cdot
\left( r \vc{U} - \vr \vu \right) - \Div \vc{U} \Big( p(\vr) -
p(r) \Big) \right) }.
\]
\subsubsection{Extending the admissible class of test functions}
Using density arguments, we can considerably extend the class of
test functions $r$, $\vc{U}$ appearing in the relative energy
inequality (\ref{i6}), resp. (\ref{w5+}). Indeed:
\begin{itemize}
\item
For the left hand side (\ref{i6}) resp. (\ref{w5+}) to be well
defined, the functions $r$, $\vc{U}$ must belong at least to the
class \bFormula{b1+} r -\overline\vr\in C_{\rm weak}([0,T]; L^2+
L^\gamma (\Omega)),\; \eF
\bFormula{b2} \vc{U} \in \ L^2(0,T; W^{1,2}(\Omega;R^3)).
\eF
\item
A short inspection of (\ref{i7}), resp. (\ref{w6+}), shows that the integrals are well defined
if, at least,
\bFormula{b3+}
\partial_t \vc{U} \in L^2(0,T; L^{3}\cap L^{6 \gamma/ (5 \gamma - 6)}(\Omega,
R^3))+ L^1(0,T; L^{4/3}\cap L^{2 \gamma/ (\gamma - 1)}(\Omega,
R^3)),
\eF
\bFormula{b5+} \Grad \vc{U} \in L^\infty(0,T; L^{6}\cap
L^{3 \gamma/ (2 \gamma - 3)}(\Omega, R^{3 \times 3})) + L^2(0,T; L^{12/7}
\cap L^{6 \gamma/ (4 \gamma - 3)}(\Omega, R^{3 \times 3}))
\eF
\[
+ L^1(0,T;
L^\infty(\Omega; R^3)), \]
\bFormula{b4} {\rm div} \vc{U} \in L^1(0,T; L^\infty(\Omega)),
\eF
\item
The function $r$ must be bounded below away from zero, and
\bFormula{b6}
\partial_t H'(r) \in L^1(0,T; L^{\gamma/ (\gamma -
1)}\cap L^2(\Omega)),
\eF
\bFormula{b7} \Grad H'(r) \in L^2(0,T; L^{3}\cap L^{6 \gamma/ (5
\gamma - 6)}(\Omega, R^3))+ L^1(0,T; L^{4/3}\cap L^{2 \gamma/
(\gamma - 1)}(\Omega, R^3)). \eF
\item Finally, the vector field $\vc U$ has to satisfy
\begin{equation}\label{b8}
\begin{array}{c}
\vc U|_{\partial\Omega}=0 \;\mbox{in the case of boundary
conditions (\ref{i4})},
\\ \\
\vc U\cdot\vc n|_{\partial\Omega}=0 \;\mbox{in the case of
boundary conditions (\ref{p6})}.
\end{array}
\end{equation}
\end{itemize}
Consequently, Theorem \ref{Tw1} is valid even if we replace the
hypotheses on smoothness and integrability of the test functions $(r,\vc U)$ by
weaker hypotheses, namely (\ref{b1+}--\ref{b8}).
In particular, $r$, $\vc{U}$ may be another (strong) solution
emanating from the same initial data $\vr_0$, $\vu_0$. Specific
examples will be discussed in the forthcoming section.
\section{Applications}
\label{a}
In this section, we show how Theorem \ref{Tw1} can be applied in order to establish weak-strong uniqueness property for the compressible Navier-Stokes
system in the class of finite energy weak solutions in bounded and unbounded domains. Other applications can be found in \cite{FENOSU}.
\subsection{Weak-strong uniqueness on bounded domains}
\subsubsection{No-slip boundary conditions}
To begin, observe that \emph{any} finite energy weak solution $\vr$, $\vu$ of the compressible Navier-Stokes system (\ref{i1} - \ref{i4}) in $(0,T)
\times \Omega$, where $\Omega$ is a bounded domain, belongs to the class
\[
\vr \in C_{\rm weak}([0,T]; L^\gamma(\Omega)),\
\vr \vu \in C_{\rm weak}([0,T]; L^{2\gamma/(\gamma + 1)}(\Omega;R^3)), \
\vu \in L^2(0,T;W^{1,2}_0(\Omega;R^3)),
\]
and, by virtue of the energy inequality (\ref{w4}),
\[
p(\vr) \in L^\infty(0,T; L^1(\Omega)).
\]
Moreover, it is easy to check that
\bFormula{a1}
H(\vr) - H'(r)(\vr - r) - H(r) \geq c(r)
\left\{ \begin{array}{l} (\vr - r)^2 \ \mbox{for}\ r/2 < \vr < 2 r,
\\ \\ (1 + \vr^\gamma) \ \mbox{otherwise}
\end{array} \right.,
\eF
where $c(r)$ is uniformly bounded for $r$ belonging to compact sets in $(0, \infty)$.
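The bound (\ref{a1}) is a quantitative expression of the strict convexity of $H$; for the model pressure $p(\vr) = \vr^\gamma$ it can be confirmed by a direct numerical sweep. The constants $\gamma$, $\Ov{\vr}$, $r$, and $c$ below are illustrative choices, not the optimal ones.

```python
# Numerical sanity check of the lower bound (a1): the relative entropy
# integrand E(rho, r) = H(rho) - H'(r)(rho - r) - H(r) dominates
# c (rho - r)^2 near rho = r and c (1 + rho**gamma) away from it,
# for c > 0 small enough.  Model pressure p(rho) = rho**GAMMA.
GAMMA, RHO_BAR = 1.67, 1.0

def H(rho):
    # Closed form of H for p(z) = z**GAMMA, cf. the definition of H.
    return (rho ** GAMMA - rho * RHO_BAR ** (GAMMA - 1.0)) / (GAMMA - 1.0)

def dH(rho):
    return (GAMMA * rho ** (GAMMA - 1.0) - RHO_BAR ** (GAMMA - 1.0)) / (GAMMA - 1.0)

def relative_entropy_integrand(rho, r):
    return H(rho) - dH(r) * (rho - r) - H(r)

def check_lower_bound(r=1.0, c=1e-2, samples=2000):
    for i in range(1, samples + 1):
        rho = 4.0 * r * i / samples          # sweep the range (0, 4r]
        e = relative_entropy_integrand(rho, r)
        if r / 2.0 < rho < 2.0 * r:
            assert e >= c * (rho - r) ** 2
        else:
            assert e >= c * (1.0 + rho ** GAMMA)
    return True
```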
Finally, note that, since the total mass is a conserved quantity on a bounded domain, we can take $\Ov{\vr}$ in (\ref{i5a}) so that
\[
\intO{ (\vr - \Ov{\vr}) } = 0.
\]
The rather obvious leading idea of the proof of weak-strong uniqueness is to take $r = \tilde \vr$, $\vc{U} = \tilde \vu$ in the relative energy
inequality (\ref{i6}), where $\tilde \vr$, $\tilde \vu$ is a (hypothetical) regular solution, originating from the same initial data. The following
formal computations will require certain smoothness of $\tilde \vr$, $\tilde \vu$ specified in the concluding theorem. Moreover, we assume that
$\tilde \vr$ is bounded below away from zero on the whole compact time interval $[0,T]$.
Our goal is to examine all terms in the remainder (\ref{i7}) and to show they can be ``absorbed'' by the left-hand side of (\ref{i6}) by means of a
Gronwall type argument.
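The Gronwall type argument takes the standard integral form:

```latex
% Gronwall's lemma in the integral form used below: if {\cal E} \geq 0 satisfies
\[
{\cal E}(\tau) \leq {\cal E}(0) + \int_0^\tau h(t) \, {\cal E}(t) \ {\rm d}t
\ \mbox{for a.a.}\ \tau \in (0,T), \quad 0 \leq h \in L^1(0,T),
\]
% then
\[
{\cal E}(\tau) \leq {\cal E}(0) \exp \left( \int_0^\tau h(t) \ {\rm d}t \right)
\ \mbox{for a.a.}\ \tau \in (0,T).
\]
% In particular, if the weak and the strong solution emanate from the same
% initial data, then {\cal E}(0) = 0 and the relative entropy vanishes
% identically, which is the weak-strong uniqueness statement.
```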
\begin{enumerate}
\item We rewrite
\[
\intO{ \vr \Big( \partial_t \tilde \vc{u} + \vu \cdot \Grad \tilde \vc{u} \Big) \cdot
(\tilde \vc{u} - \vu )} = \intO{ \vr \Big( \partial_t \tilde \vc{u} + \tilde \vc{u} \cdot \Grad \tilde \vc{u} \Big) \cdot
(\tilde \vc{u} - \vu )} + \intO{ \vr (\vc{u} - \tilde \vc{u}) \cdot \Grad \tilde \vc{u} \cdot
(\tilde \vc{u} - \vu )}.
\]
Seeing that
\[
\partial_t \tilde \vc{u} + \tilde \vc{u} \cdot \Grad \tilde \vc{u} =
\frac{1}{\tilde \vr} \Div \tn{S} (\Grad \tilde \vu) + \vc{f} - \Grad H'(\tilde \vr) ,
\]
we go back to (\ref{i7}) to obtain
\[
{\cal R}(\vr, \vu, \tilde \vr, \tilde \vu) = \intO{ \vr (\vc{u} - \tilde \vc{u}) \cdot \Grad \tilde \vc{u} \cdot
(\tilde \vc{u} - \vu )} + \intO{ \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot (\tilde \vu - \vu) }
\]
\[
+ \intO{(\tilde \vr - \vr) \Big( \partial_t H'(\tilde \vr) + \Grad H'(\tilde \vr)
\cdot \tilde \vu \Big)} - \intO{ \Div \tilde \vu \Big( p(\vr) - p(\tilde \vr) \Big) }.
\]
\item
Computing
\[
(\tilde \vr - \vr) \Big( \partial_t H'(\tilde \vr) + \Grad H'(\tilde \vr)
\cdot \tilde \vu \Big) = \Div \tilde \vu \, (\vr - \tilde \vr) \, p'(\tilde \vr),
\]
we may infer that
\[
\intO{(\tilde \vr - \vr) \Big( \partial_t H'(\tilde \vr) + \Grad H'(\tilde \vr)
\cdot \tilde \vu \Big)} - \intO{ \Div \tilde \vu \Big( p(\vr) - p(\tilde \vr) \Big) }
\]
\[
= - \intO{ \Div \tilde \vu \Big( p(\vr) - p'(\tilde \vr) (\vr - \tilde \vr) - p(\tilde \vr) \Big) };
\]
whence
\bFormula{a2}
{\cal R} (\vr, \vu, \tilde \vr, \tilde \vu) = \intO{ \vr (\vc{u} - \tilde \vc{u}) \cdot \Grad \tilde \vc{u} \cdot
(\tilde \vc{u} - \vu )} - \intO{ \Div \tilde \vu \Big( p(\vr) - p'(\tilde \vr) (\vr - \tilde \vr) - p(\tilde \vr) \Big) }
\eF
\[
+\intO{ \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot (\tilde \vu - \vu) }.
\]
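For completeness, the pointwise identity invoked at the beginning of this step follows from the continuity equation $\partial_t \tilde \vr + \Div (\tilde \vr \tilde \vu) = 0$ satisfied by the strong solution, combined with the standard relation $\tilde \vr H''(\tilde \vr) = p'(\tilde \vr)$; a short sketch:

```latex
\partial_t H'(\tilde \vr) + \Grad H'(\tilde \vr) \cdot \tilde \vu
= H''(\tilde \vr) \Big( \partial_t \tilde \vr + \Grad \tilde \vr \cdot \tilde \vu \Big)
= - H''(\tilde \vr) \, \tilde \vr \, \Div \tilde \vu
= - p'(\tilde \vr) \, \Div \tilde \vu .
```

Multiplying by $(\tilde \vr - \vr)$ then produces the integrand treated above.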
\item
In view of (\ref{a1}), we have
\bFormula{a3}
\left|
\intO{ \vr (\vc{u} - \tilde \vc{u}) \cdot \Grad \tilde \vc{u} \cdot
(\tilde \vc{u} - \vu )} - \intO{ \Div \tilde \vu \Big( p(\vr) - p'(\tilde \vr) (\vr - \tilde \vr) - p(\tilde \vr) \Big) } \right|
\eF
\[
\leq c \| \Grad \tilde \vu \|_{L^\infty(\Omega;R^3)} {\cal E}
\Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu ] \Big),
\]
provided
\bFormula{a4}
0 < \inf_{[0,T] \times \Ov{\Omega}} \tilde \vr \leq \tilde \vr (t,x)\leq
\sup_{[0,T] \times \Ov{\Omega}} \tilde \vr < \infty.
\eF
\item Finally, we write
\[
\intO{ \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot (\tilde \vu - \vu) }
\]
\[
= \int_{ \{ \tilde \vr / 2 < \vr < 2 \tilde \vr \}} \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot
(\tilde \vu - \vu) \ \dx
\]
\[
+ \int_{ \{ 0 \leq \vr \leq \tilde \vr/2 \}} \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot (\tilde
\vu - \vu) \ \dx + \int_{ \{ \vr \geq 2 \tilde \vr \}} \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu)
\cdot (\tilde \vu - \vu) \ \dx,
\]
where, by virtue of H\" older's inequality,
\bFormula{a5}
\left|
\int_{ \{ \tilde \vr / 2 < \vr < 2 \tilde \vr \}} \frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde \vu) \cdot
(\tilde \vu - \vu) \ \dx
\right|
\eF
\[
\leq c(\delta) \left\| \frac{1}{\tilde \vr} \Div \tn{S}
(\Grad \tilde \vu) \right\|_{L^3(\Omega;R^3)}^2 \int_{ \{ \tilde \vr / 2 < \vr < 2 \tilde \vr \}}
(\vr - \tilde \vr)^2 \ \dx + \delta \| \tilde \vu - \vu \|^2_{L^6(\Omega;R^3)}
\]
for any $\delta > 0$.
Furthermore, in accordance with (\ref{a1}), we get \bFormula{a6}
\int_{ \{ \tilde \vr / 2 < \vr < 2 \tilde \vr \}} (\vr - \tilde
\vr)^2 \ \dx \leq c {\cal E} \Big( [\vr, \vu] \Big| [\tilde \vr,
\tilde \vu ] \Big), \eF while, by virtue of Sobolev's inequality
and a Korn-type inequality (see e.g. Dain \cite{DAIN})
\begin{equation}\label{korn}
\|\vc z\|_{1,2}\le c\|\tn S(\Grad \vc z)\|_{L^2(\Omega;R^{3\times
3})},\; \vc z\in W^{1,2}(\Omega;R^3),
\end{equation}
we have
\bFormula{a7} \| \tilde \vu - \vu \|^2_{L^6(\Omega;R^3)} \leq
c \| \Grad \vu - \Grad \tilde \vu \|^2_{L^2(\Omega; R^{3 \times
3})}\le c\|\tn S(\Grad \vc u-\Grad\tilde\vc
u)\|^2_{L^2(\Omega;R^{3\times 3})} . \eF
Therefore,
\[
\left| \int_{ \{0\le \vr \leq \tilde \vr/2\}} \frac{1}{\tilde
\vr}\left( \vr - \tilde \vr \right) \Div \tn{S} (\Grad \tilde
\vu) \cdot (\tilde \vu - \vu) \ \dx \right| \]
\[
\leq c(\delta) \left\| \frac{1}{\tilde \vr}\Div \tn{S} (\Grad
\tilde \vu) \right\|_{L^3(\Omega;R^3)}^2 {\cal E} \Big( [\vr, \vu]
\Big| [\tilde \vr, \tilde \vu ] \Big) + \delta \|\tn S (\Grad \vu
- \Grad \tilde \vu )\|^2_{L^2(\Omega; R^{3 \times 3})}
\]
for any $\delta > 0$.
Next we realize that
$$
{\cal E}\Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu] \Big)\in L^\infty(0,T)
$$
and that
$$
\|\vr\|_{L^\gamma(\{\vr>2\overline\vr\})}\le c \Big[{\cal
E}\Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu] \Big)\Big]^{1/\gamma}, \quad
\|\vr^{\gamma/2}\|_{L^2(\{\vr>2\overline\vr\})}\le c\Big[ {\cal
E}\Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu] \Big)\Big]^{1/2}.
$$
Using these facts, we deduce
\bFormula{a9} \left| \int_{ \{ \vr \geq 2\tilde \vr\}}
\frac{1}{\tilde \vr}\left( \vr - \tilde \vr \right) \Div \tn{S}
(\Grad \tilde \vu) \cdot (\tilde \vu - \vu) \ \dx \right|\le \eF
\[
\int_{\{ \vr \ge 2\tilde\vr \} }\left(
\left| \frac{\vr - \tilde \vr}{\vr\tilde\vr}\right| \max
\{\vr,\vr^{\gamma/2}\}\left| \Div \tn{S}(\Grad \tilde
\vu)\right| \,\left|(\tilde \vu - \vu ) \right|\right)(\tau,\cdot)
\ \dx\le
\]
\[
c\|\tn S(\Grad \vu - \Grad \tilde \vu )\|_{L^2(\Omega;
R^{3\times3})} \| \Div \tn{S}(\Grad \tilde \vu) \|_{L^q\cap
L^3(\Omega; R^3)} \Big[{\cal E}\Big( [\vr, \vu] \Big| [\tilde \vr,
\tilde \vu] \Big)\Big]^{1/2}
\]
\[
\le \delta \| \tn S(\Grad\vu -\Grad \tilde \vu )\|_{L^2(\Omega;
R^{3\times 3})}^2 + c(\delta) \| \Div \tn{S}(\Grad \tilde \vu)
\|^2_{L^q\cap L^3(\Omega; R^3)}\; {\cal E}\Big( [\vr, \vu] \Big|
[\tilde \vr, \tilde \vu] \Big),\; q = \frac{6 \gamma}{5 \gamma - 6}.
\]
\end{enumerate}
Summing up relations (\ref{a2} - \ref{a9}) we conclude that the relative energy inequality, applied to $r = \tilde \vr$, $\vc{U} = \tilde \vu$,
yields the desired conclusion
\bFormula{a10}
{\cal E} \Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu] \Big) (\tau) \leq
\int_0^\tau h(t) {\cal E} \Big( [\vr, \vu] \Big| [\tilde \vr, \tilde \vu] \Big) (t) \ \dt, \ \mbox{with}\ h \in L^1(0,T),
\eF
provided $\tilde \vr$ satisfies (\ref{a4}), and
\bFormula{a11}
\Grad \tilde \vu \in L^1 (0,T; L^\infty (\Omega; R^{3 \times 3})) \cap
L^2(0,T; L^2(\Omega; R^{3 \times 3})),\
\Div \tn{S} (\Grad \tilde \vu) \in L^2(0,T; L^3 \cap L^q(\Omega;R^3)),
\eF
with
\[
q = \frac{6 \gamma}{5 \gamma - 6}.
\]
We have shown the following result:
\bTheorem{a1}
Let $\Omega \subset R^3$ be a bounded Lipschitz domain, let the pressure $p$ satisfy hypothesis (\ref{w1}), and let
\[
\vc{f} \in L^1(0,T; L^{2\gamma/(\gamma - 1)}(\Omega;R^3)).
\]
Assume that $\vr$, $\vu$ is a finite energy weak solution to the Navier-Stokes system (\ref{i1} - \ref{i4}) in $(0,T) \times \Omega$, specified in
Section \ref{fws}. Let $\tilde \vr$, $\tilde \vu$ be a (strong) solution of the same problem belonging to the class
\[
0 < \inf_{(0,T) \times \Omega} \tilde \vr \leq \tilde \vr (t,x) \leq
\sup_{(0,T) \times \Omega} \tilde \vr < \infty,
\]
\[
\Grad \tilde \vr \in L^2(0,T; L^q(\Omega; R^3)),\
\Grad^2 \tilde \vu \in L^2(0,T; L^q(\Omega; R^{3 \times 3 \times 3})),
\ q > \max \left\{ 3 ; \frac{3}{\gamma - 1} \right\},
\]
emanating from the same initial data.
Then
\[
\vr = \tilde \vr , \ \vu = \tilde \vu \ \mbox{in}\ (0,T) \times \Omega.
\]
\eT
\medskip
{\bf Remark \ref{a}.1} {\it We need $\Omega$ to be at least
Lipschitz to guarantee the $W^{1,p}$ extension property, with the
associated embedding relations}.
\medskip
{\bf Remark \ref{a}.2} {\it
The reader will have noticed that the regularity properties required for $\tilde \vr$, $\tilde \vu$ in Theorem \ref{Ta1} are in fact \emph{stronger}
than (\ref{a11}). The reason is that all integrands appearing in the relative energy inequality (\ref{i6}) must be well defined.}
\medskip
{\bf Remark \ref{a}.3} {\it
\emph{Existence of finite energy weak solutions} was shown in \cite{FNP1} for
general (finite energy) data and without any restriction imposed on the smoothness of $\partial \Omega$.}
\medskip
{\bf Remark \ref{a}.4} {\it \emph{Local-in-time existence of
strong solutions} belonging to the regularity class specified in
Theorem \ref{Ta1} was proved by Sun, Wang, and Zhang
\cite{SuWaZh}, under natural restrictions imposed on the initial
data.}
\subsubsection{Navier boundary conditions with friction}
Theorem \ref{Ta1} holds in the case of Navier's boundary condition
(\ref{p6}).
The proof remains essentially unchanged; the standard Korn-type
inequality (\ref{korn}) has to be replaced by a more
sophisticated one, namely
\begin{equation}\label{korng}
\begin{array}{c}
\|\vc v\|^2_{W^{1,2}(\Omega,R^3)}\le c(M,K,p)\Big(\|\tn S(\Grad\vc
v)\|^2_{L^2(\Omega,R^{3\times 3})}+\| R |\vc
v|^2\|_{L^1(\Omega)}\Big)\\ \\
\mbox{for any $\vc v\in W^{1,2}(\Omega;R^3)$, $R\ge 0$,
$M\le\int_\Omega R {\rm d}x$, $\|R\|_{L^p(\Omega)}\le K$},
\end{array}
\end{equation}
where $M,K>0$, $p>1$ (see \cite[Theorem 10.17]{FEINOV}). It is
employed in estimate (\ref{a7}) with $\vc v=\vc u-\tilde\vc u$ and
$R=\vr$.
\subsection{Weak-strong uniqueness on unbounded domains}
\subsubsection{No-slip boundary conditions}
If the Navier-Stokes system is considered on an unbounded domain $\Omega$, the far-field behavior (\ref{i4a}) must be specified. Here, we assume that
$\Ov{\vr} > 0$ so that the density $\tilde \vr$ of the (hypothetical) strong solution may be bounded below away from zero. Moreover, the finite energy
weak solutions necessarily belong to the class:
\bFormula{ar1}
\vr - \Ov{\vr} \in L^\infty(0,T;L^2 + L^\gamma(\Omega)),\
p(\vr) - p(\Ov{\vr}) \in L^\infty(0,T; L^2 + L^1 (\Omega)),
\eF
\bFormula{ar2}
\vu \in L^2(0,T;W^{1,2}_0(\Omega;R^3)),\ \vr \vu \in
L^\infty(0,T;L^2 + L^{2\gamma/(\gamma + 1)}(\Omega;R^3)).
\eF
An appropriate modification of Theorem \ref{Ta1} for unbounded
domains reads: \bTheorem{a2} Let $\Omega \subset R^3$ be an
unbounded domain with a uniformly Lipschitz boundary, let the
pressure $p$ satisfy hypothesis (\ref{w1}), and let
\[
\vc{f} \in L^1(0,T; L^{1} \cap L^\infty (\Omega;R^3)).
\]
Assume that $\vr$, $\vu$ is a finite energy weak solution to the Navier-Stokes system (\ref{i1} - \ref{i4}) in $(0,T) \times \Omega$, specified in
Section \ref{fws}, satisfying the far-field boundary
conditions (\ref{i4a}), with $\Ov{\vr} > 0$. Let $\tilde \vr$, $\tilde \vu$ be a (strong) solution of the same problem belonging to the class
\[
0 < \inf_{(0,T) \times \Omega} \tilde \vr \leq \tilde \vr (t,x) \leq
\sup_{(0,T) \times \Omega} \tilde \vr < \infty,
\]
\[
\Grad \tilde \vr \in L^2(0,T; L^2 \cap L^q(\Omega; R^3)),\
\Grad^2 \tilde \vu \in L^2(0,T; L^2 \cap L^q(\Omega; R^{3 \times 3 \times 3})),
\ q > \max \left\{ 3 ; \frac{3}{\gamma - 1} \right\},
\]
emanating from the same initial data, and satisfying the energy inequality (\ref{i8}).
Then
\[
\vr = \tilde \vr , \ \vu = \tilde \vu \ \mbox{in}\ (0,T) \times \Omega.
\]
\eT
\medskip
{\bf Remark \ref{a}.5} {\it The uniformly Lipschitz boundary
$\partial \Omega$ guarantees the $W^{1,p}$-extension property
as well as validity of Korn's inequality
(\ref{korn}). }
\medskip
{\bf Remark \ref{a}.6} {\it Since the strong solution satisfies the energy
(in)equality (\ref{i8}), it automatically belongs to the regularity class
(\ref{ar1}), (\ref{ar2}). }
\medskip
{\bf Remark \ref{a}.7} {\it \emph{Existence of finite energy weak solutions}
for certain classes of unbounded domains was shown in \cite{NOST4}, see also Lions \cite{LI4}.}
\medskip
{\bf Remark \ref{a}.8} {\it The reader may consult the nowadays classical papers by Matsumura and Nishida
\cite{MANI1}, \cite{MANI} for the existence of strong solutions,
more recent results can be found in Cho, Choe and Kim \cite{ChoChoeKim}, and in the references cited therein.}
\subsubsection{Navier boundary conditions}
Theorem \ref{Ta2} remains valid also for the Navier boundary conditions. We must, however, assume that
a Korn-type inequality holds on the unbounded domain under
consideration, for example
\bFormula{korngu} \| \vc{v} \|_{W^{1,2}(\Omega;R^3)}^2 \leq c(|V|)
\left( \| \tn S (\Grad \vc{v} ) \|^2_{L^2(\Omega;R^{3\times 3})} +
\int_{\Omega \setminus V} |\vc{v}|^2 \ \dx \right), \eF
\[
\mbox{for any}\ \vc{v} \in W^{1,2}(\Omega;R^3), \ |V| < \infty.
\]
Such an inequality is known to hold in a half-space, an exterior
domain, a cylinder, and a plane slab, to name only a few.
Since
$$
\Big|\{|\vr-\overline\vr|\ge \overline\vr/2\}\Big|<\infty,
$$
inequality (\ref{korngu}) implies the validity of (\ref{korng})
with $\vc v=\vc u-\tilde\vc u$ and $R=\vr$. This inequality has
to replace the standard Korn's inequality (\ref{korn}) in estimate
(\ref{a7}). Other arguments in the proof remain unchanged.
\def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0 \advance\dimen0
by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0
\hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\section{Introduction}
Principal component analysis (PCA) and model-based clustering methods are popular ways to disentangle the ancestral genetic history of individuals and populations. One particular model, the admixture model \citep{Pritchardea2000}, has played a prominent role because of its simple structure and, in some cases, easy interpretability. PCA is often seen as being model free but as noted by \cite{engelhardtstephens2010}, the two approaches are very similar. The interpretation of the results of a PCA analysis is often based on assumptions similar to those of the admixture model, such that admixed individuals are linear combinations of the eigenvectors representing unadmixed individuals. In this way, the admixed individuals lie in-between the unadmixed individuals in a PCA plot. As shown for the admixture model, there are many demographic histories that can lead to the same result \citep{LawsonvanDorpFalush2018} and many demographic histories that violate the assumptions of the admixture model \citep{genisanders2020}. As we will show, this is also the case for PCA, since it has a similar underlying model \citep{engelhardtstephens2010}.
The admixture model states that the genetic material from each individual is composed of contributions from $k$ distinct ancestral homogeneous populations. However, this is often contested in real data analysis, where the ancestral population structure might be much more complicated than that specified by the admixture model. For example, the $k$ ancestral populations might be heterogeneous themselves, the exact number of ancestral populations might be difficult to assess due to many smaller contributing populations, or the genetic composition of an individual might be the result of continuous migration or recent backcrossing, which also violates the assumptions of the admixture model. Furthermore, the admixture model assumes individuals are unrelated, which naturally might not be the case. This paper is concerned with assessing the fit of PCA building on the special relationship with the admixture model \citep{engelhardtstephens2010}. In particular, we are interested in quantifying the model fit and assessing the validity of the model at the level of the sample as well as at the level of the individual. Using real and simulated data we show that the fit from a PCA analysis is affected by violations of the admixture model.
We consider genotype data $G$ from $n$ individuals and $m$ SNPs, such that $G_{si}\in\{0,1,2\}$ is the number of reference alleles for individual $i$ and SNP $s$. Typically, $G_{si}$ is assumed to be binomially distributed with parameter $\Pi_{si}$, where $\Pi_{si}$ depends on the number of ancestral populations, $k$, their admixture proportions and the ancestral population allele frequencies. For clustering-based analyses such as ADMIXTURE \citep{alexander-lange}, $k$ is the number of clusters, while in PCA it corresponds to the top $k-1$ principal components. We give the specifics of the admixture model in the next section and show its relationship to PCA in the Material and methods section.
Several methods aim to estimate the best $k$ in some sense \citep{alexander-lange,evanno,Pritchardea2000,raj,wang2019}, but finding such $k$ does not imply the data fit the model \citep{lawson,janes}.
In statistics, it is standard to use residuals and distributional summaries of the residuals to assess model fit \citep{box2005}. The residual of an observation is defined as the difference between the observed and the predicted value (estimated under some model).
Visual trends in the residuals (for example, differences between populations) are indicative of model misfit, and large absolute values of the residuals are indicative of outliers (for example due to experimental errors, or kinship). If the model is correct, a histogram of the residuals is expected to be mono-modal centered around zero \citep{box2005}.
In our context, \cite{genisanders2020} argue that trends in the residual correlation matrix carry information about the underlying model and might be used for visual model evaluation. A method is designed to assess whether the correlation structure agrees with the proposed model, in particular, whether it agrees with the proposed number of homogeneous ancestral populations \citep{genisanders2020}. However, even when the model is correctly specified, the residuals are in general correlated \citep{box2005}, and therefore, trends might be observed even if the model is true, leading to incorrect model assessment. To adjust for this correlation, a leave-one-out procedure, based on maximum likelihood estimation of the admixture model parameters, is developed that removes the correlation between residuals when the model is correct, but not if the model is misspecified \citep{genisanders2020}. This approach could also be applied to PCA, where
expected genotypes could be calculated using probabilistic PCA \citep{meisner2021}.
This leave-one-out procedure is, however, computationally expensive.
To remedy the computational difficulties, we take a different approach to investigate the correlation structure.
We suggest two different ways of calculating the correlation matrix of the residuals. The first is simply the empirical correlation matrix of the residuals. The second might be considered an estimated correlation matrix, based on a model. Both are simple to compute. Under mild regularity assumptions, these two measures agree if the model is correct and the number of SNPs is large. Hence, their difference is expected to be close to zero, when the admixture model is not violated. If the difference is considerably different from zero, then this is proof of model misfit.
To explore the adequacy of the proposed method, we investigate different ways to calculate the predicted values of the genotypes (hence, the residuals) using Principal Component Analysis (PCA). We also show that this approach can be used on estimated admixture proportions.
Specifically, we use 1) an uncommon but very useful PCA approach (here, named PCA 1) based on unnormalized genotypes \citep{CabrerosStorey2019,chenstorey2015}, 2) PCA applied to mean centred data (PCA 2), see \cite{Patterson2006}, and 3) PCA applied to mean and variance normalised data (PCA 3) \citep{Patterson2006}. All three approaches are computationally fast and do not require separate estimation of ancestral allele frequencies and population proportions, as in \cite{genisanders2020}. Hence, the computation of the residuals are computationally inexpensive. Additionally, we show that this approach can also be applied to output from, for example, the software ADMIXTURE \citep{Alexander2009} to estimate $\Pi_{si}$ for each $s$ and $i$, and to calculate the residuals from these estimates. An overview of PCA can be found in \cite{Jolliffe2022}.
We demonstrate that our proposed method works well on simulated and real data, when the predicted values (and the residuals) are calculated in any of the four mentioned ways. Furthermore, we back this up mathematically by showing that the two correlation measures agree (if the number of SNPs is large) under the correct admixture model for PCA 1 and PCA 2.
For the latter, a few additional assumptions are required. The estimated covariance (and correlation coefficient) under the proposed model might be seen as a correction term for population structure. Subtracting it from the empirical covariance thus gives a covariance estimate with baseline zero under the correct model, independent of the population structure. It is natural to suspect that something similar can be done in models with population structure and kinship, which we will pursue in a subsequent study.
In the next section,
we describe the model, the statistical approach to compute the residuals, and how we evaluate model fit. In addition, we give mathematical statements that show how the method performs theoretically. In the `Results' section,
we provide analysis of simulated and real data, respectively. We end with a discussion.
Mathematical proofs are collected in the appendix.
\section{Materials and methods}
\subsection{Notation}
For an $\ell_1\times\ell_2$ matrix $A=(A_{ij})_{i,j}$, $A_{\star i}$ denotes the $i$-th column of $A$, $A_{i \star }$ the $i$-th row, $A^T$ the transpose matrix, and $\text{rank}(A)$ the rank.
The Frobenius norm of a square $\ell\times\ell$ matrix $A$ is
$$\|A\|_F=\sqrt{\sum_{i=1}^\ell\sum_{j=1}^\ell A_{ij}^2}.$$
A square matrix $A$ is an orthogonal projection if $A^2=A$ and $A^T=A$. A symmetric matrix has $n$ real eigenvalues (with multiplicity) and the eigenvectors can be chosen such that they are orthogonal to each other. If the matrix is positive (semi-)definite, then the eigenvalues are positive (non-negative).
For a random variable/vector/matrix $X$, its expectation is denoted $\E[X]$ (provided it exist). The variance of a random variable $X$ is denoted $\var(X)$, and covariance between two random variables $X,Y$ is denoted $\cov(X,Y)$ (provided they exist). Similarly, for a random vector $X=(X_1,\ldots,X_n)$, the covariance matrix is denoted $\cov(X)$. For a sequence $X_m$, $m=0,\ldots,$ of random variables/vectors/matrices, if $X_m\to X_0$ as $m\to\infty$ almost surely (convergence for all realisations but a set of zero probability), we leave out `almost surely' and write $X_m\to X_0$ as $m\to\infty$ for convenience.
\subsection{The PCA and the admixture model}\label{subsec:model}
We consider a model with genotype observations from $n$ individuals, and $m$ biallelic sites (SNPs), where $m$ is assumed to be (much) larger than $n$, $m\ge n$. The genotype
$G_{si}$ of SNP $s$ in individual $i$ is assumed to be a binomial random variable
\begin{equation*}\label{eq:model}
G_{si}\sim \binomial(2,\Pi_{si}).
\end{equation*}
In matrix notation, we have $G\sim\binomial(2,\Pi)$ with expectation $\E( G\mid \Pi) = 2\Pi $, where $G$ and $\Pi$ are $m\times n$ dimensional matrices. Conditional on $\Pi$, we assume the entries of $G$ are independent random variables.
Furthermore, we assume the matrix $\Pi$ takes the form $\Pi=FQ$, where $Q$ is a (possibly unconstrained) $k\x n$ matrix of rank $k\le n$, and $F$ is a (possibly unconstrained)
$m\times k$ matrix, also of rank $k$ (implying $\Pi$ likewise is of rank $k$, Lemma \ref{lem:rankk}). Entry-wise, this amounts to
\[\Pi_{si}=(FQ)_{si}=\sum_{j=1}^k F_{sj}Q_{ji},\quad s=1,\ldots,m,\quad i=1,\ldots,n. \]
For the binomial assumption to make sense, we must require the entries of $\Pi$ to be between zero and one.
In the literature, this model is typically encountered in the form of an admixture model
with $k$ ancestral populations, see for example, \cite{Pritchardea2000,genisanders2020}. The general unconstrained setting which applies to PCA has also been discussed \citep{CabrerosStorey2019}. In the case of an admixture model, $Q$ is a matrix of ancestral admixture proportions, such that the proportion of individual $i$'s genome originating from population $j$ is $Q_{ji}$. Furthermore, $F$ is a matrix of ancestral SNP frequencies, such that the frequency of the reference allele of SNP $s$ in population $j$ is $F_{sj}$. In many applications, the columns of $Q$ sum to one.
While we lean towards an interpretation in terms of ancestral population proportions and SNP frequencies, our approach does not enforce or assume the columns of $Q$ (the admixture proportions) to sum to one, but allow these to be unconstrained. This is advantageous for at least two reasons. First, a proposed model might only contain the major ancestral populations, leaving out older or lesser defined populations. Hence, the sum of ancestral proportions might be smaller than one. Secondly, when fitting a model with fewer ancestral populations than the true model, one should only require the admixture proportions to sum to at most one.
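To make the setup concrete, the model is easy to simulate. In the sketch below, the dimensions, the uniform distribution of the ancestral allele frequencies, and the Dirichlet-distributed admixture proportions are illustrative assumptions, not part of the model itself:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 5000, 30, 3  # SNPs, individuals, ancestral populations (illustrative)

# Ancestral allele frequencies: rows of F drawn iid (here uniform, an assumption).
F = rng.uniform(0.05, 0.95, size=(m, k))

# Admixture proportions: columns of Q sum to one (Dirichlet draws).
Q = rng.dirichlet(np.ones(k), size=n).T  # k x n

Pi = F @ Q                # m x n, entries in (0, 1) by convexity
G = rng.binomial(2, Pi)   # genotype matrix, entries in {0, 1, 2}
```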
\subsection{The residuals}
Our goal is to design a strategy to assess the hypothesis that $\Pi$ is a product of two matrices.
As we do not know the true $k$, we suggest a number $k'$ of ancestral populations and estimate the model parameters under this constraint. That is, we assume a model of the form
\begin{equation*}
G\sim\binomial(2,\Pi_{k'}),\quad \Pi_{k'}=F_{k'}Q_{k'},
\end{equation*}
where each entry of $G$ follows a binomial distribution. $ Q_{k'}$ has dimension $k'\x n$, $F_{k'}$ has dimension $m\x k'$, and $\text{rank}(Q_{k'})=\text{rank}(F_{k'})=k'$, hence also $\text{rank}(\Pi_{k'})=k'$. Throughout, we use the index $k'$ to indicate the imposed rank condition, and assume $k'\le k$ unless otherwise stated. The latter assumption is only to guarantee the mathematical validity of certain statements, and is not required for practical use of the method.
Our approach is built on the residuals, the difference between observed and predicted data. To define the residuals, we let $P\colon \R^n\to\R^n$ be the orthogonal projection onto the $k$-dimensional subspace spanned by the $k$ rows of (the true) $Q$, hence $P=Q^T(QQ^T)^{-1}Q$, and $QP=Q$. Let $\widehat P_{k'}$ be an estimate of $P$ based on the data $G$, and assume $\widehat P_{k'}$ is an orthogonal projection onto a $k'$-dimensional subspace.
Later in this section, we show how an estimate $\widehat P_{k'}$ can be obtained from an estimate of $Q_{k'}$ or an estimate of $\Pi_{k'}$. Estimates of these parameters might be obtained using existing methods, based on, for example, maximum likelihood analysis \citep{wang2003,Alexander2009,genisanders2020}. Furthermore, for the three PCA approaches, an estimate of the projection matrix can simply be obtained from eigenvectors of a singular value decomposition (SVD) of the data matrix.
We define the $m\times n$ matrix of residuals by
$$R_{k'} = G-2\widehat{\Pi}_{k'} =G(I-\widehat P_{k'}),$$
where $G$ is the observed data and $G\widehat P_{k'}$, the predicted values. The latter might also be considered an estimate of $2\Pi$, the expected value of $G$.
This definition of residuals is in line with how the residuals are defined in a multilinear regression model as the difference between the observed data (here, $G$) and the projection of the data onto the subspace spanned by the regressors (here, $G\widehat P_{k'}$). The essential difference is that in a multilinear regression model the regressors are known and do not depend on the observed data, while $\widehat P_{k'}$ is estimated from the data.
We assess the model fit by studying the correlation matrix of the residuals in two ways.
First, we consider the \emph{empirical covariance matrix } $\widehat B$ with entries
\begin{align*}
\widehat B_{ij}&=\frac 1{m-1}\sum_{s=1}^m (R_{k',si}-{\xbar R}_{k',i} )(R_{k',sj}-{\xbar R}_{k',j})\\
&=\frac 1{m-1}\sum_{s=1}^m (R_{k',si}R_{k',sj}-{\xbar R}_{k',i}\ {\xbar R}_{k',j}),
\end{align*}
where \[
{\xbar R}_{k',i} = \frac1m \sum_{s=1}^m R_{k',si},
\]
and the corresponding \emph{empirical correlation matrix} with entries
\[
\widehat b_{ij}=\frac{\widehat B_{ij}}{\sqrt{\widehat B_{ii}\widehat B_{jj}}},
\]
$i,j=1,\ldots,n$.
Secondly, we consider the \emph{estimated covariance matrix}
\[
\widehat C = (I-\widehat P_{k'})\widehat D(I-\widehat P_{k'})
\]
with corresponding \emph{estimated correlation matrix},
\begin{align*}
\widehat c_{ij} = \frac{\widehat C_{ij}}{\sqrt{\widehat C_{ii}\widehat C_{jj}}},
\end{align*}
$i,j=1,\ldots,n$.
Here, $\widehat D$ is the $n\times n$ diagonal matrix containing the average heterozygosities of each individual,
\begin{equation*}
\label{eq:hatD}
\widehat D_{ii}= \frac1m\sum_{s=1}^m G_{si}(2-G_{si}), \quad i=1,\ldots, n.
\end{equation*}
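A minimal numerical sketch of the two matrices (simulated data as in the model section; here $\widehat P_{k'}$ is built from the true $Q$ with $k'=k$, so the difference $\widehat b - \widehat c$ should be close to zero for large $m$, as formalised below; all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20000, 12, 2                     # illustrative sizes
F = rng.uniform(0.05, 0.95, size=(m, k))
Q = rng.dirichlet(np.ones(k), size=n).T
G = rng.binomial(2, F @ Q).astype(float)

# Orthogonal projection onto the row space of Q (here the true Q, i.e. k' = k).
P = Q.T @ np.linalg.solve(Q @ Q.T, Q)

R = G @ (np.eye(n) - P)                    # residual matrix R_{k'}

# Empirical covariance/correlation of the residual columns (divisor m - 1).
B = np.cov(R, rowvar=False)
b = B / np.sqrt(np.outer(np.diag(B), np.diag(B)))

# Estimated covariance/correlation under the model.
D = np.diag((G * (2 - G)).mean(axis=0))    # average heterozygosities
C = (np.eye(n) - P) @ D @ (np.eye(n) - P)
c = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))

diff = b - c                               # near zero when the model holds
```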
Under reasonable regularity conditions, we can quantify the behaviour of $\widehat B$ and $\widehat C$ as the number of SNPs becomes large. Specifically, we assume the rows of $F$ are independent and identically distributed with distribution $\text{Dist}(\mu, \Sigma)$, where $\mu$ denotes the $k$-dimensional mean vector of the distribution, and $\Sigma$ the $k\x k$-covariance matrix, that is,
\begin{align*}
F_{s\star}=(F_{s1},\ldots,F_{sk})\, &\stackrel{\text{iid}}\sim \,\text{Dist}(\mu,\Sigma),
\end{align*}
$s=1,\ldots,m$.
The matrix $Q$ is assumed to be non-random, that is, fixed. These assumptions are standard and typically used in simulation of genetic data, see for example, \cite{PickrellPritchard2012,CabrerosStorey2019,genisanders2020}. Often $\text{Dist}(\mu, \Sigma)$ is taken to be the product of $k$ independent uniform distributions, in which case $\mu=0.5(1 ,1 ,\ldots ,1)$ and $\Sigma$ is a diagonal matrix with entries $1/12$, though other choices have been applied, see for example \citet{Balding1995,Conomosea2016}.
Let $D$ be the diagonal matrix with entries
\begin{equation}\label{eq:Dvar}
D_{ii}=2\E[\Pi_{si}(1-\Pi_{si})],\quad i=1,\ldots,n.
\end{equation}
It follows from Lemma~\ref{thm:unbiasedD} in the appendix
that $\widehat D $ converges
to $D$ as $m\to \infty$. Furthermore, as $D_{ii}$ is the variance of $G_{si}$ (it is binomial), then $\widehat D_{ii}$ might be considered an estimate of this variance.
The proofs of the statements are in the appendix.
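The consistency of $\widehat D$ rests on the identity $\E[G_{si}(2-G_{si})\mid \Pi_{si}] = 2\Pi_{si}(1-\Pi_{si})$: the product $G(2-G)$ equals one exactly for heterozygotes and zero otherwise. A quick Monte Carlo check (the uniform distribution of the $\Pi$ entries is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200_000                              # number of SNPs (illustrative)
Pi_col = rng.uniform(0.2, 0.8, size=m)   # Pi entries for one individual
G_col = rng.binomial(2, Pi_col)

# G(2 - G) is 1 for heterozygotes, 0 otherwise, so its average
# estimates 2 E[Pi (1 - Pi)], i.e. the diagonal entry D_ii.
D_hat = np.mean(G_col * (2 - G_col))
D_target = np.mean(2 * Pi_col * (1 - Pi_col))
```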
\begin{theorem}\label{thm:BC}
Let $k'\le k$. Under the given assumptions, suppose further that $\widehat P_{k'}\to P_{k'}$ as $m\to \infty$, for some matrix $P_{k'}$. Then, $P_{k'}$ is an orthogonal projection.
Furthermore, the following holds,
\begin{align*}
\widehat B &\,\to \, (I-P_{k'})(D+4 Q^T\Sigma Q)(I-P_{k'}), \\
\widehat C &\,\to\, (I- P_{k'}) D(I- P_{k'}),
\end{align*}
as $m\to\infty$. Hence, also
\begin{align*}
\widehat B\,-\,\widehat C\,\, &\to\,\, 4(I- P_{k'})Q^T\Sigma Q(I- P_{k'}) \\
&\,\,=\,\,4(P- P_{k'})Q^T\Sigma Q(P- P_{k'}),
\end{align*}
as $m\to\infty$. For $k'=k$, if $P_k= P$, then the right hand side is the zero matrix, whereas this is not the case in general for $k'<k$.
\end{theorem}
\begin{theorem}\label{thm:n-1}
Assume $k'=k$ and $P_k= P$. Furthermore, suppose as in Theorem~\ref{thm:BC} and that the vector with all entries equal to one is in the space spanned by the rows of $Q$ (this is, for example, the case if the admixture proportions sum to one for each individual). Then,
\begin{align}\label{eq:-1}
\frac{\sum_{i=1}^n\sum_{j= 1,i\not=j }^n\widehat B_{ij}}{\sum_{i=1}^n \widehat B_{ii}}&\to \ -1,\quad\text{as}\quad m\to\infty.
\end{align}
In addition, if $Q$ takes the form
$$Q=\begin{pmatrix} Q_1 & 0& \cdots &0\\ 0 & Q_2 &\cdots &0\\ \vdots &\vdots & \ddots & \vdots \\ 0&0&\cdots& Q_r\end{pmatrix}$$
where $Q_\ell$ has dimension $k_\ell\times n_\ell$, $\sum_{\ell=1}^r k_\ell=k$ and $\sum_{\ell=1}^r n_\ell=n$, then \eqref{eq:-1} holds for each component of $n_\ell$ individuals. If $Q_\ell=(1 \ldots 1)$, then
$$\widehat b_{ij}\ \to\ -\frac{1}{n_\ell-1},\quad\text{as}\quad m\to\infty,$$
for all individuals $i,j$ in the $\ell$-th component, irrespective of the form of $Q_{\ell'}$, $\ell'\not=\ell$.
\end{theorem}
\begin{theorem}\label{thm:sub}
Assume $k'=k$ and $P_k= P$. Furthermore, suppose as in Theorem~\ref{thm:BC} and that $Q$ takes the form
$$Q=\begin{pmatrix} Q_1 & Q_2 \\ 0 & Q_3 \end{pmatrix},$$
where $Q_1=(1 \ldots 1)$ has dimension $ 1\times n_1$, $n_1\le n$.
Then, $\widehat b_{ij}$ converges as $m\to\infty$ to a value larger than or equal to $-\tfrac{1}{n_1-1},$
for all $i,j=1,\ldots,n_1$.
\end{theorem}
The same statements in the last two theorems hold with $\widehat B$ and $\widehat b$ replaced by $\widehat C$ and $\widehat c$, respectively.
The three theorems provide means to evaluate the model. In particular, Theorem~\ref{thm:BC} might be used to assess the correctness (or appropriateness) of the proposed $k'$, while Theorem~\ref{thm:n-1} and Theorem~\ref{thm:sub} might be used to assess whether data from a group of individuals (e.g., a modern-day population) originates from a single ancestral population, irrespective of the origin of the remaining individuals. We give examples in the Results section.
The work flow is shown in Algorithm \ref{alg:alg1}. We process real and simulated genotype data using PCA 1, PCA 2, PCA 3, and the software ADMIXTURE, and evaluate the fit of the model.
\begin{algorithm}
\begin{enumerate}
\item Choose $k'$,
\item Compute an estimate $\widehat P_{k'}$ of the projection $P$,
\item Calculate the residuals $R_{k'}=G(I-\widehat P_{k'})$,
\item Calculate the correlation coefficients, $\widehat b$ and $\widehat c$,
\item Plot $\widehat b$ and the difference, the corrected correlation coefficients, $\widehat b-\widehat c$,
\item Assess visually the fit of the model.
\end{enumerate}
\caption{Work flow of the proposed method}\label{alg:alg1}
\end{algorithm}
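As an illustration, steps 3 and 4 of the work flow can be sketched in a few lines of NumPy. This is a hypothetical minimal sketch: the genotype matrix $G$ (SNPs as rows) and an estimate $\widehat P_{k'}$ are assumed given, and the corrected coefficients $\widehat c$ are computed as described earlier in the text.

```python
import numpy as np

def residual_correlations(G, P_hat):
    """Steps 3-4 of the work flow: residuals R = G(I - P_hat) and the
    empirical correlation coefficients b_hat between pairs of
    individuals, computed across SNPs (the rows of G)."""
    n = G.shape[1]
    R = G @ (np.eye(n) - P_hat)              # m x n residual matrix
    return np.corrcoef(R, rowvar=False)      # n x n matrix of b_hat values

# toy example: 3 individuals, projection onto the all-ones vector
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(1000, 3)).astype(float)
e = np.ones((1, 3))
P_hat = e.T @ e / 3.0
B_hat = residual_correlations(G, P_hat)
```

The corrected coefficients $\widehat c$ of step 5 would then be subtracted entry-wise from $\widehat b$ before plotting.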
\subsection{Estimation of $P_{k'}$}
Estimation of $Q, F,$ and $\Pi$ has received considerable interest in the literature, using, for example, maximum likelihood \citep{wang2003,Alexander2009}, Bayesian approaches \citep{Pritchardea2000} or PCA \citep{engelhardtstephens2010}.
We discuss different ways to obtain an estimate $\widehat P_{k'}$ of $P$.
\subsubsection{Using an estimate $\widehat Q_{k'}$ of $Q_{k'}$ }
An estimate $\widehat P_{k'}$ might be obtained by projecting onto the subspace spanned by the $k'$ rows of $\widehat Q_{k'}$,
$$\widehat P_{k'}=\widehat Q_{k'}^T(\widehat Q_{k'}\widehat Q_{k'}^T)^{-1}\widehat Q_{k'},$$
assuming $\text{rank}(\widehat Q_{k'})=k'$ for the calculation to be valid.
We apply this approach to estimate the projection matrix using output from the software ADMIXTURE.
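For example, the construction amounts to a single matrix expression (a sketch; the admixture matrix $\widehat Q_{k'}$ below is hypothetical and assumed to have full row rank, as would typically be the case for ADMIXTURE output):

```python
import numpy as np

def projection_from_Q(Q_hat):
    """Orthogonal projection onto the row space of Q_hat;
    requires rank(Q_hat) = k' for the inverse to exist."""
    return Q_hat.T @ np.linalg.inv(Q_hat @ Q_hat.T) @ Q_hat

# hypothetical k'=2 admixture proportions for four individuals
Q_hat = np.array([[1.0, 0.7, 0.3, 0.0],
                  [0.0, 0.3, 0.7, 1.0]])
P_hat = projection_from_Q(Q_hat)
```

The result is symmetric, idempotent and leaves the rows of $\widehat Q_{k'}$ invariant, as an orthogonal projection should.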
\subsubsection{Using an estimate $\widehat \Pi_{k'}$ of $\Pi_{k'}$ }
Let $\widetilde \Pi_{k'}$ be $k'$ linearly independent rows chosen from $\widehat \Pi_{k'}$ (out of $m$ rows). Then, an estimate $\widehat P_{k'}$ of $P_{k'}$ is
$$\widehat P_{k'}=\widetilde \Pi_{k'}^T(\widetilde \Pi_{k'}\widetilde \Pi_{k'}^T)^{-1}\widetilde \Pi_{k'},$$
assuming $\text{rank}(\widehat \Pi_{k'})=k'$ for the calculation to be valid. Alternatively, one might apply the Gram-Schmidt method in which case the vectors are orthonormal by construction and $\widehat P_{k'}=\widetilde \Pi_{k'}^T\widetilde \Pi_{k'}$. The estimate $\widehat P_{k'}$ is independent of the choice of the $k'$ rows, provided $\text{rank}(\widehat \Pi_{k'})=k'$.
\subsubsection{Using PCA 1}
We consider a PCA approach, originally due to \cite{chenstorey2015}, to estimate the space spanned by the rows of $Q$. We follow the procedure laid out in \cite{CabrerosStorey2019}.
Let $\widehat H$ be the symmetric matrix
$$\widehat H= \frac1mG^TG-\widehat D.$$
Since $\widehat H$ is symmetric, all eigenvalues are real and the matrix is diagonalisable. Furthermore, $\widehat H$ is a variance adjusted version of $\frac1mG^TG$, see \eqref{eq:Dvar}.
Let $u_1,\ldots,u_{k'}$ be $k'\le k$ orthogonal eigenvectors belonging to the $k'$ largest eigenvalues of $\widehat H$, counted with multiplicities. Define the $n\times k'$ matrix $U_{k'}= (u_1,\ldots,u_{k'})$ and the $n\times n$ orthogonal projection matrix
$$\widehat P_{k'}= U_{k'}(U_{k'}^TU_{k'})^{-1}U_{k'}^T=U_{k'}U_{k'}^T$$
onto the subspace given by the span of the vectors $u_1,\ldots,u_{k'}$.
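The PCA 1 construction can be sketched as follows (a minimal illustration; the diagonal variance-adjustment matrix $\widehat D$ is assumed to be computed as in the text, and is replaced by a zero placeholder here):

```python
import numpy as np

def pca1_projection(G, D_hat, k_prime):
    """PCA 1: eigenvectors of H_hat = G^T G / m - D_hat belonging to the
    k_prime largest eigenvalues give the estimated projection U U^T."""
    m = G.shape[0]
    H_hat = G.T @ G / m - D_hat              # symmetric n x n matrix
    _, eigvecs = np.linalg.eigh(H_hat)       # eigenvalues in ascending order
    U = eigvecs[:, ::-1][:, :k_prime]        # top-k_prime eigenvectors
    return U @ U.T

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(2000, 5)).astype(float)
D_hat = np.zeros((5, 5))                     # placeholder for the adjustment
P_hat = pca1_projection(G, D_hat, k_prime=2)
```

Because the eigenvectors returned by `eigh` are orthonormal, the trace of the projection equals $k'$ exactly.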
In this particular case, convergence of $\widehat P_{k'}$ can be made precise. Define the matrix $H=4Q^T(\Sigma+\mu\mu^T)Q$. Then, $H$ is symmetric and positive semi-definite because $\Sigma$ and $\mu\mu^T$ both are positive semi-definite. Hence, $H$ has non-negative eigenvalues.
Furthermore, according to Lemma~\ref{thm:hatHisunbiased} in the appendix,
$\widehat H$ converges
to $H$ as $m\to\infty$.
\begin{theorem}\label{thm:Pkconvergence}
Assume $k'\le k$. Let $\lambda_1\ge \ldots\ge \lambda_n\ge 0$ be the eigenvalues of $H$, with corresponding orthogonal eigenvectors $v_1,\ldots,v_n$. In particular, $\lambda_{k+1}=\ldots=\lambda_n=0$, as $Q$ has rank $k$. Let $P_{k'}$ be the orthogonal projection onto the span of $v_1,\ldots,v_{k'}$, that is,
$$P_{k'}= V_{k'}(V_{k'}^TV_{k'})^{-1}V_{k'}^T=V_{k'}V_{k'}^T,$$
where $V_{k'}=(v_1,\ldots,v_{k'})$.
Assume $k'=n$ or $\lambda_{k'}>\lambda_{k'+1}$, referred to as the eigenvalue condition. Then, $\widehat P_{k'}\to P_{k'}$
as $m\to\infty$. If the eigenvalue condition is fulfilled for $k'=k$, then $P_k=P$, that is, $P_k$ is the orthogonal projection onto the span of the row vectors of $Q$. In particular, the eigenvalue condition is fulfilled for $k'=k$ if and only if $\Sigma+\mu\mu^T$ is positive definite. The latter is the case if $\Sigma$ is positive definite.
\end{theorem}
For $k'=k$, the correct row space of $Q$ is found eventually, but not $Q$ itself. If $k'<k$, then a subspace of this row space is found, corresponding to the $k'$ largest eigenvalues.
As the data is not mean centred, we discard the first principal component, and use the subsequent $k'-1$ eigenvectors and eigenvalues.
\subsubsection{Using PCA 2 (mean centred data)}
A popular approach to estimation of $\Pi$ in the admixture model is PCA based on mean centred data, or mean and variance normalised data \citep{Pritchardea2000,engelhardtstephens2010,Patterson2006}.
Let $G_1=G-\tfrac 1n GE=G(I-\tfrac 1n E)$ be the SNP-wise mean centred genotypes, where $E$ is an $n\times n$ matrix with all entries equal to one. Following the exposition and notation in \cite{CabrerosStorey2019}, let $ G_1=U\Delta V^T$ be the SVD of $ G_1$, where $\Delta V^T$ consists of the row-wise principal components of $ G_1$, ordered according to the singular values.
Define
$$S_{k'}=\begin{pmatrix} V^T_{1:(k'-1)}\\ e\end{pmatrix},$$
where $e=(1\,1\,\ldots\,1)$ is a vector with all entries one, and $V^T_{1:(k'-1)}$ contains the top $k'-1$ rows of $V^T$ (which have length $n$, as required for an $n\times n$ projection). Then, an estimate of the projection is
$$ \widehat P_{k'}=S_{k'}^T(S_{k'}S_{k'}^T)^{-1}S_{k'}.$$
The squared singular values in the SVD decomposition of $G_1$ are the same as the eigenvalues of
$$\widehat H_1=\frac 1m G^T_1G_1=\frac 1m\left(I-\frac 1nE\right) G^TG \left(I-\frac 1nE \right) $$
\citep{Jolliffe2002}. We have
\begin{align}
\E[\widehat H_1] &=\frac 1m\left(I-\frac 1nE\right)\E[G^TG]\left(I-\frac 1nE\right) \nonumber\\
&=\left(I-\frac 1nE\right)(D+ 4Q^T(\Sigma+\mu\mu^T)Q)\left(I-\frac 1nE\right). \label{eq:H1}
\end{align}
Let $H_1$ denote the right hand side of \eqref{eq:H1}.
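The PCA 2 construction can be sketched in NumPy as follows (a minimal illustration; the principal directions are taken as the top right singular vectors of the mean-centred matrix, whose length $n$ matches the dimension of the projection):

```python
import numpy as np

def pca2_projection(G, k_prime):
    """PCA 2: SNP-wise mean centring, then projection onto the span of
    the top k_prime - 1 principal directions and the all-ones vector."""
    m, n = G.shape
    G1 = G - G.mean(axis=1, keepdims=True)        # G (I - E/n)
    _, _, Vt = np.linalg.svd(G1, full_matrices=False)
    S = np.vstack([Vt[:k_prime - 1], np.ones((1, n))])
    return S.T @ np.linalg.inv(S @ S.T) @ S

rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(2000, 6)).astype(float)
P_hat = pca2_projection(G, k_prime=3)
```

By construction the all-ones vector lies in the projected subspace, and the trace of the projection equals $k'$.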
\begin{theorem}\label{thm:Pkconvergence2}
Let $\lambda_1\ge \ldots\ge \lambda_n$ be the eigenvalues of $H_1$, with corresponding orthogonal eigenvectors $v_1,\ldots,v_n$. In particular, $v_n=e$ and $ \lambda_n=0$. If $D$ has all diagonal entries positive, then $\lambda_{n-1}> 0$.
Let $k'\le n$ and let $P_{k'}$ be the orthogonal projection onto the span of $v_1,\ldots,v_{k'-1},e$, that is,
$$P_{k'}= V_{k'}(V_{k'}^TV_{k'})^{-1}V_{k'}^T,$$
where $V_{k'}=(v_1,\ldots,v_{k'-1},e)$.
If $k'=n$ or $\lambda_{k'}>\lambda_{k'+1}$, then $\widehat P_{k'}\to P_{k'}$ as $m\to\infty$.
\end{theorem}
There are no guarantees that for $k'=k$, we have $P_k=P$ and that the difference between $\widehat B$ and $\widehat C$ converges to zero for large $m$. However, this is the case under some extra conditions, and appears to be the case in many practical situations, see the Results section.
\begin{theorem}\label{thm:Pkconvergence3}
Assume $D=dI$ for some $d>0$. Furthermore, assume the vector $e$ is in the row space of $Q$ (this is, for example, the case if the admixture proportions sum to one for each individual). Then, $\lambda_k=\ldots=\lambda_{n-1}=d$, and $\lambda_n=0$.
If $\Sigma+\mu\mu^T$ is positive definite, then $\lambda_{k+1}>\lambda_k$ and $P_k=P$, where $P_k$ is as in Theorem~\ref{thm:Pkconvergence}.
As a consequence, with $k'=k$ in Theorem~\ref{thm:BC}, $\widehat B-\widehat C\to0$ as $m\to\infty$.
\end{theorem}
\subsubsection{Using PCA 3 (mean and variance normalised data)}
Let $ G_2= W^{-1}G_1$ be the SNP mean and variance normalised genotypes, where $W$ is an $m'\times m'$ diagonal matrix with $s$-th entry equal to the observed standard deviation of the genotypes of SNP $s$. All SNPs for which no variation is observed are removed, hence the number of SNPs might be smaller than the original number, $m'\le m$. Following the same procedure as for PCA 2,
let $ G_2=U\Delta V^T$ be the SVD of $ G_2$, where $\Delta V^T$ consists of the row-wise principal components of $ G_2$, ordered according to the singular values. Define
$$S_{k'}=\begin{pmatrix} V^T_{1:(k'-1)}\\ e\end{pmatrix},$$
where $e=(1\,1\,\ldots\,1)$, and $V^T_{1:(k'-1)}$ contains the top $k'-1$ rows of $V^T$. Then, an estimate of the projection is
$ \widehat P_{k'}=S_{k'}^T(S_{k'}S_{k'}^T)^{-1}S_{k'}.$
We are not aware of any theoretical justification of this procedure similar to Theorem~\ref{thm:BC}, but it appears to perform well in many practical situations, according to our simulations.
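The corresponding sketch for PCA 3 is identical to PCA 2 up to the SNP-wise variance normalisation and the removal of SNPs without observed variation (again a minimal illustration on hypothetical data):

```python
import numpy as np

def pca3_projection(G, k_prime):
    """PCA 3: SNP-wise mean and variance normalisation, dropping SNPs
    without observed variation, then the same projection as in PCA 2."""
    n = G.shape[1]
    sd = G.std(axis=1)
    keep = sd > 0                                 # m' <= m SNPs retained
    G2 = (G[keep] - G[keep].mean(axis=1, keepdims=True)) / sd[keep, None]
    _, _, Vt = np.linalg.svd(G2, full_matrices=False)
    S = np.vstack([Vt[:k_prime - 1], np.ones((1, n))])
    return S.T @ np.linalg.inv(S @ S.T) @ S

rng = np.random.default_rng(3)
G = rng.integers(0, 3, size=(2000, 6)).astype(float)
G[0, :] = 1.0                                     # one monomorphic SNP
P_hat = pca3_projection(G, k_prime=3)
```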
\subsection{Simulation of genotype data}\label{sec:simuldetails}
We simulated genotype data from different demographic scenarios using different sampling strategies, deliberately chosen to challenge the method. We first made simple simulations that illustrate the problem of model fit, as well as demonstrate the theoretical and practical properties of the residual correlations that arise from having data from a finite number of individuals and a large number of SNPs. An overview of the simulations is given in Table \ref{tab:scenariosOverview}.
In the first two scenarios, the ancestral allele frequencies are simulated independently for each ancestral population from a uniform distribution, $F_{s i}\sim\text{Unif}(0,1)$ for each site $s=1,\ldots,m$ and each ancestral population $i=1,\ldots,k$. In scenario 1, we simulated unadmixed individuals from three populations with either an equal or an unequal number of sampled individuals from each population.
In scenario 2, we simulated two ancestral populations and a population that is admixed with half of its ancestry coming from each of the two ancestral populations.
In scenario 3, we set $F_{s i}\sim\text{Unif}(0.01,0.99)$ and simulated spatial admixture in a way that resembles a spatial decline of continuous gene flow between populations living on a long narrow island. We first simulated a single population in the middle of the long island. From both sides of the island, we then recursively simulated new populations from a Balding-Nichols distribution with parameter $F_{st}=0.001$ using the R package ‘bnpsd’ \citep{ochoaStorey2019a}. In this way, each pair of adjacent populations along the island has an $F_{st}$ of 0.001. Additional details on the simulation and a schematic visualization can be found in Figure 2 of \cite{genisanders2020}.
In scenario 4, we first simulated allele frequencies for an ancestral population from a symmetric beta distribution with shape parameter 0.3, $F_{s i}\sim\text{Beta}(0.3,0.3)$, which results in an allele frequency spectrum enriched for rare variants, mimicking the human allele frequency spectrum. We then sampled allele frequencies from a bifurcating tree (((pop1:0.1,popGhost:0.2):0.05,pop2:0.3):0.1,pop3:0.5), where pop1 and popGhost are sister populations and pop3 is an outgroup. Using the Balding-Nichols distribution and the $F_{st}$ branch lengths of the tree (see Figure \ref{Fig.5}), we sampled allele frequencies in the four leaf nodes. Then, we created an admixed population with 30\% ancestry from popGhost and 70\% from pop2. We sampled 10 million genotypes for 50 individuals from each population except for the ghost population, which was not included in the analysis, and subsequently removed sites with a sample minor allele frequency below 0.05, resulting in a total of 694,285 sites.
In scenario 5, we simulated an ancestral population with allele frequencies from a uniform distribution $F_{s i} \sim \text{Unif}(0.05, 0.95)$, from which we sampled allele frequencies for two daughter populations from a Balding-Nichols distribution with $F_{st} = 0.3$ from the ancestral population, using 'bnpsd'. We then created recent hybrids based on a pedigree where all but one founder has ancestry from the first population. The number of generations in the pedigree then determines the admixture proportions and the age of the admixture: F1 individuals have one unadmixed parent from each population, and backcross individuals have one unadmixed parent while the other parent is an F1. Double backcross individuals have one unadmixed parent while the other parent is a backcross, and we continue in this way up to quadruple backcross individuals, with one unadmixed parent and the other a triple backcross. Note that for the recent hybrids, the ancestry of the pair of alleles at each locus is no longer independent, which is a violation of the admixture model.
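The basic sampling step shared by these scenarios can be sketched as follows (a minimal illustration of the admixture model itself, with uniform ancestral frequencies as in scenarios 1 and 2; the matrix $Q$ below is a hypothetical scenario-2-like configuration):

```python
import numpy as np

def simulate_admixture(Q, m, rng):
    """Simulate genotypes under the admixture model: ancestral allele
    frequencies F ~ Unif(0,1), individual frequencies Pi = F Q, and
    genotypes G_si ~ Binomial(2, Pi_si)."""
    k, n = Q.shape
    F = rng.uniform(0.0, 1.0, size=(m, k))   # m x k ancestral frequencies
    Pi = F @ Q                               # m x n individual frequencies
    return rng.binomial(2, Pi)

# two ancestral populations and a 50/50 admixed group, 3 individuals each
Q = np.hstack([np.vstack([np.ones(3), np.zeros(3)]),
               np.full((2, 3), 0.5),
               np.vstack([np.zeros(3), np.ones(3)])])
rng = np.random.default_rng(4)
G = simulate_admixture(Q, m=500, rng=rng)
```

Since the columns of $Q$ sum to one, the individual allele frequencies are convex combinations of the ancestral ones and remain in $[0,1]$.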
\begin{table*}[htbp]
\centering
\caption{Overview of simulations.}
\begin{tableminipage}{\textwidth}
\begin{tabularx}{0.95\textwidth}{c|ccccc}
\hline
\bf Scenario & $\bm k$ & $\bm n$ & $\bm m$ &\bf Description & $\bm F_{is}$\footnote{Ancestral allele frequencies, $i=1,\ldots,k$} \\
\hline
1 & 3 & 20,20,20 & $500K$& Unadmixed & $\text{Unif}(0,1)$\\
1 & 3 & 10,20,30 & $500K$ & Unadmixed & $\text{Unif}(0,1)$\\
2 & 2 & 20,20,20 & $500K$ & Admixed & $\text{Unif}(0,1)$\\
2 & 2 & 10,20,30 & $500K$ & Admixed & $\text{Unif}(0,1)$\\
3 & & 500 & {\bf $100K$}\footnote{after applying MAF$>5$\% filtering, 88,082 remained.} & Spatial with $F_{st}=0.001$& $\text{Unif}(0.01,0.99)$\\ &&&& between adjacent populations \\
4 & 4 & 50,50,50,50,0\footnote{No reference samples are provided on the ghost population.} & {\bf$10M$}\footnote{after applying MAF$>5$\% filtering, 694,285 remained.} & Ghost admixture & $\text{Beta}(0.3, 0.3)$ \\
5 & 2 & 20,20,50 & $500K$ & Recent hybrids & $\text{Unif}(0.05,0.95)$\\
\hline
\end{tabularx}
\label{tab:scenariosOverview}
\end{tableminipage}
\end{table*}
\section{Results}
\subsection{Scenario 1}
In this first set-up, we demonstrate the method using PCA 1 only. We simulated unadmixed individuals from $k=3$ ancestral populations
$$Q=\begin{pmatrix}
{1}_{n_{1}} & 0 &0 \\
0& {1}_{n_{2}} & 0 \\
0& 0& {1}_{n_{3}}
\end{pmatrix},$$
where $ {1}_{n_i}$ is a row vector with all elements being one, and $n_1+n_2+n_3=n$. We simulated genotypes for $n=60$ individuals with sample sizes $n_1, n_2$ and $n_3$, respectively, as detailed in the previous section.
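For reference, the theoretical limits $-\tfrac{1}{n_i-1}$ from Theorem~\ref{thm:n-1} for the two sampling designs can be tabulated directly (a trivial sketch; the values match those reported in Table~\ref{tab:my_label1}):

```python
# expected within-population residual correlations, -1/(n_i - 1)
def theoretical_limits(sizes):
    return [round(-1.0 / (n_i - 1), 4) for n_i in sizes]

equal = theoretical_limits((20, 20, 20))      # [-0.0526, -0.0526, -0.0526]
unequal = theoretical_limits((10, 20, 30))    # [-0.1111, -0.0526, -0.0345]
```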
In Figure~\ref{Fig.1}(A), we show the residual correlation coefficients for $k'=2,3$ and plot the corresponding major PCs. For the PCA 1 approach, the first principal component does not relate to population structure as the data is not mean centred, and we use the subsequent $k'-1$ principal components.
When assuming that there are only two populations, $k'=2$, we note that the empirical correlation coefficients appear largely consistent within each population sample, but the corrected correlation coefficients are generally non-zero with different signs, which points to model misfit. In contrast, when assuming the correct number of populations is $k'=3$, the empirical correlation coefficients match nicely the theoretical values of $-\tfrac 1{n_i-1}$, which comply with Theorem~\ref{thm:n-1} (see Table~\ref{tab:my_label1}).
A fairly homogeneous pattern in the corrected correlation coefficients appears around zero across all samples.
This is a good indication that the model fits well and that the PCA plots using principal components 2 and 3 reflect the data well.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{figure1.pdf}
\caption{Results for simulated Scenario 1. (A) The upper triangle in the plots shows the empirical correlation coefficients $\hat{b}$ and the lower triangle shows the corrected correlation coefficients $\hat{b}-\hat{c}$. (B) The major principal components ($k'=3$) result in a clear separation of the three samples (all data points within each sample are almost identical). }
\label{Fig.1}
\end{figure}
\begin{table*}[!ht]
\caption{The mean (standard deviation) of $\hat{b}$ and $\hat{b}-\hat{c}$ within each population using PCA 1. }
\label{tab:my_label1}
\centering
\begin{tableminipage}{\textwidth}
\begin{tabularx}{0.95\textwidth}{c|cc|ccccc}
\hline
\bf Scenario 1 &$\bm k^\prime$ & $\bm n$ &&\bf pop1 &\bf pop2 &\bf pop3 \\
\hline
& $3$ & $(20,20,20)$ &$\hat{b}\ $\footnote{The second line of $\hat{b}$ in each case shows the theoretical value obtained from the limit in Theorem~\ref{thm:BC}.}& -0.0526 (0.0015) & -0.0526 (0.0016) & -0.0526 (0.0016) \\
& & && -0.0526 & -0.0526 & -0.0526 \\
& & &$\hat{b}-\hat{c}$ & 0e-04 (0.0015) & 0e-04 (0.0016) & 0e-04 (0.0016) \\
& & $(10,20,30)$ &$\hat{b}$& -0.1111 (0.0011) & -0.0526 (0.0016) & -0.0345 (0.0016)\\
& && & -0.1111 & -0.0526 & -0.0345 \\
&& &$\hat{b}-\hat{c}$& 0e-04 (0.0012) & 0e-04 (0.0016) & 0e-04 (0.0016)\\
\hline
\bf Scenario 2 &$\bm k^\prime$ & $\bm n$ &&\bf pop1 &\bf admixed &\bf pop3 \\
\hline
& $2$ & $(20,20,20)$&$\hat{b}$ & -0.0419 (0.0015) & -0.0192 (0.0015) & -0.0420 (0.0015) \\
& & & & -0.0420 & -0.0193 & -0.0420 \\
& & &$\hat{b}-\hat{c}$ & 0e-04 (0.0015) & 0e-04 (0.0015) & 0e-04 (0.0015) \\
& &$(10,20,30)$ &$\hat{b}$& -0.0701 (0.0018) & -0.0228 (0.0014) & -0.0304 (0.0016)\\
& & & & -0.0701 & -0.0229 & -0.0304 \\
&& &$\hat{b}-\hat{c}$ & 0e-04 (0.0017) & 0e-04 (0.0014) & 0e-04 (0.0016) \\
\hline
\bf Scenario 4 &$\bm {k^\prime}$ &$\bm n$&&\bf pop1 &\bf pop2 &\bf pop3 &\bf pop4\\
\hline
&$3$ & $(50,50,50,50)$ & $\hat{b}$& -0.0190 (0.0015) & 0.0027 (0.0015) & -0.0204 (0.0017) & 0.0122 (0.0013) \\
&&&$\hat{b}-\hat{c}$& 0.0009 (0.0015) & 0.0147 (0.0015) & 0e-04 (0.0017) & 0.0208 (0.0013)\\
&$4$ && $\hat{b}$& -0.0204 (0.0015) & -0.0204 (0.0015) & -0.0204 (0.0017)& -0.0204 (0.0014)\\
&&& $\hat{b}-\hat{c}$ & 0e-04 (0.0015) & 0e-04 (0.0015) & 0e-04 (0.0017)& 0e-04 (0.0013)\\
\hline
\end{tabularx}
\end{tableminipage}
\end{table*}
\subsection{Scenario 2}
In this set-up we also include admixed individuals. We simulated samples from two ancestral populations and individuals that are a mix of the two. We then applied all three PCA procedures and the software ADMIXTURE to the data. Specifically, we chose
$$Q=\begin{pmatrix}
{1}_{n_{1}} & \frac{1}{2} {1}_{n_{2}} &0 \\
0& \frac{1}{2} {1}_{n_{2}} & {1}_{n_{3}}
\end{pmatrix},$$
with $k=2$ true ancestral populations, and $(n_1, n_2, n_3)=(20,20,20)$ or $(n_1, n_2, n_3)=(10,20,30)$, see the previous section
for details. We analysed the data with $k^{\prime}=1,2,3$, and obtained the correlation structure shown in Figures~\ref{Fig.2} and \ref{Fig.3}, and Table~\ref{tab:my_label1}. The two standard approaches PCA 2 and PCA 3 show almost identical results, hence only PCA 2 is shown in the figures. Both PCA 2 and PCA 3 use the top principal components, while PCA 1 disregards the first, hence the discrepancy in the axis labelling in Figures~\ref{Fig.2}(B) and \ref{Fig.3}(B).
For $k'=1$, none of the principal components are used and the predicted normalised genotypes are simply 0.
All four methods show consistent results, in particular for the correct $k'$ ($=2$), while there are smaller discrepancies between the methods for the wrong values $k'=1,3$. This is most pronounced for PCA 1 and ADMIXTURE. We note that the average correlation coefficients of $\widehat b$ within each population sample comply with Theorem~\ref{thm:BC} (see Table~\ref{tab:my_label1}).
A fairly homogeneous pattern in the corrected correlation coefficients appears around zero across all samples for $k'=2$, as in scenario 1, which shows that the model fits well. However, unlike in scenario 1 the bias for the empirical correlation coefficient is not a simple function of the sample size (see Table~\ref{tab:my_label1}).
In this case, and similarly in all other investigated cases, we do not find any large discrepancies between the four methods. Therefore, we only show the results of PCA 1, for which we have theoretical justification.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{figure2.pdf}
\caption{Results for simulated Scenario 2 with equal sample sizes. (A) For each of PCA 1, PCA 2 and ADMIXTURE, the upper left triangle in the plots shows the empirical correlation $\hat{b}$ and the lower right triangle shows the difference $\hat{b}-\hat{c}$ with sample sizes $(n_1, n_2, n_3)=(20,20,20)$. (B) The major principal component for the PCA based methods for $k'=2$ (in which case there is only one principal component). Individuals within each sample have the same color. (C) The estimated admixture proportions in the case of ADMIXTURE. }
\label{Fig.2}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{figure3.pdf}
\caption{Results for simulated Scenario 2 with unequal sample sizes. (A) For each of PCA 1, PCA 2 and ADMIXTURE, the upper left triangle in the plots shows the empirical correlation $\hat{b}$ and the lower right triangle shows the difference $\hat{b}-\hat{c}$ with sample sizes $(n_1, n_2, n_3)=(10,20,30)$. (B) The major principal component for the PCA based methods for $k'=2$ (in which case there is only one principal component). Individuals within each sample have the same color. (C) The estimated admixture proportions in the case of ADMIXTURE. }
\label{Fig.3}
\end{figure*}
\subsection{Scenario 3}
We simulated genotypes for $n=500$ individuals at $m=88,082$ sites with continuous genetic flow between individuals, thus there is no true $k$. We analysed the data assuming $k'=2,3$, see Figure~\ref{Fig.4}. In the figure, the individuals are ordered according to the estimated proportions of the ancestral populations, hence a color wave pattern appears in the empirical and the corrected correlation coefficients, see Figure~\ref{Fig.4}(A). As expected, the corrected correlation coefficients are closer to zero for $k'=3$ than for $k'=2$, though the deviations from zero are still large. We thus find no support for the model for either value of $k'$. This is consistent with the plots of the major PCs, which show continuous change without grouping the data into two or three clusters, see Figure~\ref{Fig.4}(B).
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{figure4.pdf}
\caption{Results for simulated scenario 3. (A) The upper triangle in the plots shows the empirical correlation $\hat{b}$ and the lower triangle shows the difference $\hat{b}-\hat{c}$. (B) The major principal components (only one in the case of $k'=2$). }
\label{Fig.4}
\end{figure}
\subsection{Scenario 4}
This case is based on the tree in Figure~\ref{Fig.5}, which includes an unsampled, so-called ghost population, popGhost. The ghost population popGhost is a sister population to pop1.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\linewidth]{figure5.pdf}
\caption{Schematic of the tree used to simulate population allele frequencies for Scenario 4, including 5 populations: pop1, pop2, pop3, pop4 and popGhost. The pop4 population is the result of admixture between pop2 and popGhost, for which there are no individuals sampled and is therefore a ghost population. The values in the branches indicate the drift in units of $F_{ST}$. The values along the two admixture edges are the admixture proportions coming from each population. }
\label{Fig.5}
\end{figure}
We simulated genotypes for $n=200$ individuals: 150 unadmixed samples from pop1, pop2, and pop3; and 50 samples admixed with 0.3 ancestry from popGhost and 0.7 ancestry from pop2 (as pop4), as detailed in the previous section.
As there is drift between the populations and hence genetic differences, the correct number of ancestral populations is $k=4$ (pop1, pop2, pop3, popGhost). This is picked up by our method, which clearly shows that $k'=3$ is wrong, with large deviations from zero in the corrected correlation coefficients. In contrast, for $k'=4$, the corrected correlation coefficients are almost zero (Figure~\ref{Fig.6}).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{figure6.pdf}
\caption{Results for simulated scenario 4. (A) The upper triangle in the plots shows the empirical correlation $\hat{b}$ and the lower triangle shows the difference $\hat{b}-\hat{c}$. (B) The major principal components for $k'=4$, that result in a clear separation of the four samples (all data points within each sample are almost identical). }
\label{Fig.6}
\end{figure}
\subsection{Scenario 5}
In the last example, we simulated two populations (originating from a common ancestral population) and created admixed populations by backcrossing, as detailed in the previous section.
Thus, the model does not fulfil the assumptions of the admixture model in that the number of reference alleles are not binomially distributed, but depends on the particular backcross and the frequencies of the parental populations.
We simulated genotypes for $n=90$ individuals at $m=500,000$ sites. There are 20 homogeneous individuals from each parental population, and 10 individuals from each of the different recent admixture classes. We then analysed the data with $k'=2$ and found that the corrected correlation coefficients deviated consistently from zero, in particular for one of the parental populations (Figure~\ref{Fig.7}). We are thus able to say that the admixture model does not provide a reasonable fit.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{figure7.pdf}
\caption{Results for simulated scenario 5 (recent admixture). (A) The upper triangle in the plots shows the empirical correlation $\hat{b}$ and the lower triangle shows the difference $\hat{b}-\hat{c}$. (B) The major principal component for $k'=2$. }
\label{Fig.7}
\end{figure}
\subsection{Real data}
We analysed a whole genome sequencing data set from the 1000 Genomes Project \citep{Auton}, see also \citet{genisanders2020} where the same data is used. It consists of data from five groups of different descent: a Yoruba group from Ibadan,
Nigeria (YRI), residents from Southwest US with African ancestry (ASW), Utah residents with Northern and Western European ancestry (CEU), a group with Mexican ancestry from Los Angeles, California (MXL), and a group of Han Chinese from Beijing, China (CHB), with sample sizes $108, 61, 99, 63$ and $103$, respectively, in total $n=434$. We kept only sites present in the Human Origins SNP panel \citep{lazaridis2014}; a total of $m=406,279$ SNPs were left after a MAF filter of 0.05.
We analysed the data with $k'=3,4$. For $k'=3$, Figure~\ref{Fig.8} shows that it is not possible to explain the relationship between MXL, CEU and CHB, indicating that MXL is not well explained as a mixture of the two. For $k'=4$, the color shades of the corrected correlation coefficients are almost negligible within each population, pointing at a contribution from a Native American population. This is further corroborated in Figure~\ref{Fig.8}(D), which shows estimated proportions from the four ancestral populations using the software ADMIXTURE.
\begin{figure}[!ht]
\centering
\captionsetup{font={small},{labelfont=bf}}
\includegraphics[width=0.8\linewidth]{figure8.pdf}
\caption{The residual correlation coefficients, the inferred population structure and the admixture proportions for real human data from the 1000 Genomes Project. (A) The upper triangle in the plots shows the empirical correlation coefficient $\hat{b}$ and the lower triangle shows the difference $\hat{b}-\hat{c}$. (B) The three major principal components for PCA 1 for $k'=4$. (C) The eigenvalue for the first PC is removed, and the eigenvalues corresponding to the remaining PCs are close to 0 after the fourth PC. (D) The admixture proportions as estimated with ADMIXTURE. }
\label{Fig.8}
\end{figure}
\section{Discussion}
We have developed a novel approach to assess the model fit of PCA and the admixture model, based on the structure of the residual correlation matrix. We have shown that it performs well for simulated and real data, using a suite of different PCA methods commonly used in the literature, and the ADMIXTURE software to estimate model parameters. By assessing the residual correlation structure visually, one is able to detect model misfit and violation of modelling assumptions.
The model fit is assessed by comparing visually two matrices of residual correlation coefficients. The theoretical and practical advantages of our approach lie in three aspects. First, our approach is computationally simple and fast. Calculation of the two residual correlation matrices and their difference is computationally inexpensive. Secondly, our approach provides a unified approach to model fitting based on PCA and clustering methods (like ADMIXTURE). In particular, it provides simple means to assess the adequacy of the chosen number of top principal components to describe the structure of the data. Assessing the adequacy by plotting the principal components against each other might lead to false confidence. In contrast, our approach exposes model misfit by plotting the difference between two matrices of the residual correlation coefficients.
Thirdly, it comes with theoretical guarantees in some cases. These guarantees are further backed up by simulations in cases where we cannot provide theoretical validity.
Finally, our approach might be adapted to work on NGS data without estimating genotypes first, but working directly on genotype likelihoods.
\section*{Data availability}
The data sets used in this study are all publicly available, including simulated and real data. Information about the R code used to analyze and simulate data is available at \url{https://github.com/Ginwaitthreebody/evalPCA}. The variant calls for the 1000 Genomes Project data used are publicly available at \url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/}.
\section*{Acknowledgements}
The authors are supported by the Independent Research Fund Denmark (grant number: 8021-00360B) and the University of Copenhagen through the Data+ initiative. SL acknowledges the financial support from the funding agency of China Scholarship Council. GGE and AA are supported by the Independent Research Fund Denmark (grant numbers: 8049-00098B and DFF-0135-00211B respectively).
The statistical error in lattice QCD for certain quantities, such as low-lying baryon masses, is currently reduced to a level comparable in magnitude to the systematic error due to neglecting QED effects.
Pushing forward for increased precision therefore requires the inclusion of QED if accuracy is to be maintained.
One complication in the formulation of QED on the lattice is that Gauss' Law dictates that, on a torus with periodic boundary conditions, only states with net-zero electric charge belong to the physical Hilbert space.
A naive local gauge fixing results in unconstrained global zero-modes. The principle behind QED$_L$, the more common formulation of lattice QED,
is to decouple zero-modes from gauge field dynamics through enforcing the constraint $\int_{L^3} d^3x A_\mu(t,\mathbf{x}) = 0$. A disadvantage of this approach is that this constraint is non-local.
Therefore, it is not guaranteed that properties such as renormalisability will hold; they must be proven for individual observables, although it is relatively simple to address these issues at $\mathcal{O}(\alpha)$, and in fact it has been proven that at this order these properties hold for the spectrum.
The QED$_L$ formulation has been used in the reference work on the baryon spectrum by the BMW collaboration~\cite{bmw_2015}.
In this work, we will use the C${}^*$~ formulation~\cite{kronfeld1991,tantalo_2016} of QED on the lattice, which is based upon enforcing $C$-parity boundary conditions along the spatial directions for all the fields.
This results in a spatially anti-periodic $U(1)$ gauge field, meaning that spatial zero-modes sum to zero.
As this approach is completely local, renormalisability is guaranteed. Thus the spectra of electrically charged states may be calculated without perturbation theory or gauge fixing.
A caveat must be made on the ability to determine the spectra of flavoured particles. C${}^*$~ boundary conditions allow some flavour violation when the particles travel around the torus.
Only colourless particles (given a large enough box) may violate flavour, under the conditions: $\Delta Q = 0 \text{ mod } 2$; $\Delta B = 0 \text{ mod } 2$; $\Delta F = 0 \text{ mod } 6$, where $Q$ is the electric charge in units of $e$, $B$ is the baryon number and $F = \sum_f F_f$ is the total sum of flavour numbers.
While the flavour mixing of pseudoscalar mesons is harmless, as they will not mix with lighter states, and nucleons cannot mix with $B=0$ states and are the lightest $B=1$ states, the $\Omega^{-}$ baryons can mix with lighter states. A possible example of this mixing is given in Fig.~\ref{fig:flavour_mixing}. However, it is expected on the basis of a detailed theoretical analysis that any flavour violation is strongly exponentially suppressed with volume~\cite{tantalo_2016}, and here we are working under this assumption, postponing a detailed numerical investigation of this issue as future work.
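The selection rules above are simple modular-arithmetic conditions; as a hypothetical illustration (not code from the actual simulation):

```python
def flavour_violation_allowed(dQ, dB, dF):
    """C* selection rules for flavour violation of colourless states:
    Delta Q = 0 mod 2, Delta B = 0 mod 2, Delta F = 0 mod 6."""
    return dQ % 2 == 0 and dB % 2 == 0 and dF % 6 == 0

# a process changing total flavour by 6 units but preserving Q and B
allowed = flavour_violation_allowed(0, 0, 6)      # True
forbidden = flavour_violation_allowed(1, 0, 0)    # False
```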
This baryon mass calculation forms part of a larger effort by the RC${}^{*}$ collaboration. These are $1+2+1$ simulations of $O(a)$-improved Wilson fermions with three C${}^*$~ dimensions and periodic boundary conditions in time. Details of, and justification for, the renormalisation trajectories along which the physical point will be reached are presented in the companion proceedings~\cite{bushnaq2021update} and will not be discussed here. A Lüscher-Weisz $SU(3)$ gauge action with
$\beta = 3.24$ is used, along with the SW improvement coefficients depending on the fermion electric charge $c^{q=2/3}_{SW, SU(3)} = c^{q=-1/3}_{SW, SU(3)} = 2.18859$ and $c^{q=2/3}_{SW, U(1)} = c^{q=-1/3}_{SW, U(1)} = 1.0$.
The ensemble analysed in these proceedings is labelled Q*D-$32$-$1$ in the companion proceedings, in which full details of the ensemble may be found. This ensemble, of $64$ time-points and spatial dimensions $L =1.682(5)\text{fm} = 32a$, is relatively far from the physical point, with $\alpha_{\text{em}} = 0.04077(6) \sim 6\alpha_{\text{phys}}$.
\begin{figure}[th!]
\centering
\begin{tikzpicture}[scale=0.5]
\draw[red,thick] (22,1) coordinate (H) -- ++(-4.5,0) coordinate (I) -- ++(-3,3) coordinate (L) -- ++(-7,0) coordinate (M) -- ++(-4,-4) coordinate (N) -- (22,0) coordinate (O);
\draw[preaction={draw,line width=4,white},blue,thick] (0,0) coordinate (A) -- ++(2,0) coordinate (B) -- ++(5,5) coordinate (C) -- ++(8,0) coordinate (D) -- ++(4,-4) coordinate (E) -- ++(0,-2) coordinate (F) -- (0,-1) coordinate (G);
\draw[blue,thick] (0,-2) coordinate (P) -- ++(20,0) coordinate (Q) -- ++(0,1) coordinate (R) -- (22,-1) coordinate (S);
\draw[decorate,decoration={markings,mark=at position .57 with {\arrow[blue,scale=3]{>}}}] (B) -- (C);
\draw[decorate,decoration={markings,mark=at position .47 with {\arrow[blue,scale=3]{>}},mark=at position .58 with {\arrow[blue,scale=3]{<}}}] (C) -- (D);
\draw[decorate,decoration={markings,mark=at position .5 with {\arrow[blue,scale=3]{<}}}] (D) -- (E);
\draw[decorate,decoration={markings,mark=at position .5 with {\arrow[red,scale=3]{<}}}] (I) -- (L);
\draw[decorate,decoration={markings,mark=at position .45 with {\arrow[red,scale=3]{<}},mark=at position .6 with {\arrow[red,scale=3]{>}}}] (L) -- (M);
\draw[decorate,decoration={markings,mark=at position .55 with {\arrow[red,scale=3]{>}}}] (M) -- (N);
\draw[decorate,decoration={markings,mark=at position .65 with {\arrow[red,scale=3]{>}}}] (10,0) -- ++(2,0);
\draw[decorate,decoration={markings,mark=at position .65 with {\arrow[blue,scale=3]{>}}}] (10,-1) -- ++(2,0);
\draw[decorate,decoration={markings,mark=at position .65 with {\arrow[blue,scale=3]{>}}}] (10,-2) -- ++(2,0);
\draw[line width=3] (1,-2.5) -- (1,.5) node[above] {$\Omega^-$};
\draw[line width=3] (9,-2.5) -- (9,.5) node[above] {$\Xi^0$};
\draw[line width=3] (8,3.5) -- (8,5.5) node[above] {$K^-$};
\draw[line width=3] (14,3.5) -- (14,5.5) node[above] {$K^+$};
\draw[line width=3] (21,-1.5) -- (21,1.5) node[above] {$\Sigma^{*+}$};
\path (A) node[left] {$s$};
\path (G) node[left] {$s$};
\path (P) node[left] {$s$};
\path (H) node[right] {$u$};
\path (O) node[right] {$u$};
\path (S) node[right] {$s$};
\end{tikzpicture}
\caption{Example of flavour mixing for $\Omega^{-}$ baryon}
\label{fig:flavour_mixing}
\end{figure}
\section{Method}
\subsection{$U(1)$--gauge--invariant charged correlators}
Following the treatment prescribed in Ref.~\cite{tantalo_2016}, we can create through the use of a dressing factor an electrically--charged fermion operator $\Psi$ that is invariant under $U(1)$ local-gauge transformations. We use the `string' dressing factor given in Equation~(3.9) of Ref.~\cite{tantalo_2016}. In the rest of these proceedings, we use capital letters to denote $U(1)$--gauge--invariant quark operators.
\subsection{Baryon interpolating operators} \label{section: bar_interp}
For the proton, which is a spin-$\frac{1}{2}$ baryon, we use the interpolating operator
\begin{equation}
\mathcal{O}^{\pm}_{ a}(x) = P^{\pm}_{ab} \epsilon_{ABC} \left[U^{A}_c(x) \left(\mathcal{C} \gamma_5\right)_{cd} D^B_d (x)\right] U^C_{b} (x),
\end{equation}
where $A,B,C$ are colour and $a,b,c,d$ are Dirac indices, $\epsilon$ is the anti-symmetric tensor, $U$ and $D$ are the $U(1)$--gauge--invariant up and down quark operators and $\mathcal{C}$ is the charge--conjugation matrix. The neutron operator is simply obtained by $U\leftrightarrow D$. These interpolating operators are known to have a good projection on the ground state in QCD~\cite{zanotti2003}. The state is projected to a positive or negative parity state using the projector $P^{\pm} = \frac{1}{2} (1 \pm \gamma_0)$.
The $\Omega^-$ baryon belongs to a vertex of the spin-$\frac{3}{2}$ decuplet and is calculated here using the interpolating operators
\begin{equation}
\mathcal{O}^{i}_{a}(x) = \epsilon_{ABC} \left[S^{A}_b (x) \left(\mathcal{C} \gamma^i\right)_{bc} S^B_c (x)\right] S^C_{a} (x),
\end{equation}
where $i$ is a spatial Lorentz index and $S$ is the $U(1)$--gauge--invariant strange quark operator.
The correlator $C^{ij}_{ab}(t) = T\langle 0| \mathcal{O}^i_a(t)\bar{\mathcal{O}}^{j}_b(0)|0\rangle $ contains contributions from spin-$\frac{1}{2}$ and spin-$\frac{3}{2}$ states. Following the treatment in Ref.~\cite{zanotti2003} we use the projection
\begin{equation}
C^{\frac{3}{2}}_{ab} = \sum^3_{i,j=1} \left(C^{ij} \mathcal{P}^{ji}\right)_{ab}; \qquad
\mathcal{P}^{ij} = \delta^{ij} - \frac{1}{3}\gamma^i \gamma^j,
\end{equation}
where $i,j$ are spatial Lorentz indices.
This correlator is then projected to a definite parity state by taking the trace over Dirac indices with the same parity projector $P^{\pm}$ as above:
\begin{equation}
C^{\frac{3}{2},\pm} = \text{Tr}[C^{\frac{3}{2}} P^{\pm}].
\end{equation}
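As a cross-check, note that with the conventional normalisation $\mathcal{P}^{ij} = \delta^{ij} - \frac{1}{3}\gamma^i\gamma^j$ the spin-$\frac{3}{2}$ projector is idempotent, $\sum_k \mathcal{P}^{ik}\mathcal{P}^{kj} = \mathcal{P}^{ij}$. A small numerical verification (our own construction, with one particular choice of Euclidean gamma matrices):

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
zero = np.zeros((2, 2), dtype=complex)

# one choice of Euclidean spatial gamma matrices, satisfying {g_i, g_j} = 2 delta_ij
g = [np.block([[zero, -1j * sig[k]], [1j * sig[k], zero]]) for k in range(3)]
I4 = np.eye(4, dtype=complex)

# spin-3/2 projector P^{ij} = delta^{ij} - (1/3) g^i g^j
P = [[(I4 if i == j else np.zeros((4, 4), dtype=complex)) - g[i] @ g[j] / 3
      for j in range(3)] for i in range(3)]

# idempotency: sum_k P^{ik} P^{kj} = P^{ij}
for i in range(3):
    for j in range(3):
        assert np.allclose(sum(P[i][k] @ P[k][j] for k in range(3)), P[i][j])
```

Without the factor $\frac{1}{3}$ the idempotency check fails, since $\sum_k \gamma^k\gamma^k = 3$ in three spatial dimensions.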
Both the octet and decuplet correlators are finally folded according to $C(t) = C^+(t)-C^-(T-t)$ to reduce the statistical fluctuations on a given time-slice.
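On mock data, the folding prescription can be sketched as follows (illustrative only; the lattice extent, the decay rates and the periodic wrap-around at $t=0$ are our own assumptions):

```python
import numpy as np

T = 64
t = np.arange(T)
C_plus = np.exp(-0.10 * t)        # mock positive-parity correlator C^+(t)
C_minus = -np.exp(-0.20 * t)      # mock negative-parity correlator C^-(t)

# folded correlator C(t) = C^+(t) - C^-(T - t); the index T - t is taken mod T
C_fold = C_plus - C_minus[(T - t) % T]
```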
As anticipated in the Introduction, the results presented below correspond to the fermion--connected part of the baryon correlators. Indeed, relying on the theoretical analysis of Ref.~\cite{tantalo_2016}, we postpone to future work a detailed investigation of the contributions corresponding to the contractions $\langle\Psi(x) \Psi^T(0)\rangle$, which are peculiar to $C^*$--boundary conditions and which induce the spurious flavour mixings discussed above.
\subsection{Smearing}
In order to optimise the isolation of the ground state, we use a combination of gradient--flow gauge smearing and Gaussian fermion smearing. We smear the $SU(3)$ gauge fields $V(x,\mu)$ using the gradient--flow specified for periodic spatial boundary conditions~\cite{luscher_2010} by
\begin{align}
&\dot{V}_t(x, k) = -g_0^2 \{\delta_{x,k}S_w^{\mathrm{spatial}}(V)\}V_t(x,k)\;,
\qquad V_0(x,k)=V(x,k)\;,
\qquad k=1,2,3\;,
\nonumber \\
&\delta_{x,k}f(V) = T^a \delta^a_{x,k}f(V)\;,
\qquad\qquad\ \qquad \delta^a_{x,k}f(V) =
\frac{d}{ds}f(e^{sX}V)\Big|_{s=0}\;,
\nonumber \\
&X(y,i) = \begin{dcases} T^a &\mbox{if } (y, i) = (x, k)
\\
0 & \mbox{otherwise}
\end{dcases} \;,
\end{align}
to produce smeared gauge links $V_t(x,k)$. Here $T^a$ are the generators of the $SU(3)$ Lie algebra and $S_w^{\mathrm{spatial}}(V)$ is the spatial part of the $SU(3)$--Wilson action (the sum over spatial plaquettes without any prefactor).
It is important to note here that the smearing is applied on the spatial dimensions only, i.e.~$\dot{V}_t(x,0) = 0$,
and only on the gauge links that enter into the fermion smearing operators. We have applied to our correlators one level of gauge smearing with evolution time $t = 180 \varepsilon$ with a resolution of $\varepsilon=0.02$.
We have also checked that this procedure is roughly equivalent to using the more conventional APE smearing when the plaquette, computed in terms of smeared links, is matched. A technical advantage of using the gradient--flow is that unitarity of the links is exactly preserved at any stage.
The smeared gauge links are used in the Gaussian smearing of the gauge--invariant fermion operator $\Psi$ to give the smeared operator $\Psi_{\text{smeared}}$:
\begin{align}
&\Psi_{\text{smeared}} = (1+\kappa_{g}H)^{N} \Psi; \\
&H_t(x, y) = \sum^{3}_{j=1}\left\{ V_t(x,j)\delta (x+\hat{j},y)+ V_t(x-\hat{j},j)^{\dagger} \delta (x-\hat{j},y) \right\}\;.
\end{align}
We have applied Gaussian smearing on both the source and the sink, with three levels of smearing each $N$ = ($0$, $200$, $400$), whilst keeping $\kappa_{g}$ = $0.5$ fixed.
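The effect of the operator $(1+\kappa_{g}H)^{N}$ is easiest to visualise in a free one-dimensional toy model with unit gauge links (everything below is illustrative; the lattice size, the reduced iteration count and the periodic links are our own choices):

```python
import numpy as np

L, kappa, N = 32, 0.5, 20
psi = np.zeros(L)
psi[L // 2] = 1.0                              # point source

for _ in range(N):
    hop = np.roll(psi, 1) + np.roll(psi, -1)   # H psi with unit links in 1-D
    psi = psi + kappa * hop                    # one application of (1 + kappa H)

psi /= psi.sum()                               # normalise to compare shapes
# the point source spreads into an approximately Gaussian profile of width ~ sqrt(N)
```

Repeated application of the hopping operator thus turns a point source into an extended, approximately Gaussian one, which is the reason for the name of the smearing.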
\subsection{Generalised Eigenvalue Problem}
The Generalised Eigenvalue Problem (GEVP)~\cite{gevp} is a standard method of spectral decomposition used to optimise the ground state overlap and to explore excited states. We build a basis of interpolating operators using all possible combinations of three levels of Gaussian smearing on both the source and the sink. All chosen interpolating operators have the same amount of gradient--flow gauge smearing. The correlators with different levels of fermion smearing can then be expressed as a $3$-by-$3$ correlator matrix $C_{nm}$ with $n$ and $m$ indexing the smearing levels on the source and sink respectively. This correlator matrix is then fed into the GEVP. The normalisation time-point for the GEVP was chosen to be $x_0 = 1$.
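The mechanics of the GEVP can be illustrated on synthetic data, where the correlator matrix is built from known energies and overlaps; solving $C(t)v = \lambda\, C(x_0)v$ then returns $\lambda_a = e^{-E_a(t-x_0)}$ exactly (all numbers below are invented for illustration only):

```python
import numpy as np

E = np.array([0.5, 0.9, 1.4])              # synthetic energies (lattice units)
Z = np.array([[1.0, 0.6, 0.2],
              [0.8, 1.0, 0.5],
              [0.3, 0.7, 1.0]])            # overlaps Z[n, a] of operator n on state a

def corr(t):
    # C_nm(t) = sum_a Z[n, a] Z[m, a] exp(-E_a t)
    return (Z * np.exp(-E * t)) @ Z.T

t0, t = 1, 5
# generalised eigenvalues of C(t) v = lambda C(t0) v
lam = np.linalg.eigvals(np.linalg.solve(corr(t0), corr(t)))
E_eff = np.sort(-np.log(np.sort(lam.real)) / (t - t0))   # recovered energies
```

With as many states as operators and a full-rank overlap matrix, the recovered energies are exact; in a real analysis the higher states are only approximate and plateaux must be identified as in Fig.~\ref{fig: n_gevp}.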
Fig.~\ref{fig: n_corr} shows the neutron correlator for different levels of Gaussian smearing, folded as described in Section~\ref{section: bar_interp}, and then solved with the GEVP to obtain the spectrum, see Fig.~\ref{fig: n_gevp}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{corr_allsmears_C5C5_udd_oct.pdf}
\caption{Correlator for different levels of Gaussian smearing} \label{fig: n_corr}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{gevp_energies_C5C5_udd_oct.pdf}
\caption{Spectrum from GEVP} \label{fig: n_gevp}
\end{subfigure}
\caption{\textbf{Neutron analysis.} Fig.~\ref{fig: n_corr} shows the neutron correlator at three different levels of fermion smearing at both the source and the sink; the correlators are labelled in the legend as $(n_{\text{source}},n_{\text{sink}})$, indicating $n_{\text{source}}$ smearing levels on the source and $n_{\text{sink}}$ levels on the sink. We see that increased smearing reduces the curvature of the correlator at small times. These correlators with different smearing levels when analysed using the GEVP give three distinct energy levels, as shown in Fig.~\ref{fig: n_gevp}, with the ground state in blue, for which we see a long plateau, and excited states in red and green. The plateau without error is shown in lighter blue, with the range of the line showing the points used in the fit.}
\label{fig:neutron}
\end{figure}
\section{Results}
\begin{figure}[ht!]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{gevp_energies_C5C5_uud_oct_fv.pdf}
\caption{Proton spectrum}\label{fig:proton_spectrum}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{gevp_ground_state_C5C5_uud_oct_fv.pdf}
\caption{Proton ground state plateau}\label{fig:proton_plateau}
\end{subfigure}
\caption{\textbf{Proton analysis.} Fig.~\ref{fig:proton_spectrum} shows the proton spectrum from the GEVP. The ground state effective mass is given in blue, while excited states are shown in red and green. The ground state is shown in more detail in Fig.~\ref{fig:proton_plateau}, with the plateau in red and its error in lighter blue. We obtain a mass of $m_p = 1282(8)$~MeV.}
\label{fig:proton}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{gevp_energies_sp32.CGmu_ddd_dec_fv.pdf}
\caption{$\Omega^{-}$ spectrum}\label{fig:omega_spectrum}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{gevp_ground_state_sp32.CGmu_ddd_dec_fv.pdf}
\caption{$\Omega^{-}$ ground state plateau}\label{fig:omega_plateau}
\end{subfigure}
\caption{\textbf{$\Omega^{-}$ baryon analysis.} Fig.~\ref{fig:omega_spectrum} shows the GEVP output for the $\Omega^{-}$ baryon. The blue points show the ground state effective mass, whereas the red and green points show the excited states. Fig.~\ref{fig:omega_plateau} shows the ground state in more detail. The plateau is shown in red and its error in lighter blue, and we see that the plateau is rather long for these statistics. We find a mass of $m_{\Omega^{-}} = 1633(8)$~MeV.}
\label{fig:omega}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.5\textwidth]{gevp_ground_state_C5C5_mass_diff_uud_oct_udd_oct_fv.pdf}
\caption{\textbf{Proton-neutron mass difference analysis.} The difference between the proton and neutron effective mass curves is shown here. The plateau is shown in red and its error is shown in lighter blue, with the zero-intercept also shown in blue. We see a plateau that starts at a small $x_0$-value and that the signal lasts for a reasonably long $x_0$-length, giving the mass difference $m_n - m_p = 9(1)$~MeV.}
\label{fig: pn_md}
\end{figure}
The results presented here were obtained with $1993$ gauge configurations by performing four point--like propagator inversions starting from random points. We stress once again that this ensemble is rather far from the physical point. In order to help the interpretation of the baryon mass results given below we quote for reference the value of the charged pion (and kaon) mass calculated for this ensemble, $m_{\pi^+}=m_{K^+}= 496(2)$~MeV (see Ref.~\cite{bushnaq2021update} for more details).
This result, as well as all the following ones, takes into account the \textit{universal} finite--volume corrections on charged hadron masses given by the first two terms of Eq.~($5.1$) in Ref.~\cite{tantalo_2016}.
The proton spectrum is shown in Fig.~\ref{fig:proton_spectrum}. We see a reasonably long ground state plateau for the statistics used, with a clear first excited state. The ground state plateau and its error are shown more clearly in Fig.~\ref{fig:proton_plateau}. The mass of the ground state is found to be $1282(8)$~MeV. An estimate of the lowest energy gap was found to be roughly consistent with a proton + photon state.
Next, we present the $\Omega^-$ baryon in Fig.~\ref{fig:omega}. We find a plateau that starts early and persists until around $x_0=24$. The mass result we obtain is $1633(8)$~MeV. The energy gap was found to be roughly equivalent to an $\Omega^-$ + photon state.
Fig.~\ref{fig: pn_md} shows the mass difference between the proton ground state and the neutron ground state, i.e.~the proton-neutron mass difference. We see here a plateau that starts early, but the signal-to-noise ratio becomes poor around $x_0=20$. We find the mass difference to be $m_n - m_p = 9(1)$~MeV. It is worth stressing once again that the ensemble is unphysical, should one be tempted to compare this number to the physical value of roughly $1$~MeV.
\section{Conclusion}
The results on the baryon mass spectrum presented in these proceedings form part of a larger effort by the RC$^{*}$ collaboration. The ultimate goal of this effort is to obtain physical results for the hadron spectrum by performing first--principles lattice simulations of QCD$+$QED without relying at any stage on gauge--fixing or perturbation theory. Alongside the companion proceedings~\cite{bushnaq2021update}, a first step towards this goal has been made here with results obtained on a single ensemble of gauge configurations corresponding to an unphysical setup in which $\alpha_{\text{em}}\simeq 6 \alpha_{\text{phys}}$, the lattice volume is $L\simeq 1.7$~fm with a lattice spacing $a\simeq 0.05$~fm and the four dynamical quark masses have been tuned at the $U$--spin symmetric point $m_d=m_s$.
Our results demonstrate that, in addition to charged meson masses, baryon masses can be calculated with satisfactory precision in QCD$+$QED$_C$ in a fully local and gauge--invariant setup. This makes us reasonably confident in the possibility of reaching our goal and providing phenomenologically relevant results on the full hadron spectrum in the near future.
\acknowledgments
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 765048.
The research of AC, JL and AP is funded
by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer
417533893/GRK2575 “Rethinking Quantum Field Theory”.
The authors acknowledge access to the Eagle HPC cluster at PSNC (Poland).
The work was supported by the Poznan Supercomputing and Networking Center (PSNC) through grants 450 and 466.
The work was supported by CINECA that granted computing resources on the Marconi supercomputer to the LQCD123 INFN theoretical initiative under the CINECA-INFN agreement.
We acknowledge access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland under the ETHZ's share with the project IDs go22 and go24.
The work was supported by the North-German Supercomputing Alliance (HLRN) with the project bep00085.
\bibliographystyle{JHEP}
When we teach indefinite integrals, we use the notation $\int$. But it is not clear to the students what $\int$ means. What does $\int f(x) \mathrm{d}x$ mean? Can one write $F(x) = \int f(x) \mathrm{d}x$? Is $\int f(x) \mathrm{d}x$ a new function, or is it a family of functions? Or does it depend on the context?
We are used to writing $\int x \mathrm{d}x = \frac{x^2}{2}$, but also $\int x \mathrm{d}x = \frac{x^2}{2} + c$, where $c \in \mathbb{R}$ is any constant. But our students do not know which one to use, or why. So our notation is confusing.
Worse than that, we use this notation to teach our students some ``mathemagical'' manipulations of $\mathrm{d}x$, $\mathrm{d}u$ and $\mathrm{d}y$. Those manipulations are useful, but we do not prove them, and they can lead us (or at least our students) to many mistakes. In the following lines, we will discuss some mistakes made by our students and some mistakes we teach them how to make. We will also propose a (not so) new way of teaching Calculus using another notation.
This paper has two sections. In the first one, we show examples of how we teach and discuss what is wrong with each example. In the second section, we propose a new notation for the indefinite integral and solve each of the examples already discussed using the new notation.
After I had prepared this material, I learned from one of my Calculus students that a similar notation had already appeared in a series of MIT video lectures available on the Internet: \textit{Calculus Revisited: Single Variable Calculus}, whose instructor was Prof.\ Herbert Gross. For more information about these lectures, see \cite{CalcRev}.
\tableofcontents
\section{How we use to teach}
In this section we show some examples of how we teach integrals to our students, and we discuss what is wrong with each of the examples.
\subsection{The integral $\int\frac{1}{x}\mathrm{d}x$}
We teach our students that $\int \frac{1}{x} \mathrm{d}x = \ln|x| + c$, where $c \in \mathbb{R}$ is any constant. So let us consider the function $f \colon \mathbb{R}^* \to \mathbb{R}$ given by
\[f(x) =
\begin{cases}
\ln(x) + 2, & \text{if} \ x > 0; \\
\ln(-x) - \pi, & \text{if} \ x < 0.
\end{cases}\]
Clearly, $f'(x) = \frac{1}{x}$ and $f(x) \ne \ln|x| + c$.
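That $f'(x) = \frac{1}{x}$ can even be checked numerically; the sketch below uses a central finite difference (the helper \texttt{num\_deriv} and the sample points are our own, for illustration):

```python
import math

def f(x):
    # primitive of 1/x with a different constant on each branch of R*
    return math.log(x) + 2 if x > 0 else math.log(-x) - math.pi

def num_deriv(g, x, h=1e-6):
    # symmetric finite-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (-3.0, -0.5, 0.7, 4.0):
    assert abs(num_deriv(f, x) - 1 / x) < 1e-5
```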
This example shows that the notation $\int$ does not take the function's domain into consideration, just its rule. When we take the function $x \mapsto \frac{1}{x}$ and say nothing about its domain, we are considering that the function's domain is the biggest set where the rule makes sense. In the Calculus context, it is the set $\mathbb{R}^* = \mathbb{R}\setminus \{0\}$.
The notation $\int$ is misleading because it does not take into consideration the function's domain. But, in this particular case, the biggest problem is not the notation: we should take the function's domain into account and teach our students to do the same.
\subsection{Changing variables and some "mathemagic"}\label{magic}
Here we give some examples of the practical way we teach our students to calculate some integrals changing the variables. Then we discuss the problems of this approach.
\begin{ex}[$\int\cos \left(x^2\right) x \mathrm{d}x$]
We use the following change of variables:
\[u = x^2 \Rightarrow \frac{\mathrm{d}u}{\mathrm{d}x} = 2x \Rightarrow x\mathrm{d}x = \frac{1}{2} \mathrm{d}u.\]
Then,
\[\int \cos \left(x^2\right) x \mathrm{d}x = \int \cos(u)\cdot \frac{1}{2} \mathrm{d}u = \frac{\sin u}{2} + k = \frac{\sin\left(x^2\right)}{2} + k.\]
\end{ex}
\begin{ex}[$\int \frac{1}{\sqrt{x^2 +1}} \mathrm{d}x$.]
Let us analyze the following triangle:
\begin{center}
\psfrag{1}{$1$}
\psfrag{x}{$x$}
\psfrag{b}{$\sqrt{x^2+1}$}
\psfrag{c}{$\theta$}
\includegraphics{trig1}
\end{center}
Based on the triangle above, if we call $x = \tan \theta$, we have that
\[\frac{1}{\sqrt{x^2+1}} = \cos \theta \quad \text{and} \quad \frac{\mathrm{d}x}{\mathrm{d}\theta} = \sec^2\theta \Rightarrow \mathrm{d}x = \sec^2\theta \mathrm{d}\theta.\]
Then,
\[\int \frac{1}{\sqrt{x^2 +1}} \mathrm{d}x = \int \cos\theta \sec^2 \theta \mathrm{d}\theta = \int \sec\theta \mathrm{d}\theta.\]
At this point, if we already know $\int \sec \theta \mathrm{d}\theta$, we can write
\[\displaystyle \int \frac{1}{\sqrt{x^2 +1}} \mathrm{d}x = \ln\left| \tan \theta + \sec \theta \right| + k = \ln\left|x + \sqrt{1+x^2} \right| + k.\]
\end{ex}
\begin{ex}[$\int \sqrt{1-x^2} \mathrm{d}x$]\label{ex nao difeo}
Let us now use the following triangle:
\begin{center}
\psfrag{1}{$1$}
\psfrag{x}{$x$}
\psfrag{b}{$\sqrt{1-x^2}$}
\psfrag{c}{$\theta$}
\includegraphics{trig2}
\end{center}
Calling $x= \sin \theta$, we have that
\[\sqrt{1-x^2} = \cos \theta \quad \text{and} \quad \frac{\mathrm{d}x}{\mathrm{d}\theta} = \cos \theta \Rightarrow \mathrm{d}x = \cos\theta \mathrm{d}\theta.\] Thus
\begin{align*}
& \int \sqrt{1-x^2} \mathrm{d}x = \int \cos(\theta) \cdot\cos(\theta) \mathrm{d}\theta = \int \cos^2(\theta) \mathrm{d}\theta = \\
& = \frac{\theta}{2} + \frac{\sin(\theta)\cos(\theta)}{2} + k = \frac{\arcsin(x)}{2} + \frac{x\sqrt{1-x^2}}{2} + k.
\end{align*}
\end{ex}
\begin{ex}[$\int_0^{\frac{3\pi}{4}} e^{\cos(x)}\cdot \sin(x) \mathrm{d}x$]
Using the change of variables $u = \cos(x)$, we have that
\[\begin{cases}
x = 0 \Rightarrow u = 1; \\
x = \frac{3 \pi}{4} \Rightarrow u = -\frac{\sqrt 2}{2}; \\
\mathrm{d}u = -\sin(x)\mathrm{d}x;
\end{cases}\]
thus
\[\int_0^{\frac{3\pi}{4}} e^{\cos(x)}\cdot \sin(x) \mathrm{d}x = \int_1^{-\frac{\sqrt 2}{2}} -e^u \mathrm{d}u = \int_{- \frac{\sqrt 2}{2}}^1 e^u \mathrm{d}u = \left.e^u\right|_{-\frac{\sqrt 2}{2}}^1 = e-\frac{1}{e^{\frac{\sqrt 2}{2}}}.\]
\end{ex}
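The final value can be double-checked by numerical quadrature, with no change of variables at all (the composite Simpson rule below is our own helper, not part of the argument):

```python
import math

def simpson(g, a, b, n=1000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    odd = 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return (g(a) + g(b) + odd + even) * h / 3

val = simpson(lambda x: math.exp(math.cos(x)) * math.sin(x), 0, 3 * math.pi / 4)
exact = math.e - math.exp(-math.sqrt(2) / 2)
assert abs(val - exact) < 1e-8
```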
Let us point out some observations about our method of changing variables when calculating integrals:
\begin{enumerate}
\item The very good thing about this method is that it is fast: we can calculate integrals using just a few lines.
\item One problem with this method is that we make lots of calculations multiplying and dividing by $\mathrm{d}x$, $\mathrm{d}u$ and $\mathrm{d}\theta$. But $\mathrm{d}x$, $\mathrm{d}u$ and $\mathrm{d}\theta$ are (today) just symbols, and we cannot add, subtract, multiply or divide by them.
\item In the changes of variables in the examples above, we have used four different theorems, one for each example, and each theorem has different hypotheses. Using the traditional method, we do not check the theorems' hypotheses before making calculations. Our students do not even know the theorems. So, we are drawing conclusions (calculations) based on theorems whose hypotheses we have not checked. It is a dangerous thing to draw conclusions without checking the hypotheses of a theorem, and it is even more dangerous to teach students to do the same.
\end{enumerate}
\subsection{The integral $\int\frac{1}{1-\cos x + \sin x}\mathrm{d}x$} \label{last}
The ideas to calculate the integral $\int\frac{1}{1-\cos x + \sin x}\mathrm{d}x$ were found in \cite{Gui}.
Using the identities
\[\cos x = \frac{1-\tan^2\frac{x}{2}}{1+\tan^2\frac{x}{2}} \quad \text{and} \quad \sin x = \frac{2\tan\frac{x}{2}}{1+\tan^2\frac{x}{2}},\]
we can write
\[\int \frac{1}{1-\cos x + \sin x}\mathrm{d}x = \int \frac{1+\tan^2\frac{x}{2}}{2\tan^2\frac{x}{2}+2\tan\frac{x}{2}}\mathrm{d}x.\]
Now we can use the following change of variables:
\[u=\tan\frac{x}{2} \Rightarrow \frac{\mathrm{d}u}{\mathrm{d}x} = \frac{1}{2}\left(1+ \tan^2\frac{x}{2}\right) \Rightarrow \mathrm{d}x = \frac{2}{1+u^2}\mathrm{d}u.\]
Thus,
\begin{align*}
& \int \frac{1}{1-\cos x + \sin x}\mathrm{d}x = \int \frac{1+u^2}{2u^2+2u}\cdot \frac{2}{1+u^2}\mathrm{d}u = \int \frac{1}{u(u+1)}\mathrm{d}u = \\
&= \int \frac{1}{u} - \frac{1}{u+1}\mathrm{d}u = \ln|u| -\ln|u+1| + k = \ln\left|\frac{u}{u+1}\right|+ k = \\
&= \ln\left| \frac{\tan\frac{x}{2}}{1+\tan\frac{x}{2}}\right| + k.
\end{align*}
But,
\begin{align*}
& \frac{\tan\frac{x}{2}}{1+\tan\frac{x}{2}} = \frac{\sin\frac{x}{2}}{\cos\frac{x}{2}+\sin\frac{x}{2}} = \frac{\sin\frac{x}{2}\cos\frac{x}{2}}{\cos^2\frac{x}{2} + \sin\frac{x}{2}\cos\frac{x}{2}} = \\
& = \frac{\frac{\sin x}{2}}{\frac{1+\cos x}{2}+\frac{\sin x}{2}} = \frac{\sin x}{1 + \cos x + \sin x}.
\end{align*}
Therefore,
\[\int \frac{1}{1-\cos x + \sin x}\mathrm{d}x = \ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right| + k.\]
These calculations are not entirely wrong; in fact, they are necessary. But let us take a look at the two functions we have:
\[x \mapsto \frac{1}{1-\cos x + \sin x} \quad \text{and} \quad x \mapsto \ln\left|\frac{\sin x}{1 + \cos x + \sin x}\right| + k.\]
It is easy to see that the first function is defined at $x = \pi$, while the last one is not. That means that the domain of our solution is not the same as the domain of our original function, which is absurd.
Using the same technique to calculate $\int \sqrt{1-\cos(x)} \mathrm{d}x$, we also find a primitive whose domain is smaller than the domain of the original function.
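Both claims are easy to verify numerically: the candidate antiderivative differentiates back to the integrand away from $x = \pi$, while the integrand itself is perfectly well defined at $x = \pi$ (the helpers below are our own, for illustration):

```python
import math

def f(x):
    return 1 / (1 - math.cos(x) + math.sin(x))

def F(x):
    # candidate antiderivative found above (constant dropped)
    return math.log(abs(math.sin(x) / (1 + math.cos(x) + math.sin(x))))

def num_deriv(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2 * h)

# F' = f wherever F is defined...
for x in (0.5, 2.0, 4.0):
    assert abs(num_deriv(F, x) - f(x)) < 1e-5

# ...but f is defined at x = pi (f(pi) = 1/2), where sin(pi) = 0 breaks F
assert abs(f(math.pi) - 0.5) < 1e-9
```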
\subsection{The fundamental theorems of Calculus}
The following two theorems can be found in \cite{Spi}.
\begin{theo}[First Fundamental Theorem of Calculus]\label{F1}
Let $I$ be an interval, $x_0 \in I$ and $f \colon I \to \mathbb{R}$ a function. If $f$ is \textbf{continuous}, then function $F \colon I \to \mathbb{R}$ given by $F(x) = \int_{x_0}^x f(t)dt$ is a $C^1$ function and $F'(x) = f(x)$, for all $x \in I$.
\end{theo}
\begin{theo}[Second Fundamental Theorem of Calculus]\label{F2}
Let $f \colon [a,b] \to \mathbb{R}$ be a function. If $f$ is \textbf{integrable} and $F \colon [a,b] \to \mathbb{R}$ is a function such that $F'(x) = f(x)$, for all $x \in [a,b]$, then $\int_a^b f(x)\mathrm{d}x = F(b) - F(a)$.
\end{theo}
If we want to calculate $\int_a^b f(x) \mathrm{d}x$, we first calculate $F(x) = \int f(x) \mathrm{d}x$. Then we can write $\int_a^b f(x) \mathrm{d}x = F(b) - F(a)$. The problem here is that we use the same notation to calculate $F$ and to calculate $\int_a^b f(x)\mathrm{d}x$, but they are completely different problems. Besides that, once we get used to these calculations, we forget that $f$ must be integrable, and we start to think that if $F(x) = \int f(x)\mathrm{d}x$, then $f$ is integrable and $\int_a^b f(x) \mathrm{d}x = F(b) - F(a)$.
In \cite{GO} we can find functions $F$ and $f$, defined on the same closed interval, such that $F = \int f(x) \mathrm{d}x$ but $f$ is not integrable. See also \cite{Volt}.
\section{How we should teach}
The main idea to make things right is to teach our students to make calculations the same way we prove the theorems. The first step in order to prove theorems in Calculus is to have good definitions.
\subsection{Definitions}
We will first use a good, but not so formal, definition of function:
\begin{df}[Function]
A function is an object formed by 3 parts:
\begin{enumerate}
\item a set $A$ called the \textbf{domain} of the function,
\item a set $B$ called the \textbf{codomain} of the function and
\item a rule that relates each element $x \in A$ to a unique element $y \in B$.
\end{enumerate}
We use the notation $f(x)$ to denote the only element $y \in B$ which is related to the element $x \in A$ by the rule of the function $f$, ie., $f(x)=y$.
\end{df}
The 3 parts together ($A$, $B$ and the rule $x \mapsto f(x)$) are called a function. If we want to give a name to a function, we generally use roman letters. For example, let us consider the function whose domain is $A = [0,\infty[$, whose codomain is $B=\mathbb{R}$ and whose rule relates each number $x \in [0,\infty[$ to its square root $\sqrt{x}$. We can call this function $f$. To define this function $f$, we could just write
\[\begin{matrix}
f : & [0,\infty[ & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & \sqrt{x}
\end{matrix}\]
If we write
\[\begin{matrix}
g : & A & \longrightarrow & B \\
& x & \longmapsto & g(x)
\end{matrix}\]
we mean that $g$ is the name of a function, $A$ is the domain of $g$, $B$ is the codomain of $g$ and $x \mapsto g(x)$ is the rule of the function $g$.
In Calculus, it is common to give just the rule of a function, with the domain and the codomain not explicitly given. For example, let us consider the function $\frac{1}{x}$. Here we mean the function whose rule is $x \mapsto \frac{1}{x}$. We have not said a word about the function's domain or its codomain, but we are considering that its domain is the biggest subset of $\mathbb{R}$ where the rule makes sense and that its codomain is $\mathbb{R}$ itself. In other words, when we say ``let us consider the function $\frac{1}{x}$'', we mean the following function:
\[\begin{matrix}
\mathbb{R}^* & \longrightarrow & \mathbb{R} \\
x & \longmapsto & \frac{1}{x}
\end{matrix}\]
\begin{nt}
If $g$ is a function, $D_g$ denotes the domain of $g$ and $CD_g$ denotes the codomain of $g$. The image of $g$ is the set $\mathrm{Im}_g = \set{g(x)}{x \in D_g}$.
\end{nt}
If we say ``let us consider the function $x \stackrel{h}{\mapsto} \sqrt{x^2-1}$'', we mean that $h$ is the name of the function, $x \mapsto \sqrt{x^2-1}$ is its rule, $D_h = \ ]-\infty, -1] \cup [1, \infty[$ and $CD_h = \mathbb{R}$.
It is important to remark here that, in Calculus, we do not work with every kind of domain for our functions. We work only with functions whose domains are intervals or unions of intervals. Besides that, we consider that an interval has infinitely many elements, so the sets $]a,a[ \, = \varnothing$ and $[a,a] = \{a\}$ are not intervals for us.
\begin{df}
Let $A \subset \mathbb{R}$ be an interval or a union of intervals. In order to make things easier, let us suppose also that $A$ has the following properties:
\begin{enumerate}
\item If $p \in A$ is a left accumulation point of $A$, then $]p-\epsilon,p] \subset A$, for some $\epsilon > 0$.
\item If $p \in A$ is a right accumulation point of $A$, then $[p,p+ \epsilon[ \ \subset A$, for some $\epsilon > 0$.
\item If $p \in A$ is not a left accumulation point of $A$, then $]p-\epsilon,p] \not\subset A$, for all $\epsilon > 0$.
\item If $p \in A$ is not a right accumulation point of $A$, then $[p,p+ \epsilon[ \ \not\subset A$, for all $\epsilon > 0$.
\end{enumerate}
In this case, we will say that the set $A$ is \textbf{standard}.
\end{df}
Now we will need a equivalence relation between differentiable functions.
\begin{df}
Let $\mathcal{F}$ be the set of all differentiable real functions, that is,
\[\mathcal{F} = \set{f \colon A \to \mathbb{R}}{\text{$A$ is standard and $f$ is differentiable}}.\]
If $f, g \in \mathcal{F}$, we will say that $f$ and $g$ are \textbf{equivalent} (or that $f$ is equivalent to $g$) if $f' = g'$. If $f$ and $g$ are equivalent, we will write $f \sim g$.
If $f \in \mathcal{F}$, the equivalence class of $f$ will be denoted by $[f]$, that is,
\[[f] = \set{g \in \mathcal{F}}{g \sim f}.\]
\end{df}
It is important to remark that, if $f$ and $g$ are differentiable, then $f'= g'$ means that $D_f = D_g$ and that $f'(x) = g'(x)$, for all $x \in D_f$.
We need some operations between the equivalence classes of $\cal F$.
\begin{df}
Let $f, g \in \mathcal{F}$ be functions such that $D_f \cap D_g$ is standard and let $\alpha \in \mathbb{R}$. We define $[f]+[g]$, $[f]-[g]$ and $\alpha [f]$ by
\begin{align*}
& [f]+[g] = \set{\varphi+\psi}{\varphi \in [f] \ \text{and} \ \psi \in [g]}; \\
& [f]-[g] = \set{\varphi-\psi}{\varphi \in [f] \ \text{and} \ \psi \in [g]}; \\
& \alpha [f] = \set{\alpha \varphi}{\varphi \in [f]}.
\end{align*}
\end{df}
Let's recall here the functions $f+g$, $f-g$ and $\alpha f$:
\begin{align*}
& \begin{matrix}
f+g \colon & D_f \cap D_g & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & f(x) + g(x)
\end{matrix} \\
& \begin{matrix}
f-g \colon & D_f \cap D_g & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & f(x) - g(x)
\end{matrix} \\
& \begin{matrix}
\alpha f \colon & D_f & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & \alpha \cdot f(x).
\end{matrix}
\end{align*}
It is easy to show the following Lemma:
\begin{lem}\label{operations}
Let $f, g \in \mathcal{F}$.
\begin{enumerate}
\item If $\alpha \in \mathbb{R}$ and $\alpha \ne 0$, then $\alpha \cdot [f] = [\alpha \cdot f]$.
\item If $D_f\cap D_g$ is standard, then $[f]+[g] = [f+g]$ and $[f-g] = [f]-[g]$.
\end{enumerate}
\end{lem}
Now we can talk about primitives.
\begin{df}
Let $f$ and $F$ be two real functions with $F$ differentiable. We say that $F$ is a \textbf{primitive} of $f$ if $F' = f$. We will denote the set of all primitives of $f$ by $P(f)$:
\[P(f) = \set{ g \colon D_f \to \mathbb{R}}{g'= f}.\]
\end{df}
Let's remark two things here:
\begin{enumerate}
\item When we say that $F' = f$, we mean that $D_{F'} = D_f$, $CD_{F'} = CD_f$ and that $F'(x) = f(x)$, for all $x \in D_f$. Usually $CD_F = CD_{F'} = CD_f = \mathbb{R}$ and $D_{F'} = D_F$, because $F$ is differentiable. So, if we want to check if $F$ is a primitive of $f$, we usually have to check if $D_F = D_f$ and if $F'(x) = f(x)$, for all $x \in D_f$.
\item $F'=f \Leftrightarrow P(f) = [F]$.
\end{enumerate}
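For instance, since $\sin' = \cos$ (with $D_{\sin} = D_{\cos} = \mathbb{R}$), the second remark above gives us
\[P(\cos) = [\sin] = \set{\varphi \colon \mathbb{R} \to \mathbb{R}}{\varphi' = \cos}.\]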
The following Lemma follows from Lemma \ref{operations}:
\begin{lem}\label{operations prim}
Let $f$ and $g$ be functions such that $P(f) \ne \varnothing$ and $P(g) \ne \varnothing$.
\begin{enumerate}
\item If $\alpha \in \mathbb{R}$ and $\alpha \ne 0$, then $P(\alpha f) = \alpha P(f)$.
\item If $D_f \cap D_g$ is standard, then $P(f+g) = P(f)+P(g)$ and $P(f-g) = P(f) - P(g)$.
\end{enumerate}
\end{lem}
\subsection{The Fundamental Theorems of Calculus and others}
With our new definitions and notations, we can rewrite Theorem \ref{F1}:
\begin{theo}[First Fundamental Theorem of Calculus]\label{t1}
Let $I$ be an interval, $x_0 \in I$ and $f \colon I \to \mathbb{R}$ a function. If $f$ is \textbf{continuous}, then the function $F \colon I \to \mathbb{R}$ given by $F(x) = \int_{x_0}^x f(t)\mathrm{d}t$ is differentiable and $F \in P(f)$.
\end{theo}
Let's just remember here that if $f$ is continuous on $[a,b]$, then $f$ is integrable on $[a,b]$. Thus, the function $F$ is well defined and Theorem \ref{t1} makes sense.
If we just want to know if some function $f$ has a primitive, we can use Theorem \ref{t1}: if $D_f$ is an interval and $f$ is continuous, then $f$ has a primitive, that is, $P(f) \ne \varnothing$.
We have to remark here that, if we want to make things easier for our students, we can limit ourselves to studying primitives of functions whose domains are intervals, instead of studying primitives of functions whose domains are standard.
If $D_f$ is standard but not an interval, then $D_f = \bigcup\limits_{i \in \mathcal{I}} I_i$, where $\cal I$ is a set of indices, each $I_i$ is an interval, and $I_i \cap I_j = \varnothing$, when $i \ne j$. Considering $f$ continuous, we know that for each $i \in \mathcal{I}$, there exists a $F_i \colon I_i \to \mathbb{R}$ such that $F_i' = \left.f\right|_{I_i}$. Thus we can define the function $F \colon D_f \to \mathbb{R}$ by
\[F(x) = F_i(x), \ \text{if} \ x \in I_i.\]
Then it is easy to show that $F'=f$, that is, $P(f) \ne \varnothing$. Summarizing, we have the following Corollary:
\begin{cor}\label{prim}
If $D_f$ is standard and $f$ is continuous, then $P(f) \ne \varnothing$.
\end{cor}
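Let's see a simple example of a function with a standard, non-interval domain.
\begin{ex}
Let $f \colon \mathbb{R}^* \to \mathbb{R}$ be given by $f(x) = \frac{1}{x^2}$. The set $\mathbb{R}^* = \ ]-\infty, 0[ \ \cup \ ]0, \infty[$ is standard and $f$ is continuous, so Corollary \ref{prim} guarantees that $P(f) \ne \varnothing$. Indeed, the function $F \colon \mathbb{R}^* \to \mathbb{R}$ given by $F(x) = -\frac{1}{x}$ satisfies $F' = f$, that is, $F \in P(f)$.
\end{ex}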
Let's take a look at Integration by Parts:
\begin{theo}[Integration by Parts]
Let $D_f$, $D_g$ and $D_f \cap D_g$ be standard domains and let $f$ and $g$ be $C^1$ functions. With these assumptions, $P(fg') \ne \varnothing$, $P(f'g) \ne \varnothing$ and
\[P(fg') = [fg] - P(f'g).\]
\end{theo}
\begin{proof}
Let's first remark that $D_f = D_{f'}$ and $D_g = D_{g'}$, because $f$ and $g$ are $C^1$ functions. Thus, $D_{fg} = D_{f'g} = D_{fg'} = D_f \cap D_g$ is standard.
We know that $fg \in P((fg)')$, so $P((fg)') \ne \varnothing$. We know also that $f$ and $g$ are $C^1$ functions, thus $fg'$ and $f'g$ are continuous, and it follows from Corollary \ref{prim} that $P(fg')\ne \varnothing$ and $P(f'g) \ne \varnothing$.
Let's make some calculations:
\[(fg)' = f'g + fg' \Rightarrow fg' = (fg)' - f'g.\]
Thus, applying Lemma \ref{operations prim}, we have that
\[P(fg') = P((fg)'-f'g) = P((fg)') - P(f'g) = [fg]-P(f'g). \qedhere\]
\end{proof}
One of the most important results proved in Calculus in order to develop techniques to find primitives is the following Theorem, which can be found in \cite{Spi}.
\begin{theo}\label{f'=g'}
Let $I$ be an interval and $f, g \colon I \to \mathbb{R}$. If $f$ and $g$ are differentiable and $f'=g'$, then there is a constant $c \in \mathbb{R}$ such that $f = g + c$.
\end{theo}
Based on this, it is easy to prove the following corollary:
\begin{cor}\label{prim2}
Let $I$ be an interval and $f \colon I \to \mathbb{R}$. If $F \in P(f)$ then $P(f) = \set{F+c}{c \in \mathbb{R}}$.
\end{cor}
The Corollary above is one of the main tools we use to calculate indefinite integrals. We will repeat here that, if we want to make Calculus easier, we can just study the primitives of functions whose domains are intervals, instead of functions whose domains are standard.
Let's see an example before continuing.
\begin{ex}
Let $f \colon \mathbb{R} \to \mathbb{R}$ be the function given by $f(x) = x \cos(x)$. Then $f = \id \cdot \cos$, where $\id$ is the identity function on $\mathbb{R}$. Thus
\begin{align*}
& P(f) = P(\id\cdot \sin') = [\id\cdot \sin] - P(\id'\cdot \sin) = [\id\cdot \sin] - P(1 \cdot \sin) = \\
& = [\id\cdot \sin]-[-\cos] = [\id\cdot \sin + \cos] = \\
&= \set{x \mapsto x\sin(x)+ \cos(x) +c}{c \in \mathbb{R}}.
\end{align*}
\end{ex}
The First Fundamental Theorem of Calculus uses the integral to define a primitive of a continuous function $f$. The Second Fundamental Theorem of Calculus uses a primitive of an integrable function in order to calculate its integral:
\begin{theo}[Second Fundamental Theorem of Calculus]\label{t2}
Let $f \colon [a,b] \to \mathbb{R}$ be a function. If $f$ is \textbf{integrable} and $F \in P(f)$, then $\int_a^b f(x)\mathrm{d}x = F(b) - F(a)$.
\end{theo}
\begin{cor}\label{t3}
If $f \colon [a,b] \to \mathbb{R}$ is continuous, then there exists $F \colon [a,b] \to \mathbb{R}$ such that $F'=f$ and $\int_a^bf(x)\mathrm{d}x = F(b) - F(a)$.
\end{cor}
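Let's see a simple application of Corollary \ref{t3}.
\begin{ex}
The function $f \colon [0,\pi] \to \mathbb{R}$ given by $f(x) = \sin(x)$ is continuous and the function $F \colon [0,\pi] \to \mathbb{R}$ given by $F(x) = -\cos(x)$ satisfies $F' = f$. Therefore,
\[\int_0^\pi \sin(x) \, \mathrm{d}x = F(\pi) - F(0) = -\cos(\pi)-(-\cos(0)) = 1+1 = 2.\]
\end{ex}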
\subsection{The integral $\int \frac{1}{x}\mathrm{d}x$}
With our notation, we will find $P\left(x \mapsto \frac{1}{x}\right)$, instead of $\int \frac{1}{x} \, \mathrm{d}x$, which is the same.
Let $f,F \colon \mathbb{R}^* \to \mathbb{R}$ be the functions given by the rules $f(x) = \frac{1}{x}$ and $F(x) = \ln|x|$. It is easy to see that $F \in P(f)$, so $P(f) = [F]$.
But, $\left.F\right|_{\mathbb{R}_-^*} \in P\left(\left.f\right|_{\mathbb{R}_-^*}\right)$, and $\left.F\right|_{\mathbb{R}_+^*} \in P\left(\left.f\right|_{\mathbb{R}_+^*}\right)$. Thus, by Corollary \ref{prim2},
\[P\left(\left.f\right|_{\mathbb{R}_-^*}\right) = \set{\left.F\right|_{\mathbb{R}_-^*} + c}{c \in \mathbb{R}} \quad \text{and} \quad P\left(\left.f\right|_{\mathbb{R}_+^*}\right) = \set{\left.F\right|_{\mathbb{R}_+^*} + c}{c \in \mathbb{R}}.\]
Thus,
\begin{align*}
& \varphi \in P(f) \Leftrightarrow \left.\varphi\right|_{\mathbb{R}_-^*} \in P\left(\left.f\right|_{\mathbb{R}_-^*}\right) \ \text{and} \ \left.\varphi\right|_{\mathbb{R}_+^*} \in P\left(\left.f\right|_{\mathbb{R}_+^*}\right) \Leftrightarrow \\
& \Leftrightarrow \left.\varphi\right|_{\mathbb{R}_-^*} = \left.F\right|_{\mathbb{R}_-^*} + c_1 \ \text{and} \ \left.\varphi\right|_{\mathbb{R}_+^*} = \left.F\right|_{\mathbb{R}_+^*} + c_2, \ \text{for some} \ c_1,c_2 \in \mathbb{R} \Leftrightarrow \\
& \Leftrightarrow \varphi(x) =
\begin{cases}
\ln|x| + c_1, & \text{if} \ x < 0; \\
\ln|x| + c_2, & \text{if} \ x > 0. \\
\end{cases}
\end{align*}
Let's remark here that we could have used the following corollary, which is very useful for working with functions whose domains are standard.
\begin{cor}\label{prim3}
Let $\cal I$ be a set of indices and, for each $i \in \mathcal{I}$, let $I_i$ be an interval. Let's also suppose that $I_i \cap I_j = \varnothing$, if $i \ne j$. With the above hypothesis, if $D_f = \bigcup\limits_{i \in \mathcal{I}} I_i$ and $F \in P(f)$, then $\varphi \in P(f)$ if, and only if, for each $i \in \mathcal{I}$ there exists $c_i \in \mathbb{R}$ such that $\varphi(x) = F(x) + c_i$, for all $x \in I_i$.
\end{cor}
\subsection{Changing variables without ``mathemagic''}
In Calculus, there are three situations in which we use change of variables:
\begin{enumerate}
\item When we want to calculate $P((f\circ g)\cdot g')$ and we first calculate $P(f)$.
\item When we want to calculate $P(f)$ but it is difficult and we first calculate $P((f\circ g)\cdot g')$, where $g$ is a convenient function we choose.
\item When we want to calculate one of the sides of the equality $\int_{g(a)}^{g(b)} f(x) \mathrm{d}x = \int_a^b f(g(x))\cdot g'(x) \mathrm{d}x$, but we calculate the other side instead, because it is easier.
\end{enumerate}
We can use four theorems to handle these three situations. Let's see the theorems:
\begin{theo}\label{x-u indefinida}
Let $f$ and $g$ be two real functions such that $g$ is differentiable and $\mathrm{Im}_g \subset D_f$.
\begin{enumerate}
\item If $F \in P(f)$, then $P((f\circ g)\cdot g') = [F\circ g]$.
\item If $F \in P(f)$ and $D_g$ is an interval, then $P((f\circ g)\cdot g') = \set{F\circ g + c}{c \in \mathbb{R}}$.
\end{enumerate}
\end{theo}
\begin{proof}
Lets suppose that $F \in P(f)$. That means that $F$ is differentiable and that $F' = f$. Then $\mathrm{Im}_g \subset D_f = D_F$. Using the chain rule, we have that $(F\circ g)' = (F'\circ g)\cdot g' = (f \circ g)\cdot g'$. Therefore $F\circ g \in P((f\circ g)\cdot g')$.
Now, supposing also that $D_g$ is an interval, then $D_{f\circ g} = D_g = D_{(f\circ g)\cdot g'}$ is an interval and, by the Corollary \ref{prim2}, $P((f\circ g)\cdot g') = \set{F\circ g + c}{c \in \mathbb{R}}$.
\end{proof}
\begin{theo}\label{x-u inversa}
Let $g$ be a diffeomorphism.
\begin{enumerate}
\item If $\mathrm{Im}_g = D_f$, then $P(f) = \set{H \circ g^{-1}}{H \in P((f\circ g)\cdot g')}$.
\item If $\mathrm{Im}_g = D_f$, $D_f$ is an interval and $H \in P((f\circ g)\cdot g')$ , then $P(f) = \set{H\circ g^{-1} + c}{c \in \mathbb{R}}$.
\end{enumerate}
\end{theo}
\begin{proof}
Let $F \in P(f)$. Then $F = F\circ g \circ g^{-1}$, because $D_F = D_f = \mathrm{Im}_g$. If we call $G = F \circ g$, then $F = G \circ g^{-1}$ and
\[G' = (F\circ g)' = (F'\circ g)\cdot g'= (f\circ g)\cdot g'.\]
Therefore $G \in P((f\circ g)\cdot g')$ and $F \in \set{H \circ g^{-1}}{H \in P((f\circ g)\cdot g')}$.
Let's now suppose that $F \in \set{H \circ g^{-1}}{H \in P((f\circ g)\cdot g')}$. Then there exists a $G \in P((f\circ g)\cdot g')$ such that $F = G \circ g^{-1}$. Therefore $D_F = D_{g^{-1}} = D_f$ and
\begin{align*}
& F' = \left(G'\circ g^{-1}\right)\cdot {g^{-1}}' = \left[ \left((f \circ g)\cdot g'\right) \circ g^{-1}\right] \cdot {g^{-1}}' = \\
& = \left(f \circ g \circ g^{-1}\right)\cdot \left(g'\circ g^{-1}\right) \cdot {g^{-1}}' = f \cdot \left(g \circ g^{-1} \right)' = f \cdot 1 = f.
\end{align*}
Therefore $F \in P(f)$ and we conclude that
\[P(f) = \set{H \circ g^{-1}}{H \in P((f\circ g)\cdot g')}.\]
If $D_f$ is an interval and $H \in P\left((f\circ g)\cdot g'\right)$, then $H \circ g^{-1} \in P(f)$. Thus, by Corollary \ref{prim2}, $P(f) = \set{H\circ g^{-1} + c}{c \in \mathbb{R}}$.
\end{proof}
In the theorems above, we do not need to suppose that $f$ is continuous nor that $g'$ is continuous, because we are not using the First Fundamental Theorem of Calculus.
Sometimes the theorems above are not sufficient, so we have another one:
\begin{theo}\label{x-u not diffeomorfismo}
Let $g\colon D_g \to D_f$ be a differentiable and bijective function. Let's suppose also that $g^{-1}$ is continuous and that $g^{-1}$ is differentiable in the interior of its domain.
\begin{enumerate}
\item If $f \colon D_f \to \mathbb{R}$ is continuous, then $P(f) = \set{H\circ g^{-1}}{H \in P((f\circ g)\cdot g')}$.
\item If $D_f$ is an interval and $H \in P((f\circ g)\cdot g')$, then $P(f) = \set{H \circ g^{-1} + c}{c \in \mathbb{R}}$.
\end{enumerate}
\end{theo}
\begin{proof}[Proof of Theorem \ref{x-u not diffeomorfismo}]
Just as was done in Theorem \ref{x-u inversa}, we can prove that $P(f) \subset \set{H\circ g^{-1}}{H \in P((f\circ g)\cdot g')}$.
Let's suppose now that $F \in \set{H\circ g^{-1}}{H \in P((f\circ g)\cdot g')}$. Then there exists a $G \in P((f\circ g)\cdot g')$ such that $F = G \circ g^{-1}$. Therefore $D_F = D_{g^{-1}} = D_f$.
Let $p \in D_F$ be an interior point. Thus
\begin{align*}
& F'(p) = G'\left(g^{-1}(p)\right)\cdot {g^{-1}}'(p) = \left[ \left(\left(f \circ g\right) \left(g^{-1}(p)\right)\cdot g'\left(g^{-1}(p) \right)\right) \right] \cdot {g^{-1}}'(p) = \\
& = f(p)\cdot g'\left(g^{-1}(p)\right) \cdot {g^{-1}}'(p) = f(p) \cdot \left(g \circ g^{-1} \right)'(p) = f(p) \cdot 1 = f(p).
\end{align*}
Now, let's suppose that $p \in D_f$ is not an interior point, that is, $p$ is an endpoint of one of the intervals which form $D_f$ (since $D_f$ is an interval or a union of intervals). Without loss of generality, we can suppose that $p$ is a left endpoint of one of the intervals that form $D_f$ and that $p$ is not a left accumulation point of $D_f$. Thus,
\[\lim_{x \to p} \frac{F(x)-F(p)}{x-p} = \lim_{x \to p^+} \frac{\cancelto{0}{F(x)-F(p)}}{\cancelto{0}{x-p}} = \lim_{x \to p^+} \frac{F'(x)}{1} = \lim_{x\to p^+} f(x) = f(p).\]
In the above calculations, we have used the fact that $F'(x) = f(x)$ if $x$ is an interior point of $D_f$, and we have also used L'Hôpital's rule and the hypothesis that $f$ is continuous.
Therefore $F'(p) = f(p)$. Thus, $F'(x) = f(x)$, for all $x \in D_F = D_f$. That is $F \in P(f)$.
The second part of the Theorem follows from Corollary \ref{prim2}.
\end{proof}
\begin{theo}\label{x-u definida}
Let $g \colon [a,b] \to \mathbb{R}$ be a $C^1$ function and $f \colon [c,d] \to \mathbb{R}$ be a continuous function such that $g([a,b]) \subset [c,d]$. Then
\[\int_{g(a)}^{g(b)} f(x)\mathrm{d}x = \int_a^b f(g(x))\cdot g'(x) \mathrm{d}x.\]
\end{theo}
\begin{proof}
By the First Fundamental Theorem of Calculus \ref{t1}, $P(f) \ne \varnothing$, because $f$ is continuous.
Let $F \in P(f)$. Then $F\circ g \in P((f\circ g)\cdot g')$ and $(f\circ g)\cdot g'$ is continuous. Using Corollary \ref{t3}, we have that
\[\int_{g(a)}^{g(b)} f(x) \mathrm{d}x = F(g(b)) - F(g(a)) = \int_a^b f(g(x))\cdot g'(x)\mathrm{d}x. \qedhere\]
\end{proof}
We want to remark that we can apply Theorem \ref{x-u definida} only if $f$ and $g'$ are continuous, because we are using the First and Second Fundamental Theorems of Calculus for the functions $f$ and $(f\circ g)\cdot g'$. If we do not know whether $f$ is continuous or $g'$ is continuous, then we have to assume that $P(f) \ne \varnothing$, that $f$ is integrable and that $(f \circ g)\cdot g'$ is also integrable.
Now we can solve the examples of Section \ref{magic} without ``mathemagic''.
\begin{ex}[$\int\cos \left(x^2\right) x \mathrm{d}x$]
First of all, let's give names to the functions:
\[\begin{matrix}
f : & \mathbb{R} & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & \cos(x^2)x
\end{matrix} \qquad \text{and} \qquad
\begin{matrix}
g : & \mathbb{R} & \longrightarrow & \mathbb{R} \\
& x & \longmapsto & x^2
\end{matrix}\]
We want to calculate $P(f)$, but $f(x) = \cos(x^2)x = \cos(g(x))\cdot \frac{g'(x)}{2}$, that is, $f = \frac{1}{2}\cdot (\cos \circ g)\cdot g'$. Thus
\begin{align*}
& P(f) = P\left(\frac{1}{2}\cdot (\cos \circ g)\cdot g'\right) = \frac{1}{2} P\left((\sin \circ g)'\right) = \\
& = \frac{1}{2} \set{\sin \circ g + c}{c \in \mathbb{R}} = \set{\frac{1}{2} \sin \circ g + c}{c \in \mathbb{R}}.
\end{align*}
\end{ex}
\begin{ex}[$\int \frac{1}{\sqrt{x^2 +1}} \mathrm{d}x$]
Let $f \colon \mathbb{R} \to \mathbb{R}$ be given by $f(x) = \frac{1}{\sqrt{x^2+1}}$. In order to calculate $P(f)$, we just need to find $H \in P((f\circ g)\cdot g')$, where $g$ is a convenient diffeomorphism such that $\mathrm{Im}_g = D_f$. Then, by Theorem \ref{x-u inversa}, $P(f) = \set{H \circ g^{-1} + c}{c \in \mathbb{R}}$.
Let's use $g = \left.\tan\right|_I$, where $I = \left]-\frac{\pi}{2}, \frac{\pi}{2} \right[$. We know that $g$ is a diffeomorphism.
On the other hand,
\begin{align*}
& \left((f\circ g)\cdot g'\right)(x) = f(\tan(x))\cdot \tan'(x) = \\
&= \frac{1}{\sqrt{\tan^2(x) + 1}}\cdot \sec^2(x) = \cos(x)\cdot\sec^2(x) = \sec(x).
\end{align*}
Therefore $P((f \circ g)\cdot g') = P(\sec|_I)$, and, if we already know $P\left(\sec|_I\right)$, we can write
\begin{align*}
&\varphi \in P((f\circ g)\cdot g') = P\left(\sec|_I\right) \Leftrightarrow \\
& \Leftrightarrow \varphi(x) = \ln|\tan x + \sec x| + c, \forall x \in I,
\end{align*}
where $c \in \mathbb{R}$ is constant.
Taking $H \colon I \to \mathbb{R}$ given by $H(x) = \ln |\tan(x) + \sec(x)|$, and applying Theorem \ref{x-u inversa}, we have that
\[P(f) = \set{H \circ \tan^{-1} + c}{c \in \mathbb{R}}.\]
But,
\[H\left(\tan^{-1}(x)\right) = \ln\left|x + \sec\left(\tan^{-1}(x)\right)\right| = \ln \left| x + \sqrt{x^2+1}\right|.\]
Therefore
\[h \in P(f) \Leftrightarrow h(x) = \ln\left(x+\sqrt{x^2+1}\right) + c, \ \forall x \in \mathbb{R}, \ \text{for some constant} \ c \in \mathbb{R}.\]
\end{ex}
\begin{ex}[$\int \sqrt{1-x^2} \mathrm{d}x$]
Let $f \colon [-1,1] \to \mathbb{R}$ be given by $f(x) = \sqrt{1-x^2}$. The function $f$ is obviously continuous.
The figure of Example \ref{ex nao difeo}, in the last section, gives us the idea of using the function $g(\theta) = \sin(\theta)$. So, let $g \colon \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] \to [-1,1]$ be given by $g(\theta) = \sin(\theta)$. Thus $g$ is differentiable and bijective, and $g^{-1} \colon [-1,1] \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ is continuous and differentiable in $]-1,1[$. Therefore we can apply Theorem \ref{x-u not diffeomorfismo} to conclude that $P(f) = \set{H\circ g^{-1} + c}{c \in \mathbb{R}}$, where $H \in P((f\circ g)\cdot g')$.
But,
\[f(g(\theta))\cdot g'(\theta) = \sqrt{1-\sin^2(\theta)}\cdot \cos(\theta) = \cos^2(\theta), \ \forall \theta \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right].\]
Thus, $P((f\circ g)\cdot g') = P\left({\cos^2}|_I \right)$, where $I = \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$. On the other hand, we know that the function $H \colon I \to \mathbb{R}$, given by $H(\theta) = \frac{\theta}{2} + \frac{\sin(\theta)\cos(\theta)}{2}$, is a primitive of ${\cos^2}|_I = (f\circ g)\cdot g'$. Therefore, $P(f) = \set{H\circ g^{-1} + c}{c \in \mathbb{R}}$.
But
\begin{multline*}
H\left(g^{-1}(x) \right) = \frac{\arcsin(x)}{2} + \frac{\sin(\arcsin(x))\cos(\arcsin(x))}{2} = \\
= \frac{\arcsin(x)}{2} + \frac{x\sqrt{1-x^2}}{2}.
\end{multline*}
Thus $P(f) = \set{x\mapsto \frac{\arcsin(x)}{2} + \frac{x\sqrt{1-x^2}}{2} + c}{c \in \mathbb{R}}$.
\end{ex}
\begin{ex}[$\int_0^{\frac{3\pi}{4}} e^{\cos(x)}\cdot \sin(x) dx$]
Let's consider the exponential function $\exp \colon \mathbb{R} \to \mathbb{R}$, given by $\exp(x) = e^x$. We know that the functions $\exp$, $\cos$ and $\cos' = -\sin$ are continuous and that $\cos\left(\left[0,\frac{3\pi}{4}\right]\right) \subset \mathbb{R} = D_{\exp}$, thus, applying Theorem \ref{x-u definida}, we have that
\begin{align*}
& \int_0^{\frac{3\pi}{4}} e^{\cos(x)}\cdot \sin(x) \mathrm{d}x = -\int_0^{\frac{3\pi}{4}} e^{\cos(x)}\cdot \cos'(x) \mathrm{d}x = -\int_{\cos(0)}^{\cos\left(\frac{3\pi}{4}\right)} e^x \mathrm{d}x = \\
& = - \int_1^{-\frac{\sqrt 2}{2}} e^x \mathrm{d}x = \int_{-\frac{\sqrt 2}{2}}^1 e^x \mathrm{d}x = e-\frac{1}{e^{\frac{\sqrt 2}{2}}}.
\end{align*}
\end{ex}
\subsection{The integral $\int\frac{1}{1-\cos x + \sin x}\mathrm{d}x$}
When we just say ``the function $\frac{1}{1-\cos x + \sin x}$'', we mean that the function's rule is $x \mapsto \frac{1}{1-\cos x + \sin x}$ and the function's domain is the biggest subset of $\mathbb{R}$ where the rule makes sense. We cannot divide by 0, so the function's domain is the set
\[\set{x \in \mathbb{R}}{1-\cos x + \sin x \ne 0}.\]
Let's call $f(x) = \frac{1}{1-\cos x + \sin x}$ and calculate $D_f$. We believe that a university-level student should take the trouble to calculate the domain of $f$ before anything else. But here again, we have to remark that it would be much easier to just calculate primitives of functions whose domains are intervals.
Let's consider the function $\phi(x) = 1 - \cos x + \sin x$, with $x \in \mathbb{R}$. Then $D_f = \set{x \in \mathbb{R}}{\phi(x) \ne 0}$ and $\phi'(x) = \sin x + \cos x$. Thus
\begin{enumerate}
\item $\phi'(x) = 0 \iff x \in \set{\frac{3 \pi}{4} +k\pi}{k \in \mathbb{Z}}$;
\item $\phi'(x) > 0 \iff x \in \, \bigcup\limits_{k \in \mathbb{Z}}\left]-\frac{\pi}{4} + 2k\pi, \frac{3\pi}{4} + 2k\pi \right[$;
\item $\phi'(x) < 0 \iff x \in \, \bigcup\limits_{k \in \mathbb{Z}}\left] \frac{3\pi}{4} + 2k\pi, 2\pi - \frac{\pi}{4} + 2k\pi\right[$.
\end{enumerate}
The following figure shows in the trigonometric circle the points where $\phi'(x)=0$, the points where $\phi'(x) > 0$ and the points where $\phi'(x) < 0$.
\begin{center}
\psfrag{a}{$\phi'(x)= 0$}
\psfrag{b}{$\phi'(x)> 0$}
\psfrag{c}{$\phi'(x)= 0$}
\psfrag{d}{$\phi'(x)< 0$}
\includegraphics[scale=0.6]{circle1}
\end{center}
This means that $\phi$ is strictly increasing in the intervals $\left[-\frac{\pi}{4} + 2k\pi, \frac{3\pi}{4} + 2k\pi\right]$, for all $k \in \mathbb{Z}$, and $\phi$ is strictly decreasing in the intervals $\left[ \frac{3\pi}{4} + 2k\pi, 2\pi - \frac{\pi}{4} + 2k\pi\right]$, for all $k \in \mathbb{Z}$. Thus, $\phi$ has no more than one zero in each of the following intervals:
\[\left[ 2k\pi -\tfrac{\pi}{4}, 2k\pi + \tfrac{3\pi}{4} \right] \quad \text{and} \quad \left[ 2k\pi + \tfrac{3\pi}{4}, (2k+2)\pi - \tfrac{\pi}{4} \right], \ \forall k \in \mathbb{Z}.\]
But,
\begin{enumerate}
\item $2k\pi \in \left[ 2k\pi - \frac{\pi}{4}, 2k\pi + \frac{3\pi}{4}\right]$,
\item $2k\pi + \frac{3\pi}{2} \in \left[ \frac{3\pi}{4} + 2k\pi, (2k+2)\pi - \frac{\pi}{4} \right]$ and
\item $\phi(2k\pi) = \phi\left(2k\pi + \frac{3\pi}{2}\right) = 0$.
\end{enumerate}
Therefore,
\[\set{x\in \mathbb{R}}{\phi(x) = 0} = \set{2k\pi, 2k\pi + \tfrac{3\pi}{2}}{k \in \mathbb{Z}}.\]
We conclude that
\begin{multline*}
D_f = \set{x \in \mathbb{R}}{x \ne 2k\pi \ \text{and} \ x \ne 2k\pi + \tfrac{3\pi}{2}, \ \forall k \in \mathbb{Z}} = \\
= \bigcup_{k \in \mathbb{Z}} \left( \, \left]2k\pi, 2k\pi + \tfrac{3\pi}{2} \right[ \cup \left] 2k\pi + \tfrac{3\pi}{2}, 2(k+1)\pi \right[ \, \right).
\end{multline*}
Now, for every $k \in \mathbb{Z}$, let
\[I_1(k) = \left]2k\pi, 2k\pi+\tfrac{3\pi}{2} \right[ \quad \text{and} \quad I_2(k) = \left] 2k\pi+\tfrac{3\pi}{2}, (2k+2)\pi \right[.\]
Thus, $D_f = \bigcup\limits_{k \in \mathbb{Z}} \left(I_1(k) \cup I_2(k) \right)$.
Now we know that $D_f$ is a union of infinitely many disjoint open intervals. In order to calculate $P(f)$, we will first calculate $P\left(\left.f\right|_I\right)$, where $I$ is an interval.
Using the identities
\[\cos x = \frac{1-\tan^2\frac{x}{2}}{1+\tan^2\frac{x}{2}} \quad \text{and} \quad \sin x = \frac{2\tan\frac{x}{2}}{1+\tan^2\frac{x}{2}},\]
we can write
\begin{equation}\label{trig}
\frac{1}{1-\cos x + \sin x} = \frac{1+\tan^2\frac{x}{2}}{2\tan^2\frac{x}{2}+2\tan\frac{x}{2}}
\end{equation}
Let's remark here that the identities above do not make sense at some points. For example, when $\frac{x}{2} = \frac{\pi}{2} + k\pi$, with $k \in \mathbb{Z}$, the identities do not make sense, because the tangent function is not defined at those points. So, let's suppose that both sides of equation \eqref{trig} make sense for every $x \in I$.
Let $g \colon I \to J$ be given by $g(x) = \tan\frac{x}{2}$, where $J = g(I)$. Then $g'(x) = \frac{1}{2}\tan'\left(\frac{x}{2}\right) = \frac{1}{2}\left(1 + \tan^2 \frac{x}{2}\right)$. Then equation \eqref{trig} becomes
\[\frac{1}{1-\cos x + \sin x} = \frac{2g'(x)}{2\left(g^2(x)+g(x) \right)} = h(g(x))\cdot g'(x),\]
where $h \colon J \to \mathbb{R}$ is given by $h(x) = \frac{1}{x^2+x}$.
Using Theorem \ref{x-u indefinida}, $P\left(\left.f\right|_I\right) = P((h\circ g)\cdot g') = \set{H \circ g + c}{c \in \mathbb{R}}$, where $H \in P(h)$. But $h(x) = \frac{1}{x^2+x} = \frac{1}{x} - \frac{1}{x+1}$. If $h_1, h_2 \colon J \to \mathbb{R}$ are given by $h_1(x) = \frac{1}{x}$ and $h_2(x) =\frac{1}{x+1}$, then $h = h_1 - h_2$ and
\[P(h) = P(h_1-h_2) = P(h_1) - P(h_2).\]
On the other hand,
\begin{align*}
& P(h_1) = \set{\varphi \colon J \to \mathbb{R}}{\varphi(x) = \ln|x|+ c, \ \forall x \in J, \ \text{where $c$ is constant}}; \\
& P(h_2) = \set{\varphi \colon J \to \mathbb{R}}{\varphi(x) = \ln|x+1|+ c, \ \forall x \in J, \ \text{where $c$ is constant}}.
\end{align*}
Therefore the function $H \colon J \to \mathbb{R}$ given by $H(x) = \ln|x|-\ln|x+1| = \ln\left|\frac{x}{x+1}\right|$ is a primitive of $h$ and
\[P(\left.f\right|_I) = \set{H \circ g + c}{c \in \mathbb{R}}.\]
So far, we have that $\varphi \in P\left(\left.f\right|_I\right)$ if, and only if, $\varphi \colon I \to \mathbb{R}$ is given by
\[\varphi(x) = H(g(x)) + c = \ln\left|\frac{\tan\frac{x}{2}}{1+\tan\frac{x}{2}} \right| + c,\]
where $c$ is constant.
By the same calculations of the Subsection \ref{last}, we know that
\[\ln\left| \frac{\tan\frac{x}{2}}{1+\tan\frac{x}{2}}\right| = \ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right|,\]
whenever both sides of the equality above make sense.
Let's now consider the domain of the function $x \stackrel{f_2}{\mapsto} \ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right|$, which is the biggest subset of $\mathbb{R}$ in which the rule makes sense. For this, let's consider the auxiliary function $\psi \colon \mathbb{R} \to \mathbb{R}$ given by $\psi(x) = 1 + \cos x + \sin x$. It is clear that if $\psi(x) = 0$, then $x \notin D_{f_2}$. So let's find the zeros of $\psi$.
We know that $\psi'(x) = \cos x - \sin x$. Thus
\begin{enumerate}
\item $\psi'(x) = 0 \iff x \in \set{\frac{\pi}{4} + k\pi}{k \in \mathbb{Z}}$,
\item $\psi'(x) < 0 \iff x \in \, \bigcup\limits_{k \in \mathbb{Z}}\left] 2k\pi + \frac{\pi}{4}, (2k+1)\pi + \frac{\pi}{4}\right[$,
\item $\psi'(x) > 0 \iff x \in \, \bigcup\limits_{k \in \mathbb{Z}}\left] (2k+1)\pi + \frac{\pi}{4}, (2k+2)\pi + \frac{\pi}{4}\right[$.
\end{enumerate}
The next figure shows in the trigonometric circle the points where $\psi'(x) =0$, the points where $\psi'(x) >0$ and the points where $\psi'(x)<0$.
\begin{center}
\psfrag{a}{$\psi'(x) = 0$}
\psfrag{b}{$\psi'(x) < 0$}
\psfrag{c}{$\psi'(x) = 0$}
\psfrag{d}{$\psi'(x) > 0$}
\includegraphics[scale=0.6]{circle2}
\end{center}
This means that $\psi$ is strictly increasing in the intervals
\[\left[ (2k+1)\pi + \frac{\pi}{4}, (2k+2)\pi + \frac{\pi}{4} \right], \ \forall k \in \mathbb{Z},\]
and $\psi$ is strictly decreasing in the intervals
\[\left[ 2k\pi + \frac{\pi}{4}, (2k+1)\pi + \frac{\pi}{4}\right], \forall k \in \mathbb{Z}.\]
Therefore $\psi$ has no more than one zero in each of the following intervals:
\[\left[ 2k\pi +\tfrac{\pi}{4}, (2k+1)\pi + \tfrac{\pi}{4} \right] \quad \text{and} \quad \left[ (2k+1)\pi + \tfrac{\pi}{4}, (2k+2)\pi + \tfrac{\pi}{4} \right].\]
On the other hand,
\begin{enumerate}
\item $(2k+1)\pi \in \left[ 2k\pi +\frac{\pi}{4}, (2k+1)\pi + \frac{\pi}{4} \right]$,
\item $2k\pi + \frac{3\pi}{2} \in \left[ (2k+1)\pi + \frac{\pi}{4}, (2k+2)\pi + \frac{\pi}{4} \right]$,
\item $\psi((2k+1)\pi) = \psi\left(2k\pi + \frac{3\pi}{2}\right) = 0$, and
\item $\sin(k\pi) = 0$.
\end{enumerate}
Therefore, $\set{x\in \mathbb{R}}{\psi(x) = 0} = \set{(2k+1)\pi, 2k\pi + \frac{3\pi}{2}}{k \in \mathbb{Z}}$ and we conclude that
\[D_{f_2} = \bigcup_{k \in \mathbb{Z}} \left( \left]2k\pi, (2k+1)\pi \right[ \cup \left] (2k+1)\pi, 2k\pi + \tfrac{3\pi}{2} \right[ \cup \left] 2k\pi+\tfrac{3\pi}{2}, (2k+2)\pi\right[ \right).\]
Here we observe that $D_{f_2} \subset D_f$ and that $D_f \setminus D_{f_2} = \set{(2k+1)\pi}{k \in \mathbb{Z}}$.
Let's calculate the following limit:
\begin{align*}
& \lim_{x \to (2k+1)\pi} f_2(x) = \lim_{x \to 2k\pi + \pi} \ln \left| \frac{\sin x}{1 + \cos x + \sin x}\right| = \\
& = \lim_{x \to 2k\pi + \pi} \ln \left| \frac{\cos x}{\cos x - \sin x}\right| = \ln\left|\frac{-1}{-1}\right| = 0.
\end{align*}
Let's define $F \colon \bigcup\limits_{k \in \mathbb{Z}} \left( I_1(k) \cup I_2(k)\right)\to \mathbb{R}$ by
\[F(x) = \begin{cases}
\ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right|, & \text{if} \ x \ne (2k+1)\pi, \ \forall k \in \mathbb{Z}; \\
0, & \text{if} \ x = (2k+1)\pi, \ \text{for some} \ k \in \mathbb{Z}.
\end{cases}\]
Now, $D_F = D_f$ and it is possible to prove that $F' = f$, that is, $F \in P(f)$. Thus, applying Corollary \ref{prim3}, we have that $\varphi \in P(f)$ if, and only if, for each $k \in \mathbb{Z}$, there exist $c_1(k), c_2(k) \in \mathbb{R}$ such that
\[\varphi(x) = \begin{cases}
\ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right| + c_1(k), & \text{if $x \in I_1(k)$ and $x \ne (2k+1)\pi$}; \\
c_1(k), & \text{if $x = (2k+1)\pi$}; \\
\ln \left|\frac{\sin x}{1 + \cos x + \sin x}\right| + c_2(k), & \text{if $x \in I_2(k)$}.
\end{cases}\]
\bibliographystyle{acm}
\addcontentsline{toc}{section}{References}
Ultracold molecules play a central role in modern
physics due to a large number of promising applications in quantum
information\cite{ref1-Krems}, precision spectroscopy\cite{ref3-Krems} and
ultracold chemistry\cite{ref4-Krems,Softley:09,Gianturco:09}.
Optical lattices of ultracold molecules are
predicted to be ideally suited for quantum simulation of complex
quantum systems\cite{ref2-Krems,ref8-Krems,ref5-NJP} and the engineering
of new schemes for quantum information storage and
processing\cite{ref6-Krems,ref7-NJP}. On the other hand, creation
of a Bose-Einstein condensate (BEC) of molecules may
enable studies of Bose-enhanced chemistry\cite{ref10-Krems}.
In the context of these studies, molecules must be confined within a trap.
For paramagnetic molecules, a magnetic trap is used since molecules
in a low-field-seeking state\cite{Pethick} ({\em lfs}) are
trappable provided that their translational energy
is lower than the trap depth\cite{Freidrich-Doyle}. This situation
could be achieved by direct cooling methods
such as Zeeman slowing \cite{ref37-Krems}, optical Stark deceleration
\cite{ref56-Krems}, single-photon cooling\cite{Raizen-single-photon}
or sympathetic cooling\cite{Salomon:01}.
It might also be possible to cool the molecules towards the ultracold regime by
evaporative cooling\cite{Hess}. As is well known, this was the method that succeeded in
achieving BEC of atoms\cite{Wieman-BEC,Ketterle-BEC}.
Molecular collisions are fundamental in this context, as evaporative cooling
relies on efficient elastic collisions and, even more crucially, on the ratio of the
probabilities for elastic scattering and spin relaxation ($\gamma$), which must be
very large in order to prevent heating and trap loss. External
electromagnetic fields may serve to control the rate of inelastic collisions.
Tuning close to a Feshbach resonance has proved to be an extremely fruitful
means of controlling atom-atom collisions\cite{Chin:10}. Interestingly,
it has been recently shown\cite{Hutson:09} that inelastic collision rates in
atom-molecule collisions can
be tremendously reduced in the vicinity of a Feshbach resonance controlled by
an electric or magnetic field.
While a large amount of work has been carried out for atom-atom and
atom-molecule collisions, studies of molecule-molecule collisions in external
fields are still scarce. Most clues about these more complex systems have come
from atom-molecule studies. Krems and Dalgarno\cite{ref20-Krems} found that
the main mechanisms of spin relaxation in collisions of $^3\Sigma$ molecules
with He is given by couplings to rotationally excited states mediated by the
spin-spin interaction. Volpi and Bohn\cite{ref24-Krems} found that spin
depolarization is suppressed when the Zeeman splitting between incident and final
states does not exceed the height of the centrifugal barrier in the exit
channel. These ideas were confirmed for $^{17}O_2(^{3}\Sigma_{g}^{-})$ +
$^{17}O_2(^{3}\Sigma_{g}^{-})$ by Tscherbul {\em et
al}\cite{paper-Krems}, who carried out the first accurate computational
study involving two diatoms. In that work, the
experimentally derived potential energy surface (PES) of Aquilanti {\em et al}\cite{Perugia-PES} was
employed (Perugia PES in what follows). This collisional system is
interesting since oxygen has been postulated as a
reliable candidate for trapping and cooling\cite{Friedrich,ref25-Krems} and
progress in cooling this species has been
achieved recently\cite{ref37-Krems,ref38-Krems}.
The present work builds on these lines by investigating the role
played by the PES in $O_2$+$O_2$ collisions in the
presence of a magnetic field. It is well known that
ultracold atom-atom collisions are very sensitive to the short range of the
potential\cite{Gribakin:93}. However, it has been recently
shown\cite{Hutson:07} that, in the presence of inelastic scattering
(as occurs in atom-molecule collisions), peaks in cross sections around a Feshbach
resonance may become suppressed and hence the dynamics becomes rather insensitive to the
details of the potential. This theory is tested here for a
rather anisotropic molecule-molecule system such as $O_2$ + $O_2$, using
a recent {\em ab initio} PES developed by Bartolomei {\em
et al}\cite{abi-PES}. In this potential, electronic
correlation is included by means of a high level supermolecular method in the
short range whereas long-range interaction coefficients have been obtained from first
principles as well\cite{long-range}. It is worthwhile to mention that inelastic
rate coefficients obtained with this PES have proved to be highly consistent with
measurements
of the evolution of rotational populations along supersonic
expansions in the temperature range 10 $\le T \le$ 34 K\cite{Montero:11}.
By comparing present scattering calculations with previous ones using
the Perugia PES\cite{paper-Krems} and with some additional test
modifications of the {\em ab initio} PES, the effect of the potential on
the cold and ultracold dynamics has been assessed.
The paper is organized as follows. In Sec. II, a summary of the theory for the
scattering between two identical $^3\Sigma$ molecules is given. Details
specific to the $^{17}O_2-^{17}O_2$ system are provided in Sec. III and in
Sec. IV, results are reported and discussed. A concluding remark is given in
Sec. V.
\section{Theory}
We give a summary of the theory, recently developed by Tscherbul
{\em et al}\cite{paper-Krems}, for the scattering of two $^{3} \Sigma$ identical
rigid rotor molecules in the presence of a magnetic field.
Diatom-diatom Jacobi coordinates are used in a space-fixed (SF) frame, including the
vector joining the centers of mass of the molecules $a$ and $b$, $\vec{R}$,
and the intramolecular unit vectors, $\hat{r}_{a}$ and
$\hat{r}_{b}$. Intramolecular distances are fixed to the molecular equilibrium
distance, $r_a=r_b=r_e$. The Hamiltonian for the interaction can be written as
\begin{equation}
\label{ec1}
\hat{H}=-\frac{1}{2\mu R} \frac{\partial^{2}}{\partial R^{2}} R +
\frac{\hat{l}^{2}}{2\mu R^{2}}+V(\vec{R},\hat{r}_{a},\hat{r}_{b})+ \hat{H}_{a}
+ \hat{H}_{b},
\end{equation}
\vspace{.2cm}
\noindent
where atomic units are used ($\hbar=1$), $\hat{l}$ is the orbital angular
momentum, $\mu$ is the reduced mass and
$V$ is the interaction potential or PES. The
internal Hamiltonian of the $^{3} \Sigma$ molecule $\hat{H}_{\alpha} (\alpha=
a,b)$ is given, within the rigid rotor approximation, by\cite{Mizushima}
\begin{eqnarray}
\label{ec2}
\hat{H}_{\alpha} & = & B_{e}\hat{n}_{\alpha}^{2}+2\mu_{B}\vec{B} \cdot
\hat{s}_{\alpha}+\gamma_{sr}\hat{n}_{\alpha} \cdot \hat{s}_{\alpha}+ \\ \nonumber
& & +
\frac{2}{3}\lambda_{ss}\sqrt{\frac{24\pi}{5}}
\sum_{q}Y_{2q}^{*}(\hat{r_{\alpha}})[\hat{s}_{\alpha}\otimes \hat{s}_{\alpha}]^{(2)}_{q},
\end{eqnarray}
\vspace{.2cm}
\noindent
where $\hat{n}_{\alpha}$ is the angular momentum associated with
$\hat{r}_{\alpha}$, $B_{e}$ is the rotational constant, $\mu_{B}$ is the Bohr
magneton, $\vec{B}$ is the external magnetic field and $\hat{s}$ is the
electron spin. The last two terms in Eq.\ref{ec2} correspond to the
spin-rotation and spin-spin interactions, parameterized by
$\gamma_{sr}$ and $\lambda_{ss}$, respectively.
Weaker interactions such as hyperfine and magnetic dipole-dipole are neglected
(see Ref.\cite{ref25-Krems} for discussion).
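As a rough numerical aside (not part of the original formalism), the magnitude of the Zeeman term of Eq.\ref{ec2} is easily estimated: the splitting between adjacent electron-spin projections is approximately $2\mu_B B$. The constants in the sketch below are standard values quoted to a few digits.

```python
# Order-of-magnitude sketch of the Zeeman term in Eq. (2): the splitting
# between adjacent m_s levels is ~ 2*mu_B*B.
MU_B_CM1_PER_T = 0.46686   # Bohr magneton in cm^-1 per tesla
CM1_TO_K = 1.43878         # 1 cm^-1 expressed in kelvin

def zeeman_splitting_K(B_gauss):
    """Approximate 2*mu_B*B splitting (in kelvin) for a field given in gauss."""
    B_tesla = B_gauss * 1e-4
    return 2.0 * MU_B_CM1_PER_T * B_tesla * CM1_TO_K

# At a typical field of 1000 G the splitting is ~0.13 K, far larger than
# the translational energies (<= 0.05 K) studied in this work.
print(zeeman_splitting_K(1000.0))
```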
The total wave function is expanded in a basis set of SF
uncoupled and symmetry-adapted functions
\begin{equation}
\label{ec3}
\Psi^{M \eta \epsilon}=\frac{1}{R}\sum_{\tau_{a} \ge \tau_{b} l m_{l}}
u^{M\eta\epsilon}_{\tau_{a}\tau_{b}lm_{l}}(R) \phi^{M \eta \epsilon}_{\tau_{a}\tau_{b}lm_{l}}
(\hat{R},\hat{r}_{a} \hat{r}_{b}),
\end{equation}
\vspace{.2cm}
\noindent
with
\begin{equation}
\label{ec4}
\phi^{M \eta \epsilon}_{\tau_{a}\tau_{b}lm_{l}} =
\frac{1}{\left(2\left( 1+\delta_{\tau_{a},\tau_{b}}\right) \right)^{1/2}}
\left( |\tau_{a}\tau_{b}\rangle+\eta \epsilon |\tau_{b}\tau_{a}\rangle
\right) |l m_l\rangle,
\end{equation}
\vspace{.2cm}
\noindent
$|l m_l\rangle$ being a spherical harmonic and where $| \tau_{\alpha}
\rangle$ represents an uncoupled function of the $\alpha$ monomer
\begin{equation}
\label{ec5}
| \tau_{\alpha} \rangle = |n_{\alpha} m_{n_{\alpha}} \rangle |s_{\alpha}
m_{s_{\alpha}}\rangle.
\end{equation}
\vspace{.2cm}
\noindent
The basis functions of Eq.\ref{ec4} form a well-ordered set with $\tau_{a} \ge \tau_{b}$
and are normalized eigenfunctions of the
operator permuting the identical molecular skeletons
($\hat{P}$: $\hat{r}_{a}\rightarrow \hat{r}_{b}$; $\hat{r}_{b}\rightarrow \hat{r}_{a}$;
$\vec{R}\rightarrow -\vec{R}$), with eigenvalue $\eta$.
These basis functions are also eigenfunctions of
spatial inversion ($E^{*}$: $\hat{r}_{a}\rightarrow -\hat{r}_{a}$;
$\hat{r}_{b}\rightarrow -\hat{r}_{b}$; $\vec{R}\rightarrow -\vec{R}$) with
eigenvalue $\epsilon= (-1)^{n_a+n_b+l}$. Since the molecules
under study are homonuclear, $n_a$ and $n_b$ have the same parity
so $\epsilon=(-1)^{l}$. In addition to these symmetries, the Hamiltonian
commutes with the SF $z$-axis component of the total
angular momentum, so that for a given value of this projection, $M$,
basis functions in Eq.\ref{ec3} must satisfy
\begin{equation}
\label{ec6}
m_{n_a}+m_{s_a}+ m_{n_b}+m_{s_b}+ m_l = M.
\end{equation}
\vspace{.2cm}
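As an illustrative aside, the selection rule of Eq.\ref{ec6} determines which channels enter the expansion of Eq.\ref{ec3}. The Python sketch below (which, for simplicity, enumerates unsymmetrized labels and ignores the $\tau_a \ge \tau_b$ ordering) keeps only the combinations compatible with a given $M$:

```python
from itertools import product

# Illustrative channel enumeration (not the production code used in this
# work): keep only basis labels obeying the conservation rule of Eq. (6),
#   m_na + m_sa + m_nb + m_sb + m_l = M.
def allowed_channels(n_max, l_max, M):
    chans = []
    for na, nb in product(range(0, n_max + 1, 2), repeat=2):   # even n only
        for mna, mnb in product(range(-na, na + 1), range(-nb, nb + 1)):
            for msa, msb in product((-1, 0, 1), repeat=2):     # s = 1
                for l in range(0, l_max + 1, 2):               # even l only
                    for ml in range(-l, l + 1):
                        if mna + msa + mnb + msb + ml == M:
                            chans.append((na, mna, msa, nb, mnb, msb, l, ml))
    return chans

# The channel count grows quickly with n_max and l_max, which is why the
# coupled-channel calculations are computationally demanding.
print(len(allowed_channels(2, 2, 2)))
```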
Substitution of Eq.\ref{ec3} into the Schr{\"o}dinger equation leads to the
set of close-coupled equations for the radial coefficients:
\begin{eqnarray}
\label{ec7}
\left[ \frac{1}{2\mu} \frac{d^{2}}{d R^{2}}-\frac{l(l+1)}{2\mu R^{2}}+E\right]
u^{M \eta \epsilon}_{\tau_{a}\tau_{b}lm_{l}}(R)
& = &
\end{eqnarray}
\begin{eqnarray}
\hspace{-.4cm} & = & \hspace{-.6cm} \sum_{\tau'_{a} \ge \tau'_{b} l' m'_{l}} \hspace{-.4cm}
\langle \phi^{M \eta \epsilon}_{\tau_{a}\tau_{b}l m_{l}} |
(V+ \hat{H}_{a} + \hat{H}_{b}) | \phi^{M \eta \epsilon}_{\tau'_{a}\tau'_{b}l'm'_{l}} \rangle
u^{M\eta\epsilon}_{\tau'_{a}\tau'_{b}l'm'_{l}}(R), \nonumber
\end{eqnarray}
\vspace{.2cm}
\noindent
where $E$ is the total energy.
It must be pointed out that the asymptotic Hamiltonian $\hat{H}_{a} +
\hat{H}_{b}$ is not diagonal in the basis $\phi^{M
\eta\epsilon}_{\tau'_{a}\tau'_{b}l'm'_{l}}$ due to the spin-rotation and
spin-spin terms, and matrix elements of these terms are given
in Eqs.14 and 16 of Ref.\cite{ref20-Krems}, respectively.
On the other hand, potential matrix elements are
given as a sum of direct and exchange coupling terms\cite{paper-Krems}:
\begin{equation}
\langle \phi^{M \eta \epsilon}_{\tau_{a}\tau_{b}lm_{l}} | V |
\phi^{M \eta \epsilon}_{\tau'_{a}\tau'_{b}l'm'_{l}} \rangle =
\frac{1}
{ [(1+\delta_{\tau_{a},\tau_{b}})(1+\delta_{\tau'_{a},\tau'_{b}})]^{1/2}}
\times \nonumber
\end{equation}
\begin{equation}
\label{ec8}
\times \left[\langle \tau_a \tau_b l m_l | V | \tau'_a \tau'_b l' m'_l \rangle
+ \eta \epsilon \langle \tau_a \tau_b l m_l | V | \tau'_b \tau'_a l' m'_l
\rangle \right].
\end{equation}
\vspace{.2cm}
\noindent
The interaction potential depends on the total spin resulting from the coupling
of the $s_{a}=s_{b}=1$ spins of the $^{3}\Sigma$ molecules, $S=0,1,2$, and
can be represented as\cite{Tiesinga:93}:
\begin{equation}
\label{ec9}
V(\vec{R},\hat{r}_{a},\hat{r}_{b})=\sum_{S=0}^{2}\sum_{M_{S}=-S}^{S}V_{S}(\vec{R},\hat{r}_{a},
\hat{r}_{b}) |SM_{S}\rangle\langle SM_{S}|
\end{equation}
\vspace{.2cm}
\noindent
where $M_{S}$ is the projection of the total spin, $M_S=m_{s_a}+m_{s_b}$. We use this
representation in order to include directly the singlet, triplet and quintet
{\em ab initio} PESs of Ref.\cite{abi-PES} (an alternative approach was
followed in Ref.\cite{paper-Krems} since the Perugia PES is given as a sum of
a spin-independent and a spin-dependent contribution\cite{Perugia-PES}).
In this way, matrix elements of Eq.\ref{ec8} can be further developed as
\begin{equation}
\langle \tau_a \tau_b l m_l | V | \tau'_a \tau'_b l' m'_l \rangle =
\delta_{M_S,M'_S} \sum_{S=0}^2 (2 S + 1)
\nonumber
\end{equation}
\vspace{-.25cm}
\begin{equation}
\times
\small{\threejm{1}{m_{s_{a}}}{1}{m_{s_{b}}}{S}{-M_S}
\threejm{1}{m_{s'_{a}}}{1}{m_{s'_{b}}}{S}{-M'_S}}
\nonumber
\end{equation}
\vspace{-.25cm}
\begin{equation}
\label{ec10}
\times \langle
n_{a}m_{n_{a}}n_{b}m_{n_{b}}lm_{l}|V_{S}|n'_{a}m_{n'_{a}}n'_{b}m_{n'_{b}}l'm_{l'}\rangle,
\end{equation}
\vspace{.2cm}
\noindent
where $(:::)$ are 3-$j$
symbols. An explicit expression for $\langle
n_{a}m_{n_{a}}n_{b}m_{n_{b}}lm_{l}|V_{S}|n'_{a}m_{n'_{a}}n'_{b}m_{n'_{b}}l'm_{l'}\rangle$
is given in Eq.18 of Ref.\cite{paper-Krems}.
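For readers wishing to reproduce the spin recoupling factors of Eq.\ref{ec10}, the 3-$j$ symbols can be evaluated with a short pure-Python routine based on the Racah formula (integer angular momenta only). This is an illustrative helper, not the code used in the present calculations:

```python
from math import factorial, sqrt

# Wigner 3-j symbol via the Racah sum (integer angular momenta only),
# enough for the spin recoupling factors of Eq. (10).
def wigner_3j(j1, j2, j3, m1, m2, m3):
    if (m1 + m2 + m3 != 0 or not abs(j1 - j2) <= j3 <= j1 + j2
            or abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3):
        return 0.0
    f = factorial
    delta = sqrt(f(j1 + j2 - j3) * f(j1 - j2 + j3) * f(-j1 + j2 + j3)
                 / f(j1 + j2 + j3 + 1))
    pref = sqrt(f(j1 + m1) * f(j1 - m1) * f(j2 + m2) * f(j2 - m2)
                * f(j3 + m3) * f(j3 - m3))
    total = 0.0
    for t in range(max(0, j2 - j3 - m1, j1 - j3 + m2),
                   min(j1 + j2 - j3, j1 - m1, j2 + m2) + 1):
        total += (-1) ** t / (f(t) * f(j3 - j2 + t + m1)
                              * f(j3 - j1 + t - m2) * f(j1 + j2 - j3 - t)
                              * f(j1 - t - m1) * f(j2 - t + m2))
    return (-1) ** (j1 - j2 - m3) * delta * pref * total

# Sanity check: for m_sa = m_sb = 1 (so M_S = 2) only S = 2 contributes,
# and the 3-j orthogonality sum over S equals 1.
print(sum((2 * S + 1) * wigner_3j(1, 1, S, 1, 1, -2) ** 2 for S in (0, 1, 2)))
```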
Close-coupled equations (Eq.\ref{ec7}) are solved by means of a log-derivative
method\cite{Mano,Hybridprop} and using the basis set of Eq.\ref{ec4} in which, as
mentioned above, the asymptotic Hamiltonian is not diagonal.
At the point of imposing scattering boundary conditions and
thus obtaining the scattering $S$-matrix, it is necessary to transform
to a new basis set $\psi^{\eta}_{\zeta_{a}\zeta_{b},l,m_l}$ giving the
eigenstates of the fragments. For each $l,m_l$ block:
\begin{equation}
\left[ \hat{H}_{a} + \hat{H}_{b} \right]
\psi^{M\eta\epsilon}_{\zeta_{a}\zeta_{b} l m_l} = (\varepsilon_{\zeta_a} +
\varepsilon_{\zeta_b}) \psi^{M \eta \epsilon}_{\zeta_{a}\zeta_{b} l m_l},
\label{ec11}
\end{equation}
\vspace{.2cm}
\noindent
where $\varepsilon_{\zeta_{\alpha}}$ is the Zeeman fine structure energy level of
molecule $\alpha$. A unitary transformation of the log-derivative matrix onto
the new basis is performed at the end of the propagation, and then scattering
$S$-matrices and transition $T$-matrices are obtained in a standard
way\cite{paper-Krems}. The integral cross section for a
transition $\zeta_{a}\zeta_{b}\rightarrow \zeta'_{a} \zeta'_{b}$ within a
given $(M,\eta,\epsilon)$ block is finally given as
\begin{eqnarray}
\label{ec12}
\sigma^{M \eta \epsilon}_{\zeta_{a}\zeta_{b}\rightarrow \zeta'_{a} \zeta'_{b}}
=\frac{\pi
\left(1+\delta_{\zeta_{a},\zeta_{b}}\right)}{k_{\zeta_{a}\zeta_{b}}^{2}} \hspace{-.2cm}
\sum_{lm_{l} l'm_{l'}} \hspace{-.2cm}
|T^{M \eta \epsilon}_{\zeta_{a}\zeta_{b}lm_{l};\zeta'_{a}\zeta'_{b}l'm_{l'}}|^{2},
\end{eqnarray}
\vspace{.2cm}
\noindent
where $T$ is the transition matrix and $k_{\zeta_{a}\zeta_{b}}^{2}/(2\mu)=
E- \varepsilon_{\zeta_a} - \varepsilon_{\zeta_b}$ is the translational
energy of the initial channel. In obtaining Eq.\ref{ec12}, integration of the
differential cross section has been restricted over half-space for final
states satisfying $\zeta'_{a} = \zeta'_{b}$ (see Ref.\cite{paper-Krems}).
This is equivalent to dividing by two the cross sections
integrated over full-space to avoid double counting when the state of the
outgoing molecules is the same\cite{ourjpca, Curtiss58}.
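The contraction of Eq.\ref{ec12} is straightforward to sketch in Python; the $T$-matrix block below is a randomly generated placeholder standing in for the output of the coupled-channel propagation:

```python
import numpy as np

# Sketch of Eq. (12): contract one (M, eta, epsilon) block of the T-matrix
# into an integral cross section.  Lengths in bohr; k2 is the squared
# wavenumber of the initial channel.
def cross_section(T_block, k2, identical_initial):
    """T_block indexed by (l m_l) rows and (l' m_l') columns."""
    sym = 2.0 if identical_initial else 1.0   # (1 + delta_{zeta_a, zeta_b})
    return np.pi * sym / k2 * np.sum(np.abs(T_block) ** 2)

# Toy example with a random complex 4x4 block (placeholder data only).
rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
print(cross_section(T, k2=1e-6, identical_initial=True))
```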
\begin{section}{Computation details}
The asymptotic Hamiltonian of Eq.\ref{ec2} is parameterized for $^{17}O_2$ by
means of accurate spectroscopic constants\cite{ref51-Krems}: $B_{e}$=1.353
cm$^{-1}$, $\gamma_{sr}$=-0.00396 cm$^{-1}$ and $\lambda_{ss}$=1.985
cm$^{-1}$. The three lowest states of the $n$= {\em even} manifold are given in
Table \ref{tableI} for a typical value of the magnetic field.
The dependence on the magnetic
field of the combined $|\zeta_a,\zeta_b\rangle$ asymptotic states is depicted in
Fig.\ref{fig1}. In this work, we focus on the initial state
$|\zeta_a,\zeta_b\rangle=|3,3\rangle$, i.e., both molecules are, prior to
interaction, in their lowest {\em lfs} state. Elastic and inelastic integral
cross sections are obtained for translational energies ranging from
10$^{-8}$ to 0.05 K.
As we are dealing with collisions between identical (composite) bosons,
calculations are restricted to the $\eta=+1$ block (the role of nuclear
spin can be ignored, as explained in detail in Ref.\cite{ref25-Krems}).
Note also that to study processes involving identical internal states
(Eq.\ref{ec4}), calculations are constrained to the $\epsilon=+1$ parity (only
even $l$'s in the wavefunction expansion).
The intermolecular interaction is given by the global {\em ab initio}
PES of Bartolomei {\em et al}\cite{abi-PES}, specifically, the one referred
in that work as CC-PT2 PES.
Singlet, triplet and quintet ($S$=0,1,2)
potentials are given\cite{abi-PES} by the spherical harmonic expansion
\begin{equation}
\label{ec13}
V_{S}(\vec{R},\hat{r}_a,\hat{r}_b)=(4\pi)^{3/2} \hspace{-.15cm}
\sum_{\lambda_{a} \lambda_{b} \lambda} \hspace{-.15cm}
V_{S}^{\lambda_{a}\lambda_{b}\lambda}(R)
A_{\lambda_{a} \lambda_{b} \lambda}(\hat{R},\hat{r}_a,\hat{r}_b),
\end{equation}
\vspace{.2cm}
\noindent
where $A_{\lambda_{a} \lambda_{b} \lambda}$ is given as a combination of
spherical harmonics and $\lambda_{a}$, $\lambda_{b}$ and $\lambda$ are even integers
(due to the symmetry of the four identical nuclei). The radial coefficients
$V_{S}^{\lambda_{a}\lambda_{b}\lambda}(R)$ were obtained by means of
quadratures of the supermolecular {\em ab initio} energies over the angular
variables, obtaining a total of 29 coefficients for the quintet PES and 27 for
the singlet and triplet ones.
The PESs are extended asymptotically ($R>$ 19 bohr) using analytical functions
(common to the three multiplicities) based on high level {\em ab
initio} calculations of electrostatic, dispersion and induction long range
coefficients\cite{long-range}. In the following Section, we present a comparison with
calculations using the Perugia PES\cite{Perugia-PES}, which comprises
just four radial terms (for each multiplicity) derived from a
multi-property fitting analysis.
To give a flavor of the similarities/differences
between the two PESs considered, we present in Fig.\ref{fig2} the dependence
on the intermolecular distance of the potential matrix elements between the
{\em lfs} state $|3,3\rangle$ and the (one spin flipping) relaxation channel
$|3,1\rangle$. These matrix elements are relevant to the mechanisms proposed
by Krems and Dalgarno\cite{ref20-Krems} and by Volpi and
Bohn\cite{ref24-Krems}. Note that for initial states approaching in an $s$ wave,
conservation of $M$ forbids $s$ waves in the spin relaxation channels (see
Eq.\ref{ec6} and Ref.\cite{ref24-Krems}). It can be seen that there are some quantitative
differences in the coupling as well as in the long range behavior. A comparison of
properties related to the van der Waals (vdW) coefficient $C^{000}_6$ is
summarized in Table \ref{tableII}.
Cross sections are computed using the code developed
by Tscherbul {\em et al}\cite{paper-Krems}, modified by us to
include the hybrid log-derivative/Airy propagator of Alexander and
Manolopoulos\cite{Hybridprop}. Related routines were taken from the MOLSCAT
code\cite{Molscat}.
In this way, the log-derivative propagator of Manolopoulos\cite{Mano} is used
in the strongly coupled region (from 4.5 $a_{0}$ to 40.8 $a_{0}$) with a
fixed short step (0.04 $a_0$),
whereas the Airy propagator of Ref.\cite{Hybridprop} is used for the long
range region (from 40.8 $a_{0}$ to 202 $a_{0}$) with a variable step size
(the ratio between adjacent step sizes being 1.05).
Comparing with the
original code of Tscherbul {\em et al}, where only the log-derivative
propagator was used, we found that the errors are less than $0.5\%$ while the new
propagation is about 10 times faster due to the smaller number of
integration steps as well as the use of the computationally less expensive
Airy propagator.
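The origin of the speed-up can be illustrated by simply counting radial steps on the two grids quoted above, assuming for illustration that the variable Airy grid starts from the same 0.04 $a_0$ step:

```python
# Back-of-the-envelope step count: (a) fixed 0.04 a0 steps over the whole
# range versus (b) fixed steps up to 40.8 a0 plus geometrically growing
# Airy steps (ratio 1.05) out to 202 a0.  Grid limits are those quoted in
# the text; the initial Airy step is an assumption for illustration.
def fixed_steps(r0, r1, h):
    return int(round((r1 - r0) / h))

def airy_steps(r0, r1, h0, ratio):
    n, r, h = 0, r0, h0
    while r < r1:
        r += h
        h *= ratio
        n += 1
    return n

n_logd_only = fixed_steps(4.5, 202.0, 0.04)
n_hybrid = fixed_steps(4.5, 40.8, 0.04) + airy_steps(40.8, 202.0, 0.04, 1.05)
# The hybrid grid needs far fewer steps, and each Airy step is cheaper.
print(n_logd_only, n_hybrid)
```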
The total wave function is expanded using a basis set comprising
three rotational levels ($n_{a}$, $n_{b}$=0, 2, 4) and four partial
waves ($l$=0, 2, 4, 6), equal to that employed in Ref.\cite{paper-Krems}. Although
exact positions of the resonances might change with an increase of the basis
size, this basis is sufficient to retrieve the main features
of the collision dynamics.
Regarding the convergence of the cross sections with the
projection of the total angular momentum, $M$, it is found that for translational
energies lower than $10^{-4}$ K, just the $M=2$ block calculation is
sufficient, while for larger energies, five blocks ($M=$0-4) have to be summed
up. For a single energy and magnetic field calculation, typical run times are
of about 18 and 90 hours, respectively.
\end{section}
\section{Results and discussion }
\vspace*{-.2cm}
We present first the results concerning the $B$-field dependence at very low
energies and, in a subsequent section, we report those related to the translational
energy dependence, including the transition from the ultracold to the cold
regimes.
\vspace*{-.35cm}
\subsection{Magnetic field dependence at 1 $\mu$ K}
\vspace*{-.2cm}
The magnetic-field dependence of the {\em ab initio} and Perugia cross sections
for the {\em lfs} state $|3,3\rangle$ at 1 $\mu$K is summarized in
Fig.\ref{fig3} (panels a and b). In Fig.\ref{fig3}.c we report the
elastic-to-inelastic ratio, $\gamma$, more specifically,
the ratio between the elastic cross section and those inelastic ones leading
to untrapped states: $|\zeta'_a,\zeta'_b\rangle=|3,1\rangle,|2,2\rangle,
|2,1\rangle$ and $|1,1\rangle$. Note that new calculations with the Perugia
PES were performed using the same basis set as with the {\em ab initio} PES
(there are some quantitative changes between present calculations and those
given in Fig.3 of Ref.\cite{paper-Krems}, where a smaller basis was employed). There are
various noticeable differences between the two PESs.
On the one hand, {\em ab initio} elastic and inelastic cross sections
(Figs.\ref{fig3}.a,b) are much larger than the Perugia ones and,
in addition, they exhibit more marked Feshbach resonance structures.
On the other hand, although there are large variations of the
elastic-to-inelastic ratio with the magnetic field, it can be seen that
both PESs produce values which, on average, are of the same order of
magnitude. The cases of very low fields ($B<$ 50 G), where $\gamma$ is much
larger for the Perugia PES, and around 1000 G, where the {\em ab initio} value
becomes very large, are discussed in more detail below.
We discuss first the background behavior of the cross sections of
Fig.\ref{fig3}. The elastic cross sections correspond to a background
scattering length, $a_{bg}$, of about 118 and 32 $a_0$ (in absolute value),
for the {\em ab initio} and Perugia PESs, respectively. These quantities are
larger than the scattering lengths purely due to the vdW
potential\cite{Gribakin:93}, $\overline{a}$, of 22 and 24 $a_0$, respectively.
The particularly large value of the {\em ab initio} elastic cross section
can be explained by the existence of a close quasibound state varying
with magnetic field at the same rate as the entrance channel.
Regarding inelastic cross sections, the {\em ab initio} one is on average
about 10 times larger than the Perugia result.
This difference can be qualitatively rationalized by resorting to the analytic van
der Waals theory\cite{Julienne:06,Gao:98b}, which takes the solutions of the vdW
potential\cite{Gao:98a} as the reference for the multichannel quantum
defect theory\cite{Julienne:89}. A key parameter in that approach is the
short range squared amplitude of the entrance channel wave function, which
near threshold is proportional to\cite{Julienne:06}
\begin{equation}
\lim_{k_0\rightarrow 0} C_{bg}(k_0)^{-2} = k_0
\overline{a} \left[ 1+\left(1-\frac{a_{bg}}{\overline{a}}\right)^2 \right],
\label{ec15}
\end{equation}
\noindent
$k_0$ being the wavenumber of the incoming channel. Since
inelastic cross sections are proportional to
$C_{bg}(k_0)^{-2}$\cite{Julienne:89}, Eq.\ref{ec15} implies that the
value of $a_{bg}$ affects the threshold behavior of the inelastic cross
sections. It follows, then, that the very large {\em ab initio}
inelastic cross sections are explained by the magnitude of the corresponding
background
scattering length. Within this framework, one can expect that the
elastic-to-inelastic ratio becomes less sensitive to $a_{bg}$ than the cross
sections themselves, since both elastic and inelastic cross sections are
approximately proportional to $a_{bg}^2$. This is the result of
Fig.\ref{fig3}.c, where the average value of $\gamma$ is about the same for
both potentials.
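A worked example is helpful here: evaluating the factor of Eq.\ref{ec15} with the scattering lengths quoted above (and assuming positive $a_{bg}$ for illustration, since only its magnitude is given) reproduces the order-of-magnitude difference between the inelastic cross sections of the two PESs:

```python
# Worked example of Eq. (15): the short-range amplitude factor
#   abar * [1 + (1 - a_bg/abar)^2]
# evaluated with the background and vdW scattering lengths quoted in the
# text (all in a0).  The sign of a_bg is assumed positive for illustration.
def threshold_factor(a_bg, a_mean):
    return a_mean * (1.0 + (1.0 - a_bg / a_mean) ** 2)

f_abinitio = threshold_factor(118.0, 22.0)   # ab initio PES
f_perugia = threshold_factor(32.0, 24.0)     # Perugia PES
# The ratio is of order 10, consistent with the inelastic cross sections.
print(f_abinitio / f_perugia)
```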
We now turn to discuss the resonant structures
of Fig.\ref{fig3}. At this point, it is convenient to mention
the work of Hutson\cite{Hutson:07} who analyzed the threshold behavior of
Feshbach resonances in the presence of inelastic scattering. He found
that, in contrast to the case of purely elastic scattering, resonance peaks
may be significantly suppressed and, in this way,
the collisional process may become insensitive to the details of the
potential. With this in mind, the profiles obtained in Fig.\ref{fig3} are
rather unexpected given the considerable anisotropy of the $O_2-O_2$ interaction.
In connection with this issue, let us digress for a while and study
the resonance patterns for a purely elastic scattering event, as is the
case of the magnetic field dependence of the lowest {\em high field seeking}
({\em hfs}) state $|1,1\rangle$ (see Fig.\ref{fig1}). The result for the {\em
ab initio} PES at 1 $\mu$K, using a reduced basis ($n_{max}$=4, $l_{max}$=4), is
shown in Fig.\ref{fighfs} and can be directly compared with Fig.4 of
Ref.\cite{paper-Krems}. For both PESs, a high density of very pronounced
resonances is obtained. For the {\em ab initio} PES there is a slightly larger
number of peaks, and some of them are wider. Also, the baseline
of the {\em ab initio} cross section is
much larger than the Perugia one, as occurs for the {\em lfs} state.
A similar density of quasibound states is expected when the entrance channel
is the $lfs$ state, but the presence of inelastic channels substantially modifies
the resonance lineshapes\cite{Hutson:07}. To show this, it is convenient to
write down the behavior of the $S$ matrix in the neighborhood of an isolated
resonance\cite{Feshbach:58,Hutson:07},
\begin{equation}
S_{jk}(E) = S^{bg}_{jk} - i \frac{g_{Ej}g_{Ek}}{E-E_r + i \Gamma_{E}/2},
\label{ec16}
\end{equation}
\noindent
where $k$ and $j$ are the incoming and outgoing channels, respectively,
$S^{bg}_{jk}$ is the background $S$ matrix, $E$ is the total energy, $E_r$ is
the resonance position, $\Gamma_{E}$ is the resonance
width, and the (complex) $g_{Ei}$ involve couplings between the resonance and
channel $i$ wavefunctions\cite{Miller:70}, such that the partial width for
channel $i$ is given as $\Gamma_{Ei}=|g_{Ei}|^2$ and $\Gamma_{E}= \sum_i
\Gamma_{Ei}$. A key point in Hutson's argument is that $g_{Ek}$ elements are
proportional to the square root of the incoming channel wavenumber
$k_0^{1/2}$. Then, as $k_0$ decreases and if the resonant state is also
coupled to inelastic channels, the radius of the circle
described by $S_{jk}$ drops to zero and peaks in the cross sections become
significantly suppressed\cite{Hutson:07}. The analytical vdW theory gives a more
detailed threshold behavior of the $g_{Ek}$ elements, as they become proportional to
the square root of Eq.\ref{ec15}. Hence, if $a_{bg}$ is sufficiently large,
$g_{Ek}$ will tend to its threshold value (zero) rather slowly, and as a
consequence, more pronounced peaks in the cross sections can be obtained. This
explains why we find a marked resonance structure, especially for the {\em ab
initio} PES. Nevertheless, as noted in Ref.\cite{Hutson:07}, a relatively
large ratio between elastic and inelastic partial widths is also needed in
order to obtain pronounced resonance profiles. It is reasonable to expect that,
among all the
quasibound states that should be crossing the $lfs$ state, only some of them
will have particularly large {\em elastic} partial widths, so only a few
marked resonance features will ``survive'', as in fact occurs (Fig.\ref{fig3}).
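The suppression mechanism can be illustrated with a toy model of Eq.\ref{ec16}, in which the elastic partial width carries the $k_0$ threshold scaling while the inelastic width stays finite; all numbers are arbitrary and purely illustrative, not fitted to $O_2$+$O_2$:

```python
# Toy illustration of the peak-suppression argument (Eq. 16): the elastic
# partial width scales as Gamma_el ~ k0 near threshold, while the
# inelastic width stays finite, so the resonant circle traced by S_kk
# shrinks as k0 -> 0.
def peak_height(k0, gamma_el_coef=1.0, gamma_inel=1.0):
    gamma_el = gamma_el_coef * k0          # threshold law for the entrance channel
    gamma_tot = gamma_el + gamma_inel
    return 2.0 * gamma_el / gamma_tot      # diameter of the resonant circle

# The on-resonance excursion shrinks steadily as the collision energy drops.
for k0 in (1.0, 1e-2, 1e-4):
    print(k0, peak_height(k0))
```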
We have just seen that a large $a_{bg}$ enhances the short range couplings
between the resonance and the incoming wavefunctions. In this situation the
dynamics must become very sensitive to the short range region of the potential.
In order to study the role
played by the short range vs. the long-range features of the intermolecular
potential, we have performed a test calculation where the long-range
anisotropy of the potential is switched off. To this end, the {\em ab initio} PES has been
modified by imposing, for $R>$ 19 $a_0$, an exponential decay of all
radial terms of Eq.\ref{ec13} except the isotropic one
$(\lambda_a,\lambda_b,\lambda)= (0 0 0)$. The new cross sections are compared
with those corresponding to the correct long range behavior in
Fig.\ref{fig5}. This figure clearly shows that the resonance structure is
rather insensitive to the long-range anisotropy of the
interaction and, therefore, short range couplings must be playing a dominant
role.
Finally, it is interesting to note from Fig.\ref{fig3}.b that, for the {\em
ab initio} PES, there is a significant suppression of inelastic
scattering for magnetic fields ranging from 750 to 1500 G. This feature must
be related to the prominent resonance at about 600 G and it must be due to
interferences between the background and resonant $S$ matrices leading to
asymmetric line-shapes of the state-to-state cross
sections\cite{Fano:61}. Note that this reduction entails
a considerable increase of the ratio $\gamma$
for a wide range of magnetic fields. A similar behavior (with an even
larger suppression of inelastic scattering) has been
found in $^4$He + $^{16}$O$_2$ magnetic Feshbach resonances\cite{Hutson:09}.
Analogously, it is also worth mentioning that, for the {\em ab initio} results,
the elastic scattering on the left-hand-side of the resonance at about $B$= 30
G is suppressed. This feature, already present in Fig.\ref{fig3}, can be more
clearly seen in Fig.\ref{fig5}, where
the {\em ab initio} elastic cross section becomes very small around 10 G. In
this case, the corresponding ratio $\gamma$ becomes much smaller than
expected (from the well-known effect of suppression of inelastic scattering due to
centrifugal barriers\cite{ref25-Krems,ref24-Krems,paper-Krems}).
\subsection{Translational energy dependence}
In Fig.\ref{figdepE}, the dependence of the cross sections on the
kinetic energy is given for several selected values of the magnetic field. In
agreement with predictions based on the analytical
vdW theory\cite{Julienne:06}, two very different regimes are
noticed for energies larger or smaller than $E_{vdW} \approx$ 10 mK (see
Table \ref{tableII}).
For the higher energy range, elastic and inelastic cross sections exhibit a
weak dependence with the field, the Perugia ones
being larger than their {\em ab initio} counterparts, consistent with
previous studies at higher energies\cite{ourjpca}. For
energies lower than the crossover ($E_{vdW}$), cross sections become more
dependent on the magnetic field. This is mainly due to the effect of the
resonances in the ultracold regime, but in the case of the Perugia PES,
suppression of inelastic cross
sections at low fields (due to the centrifugal
barriers\cite{paper-Krems,ref25-Krems}) also plays a role.
It is interesting to highlight that a relatively high value of the
elastic-to-inelastic ratio has been obtained between 1 and 10 mK in the {\em ab
initio} calculation at 1000 G (Fig.\ref{figdepE}.c). This result is related
to the asymmetry of the lineshape and the suppression of spin-changing
processes on the right-hand-side of the resonance at 600 G and 1 $\mu$ K,
discussed above (Fig.\ref{fig3}).
A more detailed study of the {\em ab initio} cross sections for low
values of the field ($B\le$ 50 G)
is given in Fig.\ref{figres}. A striking dependence on
$B$ is noticed for energies just below 10 mK. Between 1 and 10 mK,
complicated resonance structures are seen which are particularly acute for
the elastic cross section. These features are related to the
prominent resonance around 30 G at much lower energies (reported in
Fig.\ref{fig3} and more clearly seen in Fig.\ref{fig5}). In other words, they are
expressions, at several different energies and magnetic fields, of the same
quasibound state. For instance, note the
resemblance between the asymmetric line shapes of the elastic cross section
at $B$= 1 and 5 G and between 1 and 10 mK (Fig.\ref{figres}.a), with the magnetic
field dependence at much lower energies for fields $B<$ 30 G, shown in Fig.\ref{fig5}. A
detailed tracking of these resonances would involve non-trivial lineshape fittings
and has not been attempted here. On the other hand, it should
be noted that, for the range of magnetic fields of Fig.\ref{figres} and up to
translational energies of at least 1 mK, spin-changing collisions
should be suppressed due to the existence of centrifugal barriers in all
outgoing channels\cite{paper-Krems,ref25-Krems}. In Fig.\ref{figres}.b it can
be seen that, except for the lowest value of $B$ (1 G), such a suppression does not
occur, in contrast with the results using the Perugia
PES (see Ref.\cite{paper-Krems} and Fig.\ref{figdepE}.b). This must be due to
a significant tunneling through the centrifugal barriers for energies/fields
close to the resonance. Consequently, the ratios $\gamma$ are particularly
small for this range of fields (Fig.\ref{figres}.b).
A further analysis of the sensitivity of the elastic-to-inelastic ratio to the
details of the PES has been performed. We have artificially modified the
anisotropy of the present {\em ab initio} PES by multiplying all the terms in
the spherical harmonic expansion (Eq.\ref{ec13}), except the isotropic ones,
by a factor $\beta$ ranging from 0.98 to
1.02. In Fig.\ref{fig8} we show the results for different translational
energies and magnetic fields. It can be seen that, while for 20 mK there is
no strong variation of $\gamma$ with $\beta$, for lower
energies (1 mK and 1 $\mu$K), this ratio changes tremendously with the
anisotropy of the potential. In the new calculations ($\beta=$ 0.98 and 1.02),
no nearby resonances appear for the energies/fields considered and hence,
results are ``more standard'', i.e., very large values of $\gamma$
are now attained for low values of the field ($B<$ 50 G), in agreement with
the expected suppression of inelastic scattering, but smaller $\gamma$'s are
obtained for $B$= 1000 G. However, note that, contrary to first-order
perturbation theory, the largest ratios are obtained with the most
anisotropic PES ($\beta$= 1.02).
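The $\beta$-scaling test itself is trivial to express in code; the coefficient table below is a toy stand-in, not the actual CC-PT2 expansion data:

```python
# Minimal sketch of the beta-scaling test described above: every radial
# coefficient of the expansion in Eq. (13) except the isotropic
# (0, 0, 0) term is multiplied by beta.  Values here are placeholders.
def scale_anisotropy(coeffs, beta):
    """coeffs maps (lambda_a, lambda_b, lambda) -> radial value (cm^-1)."""
    return {k: (v if k == (0, 0, 0) else beta * v) for k, v in coeffs.items()}

toy = {(0, 0, 0): -100.0, (2, 0, 2): 15.0, (2, 2, 4): -3.0}
print(scale_anisotropy(toy, 1.02))
```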
\section{Concluding discussion}
We have performed a detailed study of cold and ultracold molecule-molecule
collisions in the presence of a magnetic field for a system with a
significant anisotropy such as $O_2$+$O_2$. A thorough comparison has been
made between a high quality {\em ab initio} PES and previous
studies\cite{paper-Krems} where a different PES was used.
Several interesting findings have emerged from this approach regarding the
anisotropy as well as the relative influence of long and short components of
the interaction. For the {\em ab initio} PES, a large background scattering
length gives rise to pronounced resonance structures in the
ultracold regime (translational energies $<$ 10 mK). As a consequence, the
ratio between elastic
and inelastic cross sections, $\gamma$, is very dependent on the
magnetic field as well as on the short range anisotropy of the PES.
Therefore,
quantitative predictions for this important parameter become rather risky.
However, as a general trend, we can indicate that high values of $\gamma$ could be
achieved in the vicinity of asymmetric Fano resonances, or for low fields, $B$
$<$ 50 G. Note that the maximum temperature that can be held in a trap with
such a depth would be of about 1 mK\cite{paper-Krems,Doyle:95}.
A key issue is the large density of quasibound states of the $O_2$+$O_2$
system, best illustrated in the magnetic field dependence of the elastic cross
sections of the lowest high field seeking state. In view of this,
obtaining a large background scattering length does not seem to be a rare event.
The present behavior might be characteristic of
a range of molecule-molecule systems as well: as the number of degrees of freedom
increases, a larger density of quasibound states, including near-threshold
resonances, can be expected\cite{Bohn:02}, which in turn makes the dynamics
richer. Very recently, Suleimanov and
Krems\cite{Suleimanov:11} have proposed an efficient method for locating
Feshbach resonances in external fields. The new
method could be very useful for the comparison of spectral patterns
obtained from different potentials or between different molecular
systems.
\section{Acknowledgments}
We are indebted to Roman V. Krems for encouragement and for giving us essential
insight along several stages of this work. We wish to thank
M. H. Alexander, D. E. Manolopoulos and J. M. Hutson for the use of the Hybrid
Propagator routines of the MOLSCAT code, and M. Bartolomei,
E. Carmona-Novillo and R. Hern{\'a}ndez-Lamoneda for the use of the {\em ab
initio} PES. J.P.-R. acknowledges hospitality in the Department of Chemistry
of UBC (Canada) and support from a predoctoral JAE CSIC grant.
The work has been funded by Ministerio de Ciencia e Innovaci{\'o}n
(Spain, grants CTQ2007-62898-BQU and FIS2010-22064-C02-02).
We also thank CESGA (Spain) for allocation of computing time.
\vspace{.3cm}
\section{Introduction}
Efficient mesh data structures play a fundamental role in a broad range of mesh processing applications in computer graphics, geometric modeling, scientific visualization, geospatial data science, finite element analysis, and, more recently, in data analysis and machine learning.
Although simple problems can be easily modeled on small low dimensional meshes, phenomena of interest might occur only on much larger meshes and in higher dimensions.
Thus, we often require flexibility to deal with increasingly complex meshes including those defined by irregularly connected heterogeneous and/or multidimensional cell types discretizing spaces with complicated topology.
Moreover, as advances in computing capabilities continue to outpace those in memory, it becomes increasingly important to optimize and exploit the mesh locality as we process and locally query it. Such queries are the primary means of interacting with the mesh and have traditionally been posed in terms of a few spatial and topological primitives.
However, while there are simple, intuitive models for representing polygonal surfaces, there are numerous challenges in generalizing these structures to higher dimensions and in scaling to very large meshes.
In this paper, we first introduce the \emph{Stellar decomposition}, a model for topological data structures that supports efficient navigation of the topological connectivity for simplicial complexes and of certain classes of cell complexes, e.g., those composed of quadrilaterals, polygons, hexahedra, prisms and pyramids.
The defining property of a Stellar decomposition is that the complex is broken up into \emph{regions} indexing a collection of vertices of the complex,
and each vertex within a region has sufficient information to locally reconstruct its \emph{star}, i.e., the set of cells from the complex incident in that vertex.
A Stellar decomposition is
\emph{general}, in that it can easily represent arbitrary complexes with a manifold or non-manifold domain,
\emph{scalable} to complexes both in high dimensions and with a large number of cells,
and \emph{flexible}, in that it enables users to defer decisions about which topological connectivity relations to encode.
It, therefore, supports the generation of optimal application-dependent local data structures at runtime.
Due to the locality of successive queries in typical mesh processing applications, the construction costs of these local topological data structures are amortized over multiple mesh operations while processing a local region.
We introduce the \emph{Stellar tree} as a concrete instance of the Stellar decomposition model for spatially embedded complexes.
Stellar trees utilize a hierarchical $n$-dimensional quadtree, or kD-tree, as vertex decomposition, and are easily \emph{tunable} using a single parameter
that defines the maximum number of vertices allowed in each local region of the decomposition.
The source code for our Stellar tree implementation is available in the public domain at~\cite{Fellegara_StellarTree_Github}.
The main contributions of this work are:
\begin{compactitem}
\item The formal theoretical definition of a Stellar decomposition over \emph{Canonical Polytope (CP) complexes},
a class of cell complexes that includes simplicial and cubical complexes of arbitrary dimension,
as well as cells in the finite element `zoo', such as 2D polygons and 3D pyramids and triangle prisms.
\item The definition of the \emph{Stellar tree} as a concrete realization of the Stellar decomposition for spatially embedded complexes.
The decomposition in a Stellar tree is based on a hierarchical spatial index
with a simple tuning parameter to facilitate balancing storage and performance needs.
\item The definition of \emph{Sequential Range Encoding (SRE)}, a compact encoding for the entities indexed by each region of the decomposition.
When applied to CP complexes reindexed by the spatial decomposition of a Stellar tree, SRE yields compressed Stellar trees
with only a small overhead relative to the original CP complex.
%
As demonstrated in Section~\ref{sec:storage}, these results extend to a broad range of CP complexes.
Compressed Stellar trees are competitive with state-of-the-art topological data structures for triangle and tetrahedral complexes
and offer significant improvements for other CP complexes, especially over data structures for general simplicial complexes in 3D and higher dimensions.
\item A streaming mesh processing paradigm for applications defined on a Stellar tree,
where the necessary topological relations can be efficiently generated on demand and cached for repeated processing.
%
As a proxy for larger applications, we describe how the Stellar tree can be used
to generate popular existing topological data structures.
In addition to faster generation times, the reduced memory requirements of a Stellar tree enable generating these data structures
even on machines with limited resources.
\end{compactitem}
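To make the idea behind Sequential Range Encoding concrete before its formal treatment, the following minimal sketch collapses a sorted list of entity indices into maximal runs of consecutive values. The function name and the (first, last) pair layout are illustrative only; the exact per-region encoding used by Stellar trees is defined later in the paper.

```python
# A minimal sketch of the idea behind Sequential Range Encoding (SRE): a
# sorted list of indices is collapsed into maximal runs of consecutive values.

def encode_ranges(sorted_indices):
    """Collapse maximal runs of consecutive indices into (first, last) pairs."""
    ranges = []
    for i in sorted_indices:
        if ranges and i == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], i)   # extend the current run
        else:
            ranges.append((i, i))             # start a new run
    return ranges

# After reindexing, cells indexed by a region tend to be consecutive,
# so long runs compress to a single pair:
print(encode_ranges([4, 5, 6, 7, 12, 13, 20]))   # [(4, 7), (12, 13), (20, 20)]
```

This is why the reindexing step matters: the more spatially coherent the index assignment, the longer the runs and the better the compression.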
The remainder of this paper is organized as follows.
In Sections~\ref{background} and~\ref{related}, we review background notions and related work, respectively.
In Section~\ref{sec:stellar_decomposition}, we define the Stellar decomposition, presenting its components and the mapping functions that relate the indexed entities to the regions of the decomposition.
Then, we describe the encoding of the complex and of the cells of the complex indexed by the regions of the decomposition.
In Section~\ref{sec:stellar_tree}, we define the Stellar tree, a spatio-topological realization of the Stellar decomposition,
as well as an encoding of its hierarchical decomposition.
Section~\ref{sec:stellar_tree_generation} provides an algorithm to generate a Stellar tree
and to reindex the underlying mesh for improved spatial locality and encoding.
In Section~\ref{sec:storage}, we compare the Stellar tree to several state-of-the-art topological data structures
for manifold and non-manifold complexes.
In Section~\ref{sec:general_strategy}, we describe a general mesh processing paradigm that can be followed by any Stellar tree application,
which we employ in Section~\ref{sec:local_topo_rels} to extract local topological features from the Stellar tree
and in Section~\ref{sec:stellar_build_structures} to generate existing topological data structures from a Stellar tree.
We conclude in Section~\ref{sec:stellar_conclusions} with some remarks and directions for future work.
\section{Background notions}
\label{background}
In this section, we review notions related to cell and simplicial complexes,
the basic combinatorial structures for representing discretized shapes.
Throughout the paper,
we use \sDim\ to denote the dimension of the ambient space,
\cDim\ to represent the dimension of the complex
and \tDim\ to denote the dimension of a cell from the complex, where $0 \leq \tDim \leq \cDim$, and typically $\cDim \leq \sDim$.
A $\tDim$-dimensional \emph{cell} in the $\sDim$-dimensional Euclidean space $\eSpace$ is a subset of $\eSpace$ homeomorphic to a closed $\tDim$-dimensional ball $ B^\tDim = \{ x \in \etSpace : \|x\| \leq 1 \}$.
A $\cDim$-dimensional \emph{cell complex} $\cC$ in $\eSpace$ is a finite set of cells with disjoint interiors and of dimension at most $\cDim$
such that the boundary of each $\tDim$-cell $\cell$ in $\cC$ consists of the union of other cells of $\cC$ with dimension less than $\tDim$.
Such cells are referred to as the \emph{faces} of $\cell$.
A cell which does not belong to the boundary of any other cell in $\cC$ is called a \emph{top cell}.
$\cC$ is a \emph{pure} cell complex when all top cells have dimension $\cDim$.
The subset of $\eSpace$ spanned by the cells of $\cC$ is called the \emph{domain} of $\cC$.
An example of a pure cell 3-complex is shown in Figure \ref{fig:complexes_examples}(a): all its top cells are 3-cells (tetrahedra).
Throughout this paper, we are concerned with a restricted class of cell complexes whose cells can be fully reconstructed by their set of vertices,
e.g., via a canonical ordering~\cite{Scho94,Poir98,Rema03,Celes2005,Tautges2010Canonical}.
We refer to this class of complexes as \emph{Canonical Polytope complexes (CP complexes)},
and note that it includes simplicial complexes, cubical complexes, polygonal cell complexes
and heterogeneous meshes with cells from the finite element `zoo' (e.g., simplices, hexahedra, pyramids, and prisms).
In what follows, we denote a CP complex as $\sC$.
An example CP complex is shown in Figure~\ref{fig:complexes_examples}(b), which contains top edges, triangles, quads, and tetrahedra.
A pair of cells in a CP complex $\sC$ are mutually \emph{incident} if one is a face of the other. They are
\emph{$h$-adjacent} if they have the same dimension $\tDim > h$ and are incident in a common $h$-face.
We informally refer to vertices (0-cells) as \emph{adjacent} if they are both incident in a common edge (1-cell)
and, similarly, for \tDim-cells that are incident in a common $(\tDimMinusOne)$-cell.
The \emph{(combinatorial) boundary} of a CP cell $\simplex$ is defined by the set of its faces.
The \emph{star} of a CP cell $\simplex$, denoted as $\simplexstar(\simplex)$, is the set of its \emph{co-faces}, i.e., CP cells in $\sC$ that have $\simplex$ as a face.
The \emph{link} of a CP cell $\simplex$, denoted as $\simplexlink(\simplex)$, is the set of all the faces of cells in $\simplexstar(\simplex)$
that are not incident in $\simplex$.
Two $h$-cells $\simplex$ and $\simplex^{\prime}$ in \sC\ are \emph{$(h{-}1)$-connected}
if there is a sequence, called an \emph{$h$-path}, of $(h{-}1)$-adjacent $h$-cells in \sC\ from $\simplex$ to $\simplex^{\prime}$.
A complex $\sC$ is \textit{$h$-connected} if, for every pair of $h$-cells $\sigma_1$ and $\sigma_2$, there is an $h$-path in \sC\ joining $\sigma_1$ and $\sigma_2$.
We can now define a $\cDim$-dimensional \emph{CP complex} $\sC$ as a set of CP-cells in $\eSpace$ of dimension at most $\cDim$ such that:
\begin{enumerate}
\item $\sC$ contains all CP-cells in the boundary of the CP-cells in $\sC$;
\item the intersection of any two CP-cells in $\sC$ is \emph{conforming}, i.e., it is either empty, or it consists of faces shared by both CP-cells.
\end{enumerate}
\emph{Simplicial complexes} are an important subset of CP complexes whose cells are \emph{simplices}.
%
Let $\tDim$ be a non-negative integer. A \tDim-simplex $\simplex$ is the convex hull of $\tDim + 1$ independent points in $\eSpace$ (with $\tDim \leq \sDim$), called vertices of $\simplex$.
A \emph{face} of a \tDim-simplex $\simplex$ is an $h$-simplex ($0 \leq h \leq \tDim$) generated by $h + 1$ vertices of $\simplex$.
%
Other important notions are those of \emph{manifold}, and of \emph{combinatorial manifold}.
A subset $M$ of the Euclidean space $\eSpace$ is called a \emph{\cDim-manifold}, with $\cDim \leq \sDim$, if and only if every point of $M$ has a neighborhood homeomorphic to the open \cDim-dimensional ball.
The notion of \emph{combinatorial manifold} is defined based on the condition that the link of every vertex is a combinatorial $(\cDimMinusOne)$-sphere. Detecting a combinatorial $(\cDimMinusOne)$-sphere is an undecidable problem for $\cDim > 4$~\cite{Nabutovsky1996Geometry}.
A more practical concept for the purpose of representing CP complexes is that of pseudo-manifold.
A pure $\cDim$-dimensional CP complex $\sC$ is said to be a \emph{pseudo-manifold} when it is $(\cDimMinusOne)$-connected and its $(\cDimMinusOne)$-cells are incident in at most two \cDim-cells.
Informally, we refer to the connected and compact subspace of \eSpace\ not satisfying the manifold conditions as \emph{non-manifold}.
An example of a pure cell complex that is also a pseudo-manifold is shown in Figure \ref{fig:complexes_examples}(c). Note that the pure cell complex shown in Figure \ref{fig:complexes_examples}(a) is not pseudo-manifold, since it is not 2-connected.
Queries on a cell complex are often posed in terms of \emph{topological relations}, which are defined by the adjacencies and incidences of its cells.
Let us consider a CP complex $\sC$ and a $k$-cell $\simplex\in\sC$, with $0\!\le k\!\le \cDim$:
\begin{itemize}
\item a \emph{boundary relation} $\relation{k,p}(\simplex)$, with $0\le p<k$, consists of the $p$-cells of $\sC$ in the boundary of $\simplex$;
\item a \emph{co-boundary relation} $\relation{k,q}(\simplex)$, with $k < q \le \cDim$, consists of the $q$-cells of $\sC$ in the star of $\simplex$;
\item an \emph{adjacency relation} $\relation{k,k}(\simplex)$ consists of the set of $k$-cells of $\sC$ that are adjacent to $\simplex$.
\end{itemize}
Figure~\ref{fig:complexes_examples}(b) illustrates some topological relations on a CP complex.
Boundary relation \relation{3,0} for tetrahedron $\simplex_5$ is the list of its boundary vertices, i.e., \relation{3,0}($\simplex_5$) = $\{v_0,v_2,v_4,v_5\}$.
Similarly, co-boundary relation \relation{0,2} for vertex $v_3$ is the list of its incident 2-cells (triangles and quads),
i.e., \relation{0,2}($v_3$) = $\{\simplex_0,\simplex_1,\simplex_2,\simplex_3,\simplex_4\}$.
Adjacency relation \relation{0,0} for vertex $v_0$, is the list of its adjacent vertices, i.e., \relation{0,0}($v_0$) = $\{v_1,v_2,v_3,v_4,v_5\}$.
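As a further concrete illustration, the following toy sketch evaluates these relations on a small made-up two-triangle mesh (not the complex in the figure), stored in indexed form with only the vertices of each top cell.

```python
# A toy illustration of boundary (R_{2,0}), co-boundary (R_{0,2}) and
# adjacency (R_{0,0}) relations on an indexed representation that stores
# only the vertex tuple of each top cell. The mesh is made up.

triangles = [(0, 1, 2), (1, 3, 2)]   # two triangles sharing edge (1, 2)

def R20(t):
    """Boundary relation R_{2,0}: the vertices of triangle t."""
    return set(triangles[t])

def R02(v):
    """Co-boundary relation R_{0,2}: the triangles in the star of vertex v."""
    return {t for t, tri in enumerate(triangles) if v in tri}

def R00(v):
    """Adjacency relation R_{0,0}: the vertices edge-adjacent to v
    (in a simplicial complex, any two vertices of a triangle span an edge)."""
    return {u for t in R02(v) for u in triangles[t]} - {v}

print(sorted(R02(1)))   # both triangles are incident in vertex 1: [0, 1]
```

Note that R02 here scans all top cells; the point of data structures such as the Stellar tree is precisely to avoid this global scan by localizing the star of each vertex.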
\section{Related work}
\label{related}
In this section, we review the state of the art on topological mesh data structures, hierarchical spatial indexes and data layouts.
\subsection{Topological mesh data structures}
\label{sec:related_topological_ds}
There has been much research on efficient representations for manifold cell and simplicial complexes, especially for the 2D case.
A comprehensive survey of topological data structures for manifold and non-manifold shapes can be found in~\cite{DeFloriani2005Data}.
A topological data structure over a cell complex encodes a subset of its topological relations and supports the efficient reconstruction of local topological connectivity over its cells.
Topological data structures can be classified according to:
\begin{inparaenum}[(i)]
\item the \emph{dimension} of the cell complex,
\item the \emph{domain} to be approximated, i.e., manifolds or
non-manifold shapes,
\item the subset of \emph{topological information} directly encoded, and
\item the \emph{organization} of topological information directly encoded, i.e., explicit or implicit data structures.
\end{inparaenum}
The explicit cells and connectivity relations can either be allocated on demand using small local structures, or contiguously, e.g. using arrays.
In the former case, pointers are used to reference the elements, which can be useful when the data structure needs to support frequent updates
to the underlying cells or their connectivity.
In the latter case, indexes of the cells within the array can be used to efficiently reference the elements.
Recently, \cite{Nguyen2017Cache} proposed an approach to reconstruct topological relations on demand and to cache them for later reuse.
Broadly speaking, topological data structures can be categorized as \emph{incidence-based} or \emph{adjacency-based} representations.
Whereas incidence-based data structures primarily encode their topological connectivity through incidence relations over all cells in the complex,
adjacency-based data structures primarily encode their connectivity through adjacency relations over the top cells of the complex.
The \emph{Incidence Graph} ($IG$)~\cite{Edelsbrunner1987Algorithms} is the prototypical incidence-based data structure for cell complexes in arbitrary dimension. The IG explicitly encodes all cells of a given cell complex $\cC$,
and for each $p$-cell $\cell$, its immediate boundary and co-boundary relations (i.e., \relation{p,p{-}1} and \relation{p,p{+}1}).
%
Several compact representations with the same expressive power as the IG have been developed for simplicial complexes~\cite{DeFloriani2004data,DeFloriani2010dimension},
which typically require less than half the storage space as the IG~\cite{Canino2014Representing}.
Several incidence-based data structures have been developed for manifold 2-complexes, which encode the incidences among edges.
The \emph{half-edge} data structure~\cite{Mantyla1988Introduction} is the most widely used data structure of this type~\cite{CGAL,OML15}.
\emph{Combinatorial maps}~\cite{Lienhardt1994N,Damiand2014Combinatorial} generalize this notion to higher dimensions.
\emph{Indexed data structures}~\cite{Lawson1977Software} provide a more compact alternative by explicitly encoding only vertices, top cells
and the boundary relations from top cells to their vertices. Since the cells of a CP complex are entirely determined by their ordered list of vertices,
this provides sufficient information to efficiently extract all boundary relations among the cells, but not the co-boundary or adjacency relations.
The \emph{Indexed data structure with Adjacencies ($IA$)}~\cite{Paoluzzi1993Dimension,Nielson1997Tools}
extends the indexed representation to manifold simplicial complexes of arbitrary dimension by explicitly encoding adjacency relation \relation{\cDim,\cDim}, giving rise to an adjacency-based representation.
All remaining topological relations can be efficiently recovered if we also encode
a top simplex in the star of each vertex (i.e., a subset of relation \relation{0,\cDim}).
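The co-boundary reconstruction in such adjacency-based structures can be sketched as follows. The fan mesh, variable names and adjacency layout below are hypothetical; the point is only that, in a manifold mesh, every top cell in the star of a vertex is reachable from one seed cell by repeatedly crossing shared faces incident in that vertex.

```python
# A hedged sketch of how an adjacency-based (IA-like) representation recovers
# the star of a vertex from: (a) each top cell's vertex list, (b) adjacency
# relation R_{2,2} between top cells, and (c) one incident top cell per
# vertex (a subset of R_{0,2}). The mesh below is made up.

triangles = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]   # a fan around vertex 0
adjacency = {0: [1], 1: [0, 2], 2: [1]}         # R_{2,2}: shared-edge neighbors
seed      = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2}      # one incident triangle per vertex

def vertex_star(v):
    """Flood-fill over adjacent triangles incident in v, starting at the seed."""
    star, stack = set(), [seed[v]]
    while stack:
        t = stack.pop()
        if t in star or v not in triangles[t]:
            continue
        star.add(t)
        stack.extend(adjacency[t])
    return star
```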
The \emph{Corner-Table (CoT)} data structure~\cite{Rossignac20013D} is also adjacency-based.
It is defined only for
triangle meshes, where it has the same representational power as the IA data structure.
It uses \emph{corners} as a conceptual abstraction to represent individual vertices of a triangle
and encodes topological relations among corners and their incident vertices and triangles.
%
Several efficient extensions of the Corner-Table data structure have been proposed
that exploit properties of manifold triangle meshes~\cite{Gurung2011SQuad,Luffel2014}.
%
The \emph{Sorted Opposite Table (SOT)} data structure \cite{Gurung2009SOT} extends the Corner-Table data structure to tetrahedral meshes
and introduces several storage optimizations.
Most notably, the SOT supports the reconstruction of boundary relation \relation{\cDim, 0} from co-boundary relations \relation{0,\cDim} (implicitly encoded) and \relation{\cDim,\cDim} relations (explicitly encoded), reducing its topological overhead by nearly a factor of two.
Since modifications to the mesh require non-local reconstructions of the associated data structures,
this representation is suitable for applications on static meshes.
%
The \emph{Generalized Indexed data structure with Adjacencies (\iastar\ data structure)}~\cite{Canino2011IA} extends the representational domain of the IA data structure to arbitrary non-manifold and mixed dimensional simplicial complexes.
The \iastar\ data structure is compact, in the sense that it gracefully degrades to the IA data structure in locally manifold neighborhoods of the mesh, and has been shown to be more compact than incidence-based data structures, especially as the dimension increases~\cite{Canino2014Representing}.
A detailed description can be found in Section \ref{sec:storage_other_structures}.
The Simplex tree \cite{Boissonnat2014simplex} also encodes general simplicial complexes of arbitrary dimension.
It explicitly stores all simplices of the complex within a \emph{trie}~\cite{Fredkin1960Trie} whose nodes are in bijection with the simplices of the complex.
It has been implemented in the \emph{GUDHI} library \cite{GUDHI}.
We provide a detailed description of this data structure in Section~\ref{sec:storage_other_structures}.
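The trie principle behind the Simplex tree can be sketched as follows. This toy version, built on plain nested dictionaries, is only a conceptual illustration: it omits the sibling lists and additional pointers of the actual GUDHI implementation.

```python
# A minimal sketch of the trie principle behind the Simplex tree: each
# simplex, identified with its sorted vertex tuple, corresponds to one
# root-to-node path, so all simplices of the complex are stored explicitly.

from itertools import combinations

def insert_with_faces(trie, simplex):
    """Insert a simplex and all of its faces (every non-empty vertex subset)."""
    verts = sorted(simplex)
    for k in range(1, len(verts) + 1):
        for face in combinations(verts, k):
            node = trie
            for v in face:
                node = node.setdefault(v, {})

trie = {}
insert_with_faces(trie, (0, 1, 2))   # a triangle together with all its faces
```

Because every simplex of every dimension gets its own path, the number of nodes grows with the total number of simplices rather than with the top simplices alone, which is the storage trade-off discussed above.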
Boissonnat\etal~\cite{Boissonnat2017} also proposed two top-based data structures targeting a compact Simplex tree representation.
The \emph{Maximal Simplex Tree} ($MST$) is an induced subgraph of the Simplex tree, in which only the paths corresponding to top simplices are encoded,
but most operations require processing the entire complex.
The \emph{Simplex Array List} ($SAL$) is a hybrid data structure computed from the top simplices of a simplicial complex $\sC$
that improves processing efficiency at the cost of increased storage overhead.
Both the $MST$ and the $SAL$ are interesting structures from a theoretical point-of-view,
but, as described in~\cite{Boissonnat2017}, the model does not currently scale to large meshes and results were limited to complexes with only a few thousand vertices.
Moreover, to the best of our knowledge, there is no public domain implementation currently available.
The \emph{Skeleton-Blocker} data structure~\cite{Attali2012Efficient} encodes simplicial complexes that are close to \emph{flag complexes} (simplicial complexes whose top simplices are entirely determined from the structure of their 1-skeleton, i.e. the vertices and edges of the complex) and has been successfully employed for executing edge contractions on such complexes. It encodes the 1-skeleton
and the \emph{blockers}, simplices that are not in \sC, but whose faces are.
Its generation procedure is computationally intensive for general simplicial complexes since
identifying the \emph{blockers} requires inserting simplices of all dimensions.
%
We compare the Stellar tree representation with the IA, CoT, and SOT data structures, as well as with the Simplex tree
and \iastar\ data structures, in Section~\ref{sec:storage_other_structures}.
\subsection{Hierarchical spatial indexes, optimized data layouts and distributed mesh data structures}
\label{sec:related_spatial_index}
A spatial index is a data structure used for indexing spatial information, such as points, lines or surfaces in the Euclidean space.
Spatial indexes form a decomposition of the embedding space into \emph{regions}. Such a decomposition can be characterized by:
\begin{inparaenum}
\item an \emph{object-based} or a \emph{space-based} criterion for generating the decomposition.
\item an \emph{organization} of the regions, i.e., using a \emph{hierarchical} or a \emph{non-hierarchical} (\emph{flat}) organization.
\end{inparaenum}
These properties are independent, and thus, we can have hierarchical object-based decompositions as well as flat space-based ones.
We now consider how the regions of a decomposition can intersect.
In an \emph{overlapping} decomposition, the intersection between regions can be non-empty in both the interiors and on the boundaries of their domains,
while, in a \emph{non-overlapping} decomposition, intersections can only occur on region boundaries.
We say that a region is \emph{nested} within another region if it is entirely contained within that region.
In the remainder of this section, we focus primarily on \emph{hierarchical spatial indexes},
which can be classified by the dimensionality of the underlying ambient space and by the types of entities indexed.
Hierarchical spatial indexes for point data are provided by \emph{Point Region (PR)} quadtrees/octrees and kD-trees~\cite{Samet2006Foundations}.
In these indexes, the shape of the tree is independent of the order in which the points are inserted, and the points are only indexed by leaf blocks.
The storage requirements of these data structures can be reduced by allowing leaf blocks to index multiple points, as in the \emph{bucket PR} quadtree/octree~\cite{Samet2006Foundations}, whose \emph{bucketing threshold} determines the number of points that a leaf block can index before it is refined.
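The bucketing behavior can be sketched as follows. The class layout, split policy and threshold default in this 2D toy are illustrative, not a specific published implementation; degenerate inputs (e.g., many coincident points) are not handled.

```python
# A compact sketch of bucket PR quadtree insertion in 2D: a leaf indexes at
# most `bucket` points and is refined into four children when that bucketing
# threshold is exceeded, redistributing its points among the children.

class Node:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half   # block center and half-width
        self.points, self.children = [], None        # leaf until refined

    def child_for(self, p):
        return self.children[(p[0] >= self.cx) + 2 * (p[1] >= self.cy)]

    def insert(self, p, bucket=2):
        if self.children is not None:                # internal block: recurse
            self.child_for(p).insert(p, bucket)
            return
        self.points.append(p)
        if len(self.points) > bucket:                # threshold exceeded: refine
            h = self.half / 2
            self.children = [Node(self.cx + dx, self.cy + dy, h)
                             for dy in (-h, h) for dx in (-h, h)]
            pts, self.points = self.points, []
            for q in pts:
                self.child_for(q).insert(q, bucket)

root = Node(0.5, 0.5, 0.5)                           # unit square
for p in [(0.1, 0.1), (0.2, 0.2), (0.9, 0.9)]:
    root.insert(p, bucket=2)
```

Note that the final shape of the tree depends only on the point positions and the threshold, not on the insertion order, which is the property exploited by the Stellar tree's single tuning parameter.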
Several data structures have been proposed for spatial indexing of \emph{polygonal maps (PM)}, including graphs and planar triangle meshes.
\emph{PM quadtrees}~\cite{Samet1985Storing} extend the PR quadtrees to represent polygonal maps considered as a structured collection of edges.
While there are several variants (\emph{PM$_1$}, \emph{PM$_2$}, \emph{PM$_3$} and the randomized \emph{PMR)}, which differ in the criterion used to refine leaf blocks, all maintain within the leaf blocks a list of intersecting edges from the mesh.
The \textit{PM$_2$-Triangle quadtree}~\cite{DeFloriani2008Hierarchical} specializes PM quadtrees over triangle meshes and has been applied to terrain models.
The PM index family has also been extended to \emph{PM-octrees} encoding polyhedral objects in 3D~\cite{Carlbom1985hierarchical,Navazo1989Extended,Samet2006Foundations}, where the subdivision rules have been adjusted to handle edges and polygonal faces of the mesh elements.
Another proposal for triangulated terrain models is that of \emph{Terrain trees} \cite{Fellegara2017Efficient}, a family of spatial indexes for the efficient representation and analysis of large-scale triangulated terrains generated from LiDAR (\emph{Light Detection and Ranging}) point clouds.
\cite{DeFloriani2010Spatial} develops a collection of spatial indexes for tetrahedral meshes called \emph{Tetrahedral trees}.
We note that data structures in the PM family are \emph{spatial data structures} optimized for efficient spatial queries on a complex (e.g., point location, containment and proximity queries) and are not equipped to reconstruct the connectivity of the complex.
In contrast, the \emph{PR-star octree}~\cite{Weiss2011PR} is a topological data structure for tetrahedral meshes embedded in 3D space.
It augments the bucket PR octree with a list of tetrahedra incident in the vertices of its leaf blocks, i.e., those in the \emph{star} of its vertices.
This data structure has been shown to be effective with geometrical and topological applications including
local curvature estimation, mesh validation and simplification~\cite{Weiss2011PR},
morphological feature extraction~\cite{Weiss2013primaldual}
and morphological simplification~\cite{Fellegara2014Efficient}.
In this paper, we have generalized the PR-star data structure to handle a broader class of complexes (CP complexes)
in arbitrary dimensions and with an arbitrary domain (i.e., non-manifold and non-pure complexes).
At the same time, our new leaf block encoding exploits the spatial coherence of the mesh,
yielding a significant storage saving compared to PR-star trees (see Section~\ref{sec:storage_encodings}).
%
Considerable effort has been devoted to reindexing meshes to better exploit their underlying spatial locality,
for example to support streamed processing~\cite{Isenburg2005}, better cache locality~\cite{Yoon05} or compression~\cite{Yoon2007}.
%
Cignoni\etal~\cite{Cignoni2003External} introduce an external memory spatial data structure for triangle meshes embedded in $\eucl^3$.
Whereas our aim is to enable efficient topological operations on the elements of general simplicial and CP complexes,
the objective of~\cite{Cignoni2003External} is to support compact out-of-core processing of massive triangle meshes.
Since their data structure is dimension-specific, by exploiting geometric and topological properties of triangle meshes in $\eucl^3$,
it would be difficult to generalize to more general CP complexes and to higher dimensions.
%
Dey\etal~\cite{Dey2010Localized} use an octree to index a large triangle mesh for localized Delaunay remeshing.
Due to the significant overhead associated with their computations, their octrees are typically shallow,
containing very few octree blocks.
In the context of interactive rendering and visualization of large triangulated terrains and polygonal models,
Cignoni\etal~\cite{Cignoni2003BDAM,Cignoni2004Adaptive} associate patches of triangles with the simplices
of a multiresolution diamond hierarchy~\cite{Weiss2011Simplex}.
%
Stellar decompositions and trees are also related to distributed mesh data structures~\cite{Devine2009,Ibanez2016}, which partition large meshes
across multiple processors for parallel processing e.g.\ in numerical simulations~\cite{mfem-library,Kirk2006,Edwards2010}.
In the latter, each computational \emph{domain} maintains a mapping between its boundary elements and their counterparts on neighboring domains.
To reduce inter-process communication during computation, each domain might also include one or more
layers of elements from other domains surrounding its elements,
typically referred to as \emph{ghost}, \emph{rind} or \emph{halo} layers~\cite{Poirier2000,Lawlor2006,Ollivier2010}.
Although each region of a Stellar decomposition (or tree) can be seen as a computational domain in a distributed data structure with a single ghost layer
(i.e., the elements in the star of its boundary vertices),
Stellar trees are aimed at providing efficient processing on coherent subsets of the mesh (regions),
where users can generate optimized local topological data structures.
In a distributed regime, we envision Stellar trees helping more with fine-grained
(intra-domain) parallelism than with coarse-grained multi-domain partitions.
\section{Stellar decomposition}
\label{sec:stellar_decomposition}
The \emph{Stellar decomposition} is a model for data structures representing \emph{Canonical Polytope (CP) complexes}.
We denote a CP complex as \sC, and its ordered lists of vertices and \topcpcells\ as \sCV\ and \sCT, respectively.
We provide a definition of the Stellar decomposition in Section~\ref{sec:stellar_dec_def},
and describe its encoding in Section~\ref{sec:stellar_dec_enc}.
\subsection{Definition}
\label{sec:stellar_dec_def}
Given a CP complex \sC, a \emph{decomposition} \d\ of its vertices \sCV\ is a collection of subsets of \sCV\ such that every vertex $\vertex \in \sCV$ belongs to at least one of these subsets.
We will refer to the elements of decomposition \d\ as \emph{regions}, and we will denote a region as \R.
A Stellar decomposition \sDec\ defines a map from the regions of a decomposition \d\ of its vertex set \sCV\ to the vertices and \topcpcells\ of complex \sC.
Formally, a Stellar decomposition is defined by three components:
\begin{enumerate}
\item a \emph{CP complex} \sC;
\item a \emph{decomposition} \d\ whose regions cover the vertices of \sC;
\item a \emph{map} \PhiMap\ from regions of \d\ to entities of \sC.
\end{enumerate}
Thus,
a Stellar decomposition is a triple $\sDec = (\sC,\d,\PhiMap)$.
Since \sC\ is entirely characterized by its vertices and \topcpcells, we define map \PhiMap\
in terms of the two components: \PhiMapVert\ defines the mapping to vertices and \PhiMapTop\ defines the mapping to \topcpcells.
For the vertices, we have a map from \d\ to \sCV\ based on an application-dependent \emph{belonging} property.
Formally,
$\PhiMapVert: \d \rightarrow \mathcal{P}(\sCV)$
is a map from \d\ to the powerset of \sCV\ where
\begin{equation*}
\forall \R \in \d, \PhiMapVert(\R) = \{\vertex \in \sCV : \vertex \text{ \emph{belongs} to } \R \}
\end{equation*}
While a region \R\ in \d\ is associated with a subset of vertices from \sCV,
the above definition does not limit a vertex $\vertex \in \sCV$ to be in a single region.
However, we do require that each vertex belongs to at least one region, i.e., we impose the following additional property:
\begin{equation*}
\forall \vertex \in \sCV, \exists \R \in \d | \vertex \in \PhiMapVert(\R).
\end{equation*}
\begin{figure}[t]
\centering
\subfloat[]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/mapping_function_v_start}
}
}
\hfil
\subfloat[]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/mapping_function_v_end_regions}
}
}
\caption{
Example mapping function \PhiMapVert\ in 2D.
An initial set of points (a) is mapped to the regions of an overlapping decomposition \d\ (b).
}
\label{fig:mapping_verts_regions}
\end{figure}
\begin{figure*}[t]
\centering
\subfloat[]{
\resizebox{.25\textwidth}{!}{
\includegraphics{imgs/mapping_function_t_start_regions}
}
}
\hfil
\subfloat[]{
\resizebox{.25\textwidth}{!}{
\includegraphics{imgs/mapping_function_t_end_leaf1_regions}
}
}
\hfil
\subfloat[]{
\resizebox{.25\textwidth}{!}{
\includegraphics{imgs/mapping_function_t_end_leaf2_regions}
}
}
\caption{Mapping function \PhiMapTop\ for the decomposition \d\ from Figure~\ref{fig:mapping_verts_regions}.
Given a triangle mesh (a) and a vertex map \PhiMapVert\ on \d, \PhiMapTop\ maps the triangles in the star of the vertices in \PhiMapVert(\R) to \PhiMapTop(\R).
(b) and (c) highlight the triangles (green) mapped to two different regions (blue) of \d.
}
\label{fig:mapping_tri_regions}
\end{figure*}
Figure~\ref{fig:mapping_verts_regions} illustrates an example decomposition \d\ over a point set
where mapping function \PhiMapVert\ associates points with regions of \d.
The Stellar decomposition gets its name from the properties of its top cell map \PhiMapTop.
For each region \R\ of \d, \PhiMapTop(\R) is the set of all \topcpcells\ of \sCT\ incident in one or more vertices of \PhiMapVert(\R).
In other words, \PhiMapTop(\R) is defined by the union of cells in the \emph{star} of the vertices in \PhiMapVert(\R).
Formally, $\PhiMapTop: \d \rightarrow \mathcal{P}(\sCT)$ is a function from the regions of \d\ to the powerset of \sCT, where
\begin{equation} \label{eq:phitop_blocks}
\forall \R \in \d, \PhiMapTop(\R) = \{\simplex \in \sCT | \exists \vertex \in \relation{k,0}(\simplex) : \vertex \in \PhiMapVert(\R)\}
\end{equation}
Figure~\ref{fig:mapping_tri_regions} illustrates mapping \PhiMapTop\ for two regions of the decomposition of Figure~\ref{fig:mapping_verts_regions}(b) on a triangle mesh defined over its vertices.
We note that \PhiMapTop\ is based on a topological rather than a spatial property.
A \topcp\ \simplex\ is only mapped to a region \R\ when one (or more) of its vertices is mapped to \R\ under \PhiMapVert.
In particular, the mapping does not depend on any spatial overlap between \simplex\ and \R.
To characterize this representation, we define the \emph{spanning number} \ChiSimplex\ of top cells in a Stellar decomposition
as the number of regions to which a \topcp\ is mapped.
\begin{defn} \label{def:chisimplex}
Given Stellar decomposition $\sDec = (\sC,\d,\PhiMap)$,
the \emph{spanning number} \ChiSimplex\ of a top CP cell $\simplex \in \sCT$
is the number of regions in \d\ that map to \simplex. Formally,
\begin{equation} \label{eq:chisimplex}
\forall \simplex \in \sCT,\ \ChiSimplex = | \{ \R \in \d | \simplex \in \PhiMapTop(\R) \} |
\end{equation}
\end{defn}
It is also interesting to consider the \emph{average spanning number} \Chi\
as a global characteristic of the efficiency of a Stellar decomposition
over a complex, measuring the average number of times each \topcp\ is represented.
\begin{defn} \label{def:chi}
The \emph{average spanning number} \Chi\ of a Stellar decomposition \sDec\ is the average number of regions indexing a \topcp\ \simplex.
Formally,
\begin{equation} \label{eq:chi}
\Chi = (\sum_{\simplex \in \sCT} \ChiSimplex) / |\sCT| = (\sum_{\R \in \d} |\PhiMapTop(\R)|) / |\sCT|
\end{equation}
\end{defn}
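As an illustration of Definitions~\ref{def:chisimplex} and~\ref{def:chi}, both quantities can be computed directly from the map \PhiMapTop. The following is a minimal Python sketch (illustrative only; regions are represented as lists of mapped top cell indices, a layout we assume here):

```python
from collections import Counter

def spanning_numbers(phi_top):
    """Spanning number of each top cell: the number of regions
    that map to it (Equation eq:chisimplex).
    phi_top: dict mapping a region id to its list of top cell indices."""
    chi = Counter()
    for cells in phi_top.values():
        chi.update(set(cells))  # each region counts at most once per cell
    return chi

def average_spanning_number(phi_top, num_top_cells):
    """Average spanning number (Equation eq:chi):
    total region-to-cell mappings over the number of top cells."""
    return sum(len(set(cells)) for cells in phi_top.values()) / num_top_cells
```

For example, with two regions mapping to cells $\{0,1,2\}$ and $\{2,3\}$, cell $2$ has spanning number $2$ and the average spanning number over four cells is $5/4$.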
\subsection{Encoding}
\label{sec:stellar_dec_enc}
In this section, we describe how we represent the two components of a Stellar decomposition, providing a detailed description of the data structures for representing a CP complex (subsection \ref{sec:mesh_structure}),
and a compressed encoding for the regions of the decomposition (subsection \ref{sec:leaf_encodings}).
We do not describe how the decomposition \d\ is represented, as this is specific to each concrete realization of the Stellar decomposition model.
\subsubsection{Indexed representation of the CP complex}
\label{sec:mesh_structure}
We represent the underlying CP complex as an indexed complex,
which encodes the spatial position of the vertices and the boundary relation \relation{\tDim,0} of each \ktop\ in \sC.
In the following, we discuss the case of a \cDim-dimensional CP complex \sC\ embedded in \eSpace.
We use an array-based representation for the vertices and top cells of \sC.
Since the arrays are stored contiguously, each vertex \vertex\ has a unique position index \vIndex\ in the \sCV\ array
and, similarly, each \topcp\ \simplex\ in the \sCT\ array associated with its dimension has a unique position index \tIndex.
The \sCV\ array encodes the position of each vertex \vertex\ in \sC, requiring a total of $\sDim|\sCV|$ coordinates.
The \topcpcells\ are encoded using separate arrays \sCTk\ for each dimension $\tDim \le \cDim$ that has \topcpcells\ in \sC.
\sCTk\ encodes the boundary connectivity from its \ktopcpcells\ to their vertices, i.e., relation \relation{\tDim,0}
in terms of the indices \vIndex\ of the vertices of its cells within \sCV.
This requires $|\relation{\tDim,0}(\simplex)| $ references for a top \tDim-cell \simplex,
e.g., (\tDim+1) vertex indices for a \tDim-simplex and $2^{\tDim}$ references for a \tDim-cube.
Thus, the total storage cost of the indexed mesh representation is:
\begin{equation}
\sDim|\sCV| + \sum\limits_{\tDim=1}^d \sum\limits_{\simplex \in \sCTk} |\relation{\tDim,0}(\simplex)|.
\label{eq:storage_indexed_simplex}
\end{equation}
We note that when \sC\ is pure (i.e., its \topcpcells\ all have the same dimension \cDim), the encoding of \sC\ requires only two arrays:
one for the vertices and one for the top cells.
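The indexed representation and its storage cost (Equation~\ref{eq:storage_indexed_simplex}) can be sketched as follows (hypothetical Python; the array layout and names are ours, not the paper's implementation):

```python
class IndexedComplex:
    """Indexed representation of a CP complex: a vertex coordinate
    array (the Sigma_V array) plus one top-cell array per dimension
    (the Sigma_T^k arrays), each cell stored as a tuple of vertex
    indices, i.e. its boundary relation R_{k,0}."""

    def __init__(self, vertices, top_cells):
        self.vertices = vertices    # list of coordinate tuples
        self.top_cells = top_cells  # dict: dimension k -> list of vertex-index tuples

    def storage_cost(self):
        """Coordinates per vertex plus |R_{k,0}| references per top cell."""
        ambient_dim = len(self.vertices[0])
        references = sum(len(cell)
                         for cells in self.top_cells.values()
                         for cell in cells)
        return ambient_dim * len(self.vertices) + references
```

For instance, a pure triangle mesh with 4 planar vertices and 2 triangles costs $2 \cdot 4 + 2 \cdot 3 = 14$ values.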
\subsubsection{A compressed region representation}
\label{sec:leaf_encodings}
In this subsection, we discuss two encoding strategies for the data mapped to each region of the decomposition.
We begin with a simple strategy that explicitly encodes the arrays of vertices and \topcpcells\ mapped to each region
and work our way to a compressed representation of these lists.
Coupling this compressed representation with a reorganization of the vertices and cells of the CP complex
(as we will describe in Section~\ref{sec:stellar_tree_generation})
yields a significant reduction in storage requirements for a Stellar decomposition,
as we will demonstrate in Section~\ref{sec:storage_encodings}.
Recall that under \PhiMap, each region \R\ in \d\ maps to a list of vertices \vR\ and a list of \topcpcells\ \tR\ from the complex \sC.
A straightforward strategy would be to encode lists of vertices and \topcpcells\ that explicitly list the mapped elements for each region \R.
We refer to this as the \datasetName{explicit} Stellar decomposition encoding.
An example of the \datasetName{explicit} encoding for a single region
with six vertices in \vR\ and twenty triangles in \tR\ is shown in Figure \ref{fig:encoding_explicit}.
\begin{figure}[t]
\centering
\subfloat[]{
\resizebox{.45\columnwidth}{!}{
\includegraphics{imgs/encoding_leaf_explicit}
}
}
\hfil
\subfloat[]{
\resizebox{.45\columnwidth}{!}{
\includegraphics[trim={0 1.5cm 0 0},clip]{imgs/encoding_lists_explicit}
}
}
\caption{\datasetName{explicit} encoding for triangles within a region (dotted square).
The lists explicitly encode the 6 vertices and 20 triangles in the region.}
\label{fig:encoding_explicit}
\end{figure}
The above encoding can be very expensive, since a \topcp\ whose vertices lie in multiple regions is redundantly encoded in each of them.
A less obvious shortcoming is that it does not take advantage of the ordering of the elements.
We now consider a \datasetName{compressed} Stellar decomposition encoding that compacts the vertex and \topcpcells\ lists
in each region \R\ by exploiting the \emph{locality} of the elements within \R.
The \datasetName{compressed} encoding reduces the storage requirements of the region lists by replacing runs of consecutively incrementing indices, using a generalization of \emph{run-length encoding (RLE)}~\cite{Held1991Data}.
RLE is a form of data compression in which \emph{runs} of consecutive identical values
are encoded as pairs of integers representing the value and repetition count, rather than as multiple copies of the original value.
For example, in Figure~\ref{fig:rle_example}, the four entries with value `$2$'
are compacted into a pair of entries $[\text{-}2,3]$, where a negative first number indicates the start of a run and its value,
while the second number indicates the remaining elements of the run in the range.
While we do not have such duplicated runs in our indexed representation, we often have incrementing sequences of indices,
such as \{40,41,42,43,44\}, within a local vertex list \vR\ or \topcpcells\ list \tR.
We therefore use a generalized RLE scheme to compress such sequences, which we refer to as \emph{Sequential Range Encoding (SRE)}.
SRE encodes a run of \emph{consecutive} non-negative indices using a pair of integers,
representing the starting index and the number of remaining elements in the range.
As with RLE, we can intersperse runs (sequences) with non-runs in the same list
by negating the starting index of a run (e.g.\ $[\text{-}40,4]$ for the above example).
Thus, it is easy to determine whether or not we are in a run while we iterate through a sequential range encoded list.
A nice feature of this scheme is that it allows us to dynamically append individual elements or runs to an SRE list with no storage overhead.
Furthermore, we can easily \emph{expand} a compacted range in place by replacing its entries with the first two values of the range
and appending the remaining values to the end of the list.
Figure~\ref{fig:sre_example} shows an example SRE list over a list, where, e.g., the sequence \{1,2,3,4\} is represented as $[\text{-}1,3]$.
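The SRE scheme can be sketched in a few lines of Python (an illustrative reimplementation, assuming strictly positive indices so that a negated run start is unambiguous):

```python
def _ends_with_run(sre):
    """True if the last two entries form a run [-start, remaining]."""
    return len(sre) >= 2 and sre[-2] < 0

def sre_append(sre, index):
    """Append a (positive) index to an SRE list in place,
    extending or starting a run when the index is consecutive."""
    if _ends_with_run(sre) and -sre[-2] + sre[-1] + 1 == index:
        sre[-1] += 1                # extend the current run
    elif sre and not _ends_with_run(sre) and sre[-1] + 1 == index:
        sre[-1] = -sre[-1]          # previous entry becomes a run start
        sre.append(1)               # one element beyond the start
    else:
        sre.append(index)           # isolated value
    return sre

def sre_decode(sre):
    """Expand an SRE list back to the plain list of indices."""
    out, i = [], 0
    while i < len(sre):
        if sre[i] < 0:              # run: [-start, remaining]
            start, rest = -sre[i], sre[i + 1]
            out.extend(range(start, start + rest + 1))
            i += 2
        else:
            out.append(sre[i])
            i += 1
    return out
```

For example, appending $1,2,3,4,8,40,41,42,43,44$ produces the list $[\text{-}1,3,8,\text{-}40,4]$, i.e., the runs $[\text{-}1,3]$ and $[\text{-}40,4]$ discussed above with the isolated value $8$ in between.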
\begin{figure}[t]
\centering
\subfloat[Run-length]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/RLE}
}
\label{fig:rle_example}
}
\hfil
\subfloat[Sequential range]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/sequentialRLE}
}
\label{fig:sre_example}
}
\caption{\emph{Run-length} and \emph{sequential range} encodings for non-negative integers.
Runs (a) and sequences (b) are highlighted in yellow.
}
\label{fig:rle}
\end{figure}
In order to compare the \datasetName{explicit} and \datasetName{compressed} representations of the Stellar decomposition,
we introduce a global characteristic that measures the average storage requirements for a \topcp\ in a Stellar decomposition representation.
\begin{defn} \label{def:mu}
The \emph{average reference number} \Mu\ of a Stellar decomposition is the average number of references
required to encode a \topcp\ in the \tR\ lists of the regions in \d.
%
Formally:
\begin{equation} \label{eq:mu}
\Mu = (\sum_{\R \in \d} |\tR|) / |\sCT|
\end{equation}
where $|\tR|$ is the size of the \topcpcells\ list in a region \R.
\end{defn}
In contrast to the average spanning number \Chi, which is a property of the decomposition, the average reference number \Mu\ is a property of how the decomposition is encoded.
An \datasetName{explicit} representation is equivalent to a \datasetName{compressed} representation without any compressed runs,
and, thus, it is always the case that $\Mu \leq \Chi$. In the \datasetName{explicit} representation (i.e.\ without any sequence-based compression), $\Mu = \Chi$,
while in the \datasetName{compressed} representation, \Mu\ decreases as the compression of the \tR\ lists becomes more effective.
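Given the encoded \tR\ list of each region, \Mu\ is a one-line computation (Python sketch; the sample lists below are hypothetical):

```python
def average_reference_number(encoded_top_lists, num_top_cells):
    """Average reference number mu (Equation eq:mu): total length of the
    (possibly SRE-compressed) top cell lists over the number of top cells."""
    return sum(len(lst) for lst in encoded_top_lists) / num_top_cells
```

For a single region indexing 20 triangles, an explicit list of 20 entries gives $\Mu = 1$, while a single SRE run of 2 entries gives $\Mu = 0.1$.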
Figure~\ref{fig:encoding_compressed} illustrates a \datasetName{compressed} representation
of the mesh from Figure~\ref{fig:encoding_explicit}
after its vertex and triangle arrays have been reordered (in an external process) and highlights its sequential ranges,
where \vR\ requires a single run to encode the indexed vertices
and \tR\ requires four sequential runs to encode the indices of its triangles.
\begin{figure}[t]
\centering
\subfloat[]{
\resizebox{.46\columnwidth}{!}{
\includegraphics{imgs/encoding_leaf_compressed}
}
}
\hfil
\subfloat[]{
\resizebox{.5\columnwidth}{!}{
\includegraphics[trim={0 2cm 0 0},clip]{imgs/encoding_lists_compressed}
}
}
\caption{\datasetName{compressed} encoding within a region (dotted square)
after reindexing the vertices and triangles of the mesh from Figure~\ref{fig:encoding_explicit}.}
\label{fig:encoding_compressed}
\end{figure}
\section{Stellar trees}
\label{sec:stellar_tree}
The Stellar decomposition is a general model that is agnostic about how the decomposition is attained and about its relationship to the underlying CP complex.
Thus, for example, we can define a Stellar decomposition using Voronoi diagrams or regular or irregular tilings
covering the vertices of a given CP complex.
In this section, we introduce \emph{Stellar trees} as a class of Stellar decompositions defined over nested spatial decompositions of the CP complex
and discuss some of our design decisions.
Before defining a Stellar tree (Section~\ref{sec:stellar_tree_def}) and its encoding (Section~\ref{sec:stellar_tree_enc}),
we review some underlying notions.
\begin{figure}[t]
\centering
\subfloat[]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/mapping_function_v_start}
}
}
\hfil
\subfloat[]{
\resizebox{.4\columnwidth}{!}{
\includegraphics{imgs/mapping_function_v_end}
}
}
\caption{
A mapping function \PhiMapVert\ over a nested spatial decomposition \d.
The vertices (a) are partitioned into regions by \d's leaf blocks (b).
}
\label{fig:mapping_verts_quad}
\end{figure}
The \emph{ambient space} \aSpace\ is the subset of $\eSpace$ in which the data is embedded.
We consider the region bounding the ambient space to be a hyper-rectangular \emph{axis-aligned bounding block},
which we refer to simply as a \emph{block}.
A \tDim-dimensional \emph{closed} block \B\ in $\eSpace$, with $\tDim \leq \sDim$, is the Cartesian product of \tDim\ closed intervals $[l_i,u_i]$, with $i=1,\ldots \sDim$, where exactly \tDim\ of them are non-degenerate, i.e.,
\begin{math}
\B = \{ (x_1,\ldots,x_n)\in \eSpace \,\, | \,\, x_i \in [l_i,u_i]\}
\end{math}
and $\#\{ i\,\, |\, l_i<u_i\} = \tDim$.
Given two blocks $\B:=[l_i,u_i]$ and $\B':=[l'_i,u'_i]$,
$\B'$ is a \emph{face} of $\B$ if, for each dimension $i$, either their intervals coincide (i.e.\ $l'_i=l_i$ and $u'_i=u_i$)
or the $i$-th interval of $\B'$ is degenerate (i.e.\ $l'_i=u'_i=l_i$, or $l'_i=u'_i=u_i$).
Moreover, $\B'$ is a {\em proper face} of $\B$ if $\B'\neq \B$.
Given a block \B, we refer to its 0-dimensional face of degenerate intervals $x_i = l_i$ as its \emph{lower corner}
and to its 0-dimensional face where $x_i = u_i$ as its \emph{upper corner}.
The above block definition describes \emph{closed} blocks.
It can be useful to allow some faces of \B\ to be \emph{open},
especially on faces of neighboring blocks that overlap only on their boundaries.
A \tDim-dimensional \emph{half-open} block $\B$ in $\eSpace$ is defined as
\begin{math}
\B = \{ (x_1,\ldots,x_n)\in \eSpace \,\, | \,\, x_i \in [l_i,u_i)\}
\end{math}
and $\#\{ i\,\, |\, l_i<u_i\} = \tDim$.
Note that all faces of a half-open block \B\ incident in its lower corner are \emph{closed},
while all other faces of \B\ are \emph{open}.
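The half-open convention yields a simple, unambiguous point-location test, sketched below in Python (naming is ours):

```python
def in_half_open_block(lower, upper, point):
    """True if point lies in the half-open block
    [l_1, u_1) x ... x [l_n, u_n): closed toward the lower corner,
    open toward the upper corner."""
    return all(l <= x < u for l, x, u in zip(lower, point, upper))
```

Two blocks sharing a face then claim each shared boundary point exactly once: a point on the common facet belongs to the block whose lower corner touches it.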
We now focus on \emph{nested decompositions}, hierarchical space-based decompositions
whose overlapping blocks are nested and whose leaf blocks \dL\ (i.e., those without any nested blocks)
form a non-overlapping cover of the ambient space \aSpace.
The nesting relationship of the blocks defines a \emph{containment hierarchy} \h, which can be described using a rooted \emph{tree}.
The root \hR\ of the tree covers the entire ambient space \aSpace;
the leaves \hL\ of the tree correspond to the set of leaf blocks \dL\ of the decomposition;
and the internal nodes \hI\ of the tree correspond to the internal blocks \dI\ of the decomposition.
Nested decompositions can adopt different hierarchical refinement strategies.
Among the most popular are those based on \emph{regular} refinement
and \emph{bisection} refinement of simple primitives (e.g., simplices and cubes).
An $\sDim$-dimensional block \B\ is regularly refined by adding vertices at all edge and face midpoints of \B\
and replacing \B\ with $2^{\sDim}$ disjoint blocks covering \B.
This generates \emph{quadtrees} in 2D, and \emph{octrees} in 3D~\cite{Samet2006Foundations}.
In bisection refinement, a block is bisected along an axis-aligned hyperplane into two blocks, generating \emph{kD-trees}~\cite{Bentley1975Multidimensional}.
\subsection{Definition}
\label{sec:stellar_tree_def}
Since a Stellar tree \sTree\ is a type of Stellar decomposition, it consists of three components:
\begin{inparaenum}
\item a \emph{CP complex} \sC\ embedded in an \emph{ambient space} \aSpace;
\item a \emph{nested decomposition} \d\ covering the domain of \sC; and
\item a \emph{map} \PhiMap\ from blocks of \d\ to entities of \sC.
\end{inparaenum}
The nested decomposition is described by a containment hierarchy \h, represented by a \emph{tree}
whose blocks use the \emph{half-open} boundary convention
to ensure that every point in the domain is covered by exactly one leaf block.
Since Stellar trees are defined over nested spatial decompositions that cover the ambient space,
we customize the vertex mapping function \PhiMapVert\ to partition the vertices of \sC\ according to spatial containment:
each vertex is mapped to its single containing leaf block.
Formally,
\begin{equation} \label{eq:phivert_blocks}
\forall \B \in \dL, \PhiMapVert(\B) = \{\vertex \in \sCV : \vertex \cap \B \neq \emptyset \}
\end{equation}
A two-dimensional example is shown in Figure~\ref{fig:mapping_verts_quad}, where a set of points
are mapped to the leaf blocks of \d\ through \PhiMapVert.
The \topcpcells\ mapping function \PhiMapTop\ for a Stellar tree has the same definition as for the Stellar decomposition (see Equation~\ref{eq:phitop_blocks}).
A consequence of the unique mapping of each vertex in \PhiMapVert\
is that it provides an upper bound on the spanning number of a cell in a Stellar tree.
Specifically, the spanning number \ChiSimplex\ of a CP cell \simplex\ is bounded by the cardinality of its vertex incidence relation \relation{k,0}:
$1 \leq \ChiSimplex \leq |\relation{k,0}(\simplex)|$.
Figure \ref{fig:mapping_tri_quad} shows the mapping \PhiMapTop\ for two blocks of the nested kD-tree decomposition of Figure~\ref{fig:mapping_verts_quad}(b)
over the triangle mesh from Figure~\ref{fig:mapping_tri_regions}.
Once we have defined all the components that form a Stellar tree, we must decide how to generate efficient decompositions of the ambient space in which \sC\ is embedded.
Since the nested decomposition \d, and, consequently, the tree \h\ describing it,
are determined by the number of vertices indexed by a block, we utilize a \emph{bucket PR tree} to drive our decomposition.
This provides a single tuning parameter, the \emph{bucketing threshold}, which we denote as \kv,
that uniquely determines the decomposition for a given complex \sC.
Recall that a block \B\ in a bucket PR-tree is considered \emph{full} when it indexes more than \kv\ vertices (in our case, when $|\PhiMapVert(\B)| > \kv$).
Insertion of a vertex into a full block causes the block to refine
and to redistribute its indexed vertices among its children.
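The insert-and-refine loop of a bucket PR tree can be sketched as follows (2D quadtree case; an illustrative Python sketch, not the actual Stellar tree code):

```python
class Block:
    """A block of a 2D bucket PR quadtree (leaf iff children is None)."""
    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.vertices = []      # indices into the global vertex array
        self.children = None

def _inside(b, p):
    # half-open containment: closed on the lower corner, open on the upper
    return all(l <= x < u for l, x, u in zip(b.lower, p, b.upper))

def insert(block, v, points, kv):
    """Insert vertex index v; refine a leaf that exceeds kv vertices."""
    if block.children is not None:          # internal block: recurse
        for child in block.children:
            if _inside(child, points[v]):
                insert(child, v, points, kv)
                return
        return                              # outside the domain: ignored in this sketch
    block.vertices.append(v)
    if len(block.vertices) > kv:            # overflow: refine and redistribute
        (lx, ly), (ux, uy) = block.lower, block.upper
        mx, my = (lx + ux) / 2, (ly + uy) / 2
        block.children = [Block((lx, ly), (mx, my)), Block((mx, ly), (ux, my)),
                          Block((lx, my), (mx, uy)), Block((mx, my), (ux, uy))]
        pending, block.vertices = block.vertices, []
        for w in pending:
            insert(block, w, points, kv)
```

Note that a vertex exactly on the upper boundary of the root's domain would need special handling (e.g., a closed upper face on the root), which this sketch omits.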
\begin{figure}[t]
\centering
\subfloat[]{
\resizebox{.45\columnwidth}{!}{
\includegraphics{imgs/mapping_function_t_end_leaf1}
}
}
\hfil
\subfloat[]{
\resizebox{.45\columnwidth}{!}{
\includegraphics{imgs/mapping_function_t_end_leaf2}
}
}
\caption{Top cell mapping function \PhiMapTop\ for two blocks (blue)
of the nested decomposition from Figure~\ref{fig:mapping_verts_quad}
on the triangle mesh from Figure~\ref{fig:mapping_tri_regions}.
\PhiMapTop(\B) maps the triangles in the star of the vertices in \PhiMapVert(\B).
}
\label{fig:mapping_tri_quad}
\end{figure}
As such, the domain decomposition of a Stellar tree depends only on the bucketing threshold \kv.
Smaller values of \kv\ yield deeper hierarchies whose leaf blocks index relatively few vertices and \topcpcells,
while larger values of \kv\ yield shallower hierarchies with leaf blocks that index more vertices and \topcpcells.
Thus, \kv\ and the average spanning number \Chi\ of a Stellar tree are inversely correlated:
\Chi\ decreases as \kv\ increases, and \topcpcells\ are, on average, indexed by fewer leaf blocks.
\subsection{Encoding}
\label{sec:stellar_tree_enc}
We represent the containment hierarchy \h\ using an explicit pointer-based data structure,
in which the blocks of \h\ use a type of \texttt{Node} structure that changes state from leaf to internal block
during the generation process of a Stellar tree (described in detail in Section~\ref{sec:stellar_tree_generation}).
We use a \emph{brood-based} encoding~\cite{Hunter1991Classification}, where each block in \h\ encodes a pointer to its parent block and a single pointer to its brood of children. This reduces the overall storage since leaves do not need to encode pointers to their children, and also allows us to use the same representation for n-dimensional quadtrees and kD-trees.
We explicitly encode all internal blocks, but only represent leaf blocks \B\ in \h\ with non-empty maps \PhiMap(\B).
The mapped entities of the CP complex \sC\ are encoded in the leaf blocks \hL\ using the mapping function lists:
\begin{enumerate}
\item a list $\vB$ of vertex indices in \sCV\ defined by \PhiMapVert(\B);
\item a list of lists $\tB$ of top CP cell indices in \sCT\ defined by \PhiMapTop(\B) for each dimension $\tDim$.
\end{enumerate}
Note that each leaf block \B\ encodes the lists of vertices \vB\ and of \topcpcells\ \tB\ in terms of the indices \vIndex\ and \tIndex,
respectively, that identify \vertex\ and \simplex\ in the \sCV\ and \sCT\ arrays.
\begin{figure}[t]
\centering
\resizebox{.8\columnwidth}{!}{
\includegraphics{imgs/hierarchyStellar_details_v3}
}
\caption{Example of Stellar tree hierarchy \h.
The \emph{red} and \emph{blue} rectangles identify the internal blocks \hI\
while the \emph{green} ones represent the leaf blocks \hL\ along with their collections of vertices and \topcpcells.}
\label{fig:hierarchyStellar}
\end{figure}
Thus, the hierarchy \h\ of a Stellar tree requires $7 |\h|$ storage. For each block \B, we have:
\begin{enumerate}
\item three pointers for the hierarchy: one to its parent, one to its brood of children, and the pointer to \B\ itself from its parent's brood;
\item a pointer to a list of vertices \vB\ and the size of this list;
\item a pointer to a list of \topcpcells\ \tB\ and the size of this list.
\end{enumerate}
Figure~\ref{fig:hierarchyStellar} illustrates a simple containment hierarchy representation.
Considering the encodings defined in Section \ref{sec:leaf_encodings}, we can estimate the storage requirements for the \datasetName{explicit} and \datasetName{compressed} Stellar trees.
An \datasetName{explicit} Stellar tree requires a total of $|\sCV|$ references for all such vertex lists, since each vertex is indexed by a single leaf block, and a total of $\Chi|\sCT|$ references for all \topcpcells\ lists.
Thus, the total cost of the \datasetName{explicit} Stellar tree, including the hierarchy (but excluding the cost of the indexed mesh) is:
\begin{equation}
7 |\h| + |\sCV| + \Chi|\sCT|.
\end{equation}
Conversely, in a \datasetName{compressed} Stellar tree,
we can reindex the vertex array \sCV\ in such a way that all vertices mapped to the same leaf block are indexed consecutively (see Section~\ref{sec:verts_reordering}).
Thus, we can encode the \vB\ lists using only two integers per leaf block for a total cost of $2|\hL|$ rather than $|\sCV|$.
Moreover, since leaf blocks no longer need to reference an arbitrary list, these two references can be folded into the block's hierarchical representation for \vB\
(i.e., instead of a pointer to a list and a size of the list, we simply encode the range of vertices in the same space).
As the cost of representing the \tB\ lists is $\Mu|\sCT|$,
the total cost for encoding a \datasetName{compressed} Stellar tree
(excluding the cost of the indexed mesh representation) is:
\begin{equation}
7 |\h| + \Mu |\sCT|.
\end{equation}
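The two cost formulas make the trade-off easy to estimate (trivial Python sketch; the sample numbers below are hypothetical):

```python
def explicit_stellar_cost(num_blocks, num_vertices, num_top_cells, chi):
    """References for an explicit Stellar tree: 7|H| + |Sigma_V| + chi * |Sigma_T|."""
    return 7 * num_blocks + num_vertices + chi * num_top_cells

def compressed_stellar_cost(num_blocks, num_top_cells, mu):
    """References for a compressed Stellar tree: 7|H| + mu * |Sigma_T|."""
    return 7 * num_blocks + mu * num_top_cells
```

For example, a hierarchy of 100 blocks over 10{,}000 vertices and 20{,}000 triangles with $\Chi = 2.0$ and $\Mu = 0.2$ would cost $50{,}700$ references explicitly versus $4{,}700$ compressed.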
\section{Generating a Stellar tree}
\label{sec:stellar_tree_generation}
In this section, we describe how to generate a \datasetName{compressed} Stellar tree from an indexed CP complex \sC\ given a bucketing threshold \kv.
This process consists of four main phases:
\begin{enumerate}
\item generate the nested decomposition \h\ by inserting the vertices of $\Sigma$ into a bucket PR-tree with bucketing threshold \kv;
\item reindex the vertices of \sC\ following a traversal of the leaf blocks of \h\ and compress the \vB\ arrays using SRE compression;
\item insert the \topcpcells\ of \sC\ into \h;
\item reindex the \topcpcells\ of \sC\ based on locality within common blocks of \h\ and SRE-compress the leaf blocks \tB\ arrays.
\end{enumerate}
In the first step, given a user-defined bucketing threshold \kv, we generate a bucket PR-tree
over the set of vertices of $\Sigma$.
Note that this is the only phase of the generation process that depends on the geometry of \sC.
Although we do not maintain the spatial extent of each tree block, we can reconstruct it by tracking the split planes as we descend the tree
(based on a bounding box enclosing \sC\ defined as the root \hR\ of the hierarchy).
This stage can also deal with an input complex that is not already in an indexed representation. For example, if our input is a ``soup'' of CP cells in which each CP cell is specified by an explicit list of coordinates, we can easily generate an indexed representation of the complex as we insert the vertices and generate the decomposition.
The procedure for inserting a vertex \vertex\ with index \vIndex\ in \sCV\ into \h\ is recursive.
We use the geometric position of \vertex\ to traverse the internal blocks to reach the unique leaf block \B\ containing \vertex.
After adding \vertex\ to \B\ (i.e., appending \vIndex\ into the \vB\ array of \B),
we check if this causes an overflow in \B.
If it does, we refine \B\ and reinsert its indexed vertices into its children.
Once all the vertices in \sC\ have been inserted, the decomposition is fixed.
We then reindex the vertices following a traversal of the leaf blocks of \h\ in such a way that
all vertices mapped to a leaf block have a contiguous range of indices in the reindexed global vertex array \sCV\ (as detailed in Section~\ref{sec:verts_reordering}).
Figure~\ref{fig:vertices_reindexing} illustrates a reindexing of the vertices of a triangle mesh in the plane while generating a decomposition with $\kv=4$.
We then insert each \ktopcp\ \simplex, with index \tIndex\ in \sCTk, into all the leaf blocks of \h\ that index its vertices.
This is done by iterating through the vertices of \simplex\ and inserting \tIndex\ into the \tB\ list
of each block \B\ whose vertex map \PhiMapVert(\B) contains at least one of these vertices.
As such, each \ktopcp\ \simplex\ appears in at least one and at most $|\relation{\tDim,0}(\simplex)|$ leaf blocks of \h.
Due to the vertex reindexing of step 2, this operation is extremely efficient.
Determining if a vertex of a given cell lies in a block requires only a range comparison on its index \vIndex\ rather than a geometric \textit{point-in-box} test based on its spatial location.
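After reindexing, the leaf blocks a top cell maps to can be found by pure index arithmetic (Python sketch; we assume inclusive per-leaf index ranges, as in the \vstart/\vend\ values of Figure~\ref{fig:vertices_reindexing}):

```python
def blocks_of_cell(cell_vertices, leaf_ranges):
    """Leaf blocks whose vertex map covers at least one vertex of a cell.
    leaf_ranges: one inclusive (v_start, v_end) index range per leaf block,
    valid once the vertices have been reindexed leaf by leaf."""
    mapped = set()
    for v in cell_vertices:
        for b, (lo, hi) in enumerate(leaf_ranges):
            if lo <= v <= hi:   # range comparison, no point-in-box test
                mapped.add(b)
                break
    return mapped
```

A binary search over the sorted ranges would make each lookup logarithmic; the linear scan above keeps the sketch short.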
Finally, we reindex the \topcp\ arrays \sCT\ to better exploit the locality induced by the vertex-based decomposition and compress the local \tB\ arrays using a sequential range encoding over this new index.
The reindexing and the compression of the \topcpcells\ is obtained following a traversal of the leaf blocks of \h\ in such a way that all \topcpcells\ mapped from the same set of leaf blocks have a contiguous range of indices in the reindexed arrays \sCT.
This last step is detailed in Section~\ref{sec:tops_reordering} and in Appendix \ref{sec:tops_reordering_long}.
As we demonstrate in Section~\ref{sec:storage}, this compression yields significant storage savings.
\input{algorithms/vertices_insertion_and_reordering}
\subsection{Reindexing and compressing the vertices}
\label{sec:verts_reordering}
After generating the nested decomposition \d\ and vertex map \PhiMapVert\ for the Stellar tree,
we reindex the vertex array \sCV\ to better exploit the spatial coherence induced by \d.
At the end of this process, each block of \h\ has a consecutive range of indices within the global vertex array \sCV,
and thus it trivially compresses under SRE to two values
per block, which we denote as \vstart\ and \vend.
This reindexing procedure is organized into three major steps, as outlined in Algorithm~\ref{alg:main_vertices_reordering}.
The first step (described in Algorithm~\ref{alg:get_vertices_ordering}) performs a depth-first traversal of the tree, which generates new indices for the vertices in \sC.
For a leaf block \B, it generates a contiguous range of indices for the vertices in \B, while
for an internal block, it provides a single contiguous index range for the vertices in all descendant blocks.
For example, in Figure~\ref{fig:vertices_reindexing}, after executing Algorithm~\ref{alg:get_vertices_ordering} on leaf block \textit{b},
we have $\vstart = 4$ and $\vend= 7$.
Similarly, at the end of Algorithm~\ref{alg:get_vertices_ordering} the root \hR\ has $\vstart=1$ and $\vend=13$.
The new indices are then incorporated into mesh \sC\ by updating the vertex indices in \relation{\tDim,0} relations for all \ktopcells\ in \sCTk\
(see step 2 of Algorithm~\ref{alg:main_vertices_reordering}) and then permuting the vertices
(see step 3 of Algorithm~\ref{alg:update_array} in Appendix \ref{sec:tops_reordering_long}).
These updates take place in memory without requiring any extra storage.
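As a minimal illustration of step 1 (not the actual implementation; the block layout and names are hypothetical), the following Python sketch performs the depth-first traversal, assigning each leaf a contiguous run $[\vstart,\vend]$ and filling a permutation array that maps old vertex indices to new, spatially coherent ones:

```python
# Hypothetical sketch of the depth-first vertex reindexing: each leaf
# receives a contiguous index run [v_start, v_end], and an internal
# block covers the union of the runs of its descendants.

def reindex_vertices(block, phi_vert, permutation, counter, runs):
    """block: nested lists = internal block, leaf id (str) = leaf block;
    phi_vert: {leaf id: [old vertex indices indexed by that leaf]};
    permutation[old index] = new index; runs: {leaf id: (start, end)}."""
    if isinstance(block, str):                      # leaf block
        start = counter
        for v in phi_vert[block]:
            permutation[v] = counter
            counter += 1
        runs[block] = (start, counter - 1)
    else:                                           # internal block
        for child in block:
            counter = reindex_vertices(child, phi_vert, permutation,
                                       counter, runs)
    return counter
```

For a tree `[["a", "b"], "c"]` with vertex maps `{"a": [2, 0], "b": [5, 3, 1], "c": [4]}`, the leaves receive the runs $(0,1)$, $(2,4)$ and $(5,5)$, and the root implicitly covers $(0,5)$.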
\subsection{Reindexing and compressing the \topcpcells}
\label{sec:tops_reordering}
After inserting the \topcpcells\ of \sCT\ into \h,
we reorder the \topcpcells\ array \sCT\ based on the tree decomposition
and apply SRE compaction to the leaf block lists to generate our \datasetName{compressed} encoding.
This reindexing exploits the spatial coherence of \topcpcells\ that are indexed by the same set of leaves,
translating spatial proximity in the ambient space \aSpace\ into index-space proximity in \sCT.
This procedure is organized into four main phases, as shown in Algorithm~\ref{alg:main_tops_reordering}.
A detailed description can be found in Appendix~\ref{sec:tops_reordering_long}.
The \AlgoName{extract\_leaf\_tuples} procedure (see Algorithm \ref{alg:get_leaf_top_association} in Appendix \ref{sec:tops_reordering_long})
traverses the tree to find the tuple of leaf blocks $\tuple=(\B_1,\dots,\B_n)$ in the tree that indexes each \topcp\ \simplex.
Inverting this relation provides the list of top cells from \sC\ mapped to each such tuple of leaf blocks.
As we iterate through the tree, we ensure that each \topcp\ in the complex is processed by only one leaf block \B,
by skipping the \topcpcells\ whose minimum vertex index \vIndex\ is not in $\PhiMapVert(\B)$.
For example in Figure~\ref{fig:reindexing}(a), triangle 5 is indexed by leaves $a$ and $b$, thus, its tuple is $\tuple = (a,b)$.
The complete list of triangles in tuple $(a,b)$ is $\{2,5,12\}$.
We use this inverted relation in \AlgoName{extract\_cell\_indices} (see Algorithm~\ref{alg:get_tops_reorderd_indexes} in Appendix~\ref{sec:tops_reordering_long}),
to generate a new spatially coherent order for the \topcpcells\ of \sCT.
Specifically, taking the prefix sum of the tuple cell counts
provides the starting index for cells in that group.
For example, when taken in lexicographic order, the first three leaf block tuples, $(a)$, $(a,b)$ and $(a,b,c)$ in Figure~\ref{fig:reindexing}(b),
with 1, 3 and 1 triangles, respectively, get starting indices $1$, $2$ and $5$.
We then assign incrementing indices to the \topcpcells\ of each group.
Thus, for example, the three triangles belonging to tuple $(a,b)$ get indices $\{2,3,4\}$ after this reindexing.
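The prefix-sum reindexing can be sketched as follows (hypothetical Python; the actual algorithm is given in Appendix~\ref{sec:tops_reordering_long}). Groups are visited in lexicographic tuple order and each group's starting index is the running prefix sum of the preceding group sizes:

```python
# Hypothetical sketch of the prefix-sum step: top cells grouped by their
# leaf-block tuple receive contiguous indices, with each group's start
# given by the prefix sum of the counts of the preceding tuples.

def reindex_top_cells(tuple_to_cells):
    """tuple_to_cells: {leaf-block tuple: [old top cell indices]}.
    Returns {old index: new index} (1-based, as in the figure)."""
    new_index = {}
    next_free = 1                          # prefix-sum accumulator
    for tup in sorted(tuple_to_cells):     # lexicographic tuple order
        for old in tuple_to_cells[tup]:
            new_index[old] = next_free
            next_free += 1
    return new_index
```

On the example above, with groups $(a)$, $(a,b)$ and $(a,b,c)$ holding 1, 3 and 1 triangles, the groups start at indices $1$, $2$ and $5$, and the triangles of tuple $(a,b)$ receive indices $\{2,3,4\}$.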
\input{algorithms/tops_reindexing_main}
\begin{figure}[t]
\centering
\subfloat[Original triangles]{
\resizebox{.22\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_orig}
}
}
\hfil
\subfloat[Reindexed triangles]{
\resizebox{.22\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_new}
}
}
\caption{ Top cell indices before (a) and after (b) tuple-based reindexing. }
\label{fig:reindexing}
\end{figure}
Finally, in \AlgoName{compress\_tree\_cells} and \AlgoName{permute\_array}
(see Algorithm~\ref{alg:compress_tree_representation} and~\ref{alg:update_array} in Appendix~\ref{sec:tops_reordering_long}),
we reorder and SRE-compact the \tB\ leaf block lists
and the global \topcpcells\ array \sCT.
We provide an experimental evaluation of the timings for generating a Stellar tree in Appendix~\ref{sec:generation_timings}.
\section{Evaluation of storage costs}
\label{sec:storage}
In this section, we evaluate the Stellar tree's storage costs for CP complexes.
After introducing the datasets used in our experimental evaluation (Section~\ref{sec:stellar_experimental_plan}),
we compare the cost of different Stellar tree encodings (Section~\ref{sec:storage_encodings}),
and compare the Stellar tree against several state-of-the-art topological mesh data structures (Section~\ref{sec:stellar_vs_indep_structs}).
\subsection{Experimental datasets}
\label{sec:stellar_experimental_plan}
\input{tables/table_summary}
We have performed experiments on a range of CP complexes, consisting of
triangle, quadrilateral, tetrahedral and hexahedral meshes in $\eucl^3$,
of pure non-manifold simplicial complexes in higher dimensions generated through a recursive Sierpinski-like refinement process,
and of general higher-dimensional non-manifold simplicial complexes embedded in $\eucl^3$.
The triangle and tetrahedral meshes are \emph{native} models ranging from 4 to 28 million triangles and from 24 to 29 million tetrahedra,
where we use the term native to refer to models from public domain repositories discretizing objects in space.
Since we only had access to relatively small native quadrilateral and hexahedral meshes (with tens to hundreds of thousand elements),
we have generated some larger models ranging from 12 to 125 million elements from our triangle and tetrahedral models.
The generation procedure refines each triangle into three quadrilaterals and each tetrahedron into four hexahedra
by adding vertices at the face centroids.
To experiment with \emph{pure} non-manifold models in higher dimensions, we have generated some models based on a process that we call
\emph{probabilistic Sierpinski filtering}, where we regularly refine all simplices in the complex
and randomly remove a fixed proportion of the generated simplices at each iteration.
For our experiments, we have created 5-, 7- and 40-dimensional models using differing levels of refinement
and a filtering threshold of 65\% yielding pure simplicial complexes with 16.5 million to 258 million \tops.
Finally, to experiment with general simplicial complexes in higher dimensions, we have generated several
(non-pure) \emph{Vietoris-Rips} complexes, which we embed in a lower dimensional space.
A Vietoris-Rips (V-Rips) complex is the \emph{flag} complex defined by a neighborhood graph over a point cloud
whose arcs connect pairs of points with distance less than a user-provided parameter $\epsilon$.
Given the neighborhood graph, the simplices of the V-Rips complexes are defined by its \emph{cliques},
subsets of the graph vertices that form a complete subgraph.
We refer to \cite{Zomorodi2010Fast} for further details.
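As an illustration only (a brute-force sketch, not the algorithm of \cite{Zomorodi2010Fast} used to generate our datasets; the `max_dim` cutoff is our own addition), a V-Rips complex can be computed by building the neighborhood graph and enumerating its cliques:

```python
# Illustrative brute-force Vietoris-Rips construction: connect points
# within distance eps, then take the cliques of the neighborhood graph
# as simplices, up to a fixed dimension max_dim.

from itertools import combinations
from math import dist

def vietoris_rips(points, eps, max_dim):
    """Return all simplices (as sorted vertex-index tuples) of the
    flag complex of the eps-neighborhood graph, up to max_dim."""
    n = len(points)
    edge = {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist(points[i], points[j]) <= eps}
    simplices = [(i,) for i in range(n)]            # 0-simplices
    for k in range(2, max_dim + 2):                 # k vertices -> (k-1)-simplex
        for verts in combinations(range(n), k):
            # a clique: every pair of vertices must be connected
            if all(p in edge for p in combinations(verts, 2)):
                simplices.append(verts)
    return simplices
```

For instance, three mutually close points span a triangle, while a distant fourth point contributes only its vertex.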
For our experiments, we have generated V-Rips complexes
over the vertices of a triangle model (\datasetName{lucy})
and of two tetrahedral models (\datasetName{vismale} and \datasetName{foot}) from our manifold datasets
and set our distance threshold $\epsilon$ to $\{0.1\%,0.5\%,0.4\%\}$ of the bounding box diagonal, respectively.
The generated complexes range from 6.4 million to 64 million \tops\ and from dimensions 7 to 34.
Although the generated datasets are synthetic, they provide a good starting point to demonstrate the efficiency of the Stellar tree in higher dimensions.
For every model, we have built two Stellar trees to compare the dependence of performances on parameter \kv,
which determines the maximum number of vertices that each leaf block of the tree can index.
These two \kv\ values are chosen in order to obtain trees with different characteristics: one extremely deep and another relatively coarse.
In the following, we use \ks\ to refer to the smaller \kv\ value and \kl\ to the larger one.
We have also tried to maintain similar \Chi\ values across the datasets.
We have used different spatial indexes to represent the containment hierarchy \h, based on the dimension \sDim\ of the ambient space \aSpace:
in lower dimensions, we use a quadtree-like subdivision, and thus, we have a quadtree in 2D, an octree in 3D, and so on, up to 6D;
in dimensions higher than 6, we switch to a kD-tree subdivision.
While quadtree-like subdivisions are quite efficient in low dimensions,
the data becomes sparser in higher dimensions,
and is better modeled by kD-trees with fewer spatial splits~\cite{Samet2006Foundations}.
Table~\ref{tab:stellar_tree_index_summary} summarizes the number of elements and the sizes of the spatial decompositions
for the two Stellar tree representations (\ks\ and \kl) of each experimental dataset.
All tests have been performed on a PC equipped with a 3.2 gigahertz Intel i7-3930K CPU with 64 gigabytes of RAM.
The source code is available at \cite{Fellegara_StellarTree_Github}.
\subsection{Comparison among Stellar tree encodings}
\label{sec:storage_encodings}
We begin by comparing the \datasetName{explicit} and \datasetName{compressed} Stellar tree encodings
as well as a \datasetName{vertex-compressed} encoding, similar to the PR-star encoding for tetrahedral meshes~\cite{Weiss2011PR},
that compresses the vertex array but not the top cells arrays.
Table~\ref{tab:storage_encodings} lists the storage costs for the indexed complex representation (`Base Complex')
as well as the additional costs required for the three Stellar tree encodings, in terms of megabytes ($MBs$).
Stellar trees based on the \datasetName{compressed} encoding are always the most compact.
\input{tables/table_storage_encodings_MBs}
We first consider the storage requirements of the hierarchical structures with respect to our tuning parameter \kv\
and observe that while higher values of \kv\ always yield reductions in memory requirements, as expected,
this effect is more pronounced for the \datasetName{compressed} encoding than for the other two encodings.
Specifically, the \datasetName{explicit} and \datasetName{vertex-compressed} \kl\ datasets achieve a 20-50\% reduction in storage requirements
compared to their \ks\ counterparts, while the \datasetName{compressed} \kl\ datasets are 3-10 times smaller than their \ks\ counterparts.
For example, on the triangular \datasetName{neptune} dataset, storage requirements for the \datasetName{explicit} Stellar tree
reduce from 32.0 MB (\ks) to 26.2 MB (\kl), while the \datasetName{compressed} Stellar trees
reduce by more than a factor of 4 from 5.76 MB (\ks) to 1.24 MB (\kl).
Next, comparing the three encodings, we see that compressing the vertices alone, as in the \datasetName{vertex-compressed} representation,
achieves only 10-20\% reduction in storage requirements compared to the \datasetName{explicit} representation, in most cases.
In contrast, compressing the vertices and top cells, as in our \datasetName{compressed} representation,
yields an order of magnitude improvement, requiring around 10-20 times less storage than their \datasetName{explicit} counterparts.
This trend is nicely tracked, for each dataset, by the
difference between its average reference number \Mu\ and its average spanning number \Chi.
Considering the hierarchical storage requirements against those of the original indexed base mesh,
we observe that \datasetName{explicit} Stellar trees require about 50\% to 80\% the storage of the base mesh,
while \datasetName{compressed} Stellar trees require only about 10\% (\ks) and 1\% (\kl) of the storage of the \datasetName{explicit} representation.
Thus, the vast majority of the overall storage costs for the \datasetName{compressed} representation
are due to the underlying indexed mesh, which the Stellar tree representation does not modify.
In the remainder of this paper, we restrict our attention to the \datasetName{compressed} Stellar tree,
which we refer to as \emph{the} Stellar tree.
\subsection{Comparison with other data structures}
\label{sec:storage_other_structures}
Next, we compare the Stellar tree
with several dimension-independent topological data structures
as well as dimension-dependent topological data structures for 2D and 3D simplicial complexes.
Figures~\ref{hist:storage_nD_norm}, \ref{hist:storage_quad_hexa_norm} and \ref{hist:storage_tri_tetra_norm} %
compare the storage requirements for the different data structures
normalized against the storage costs of the indexed base complex.
The analysis compares the topological overhead of the data structures,
and thus, we omit the cost of the geometry of the underlying complex, which is common to all the data structures.
\input{charts/histogram_storage_nD_norm}
\input{charts/histogram_storage_quad_hexa_norm}
\input{charts/histogram_storage_tri_tetra_norm}
\label{sec:stellar_vs_indep_structs}
Based on our analysis of the literature (see Section~\ref{sec:related_topological_ds}),
the most relevant dimension-independent topological data structures that scale to our experimental datasets are:
the Incidence Graph (IG)~\cite{Edelsbrunner1987Algorithms},
the Incidence Simplicial (IS)~\cite{DeFloriani2010dimension},
the Simplex tree~\cite{Boissonnat2014simplex},
and the Generalized Indexed data structure with Adjacencies (\iastar)~\cite{Canino2011IA}.
Since Canino\etal\ \cite{Canino2014Representing} demonstrated that the \iastar\ data structure is more compact than the IG and the IS data structures for both low and high-dimensional datasets, we restrict our comparisons to the \iastar\ and Simplex tree data structures.
The \emph{\iastar\ data structure} was originally defined for dimension-independent simplicial complexes; in this work, we extend it to dimension-independent CP complexes.
It explicitly encodes all vertices and \ktopcpcells\ in $\Sigma$, with $0 < \tDim \leq \cDim$, as well as the following topological relations:
\begin{compactenum}[(i)]
\item boundary relation $\relation{\tDim,0}(\simplex)$, for each \ktopcp\ $\simplex$;
\item adjacency relation $\relation{\tDim,\tDim}(\sigma)$, for each \ktopcp\ $\simplex$;
\item co-boundary relation $\relation{\tDimMinusOne,\tDim}(\tau)$, for each non-manifold (\tDimMinusOne)-cell $\tau$ bounding a \ktopcp;
\item partial co-boundary relation $\partialrelation{0,\tDim}(\vertex)$,
for each vertex $\vertex$, consisting of one arbitrarily selected \ktopcp\ $\simplex$
from each \emph{\tDim-cluster} in the star of \vertex.
A \tDim-cluster is a ($\tDim{-}1$)-connected component of the star of $v$ restricted to its top CP $k$-cells.
\end{compactenum}
Note that for pure CP complexes, co-boundary relation $\relation{\tDimMinusOne,\tDim}$ is empty.
Further, for pseudo-manifold complexes, the partial vertex co-boundary relation $\partialrelation{0,\tDim}$ has cardinality 1,
and the \iastar\ data structure is identical to the IA data structure~\cite{Paoluzzi1993Dimension}.
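For intuition, adjacency relation (ii) can be recovered from boundary relation (i) by matching top cells along shared ($\tDim{-}1$)-faces; the following is a minimal sketch for the simplicial case (illustrative names, not the \iastar\ implementation):

```python
# Hypothetical sketch: derive the information behind adjacency relation
# R[k,k] from boundary relation R[k,0], by grouping top k-simplices
# around their shared (k-1)-faces. Two top cells listed under the same
# face are adjacent; a face listed by more than two cells is non-manifold.

from itertools import combinations
from collections import defaultdict

def face_cobounds(top_cells):
    """top_cells: list of vertex tuples of top k-simplices.
    Returns {(k-1)-face: [indices of top cells incident in it]}."""
    face_star = defaultdict(list)
    for idx, cell in enumerate(top_cells):
        k = len(cell) - 1
        for face in combinations(sorted(cell), k):   # (k-1)-faces
            face_star[face].append(idx)
    return face_star
```

For two triangles $(0,1,2)$ and $(1,2,3)$, the shared edge $(1,2)$ lists both triangles, making them adjacent, while boundary edges list a single triangle.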
The \emph{Simplex tree} encodes all $j$-simplices in $\Sigma$, with $0 \leq j \leq \cDim$, like the IG, while storing a subset of the incidence relations encoded by the IG.
The Simplex tree is defined over a total order on the vertices of \sC, and thus, each simplex \simplex\ is uniquely represented as an ordered path in a trie whose nodes correspond to the boundary vertices of \simplex.
Thus, the nodes of the tree are in bijection with the simplices of the complex, and a Simplex tree over a simplicial complex with $|\Sigma|$ simplices (of any dimension) contains exactly $|\Sigma|$ nodes.
This provides an efficient representation for extracting all boundary relations of simplices in \sC.
We compare the Stellar tree to the implementation of the Simplex tree provided in~\cite{GUDHI},
where each node of a Simplex tree requires
a reference to the label of the vertex
and three references to the tree structure
(pointers to the parent node, to the first child and to the next sibling node)
for a total of $4|\Sigma|$ references.
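The bijection between nodes and simplices can be illustrated with a small trie sketch (hypothetical Python, not the GUDHI implementation; a dictionary of children stands in for its three tree references per node):

```python
# Hypothetical Simplex-tree-like trie: each simplex is the ordered path
# of its boundary vertices from the root, so nodes (excluding the root)
# are in bijection with the simplices of the complex.

from itertools import combinations

class SimplexNode:
    __slots__ = ("label", "children")
    def __init__(self, label):
        self.label = label
        self.children = {}                 # vertex label -> child node

def insert_with_faces(root, simplex):
    """Insert a simplex together with all of its faces."""
    verts = sorted(simplex)
    for k in range(1, len(verts) + 1):
        for face in combinations(verts, k):
            node = root
            for v in face:                 # walk/extend the vertex path
                node = node.children.setdefault(v, SimplexNode(v))

def count_nodes(root):
    """Number of nodes below the root = number of stored simplices."""
    return sum(1 + count_nodes(c) for c in root.children.values())
```

Inserting a single triangle and all its faces yields seven nodes (three vertices, three edges, one triangle), and re-inserting an existing face adds nothing.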
Whereas the Simplex tree can represent only simplicial complexes,
the Stellar tree and the \iastar\ data structure can both represent CP complexes in arbitrary dimension and, thus, have the same expressive power.
Another difference is that Stellar trees require the complex to be embedded in an ambient space \aSpace,
while the other data structures are purely topological and do not require a spatial embedding.
We note, however, that while this is a requirement for Stellar trees, it is not a requirement of the Stellar decomposition.
In terms of storage requirements, we find that the Stellar tree is always more compact than the \iastar\ data structure, requiring approximately half of the storage,
nearly all of which is used for encoding boundary relation \relation{\tDim, 0} of top cells.
It is worth noting that we were unable to directly generate the \iastar\ data structure for several of our larger datasets
on our 64 GB test machine. We generated these datasets indirectly using our Stellar tree representation (as we describe in Section~\ref{sec:iastar_gen})
and we have marked these datasets with an $\otimes$ in the charts in Figures~\ref{hist:storage_nD_norm} and \ref{hist:storage_quad_hexa_norm}.
Comparing the Stellar tree to the Simplex tree, we observe that the Stellar tree is significantly more compact:
by an order of magnitude on the manifold and pure models,
and by two orders of magnitude or more on the non-manifold models.
Here too, we were unable to generate Simplex trees for several of the higher dimensional models on our test machine.
For these datasets (marked with $\odot$ in Figure~\ref{hist:storage_nD_norm}), we estimated the storage requirements based on the number of simplices of each dimension in the model. On two of these datasets, \datasetName{prob 40D} and \datasetName{lucy 34D},
we were unable to extract all simplices in all dimensions (even indirectly, see Section~\ref{sec:implicit_cell_extraction}),
and thus, the storage shown in Figure~\ref{hist:storage_nD_norm} is a lower bound of the real storage requirements.
\label{sec:stellar_vs_2D3D_structs}
For our dimension-dependent comparisons on manifold simplicial complexes, we also considered the \emph{Corner Table} ($CoT$)~\cite{Rossignac20013D}
and the \emph{Sorted Opposite Table} ($SOT$)~\cite{Gurung2009SOT} data structures, both defined only for manifold triangle and tetrahedral complexes.
The $CoT$ data structure is similar to the IA data structure and explicitly encodes
boundary relation $\relation{\cDim,0}(\simplex)$ and adjacency relation $\relation{\cDim,\cDim}(\simplex)$ of each top \cDim-simplex \simplex.
The $SOT$ extends the $CoT$ by encoding boundary relation $\relation{\cDim,0}(\simplex)$ implicitly,
explicitly storing only adjacency relation $\relation{\cDim,\cDim}(\simplex)$.
When comparing the Stellar tree to the corner-based data structures,
we observe that the $CoT$ data structure has similar storage requirements as the IA and is roughly twice as large as the Stellar tree,
while the $SOT$ data structure has similar storage requirements as the Stellar tree, requiring about 1\% to 10\% less space.
All of these topological data structures involve a tradeoff between storage (i.e.\ compact encodings)
and compute time (i.e.\ the cost of reconstructing the desired topological relations).
The $SOT$ achieves its compactness by dropping its boundary relations, which must be recomputed from its adjacency relations.
The \iastar\ explicitly encodes these relations and can therefore reconstruct boundary and co-boundary relations more efficiently at run-time.
Similarly, since the Stellar tree encodes only boundary relation \relation{k,0} for each \top\ plus a very compact indexing structure,
it incurs extra computation when reconstructing adjacency and co-boundary relations at run-time.
Finally, we consider the effects of different bucketing thresholds on the size and efficiency of the Stellar tree representation.
For our experimental datasets, there was only about a 10\% difference in storage requirements between the large (\kl) and small (\ks) bucketing factors.
Clearly, this is not always true, especially in the limit, i.e. with $\kv = 1$ and $\kv = \infty$.
Very low bucketing thresholds (with $\kv$ near 1) yield deeper trees whose leaf blocks index only a few entities, leading to a high topological overhead
but more efficient execution for individual mesh processing operations.
Conversely, very large bucketing thresholds lead to lower storage overhead
at the expense of increased query and execution times for individual operations.
At the limit, when $\kv=\infty$, the Stellar tree is effectively identical to the indexed representation.
These results confirm that the Stellar tree can efficiently represent low-dimensional manifold and high-dimensional non-manifold CP complexes,
with only a slight overhead relative to that of the indexed base mesh.
This is largely due to the Stellar tree's exploitation of the complex's spatial locality via SRE compression.
\section{General application paradigm}
\label{sec:general_strategy}
Mesh processing applications very often need to process the entire complex, or regions of interest within it, while it is rare to process individual mesh elements.
Stellar trees are well suited for such processing: they provide a compact representation and enable deferring decisions about the details and layout of the topological data structure.
Thus, the structure and layout of the representation can easily be customized to the needs of a given application. Additionally, Stellar trees naturally support a \emph{batched} processing strategy, in which local subsets of the complex are reconstructed and processed. This amortizes the reconstruction costs and allows processing the entire complex efficiently.
The general paradigm for executing applications on a Stellar tree is to iterate through the
leaf blocks of the hierarchy \h, locally processing the encoded complex in a streaming manner.
For each leaf block \B\ in \h, a local topological data structure, catered to the application,
is built and used to process the local subcomplex.
We refer to this local data structure in a block \B\ as an \emph{expanded leaf-block representation}, and we denote it as \eB.
Once we finish processing leaf block \B, we discard \eB\ and begin processing the next block.
For efficiency, and at a relatively low storage overhead, we cache the expanded leaf-block representations \eB\ using a \emph{Least Recently Used (LRU)} cache.
This is especially advantageous in applications that require processing portions of the complex in neighboring leaf blocks.
Adopting a fixed-size cache allows us to amortize the extraction of the local data structures, with a controllable storage overhead.
Algorithm~\ref{alg:stellar_generic_application} outlines the general strategy for executing an application on the Stellar tree.
The algorithm recursively visits all the blocks of the hierarchy \h.
For each leaf block \B, we either recover \eB\ from the \emph{LRU} cache (rows 5--8),
or construct the desired application-dependent local topological data structure \eB.
After using this local data structure to process the local geometry in \B\ (row 9),
we either cache or discard \eB\ (rows 10--13).
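The traversal with LRU caching can be sketched as follows (hypothetical Python; `expand` and `process` stand for the application-dependent construction of \eB\ and the local processing step, and the leaf order stands in for the recursive visit of \h):

```python
# Hypothetical sketch of the batched application strategy: visit leaf
# blocks, reuse the expanded leaf-block representation from a fixed-size
# LRU cache when available, otherwise build it on the fly.

from collections import OrderedDict

def process_tree(leaves, expand, process, cache_size=4):
    cache = OrderedDict()                  # block id -> expanded repr
    for b in leaves:                       # e.g. depth-first leaf order
        if b in cache:
            e_b = cache[b]
            cache.move_to_end(b)           # mark as most recently used
        else:
            e_b = expand(b)                # build local data structure
            cache[b] = e_b
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
        process(b, e_b)                    # process local geometry in b
```

With a cache of size 2 and the visit order `[1, 2, 1, 3, 1]`, block 1 is expanded only once before eviction, illustrating how the cache benefits applications that revisit neighboring blocks.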
\input{algorithms/application_general_strategy}
Applications executed on a Stellar tree use either a \textit{local}, or a \textit{global} approach.
In the former case, the scope of data structures and auxiliary variables is limited to that of a single leaf block \B,
or to a restricted subset of its neighbors.
In the latter case, auxiliary variables are maintained globally
as we process the complex.
In general, a local approach is preferred for applications that extract, or analyze local features,
such as those that depend only on the link or star of mesh elements.
A global approach is preferable for applications that require the analysis or processing of the entire mesh, like
geometric simplification, or morphological segmentation.
The decision between using a local and global approach involves a tradeoff between minimizing memory usage and execution times.
Due to the limited scope of auxiliary data structures in the local approach, the storage overhead is typically proportional
to the complexity of the local complex. However, this strategy leads to an increased number of memory allocations compared to a global approach since each leaf block expansion requires memory allocations.
Conversely, while the auxiliary data structures in the global approach are allocated only once,
these structures can require significantly more storage space compared to the local approach.
In the following sections, we present applications and experimental results to demonstrate the capabilities and benefits of a Stellar tree.
In Section~\ref{sec:local_topo_rels}, we describe how to efficiently extract topological relations.
In Section~\ref{sec:stellar_build_structures}, we demonstrate how the Stellar tree can be used to efficiently generate popular topological data structures,
namely the \emph{half-edge} data structure over polygonal meshes~\cite{Mantyla1988Introduction}
and an adjacency based data structure for manifold (IA)~\cite{Nielson1997Tools,Paoluzzi1993Dimension} and non-manifold (\iastar)~\cite{Canino2011IA} CP complexes.
Thus, Stellar trees can be also used as an intermediary representation by applications that expect a specific topological data structure,
or on very large meshes, when there are insufficient resources to generate the original data structure.
\section{Extracting topological relations}
\label{sec:local_topo_rels}
In this section, we describe how to perform batched topological queries on a CP complex \sC\ in the Stellar tree representation.
These fundamental queries
are the key building blocks for locally traversing and processing the underlying complex.
Since these queries often depend on \emph{all} cells in the complex, not just on the (explicitly represented) top cells,
we first describe how we obtain and represent all cells by extracting the implicitly represented boundary relations for the cells of the complex
from the Stellar tree representation (Section~\ref{sec:implicit_cell_extraction}).
We next present the algorithm for extracting the co-boundary relations in Section~\ref{sec:coboundary_extraction}.
The description of how to extract adjacency relations is omitted for brevity, but in Section \ref{sec:iastar_gen}, we describe how to extract \relation{d,d} relations in the context of generating the \iastar\ data structure using a Stellar tree.
\subsection{Extracting boundary relations}
\label{sec:implicit_cell_extraction}
The Stellar tree's underlying indexed representation of a CP complex \sC\ explicitly encodes
only the vertices and top CP \tDim-cells of \sC\ for $\tDim \leq \cDim$ (see Section~\ref{sec:mesh_structure}).
However, many applications require access to non-top cells within the complex.
Since such cells are implicitly encoded within the Stellar tree representation,
we must create a local (explicit) representation for non-top cells to support algorithms for
processing and attaching data to such cells.
\input{algorithms/p_cell_extraction_v3}
Our strategy is to iterate through the top \tDim-cells of a leaf block
and to extract an ordered set of $p$-cells
for each dimension $0 < p \leq k \leq d$ (see Algorithm~\ref{alg:extract-p-cells}).
We use an associative array $m\_p$ to track the unique set of encountered $p$-cells with at least one vertex indexed by \B\ (row 4).
Array $m\_p$ maps the tuple of vertices for a $p$-cell $\tau$ to an integer index $id_\tau$ in the set,
accounting for changes in ordering and orientation through the \AlgoName{canonical\_tuple} routine (row 3).
In some applications, it is useful to also explicitly maintain the boundary relation \relation{p,0} for the $p$-cells
and/or the incidence relations \relation{\tDim,p} or \relation{p,\tDim} for the top \tDim-cells.
These are encoded using the local indices within the ordered set of extracted $p$-cells.
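As a concrete illustration of the $p$-cell extraction, the following Python sketch processes a set of top simplices given as vertex tuples. It is a simplified, simplicial-only sketch under stated assumptions: the routine names are hypothetical, and `canonical_tuple` simply sorts the vertex tuple, omitting the orientation bookkeeping of the full routine.

```python
from itertools import combinations

def canonical_tuple(cell):
    """Normalize a cell's vertex tuple so that reorderings map to one key."""
    return tuple(sorted(cell))

def extract_p_cells(top_cells, p, block_vertices):
    """Collect the unique p-cells of the given top simplices having at
    least one vertex indexed by the leaf block (a set of vertex indices).
    Returns the map m_p from canonical tuple to local index, plus the
    boundary relation R(k,p) of each top cell in local indices."""
    m_p = {}                       # canonical p-cell tuple -> local id
    top_to_p = []                  # R(k,p) per top cell
    for top in top_cells:
        faces = []
        for face in combinations(top, p + 1):   # a p-cell has p+1 vertices
            if not block_vertices.intersection(face):
                continue            # p-cell has no vertex in this block
            key = canonical_tuple(face)
            if key not in m_p:
                m_p[key] = len(m_p)
            faces.append(m_p[key])
        top_to_p.append(faces)
    return m_p, top_to_p
```

For two triangles sharing an edge, the shared edge is assigned a single local index, while the per-block vertex filter discards $p$-cells entirely outside the block.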
We note that, for truly high-dimensional datasets, it is not feasible to extract $p$-cells in all cases.
For example, there are $\binom{41}{21}$ 20-simplices within each 40-simplex.
Encoding these 269 billion simplices would require more than 40TB of storage.
However, even on these datasets, we can still extract the lowest and highest dimensional $p$-cells.
This highlights an advantage of only encoding the top cells of the complex (as in the Stellar tree and \iastar\ data structures)
compared to representations that encode all cells of the complex (as in the IG or Simplex tree data structures).
Stellar trees have no difficulty encoding and processing such high-dimensional complexes,
despite the combinatorial explosion in the number of overall cells.
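The counts quoted above follow from a simple combinatorial identity, which can be checked with an illustrative helper (the function name is hypothetical): a $d$-simplex has $\binom{d+1}{p+1}$ faces of dimension $p$, since each $p$-face is determined by choosing $p+1$ of the $d+1$ vertices.

```python
import math

# A d-simplex has C(d+1, p+1) p-dimensional faces: each p-face is
# determined by choosing p+1 of the simplex's d+1 vertices.
def num_p_simplices(d, p):
    return math.comb(d + 1, p + 1)
```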
\input{tables/table_app_pfaces}
\paragraph*{Experimental results}
We now analyze the effectiveness of the Stellar tree representation for (batched) $p$-cell extractions against our implementation of the \iastar\ data structure and the Simplex tree (as implemented in the \emph{GUDHI} framework~\cite{GUDHI}). Table~\ref{tab:pcells_results} lists the aggregate times and storage requirements for extracting all non-top $p$-cells from our experimental datasets. Notice that we do not consider the higher dimensional \emph{probabilistic} dataset and the \datasetName{lucy 34D} V-Rips complex, as extracting all $p$-cells on these datasets is infeasible due to their computational and storage requirements.
First, we analyze the influence of the bucketing threshold \kv\ for Stellar trees. Smaller \kv\ values lead to faster extractions on all our experimental datasets.
This speedup increases with the dimension of the complex since the auxiliary data structure encoding a $p$-face type becomes smaller,
and thus, checking for the presence of duplicates has a lower computational cost.
The \iastar\ data structure follows a similar strategy to the Stellar trees for extracting its implicit $p$-cells
since both data structures use an indexed representation for encoding the boundary relations of a CP complex.
Table~\ref{tab:pcells_results} demonstrates the computational and storage advantages of the Stellar trees over the \iastar\ for this task.
The Stellar tree requires 20\% to 55\% less time for the two-dimensional datasets and approximately 10\% less time on the higher dimensional ones.
In addition, the Stellar tree's auxiliary storage requirements are negligible compared to those of the \iastar\ data structure.
Notice that the \iastar\ data structure goes out of memory (OOM) on all \emph{hexahedral} datasets
and on the 7D \emph{probabilistic} and \datasetName{foot 10D} V-Rips datasets.
The Simplex tree explicitly encodes all simplices of a simplicial complex; thus, its $p$-cells can be enumerated by traversing all simplices at the $p$-th level of the tree. Explicitly encoding boundary relation \relation{p,0} would require the same auxiliary storage as the \iastar\ data structure, since both data structures require global structures. Table~\ref{tab:pcells_results} demonstrates that Stellar trees are slower than Simplex trees, but still competitive with respect to a representation that explicitly encodes all cells. This is possible thanks to the smaller local auxiliary data structures used by Stellar trees.
Notice that the Simplex tree goes out of memory (OOM) on our workstation for the 7D \emph{probabilistic} dataset and the \datasetName{foot 10D} V-Rips complex.
Since the Simplex tree can only represent simplicial complexes, it does not support $p$-cell extraction on our \emph{quad} and \emph{hexahedral} datasets.
\subsection{Extracting co-boundary relations}
\label{sec:coboundary_extraction}
Co-boundary queries arise in a variety of mesh processing applications,
including those requiring mesh simplification and refinement~\cite{Garland1997Surface,Natarajan2004Simplification,Zorin00},
or the \emph{dual} of a complex~\cite{Hirani2003,Mullen11,Weiss2013primaldual}.
Co-boundary queries are naturally supported by the Stellar decomposition model.
By definition, all regions of the decomposition that contain at least one vertex of a CP cell $\tau$
must index all CP cells in the star of $\tau$ (see Equation~\ref{eq:phitop_blocks}).
Since the top cells are explicitly represented in \sC,
we first describe how to extract the vertex co-boundary relation \relation{0,\tDim} restricted to the top \tDim-cells of \sC,
which we will refer to as the \emph{restricted co-boundary relation \relation{0,\tDim}}.
We will then discuss how to extend this to extract vertex co-boundary relation \relation{0,p} over \emph{all} $p$-cells in \sC,
and the general co-boundary relation \relation{p,q} with $0 \leq p < q \leq d$.
The \emph{restricted vertex co-boundary relation} \relation{0,\tDim} in a leaf block \B\ is generated by inverting boundary relation \relation{\tDim,0} on the \ktopcpcells\ in \PhiMapTop(\B).
Since the indexed vertices in the leaf blocks of a \datasetName{compressed} Stellar tree are contiguous,
with indices in the range $[\vstart,\vend)$,
we encode our local data structure using an array of size $|\PhiMapVert(\B)| = \vend - \vstart$.
Each position in the array corresponds to a vertex indexed by $\B$ and points to an (initially empty) list of indexes from \sCT.
As shown in Algorithm \ref{alg:vertex-tops}, we populate these arrays by iterating through relation \relation{\tDim,0} of the \ktopcpcells\ in \PhiMapTop(\B).
For each cell \simplex\ such that relation $\relation{\tDim,0}(\simplex)$ contains a vertex \vertex\ with index $\vIndex \in [\vstart,\vend)$, the index of \simplex\ is added to vertex $v$'s list.
\input{algorithms/VT_extraction}
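A minimal Python sketch of this inversion follows, assuming the top cells are given as vertex tuples and the block's vertices have contiguous indices in $[\vstart,\vend)$; the function name is hypothetical.

```python
def restricted_vertex_coboundary(top_cells, v_start, v_end):
    """Invert boundary relation R(k,0) of the top cells in a leaf block to
    obtain the restricted co-boundary R(0,k) for the vertices indexed by
    the block, whose (contiguous) indices lie in [v_start, v_end)."""
    r0k = [[] for _ in range(v_end - v_start)]   # one list per block vertex
    for t, cell in enumerate(top_cells):         # t: index of the top cell
        for v in cell:
            if v_start <= v < v_end:             # vertex indexed by the block
                r0k[v - v_start].append(t)
    return r0k
```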
Extending the vertex co-boundary relation to all $p$-cells in \B\
is complicated by the fact that we only have an explicit representation for the top cells in \sC.
A simple strategy we have developed for extracting $\relation{0,p}$ on all $p$-cells in \B\ is to first extract the explicit set of all $p$-cells in \B, as in Algorithm~\ref{alg:extract-p-cells} (see Section~\ref{sec:implicit_cell_extraction}).
We then invert $\relation{p,0}$ to obtain the complete relation $\relation{0,p}$ for the vertices in \B.
In some applications, we prefer to express \relation{0,p} entirely in terms of top cells from \sC.
Thus, another strategy we have developed is to extract the restricted co-boundary relation \relation{0,\tDim}
for all top \tDim-cells in \B, with $p \le \tDim \le \cDim$.
This redundant representation is thus used as an intermediate representation for $\relation{0,p}(v)$ since each \tDim-cell in $\relation{0,\tDim}(v)$ contains one (or more) $p$-face in the co-boundary of $v$.
For example, this provides a convenient representation for the star of a vertex $v$ as a union of restricted co-boundary relations $\relation{0,\tDim}(v)$, where $1 \le \tDim \le \cDim$.
Similarly, we have defined and implemented a strategy for generating the general co-boundary relation $\relation{p,q}$, where $p < q$.
First, the set of all $q$-cells, encoded as boundary relation $\relation{q,0}$, is extracted.
This also implicitly provides boundary relation $\relation{q,p}$.
Then, co-boundary relation $\relation{p,q}$ is extracted by inverting $\relation{q,p}$.
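The same inversion idea yields the general co-boundary relation; the following simplicial-only Python sketch (with a hypothetical function name) enumerates the $p$-faces of each $q$-cell and inverts the resulting $\relation{q,p}$ on the fly.

```python
from itertools import combinations

def coboundary_p_q(q_cells, p):
    """Given the q-cells of a complex (each as a vertex tuple), return the
    co-boundary relation R(p,q): for every p-face (as a sorted vertex
    tuple), the list of q-cells incident in it. Obtained by inverting
    the boundary relation R(q,p)."""
    r_pq = {}
    for qi, q_cell in enumerate(q_cells):
        # each (p+1)-subtuple of the q-cell's vertices is one of its p-faces
        for face in combinations(sorted(q_cell), p + 1):
            r_pq.setdefault(face, []).append(qi)
    return r_pq
```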
\input{tables/table_app_vt_v2}
\paragraph*{Experimental results}
We now analyze the effectiveness of the Stellar tree representation for co-boundary extractions.
Specifically, since the main co-boundary extraction in our applications (see Section~\ref{sec:stellar_build_structures})
is the restricted vertex co-boundary relation
and most of the other co-boundary extractions can be posed in terms of this primitive extraction,
we compare the performance of the Stellar tree against our implementation of the \iastar\ data structure for this query
and against the Simplex tree.
Table~\ref{tab:timings_vt} lists the extraction times and storage requirements for the vertex co-boundary relation \relation{0,\cDim}
on our manifold (\emph{triangular}, \emph{quad}, \emph{tetrahedral} and \emph{hex}) and pure (\emph{probabilistic}) complexes
and the sum of extraction times for the restricted vertex co-boundary relations \relation{0,\tDim} for each dimension \tDim\
with top cells on our non-manifold (\datasetName{V-rips}) complexes.
We first consider the influence of the bucketing threshold \kv\ for Stellar trees.
While there is not much difference in extraction times for the two-dimensional complexes,
larger \kv\ values lead to faster extractions for three-dimensional and non-manifold datasets in most cases.
While this comes with a slight increase in storage requirements for encoding the relation (see right column in Table~\ref{tab:timings_vt}),
the overall storage cost per block is quite low, requiring at most a few megabytes for the probabilistic models, and a few kilobytes in all other cases.
The \iastar\ data structure extracts co-boundary relations through a traversal along the face adjacencies of its top cells (encoded in the \relation{\tDim,\tDim} adjacency relation).
The traversal for a given vertex $v$ is seeded by one top \tDim-cell per \tDim-cluster
(encoded by partial relation $\partialrelation{0,k}(v)$, see Section~\ref{sec:stellar_vs_indep_structs}; we refer to~\cite{Canino2011IA} for more details).
Since each such traversal is run on demand, there is a negligible memory impact for this query.
Table~\ref{tab:timings_vt} demonstrates that Stellar trees are significantly faster at extracting \relation{0,k} relations,
which can be performed in about one tenth of the time in most cases.
However, it is important to note that the Stellar tree extraction is batch-based (by leaf blocks of \h),
and individual co-boundary extractions would likely be faster on the \iastar\ data structure.
\input{charts/histogram_vt_smaller_datasets}
The Simplex tree extracts co-boundary relations through a traversal of the underlying trie.
Given a vertex \vertex, the procedure for extracting its restricted co-boundary first identifies the simplices incident in \vertex\ (i.e., its star),
and then extracts just the top simplices from the star.
The former requires a trie traversal, with a worst-case complexity linear in the number of nodes in the trie,
since, as stated in the GUDHI documentation~\cite{GUDHI}, this corresponds to a depth-first search of the trie starting from the node with value \vertex.
Identifying the top simplices in the star of a vertex has a negligible cost on low dimensional meshes, while it becomes a costly operation on higher-dimensional ones, where it accounts for nearly 50\% of the overall extraction time.
As with the \iastar, since this traversal is done on demand, this query imposes negligible memory impact.
On our experimental datasets, the Simplex tree is able to complete the extraction of restricted vertex co-boundary relations
only on the smaller triangle mesh \datasetName{neptune}, for which it requires nearly $72$ hours.
To provide a comprehensive performance comparison against the Stellar tree,
we consider two additional smaller datasets for this query:
a tetrahedral mesh (\datasetName{fighter2}) with 256 thousand vertices and 1.4 million tetrahedra,
and a probabilistic-refinement CP complex with six thousand vertices and two million top 6-simplices.
The results, shown in Figure~\ref{hist:vt_small}, highlight the Stellar tree's significant advantage
over the Simplex tree for restricted vertex co-boundary extraction (i.e.\ less than a second vs hours).
\section{Generating topological data structures}
\label{sec:stellar_build_structures}
As a proxy for mesh processing applications, we describe how
to generate two popular topological mesh data structures over CP complexes:
the \emph{half-edge} data structure over polygonal 2-manifolds (Section~\ref{sec:halfedge-gen})
and adjacency-based data structures for
CP complexes in arbitrary dimension (Section~\ref{sec:iastar_gen}).
These two applications demonstrate the versatility of the Stellar tree representation
and exercise many of the operations necessary for other mesh processing tasks.
In both cases, we define customized topological relations and auxiliary data structures as we stream through the leaf blocks of the tree
and take either a \emph{global} approach, to reconstruct the full topological data structure,
or a \emph{local} approach, which reconstructs coherent subsets of the full data structure restricted to the portion of the complex indexed within each leaf block.
In the former case, Stellar trees enable generating the global topological data structures
using a fraction of the memory as would be required to directly generate them from an indexed representation.
In the latter case, the local approach can be used to adapt local regions of the Stellar tree's underlying complex
to algorithms defined for existing topological data structures.
For both data structures, we present a local generation algorithm over a single leaf block of the Stellar tree,
and compare the local and global generation algorithms against a brute force approach
that generates the data structure from the original indexed mesh representation.
We do this within the Stellar tree framework by setting the bucketing threshold to infinity,
since $\kv = \infty$ produces a tree that indexes the entire complex \sC\ in its root block.
\subsection{Generating the half-edge data structure}
\label{sec:halfedge-gen}
The \emph{half-edge data structure}~\cite{Mantyla1988Introduction}
is one of the most popular topological data structures for polygonal 2-complexes,
and is available in several public domain software libraries,
including the CGAL~\cite{CGAL} and the \emph{OpenMesh} library~\cite{OML15}.
The half-edge data structure describes an edge \edge\ of a complex \sC\ as a pair of two oriented \emph{half-edges} ($he_0$ and $he_1$), and encodes a subset of the topological connectivity relations of \sC\ with vertices, half-edges and polygonal faces.
The following information is encoded for each half-edge $he_i$, $i=0,1$ (see Figure \ref{fig:he_example}):
\begin{inparaenum}[(i)]
\item a reference to its source vertex $v_i$, $i=0,1$;
\item a reference to the face $f_i$, $i=0,1$, on the left with respect to the orientation of half-edge $he_i$;
\item references to the previous and next half-edges on the boundary of face $f_i$ in counterclockwise order (half-edges $p_i$ and $n_i$ in Figure \ref{fig:he_example}), and
\item a reference to its opposite half-edge $he_{1-i}$.
\end{inparaenum}
Each face $f$ encodes a reference to one of its bounding half-edges, denoted as connectivity relation \partialrelation{2,he}($f$).
Similarly, each vertex $v$ encodes a reference to one of the half-edges originating from it, denoted as \partialrelation{0,he}($v$).
In our representation there is a one-to-one correspondence between a polygonal face and a top CP cell.
\begin{figure}[t]
\centering
\resizebox{.7\columnwidth}{!}{\includegraphics{imgs/half_edge_example3}}
\caption{Topological entities encoded in the \emph{half-edge} data structure, for edge $e = (v_0 , v_1 )$.
}
\label{fig:he_example}
\end{figure}
We first describe the algorithm for generating a \emph{local} half-edge data structure within a leaf block \B\ of a Stellar tree.
The algorithm generates three local arrays encoding half-edges, faces and vertices with their topological connectivity relations, as detailed above.
An auxiliary array $edge\_he$ is used to encode the pair of half-edges $he_0$ and $he_1$ associated with each edge $\edge$ in \B.
The algorithm first iterates on the top 2-cells of \B, looping through the boundary edges of each top 2-cell $f$ in counterclockwise order.
Each directed edge $e$ in \relation{2,1}($f$), with $e=(v,w)$,
defines a half-edge $he$, whose source vertex is $v$ and bounding face is $f$.
The algorithm also tracks the previous and next half-edges along $f$ when it adds $he$ to $edge\_he(e)$.
During this iteration, \partialrelation{2,he}($f$) and \partialrelation{0,he}($v$) are initialized with the first half-edge
found around face $f$ and vertex $v$, respectively.
Opposite half-edges are found by iterating over the $edge\_he$ array,
and pairing half-edges sharing a common edge \edge.
With a few simple adjustments, this algorithm can generate a \emph{global} half-edge data structure over the whole complex \sC.
Aside from encoding the auxiliary data structures at a global level, the other major difference with respect to
the local approach is that within each leaf block \B, the global algorithm creates half-edges only from those top 2-cells in \PhiMapTop(\B),
whose minimum vertex index is in \PhiMapVert(\B).
This guarantees that each half-edge is initialized only once.
For storage efficiency, as soon as both half-edges for an edge are identified, their corresponding entry is eliminated from $edge\_he$.
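The local construction can be sketched in Python as follows. This is an illustrative sketch under simplifying assumptions: it operates directly on a list of polygons (each a tuple of vertex indices in counterclockwise order) rather than on the leaf-block relations, and represents each half-edge as a plain dictionary with the references described in the text; all names are hypothetical.

```python
def build_half_edges(faces):
    """Sketch of the local half-edge construction: `faces` lists the
    boundary vertices of each polygon in counterclockwise order.
    Returns the half-edge array plus the partial relations
    R*(2,he) (one half-edge per face) and R*(0,he) (one per vertex)."""
    half_edges = []
    edge_he = {}                 # undirected edge -> incident half-edge ids
    face_he = []                 # partial relation R*(2,he)
    vertex_he = {}               # partial relation R*(0,he)
    for f, poly in enumerate(faces):
        n = len(poly)
        first = len(half_edges)  # index of this face's first half-edge
        for i in range(n):
            v, w = poly[i], poly[(i + 1) % n]
            he = {"source": v, "face": f,
                  "prev": first + (i - 1) % n,   # previous on boundary of f
                  "next": first + (i + 1) % n,   # next on boundary of f
                  "opposite": None}
            idx = first + i
            half_edges.append(he)
            vertex_he.setdefault(v, idx)         # first half-edge from v
            edge_he.setdefault(frozenset((v, w)), []).append(idx)
        face_he.append(first)                    # first half-edge of f
    # pair opposite half-edges sharing the same undirected edge
    for pair in edge_he.values():
        if len(pair) == 2:
            a, b = pair
            half_edges[a]["opposite"] = b
            half_edges[b]["opposite"] = a
    return half_edges, face_he, vertex_he
```

Boundary half-edges keep `opposite` set to `None`; in the global variant, the pairing pass would additionally drop each entry of `edge_he` as soon as both twins are found.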
For pseudo-manifold complexes, a similar approach can be used to generate
a \emph{quad-edge} data structure in 2D~\cite{Guibas1985Primitives}
or a \emph{half-facet} data structure in 3D~\cite{Dobkin1989Primitives,Lopes1997Structural,Lage2005CHF,Kremer2013OpenVolumeMesh}.
\input{tables/table_app_he_compact}
\paragraph*{Experimental results}
The half-edge generation results comparing the local and global Stellar tree approaches with the \emph{brute-force}
for \emph{triangle} and \emph{quad} meshes are summarized in Table~\ref{tab:timings_he}, which lists the time to generate
the local and global data structures
as well as the storage space to represent the half-edge data structure and the auxiliary data structures used for generating it.
We first consider the generation times, and note that, in most cases, the approaches (\ks\ and \kl) based on the Stellar tree
are about 50\% faster than their brute-force (\kinf) counterparts.
This is largely due to the increased locality and reduced search space afforded by the Stellar tree.
Further, the global approaches are 10-20\% faster than their local counterparts, since they have fewer memory allocations and only process each cell once.
Higher bucketing thresholds typically lead to faster processing times,
since they have less overlapping geometry to process (i.e., they have a lower spanning number \Chi).
In terms of memory requirements, approaches based on the Stellar tree have a relatively small footprint,
requiring at most a few kilobytes for the local half-edge data structures and auxiliary memory,
while the brute-force approach requires tens to hundreds of megabytes for its auxiliary data structures.
Both the global algorithm and the brute-force approach require the same storage (tens to hundreds of megabytes)
to encode the topological relations in the half-edge data structure.
\subsection{Generating dimension-independent adjacency-based data structures}
\label{sec:iastar_gen}
In this subsection, we describe how the Stellar tree representation can be used to generate
a (local or global) indexed adjacency-based data structure over a \cDim-dimensional CP complex \sC\ embedded in \eSpace.
Recall from Section~\ref{sec:stellar_vs_indep_structs} that the \iastar\ data structure is an adjacency-based topological data structure
defined over non-manifold CP complexes that gracefully degrades to the IA representation over manifold complexes.
The IA data structure is defined over pseudo-manifolds, and, thus, each $(\cDimMinusOne)$-cell can be incident in at most two top CP \cDim-cells.
We first describe how to generate the IA data structure from the Stellar tree, and then extend this to the \iastar\ data structure.
The IA data structure encodes
the following topological relations:
\begin{inparaenum}[(i)]
\item boundary relation $\relation{\cDim,0}(\sigma)$,
\item partial co-boundary relation $\partialrelation{0,\cDim}(\vertex)$ for each vertex $\vertex$, consisting of one arbitrarily selected top CP \cDim-cell in the star of \vertex, and
\item adjacency relation $\relation{\cDim,\cDim}(\sigma)$, for each top CP \cDim-cell $\simplex$.
\end{inparaenum}
If $\simplex_1$ is adjacent to $\simplex_2$ through $(\cDimMinusOne)$-cell $\tau$, and $\tau$ is the $i$-th face of $\simplex_1$, then $\simplex_2$ will be in position $i$ in the ordered list of $\relation{\cDim,\cDim}(\simplex_1)$.
Since the Stellar tree explicitly encodes the $\relation{\cDim,0}$ relations for all top CP \cDim-cells, the generation of a \emph{local} IA data structure consists of extracting \partialrelation{0,\cDim}(\vertex), for each \vertex\ in \PhiMapVert(\B), and $\relation{\cDim,\cDim}$(\simplex), for each top CP \cDim-cell \simplex\ in \PhiMapTop(\B).
For vertices in \PhiMapVert(\B), the former is computed by iterating over the top CP \cDim-cells in \PhiMapTop(\B),
and selecting the first \topcp\ incident in \vertex\ that we find.
\input{algorithms/ADJ_extraction}
Algorithm~\ref{alg:top-tops} provides a description of a \emph{local} strategy for extracting \relation{\cDim,\cDim}(\simplex) relations in \B.
Note that it finds only the adjacencies for the \cDim-cells that have at least one vertex in \PhiMapVert(\B).
While we can locally reconstruct the full adjacency relation for top CP \cDim-cells with \cDim\ vertices in \PhiMapVert(\B), a top CP \cDim-cell \simplex\ with fewer vertices in \PhiMapVert(\B) will be missing at least one adjacency.
For example, in Figure~\ref{fig:adj_extraction_case}, we can completely reconstruct the adjacency relations of the triangles
having two vertices in \B\ (in yellow), while we can only partially reconstruct the adjacencies of triangles having just one vertex in \B\ (in gray).
Adjacencies on the edges opposite to the vertices in red cannot be reconstructed inside \B\ for gray triangles.
The algorithm first iterates on top CP \cDim-cells in \PhiMapTop(\B) (rows 1--3).
Given a top CP \cDim-cell \simplex, we cycle over the \cDim-tuples of the vertices of \simplex, where each \cDim-tuple defines a (\cDimMinusOne)-cell on the boundary of \simplex. The auxiliary data structure $d\_1\_cell\_top$ encodes, for each \cDim-tuple $\tau$, the top \cDim-cells sharing $\tau$,
corresponding to the \relation{\cDimMinusOne,\cDim} relation of $\tau$.
Then, the algorithm iterates over $d\_1\_cell\_top$ to initialize adjacency relations \relation{\cDim,\cDim}.
Given a (\cDimMinusOne)-cell $\tau$, if $\tau$ has two \cDim-cells in its co-boundary (row 5),
namely $\simplex_1$ and $\simplex_2$, we set $\simplex_1$ and $\simplex_2$ as adjacent along $\tau$ (rows 7--8).
Due to its local nature, the Stellar tree adjacency reconstruction provides considerable storage savings compared to its global counterpart:
the storage requirements are proportional to the number of top CP \cDim-cells in \B,
rather than those in \sCT.
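For simplicial top cells, the extraction can be sketched in Python as follows; this is an illustrative sketch with hypothetical names, which omits the per-block vertex filter and follows the IA convention stated above (entry $i$ of the adjacency list holds the neighbor across the face opposite vertex $i$).

```python
def extract_adjacencies(top_cells):
    """Sketch of the R(d,d) extraction over top d-simplices (tuples of
    vertex indices): each d-subtuple of a cell is one of its (d-1)-faces;
    two cells sharing a face are mutually adjacent. Entry i of a cell's
    adjacency list is the neighbor across the face opposite vertex i."""
    d1_cell_top = {}                     # (d-1)-face -> incident top cells
    adj = [[None] * len(c) for c in top_cells]
    for t, cell in enumerate(top_cells):
        for i in range(len(cell)):
            # the (d-1)-face opposite vertex i of the cell
            face = tuple(sorted(cell[:i] + cell[i + 1:]))
            d1_cell_top.setdefault(face, []).append((t, i))
    for incident in d1_cell_top.values():
        if len(incident) == 2:           # pseudo-manifold: at most two cells
            (t1, i1), (t2, i2) = incident
            adj[t1][i1] = t2
            adj[t2][i2] = t1
    return adj
```

`None` entries mark boundary faces (or, in the local setting, adjacencies that cannot be reconstructed within the block).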
\begin{figure}[t]
\centering
\resizebox{.5\columnwidth}{!}{
\includegraphics{imgs/adj_extraction_case}
}
\caption{ \emph{Local} adjacency reconstruction finds adjacencies across cells
with a vertex in the leaf block \B\ (dashed).
For yellow triangles, all edges have a vertex in \B, while some edges of gray triangles do not.
}
\label{fig:adj_extraction_case}
\end{figure}
Extending this algorithm to generate a \emph{global} IA data structure requires only a few modifications. Aside from encoding the auxiliary data structures at a global level, the other major difference with respect to the local approach is that within each leaf block \B, \relation{\cDimMinusOne,\cDim} relations are extracted only for those (\cDimMinusOne)-cells $\tau$ for which the two top CP \cDim-cells sharing $\tau$ have not been already initialized.
The \iastar\ data structure extends the IA data structure to arbitrary non-manifold CP \tDim-complexes, with $0< \tDim\leq \cDim$.
Recall that, in addition to the relations stored in the IA data structure, it encodes:
\begin{inparaenum}[(i)]
\item adjacency relation $\relation{\tDim,\tDim}(\sigma)$, for each \ktopcp\ $\simplex$;
\item co-boundary relation \relation{0,1}(\vertex) restricted to the top 1-cells, for each vertex \vertex;
\item \emph{augmented} partial co-boundary relation (\partialrelation{0,\tDim}(\vertex)), $1 < \tDim \leq \cDim$, for each vertex $\vertex$, consisting of one arbitrarily selected \ktopcp\ from each \emph{\tDim-cluster} in the star of \vertex, where a \tDim-cluster is a (\tDimMinusOne)-connected component of the star of \vertex\ restricted to its top CP $k$-cells; and
\item co-boundary relation $\relation{\tDimMinusOne,\tDim}(\tau)$, for each non-manifold (\tDimMinusOne)-cell $\tau$ bounding a \ktopcp.
\end{inparaenum}
Extracting \relation{\tDim,\tDim} relations, when $\tDim<\cDim$, and \relation{\tDimMinusOne,\tDim} relations for non-manifold (\tDimMinusOne)-cells is performed by a suitable extension of Algorithm \ref{alg:top-tops}.
Augmented partial co-boundary relation \partialrelation{0,\tDim}(\vertex), for $\tDim > 1$, is computed by extracting
the restricted star of \vertex\ (Algorithm~\ref{alg:vertex-tops}) and by using \relation{\tDim,\tDim} relation
for the \topcpcells\ in the star of \vertex\ to identify the (\tDimMinusOne)-connected components incident in \vertex.
\relation{0,1}(\vertex) is initialized by iterating over the top 1-cells in the restricted star of \vertex.
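The clustering step can be sketched with a small union-find over the top cells in the restricted star of a vertex. Names and the adjacency predicate are illustrative assumptions, not the paper's API; in the actual algorithm the predicate is answered by the already-computed \relation{\tDim,\tDim} relation.

```python
def k_clusters(star_cells, adjacent):
    """Group top k-cells in a vertex star into (k-1)-connected clusters.

    star_cells: iterable of cell ids in the restricted star of a vertex.
    adjacent: predicate (a, b) -> True if the cells share a (k-1)-face.
    Returns the list of clusters; one representative per cluster forms
    the augmented partial co-boundary relation of the vertex.
    """
    cells = list(star_cells)
    parent = {c: c for c in cells}

    def find(c):  # union-find root lookup with path compression
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for i, a in enumerate(cells):
        for b in cells[i + 1:]:
            if adjacent(a, b):
                parent[find(a)] = find(b)

    clusters = {}
    for c in cells:
        clusters.setdefault(find(c), []).append(c)
    return sorted(clusters.values())
```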
\paragraph*{Experimental results}
\label{sec:adjtiming}
In Table~\ref{tab:timings_adj_ds_gen}, we compare the time and storage requirements to generate an IA or \iastar\ data structure using the Stellar tree and brute-force approaches.
For each dataset, we compare the Stellar tree indexes generated through thresholds \ks\ and \kl\ and by using a local and a global algorithm
against the brute-force approach (\kinf) on the original indexed representation for the complex.
For the manifold (\emph{triangular}, \emph{quadrilateral}, \emph{tetrahedral} and \emph{hexahedral}) and pure (\emph{probabilistic}) datasets,
where all top cells have dimension \cDim, we used Algorithm~\ref{alg:top-tops} to compute the adjacencies.
\input{tables/table_app_adj_ds_gen_compact}
Comparing execution times, we find the global Stellar tree approach to be about 25\% faster than the brute-force approach in most cases.
However, due in part to the redundant lookups in the adjacency calculation,
the local approach is a bit slower than the global approach, but still 10\% faster than the brute-force approach in most cases.
Results vary across datasets: it is almost twice as fast on \emph{F16}, on par on \emph{Lucy}, and slower on the \emph{5D probabilistic} dataset.
Considering the effects of the bucket threshold \kv, we observe little discernible difference
on the global Stellar tree approach.
However, a larger bucketing threshold (\kl) yielded up to a 25\% speedup in the local approach on our larger datasets, compared to its smaller (\ks) counterpart.
Lastly, we consider the storage requirements for generating the IA / \iastar\ data structure.
For both the local and global Stellar tree approaches, the auxiliary storage requirements are limited to the complexity of each leaf block,
requiring only a few KB of auxiliary storage for the manifold and non-manifold datasets, and a few MB for the pure (\emph{probabilistic}) datasets.
In contrast, the brute-force approach requires hundreds of MB for the medium sized datasets. We were not able to generate the
\iastar\ data structures using the brute-force approach on our largest datasets,
which ran out of memory (\emph{OOM}) on our workstation, despite its 64 GB of available RAM.
\section{Concluding remarks}
\label{sec:stellar_conclusions}
We have introduced the Stellar decomposition as a model for topological data structures over \emph{Canonical-Polytope (CP)} complexes,
a class of complexes that includes simplicial complexes and certain classes of cell complexes, like quadrilateral and hexahedral meshes.
Stellar decompositions cluster the vertices of a complex into \emph{regions} that contain sufficient information to reconstruct the \emph{star} of its vertices.
The model is agnostic about the domain of the complex (e.g.\ manifold, pure, non-manifold)
and we have demonstrated the scalability of this model to large mixed-dimensional datasets in high dimension.
\NOTA{ \outlineComment{Key contribution -- exploit spatial locality through reindexing and SRE compression of lists} }
We introduced the Stellar tree as a concrete realization of the Stellar decomposition model over spatially embedded CP complexes.
In a Stellar tree, the embedding space of the complex is decomposed using a nested spatial index \h\ whose structure is
defined by a single tuning parameter, the \emph{bucketing threshold} \kv, which limits the maximum number of vertices indexed by a leaf block of \h.
Stellar trees effectively exploit the spatial coherence of a CP complex \sC\ by using the clustering structure of \h\
to reorder the arrays of top cells of \sC\ and to compress the resulting ranges of sequential indexes
within the lists of vertices and top cells in the leaf blocks of \h.
We have demonstrated over a wide range of datasets that this process produces \datasetName{compressed} Stellar trees
that are typically only 10\% larger than the original indexed base mesh for \sC.
The source code for our reference implementation is available at \cite{Fellegara_StellarTree_Github}.
In terms of storage size, Stellar trees compare quite favorably with state-of-the-art topological data structures.
They are consistently half the size of their \iastar\ data structure counterparts~\cite{Canino2011IA}
and one to two orders of magnitude smaller than their Simplex tree counterparts~\cite{Boissonnat2014simplex}.
This is especially notable for high dimensional Vietoris-Rips complexes, a target application for the Simplex tree,
for which Stellar trees have very low overhead.
While Stellar trees support a much broader class of complexes,
they have similar storage requirements as the SOT data structure~\cite{Gurung2009SOT,Gurung2010SOT},
which only supports static pseudo-manifold triangle and tetrahedral complexes.
In future work, it would be interesting to compare the Stellar tree against top-based extensions of the Simplex tree,
such as the $MST$ and the $SAL$~\cite{Boissonnat2017}, if public-domain implementations become available.
Despite the simplicity of their leaf block representation, Stellar trees provide a great deal of flexibility to customize
the structure and layout of their \emph{expanded} topological data structures to meet the needs of a given application.
Such data structures are typically constructed by composing several local topological incidence and adjacency relations.
We described efficient algorithms for reconstructing these relations within the subcomplex indexed by the leaf blocks of a Stellar tree
and demonstrated the advantages of this approach compared to a similar algorithm on the \iastar data structure.
As a proxy for more complicated mesh processing algorithms, we also described how Stellar trees can be used as an intermediary representation
to generate existing state of the art data structures like the half-edge and \iastar\ data structure and demonstrated the advantages of this representation
in terms of storage requirements and compute times, especially for machines with limited resources.
To this extent, some preliminary studies~\cite{Weiss2013primaldual,Fellegara2014Efficient} have shown that the Stellar tree
can be efficiently and effectively used in shape analysis applications, namely segmentations of 2D and 3D scalar fields
encoded as simplicial complexes. We are currently working on an application of the Stellar tree
to homology-preserving simplification of high-dimensional simplicial complexes.
One direction of future work could involve extending the Stellar tree representation to support a broader class of cell complexes.
For example, it would not be difficult to extend support to indexed polyhedral cell complexes which define their cells in terms of their
boundary polyhedral faces which are, in turn, defined by oriented lists of vertex indices~\cite{Muigg11}.
\NOTA{
A possible future investigation could be the definition of new local encodings for the connectivity within each leaf block.
We note that, in our experimental evaluations, the vast majority of a Stellar tree's storage requirements are due to the
encoding of the connectivity of the complex. This is especially large in our higher-dimensional complexes, where the connectivity
accounts for 99\% of the overall storage, while on lower dimensional complexes, it accounts for around 85\% of the storage.
The core idea is to infer this encoding directly from the exploited spatial coherence of the Stellar tree.
This will lead to a representation in which each leaf block \B\ fully encodes the sub-complex formed by the vertices
and the top cells associated with it, but still with the capability to recover the missing information from the neighbor leaf blocks of \B.
}
\NOTA{ \outlineComment{Future directions -- Stellar tree in parallel/distributed context -- add discussion about relation to distributed mesh data structures.} }
Another avenue for investigation could be to extend our processing algorithms for parallel, distributed and/or out-of-core environments, which could be used for applications like multicore homology computation~\cite{Lewis2014Multicore} on point cloud data.
The Stellar tree's compact leaf block representation is already geared towards a parallel execution pattern since each block already has sufficient resources to query the connectivity of its local subcomplex.
Preliminary results along this line look promising.
A simple unoptimized OpenMP~\cite{OpenMP} adaptation of boundary and restricted vertex co-boundary queries yielded a 3-4x speedup compared to our serial approach on our 6 core machine.
\NOTA{ \outlineComment{Future directions -- extend Stellar decomposition for abstract data sets} }
Finally, while Stellar trees require their underlying complex to be spatially embedded, there is no such restriction on the Stellar decomposition model.
Thus, we plan to investigate Stellar decompositions for \emph{abstract} $CP$ complexes, such as simplicial complexes representing social networks.
Social network representation and processing poses new challenges in the social big data domain, such as the identification of key-players and \emph{communities} in the dataset, as well as extracting topological properties of the network, like its homology or $k$-connectivity structure.
Due to the irregularities of non-spatial datasets, one key challenge would be to define efficient decompositions (i.e.\ with a low average spanning number $\Chi$) using only the complex's connectivity information.
\subsection{Evaluation of generation timings}
\label{sec:generation_timings}
In this section, we evaluate the generation times for the \datasetName{compressed} Stellar tree representations on the experimental datasets from Section~\ref{sec:stellar_experimental_plan}.
Table~\ref{tab:generation_timings} shows the timings of the four generation phases and the overall \emph{total} timings.
The \emph{insert} columns show the time for creating the base indexing structure \h\ over the vertices \sCV\ of the complex \sC,
or the time for inserting the top cells \sCT\ into \h,
while \emph{reindex} columns show the timings for reordering and SRE compressing the indexed lists and arrays in \h\ and \sC.
We first consider the relative expense of each of the generation phases.
In general, the vertex reindexing phase consumes less than 10\% of the overall timings.
For the \emph{triangle}, \emph{quadrilateral}, \emph{hexahedral} complexes, and the lower dimensional \emph{Vietoris-Rips} complex,
generating \h\ is the most expensive phase, while for the \emph{tetrahedral}, \emph{probabilistic-refinement} and the two higher dimensional \emph{Vietoris-Rips} models,
reindexing the top cells is the most expensive phase.
These results can be understood by considering the relative sizes of \sCV\ and \sCT.
When the number of vertices is greater than or equal to the number of top cells, it is more expensive to generate the spatial hierarchy \h.
Otherwise, the cost of reindexing and compressing the top cells arrays dominates the generation times.
Finally, considering the effect of the bucketing thresholds (\kv) on generation times,
we find that Stellar trees with higher bucketing thresholds (\kl) can be generated in less time than those with lower bucketing thresholds (\ks).
This is expected since high values of \kv\ tend to produce coarser spatial subdivisions with lower average spanning numbers \Chi.
\subsection{Stellar tree generation: reindexing and compressing the \topcpcells}
\label{sec:tops_reordering_long}
We describe here the four steps performed by the generation algorithm in reindexing the top CP cells, expanding on the description in Section \ref{sec:tops_reordering}.
Algorithm~\ref{alg:main_tops_reordering} requires three auxiliary data structures:
\begin{itemize}
\item an associative array, \mapM, which maps an (integer) identifier to each unique tuple of leaf blocks;
\item an array of integers, \arrI, having the same number of entries as \mapM.
Initially, it is used to track the number of \topcpcells\ associated with each tuple of leaf blocks.
In a successive phase, it tracks the next index for a \topcp\ in a leaf tuple;
\item an array of integers, \arrTPos, of size $|\sCT|$.
Initially, it is used to associate \topcpcells\ with their leaf tuple identifier.
In a successive phase, it is used to store the new spatially coherent indices for the \topcpcells.
\end{itemize}
The reindexing exploits the spatial coherence of \topcpcells\ that are indexed by the same set of leaves
by translating spatial proximity in \aSpace\ into index-space proximity in \sCT.
Figure~\ref{fig:reindexing_verbose} illustrates this reorganization process over a triangle mesh.
\NOTA{
A 2D example is shown in Figure \ref{fig:reindexing_verbose}.
In Figure \ref{fig:reindexing_verbose}(a), the original tree \h\ is shown, with the triangles added to the leaf blocks following the original sorting of \sCT.
Figure \ref{fig:reindexing_verbose}(b) then illustrates the technique, showing the two main structures that are conceptually defined while exploiting the spatial coherence of the triangles:
\begin{enumerate}
\item on the left, the array that keeps, for each triangle in \sCT, the tuple of leaf blocks indexing it;
\item on the right, the structure that encodes the association between the leaf tuples and the triangles indexed by the same tuple.
\end{enumerate}
Finally, Figure \ref{fig:reindexing_verbose}(c) shows the final tree, in which the spatial coherence of the triangles has been exploited.
}
\input{algorithms/tops_reindexing_appendix}
\begin{figure*}[t]
\centering
\subfloat[Original triangles]{
\resizebox{.25\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_orig}
}
}
\hfil
\subfloat[After Step 1]{
\resizebox{.2\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_extract_leaf_tuples}
}
}
\hfil
\subfloat[After Step 2]{
\resizebox{.13\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_extract_cell_indices_final}
}
}
\hfil
\subfloat[Reindexed triangles]{
\resizebox{.25\textwidth}{!}{
\includegraphics{imgs/tetra_reordering_new}
}
}
\caption{Top cell reindexing.
(a) initial tree with four leaf blocks $a, b, c, d$;
(b,c) auxiliary data structures after Steps 1 and 2;
(d) reindexed tree.}
\label{fig:reindexing_verbose}
\end{figure*}
We summarize here the major steps of Algorithm~\ref{alg:main_tops_reordering}.
\NOTA{
\paragraph*{Extract the leaves and \topcpcells\ association}
\kennyComment{Summary:
Iterate through leaves of tree.
\\ $\forall$ \topcp\ \simplex\ in \tB\ of \B\ w/ $min(\relation{\tDim,0}(\simplex)) \in \PhiMapVert(\B)$:
\\ -- Find its tuple\_of\_blocks,
\\ -- uniqueID = M[tuple\_of\_blocks] (get or create new key)
\\ -- I[uniqueID]++
\\ -- t\_positions( \tIndex) = uniqueID
}
}
\subsubsection*{\AlgoName{extract\_leaf\_tuples}}
In Algorithm \ref{alg:get_leaf_top_association}, we generate map \mapM,
count the number of \topcpcells\ associated with each tuple of leaf blocks in array \arrI\
and initialize each \arrTPos\ entry with the corresponding tuple identifier:
\begin{itemize}
\item
for each leaf block \B\ in \h, we visit the \topcpcells\ \simplex\ in \PhiMapTop(b) whose minimum vertex index \vIndex\ (row 6)
is indexed in \B. This ensures that each \topcp\ is processed only once.
Blocks of \h\ are uniquely indexed by the index of their starting vertex \vstart;
\item
for each such \topcp\ \simplex\ with index \tIndex, we traverse the tree to find the tuple of leaf blocks that index \simplex\ (row 8, function \AlgoName{extract\_leaf\_tuple}).
We then look up its unique identifier $key$ in \mapM\ (or create a new one and insert it into \mapM) (row 9).
We then increment the count for this tuple,
and associate \simplex\ with this tuple
(rows 10 and 11).
\end{itemize}
At the end of the traversal of \h, each entry of \arrTPos\ contains the identifier of the tuple of leaf blocks indexing its corresponding top cell
and \arrI\ contains the number of \topcpcells\ indexed by each leaf tuple. \mapM\ is no longer needed and we can discard it.
The content of auxiliary data structures, after this step, is illustrated in Figure \ref{fig:reindexing_verbose}(b).
For example, triangle $5$ is indexed by leaves $a$ and $b$, whose key in \mapM\ is $2$. This tuple contains two triangles other than $5$, as indicated by the corresponding counter in \arrI.
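The bookkeeping of this step can be sketched as follows, assuming (hypothetically) that the tuple of leaf blocks indexing each cell has already been extracted; \mapM, \arrI\ and \arrTPos\ are modeled as a plain dictionary and lists.

```python
def extract_leaf_tuples(cell_to_leaves):
    """Step 1: associate each top cell with the tuple of leaves indexing it.

    cell_to_leaves: list where entry t is the tuple of leaf blocks
    indexing top cell t (playing the role of extract_leaf_tuple).
    Returns (I, t_positions): per-tuple cell counts and, for each cell,
    the identifier of its leaf tuple.
    """
    M = {}                                 # leaf tuple -> unique identifier
    I = []                                 # per-tuple cell counter
    t_positions = [0] * len(cell_to_leaves)
    for t, leaves in enumerate(cell_to_leaves):
        key = M.setdefault(tuple(leaves), len(M))  # get or create id
        if key == len(I):                  # first time we see this tuple
            I.append(0)
        I[key] += 1                        # count cells in this tuple
        t_positions[t] = key               # remember the cell's tuple id
    return I, t_positions
```

After the pass, `M` can be discarded, matching the observation above that \mapM\ is no longer needed.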
\NOTA{
\paragraph*{Extract the spatial coherence of the \topcpcells}
%
\kennyComment{
Summary:
Use I and t\_positions to find a reordering of \sCT.
\\ -- No longer need M, clear its memory
\\ (1) I := \texttt{prefixSum} of I
\\ (2) Forall \simplex: t\_positions[t] = I[t\_positions[t]]++
\\ -- No longer need I, clear its memory
}
}
\subsubsection*{\AlgoName{extract\_cell\_indices}}
In Algorithm \ref{alg:get_tops_reorderd_indexes},
we use the \arrI\ and \arrTPos\ arrays to find the updated index for each \topcp\ in \sCT, which is computed in place in \arrTPos.
First, we convert the cell counts in array \arrI\ into starting indexes for the \topcpcells\ grouped by the same set of leaf blocks, by taking the \emph{prefix sum} of array \arrI\
(rows 1 to 4).
Then, we use array \arrI\ to update \arrTPos\ array by iterating over the \topcpcells, and replacing the tuple identifier in \arrTPos\ with the next available index from \arrI\ and increment the counter in \arrI\
(rows 5 to 8).
At this point, \arrTPos\ is a permutation array that encodes a more spatially coherent ordering for the \topcpcells\ and \arrI\ is no longer needed.
The content of auxiliary data structures after this step, is shown in Figure \ref{fig:reindexing_verbose}(c).
At the end, each entry of \arrI\ contains the first index of the next tuple, while each entry of \arrTPos\ contains the new position of the corresponding triangle.
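A minimal sketch of this prefix-sum phase, with \arrI\ and \arrTPos\ modeled as plain Python lists (the names are illustrative):

```python
def extract_cell_indices(I, t_positions):
    """Step 2: turn per-tuple cell counts into new coherent cell indices.

    Converts I into per-tuple starting offsets via a prefix sum, then
    replaces each tuple identifier in t_positions with the next free
    index for that tuple. Both arrays are updated in place.
    """
    start = 0
    for key in range(len(I)):              # prefix sum: counts -> offsets
        start, I[key] = start + I[key], start
    for t, key in enumerate(t_positions):
        t_positions[t] = I[key]            # next free slot for this tuple
        I[key] += 1
    return t_positions
```

With counts `[1, 2, 1]` and tuple ids `[0, 1, 2, 1]`, the cells of each tuple end up in consecutive positions, yielding the permutation `[0, 1, 3, 2]`.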
\NOTA{
\paragraph*{Compress the tree representation}
\kennyComment{Summary: Use t\_positions to reorder \tB\ arrays and SRE compress them.
Use t\_positions to reorder \sCT\ arrays.}
}
\subsubsection*{\AlgoName{compress\_tree\_cells}}
In Algorithm \ref{alg:compress_tree_representation}, we apply this order to the lists \tB\ of top simplices of each leaf block \B\ and compact the \tB\ leaf block arrays using the SRE compression (as described in Section \ref{sec:leaf_encodings}).
This procedure iteratively visits all blocks of a Stellar tree.
Within each leaf block \B, an auxiliary array, $\tB\_aux$, initially holds a copy of the array of \topcpcells\ position indices encoded by \B\ (row 5). These indices are then updated with the spatially coherent ones from $t\_permutation$ (rows 7 and 8) and, finally, the array is sorted so that sequential indices occupy consecutive positions of $\tB\_aux$ (row 9).
Next, we identify consecutive index runs by iterating over $\tB\_aux$ array (rows 12 to 21).
In this phase, we use two auxiliary variables: a $counter$, encoding the size of the current run, and a variable $start\_id$, encoding the starting index of the current run. If we find two consecutive indices, we simply increment $counter$; otherwise, we check whether we have a run (row 16) or whether we simply have to add the index in $start\_id$ to the \tB\ array of \B\ (row 19). If we have to encode a run in \tB\ (procedure \AlgoName{create\_sre\_run}, row 17), we apply the strategy described in Section \ref{sec:leaf_encodings} for encoding it.
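The run-detection loop can be sketched as follows. The pair encoding of a run as (start, negated end) is an assumption made for illustration, standing in for the SRE encoding of Section \ref{sec:leaf_encodings}; the minimum run length is likewise an illustrative parameter.

```python
def sre_compress(indices, min_run=3):
    """Compress a sorted list of cell indices with run encoding.

    A run of at least `min_run` consecutive indices [s, ..., e] is
    stored as the pair (s, -e); the negative sign marks the entry as
    the closing bound of a run. Isolated indices are kept verbatim.
    """
    out, i = [], 0
    while i < len(indices):
        j = i
        while j + 1 < len(indices) and indices[j + 1] == indices[j] + 1:
            j += 1                          # extend the current run
        if j - i + 1 >= min_run:            # long enough: emit a run
            out += [indices[i], -indices[j]]
        else:                               # emit indices individually
            out += indices[i:j + 1]
        i = j + 1
    return out
```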
\subsubsection*{\AlgoName{permute\_array}}
Finally, in Algorithm \ref{alg:update_array}, we update the global \topcpcells\ array \sCT.
This is done by iteratively swapping the entries in \sCT\ (rows 5 to 8), applying the new spatially coherent indices encoded in the $permutation$ array.
This procedure performs in-place updates and thus does not require any additional auxiliary data structure.
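The in-place update can be sketched as the standard cycle-following swap loop (a Python sketch, not the reference implementation):

```python
def permute_array(arr, permutation):
    """Apply new positions to arr in place via swaps.

    permutation[i] gives the new index of the element currently stored
    at position i; each swap moves one element to its final slot, so no
    auxiliary buffer is needed beyond the permutation itself.
    """
    for i in range(len(arr)):
        while permutation[i] != i:
            j = permutation[i]
            arr[i], arr[j] = arr[j], arr[i]
            permutation[i], permutation[j] = permutation[j], permutation[i]
    return arr
```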
\NOTA{
Recall that, we assume that all the blocks of \h\ are \emph{half-open} blocks unless a block \B\ is incident
in the block representing the ambient space \aSpace, as in this case we consider that face of \B\ as \emph{closed}.
}
\NOTA{
We insert the vertices sequentially, over which we start a visit of \h\ to insert a vertex \vertex\ in a leaf block \B\ containing it.
We recall that we do not explicitly encode within each block its domain, but we compute it at runtime, keeping track of the split planes.
We assume that the root block \hR\ completely covers the complex domain and
represents it as a \emph{closed block}.
}
\NOTA{
We insert each \top\ \simplex\ at once and for each boundary vertex \vertex\ of a \ktop\ \simplex, we find the leaf block \B\ of \hL\ that indexes \vertex\
and then, we add \simplex\ to \top\ array of \B\ (i.e., we insert \tIndex\ index in \tB\ array).
}
\NOTA{
\begin{enumerate}
\item first, we navigate the tree to compress it and to get the spatially coherent ordering of the vertices (see Algorithm \ref{alg:get_vertices_ordering});
\item then, for each \simplex\ in \sCT, we update its boundary relation \relation{k,0} (see rows 4 to 6 of Algorithm \ref{alg:main_vertices_reordering});
\item finally, we update the \sCV\ array according to the new position ordering on the vertices.
\end{enumerate}
%
As auxiliary data structure for this procedure, we need an array of integer references, that we call $v\_permutation$.
This array contains exactly $|\sCV|$ entries, and at each entry $i$, it is associated the $i$-th vertex \vertex\ in \sCV.
This entry in $v\_permutation$ contains the new \emph{spatially coherent} position of \vertex\ in \sCV.
Thus, the extra storage overhead is exactly $|\sCV|$.
$v\_permutation$ is the output of Algorithm \ref{alg:get_vertices_ordering}, and it is required as input during the update
of the \topcp\ boundary relations (rows 4 to 6 of Algorithm \ref{alg:main_vertices_reordering}) and during the update of \sCV\ array.
%
Algorithm \ref{alg:get_vertices_ordering} is a recursive procedure in which we visit all the blocks in \h.
If we are in an internal block (see rows 1 to 5 of the Algorithm), we recursively visit all the children, while if we are in a leaf block \B\
(see rows 7 to 12 of the Algorithm), we visit the vertices array \PhiMapVert(b), we set up a consecutive indexes run in \B\
and we update the corresponding entries in $v\_permutation$.
}
\NOTA{
After this stage, each leaf block contains a contiguous range of vertices and each internal block contains a continuous range
of vertices equal to the union of the ranges contained in its descendant.
Moreover, we do not require extra structures to encode this range, thanks to the \emph{sequential run-length} compression, and, thus,
in each vertex array, we require just two entries that have been initialized in order to represent the unique vertices run into the leaf block.
We denote the extreme vertices of this run as \vstart\ and \vend.
Then, we proceed with the updating of boundary relation \relation{k,0} on all \ktops\ in \sCT\ (see rows 4 to 6 in Algorithm \ref{alg:main_vertices_reordering}).
For each vertex \vertex\ (with index \vIndex\ in \sCV) in \relation{k,0} of a \ktop\ \simplex, we get its spatial coherent position
in \sCV\ from $v\_permutation$, by accessing the \vIndex\ entry. Once we get this position we update the entry, associated to \vertex,
in \relation{k,0}(\simplex) with the value in $v\_permutation[\vIndex]$.
Finally, we update the \sCV\ array accordingly with the spatially coherent ordering (in Algorithm \datasetName{update\_array}).
During the algorithm, we iteratively swap the index positions, updating at each swap operation a vertex \vertex,
for which we gather its new spatially coherent position from $v\_permutation[\vIndex]$. This algorithm does exactly $|\sCV|$ swaps without requiring any extra storage.
}
\NOTA{
In order to update $M$ and $t\_position$, we need to extract the tuple of leaf blocks indexing \simplex.
If \simplex\ is completely indexed into \B, then we have its tuple already available. Otherwise, we have to visit the tree to find the leaf blocks that index the other vertices of \simplex.
Once we get the tuple of leaf blocks indexing \simplex, we have to check if the tuple is already into $M$, and if it is not present we insert it. Conversely, if it is already into $M$, we simply increment the counter that keeps track of the \topcpcells\ indexed into that tuple.
}
\section*{Acknowledgments}
This work has been supported by the National Science Foundation under grant number IIS-1116747 and by the University of Maryland under the 2017-2018 BSOS Dean Research Initiative Program.
It has also been performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
Datasets are courtesy of
the \emph{Volvis} repository (\datasetName{bonsai}, \datasetName{f16} and \datasetName{foot})~\cite{Volvis},
the \emph{Volume Library} (\datasetName{vismale})~\cite{VolLib},
\emph{CMU Unstructured Mesh Suite} (\datasetName{san fernando})~\cite{CMUMeshSuite}
and \emph{Aim@Shape} repository (\datasetName{lucy}, \datasetName{statuette} and \datasetName{neptune})~\cite{AIM@shape}.
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:introduction}
Named Entity Recognition (NER)~\cite{nadeau2007survey} is a challenging and \revise{fundamental task} in natural language processing. NER aims to recognize named entities such as \textit{person, location, organization} in unstructured text, converting free text into structured data. For several tasks, such as question answering and information retrieval, a NER system is often used to preprocess the data, so the performance of the NER system directly affects the overall performance of \revise{these advanced tasks}. Besides, scientists, especially those working on medical, biographical, and geographical data, often need to find named entities in the literature for \revise{further} research. For example, extracting geographic locations automatically and then displaying them on electronic maps helps people better understand and utilize the literature~\cite{zhang2009extraction}.
Over the past few years, \revise{NER} has been widely investigated.
The development of the NER system is highly related to the evolution of the natural language processing system.
In the 1990s, rule-based natural language processing methods~\cite{brill1992simple, frye1995theory} prevailed and solved some simple problems.
However, rule-based methods turned out to have poor versatility and to be hard to transfer between domains.
NER models can also adopt traditional statistical methods, such as Naive Bayes classification~\cite{mccallum1998comparison}, CRF (Conditional Random Field)~\cite{luo2015joint} and HMM (Hidden Markov Model)~\cite{ratinov2009design}. However, these models rely on resources and features that are costly to collect.
In recent years, deep neural networks have provided a more practical solution. By learning the statistical features of a large-scale corpus, deep neural networks summarize and extract the features for specific tasks. In this paradigm, breakthroughs have appeared in many tasks such as text classification, syntactic analysis, named entity recognition, information retrieval, and question answering systems. Furthermore, Collobert et al. proposed \revise{SENNA~\cite{collobert2011natural}}, a unified neural network architecture and learning algorithm, which can be applied to various natural language processing tasks including NER.
Recently, researchers are concerned about generating high-quality text representation, mapping natural language symbols into a high-dimensional vector space. \revise{Latest works} for text representation includes ELMo~\cite{peters2018deep}, BERT~\cite{devlin2018bert}, XLNET~\cite{yang2019xlnet}, etc. However, only improving the feature generation ability is not enough. It is an important issue to build a suitable network model and better use these text representation. BLSTM-CNN~\cite{chiu2016named} firstly combines the Bi-directional LSTM and CNN for the NER task. CNN in this model is used to extract character features and generate character embedding. Similarly, ~\cite{wang2017named} proposes CNN structure by gating mechanism, which allows more flexible information control on the CNN features. However, these methods ignore the spatial characteristic that the ``neighbor words'' can reflect the label of a certain word. For example, some words are often adjacent to the named entity, such as the articles (e.g., \textit{a, the, to}) or the verbs (e.g., \textit{love, play}). In this paper, we propose a special CNN module to process spatial features, helping to extract spatial information from adjacent words. Benefited from CNN's filter structure, the representation of each word can be closely related to the semantic information of its adjacent words. In order to control the information extracted from surrounding words, we also apply a gated mechanism \revise{within} the CNN module.
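The gating idea can be sketched as follows. This is a NumPy sketch with illustrative shapes and names, not the paper's architecture: each output position mixes a window of adjacent word vectors, and a sigmoid gate controls how much of that neighbour information flows through.

```python
import numpy as np

def gated_conv(X, Wf, Wg, width=3):
    """Gated 1-D convolution over a word sequence (GLU-style gating).

    X:  (seq_len, d) word representations.
    Wf, Wg: (width * d, d) feature and gate weights (illustrative).
    """
    seq_len, d = X.shape
    pad = width // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)))     # zero-pad the sequence ends
    out = np.zeros_like(X)
    for t in range(seq_len):
        window = Xp[t:t + width].reshape(-1)  # concat neighbouring words
        feat = window @ Wf
        gate = 1.0 / (1.0 + np.exp(-(window @ Wg)))  # sigmoid in (0, 1)
        out[t] = feat * gate                  # element-wise gating
    return out
```

In the real model the filters are learned jointly with the rest of the network; the sketch only shows how the gate modulates the spatial features extracted from adjacent words.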
Under the stronger text representation and model structure, the performance of the NER system can be significantly improved. However, there is still a gap between the capabilities of NER systems and industry requirements. Since the \revise{size} of NER datasets is usually not large enough, overfitting is an urgent problem for deep neural network \revise{based NER}. As a result, it is easy for the model to identify words that have appeared before, but hard for it to handle unfamiliar words. Therefore, the model needs stronger generalization ability \revise{to obtain stable performance}.
\revise{Adversarial training is a method to train the network with both primal examples and adversarial examples. Here an adversarial example is a primal example with a small adversarial perturbation added, designed to make the target model perform badly.} Adversarial training is now widely used in the image classification task, significantly increasing the generalization ability of the network against input perturbations. For the \revise{NER} task, the input usually consists of discrete one-hot vectors that do not admit infinitesimal perturbations. Instead of applying the adversarial examples to the word input, we add perturbations to the continuous word embeddings and other variables learned in the network. The adversarial examples are trained together with raw examples, improving the model's ability to withstand disturbances and accelerating the convergence process.
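The perturbation construction can be sketched as a fast-gradient-style step whose budget is scaled by the norm of the variable being perturbed, so the same relative strength applies to variables of different magnitudes. This is a NumPy sketch of the idea; the names and exact scaling are illustrative, not the paper's exact recipe.

```python
import numpy as np

def adversarial_perturbation(grad, target, alpha=0.05):
    """Norm-constrained adversarial perturbation of a model variable.

    grad:   gradient of the loss w.r.t. the target variable.
    target: the variable being perturbed; its norm scales the budget.
    Returns a perturbation along the gradient (worst-case) direction.
    """
    eps = alpha * np.linalg.norm(target)     # norm-constrained budget
    g_norm = np.linalg.norm(grad)
    if g_norm == 0:
        return np.zeros_like(grad)
    return eps * grad / g_norm               # unit gradient times budget
```

The perturbed variable `target + adversarial_perturbation(grad, target)` is then fed through the network alongside the clean one during training.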
We achieve a robust \revise{NER} system ASTRAL (Adversarial Trained LSTM-CNN) by augmenting the network structure and enhancing the training process. The contributions of our work are as follows:
\begin{itemize}
\item We introduce the Gated-CNN into the named entity recognition task as an enhancement of feature extraction. We apply CNN modules at the word level, which helps the system pay more attention to adjacent words. In order to flexibly control the spatial information extracted by the CNN, we apply a gating mechanism to merge the spatial information and combine it with the original features.
\item We also \revise{refine} the training process to make the NER system more stable. With adversarial training, we construct perturbations and add them to arbitrary variables in the model during \revise{each training step}, giving the model better generalization ability. When generating perturbations, we use the target variable to constrain the norm, so that adversarial training can be applied to any variable \revise{within} the model, even to multiple variables at the same time. The experiment shows that with adversarial training, the network converges \revise{much more easily} than the basic model.
\item We quantitatively evaluate our system on three benchmarks, on which it achieves state-of-the-art results. \revise{The experiments show} that Gated-CNN has a different influence on various types of named entities, and that adversarial training helps \revise{reduce} training loss and prevent overfitting. We also perform a qualitative case study to analyze both the success and failure cases of the system. It shows the advantages of our system and the problems that still need to \revise{be fixed}.
\end{itemize}
The remainder of this paper is organized as follows. Section 2 presents an overview of traditional and \revise{deep neural network based} methods on NER, as well as the methods for text representation and adversarial training. Section 3 describes the methodology used by our model. Section 4 verifies the effectiveness of our model by performing comparisons with the state-of-the-art methods as well as ablation experiments. Section 5 concludes the paper with discussions and outlooks.
\section{Related Work}
\subsection{Named Entity Recognition}
Named Entity Recognition (NER) aims at detecting named entities (e.g., \textit{person, location, time,} and \textit{organization}) from unstructured text. In this \revise{subsection}, we will introduce the traditional high-performance approaches and \revise{deep neural network based} models.
Over the last decades, numerous approaches based on traditional \revise{machine learning} algorithms have been applied to the NER task. These methods include the Naive Bayes Classifier~\cite{mccallum1998comparison}, Conditional Random Field models (CRF)~\cite{lafferty2001conditional}, and knowledge-driven models~\cite{wang2018label}. However, traditional methods such as the Naive Bayes Classifier and knowledge-driven models require writing many rules for different scenarios. Thus a solution for a specific task cannot be generalized to all applications, making transfer between domains cumbersome. Besides, CRF mainly focuses on the transition probability of each word and does not pay \revise{enough} attention to the named entity attributes of the word.
Now, most NER methods are based on sequence labelling~\cite{chiu2016named,graves2012supervised,lample2016neural,aguilar2019named,clark2018semi,akbik2018coling}. These methods classify every word in the corpus into different categories. These categories correspond to different application scenarios, such as \textit{person, location, time}, and \textit{organization}. In this way, a sequence of labels that contains the entity information can be generated from these words.
With the \revise{development} of deep learning techniques, neural networks have \revise{achieved} state-of-the-art performance on NER. Some researchers try to reduce the manual effort needed to obtain labeled data. Yanyao et al.~\cite{shen2017deep} carry out incremental active learning, in which the \revise{required} amount of labeled training data can be dramatically reduced. The lightweight architecture also speeds up the training process. \revise{These models aim to} minimize the annotation cost while maintaining the performance of NER models~\cite{chen2015study}.
The generalization of the model is also a \revise{vital} problem worth studying. Zhenghui et al.~\cite{wang2018label} propose label-aware feature transfer learning and parameter transfer learning for cross-specialty NER. In this way, a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts.
In order to combine the advantages of previous work and obtain better model ability, many researchers combine the Bidirectional LSTM (Bi-LSTM)~\cite{schuster1997bidirectional} and CRF~\cite{lafferty2001conditional} to perform the NER task~\cite{huang2015bidirectional,lample2016neural}. They first use the Bi-LSTM to extract text features, then construct a CRF layer to produce the output labels.
\subsection{Text Representation}
Text representation is a \revise{crucial} technique in natural language processing. Bengio proposed the concept of the NNLM (neural network language model)~\cite{bengio2003neural} in 2003, laying the theoretical foundation for using neural networks to generate word embeddings. \revise{Hence} a paradigm was formed of mapping linguistic symbols to high-dimensional spaces for further processing. After word2vec~\cite{mikolov2013efficient} and GloVe~\cite{pennington2014glove} were proposed, word embeddings gained better representation ability. With a large-scale corpus, neural network based language models exert strong analytical ability and achieve lower perplexity. Since then, word embeddings have become a standard method in the field of natural language processing, serving as the representation of text in \revise{various} tasks.
Text representation has made great progress in recent years. There is a series of excellent works such as ELMo~\cite{peters2018deep}, GPT~\cite{radford2018improving}, BERT~\cite{devlin2018bert}, and XLNET~\cite{yang2019xlnet}. These works divide natural language processing into two steps: first use a language model to pre-train, and then use a fine-tuning module to solve various tasks. ELMo~\cite{peters2018deep} can dynamically adjust the word embedding according to the current context. GPT (Generative Pre-Training)~\cite{radford2018improving} uses the Transformer~\cite{vaswani2017attention} as a feature extractor instead of an RNN to obtain stronger feature extraction ability. BERT~\cite{devlin2018bert} uses the masked language model and next sentence prediction to enhance the mining of context. XLNET~\cite{yang2019xlnet} incorporates the Transformer-XL~\cite{dai2019transformer} idea for relative segment encodings and expands the size of the dataset. These text representation methods are deeply studied in terms of pre-training, \revise{while} the construction of the application \revise{module} for the second stage receives less attention. In this paper, instead of improving the text representation, we focus on building a better model to make use of these text representations.
\subsection{Adversarial Training}
Adversarial training~\cite{goodfellow2014explaining} is a method to enhance the training process with adversarial examples. Szegedy et al.~\cite{szegedy2013intriguing} indicate that if an input sample is added with a well-designed perturbation that humans would not even notice, the neural network may make a wrong prediction. A sample with such a well-designed small perturbation is called \revise{an} adversarial example. There are two main \revise{kinds of research on} adversarial examples \revise{recently}. The first is adversarial attacking~\cite{athalye2018obfuscated,tramèr2018ensemble}, where adversarial examples are utilized to evaluate the robustness of various models by attacking them. Additionally, adversarial examples can be considered extended training data to enhance the generalization and robustness of the model, which is named adversarial training.
The adversarial training method was first used on the image classification task~\cite{goodfellow2014explaining}. Before updating parameters \revise{in each training step}, adversarial training examples are generated by adding perturbations based on the current parameters. So the adversarial training method is an augmentation of the training data. Following the idea of adversarial training, Park et al.~\cite{park2018adversarial} propose adversarial dropout, generating the dropout mask according to the weak points of the model, which also leads to a better training process. Adversarial training is also used in text classification~\cite{miyato2016adversarial}. In the natural language processing domain, the input of the model is discrete, so the perturbation is added to the word embeddings instead, achieving state-of-the-art performance with a quite simple LSTM structure. After that, adversarial training was used to benefit the task of relation extraction~\cite{wu2017adversarial}. In this paper, we explore the advantage of adversarial training \revise{on the NER task}.
\section{Methodology}
\begin{figure}
\centering
\includegraphics[width=10cm, height=8cm]{figure_new/architecture.pdf}
\caption{The overall architecture of ASTRAL. The model consists of five modules: embedding module, Gated-CNN module, Bi-LSTM module, CRF module, and adversarial training module.} \label{architecture}
\end{figure}
In this section, we will first demonstrate the architecture of our ASTRAL (\textbf{A}dver\textbf{S}arial \textbf{TRA}ined \textbf{L}STM-CNN) model, then illustrate the implementation details of adversarial training.
The overall structure of ASTRAL is illustrated in Figure~\ref{architecture}.
As shown in this figure, the goal of ASTRAL is to \revise{predict} tags $Y_{pre}$ with the same length as the input sentence $W$. Here $W=({{w}_{1}},{{w}_{2}},\dots,{{w}_{n}})$ represents a sentence with $n$ tokens, and ${{Y}_{pre}}=({{y}_{1}},{{y}_{2}},\dots,{{y}_{n}})$ represents the $n$ predicted tags for the tokens in $W$. In our model, the IOB format (short for \textit{inside, outside,} and \textit{beginning}) is used as the label standard. Since there are multiple types of named entities, suffixes representing the entity type are attached after B and I. So a tag in ${Y}_{pre}$ can be B-\#, where \# denotes the specific named entity type, \revise{e.g., ORG, MISC}. For example, in Figure~\ref{architecture}, when identifying the sentence ``EU rejects German call to boycott British lamb'', we can determine that ``EU'' belongs to organization (ORG), while ``German'' and ``British'' belong to miscellaneous (MISC); thus the sequence of tags is ``B-ORG, O, B-MISC, O, O, O, B-MISC, O''.
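To make the IOB scheme concrete, the mapping from entity spans to tags can be sketched in a few lines of Python. The `spans_to_iob` helper and the half-open span format are ours for illustration, not part of the model:

```python
def spans_to_iob(tokens, entities):
    """Convert (start, end, type) entity spans into IOB tags.

    `entities` lists half-open token index ranges, e.g. (0, 1, "ORG")
    covers tokens[0:1]; every uncovered token is tagged "O".
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in entities:
        tags[start] = "B-" + etype          # first token of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + etype          # continuation tokens
    return tags

sentence = "EU rejects German call to boycott British lamb".split()
entities = [(0, 1, "ORG"), (2, 3, "MISC"), (6, 7, "MISC")]
print(spans_to_iob(sentence, entities))
# -> ['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O']
```

This reproduces the tag sequence given for the example sentence above.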
The ASTRAL model \revise{is composed of} five modules: the embedding module, Gated-CNN module, Bi-LSTM module, CRF module, and adversarial training module. The embedding module transforms the words into vectors. The Bi-LSTM module is a variant of the RNN (Recurrent Neural Network), which generates features from the word vectors. The Gated-CNN module refines the spatial features, and the gating mechanism further filters the obtained information. The CRF module combines the information acquired by the Bi-LSTM and the Gated-CNN, then \revise{generates} the final \revise{tags as the} output.
During training, the adversarial training module generates adversarial perturbation to make the model more generalized and \revise{obtain} better training accuracy.
\subsection{Embedding Module}
\revise{
Given a sentence $W=(w_1,w_2,\dots,w_n)$ with $n$ tokens, the embedding module aims at transforming $W\in {\mathbb{R}^{{n_{id}} \times n}}$ into its embedding representation $E=(e_1,e_2,\dots,e_n)$, where $w_i\in \mathbb{R}^{n_{id}}$ denotes the one-hot vector indexing the $i$-th token in the sentence, $e_i\in \mathbb{R}^{d_e}$ corresponds to the $i$-th token, and $n_{id}$ is the number of all used tokens. In our model, $E \in {\mathbb{R}^{{d_e} \times n}}$ is the concatenation of ${E}_{w}$ and ${E}_{f}$ as
\begin{equation}
E=[{{E}_{w}};{{E}_{f}}] \,, \\
\end{equation}
where $[\cdot;\cdot]$ denotes the concatenation of different vectors, ${E_w} \in {\mathbb{R}^{{d_w} \times n}}$ denotes the pooled contextualized embedding~\cite{akbik2019pooled}, ${E_f} \in {\mathbb{R}^{{d_f} \times n}}$ denotes the feature embedding, ${d_e} = {d_w} + {d_f}$, ${d}_{w}=300$ and ${d}_{f}=20$ in our experiments.
We then introduce the definition and function of these two submodules in detail. Pooled contextualized embedding~\cite{akbik2019pooled} ${E}_{w}$ is a kind of general word embedding
\begin{equation}
{{E}_{w}}={M}_{w}\cdot W \,,
\end{equation}
where ${M}_{w} \in {{\mathbb{R}}^{{{d}_{w}}\times {{n}_{id}}}}$ denotes the matrix of pre-trained pooled contextualized embeddings. ${E}_{w}$ contains the contextual meaning around the target word and the memory of previous occurrences in the dataset. Contextualized embedding can produce meaningful embeddings even for rare strings by using a memory mechanism over earlier instances. And the pooling operation helps distill the word representation from all contextualized tokens.
Then we utilize feature embedding ${{E}_{f}}$ to extract rule-based information
\begin{equation}
{{E}_{f}}={M}_{f}\cdot W_f \,,
\end{equation}
where ${M}_{f} \in {{\mathbb{R}}^{{{d}_{f}}\times {{n}_{f}}}}$ denotes the parameter matrix of the feature embedding, and ${W_f} \in {\mathbb{R}^{{n_f} \times n}}$ denotes the feature indicators of the given tokens. The capitalization of words is obviously useful when discriminating named entities, e.g., a location usually starts with an uppercase character. So following the previous work~\cite{ghaddar2018robust}, our selected five features are all-lower, upper-first, upper-not-first, numeric, and no-alpha-num, which means ${n_f=5}$. Then the sentence features $W_f$ are mapped by the randomly initialized lookup table ${M}_{f}$ to ${E}_{f}\in {\mathbb{R}}^{{{d}_{f}}\times {n}}$, which contains $n$ vectors with $d_f$ dimensions. After training, the feature embedding ${E}_{f}$ can establish an effective representation relationship with named entities.
}
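The text only names the five surface features, so the predicates below are our reading of those names; a minimal sketch of the indicator vector feeding $W_f$:

```python
def surface_features(token):
    """Indicator vector for the five surface features named in the text:
    all-lower, upper-first, upper-not-first, numeric, no-alpha-num.
    The exact predicates are our interpretation of the feature names."""
    return [
        int(token.islower()),                        # all-lower
        int(token[:1].isupper()),                    # upper-first
        int(any(c.isupper() for c in token[1:])),    # upper-not-first
        int(token.isdigit()),                        # numeric
        int(not any(c.isalnum() for c in token)),    # no-alpha-num
    ]

print(surface_features("German"))  # -> [0, 1, 0, 0, 0]
```

In the model these indicators are looked up in the trainable table ${M}_{f}$ rather than used directly.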
\subsection{Gated-CNN Module}
\begin{figure}
\centering
\includegraphics[width=7.5cm, height=7cm]{figure_new/Gated-CNN.pdf}
\caption{The structure of Gated-CNN. $V$ denotes the input variable which could be embedding $E$ or hidden states of Bi-LSTM $H$. The input variable $V$ passes a CNN with the filter size $3*N$ and get the feature containing spatial information (represented in yellow). Then two linear functions are used to get the Gated-CNN feature $V_g$.} \label{fig_Gated_CNN}
\end{figure}
In this model, the Gated-CNN module is proposed to integrate the spatial information extracted from the \revise{adjacent} words. The structure of the Gated-CNN module is shown in Figure~\ref{fig_Gated_CNN}, which consists of one CNN and two linear layers. \revise{Given} the input sentence variable with $n$ tokens $V=({{v}_{1}},{{v}_{2}},...,{{v}_{n}})$, we first calculate the integrated representation of each token with its adjacent tokens:
\begin{equation}
{{V}_{c}}={{f}_{CNN}}(V)
\end{equation}
where ${{f}_{CNN}}(\cdot)$ denotes the function of the CNN. This is achieved by one filter with a size of $N_{w}\times N_{o}$, where the window size $N_{w}$ is set in $[3,5,7]$, meaning the number of tokens processed at a time, and $N_{o}$ is a hyperparameter related to the output vector size. So the feature vector of each token is related to its adjacent tokens. Owing to padding, each column of the vector ${V}_{c} =({{v}_{c1}},{{v}_{c2}},...,{{v}_{cn}})$ obtained by the CNN still corresponds to the original token. Therefore, the vector representation of the $i$-th token ${v}_{ci}$ synthesizes the spatial information of the surrounding words on its two \revise{sides}.
Then a gated linear layer is proposed to control the feature vectors produced by the CNN layer:
\begin{equation}
{{V}_{g}}=({{W}_{1}}\cdot {{V}_{c}}+{{b}_{1}})\otimes \sigma ({{W}_{2}}\cdot {{V}_{c}}+{{b}_{2}}) \\
\end{equation}
where ${W}_{1}$, ${W}_{2}$, ${b}_{1}$, ${b}_{2}$ are training parameters of linear functions, $\otimes$ denotes element-wise product, and $\sigma$ denotes the sigmoid function. The gate is trained through the dataset, and it roughly decreases the task-independent vectors to reduce the noise, while amplifying the task-related vectors to enhance the network focus. The gate makes the variables more responsive to the task by changing the focus on the feature map $V_c$.
Finally, we concatenate the variable ${V}_{g}$ with $V$, integrating the spatial information with the original information to \revise{get a} richer text representation $V'$ as
\begin{equation}
V'=[V;{{V}_{g}}] \,.
\end{equation}
In this model, the Gated-CNN module is used twice, once for the embedding and once for contextual extraction. As shown in Figure~\ref{architecture}, for Gated-CNN I, the input variable $E$ is the embedding representation of the sentence, and we get $E'=G(E)$. For Gated-CNN II, the integrated high-level hidden state variable $H$ of the Bi-LSTM is processed in the same way to obtain $H'=G(H)$.
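A minimal scalar sketch of the Gated-CNN computation follows. Real features are vectors and the filter and linear weights are learned; here the features are scalars and the weights are fixed toy values, just to show the zero padding, the gated linear unit, and the concatenation $V'=[V;V_g]$:

```python
import math

def gated_cnn_1d(v, filt, w1=1.0, b1=0.0, w2=1.0, b2=0.0):
    """Scalar sketch of the Gated-CNN: width-3 convolution with zero
    padding (so position i still matches token i), a gated linear unit
    V_g = (w1*V_c + b1) * sigmoid(w2*V_c + b2), then concatenation."""
    n = len(v)
    padded = [0.0] + list(v) + [0.0]                      # zero padding
    vc = [sum(filt[k] * padded[i + k] for k in range(3)) for i in range(n)]
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    vg = [(w1 * c + b1) * sigmoid(w2 * c + b2) for c in vc]  # gating
    return [(vi, gi) for vi, gi in zip(v, vg)]            # V' = [V; V_g]

out = gated_cnn_1d([1.0, 2.0, 3.0], [0.5, 1.0, 0.5])
# each output position stays aligned with its input token
```

The gate's sigmoid factor scales each convolved feature between 0 and 1, which is what lets the module suppress task-independent context and keep task-related context.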
\subsection{Bi-LSTM Module}
\begin{figure}
\centering
\includegraphics[width=7.5cm, height=6cm]{figure_new/Bi-LSTM.pdf}
\caption{The structure of Bi-LSTM. ${LSTM}_F$ is the forward LSTM, ${LSTM}_B$ is the backward LSTM. The output of Bi-LSTM $H$ is the concatenation of these two sub LSTMs' output.} \label{fig2}
\end{figure}
LSTM (Long Short Term Memory)~\cite{iet} is a kind of RNN (Recurrent Neural Network), which extracts features in the chronological order of the input. The formulation of the Bi-LSTM can be described as:
\begin{equation}
\begin{array}{l}
\stackrel{\rightarrow}{H}=LST{{M}_{F}}(E') \\
\stackrel{\leftarrow}{H}=LST{{M}_{B}}(E') \\
H=[\stackrel{\rightarrow}{H};\stackrel{\leftarrow}{H}] \,. \\
\end{array}
\end{equation}
In this paper, we use a Bi-LSTM (Bidirectional LSTM) to extract features in both the \revise{forward} direction as $\stackrel{\rightarrow}{H}$ and the \revise{backward} direction as $\stackrel{\leftarrow}{H}$. The network structure is shown in Figure~\ref{fig2}. \revise{It obtains the representation of each token in turn from both the forward and the backward directions, capturing the correlation with the other surrounding words.}
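The bidirectional concatenation can be illustrated with a toy recurrence standing in for the LSTM cell; the leaky accumulator `f` below is a stand-in, not the actual LSTM update:

```python
def bi_recurrence(xs, f):
    """Toy bidirectional pass: run a recurrence forward and backward
    over the sequence, re-align the backward states, and concatenate
    the two hidden states per position, as in H = [H_fwd; H_bwd]."""
    fwd, h = [], 0.0
    for x in xs:                    # forward direction
        h = f(h, x)
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(xs):          # backward direction
        h = f(h, x)
        bwd.append(h)
    bwd.reverse()                   # re-align with original positions
    return list(zip(fwd, bwd))

# leaky accumulator as a stand-in cell: h_t = 0.5*h_{t-1} + x_t
states = bi_recurrence([1.0, 2.0, 3.0], lambda h, x: 0.5 * h + x)
```

Each position thus carries a summary of its left context and its right context, which is the point of the bidirectional structure.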
\subsection{CRF Module}
The use of CRF (Conditional Random Field) in conjunction with Bi-LSTM is a standard method for the sequence labeling task. As shown in Figure~\ref{architecture}, the input variable of CRF is $H'$ generated by Gated-CNN II, and its output is predicted tags $Y_{pre}=({{y}_{1}},{{y}_{2}},...,{{y}_{n}})$.
CRF generates the sequence tags $Y_{pre}$ by the status feature function $s_k(y_i,H',i)$ and the transition feature function $t_j(y_{i+1},y_i,H',i)$. Here $s_k(y_i,H',i)$ indicates the influence of the input variable $H'$ on $y_i$, while $t_j(y_{i+1},y_i,H',i)$ depicts the effect of $H'$ on adjacent tag transitions in $Y_{pre}$. The predicted tags $Y_{pre}$ are generated by maximizing the score
\begin{equation}
P(y|x)=\frac{1}{Z}\exp (\sum\limits_{j}{\sum\limits_{i=1}^{n-1}{{{\lambda }_{i}}{{t}_{j}}({{y}_{i+1}},{{y}_{i}},H',i)}+\sum\limits_{k}{\sum\limits_{i=1}^{n}{{{\mu }_{k}}{{s}_{k}}({{y}_{i}},H',i)}}}) \,,
\end{equation}
where ${\lambda }_{i}$ and ${\mu }_{k}$ are hyperparameters, and $Z$ is the normalization factor. The CRF module can learn the constraints of the sequence tags. For example, the beginning of a sentence should be ``B'' or ``O'' instead of ``I''. ``O I'' is impossible since the beginning of the named entity should be ``B'' instead of ``I''.
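These IOB constraints can be checked mechanically; the small validator below (our helper, not part of the model) encodes the transition rules described above:

```python
def valid_iob(tags):
    """Check the IOB constraints the CRF layer learns: a sequence may
    not start with an I- tag, and I-X must follow B-X or I-X of the
    same entity type X."""
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            etype = tag[2:]
            if prev not in ("B-" + etype, "I-" + etype):
                return False        # e.g. "O I-PER" or "B-PER I-ORG"
        prev = tag
    return True

print(valid_iob(["B-PER", "I-PER", "O"]))  # -> True
print(valid_iob(["O", "I-PER"]))           # -> False
```

In ASTRAL these constraints are not hard-coded; the CRF learns them through its transition scores.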
\subsection{Adversarial Training Module}
In general, the purpose of the deep neural network is to get predicted output ${Y_{pre}}$ by the input ${V_{in}}$, making the predicted result ${Y_{pre}}$ and the ground truth ${Y}$ closer. The model learns the parameters $\theta$ to minimize the loss function
\begin{equation}
L = loss({Y_{pre}},Y) \,,
\end{equation}
where commonly used loss functions include L1Loss, MSELoss (mean squared error), CrossEntropyLoss, NLLLoss (negative log likelihood), etc. We use CrossEntropyLoss in our experiments.
In this section, we describe how to use normalized adversarial training to strengthen the training process. As shown in Figure~\ref{adversarial}, for every variable $X$ in the model, we can regard it as the adversarial training target variable and add perturbation on it. We represent the model before $X$ as ${f_{bef}}(\cdot)$, and the model after $X$ as ${f_{aft}}(\cdot)$. In our model, we choose the output of Gated-CNN modules $E'$ and $H'$ as the target variables.
\begin{figure}
\centering
\includegraphics[width=12cm, height=3.5cm]{figure_new/adv.pdf}
\caption{The flowchart of adversarial training. The solid line (blue) shows the first round process of obtaining \revise{primal} loss $L_{pri}$. The dashed line (orange) shows the second round process of calculating $R_{adv}$ according to $L$ and $X$, further obtaining $L_{adv}$, and then finally generating final loss $L$. Here $\odot$ denotes the loss function, $\oplus $ denotes add operation; $Y_{pre}$ and $Y_{pre-adv}$ represent prediction results with and without adversarial perturbation respectively. The final optimized loss $L$ is the sum of \revise{primal} loss $L_{pri}$ and adversarial loss $L_{adv}$.} \label{adversarial}
\end{figure}
The adversarial training process in our model can be divided into two rounds. In the first round, our model generates \revise{primal} loss $L_{pri}$ based on the input.
\begin{equation}
X = {f_{bef}}({V_{in}};\theta ),
{Y_{pre}} = {f_{aft}}(X;\theta) \,,
\end{equation}
where $V_{in}$ is the input variable for the model. And the \revise{primal} loss is
\begin{equation}
L_{pri} = F(X,Y;\theta ) = loss({f_{aft}}(X;\theta ),Y) \,.
\end{equation}
\revise{
In the second round, the gradient of $L_{pri}$ with respect to $X$ is computed and normalized to obtain the adversarial perturbation $r_{adv}$.
Here ${r_{adv}}$ should theoretically be obtained from the following optimization problems:
\begin{equation}
{r_{adv}} = \mathop {\arg \max }\limits_{r,||r|| \le \varepsilon } F(X + r,Y;\hat \theta )\,,
\end{equation}
where $\varepsilon $ constraints the norm of ${r_{adv}}$, and $\hat \theta$ indicates the instantaneous value of the parameter for each solution. The parameters are constantly updated, thus the value of $\hat \theta $ is different for each training sample and training step.
In order to get the numerical solution for ${r_{adv}}$, we apply an approximate solution~\cite{goodfellow2014explaining}. The $F(X,Y;\hat \theta )$ is assumed as a linear function around $X$, so the approximated value of ${r_{adv}}$ can be defined as:
\begin{equation}
{r_{adv}} = \varepsilon X \otimes d/||d||,
d = {\nabla _X}F(X,Y;\hat \theta )\,,
\end{equation}
where $d$ is the gradient of the primal loss $\frac{{\partial {L_{pri}}}}{{\partial X}}$, $\varepsilon$ is a hyperparameter, $\otimes$ denotes element-wise product, and ${r_{adv}}$ is the adversarial perturbation designed to ascend the current loss. $X$ is introduced as a multiplier when calculating ${r_{adv}}$, because under such normalization it is more robust to simultaneously use ${r_{adv}}$ for multiple target variables.
Then the sum of $r_{adv}$ and $X$ is put into the ${f_{aft}}(\cdot)$ (structure after $X$) to get adversarial loss $L_{adv}$ as
\begin{equation}
L_{adv} = loss({f_{aft}}(X + {r_{adv}};\theta ),Y)\,.
\end{equation}
The final optimized loss is the sum of these two losses as
\begin{equation}
L = L_{pri} + L_{adv} \,.
\end{equation}
The model parameters $\theta$ optimized in this way can be adapted to both the original data and the disturbing data.
}
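The two-round procedure can be sketched on a scalar toy problem; we use a squared-error loss and a numeric gradient for readability, whereas the real model uses CrossEntropyLoss and backpropagated gradients over high-dimensional tensors:

```python
def adversarial_loss(x, y, f_aft, eps=0.1, h=1e-6):
    """Scalar sketch of the two-round adversarial loss. `f_aft` maps
    the target variable X to a prediction. Round 1 computes the primal
    loss; round 2 builds r_adv = eps * X * d/||d|| from the gradient d
    of the primal loss and adds the resulting adversarial loss."""
    loss = lambda p: (p - y) ** 2
    l_pri = loss(f_aft(x))                      # round 1: primal loss
    d = (loss(f_aft(x + h)) - l_pri) / h        # dL/dX, finite difference
    norm = abs(d) or 1.0                        # ||d|| for a scalar
    r_adv = eps * x * d / norm                  # normalized perturbation
    l_adv = loss(f_aft(x + r_adv))              # round 2: adversarial loss
    return l_pri + l_adv                        # L = L_pri + L_adv

total = adversarial_loss(2.0, 1.0, lambda v: v)
```

Because the perturbation ascends the current loss, minimizing the summed loss forces the parameters to fit both the original point and its worst-case neighborhood.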
\section{Experiments and Results}
\subsection{Dataset and Criteria}
\subsubsection{Dataset}
In this paper, we apply our NER system to three English datasets, CoNLL-03~\cite{sang2003introduction}, OntoNotes 5.0~\cite{pradhan2012conll}, and WNUT-17~\cite{derczynski2017results}, showcasing the effectiveness and robustness of our system. CoNLL-03~\cite{sang2003introduction} is a large dataset widely used by NER researchers; its data source is the Reuters RCV1 corpus, so its main content is newswire. Its named entities include location, organization, person, and miscellaneous. OntoNotes 5.0~\cite{pradhan2012conll} is a larger dataset initially built for the CoNLL 2012 shared task. The source of the text in the dataset is LDC2013T19~\cite{weischedel2013ontonotes}, published by the Linguistic Data Consortium. It covers a wide range of content, including telephone conversations, newswire, newsgroups, broadcast news, broadcast conversation, and weblogs. WNUT-17~\cite{derczynski2017results} is a complex dataset from various sources, mainly derived from social media. The training set is extracted from tweets, the development set comes from YouTube comments, and the testing set is based on Reddit and StackExchange. The inconsistency between training and testing data makes it difficult to recognize named entities in WNUT-17.
\begin{table}
\caption{Dataset statistics. The size of datasets is in the number of entities/tokens.}\label{dataset_statistic}
\centering
\footnotesize
\resizebox{\textwidth}{12mm}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{Train} & \textbf{Dev} & \textbf{Test} & \tabincell{c}{\textbf{Entities}\\\textbf{Frequency}} & \tabincell{c}{\textbf{Entity}\\\textbf{Types}} \\
\hline
CoNLL-03 & 23,499 / 204,567 &5,942 / 51,578&5,648 / 46,666& 11.6\% & 4\\
\hline
OntoNotes 5.0 & 81,828 / 1,088,503&11,066 / 147,724&11,257 / 152,728& 7.5\% & 18\\
\hline
WNUT-17 & 3,160 / 62,729& 1,250 / 15,733& 1,589 / 23,394& 5.9\% & 6\\
\hline
\end{tabular}
}
\end{table}
\begin{figure}
\centering
\includegraphics[width=7cm, height=5cm]{figure_new/distribution_of_dataset.pdf}
\caption{\revise{The distribution of named entities on the three datasets. The number of named entity tokens within every 100 tokens is counted, and we show the percentage over that number for each dataset. We only show the range from 0 to 50, since there are few cases with more than 50 named entity tokens within 100 tokens.}} \label{fig_distribution_of_dataset}
\end{figure}
We show the statistics of the above datasets in Table~\ref{dataset_statistic}. When evaluating NER systems, \revise{researchers} are more inclined to compare their results on CoNLL-03. From Table~\ref{dataset_statistic}, we can see that the token and entity size of OntoNotes 5.0 is the largest, which helps to test the generalization ability of our network on large datasets. WNUT-17, a dataset closer to daily life, makes more sense for the practical application of NER systems. \revise{We also analyse the distribution of named entities via the column ``Entities Frequency'' in Table~\ref{dataset_statistic} and the curves in Figure~\ref{fig_distribution_of_dataset}. The frequency of entities differs considerably across the three datasets: 11.6\% of tokens in CoNLL-03 are named entities, while only 5.9\% of those in WNUT-17 are. Figure~\ref{fig_distribution_of_dataset} illustrates this phenomenon specifically. We divide every 100 tokens into a group; the percentage of groups in CoNLL-03 that contain ten or more entity tokens is 70\%, while that in WNUT-17 is only 14\%. This means the percentage of entity tokens in WNUT-17 is relatively small.}
\subsubsection{Evaluation Metrics}
In the experiment, we mainly measure the F1 values of different models in the above three datasets.
Precision ($P$), Recall ($R$), and $F1$ value are common indicators for measuring model performance:
\begin{equation}
P = \frac{{|A|}}{{|{T_{pre}}|}},R = \frac{{|A|}}{{|{T_{gt}}|}},F1 = \frac{{2PR}}{{P + R}},
\end{equation}
where ${T_{pre}}$ represents the predicted answer collection, ${T_{gt}}$ denotes the ground truth answer collection, $A = {T_{pre}} \cap {T_{gt}}$ is the hit answers, and $|\cdot|$ is the number of elements in the collection.
In detail, we measure the performance of the system at the word level. For example, a named entity consisting of two words with labels ``B-PER I-PER'' is counted as two separate elements during evaluation.
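The word-level metric can be sketched as follows; this is a simplified token-level count over non-``O'' tags, ignoring span boundaries:

```python
def token_prf(pred, gold):
    """Token-level precision, recall, and F1 over entity tokens
    (non-"O" tags), matching the word-level evaluation described above:
    a hit is a token whose predicted tag equals its gold entity tag."""
    hits = sum(p == g != "O" for p, g in zip(pred, gold))   # |A|
    n_pred = sum(t != "O" for t in pred)                    # |T_pre|
    n_gold = sum(t != "O" for t in gold)                    # |T_gt|
    p = hits / n_pred if n_pred else 0.0
    r = hits / n_gold if n_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

p, r, f1 = token_prf(["B-PER", "I-PER", "O", "B-ORG"],
                     ["B-PER", "O", "O", "B-ORG"])
# -> precision 2/3, recall 1.0, F1 0.8
```

This mirrors the $P$, $R$, and $F1$ definitions above, with the tag collections playing the roles of $T_{pre}$ and $T_{gt}$.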
\subsection{Main Results}
\begin{table*}
\caption{Test F1 score for different models on the datasets. \revise{In this table, ``$^*$'' indicates the results implemented by us, and ``-'' indicates that the performances of the models on the corresponding datasets are not yet obtained.} }\label{tab_main_results}
\centering
\resizebox{\textwidth}{22mm}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Model} & \textbf{CoNLL-03} & \textbf{OntoNote 5.0} & \textbf{WNUT-17}\\
\hline
Character-LSTM~\cite{lample2016neural} & 90.94 & \revise{84.86$^*$} & \revise{44.79$^*$} \\
\hline
BLSTM-CNN~\cite{chiu2016named} & 91.62 & 86.28 & \revise{45.14$^*$} \\
\hline
Stacked Multitask~\cite{aguilar2019named} & - & - & 45.55 \\
\hline
ELMo~\cite{peters2018deep} & 92.22 & - & - \\
\hline
CVT+Multitask~\cite{clark2018semi} & 92.6 &- & - \\
\hline
BERT~\cite{devlin2018bert} & 92.81 & \revise{88.28$^*$} & \revise{49.23$^*$} \\
\hline
Contextual String Embedding~\cite{akbik2018coling} & 92.86 & 88.75 & 49.49 \\
\hline
ASTRAL (ours) & \textbf{93.32} & \textbf{89.44} & \textbf{49.72} \\
\hline
\end{tabular}
}
\end{table*}
We perform experiments on three datasets, CoNLL-03, OntoNote 5.0, and WNUT-17, to measure the models' ability to identify named entities. The tested models include those that focus on model improvements, such as Character-LSTM~\cite{lample2016neural} and BLSTM-CNN~\cite{chiu2016named}, and those that focus on word embedding and representation, such as ELMo~\cite{peters2018deep} and BERT~\cite{devlin2018bert}.
The quantitative results of our model are shown in Table~\ref{tab_main_results}. Since CoNLL-03 is widely used by most of the models, the experimental results of former research are sufficient, making it the most convincing measure of system performance. \revise{In order to strengthen the integrity of the experiment, we implement several models, i.e., Character-LSTM, BLSTM-CNN, and BERT; these implemented results are marked with ``$^*$'' in Table~\ref{tab_main_results}. Although some other complex models still lack results, marked with ``-'', we believe that the current results are sufficient for experimental analysis.} Before methods with pre-trained language models such as ELMo~\cite{peters2018deep}, models could not achieve 92\% on CoNLL-03, while with language models like ELMo~\cite{peters2018deep}, BERT~\cite{devlin2018bert}, and other large-scale pre-training methods, performance has been significantly improved, up to 92.81\%. Our model follows the language model approach, focusing on improving the model structure and the training method, and achieves 93.32\% F1 on CoNLL-03. \revise{The improvement can also be found on both OntoNote 5.0 and WNUT-17 by improving the model structure or the word representation. Especially on the WNUT-17 dataset, the BERT model has a 3.68\% improvement over Stacked Multitask, which shows that pre-trained language models benefit more on datasets with complex and diverse language.} Our model also performs well on the more complex datasets OntoNote 5.0 and WNUT-17. The experimental results show that ASTRAL achieves state-of-the-art results on the NER task.
\subsection{Effect of Model Architecture}
\paragraph{Ablation Study}
In order to verify the validity \revise{of our modules}, we conducted an ablation study. As shown in Table~\ref{tab_ablation_study}, we conducted experiments under four conditions of ASTRAL on the three datasets. Here $Basic$ indicates the basic model with pre-trained word embedding and Bi-LSTM. $GC$ indicates that only the Gated-CNN is added to the basic model. $AT$ indicates that only the adversarial training method is added to the basic model. $ATGC$ indicates the complete ASTRAL model, including both Gated-CNN and adversarial training.
As can be seen from the results in Table~\ref{tab_ablation_study}, Gated-CNN and adversarial training both benefit the overall results, and their combination achieves the best experimental results. It increases the F1 score by 0.40\% on the CoNLL-03 dataset, 0.67\% on the OntoNote 5.0 dataset, and 0.57\% on the WNUT-17 dataset, respectively.
\begin{table}
\caption{Ablation study for our ASTRAL model. Here ``Basic'' denotes basic model, ``GC'' denotes Gated-CNN, ``AT'' denotes Adversarial Training, and ``ATGC'' denotes the combination of GC and AT.}\label{tab_ablation_study}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Model}} & \textbf{CoNLL-03} & \textbf{OntoNote 5.0} & \textbf{WNUT-17}\\
\hline
\multirow{4}*{ASTRAL} & Basic& 92.92 & 88.77 & 49.15 \\
\cline{2-5}
&GC & 93.04 & 89.02 & 49.38 \\
\cline{2-5}
&AT & 93.18 & 89.23 & 49.65 \\
\cline{2-5}
&ATGC & \textbf{93.32} & \textbf{89.44} & \textbf{49.72} \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\subfigure[CoNLL-03]{
\label{CoNLL-03-score}
\includegraphics[width=0.8\textwidth]{figure_new/CoNLL-03-score.pdf}}
\subfigure[WNUT-17]{
\label{WNUT-17-score}
\includegraphics[width=0.8\textwidth]{figure_new/WNUT-17-score.pdf}}
\caption{The impact of changes in the model structure on various entity types. The results on CoNLL-03 dataset and WNUT-17 dataset are shown in the figure. Here ``Basic'' denotes basic model, ``GC'' denotes Gated-CNN, ``AT'' denotes Adversarial Training, and ``ATGC'' denotes the combination of GC and AT.}
\label{accuracy_for_different_entity}
\end{figure}
Figure~\ref{accuracy_for_different_entity} shows the \revise{model performance on} different entity types in the two datasets CoNLL-03 and WNUT-17. When the model structure changes, the F1 values of individual entity types change differently. Gated-CNN leads to a significant improvement on ORG (organization) and PER (person) in CoNLL-03, \revise{as well as} creative-work and person in WNUT-17. One reasonable explanation is that the Gated-CNN emphasizes the attention of each word to its \revise{adjacent} words, and there are usually specific words (such as ``at'', ``to'', etc.) around these benefited named entities. However, it has little or even an adverse effect on certain entity types such as corporation and product in WNUT-17, which indicates that adjacent words might have a negative impact on recognizing some kinds of entities. If the relationship between \revise{adjacent} words and named entities is not obvious, then Gated-CNN will bring some noise to the system. Unlike Gated-CNN, adversarial training improves the performance on almost all kinds of entities, indicating its stability.
\paragraph{Model Generalization}
\begin{figure}
\centering
\includegraphics[width=9.5cm, height=6cm]{figure_new/generalization.pdf}
\caption{Dev F1 - Train F1 curves for different model conditions. Here ``Basic'' denotes basic model, ``GC'' denotes Gated-CNN, and ``ATGC'' denotes the combination of GC and AT.} \label{fig_generalization}
\end{figure}
Figure~\ref{fig_generalization} shows the Dev F1 - Train F1 curve under three conditions: basic model (the green curve), GC (the blue curve), and ATGC (the red curve). Dev F1 and Train F1 indicate the model performance on validating set and training set respectively in the training process.
The curves of \revise{$Basic$ (green) and $GC$ (blue)} in Figure~\ref{fig_generalization} are close, indicating that \revise{our basic model and the Gated-CNN model} have similar generalization ability. The red curve lies above the other curves, indicating that the Dev F1 value of ATGC is higher at the same Train F1. We can therefore conclude that the ATGC model has better generalization ability.
Additionally, observing the upper right corner of Figure~\ref{fig_generalization}, it \revise{is obvious} that Basic, GC, and ATGC reach successively higher positions, which shows that the training level of the model deepens across these three cases. The training level of GC is higher than that of Basic, indicating that the adjacent-word information extracted by the model is beneficial to training. ATGC's training level is \revise{the highest}, indicating that the \revise{adversarial} perturbation is useful for model training.
\begin{figure}
\centering
\subfigure[The curves of training loss.]{
\label{fig_train_loss}
\includegraphics[width=0.8\textwidth]{figure_new/training_loss.pdf}}
\subfigure[The curves of Dev F1.]{
\label{fig_dev_f1}
\includegraphics[width=0.8\textwidth]{figure_new/Dev_F1_adv.pdf}}
\caption{The indicators of the training process with and without adversarial training (AT).}
\label{fig_at}
\end{figure}
\subsection{Effect of Adversarial Training}
Now we explore the \revise{effect} of adversarial training by presenting the indicators of the training process with and without it. \revise{Figure~\ref{fig_at} shows} the change of training loss and Dev F1 as the training epoch increases. \revise{We record the first 50 epochs to observe the behavior during training.} The green curve represents the basic condition, and the carmine curve represents the AT condition.
Figure~\ref{fig_train_loss} shows that the training loss of the AT condition is lower and converges faster, \revise{especially in the first 30 epochs}. The final training loss values of the basic and AT conditions \revise{are both close to 0.06}, since both are overfitting by that time. From Figure~\ref{fig_dev_f1}, it can be seen that the Dev F1 of the AT condition increases faster and its final value is higher. This indicates that adversarial training has an inhibitory effect on overfitting.
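The perturbation itself can be sketched in a few lines (a simplified FGSM-style illustration; the function names and the value of $\epsilon$ are placeholders, and the exact scaling in ASTRAL follows the normalization introduced earlier): the gradient of the loss with respect to the word embedding is rescaled to a fixed $\ell_2$ norm and added to the input for a second, adversarial training pass.

```python
import math

def normalized_perturbation(grad, epsilon=0.05):
    """r = epsilon * g / ||g||_2: worst-case direction with bounded norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0.0:
        return [0.0] * len(grad)
    return [epsilon * g / norm for g in grad]

def adversarial_embedding(embedding, grad, epsilon=0.05):
    """Perturbed embedding fed to the extra pass whose loss is added
    to the clean loss."""
    r = normalized_perturbation(grad, epsilon)
    return [e + ri for e, ri in zip(embedding, r)]
```

Because the perturbation keeps a fixed norm regardless of the gradient magnitude, it supplies a non-vanishing adversarial signal even late in training, consistent with the suppressed overfitting seen in Figure~\ref{fig_dev_f1}.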
\subsection{Case Study}
\begin{table*}
\caption{Case study for the three datasets. Here ``\textit{LACK}'', ``\textit{WRONG}'', and ``\textit{CORRECT}'' indicate the meaning of absence, misclassification, and entirely correct respectively.}\label{tab_case_study}
\centering
\large
\resizebox{\textwidth}{60mm}{
\begin{tabular}{|c|p{4.2cm}|p{3.2cm}|p{3.2cm}|p{3.2cm}|p{3.2cm}|}
\hline
\multirow{2}*{\textbf{Dataset}}&\centering \multirow{2}*{\textbf{Sentence}} & \multicolumn{4}{c|}{\textbf{Named Entity}}\\
\cline{3-6}
& &\centering \textbf{GroundTruth}& \centering \textbf{Basic} &\centering \textbf{GC} &\centering \textbf{ATGC} \tabularnewline
\hline
\multirow{8}*{\textbf{CoNLL-03}} & Hosts UAE play Kuwait and South Korea take on Indonesia on Saturday in Group A matches. &LOC: UAE, Kuwait, South Korea, Indonesia& \textit{LACK} - LOC: Indonesia& \textit{CORRECT} & \textit{CORRECT} \\
\cline{2-6}
&Top-seeded Eyles now meets titleholder Peter Nicol of Scotland who overcame Simon Parke of England. &PER: Eyles, Peter Nicol, Simon Parke; LOC: Scotland, England &\textit{LACK} - PER: Peter Nicol & \textit{WRONG} - LOC: Peter Nicol &\textit{CORRECT} \\
\hline
\multirow{6}*{\textbf{OntoNote 5.0}} &The same toy is sold for less than 40 US dollars at Wal-Mart. &MONEY: 40 US dollars; ORG: Wal-Mart & \textit{LACK} - MONEY: 40 US dollars& \textit{LACK} - MONEY: 40 US dollars& \textit{CORRECT}\\
\cline{2-6}
&Last week's real Jackson story ran in the New York Daily News. &DATA: Last week; PERSON: Jackson; ORG: the New York Daily News & \textit{WRONG} - GPE: New York & \textit{CORRECT}& \textit{CORRECT}\\
\hline
\multirow{10}*{\textbf{WNUT-17}} &I will nominate Virgin Active at Moore Park / Zetland for you. &corporation: Virgin Active; location: Moore Park, Zetland &\textit{CORRECT} &\textit{CORRECT} & \textit{CORRECT} \\
\cline{2-6}
&Why were Olive and Emma's powers changed in Miss Peregrint's Home for Peculiar Children? & person: Olive, Emma; creative-work: Miss Peregrint's Home, Peculiar Children;& \textit{WRONG} - person: Peregrint; \textit{LACK} - creative-work: Miss Peregrint's Home for Peculiar Children; &\textit{WRONG} - person: Peregrint; \textit{LACK} - creative-work: Miss Peregrint's Home for Peculiar Children; & \textit{LACK} - creative-work: Miss Peregrint's Home for Peculiar Children;\\
\hline
\end{tabular}
}
\end{table*}
\begin{table}
\caption{\revise{An example of predicted probability distribution under different model conditions. The example is the first case of CoNLL-03 in Table~\ref{tab_case_study}, and these probability values are from the output of CRF Module. In this table, the darker background color of tokens means the higher probability of being predicted as LOC.}}\label{tab_GC_study}
\centering
\footnotesize
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Model}} & \textbf{Predicted probability of LOC}\\
\hline
\multirow{4}*{ASTRAL} & Basic& \includegraphics[width=0.72\textwidth]{figure_new/ASTRAL_GC_distribution-1.pdf}\\
\cline{2-3}
&GC & \includegraphics[width=0.72\textwidth]{figure_new/ASTRAL_GC_distribution-2.pdf}\\
\cline{2-3}
&ATGC & \includegraphics[width=0.72\textwidth]{figure_new/ASTRAL_GC_distribution-3.pdf}\\
\hline
\end{tabular}
\end{table}
We show several cases in Table~\ref{tab_case_study}. Two sentences from each dataset are selected to analyze the characteristics of the datasets and the changes in model results under different conditions. We choose sentences with concentrated named entities to compare the model performance across conditions. The Ground Truth column shows the standard answers. In the following three columns, Basic, GC, and ATGC, we list the differences between the corresponding model and the ground truth. \revise{Here} ``LACK'', ``WRONG'', and ``CORRECT'' \revise{respectively} indicate absence, misclassification, and an entirely correct prediction. We keep the given label form for each dataset, so different datasets have different kinds of labels. For example, the geographical entity labeled ``LOC'' in CoNLL-03 is similar to ``location'' in WNUT-17, as well as ``GPE'' in OntoNote 5.0.
Table~\ref{tab_case_study} indicates that the results of Basic, GC, and ATGC get progressively better on these samples, which is consistent with the previous statistical results. From some examples, we notice that GC benefits from the adjacent words. In the first sentence of CoNLL-03, thanks to the help of ``on'', GC resolves the LACK of ``Indonesia''. In the second sentence of OntoNote 5.0, GC correctly identifies ``New York Daily News'' as an organization instead of recognizing ``New York'' itself as a location. \revise{We further analyze the first case in Table~\ref{tab_case_study} to show the actual impact of the model condition in terms of word choice. As shown in Table~\ref{tab_GC_study}, the darker a word's background in this table, the more likely the model recognizes it as LOC. Compared with Basic's result, GC's attention to ``Indonesia'' has increased significantly, but words such as ``Saturday'' and ``Group'' also cause more interference at the same time, and ATGC effectively suppresses this interference. In order to further explore the advantages of GC, we observe 50 cases per dataset in which location entities whose adjacent words contain prepositions like ``on'' or ``at'' are misclassified by the basic model. We find that the percentages of these location entities correctly identified under GC are 64\%, 56\%, and 68\% for CoNLL-03, OntoNote 5.0, and WNUT-17 respectively. This shows that GC can effectively reduce errors in these cases.} For ATGC, the named entities in the samples are almost all extracted correctly. Benefiting from the adversarial training, ATGC can correctly recognize a rare name ``Peter Nicol'' as ``person'' instead of ``location''. Overall, the model has strong extraction capabilities for simple locations and organizational structures. However, specific names that require background knowledge, such as ``Miss Peregrint's Home for Peculiar Children'', remain hard to extract.
\section{Conclusion and Future Work}
In this paper, a NER system named ASTRAL is proposed, in which both the model structure and the training process are augmented. We incorporate a Gated-CNN module into the network, helping the model extract spatial information between adjacent words. In the training process, normalized adversarial training is introduced to enhance the model's robustness and generalization ability. We performed experiments on three benchmarks and showed that our system yields a significant improvement over previous work and achieves state-of-the-art performance.
Our ASTRAL system performs notably well at recognizing named entities in practical text, such as news, books, and comments. Thus the system can meet the requirements of users and downstream systems that need these named entities for further \revise{processing}. Compared to recent research on general language models such as ELMo~\cite{peters2018deep} and BERT~\cite{devlin2018bert}, our experiments show that stronger task-related modules can also have excellent effects. Meanwhile, the Gated-CNN and normalized adversarial training presented in this paper could be introduced into other natural language processing systems.
In the future, we will mainly focus on the following two aspects. Firstly, the effect of different task-related modules combined with different language models is worth studying; based on the characteristics of each language model, we will design matching task-related modules. Secondly, we will study data augmentation methods, such as distant supervision, to address the problem of insufficient training data, which is considered a direct means of alleviating the overfitting problem.
\section*{References}
\section{Motivation}
It is of general interest to compute quantum corrections to classical
field configurations like soliton solutions that are frequently interpreted
as particles. On top of the wish list we find the energies that predict
particle masses. The quantum correction to the energy can be quite
significant because the classical field acts as a background that
strongly polarizes the spectrum of the quantum fluctuations about it.
For that reason the quantum correction to the classical energy is called
vacuum polarization energy (VPE). Here we will consider the leading,
{\it i.e.} one loop, contribution.
Field theories that have classical soliton solutions in various topological
sectors deserve particular interest. Solitons from different sectors have unequal
winding numbers and the fluctuation spectrum changes significantly from one
sector to the other. For example, the number of (normalizable) zero modes
is determined by the symmetries that are spontaneously broken by the
soliton. Of course, the pattern of
spontaneous symmetry breaking is subject to the topological structure. On the
other hand, the winding number is typically identified with the particle
number. The prime example is the Skyrme model\cite{Skyrme:1961vq,Skyrme:1988xj}
wherein the winding number determines the baryon
number\cite{Witten:1979kh,Adkins:1983ya}. Many properties of baryons have
been studied in this soliton model and its generalization in the
past\cite{Weigel:2008zz}. More recently configurations with very large
winding numbers have been investigated\cite{Feist:2012ps} and these solutions
were identified with nuclei. To obtain a sensible understanding of the predicted
nuclear binding energies it is, of course, important to consider the VPE, in
particular when it is expected to strongly depend on the particle number. So
far this has not been attempted for the simple reason that the model is not
renormalizable. A rough estimate\cite{Scholtz:1993jg}\footnote{See
Ref.\cite{Meier:1996ng} for a general discussion of the Skyrmion's quantum
corrections and further references on the topic.} in the context of the
H--dibaryon\cite{Jaffe:1976yi,Balachandran:1983dj} suggests that the VPE strongly
reduces the binding energy of multi--baryon states.
As already mentioned, one issue for the calculation of the VPE is renormalization.
Another important one is, as will be discussed below, that the VPE is (numerically)
extracted from the scattering data for the quantum fluctuations about the classical
configuration\cite{Graham:2009zz}. Though this so--called {\it spectral method}
allows for a direct implementation of standard renormalization conditions it has
limitations as it requires sufficient symmetry for a partial wave decomposition.
This may not be possible for configurations with an intricate topological
structure associated with large winding numbers.
The $\phi^6$ model in $D=1+1$ dimensions has soliton solutions with different
topological structures\cite{Lohe:1979mh,Lohe:1980js} and the fluctuations do not
decouple into parity channels. The approach employed here is also based on scattering
data but advances the spectral method such that no parity decomposition is required.
We will also see that it is significantly more effective than previous
computations\cite{AlonsoIzquierdo:2002eb,AlonsoIzquierdo:2011dy,AlonsoIzquierdo:2012tw}
for the VPE of solitons in $D=1+1$ dimensions that are based on heat kernel expansions
combined with $\zeta$--function regularization
techniques\cite{Elizalde:1996zk,Elizalde:1994gf,Kirsten:2000ad}.
Although the $\phi^6$ model is not fully renormalizable, at one loop order the
ultra--violet divergences can be removed unambiguously. However, another very interesting
phenomenon emerges. The distinct topological structures induce non--equivalent vacua
that manifest themselves via different dispersion relations for the quantum fluctuations
at positive and negative spatial infinity. At some intermediate position the soliton
mediates between these vacua. Since this position cannot be uniquely determined the
resulting VPE exhibits a translational variance. This is surprising since, after all,
the model is defined through a local and translational invariant Lagrangian. In this paper
we will describe the emergence of this variance and link it to the different level
densities that arise from the dispersion relations. To open these results for
discussion\footnote{The present paper reflects the author's invited presentation at the
$5^{\rm th}$ {\it Winter Workshop on Non-Perturbative Quantum Field Theory} based on
the methods derived in Ref.\cite{Weigel:2016zbs} making some overlap unavoidable.} it
is necessary to review in detail the methods developed in Ref.\cite{Weigel:2016zbs} to
compute the VPE for backgrounds in one space dimension that are not (manifestly) invariant
under spatial reflection.
Following this introductory motivation we will describe the $\phi^6$ model and its kink
solutions. In chapter III we will review the spectral method that ultimately
leads to a variant of the Krein--Friedel--Lloyd formula\cite{Faulkner:1977aa} for the VPE.
The novel approach to obtain the relevant scattering data will be discussed in chapter IV
and combined with the one--loop renormalization in chapter V. A comparison with known
(exact) results will be given in chapter VI while chapter VII contains the predicted VPE
for the solitons of the $\phi^6$ model. Translational variance of the VPE that emerges
from the existence of non--equivalent vacua will be analyzed in chapter VIII. We conclude
with a short summary in chapter~IX.
\section{Kinks in $\mathbf{\phi^6}$ Models}
In $D=1+1$ dimensions the dynamics for the quantum field $\phi$ are governed solely
by a field potential $U(\phi)$ that is added to the kinetic term
\begin{equation}
\mathcal{L}=\frac{1}{2}\partial_\mu \phi\partial^\mu \phi-U(\phi)\,.
\label{eq:lag1}
\end{equation}
For the $\phi^6$ model we scale all coordinates, fields and coupling constants
such that the potential contains only a single dimensionless parameter $a$
\begin{equation}
U(\phi)=\frac{1}{2}\left(\phi^2+a^2\right)\left(\phi^2-1\right)^2\,.
\label{eq:pot1}
\end{equation}
\begin{figure}[t]
\centerline{\epsfig{file=p6a.eps,width=4.5cm,height=3cm}\hspace{1cm}
\epsfig{file=p6b.eps,width=4.5cm,height=3cm}\hspace{1cm}
\epsfig{file=p6c.eps,width=4.5cm,height=3cm}}
\caption{\label{fig:phi6pot}The field potential, eq.~(\ref{eq:pot1}) in the
$\phi^6$ model for various values of the real parameter $a=1,\fract{1}{2},0$ from
left to right.}
\end{figure}
\noindent
From figure~\ref{fig:phi6pot} we observe that there are three general cases. For
$a^2>\fract{1}{2}$ two degenerate minima at $\phi=\pm1$ exist. For $0<a^2\le\fract{1}{2}$
an additional local minimum emerges at $\phi=0$. Finally, for $a=0$ the three minima
at $\phi=0$ and $\phi=\pm1$ are degenerate. Soliton solutions connect different vacua
between negative and positive spatial infinity. For $a\ne0$ the vacua are at
$\phi=\pm1$ and the corresponding soliton solution is\cite{Lohe:1979mh}
\begin{equation}
\phi_K(x)=a\frac{X-1}
{\sqrt{4X+a^2\left(1+X\right)^2}}
\qquad \mbox{with} \qquad X={\rm e}^{2\sqrt{1+a^2}\,x}\,.
\label{eq:phik6a}
\end{equation}
Its classical energy is
$E_{\rm cl}(a)=\frac{2-a^2}{4}\sqrt{1+a^2}+\frac{4a^2+a^4}{8}\,
{\rm ln}\frac{\sqrt{1+a^2}+1}{\sqrt{1+a^2}-1}$. The case $a=0$ is
actually more interesting because two distinct soliton solutions do exist.
The first one connects $\phi=0$ at $x\to-\infty$ to $\phi=1$ at $x\to\infty$,
\begin{equation}
\phi_{K_1}(x)=\frac{1}{\sqrt{1+{\rm e}^{-2x}}}\,,
\label{eq:phi61}
\end{equation}
while the second one interpolates between $\phi=-1$ and $\phi=0$,
\begin{equation}
\phi_{K_2}(x)=-\frac{1}{\sqrt{1+{\rm e}^{2x}}}\,.
\label{eq:phi62}
\end{equation}
These soliton configurations are shown in figure \ref{fig:phi6sol}.
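Both classical energies quoted here can be checked numerically. The sketch below (the quadrature step sizes and cutoffs are arbitrary numerical choices) integrates the classical energy density $\frac{1}{2}\phi'^2+U(\phi)$ over the kink profiles; it reproduces $E_{\rm cl}=\fract{1}{4}$ for $\phi_{K_1}$ and the closed form $E_{\rm cl}(a)$ for, e.g., $a=1$.

```python
import math

def U(phi, a=0.0):
    """Scaled field potential U = (phi^2 + a^2)(phi^2 - 1)^2 / 2."""
    return 0.5 * (phi * phi + a * a) * (phi * phi - 1.0) ** 2

def phi_k1(x):
    """Kink connecting phi = 0 to phi = 1 for a = 0."""
    return 1.0 / math.sqrt(1.0 + math.exp(-2.0 * x))

def phi_k(x, a):
    """Kink connecting phi = -1 to phi = +1 for a != 0."""
    X = math.exp(2.0 * math.sqrt(1.0 + a * a) * x)
    return a * (X - 1.0) / math.sqrt(4.0 * X + a * a * (1.0 + X) ** 2)

def classical_energy(phi, a=0.0, xmax=15.0, n=30001):
    """Trapezoidal rule for E_cl = int dx [ phi'^2 / 2 + U(phi) ]."""
    h = 2.0 * xmax / (n - 1)
    total = 0.0
    for i in range(n):
        x = -xmax + i * h
        dphi = (phi(x + 1e-5) - phi(x - 1e-5)) / 2e-5  # central difference
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * (0.5 * dphi * dphi + U(phi(x), a))
    return total * h
```

Up to quadrature errors, `classical_energy(phi_k1)` gives $0.25$ and `classical_energy(lambda x: phi_k(x, 1.0), a=1.0)` gives $E_{\rm cl}(1)\approx1.4553$.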
\begin{figure}[t]
\centerline{
\epsfig{file=p6k1.eps,width=5cm,height=3.0cm}\hspace{2cm}
\epsfig{file=p6k2.eps,width=5cm,height=3.0cm}}
\caption{\label{fig:phi6sol}The two soliton solutions for $a=0$: Left panel:
eq~(\ref{eq:phi61}); right panel eq~(\ref{eq:phi62}).}
\end{figure}
In either case the classical mass is
$E_{\rm cl}=\fract{1}{4}=\fract{1}{2}\lim_{a\to0}E_{\rm cl}(a)$. This
relation for the classical energies reflects the fact that as $a\to0$
the solution $\phi_K(x)$ disintegrates into two widely separated structures,
one corresponding to $\phi_{K_1}(x)$ and the other to $\phi_{K_2}(x)$.
The computation of the VPE requires the construction of scattering solutions for
fluctuations about the soliton. In the harmonic approximation the fluctuations
experience the potential
\begin{equation}
V(x)=\frac{\partial^2 U(\phi)}{\partial\phi^2}
\Big|_{\phi=\phi_{\rm sol}(x)}
\label{eq:fltpot1}
\end{equation}
generated by the soliton ($\phi_{\rm sol}=\phi_K$, $\phi_{K_1}$ or $\phi_{K_2}$).
\begin{figure}[t]
\centerline{
\epsfig{file=phi6a.eps,width=6cm,height=3cm}\hspace{2cm}
\epsfig{file=phi60.eps,width=6cm,height=3cm}}
\caption{\label{fig:fltpot}Scattering potentials for the
quantum fluctuations in the $\phi^6$ model. Left panel: typical
example for $a\ne0$; right panel: the case $a=0$ with the two potentials
generated by $\phi_{K_1}$, full line and $\phi_{K_2}$, dashed line.}
\end{figure}
These three potentials are shown in figure \ref{fig:fltpot}. For
$a\ne0$ the potential is invariant under $x\leftrightarrow-x$. But
the particular case $a\equiv0$ is not reflection symmetric, though
$x\leftrightarrow-x$ swaps the potentials generated by $\phi_{K_1}$
and $\phi_{K_2}$. The loss of this invariance disables the separation of
the fluctuation modes into symmetric and anti--symmetric channels, which
is the one dimensional version of partial wave decomposition. Even more
strikingly, the different topological structures in the $a=0$ case
cause $\lim_{x\to-\infty}V(x)\ne\lim_{x\to\infty}V(x)$, which implies
different masses (dispersion relations) for the fluctuations at positive
and negative spatial infinity.
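The unequal thresholds can be made explicit for $\phi_{K_1}$: evaluating the curvature of the field potential, $U''(\phi)=15\phi^4-12\phi^2+1$ for $a=0$, on the kink (a quick numerical sketch; overall normalization conventions for $V$ are set aside here), one finds $m_L^2=1$ at $x\to-\infty$ and $m_R^2=4$ at $x\to+\infty$, i.e. the threshold on the right sits at twice the meson mass of the left vacuum.

```python
import math

def phi_k1(x):
    """Kink for a = 0 connecting the vacua phi = 0 and phi = 1."""
    return 1.0 / math.sqrt(1.0 + math.exp(-2.0 * x))

def curvature(x):
    """U''(phi) on the kink; for a = 0, U = phi^2 (phi^2 - 1)^2 / 2
    gives U'' = 15 phi^4 - 12 phi^2 + 1."""
    p2 = phi_k1(x) ** 2
    return 15.0 * p2 * p2 - 12.0 * p2 + 1.0
```

Far to the left `curvature(-25.0)` approaches $1$ while far to the right `curvature(25.0)` approaches $4$; these are the two dispersion thresholds behind the unequal masses discussed above.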
\section{Spectral Methods and Vacuum Polarization Energy}
The formula for the VPE, Eq.~(\ref{eq:master}) below, can be derived from first
principles in quantum field theory by integrating the vacuum matrix element of the
energy density operator\cite{Graham:2002xq}. It is, however, also illuminating to
count the energy levels when summing the changes of the zero point energies. This
sum is $\mathcal{O}(\hbar)$ and thus one loop order ($\hbar=1$ for the units used
here). We call the single particle energies of fluctuations in the soliton type
background $\omega_n$ while the $\omega_n^{(0)}$ are those for the trivial
background. Then the VPE formally reads
\begin{equation}
E_{\rm vac}=\frac{1}{2}\sum_n\left(\omega_n-\omega_n^{(0)}\right)
\Bigg|_{\rm ren.}
=\frac{1}{2}\sum_j \epsilon_j +
\frac{1}{2} \int_0^\infty dk\, \omega_k\,\Delta\,\rho_{\rm ren.}(k)\,,
\label{eq:sum0}
\end{equation}
where the subscript indicates that renormalization is required to obtain a finite
and meaningful result. On the right hand side we have separated the explicit bound
state (sum of energies $\epsilon_j$) and continuum (integral over momentum $k$)
contributions. The latter involves $\Delta\,\rho_{\rm ren.}(k)$ which is the
(renormalized) change of the level density induced by the soliton background. Let
$L$ be a large distance away from the localized soliton background. For $x\sim L$
the stationary wave--function of the quantum fluctuation is a phase shifted plane
wave $\psi(x)\sim{\rm sin}\left[kx+\delta(k)\right]$, where $\delta(k)$ is the phase
shift (of a particular partial wave) that is obtained from scattering off the potential,
Eq.~(\ref{eq:fltpot1}). The continuum levels are counted from the boundary condition
$\psi(L)=0$ and subsequently taking the limit $L\to\infty$. The number $n(k)$ of levels
with momentum less or equal to $k$ is then extracted from $kL+\delta(k)=n(k)\pi$.
The corresponding number in the absence of the soliton is $n^{(0)}(k)=kL/\pi$, trivially.
From these the change of the level density is computed via
\begin{equation}
\Delta\,\rho(k)=\lim_{L\to\infty}\frac{d}{dk}\left[n(k)-n^{(0)}(k)\right]
=\frac{1}{\pi}\frac{d\delta(k)}{dk}\,,
\label{eq:KFL}
\end{equation}
which is often referred to as the Krein--Friedel--Lloyd formula\cite{Faulkner:1977aa}.
Note that $\Delta\,\rho(k)$ is a finite quantity; but ultra--violet divergences appear
in the momentum integral in Eq.~(\ref{eq:sum0}) and originate from the large $k$ behavior
of the phase shift. This behavior is governed by the Born series
\begin{equation}
\delta(k)=\delta^{(1)}(k)+\delta^{(2)}(k)+\ldots
\label{eq:born}
\end{equation}
where the superscript reflects the power at which the potential, Eq.~(\ref{eq:fltpot1})
contributes. Though this series does not converge\footnote{For example, in three space
dimensions the series yields $\delta(0)\to0$ which contradicts Levinson's theorem.} for all
$k$, it describes the large $k$ behavior well since
$\delta^{(N+1)}(k)/\delta^{(N)}(k)\propto 1/k^2$ when $k\to\infty$. Hence replacing
\begin{equation}
\Delta\,\rho(k)\to\left[\Delta\,\rho(k)\right]_N=
\frac{1}{\pi}\frac{d}{dk}\left[\delta(k)-\delta^{(1)}(k)-\delta^{(2)}(k)-\ldots-
\delta^{(N)}(k)\right]
\label{eq:born1}
\end{equation}
produces a finite integral in Eq.~(\ref{eq:sum0}) when $N$ is taken sufficiently large.
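The quality of this large-$k$ dominance is easily exhibited with an exactly solvable example (not part of the $\phi^6$ model): for the reflectionless potential $V(x)=-2\,{\rm sech}^2x$ the full phase is $\delta(k)=2\arctan(1/k)$, while the first Born approximation is $\delta^{(1)}(k)=-\frac{1}{2k}\int dx\,V(x)=2/k$; their difference falls off like $2/(3k^3)$, in line with $\delta^{(N+1)}(k)/\delta^{(N)}(k)\propto 1/k^2$. (The quadrature details below are arbitrary numerical choices.)

```python
import math

def born1(k, xmax=20.0, n=4001):
    """First Born phase, delta1(k) = -(1/2k) * int dx V(x),
    for V(x) = -2 sech^2(x), by the trapezoidal rule."""
    h = 2.0 * xmax / (n - 1)
    integral = 0.0
    for i in range(n):
        x = -xmax + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        integral += w * (-2.0 / math.cosh(x) ** 2)
    return -integral * h / (2.0 * k)

def exact_phase(k):
    """Exact full-line phase shift of V = -2 sech^2 x."""
    return 2.0 * math.atan(1.0 / k)
```

At $k=10$ the Born term $2/k=0.2$ already agrees with the exact phase to about $7\times10^{-4}$, and the residual shrinks like $1/k^3$.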
We have to add back the subtractions that come with this replacement. Here the spectral
methods take advantage of the fact that each term in the subtraction is uniquely related
to a power of the background potential and that Feynman diagrams represent an alternative
expansion scheme for the vacuum polarization energy
\begin{equation}
\mbox{\parbox[l]{2.2cm}{\vskip-1.9cm $E_{\rm FD}^{N}[V]=$}}
\epsfig{file=fdseries.eps,height=1.9cm,width=8cm}\,.
\label{eq:FDs}
\end{equation}
The full lines are the free propagators of the quantum fluctuations and the dashed lines
denote insertions of the background potential, Eq.~(\ref{eq:fltpot1}), eventually after
Fourier transformation. These Feynman diagrams are regularized with standard techniques,
most commonly in dimensional regularization. They can thus be straightforwardly combined
with the counterterm contribution, $E_{\rm CT}[V]$ with coefficients fully
determined in the perturbative sector of the theory. This combination remains finite when
the regulator is removed.
The generalization to multiple channels is straightforward by finding an eventually
momentum dependent diagonalization of the scattering matrix $S(k)$ and summing the
so--obtained eigenphase shifts. This replaces\footnote{The proper Riemann sheet of the
the logarithm is identified by constructing a smooth function that vanishes as $k\to\infty$.}
$\delta(k)\,\longrightarrow\,(1/2{\rm i}){\rm ln}{\rm det}\,S(k)$ and analogously for
the Born expansions, Eqs.~(\ref{eq:born}) and~(\ref{eq:born1}). Since after
Born subtraction the integral converges, we integrate by parts to avoid numerical
differentiation and to stress that the VPE is measured with respect to the translationally
invariant vacuum. We then find the renormalized VPE to be, with the sum over partial
waves re--inserted,
\begin{equation}
E_{\rm vac}[V]=\sum_\ell D_\ell\left\{\frac{1}{2}\sum_j\left(\epsilon_{\ell j}-m\right)
- \int_0^\infty \frac{dk}{4\pi{\rm i}} \frac{k}{\sqrt{k^2+m^2}}\,
\left[{\rm ln}\,{\rm det}\,S(k)\right]_{N}\right\}+E_{\rm FD}^{N}[V]+E_{\rm CT}[V]\,.
\label{eq:master}
\end{equation}
Here $D_\ell$ is the degree of degeneracy, {\it e.g.} $D_\ell=2\ell+1$ in three space
dimensions. The subscript $N$ refers to the subtraction of $N$ terms of the Born
expansion, as {\it e.g.} in Eq.~(\ref{eq:born1}). We stress that, with $N$ taken sufficiently
large, both the expression in curly brackets and the sum
$E_{\rm FD}^{N}[V]+E_{\rm CT}[V]$ are individually ultra--violet finite and no
cut--off parameter is needed\cite{Farhi:1998vx}.
\section{Scattering Data in One Space Dimension}
In this section we obtain the scattering matrix for general one dimensional problems
and develop an efficient method for its numerical evaluation. This will be at the center
of the novel approach to compute the VPE.
We first review the standard approach that is applicable when $V(-x)=V(x)$, {\it e.g.}
left panel of figure \ref{fig:fltpot}. Then the partial wave decomposition separates
symmetric $\psi_S(-x)=\psi_S(x)$ and anti--symmetric, $\psi_A(-x)=-\psi_A(x)$ channels.
The respective phase shifts can be straightforwardly obtained in a variant of the variable
phase approach\cite{Calegero:1967} by parameterizing
$\psi(x)={\rm e}^{{\rm i}[kx+\beta(k,x)]}$ and imposing the obvious boundary conditions
$\psi^\prime_S(0)=0$ and $\psi_A(0)=0$. (The prime denotes the derivative with respect to
$x$.) The wave--equation turns into a non--linear differential equation for the phase function
$\beta(k,x)$. When solved subject to
$\lim_{x\to\infty}\beta(k,x)=0$ and $\lim_{x\to\infty}\beta^\prime(k,x)=0$
the scattering matrix given by\cite{Graham:2009zz}
\begin{equation}
\frac{1}{2{\rm i}}\,{\rm ln}\,{\rm det}\, S(k)=
-2{\sf Re}[\beta(k,0)]
-{\rm arctan}\frac{{\sf Im}[\beta^\prime(k,0)]}{k+{\sf Re}[\beta^\prime(k,0)]}\,.
\label{eq:sym1}
\end{equation}
Linearizing and iterating the differential equation for $\beta(k,x)$ yields the Born
series, Eq.~(\ref{eq:born}). At this point it is advantageous to use the fact that
scattering data can be continued to the upper half complex momentum plane\cite{Newton:1982qc}.
That is, when writing $k={\rm i} t$, the Jost function, whose phase is the scattering phase shift
when $k$ is real, is analytic for ${\sf Re}[t]\ge0$. Furthermore the Jost function has simple
zeros at imaginary $k={\rm i}\kappa_j$ representing the bound states. Formulating the momentum
integral from Eq.~(\ref{eq:master}) as a contour integral automatically collects the bound state
contribution and we obtain a formula as simple as\cite{Graham:2002xq,Graham:2009zz}
\begin{equation}
E^{\rm (S)}_{\rm vac}=\int_{m}^\infty \frac{dt}{2\pi}\,
\frac{t}{\sqrt{t^2-m^2}}\,\left[
{\rm ln}\left\{g(t,0)\left(g(t,0)-\frac{1}{t}g^\prime(t,0)\right)\right\}
\right]_N +E_{\rm FD}^{N}[V]+E_{\rm CT}[V]
\label{eq:EvacJost}
\end{equation}
for the VPE. Here $g(t,x)$ is the non--trivial factor of the Jost solution whose $x\to0$
properties determine the Jost function. The factor function solves the differential equation
\begin{equation}
g^{\prime\prime}(t,x)=2tg^\prime(t,x)+V(x)g(t,x)\,,
\label{eq:DEQJost}
\end{equation}
with the boundary conditions $g(t,\infty)=1$ and $g^\prime(t,\infty)=0$; iterating
$g(t,x)=1+g^{(1)}(t,x)+g^{(2)}(t,x)+\ldots$ produces the Born series.
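Before turning to the general case, Eq.~(\ref{eq:DEQJost}) can be cross-checked against the reflectionless potential $V(x)=-2\,{\rm sech}^2x$, for which the factor function is known in closed form, $g(t,x)=(t+\tanh x)/(t+1)$; the symmetric-channel combination $g(t,0)-g^\prime(t,0)/t=(t-1)/t$ then vanishes at the bound state $t=1$. The sketch below (step sizes and the starting point $x=20$ are arbitrary numerical choices) integrates the equation downward from large $x$, where the boundary conditions hold.

```python
import math

def V(x):
    """Reflectionless Poeschl-Teller well V(x) = -2 sech^2(x)."""
    s = 1.0 / math.cosh(x)
    return -2.0 * s * s

def jost_factor(t, x_start=20.0, steps=20000):
    """RK4 for g'' = 2 t g' + V g, integrated from x_start down to 0
    with g(x_start) = 1 and g'(x_start) = 0; returns (g(t,0), g'(t,0))."""
    h = -x_start / steps
    g, p, x = 1.0, 0.0, x_start

    def rhs(x, g, p):
        return p, 2.0 * t * p + V(x) * g

    for _ in range(steps):
        k1g, k1p = rhs(x, g, p)
        k2g, k2p = rhs(x + h / 2, g + h / 2 * k1g, p + h / 2 * k1p)
        k3g, k3p = rhs(x + h / 2, g + h / 2 * k2g, p + h / 2 * k2p)
        k4g, k4p = rhs(x + h, g + h * k3g, p + h * k3p)
        g += h / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
    return g, p
```

The downward integration is numerically stable because the growing mode ${\rm e}^{2tx}$ decays in the direction of integration; the same observation underlies the boundary conditions imposed on $g(t,x)$.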
In general, however, the potential $V(x)$ is not reflection invariant and no
partial wave decomposition is applicable. Even more, there may exist different
masses for the quantum fluctuations
\begin{equation}
m_L^2=\lim_{x\to-\infty}V(x)\qquad {\rm and}\qquad
m_R^2=\lim_{x\to\infty}V(x)
\label{eq:mass}
\end{equation}
as is the case for the $\phi^6$ model with $a=0$, {\it cf.} right panel of
figure \ref{fig:fltpot}. We adopt the convention that $m_L\le m_R$,
otherwise we simply swap $x\to-x$. Three different cases must be considered.
First, above threshold both momenta $k$ and $q=\sqrt{k^2+m_L^2-m_R^2}$ are real. To
formulate the variable phase approach we introduce the matching point $x_m$ and
parameterize
\begin{align}
x\le x_m:&\quad \psi(x)=A(x){\rm e}^{{\rm i}kx}\quad
& A^{\prime\prime}(x)=-2{\rm i}kA^\prime(x)+V_p(x)A(x)\,\,\,\cr
x\ge x_m:&\quad \psi(x)=B(x){\rm e}^{{\rm i}qx}\quad
& B^{\prime\prime}(x)=-2{\rm i}qB^\prime(x)+V_p(x)B(x)\,.
\label{eq:match1}
\end{align}
Observe that the {\it pseudo potential}
\begin{equation}
V_p(x)=V(x)-m_L^2+(m_L^2-m_R^2)\Theta(x-x_m)
\label{eq:pseudoV}
\end{equation}
vanishes at positive and negative spatial infinity. The differential
equations~(\ref{eq:match1}) are solved for the boundary conditions
$A(-\infty)=B(\infty)=1$ and $A^\prime(-\infty)=B^\prime(\infty)=0$.
There are two linearly independent solutions $\psi_1$ and $\psi_2$ that define
the scattering matrix $S=(s_{ik})$ via the asymptotic behaviors
\begin{equation}
\psi_1(x)\sim \begin{cases}
{\rm e}^{{\rm i}kx}+s_{12}(k){\rm e}^{-{\rm i}kx}\quad &{\rm as}\quad x\to-\infty\cr
s_{11}(k){\rm e}^{{\rm i}qx}\quad &{\rm as}\quad x\to\infty
\end{cases}
\hspace{0.7cm}{\rm and}\hspace{0.7cm}
\psi_2(x)\sim \begin{cases}
s_{22}(k){\rm e}^{-{\rm i}kx}\quad &{\rm as}\quad x\to-\infty\cr
{\rm e}^{-{\rm i}qx}+s_{21}(k){\rm e}^{{\rm i}qx}\quad &{\rm as}\quad x\to\infty\,.
\end{cases}
\hspace{0.7cm}
\label{eq:defS}
\end{equation}
By equating the solutions and their derivatives at $x_m$ the scattering matrix is
obtained from the factor functions as
\begin{align}
S(k)=&\begin{pmatrix}
{\rm e}^{-{\rm i} qx_m} & 0 \cr
0 & {\rm e}^{{\rm i} kx_m}
\end{pmatrix}
\begin{pmatrix}
B & -A^\ast \cr
iqB+B^\prime & ikA^\ast-A^{\prime\ast}
\end{pmatrix}^{-1}\cr&\hspace{1cm}\times
\begin{pmatrix}
A & -B^\ast \cr
ikA+A^\prime & iqB^\ast-B^{\prime\ast}
\end{pmatrix}
\begin{pmatrix}
{\rm e}^{{\rm i} kx_m} & 0 \cr
0 & {\rm e}^{-{\rm i} qx_m}
\end{pmatrix}
\hspace{2cm} {\rm for}\quad k\ge\sqrt{m_R^2-m_L^2}\,,
\label{eq:Smat1}
\end{align}
where $A=A(x_m)$, etc. The second case refers to $k\le\sqrt{m_R^2-m_L^2}$ still being
real but $q={\rm i}\kappa$ becoming imaginary with $\kappa=\sqrt{m_R^2-m_L^2-k^2}$. The parameterization
of the wave function for $x>x_m$ changes to $\psi(x)=B(x){\rm e}^{-\kappa x}$ yielding the
differential equation $B^{\prime\prime}(x)=\kappa B^\prime(x)+V_p(x)B(x)$. The scattering matrix
then is a single unitary number
\begin{equation}
S(k)=-\,\frac{A\left(B^\prime/B-\kappa-ik\right)-A^\prime}
{A^\ast\left(B^\prime/B-\kappa+ik\right)-A^{\prime\ast}}\,
{\rm e}^{2{\rm i} kx_m}
\hspace{2cm} {\rm for}\quad 0\le k\le\sqrt{m_R^2-m_L^2}\,.
\label{eq:Smat2}
\end{equation}
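To illustrate, the matching construction of Eqs.~(\ref{eq:match1}) and~(\ref{eq:Smat1}) can
be sketched numerically. The Python fragment below (an illustration under the stated
conventions, not the author's code) treats the equal mass case $m_L=m_R$, {\it i.e.} $q=k$,
for an asymmetric pseudo potential and checks that $|{\rm det}\,S|=1$ and that the phase
$\frac{1}{2{\rm i}}\,{\rm ln}\,{\rm det}\,S$ does not depend on the matching point $x_m$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Vp(x):                             # asymmetric pseudo potential, mL = mR
    return 1.5 * x * np.exp(-x*x)

def factor(k, x0, x1):
    """Integrate F'' = -2ik F' + Vp F from x0 (F = 1, F' = 0) to x1."""
    def rhs(x, y):
        return [y[1], -2j*k*y[1] + Vp(x)*y[0]]
    y = solve_ivp(rhs, [x0, x1], [1.0+0j, 0.0+0j],
                  rtol=1e-10, atol=1e-12).y[:, -1]
    return y[0], y[1]

def smatrix(k, xm, L=8.0):
    q = k                                        # equal masses: q = k
    A, dA = factor(k, -L, xm)                    # A(x) for x <= xm
    B, dB = factor(q,  L, xm)                    # B(x) for x >= xm
    M1 = np.array([[B, -A.conjugate()],
                   [1j*q*B + dB, 1j*k*A.conjugate() - dA.conjugate()]])
    M2 = np.array([[A, -B.conjugate()],
                   [1j*k*A + dA, 1j*q*B.conjugate() - dB.conjugate()]])
    P1 = np.diag([np.exp(-1j*q*xm), np.exp(1j*k*xm)])
    P2 = np.diag([np.exp(1j*k*xm), np.exp(-1j*q*xm)])
    return P1 @ np.linalg.inv(M1) @ M2 @ P2

phases = [0.5*np.angle(np.linalg.det(smatrix(1.3, xm)))
          for xm in (-0.4, 0.0, 0.9)]
print(phases, abs(np.linalg.det(smatrix(1.3, 0.0))))  # equal phases, |det S| = 1
```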
It is worth noting that $V_p\equiv0$ corresponds to the step function potential. In that case
the above formalism obviously yields $A\equiv B\equiv1$ and reproduces the textbook result
\begin{equation}
\delta_{\rm step}(k)=
\begin{cases}
(k-q)x_m\,,\quad & {\rm for}\quad k\ge\sqrt{m_R^2-m_L^2}\cr
kx_m-{\arctan}\left(\frac{\sqrt{m_R^2-m_L^2-k^2}}{k}\right)\,,
\quad & {\rm for}\quad k\le\sqrt{m_R^2-m_L^2}\,.
\end{cases}
\label{eq:step1}
\end{equation}
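Numerically, Eq.~(\ref{eq:step1}) also provides a convenient consistency check. A short
sketch (illustrative only) verifies continuity at the threshold cusp
$k=\sqrt{m_R^2-m_L^2}$ and the large $k$ behavior
$\delta_{\rm step}(k)\to x_m\left(m_R^2-m_L^2\right)/(2k)$ that follows from expanding the
first branch.

```python
import numpy as np

def delta_step(k, mL, mR, xm):
    """Phase shift of the step potential, both branches of Eq. (step1)."""
    kth2 = mR**2 - mL**2
    if k*k >= kth2:
        return (k - np.sqrt(k*k - kth2)) * xm
    return k*xm - np.arctan(np.sqrt(kth2 - k*k) / k)

mL, mR, xm = 1.0, 2.0, 0.7
kth = np.sqrt(mR**2 - mL**2)
# continuity at the threshold cusp
print(delta_step(kth - 1e-9, mL, mR, xm), delta_step(kth + 1e-9, mL, mR, xm))
# large-k behavior against xm (mR^2 - mL^2)/(2k)
print(delta_step(100.0, mL, mR, xm), xm*(mR**2 - mL**2)/(2*100.0))
```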
In the third regime also $k$ becomes imaginary and we need to identify the bound
states energies $\epsilon\le m_L$ that enter Eq.~(\ref{eq:master}). We define real
variables $\lambda=\sqrt{m_L^2-\epsilon^2}$ and $\kappa(\lambda)
=\sqrt{m_R^2-m_L^2+\lambda^2}$ and solve the wave equation subject to the initial conditions
\begin{equation}
\psi_L(x_{\rm min})=1\,,\qquad
\psi^\prime_L(x_{\rm min})=\lambda
\qquad {\rm and}\qquad
\psi_R(x_{\rm max})=1\,,\qquad
\psi^\prime_R(x_{\rm max})=-\kappa(\lambda)\,,
\label{eq:bound1}
\end{equation}
where $x_{\rm min}$ and $x_{\rm max}$ represent negative and positive spatial infinity,
respectively. Continuity of the wave function requires the Wronskian determinant
\begin{equation}
\psi_L(x_m)\psi^\prime_R(x_m)-\psi_R(x_m)\psi^\prime_L(x_m)\stackrel{!}{=}0\,,
\label{eq:bound2}
\end{equation}
to vanish. This occurs only for discrete values $\lambda_j$ that in turn determine
the bound state energies\footnote{The bosonic dispersion relation does not exclude
imaginary energies that would hamper the definition of the quantum theory. This case
does not occur here.} $\epsilon_j=\sqrt{m_L^2-\lambda_j^2}$.
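The shooting procedure of Eqs.~(\ref{eq:bound1}) and~(\ref{eq:bound2}) is easily sketched
(hypothetical Python, for illustration). Here it is applied to the kink potential of the
next section, $V(x)-m^2=-6/{\rm cosh}^2x$ with $m_L=m_R=2$, whose excited bound state has
$\epsilon_1=\sqrt{3}$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

mL = mR = 2.0
def V(x):                                  # kink fluctuation potential
    return mL**2 - 6.0/np.cosh(x)**2

def wronskian(lam, xmin=-8.0, xmax=8.0, xm=0.0):
    """Shoot with Eq. (bound1); Eq. (bound2) vanishes at bound states."""
    eps2 = mL**2 - lam**2                  # trial bound state energy squared
    kap = np.sqrt(mR**2 - mL**2 + lam**2)
    def rhs(x, y):
        return [y[1], (V(x) - eps2)*y[0]]
    pL = solve_ivp(rhs, [xmin, xm], [1.0, lam],
                   rtol=1e-10, atol=1e-12).y[:, -1]
    pR = solve_ivp(rhs, [xmax, xm], [1.0, -kap],
                   rtol=1e-10, atol=1e-12).y[:, -1]
    return pL[0]*pR[1] - pR[0]*pL[1]

lam = brentq(wronskian, 0.5, 1.5)          # bracket the excited state
print(np.sqrt(mL**2 - lam**2))             # bound state energy, about sqrt(3)
```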
\section{One Loop Renormalization in One Space Dimension}
To complete the computation of the VPE we need to substantiate the renormalization
procedure. We commence by identifying the ultra--violet singularities. This is simple
in $D=1+1$ dimensions at one loop order as only the first diagram on the right hand side
of Eq.~(\ref{eq:FDs}) is divergent. Furthermore, this diagram is local in the sense
that $E_{\rm FD}^{(1)}\propto \frac{1}{\epsilon}\int dx\,\left[V(x)-m_L^2\right]$,
where $\epsilon$ is the regulator ({\it e.g.} from dimensional regularization). Hence
a counterterm can be constructed that not only removes the singularity but the
diagram in total. This is the so--called {\it no tadpole} condition and implies
\begin{equation}
E_{\rm FD}^{(1)}+E_{\rm CT}^{(1)}=0\,.
\label{eq:notad}
\end{equation}
In the next step we must identify the corresponding Born term in Eq.~(\ref{eq:born}).
To this end it is important to note that the counterterm is a functional of the
full field $\phi(x)$ that induces the background potential, Eq.~(\ref{eq:fltpot1}).
Hence we must find the Born approximation for $V(x)-m_L^2$ rather than the one for the
pseudo--potential $V_P(x)$, Eq.~(\ref{eq:pseudoV}). The standard formulation of the Born
approximation as an integral over the potential is, unfortunately, not applicable to
$V(x)-m_L^2$ since it does not vanish at positive spatial infinity. However, we note that
$V(x)-m_L^2=V_P(x)+(m_R^2-m_L^2)\Theta(x-x_m)=V_p(x)+V_{\rm step}(x)$ and that, by
definition, the first order correction is linear in the background, and thus additive.
We may therefore write
\begin{equation}
\delta^{(1)}(k)=\delta^{(1)}_P(k)+\delta^{(1)}_{\rm step}(k)
=\frac{-1}{2k}\int_{-\infty}^\infty dx\,
V_p(x)\Big|_{x_m}+\frac{x_m}{2k}\left(m_R^2-m_L^2\right)
=\frac{-1}{2k}\int_{-\infty}^\infty dx\, V_p(x)\Big|_{0}\,.
\label{eq:born2}
\end{equation}
The Born approximation for the step function potential has been obtained from
the large $k$ expansion of $\delta_{\rm step}(k)$ in Eq.~(\ref{eq:step1}). The
subscripts in Eq.~(\ref{eq:born2}) recall that the definition of the
pseudo--potential, Eq.~(\ref{eq:pseudoV}) induces an implicit dependence on the
(artificial) matching point $x_m$. Notably, this dependence disappears from the
final result! This is the first step towards establishing the matching point
independence of the VPE.
The integrals in $E_{\rm FD}^{(1)}$ and $E_{\rm CT}^{(1)}$ require further
regularization when $m_L\ne m_R$. In that case no further {\it finite
renormalization} beyond the no tadpole condition is realizable.
\section{Comparison with Known Results}
Before presenting detailed numerical results for VPEs, we note that all
simulations were verified to produce $S^\dagger S=\mbox{{\sf 1}\zr{-0.16}\rule{0.04em}{1.55ex}\zr{0.1}}$ after attaching pertinent
flux factors to the scattering matrix, Eq.~(\ref{eq:defS}). These flux factors
are not relevant for the VPE as they multiply to unity under the determinant
in Eq.~(\ref{eq:master}). In addition the numerically obtained phase shifts,
{\it i.e.} $(1/2{\rm i}){\rm ln}{\rm det}\,S$, have been monitored to not vary with
$x_m$. Since this is also the case for the bound state energies, the VPE is verified to
be independent of the unrestricted choice for the matching point.
The VPE calculation based on Eq.~(\ref{eq:master}) has been applied to the
$\phi^4$ kink and sine--Gordon soliton models that are defined via the potentials
\begin{equation}
U_K(\phi)=\fract{1}{2}\left(\phi^2-1\right)^2
\qquad {\rm and}\qquad
U_{\rm SG}(\phi)=4\left(1-{\rm cos}\,\phi\right)\,,
\label{eq:known1}
\end{equation}
respectively. The soliton solutions $\phi_K={\rm tanh}(x-x_0)$ and
$\phi_{\rm SG}(x)=4{\rm arctan}\left({\rm e}^{-2(x-x_0)}\right)$ induce the
scattering potentials
\begin{equation}
V_K(x)-m^2=6\left[{\rm tanh}^2(x-x_0)-1\right]
\qquad {\rm and}\qquad
V_{\rm SG}(x)-m^2=8\left[{\rm tanh}^2[2(x-x_0)]-1\right]\,.
\label{eq:known2}
\end{equation}
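These relations are quickly verified symbolically; for the kink, an illustrative sympy
check confirms that $\phi_K$ solves the static field equation and that
$V_K=U_K^{\prime\prime}(\phi_K)$:

```python
import sympy as sp

x, f = sp.symbols('x f', real=True)
U = sp.Rational(1, 2)*(f**2 - 1)**2         # U_K from Eq. (known1)
phi = sp.tanh(x)                            # kink soliton with x0 = 0

# static field equation: phi'' = U'(phi)
assert sp.simplify(sp.diff(phi, x, 2) - sp.diff(U, f).subs(f, phi)) == 0
# fluctuation potential: V_K - m^2 = U''(phi) - 4 = 6 (tanh^2 - 1)
V = sp.diff(U, f, 2).subs(f, phi)
assert sp.simplify(V - 4 - 6*(sp.tanh(x)**2 - 1)) == 0
print("kink relations verified")
```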
In both cases we have identical dispersion relations at positive and negative spatial
infinity: $m=m_L=m_R=2$ for the dimensionless units introduced above. The simulation
based on Eq.~(\ref{eq:master}) reproduces the established results
$E_{\rm vac}^{(K)}=\fract{\sqrt{3}}{6}-\fract{3}{\pi}$ and
$E_{\rm vac}^{({\rm SG})}=-\fract{2}{\pi}$\cite{Ra82}. These solitons break translational
invariance spontaneously and thus produce zero mode bound states in the fluctuation spectrum.
In addition the $\phi^4$ kink possesses a bound state with energy $\sqrt{3}$\cite{Ra82}.
All bound states are easily observed using Eq.~(\ref{eq:bound2}). The potentials in
Eq.~(\ref{eq:known2}) are reflection symmetric about the soliton center $x_0$ and the method
of Eq.~(\ref{eq:EvacJost}) can be straightforwardly applied\cite{Graham:2009zz}. However, this
method singles out $x_0$ (typically set to $x_0=0$) to determine the boundary condition in the
differential equation and therefore cannot be used to establish translational invariance of
the VPE. On the contrary, the boundary conditions for Eq.~(\ref{eq:match1}) are not at all
sensitive to $x_0$ and we have applied the present method to compute the VPE for various
choices of $x_0$, all yielding the same numerical result.
The next step is to compute the VPE for asymmetric background potentials that have $m=m_L=m_R$.
Lacking a soliton model that produces such a potential, we merely consider a two
parameter set of functions
\begin{equation}
V_p(x)\,\longrightarrow\,V_{R,\sigma}(x)=Ax{\rm e}^{-x^2/\sigma^2}
\label{eq:asym1}
\end{equation}
for the pseudo potential in Eq.~(\ref{eq:match1}). Although Eq.~(\ref{eq:EvacJost}) is not
directly applicable, it is possible to relate $V_{R,\sigma}(x)$ to the symmetric potential
\begin{equation}
V_R(x)=A\left[(x+R){\rm e}^{-\frac{(x+R)^2}{\sigma^2}}
-(x-R){\rm e}^{-\frac{(x-R)^2}{\sigma^2}}\right]=V_R(-x)
\label{eq:asym2}
\end{equation}
and apply Eq.~(\ref{eq:EvacJost}). In the limit $R\to\infty$ interference effects between the
two structures around $x=\pm R$ disappear resulting in twice the VPE of Eq.~(\ref{eq:asym1}).
The numerical comparison is listed in table \ref{tab:asym}.
\begin{table}[t]
\centerline{
\begin{tabular}{c|cccccc|c}
$R$ & 1.0 & 1.5 & 2.0 & 2.5 & 3.0 & 3.5 & present \cr
\hline
$A=2.5\,,\,\sigma=1.0$ &
-0.0369 & -0.0324 & -0.0298 & -0.0294 & -0.0293 & -0.0292 & -0.0293 \cr
\hline\hline
$R$ & 4.0 & 5.0 & 6.0 & 7.0 & 8.0 & 9.0 & present \cr
\hline
$A=0.2\,,\,\sigma=4.0$ &
-0.0208 & -0.0188 & -0.0170 & -0.0161 & -0.0158 & -0.0157 & -0.0157
\end{tabular}}
\caption{\label{tab:asym}The $R$ dependent data are half the VPE for the
symmetrized potential, Eq.~(\ref{eq:asym2}) computed from Eq.~(\ref{eq:EvacJost}).
The data in the column {\it present} list the results obtained from
Eq.~(\ref{eq:master}) for the original potential, Eq.~(\ref{eq:asym1}).}
\end{table}
Indeed the two approaches produce identical results as $R\to\infty$.
The symmetrized version converges only slowly for wide potentials
(large $\sigma$), causing obstacles for the numerical simulation that
do not occur at all in the present approach.
\section{Vacuum Polarization Energies in the $\mathbf{\phi^6}$ Model}
We first discuss the VPE for the $a\ne0$ case. A typical background potential
is shown in the left panel of figure \ref{fig:phi6pot}. Obviously it is reflection
invariant and thus the method based on Eq.~(\ref{eq:EvacJost}) is applicable. In
table \ref{tab:phi6a} we also compare our results to those from the heat kernel
expansion of Ref.\cite{AlonsoIzquierdo:2011dy} since, to our knowledge, it is
the only approach that has also been applied to the asymmetric $a=0$ case in
Ref.\cite{AlonsoIzquierdo:2002eb}.
\begin{table}[t]
\centerline{
\begin{tabular}{c|ccccccc}
$a$ & 0.001 & 0.01 & 0.05 & 0.1 & 0.2 & 1.0 & 1.5 \cr
\hline
heat kernel, Ref.\cite{AlonsoIzquierdo:2011dy}~~
& -1.953 & -1.666 & -1.447 & -1.349
& -1.239 & -1.101 & -1.293\cr
parity sep., Eq.~(\ref{eq:EvacJost})~~
& -2.145 & -1.840 & -1.595 & -1.461 &
-1.298 & -1.100 & -1.295 \cr
present, Eq.(\ref{eq:master})
& -2.146 & -1.841 & -1.596 & -1.462
& -1.297 & -1.102 & -1.297
\end{tabular}}
\caption{\label{tab:phi6a}Different methods to compute the VPE of
the $\phi^6$ soliton for $a\ne0$.}
\end{table}
Not surprisingly, the two methods based on scattering data agree within
numerical precision for all values of $a$. The heat kernel results also
agree for moderate and large~$a$; but for small values deviations of the order
of 10\% are observed. The heat kernel method relies on truncating the
expansion of the exact heat kernel about the heat kernel in the absence
of a soliton. Although in Ref.\cite{AlonsoIzquierdo:2011dy} the expansion has
been carried out to eleventh(!) order, resulting in a very cumbersome
calculation, this does not seem to provide sufficient accuracy for small $a$.
We are now in the position to discuss the VPE for $a=0$ associated with the
soliton $\phi_{K_1}(x)$ from Eq.~(\ref{eq:phi61}). The potentials for the
fluctuations and the resulting scattering data are shown in
figure \ref{fig:phi6}. By construction, the pseudo potential jumps at $x_m=0$.
However, neither the phase shift nor the bound state energy (the zero mode
is the sole bound state) depends on $x_m$.
\begin{figure}[t]
\centerline{
\epsfig{file=pot.eps,width=6cm,height=2.5cm}\hspace{2cm}
\epsfig{file=delta.eps,width=6cm,height=2.5cm}}
\caption{\label{fig:phi6}Left panel: potential ($V$) and pseudo potential ($V_p$)
for fluctuations about a $\phi^6$ soliton with $a=0$. The pseudo potential
is shown for $x_m=0$. Right panel: resulting phase shift, {\it i.e.}
$(1/2{\rm i}) {\rm ln}{\rm det}\, S$, full line and its Born approximation,
dashed line.}
\end{figure}
As expected, the phase shift has a threshold cusp at
$\sqrt{m_R^2-m_L^2}=\sqrt{3}$ and approaches $\frac{\pi}{2}$ at zero momentum.
This is consistent with Levinson's theorem in one space dimension\cite{Barton:1984py}
and the fact that there is only a single bound state. In total we find a significant
cancellation between bound state and continuum contributions
\begin{equation}
E_{\rm vac}=-0.5+0.4531=-0.0469\,.
\label{eq:main}
\end{equation}
The result\footnote{The factor $\sqrt{2}$ is added to adjust the datum
from Ref.\cite{AlonsoIzquierdo:2002eb} to the present scale.}
$-0.1264\sqrt{2}=-0.1788$ of Ref.\cite{AlonsoIzquierdo:2002eb}
was estimated relative to $V_\alpha(x)=\frac{3}{2}\left[1+{\rm tanh}(\alpha x)\right]$
for $\alpha=1$. Our results for various values of $\alpha$ are listed in
table \ref{tab:tanh}. These results are consistent with $V_\alpha(x)$ turning into
a step function for large $\alpha$. For the particular value $\alpha=1$ our relative
VPE thus is $\Delta E_{\rm vac}=-0.0469-0.1660=-0.2129$. In
view of the results shown in table \ref{tab:phi6a}, especially for small $a$,
these data match within the validity of the approximations applied in the
heat kernel calculation.
\begin{table}[t]
\centerline{
\begin{tabular}{c|ccccc|c}
$\alpha$ & 1.0 & 2.0 & 5.0 & 10.0 & 30.0 & step\cr
\hline
$E_{\rm vac}$& 0.1660 & 0.1478 & 0.1385 & 0.1363 & 0.1355 & 0.1355
\end{tabular}}
\caption{\label{tab:tanh}VPE for background potential $V_\alpha(x)$ defined
in the main text. The entry {\it step} gives the VPE for the step function
potential $V(x)=3\Theta(x)$ using Eq.~(\ref{eq:step1}) and its Born approximation
from Eq.~(\ref{eq:born2}) for $x_m=0$.}
\end{table}
\section{Translational Variance}
So far we have computed the VPE for the $\phi^6$ model soliton centered at
$x_0=0$. We have already mentioned that there is translational invariance
for the VPE of the kink and sine--Gordon solitons. It is also numerically
verified for the asymmetric background, Eq.~(\ref{eq:asym1}). In those cases
the two vacua at $x\to\pm\infty$ are equivalent and $q=k$ in Eq.~(\ref{eq:defS}).
When shifting $x\to x+x_0$, the transmission coefficients ($s_{11}$ and $s_{22}$)
remain unchanged relative to the amplitude of the in--coming wave while the
reflection coefficients ($s_{12}$ and $s_{21}$) acquire opposite phases. Consequently,
${\rm det}\,S$ is invariant. For unequal momenta this invariance is lost and the
VPE depends on $x_0$. This is reflected by the results in
table \ref{tab:shift} in which we present the VPE for
$V_\alpha(x)=\frac{3}{2}\left[1+{\rm tanh}(\alpha (x+x_0))\right]$ and
the $\phi^6$ model soliton $1/\sqrt{1+{\rm e}^{-2(x+x_0)}}$.
\begin{table}[b]
\centerline{
\begin{tabular}{c|ccccc}
&\multicolumn{5}{c}{$E_{\rm vac}$}\cr
\hline
$x_0$& -2 & -1 & 0 & 1 & 2\cr
\hline
$\alpha=5$ & 0.341 & 0.240 & 0.139 & 0.037 & -0.064\cr
$\alpha=2$ & 0.351 & 0.250 & 0.148 & 0.046 & -0.057\cr
$\alpha=1$ & 0.369 & 0.267 & 0.166 & 0.064 & -0.038\cr
$\phi^6$ & 0.154 & 0.053 & -0.047 & -0.148 & -0.249\cr
$\Delta E_{\rm vac}$ & -0.215 & -0.214 & -0.213 & -0.212 & -0.211
\end{tabular}}
\caption{\label{tab:shift}The VPE as function of the position of the center
of the potential for $V_\alpha$ and the $\phi^6$ model soliton.
$\Delta E_{\rm vac}$ is the difference between the VPEs of
the latter and $V_1$.}
\end{table}
Obviously there is a linear dependence of the VPE on $x_0$ with the slope
insensitive to the specific structure of the potential. This insensitivity is
consistent with the above remark on the difference between the two momenta.
Increasing $x_0$ shifts the vacuum with the bigger mass towards negative
infinity thereby removing states from the spectrum and hence decreasing
the VPE.
The effect is immediately linked to varying the width of a symmetric barrier
potential with height $m_R^2-m_L^2=3$:
\begin{equation}
V^{(x_0)}_{\rm SB}(x)=3\Theta\left(\frac{x_0}{2}-|x|\right)\,.
\label{eq:symbarr}
\end{equation}
For this potential the Jost solution of Eq.~(\ref{eq:DEQJost}) can be obtained
analytically\cite{Weigel:2016zbs} and the VPE has the limit
\begin{equation}
\lim_{x_0\to\infty}\frac{E_{\rm vac}[V^{(x_0)}_{\rm SB}]}{x_0}\approx-0.102\,,
\label{eq:barrlim}
\end{equation}
which again reveals the background independent slope observed above.
Having quantitatively determined the translation variance of the VPE, it is
tempting to subtract $E_{\rm vac}\left[V^{(x_0)}_{\rm SB}\right]$. Unfortunately
this is not unique because $x_0$ is not the unambiguous center of the soliton.
For example, employing the classical energy density $\epsilon(x)$ to define
the position of the soliton $1/\sqrt{1+{\rm e}^{-2(x-\overline{x})}}$, that is
formally centered at $\overline{x}$, as an expectation value leads to
\begin{equation}
x_s=\frac{\int dx\, x\, \epsilon(x)}{\int dx\, \epsilon(x)}=\overline{x}+\fract{1}{2}\,.
\end{equation}
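This can be confirmed numerically with a short sketch (illustrative only): by the BPS
property the classical energy density is $\epsilon(x)=\phi^{\prime\,2}(x)$ with
$\phi^\prime=\phi(1-\phi^2)$, and the expectation value comes out shifted by $\fract{1}{2}$
in magnitude, towards the vacuum with the slower decaying tail of $\epsilon(x)$.

```python
import numpy as np
from scipy.integrate import quad

def eps(x):              # epsilon = phi'^2 with phi^2 = 1/(1+exp(-2x)), xbar = 0
    u = 1.0/(1.0 + np.exp(-2.0*x))
    return u*(1.0 - u)**2

norm = quad(eps, -30, 30)[0]                 # classical energy, equals 1/4
mom = quad(lambda x: x*eps(x), -30, 30)[0]
print(mom/norm)                              # offset of magnitude 1/2
```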
This changes the VPE by approximately $0.050$. This ambiguity also hampers the
evaluation of the VPE as half that of a widely separated kink--antikink pair
\begin{equation}
\phi_{K\overline{K}}(x)=\left[1+{\rm e}^{2(x-\overline{x})}\right]^{-1/2}
+\left[1+{\rm e}^{-2(x+\overline{x})}\right]^{-1/2}-1
\label{eq:pair}
\end{equation}
similarly to the approach for Eq.~(\ref{eq:asym2}). The corresponding
background potential $V_B$ is shown in figure \ref{fig:plotbg}.
\begin{figure}[t]
\centerline{\epsfig{file=plotbg.eps,width=8cm,height=3cm}}
\caption{\label{fig:plotbg}Background potential for the kink--antikink pair,
Eq.~(\ref{eq:pair}) for different separations.}
\end{figure}
For computing the VPE, the large contribution from the constant but non--zero
potential in the regime $|x|\lesssim \overline{x}$ should be eliminated. The above
considerations lead to
\begin{align}
&\fract{1}{2}\lim_{\bar{x}\to\infty}\left\{E_{\rm vac}[V_B]
-2E_{\rm vac}[V^{(2\overline{x})}_{\rm SB}]\right\}=-0.170
\quad{\rm and}\quad
\fract{1}{2}\lim_{\bar{x}\to\infty}\left\{E_{\rm vac}[V_B]
-2E_{\rm vac}[V^{(2x_s)}_{\rm SB}]\right\}=-0.120\,.
\end{align}
When the VPE from $V^{(2(\overline{x}+1.2))}_{\rm SB}$ is subtracted, the main
result, Eq.~(\ref{eq:main}), is matched. Eventually this can be used to define
the center of the soliton.
Now we also understand why the VPE for $a\ne0$ diverges as $a\to0$, {\it cf.}
table \ref{tab:phi6a}. In that limit kink and antikink structures separate and
the ``vacuum'' in between produces an ever increasing contribution (in magnitude).
Finally, we discuss the link between the translational variance and the
Krein--Friedel--Lloyd formula, Eq.~(\ref{eq:KFL}). We have already reported
the VPE for the step function potential when $x_m=0$. We can also consider
$x_m\to\infty$:
\begin{align}
\frac{E_{\rm vac}[V^{(x_m)}_{\rm step}]}{|x_m|}\,\to\,&-{\rm sign}(x_m)\,
\left[\int_0^{\sqrt{3}}\frac{dk}{4\pi}\,\frac{2k^2-3}{\sqrt{k^2+1}}
+\int_{\sqrt{3}}^\infty\frac{dk}{4\pi}\,
\frac{2k^2-2k\sqrt{k^2-3}-3}{\sqrt{k^2+1}}\right]
\approx0.101\,{\rm sign}(x_m)\,,
\label{eq:xminf}
\end{align}
reproducing the linear dependence on the position from above. Formally, {\it i.e.}
without Born subtraction, the integral, Eq.~(\ref{eq:xminf}) is dominated by
\begin{align}
\int \frac{dk}{2\pi}\,\frac{k}{\sqrt{k^2+1}}\left[k-\sqrt{k^2-3}\right]
\sim \int \frac{dk}{2\pi}\, \sqrt{k^2+1} \frac{d}{dk}\left[\sqrt{k^2-3}-k\right]
=\int \frac{dk}{2\pi}\, \sqrt{k^2+1}\, \frac{d}{dk}\left[q-k\right]\,.
\end{align}
Essentially this is that part of the level density that originates from
the different dispersion relations at positive and negative spatial infinity.
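The momentum integral in Eq.~(\ref{eq:xminf}) is elementary to evaluate, {\it e.g.} with
the following illustrative sketch, which reproduces the slope of magnitude $\approx0.101$:

```python
import numpy as np
from scipy.integrate import quad

def below(k):                                # 0 <= k <= sqrt(3)
    return (2*k*k - 3)/np.sqrt(k*k + 1)

def above(k):                                # k >= sqrt(3)
    return (2*k*k - 2*k*np.sqrt(max(k*k - 3, 0.0)) - 3)/np.sqrt(k*k + 1)

kth = np.sqrt(3.0)
bracket = (quad(below, 0, kth)[0] + quad(above, kth, np.inf)[0])/(4*np.pi)
print(bracket)       # about -0.101, so E_vac/|x_m| -> 0.101 sign(x_m)
```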
\section{Conclusion}
We have advanced the spectral methods for computing vacuum polarization
energies (VPE) to also apply for static localized background configurations in
one space dimension that do not permit a parity decomposition for the quantum
fluctuations. The essential progress is the generalization of the variable
phase approach to such configurations. Being developed from spectral methods,
it inherits their amenities, such as an effective procedure to implement
standard renormalization conditions. A glimpse at the bulky formulas for the
heat kernel expansion (alternative method to the problem) in
Refs.\cite{AlonsoIzquierdo:2002eb,AlonsoIzquierdo:2011dy,AlonsoIzquierdo:2012tw}
immediately reveals the simplicity and effectiveness of the present approach. The
latter merely requires one to numerically integrate ordinary differential equations and
to extract the scattering matrix from the solutions, {\it cf.} Eqs.~(\ref{eq:match1})
and~(\ref{eq:Smat1}). Heat kernel methods are typically combined with $\zeta$--function
regularization. Then the connection to standard renormalization conditions is
not as transparent as for the spectral methods, though that is problematic only
when non--local Feynman diagrams require renormalization, {\it i.e.} in larger
than $D=1+1$ dimensions or when fermion loops are involved.
We have verified the novel method by means of well established results, {\it e.g.}
the $\phi^4$ kink and sine--Gordon solitons. For these models the approach directly
ascertains translational invariance of the VPE. Yet, the main focus was on the VPE for
solitons in $\phi^6$ models because these solitons may connect inequivalent vacua
leading to background potentials that are not invariant under spatial reflection. This
model is not strictly renormalizable. Nevertheless at one loop order a well defined result
can be obtained from the no--tadpole renormalization condition albeit no further finite
renormalization is realizable because the different vacua yield additional infinities
when integrating the counterterm. The different vacua also lead to different dispersion
relations for the quantum fluctuations and thereby induce translational variance for a
theory that is formulated by an invariant action. We argue that this variance is universal,
as it is not linked to the particular structure of the background and can be related to
the change in the level density that is basic to the Krein--Friedel--Lloyd formula,
Eq.~(\ref{eq:KFL}).
Besides attempting a deeper understanding of the variance by tracing it from the
energy momentum tensor, future studies will apply the novel method to solitons of
the $\phi^8$ model. Its elaborated structure not only induces potentials that are
reflection asymmetric, but also leads to a set of topological indexes\cite{Gani:2015cda}
that are related to different particle numbers. The novel method will then
advance the understanding of quantum corrections to binding energies of compound
objects in the soliton picture. Furthermore the present results can be joined with
the interface formalism\cite{Graham:2001dy}, that augments additional coordinates
along which the background is homogeneous, to explore the energy (densities) of
domain wall configurations\cite{Parnachev:2000fz}.
\section*{Acknowledgments}
This work was presented at the $5^{\rm th}$ {\it Winter Workshop on
Non--Perturbative Quantum Field Theory}, Sophia-Antipolis (France), March 2017.
The author is grateful to the organizers for providing this worthwhile workshop.
The author declares that there is no conflict of interest regarding the
publication of this paper. This work is supported in parts by the NRF
under grant~109497.
\section{Introduction}
In the Bohr Copenhagen interpretation of quantum measurements, data are
taken from a classical apparatus reading which is influenced by a quantum
object during the time interval in which they interact. Most of the
theoretical work in analyzing quantum measurements requires computations
for the quantum object. However, Bohr's dictate is that {\it one must not
ask what the quantum object is really doing}! All that can be said is
that the classical apparatus determines which of the complementary aspects
of the quantum object will be made manifest in the experimental data.
As applied to the two slit particle diffraction experiment \cite{TO}, what
this means, dear reader, is that you will never know how a single particle
managed to have non-local awareness of two slits. Furthermore, you are not
even allowed to ask the question because it cannot be answered
experimentally without destroying the quantum interference diffraction
pattern.
But the question was asked repeatedly in different forms by Einstein,
who insisted that the picture is incomplete until such time that we can
assign some objective reality as to what the quantum object is doing. The
question does not merely concern the fact that nature plays a game of
chance and forces us to use probabilities. In his work on Brownian motion,
Einstein derived a relation between the Brownian particle diffusion
coefficient $D$ and the mechanical fluid induced friction
coefficient $R$,
\begin{equation}
D={k_BT\over R}, \label{(1)}
\end{equation}
which allowed experimentalists to verify both the existence of atoms
(which many physicists had previously doubted) and the correctness of
the statistical thermodynamics of Boltzmann and Gibbs. We may safely
assume that Einstein knew that nature played games of chance. But a
Brownian particle in a fluid does something real. It jumps randomly
back and forth and if it is large enough you are allowed to observe
this motion in some detail.
What picture can we paint for the motion of a quantum Brownian particle?
We will ignore Bohr's injunction about never being able to know and try
to form a picture. The formalism for dealing with quantum Brownian motion
was developed in complete generality by Schwinger and presented to the
physics community with a pleasant sense of humor \cite{SW}.
None of Schwinger's
many lengthy equations are numbered, and the central result concerning
general quantum Brownian motion in the presence of non-linear
forces was quoted without any derivation at all. Nevertheless, Schwinger's
formalism is mathematically complete and the results will be used by us
for simple quantum Brownian motion consistent with the Einstein
Eq.(\ref{(1)}).
The physical picture of quantum Brownian motion has two parts:
(i) The
starting point is that a classical object can be viewed as having (say)
a coordinate which depends on time $x(t)$. A quantum object may be viewed
as splitting the single coordinate $x(t)$ into two coordinates $x_+(t)$
(going forward in time) and $x_-(t)$ (going backward in time) \cite{SW}.
The classical
limit is obtained when both motions coincide $x(t)=x_+(t)=x_-(t)$. To see
why this is the case, one may employ the Schwinger quantum operator action
principle, or more simply recall the mean value of a quantum operator
\begin{equation}
\bar{A}(t)=(\psi (t)|A|\psi (t))=
\int \! \int \psi^* (x_-,t)\, (x_-|A|x_+)\,\psi (x_+,t)\;dx_+ dx_-.
\label{(2)}
\end{equation}
Thus one requires two copies of the Schr\"odinger equation to follow the
density matrix
\begin{equation}
(x_+|\rho (t)|x_-)=\psi^* (x_-,t)\psi (x_+,t), \label{(3)}
\end{equation}
i.e. the forward in time motion
\begin{equation}
i\hbar {\partial \psi (x_+,t) \over \partial t}=H_+\psi (x_+,t), \label{(4a)}
\end{equation}
and the backward in time motion
\begin{equation}
-i\hbar {\partial \psi^* (x_-,t) \over \partial t}=H_-\psi^* (x_-,t),
\label{(4b)}
\end{equation}
yielding
\begin{equation}
i\hbar {\partial (x_+|\rho (t)|x_-) \over \partial t}=
{\cal H}\ (x_+|\rho (t)|x_-), \label{(5a)}
\end{equation}
where
\begin{equation}
{\cal H}=H_+ -H_-. \label{(5b)}
\end{equation}
The requirement of working with two copies of the Hamiltonian
(i.e. $H_{\pm }$) operating on the outer product of two Hilbert
spaces has been implicit in quantum mechanics since the very beginning
of the theory. For example, from Eqs.(\ref{(5a)}), (\ref{(5b)})
one finds immediately that
the eigenvalues of the dynamic operator ${\cal H}$ are
directly the Bohr transition frequencies $\hbar \omega_{nm}=E_n-E_m$
which was the first clue to the explanation of spectroscopic structure.
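This can be made explicit in a finite dimensional sketch (illustrative only): writing the
action of ${\cal H}=H_+-H_-$ on the density matrix as the matrix $H\otimes I-I\otimes H^T$
acting on the doubled space, the eigenvalues are exactly the differences $E_n-E_m$.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.array([0.0, 0.7, 1.9])                  # spectrum of H
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal basis
H = Q @ np.diag(E) @ Q.T                       # symmetric H with eigenvalues E

I3 = np.eye(3)
Lop = np.kron(H, I3) - np.kron(I3, H.T)        # H_+ - H_- acting on rho
ev = np.sort(np.linalg.eigvals(Lop).real)
bohr = np.sort([En - Em for En in E for Em in E])
print(np.allclose(ev, bohr))                   # True: Bohr frequencies E_n - E_m
```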
If one accepts the notion of both forward in time and backward in time
Hilbert spaces, then the following physical picture of two slit
diffraction emerges. The particle can go forward and backward in time
through slit 1. This is a classical process. The particle can go forward
in time and backward in time through slit 2, which is also
classical since for classical cases $x_+(t)=x_-(t)$. On the other hand,
the particle can go forward in time through slit 1 and backward in time
through slit 2, or forward in time through slit 2 and backward in time
through slit 1. These are the sources of quantum
interference since $|x_+(t)-x_-(t)|>0$. The notion that a quantum particle
has two coordinates $x_{\pm }(t)$ moving at the same time is central.
In Sec.2 we show by explicit calculation of diffraction patterns that it
is the difference between the two motions
\begin{equation}
y=x_+ -x_- \label{(6)}
\end{equation}
that induces quantum interference.
(ii) The second part of the picture involves the question of how a
classical situation with $x_+(t)=x_-(t)$ arises. In Sec.3, the Brownian
motion of a quantum particle is discussed along with the damped evolution
operator modification of Eqs.(\ref{(5a)}), (\ref{(5b)})
\cite{DO} which becomes
(for a Brownian particle of
mass $M$ moving in a potential $U(x)$ with a damping resistance $R$)
\cite{SWV, CS, TS}
\begin{equation}
{\cal H}_{Brownian}= \frac{1}{2M}
\left(p_+ - \frac{R}{2}\, x_-\right)^2-\frac{1}{2M}
\left(p_- + \frac{R}{2}\, x_+\right)^2
+U(x_+)-U(x_-)-
{ik_BTR \over \hbar}(x_+-x_-)^2, \label{(7a)}
\end{equation}
\begin{equation}
p_\pm =-i\hbar {\partial \over \partial x_\pm}, \label{(7b)}
\end{equation}
\begin{equation}
i\hbar {\partial (x_+|\rho (t)|x_-) \over \partial t}=
{\cal H}_{Brownian}\ (x_+|\rho (t)|x_-), \label{(7c)}
\end{equation}
where the density operator in general describes a mixed statistical state.
In Sec.3 it will also be shown that the thermal bath contribution to
the right hand side of Eq.(\ref{(7a)})
(proportional to fluid temperature T) is
equivalent to a white noise fluctuation source coupling the forward and
backward motions in Eq.(\ref{(6)}) according to
\begin{equation}
<y(t)y(t^\prime )>_{noise}={\hbar^2 \over 2Rk_BT}\delta (t-t^\prime ),
\label{(8)}
\end{equation}
so that continual thermal fluctuations are always occurring in the
difference Eq.(\ref{(6)}) between forward in time and backward in time
coordinates.
That the forward and backward in time motions continually occur can also
be seen by constructing the forward and backward in time velocities
\begin{equation}
v_{\pm }={\partial {\cal H}_{Brownian}\over \partial p_{\pm }}
=\pm \, \frac{1}{M}\left( p_\pm \mp \frac{R}{2}\, x_\mp \right). \label{(9)}
\end{equation}
These velocities do not commute
\begin{equation}
[v_+,v_-]=i\hbar \,{R\over M^2}, \label{(10)}
\end{equation}
and it is thereby impossible to fix the velocities forward and backward
in time as being identical. Note the similarity between Eq.(\ref{(10)})
and the
usual commutation relations for the quantum velocities
${\bf v}=({\bf p}-(e{\bf A}/c))/M$ of a charged particle moving in a
magnetic field ${\bf B}$; i.e. $[v_1,v_2]=(i\hbar eB_3/M^2c)$. Just as
the magnetic field ${\bf B}$ induces an Aharonov-Bohm phase interference
for the charged particle, the Brownian motion friction coefficient $R$
induces a closely analogous phase interference between forward and
backward motion which expresses itself as mechanical damping. This part
of the picture is discussed in Sec.4. Sec.5 is devoted to concluding
remarks.
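The non-commutativity in Eq.(\ref{(10)}) can be verified directly by representing $v_{\pm}$ of Eq.(\ref{(9)}) as differential operators acting on a test function. A minimal symbolic sketch (sympy; the symbol names are ours):

```python
import sympy as sp

xp, xm = sp.symbols('x_p x_m', real=True)          # x_+, x_-
M, R, hbar = sp.symbols('M R hbar', positive=True)
f = sp.Function('f')(xp, xm)

def p(expr, var):
    # p_pm = -i hbar d/dx_pm, as in Eq. (7b)
    return -sp.I * hbar * sp.diff(expr, var)

def v_plus(expr):
    # v_+ = (1/M)(p_+ - (R/2) x_-), Eq. (9) upper sign
    return (p(expr, xp) - R * xm * expr / 2) / M

def v_minus(expr):
    # v_- = -(1/M)(p_- + (R/2) x_+), Eq. (9) lower sign
    return -(p(expr, xm) + R * xp * expr / 2) / M

# [v_+, v_-] f, to be compared with (i hbar R / M^2) f of Eq. (10)
comm = sp.simplify(v_plus(v_minus(f)) - v_minus(v_plus(f)))
```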
\section{Two Slit Diffraction}
Shown in Fig.1 is a picture of a two slit experiment. What is
required to derive the diffraction pattern is knowledge of the wave
function $\psi_0(x)$ of the particle at time zero when it ``passes through
the slits'', or equivalently the density matrix
\begin{equation}
(x_+|\rho_0|x_-)=\psi^*_0 (x_-)\psi_0 (x_+). \label{(11)}
\end{equation}
At a later time $t$ we wish to find the probability density for the
electron to be found at position $x$ at the detector screen,
\begin{equation}
P(x,t)=(x|\rho (t)|x)=\psi^* (x,t)\psi (x,t). \label{(12)}
\end{equation}
The solution to the free particle Schr\"odinger equation is
\begin{equation}
\psi (x,t)=\Big({M\over 2\pi\hbar it} \Big)^{1/2} \int_{-\infty}^{\infty}
\exp\left[\frac{i}{\hbar } A(x-x^\prime,t) \right]\psi_0 (x^\prime )
\;dx^\prime ,\label{(13a)}
\end{equation}
where
\begin{equation}
A(x-x^\prime,t)={M(x-x^\prime )^2\over 2t} \label{(13b)}
\end{equation}
is the Hamilton-Jacobi action for a classical free particle to move from
$x^\prime $ to $x$ in a time $t$. Eqs.(\ref{(11)})-(\ref{(13b)}) imply that
\begin{equation}
P(x,t)={M\over 2\pi\hbar t}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\left[iM{(x-x_+)^2-(x-x_-)^2 \over 2\hbar t} \right]
(x_+|\rho_0|x_-)\;dx_+dx_-. \label{(14)}
\end{equation}
The crucial point is that the density matrix $(x_+|\rho_0|x_-)$ when
the electron ``passes through the slits'', depends non-trivially on the
difference $(x_+-x_-)$ between the forward in time and backward in
time coordinates. Were $x_+$ and $x_-$ always the same, then Eq.(\ref{(14)})
would imply that $P(x,t)$ does not oscillate in $x$, i.e. there would
not be the usual quantum diffraction. What is required for quantum
interference in Eq.(\ref{(14)}) (cf. also Eq.(\ref{(13b)}) )
is that the forward in time action
$A(x-x_+,t)$ differs from the backward in time action $A(x-x_-,t)$
for the phase interference to appear in the final probability
density $P(x,t)$.
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\vspace*{80mm}
\special{psfile="vitfig1.eps"
hscale=80 vscale=80
hoffset=-40pt voffset=-270pt}
\caption{Two slit experiment.}
\end{figure}
For the usual quantum diffraction limit (see Fig.1)
\begin{equation}
w \ll d \ll D, \label{(15)}
\end{equation}
the diffraction pattern is adequately described by $|x|\gg|x_\pm|$;
i.e.
\begin{equation}
P(x,t)\approx {M\over 2\pi\hbar t}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\left[-i{M\,x\,(x_+-x_-)\over \hbar t} \right]
(x_+|\rho_0|x_-)\;dx_+dx_-. \label{(16)}
\end{equation}
For the initial wave function we write
\begin{equation}
\psi_0(x)={1\over \sqrt{2}}\Big[\phi (x-d)+\phi (x+d)\Big], \label{(17)}
\end{equation}
where
\begin{equation}
\phi(x)={1\over \sqrt{w}} \quad \mbox{if} \quad
|x|\leq \frac{w}{2} \quad \mbox{and} \quad 0 \quad
\mbox{otherwise}. \label{(18)}
\end{equation}
Eqs.(\ref{(11)}) and (\ref{(17)}) imply that
\begin{eqnarray} \nonumber
(x_+|\rho_0|x_-)&=&{1\over 2}\Big\{\phi (x_+-d)\phi (x_--d)+
\\
&& +
\phi (x_++d)\phi (x_-+d)+\phi (x_+-d)\phi (x_-+d)
+\phi (x_++d)\phi (x_--d)\Big\}. \label{(19)}
\end{eqnarray}
The four terms in Eq.(\ref{(19)}) describe,
respectively, the electron going
forward and backward in time through slit 1, forward and backward
in time through slit 2, forward in time through slit 1 and backward
in time through slit 2, and forward in time through slit 2 and backward
in time through slit 1.
The integral in Eq.(\ref{(16)}) is elementary and yields
\begin{equation}
P(x,t)\approx {4 \hbar \ t \over \pi M w \ x^2} \;
\cos^2\Big({Md\ x \over \hbar\ t} \Big)
\sin^2\Big({Mw \ x \over 2\hbar \ t }\Big). \label{(20)}
\end{equation}
Defining
\begin{equation}
K={Mvd \over \hbar D},\ \ \beta={w\over d},
\end{equation}
where $v=D/t$ is the velocity of the incident electron, Eq.(\ref{(20)})
reads
\begin{equation}
P(x,D)\approx {4\over \pi \beta K x^2} \;
\cos^2(Kx)
\sin^2\Big({\beta Kx \over 2}\Big). \label{(21)}
\end{equation}
This conventional diffraction result is plotted in Fig.2.
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\vspace*{80mm}
\special{psfile="matfig.eps"
hscale=130 vscale=130
hoffset=-100pt voffset=-750pt}
\caption{Two slit interference pattern.}
\end{figure}
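The pattern of Eq.(\ref{(20)}) is easy to evaluate numerically. Note that with $\beta =w/d$ the argument of the sine is $Mwx/2\hbar t=\beta Kx/2$. The sketch below (illustrative values of $K$ and $\beta$, in our own notation) writes $\sin^2(bx)/x^2$ via \texttt{np.sinc} so that the removable singularity at $x=0$ causes no trouble, and lets one check that the density integrates to one:

```python
import numpy as np

def two_slit_pattern(x, K, beta):
    # Eq. (20) in the variables of Eq. (21); sin argument is beta*K*x/2.
    # sin^2(b x)/x^2 = b^2 * sinc(b x/pi)^2 with np.sinc(u) = sin(pi u)/(pi u),
    # which is regular at x = 0.
    b = beta * K / 2.0
    return (4.0 / (np.pi * beta * K)) * np.cos(K * x)**2 \
        * b**2 * np.sinc(b * x / np.pi)**2

K, beta = 10.0, 0.1            # illustrative: K = M v d/(hbar D), beta = w/d
x = np.linspace(-2000.0, 2000.0, 2_000_001)
P = two_slit_pattern(x, K, beta)
```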
\section{Quantum Mechanics with Dissipation}
The need to double the degrees of freedom of a Brownian motion particle
is implicit even in the classical theory. Recall that in the classical
Brownian theory one employs the equation of motion
\begin{equation}
M\ddot{x}(t)+R\dot{x}(t)=f(t), \label{(22)}
\end{equation}
where $f(t)$ is a random (Gaussian distributed) force obeying
\begin{equation}
<f(t)f(t^\prime )>_{noise}=2\,R\,k_BT\; \delta (t-t^\prime). \label{(23)}
\end{equation}
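A quick numerical consistency check of Eqs.(\ref{(22)}), (\ref{(23)}) in the free case $U=0$ (illustrative parameters of our own choosing): an Euler-Maruyama discretization of the Langevin equation should reproduce equipartition, $M\average{v^2}\approx k_BT$, in the stationary state. This is only a sketch; the time step and run length are chosen for speed, not accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
M, R, kBT = 1.0, 2.0, 0.5
dt, nsteps = 1e-3, 1_000_000

# Euler-Maruyama for M dv = -R v dt + dW, with <dW^2> = 2 R kBT dt (Eq. (23))
v = 0.0
vs = np.empty(nsteps)
noise_amp = np.sqrt(2.0 * R * kBT * dt)
xi = rng.standard_normal(nsteps)
for i in range(nsteps):
    v += (-R * v * dt + noise_amp * xi[i]) / M
    vs[i] = v

v2 = np.mean(vs[nsteps // 10:]**2)    # stationary <v^2>, transient discarded
```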
To enforce Eq.(\ref{(22)}) one can employ a delta functional classical
constraint representation as a functional integral
\begin{equation}
\delta[M\ddot{x}+R\dot{x}-f]=
\int {\cal D}y \; \exp\left[{i\over \hbar}\,
\int dt \; y\{f-M\ddot{x}-R\dot{x}\}\right]. \label{(24)}
\end{equation}
Note in Eq.(\ref{(24)}) that one needs a constant $\hbar $ with dimensions of
action which from purely classical considerations cannot be fixed in
magnitude. From the viewpoint of quantum mechanics, we know how to fix
the magnitude. (Exactly the same situation prevails in the purely classical
statistical mechanics of Gibbs. The dimensionless phase space volume
is $\Pi_k (dp_kdq_k/2\pi \hbar )$ and the precise value to be chosen for
the action quantum $2\pi \hbar $ was evident only after quantum theory.)
Integration by parts in the time integral of Eq.(\ref{(24)}),
and averaging over
the fluctuating force $f$ yields
\begin{equation}
<\delta[M\ddot{x}+R\dot{x}-f]>_{noise}=
\int {\cal D}y <\exp\left[{i\over \hbar}
\int dt \;{\cal L}_f(\dot{x},\dot{y},x,y)\right]>_{noise}, \label{(25)}
\end{equation}
where
\begin{equation}
{\cal L}_f(\dot{x},\dot{y},x,y)=M\dot{x}\dot{y}+
{R\over 2}(x\dot{y}-y\dot{x})+fy. \label{(26)}
\end{equation}
At the classical level, the constraint condition introduced a new
coordinate $y$, and from a Lagrangian viewpoint
\begin{equation}
{d\over dt}{\partial {\cal L}_f\over \partial \dot{y}}=
{\partial {\cal L}_f\over \partial y} \quad ; \ \ \
{d\over dt}{\partial {\cal L}_f\over \partial \dot{x}}=
{\partial {\cal L}_f\over \partial x},
\end{equation}
i.e.
\begin{equation}
M\ddot{x}+R\dot{x}=f \quad ;\ \ \
M\ddot{y}-R\dot{y}=0. \label{(27)}
\end{equation}
It is in fact true that the Lagrangian system of Eqs.(\ref{(26)})-(\ref{(27)})
was discovered from a completely classical viewpoint \cite{BA}. In the
$x$ coordinate there is damping, but in the $y$ coordinate there is
amplification.
Although the Lagrangian Eq.(\ref{(26)}) was not here motivated by quantum
mechanics, it is a simple matter to make contact with the theory
of a quantum Brownian particle moving in a classical fluid using
the transformation \cite{SWV}
\begin{equation}
x_{\pm}=x\pm {y \over 2}. \label{(28)}
\end{equation}
In this case, after averaging over the random force using
\begin{equation}
<\exp\left[{i\over \hbar}\int dt \; y(t)f(t)\right]>_{noise}=
\exp\left[-{k_BTR\over \hbar^2}\int dt \; y(t)^2\right], \label{(29)}
\end{equation}
one finds
\begin{equation}
<\exp\left[{i\over \hbar}\int dt \;{\cal L}_f(\dot{x},\dot{y},x,y)\right]>_{noise}
=\exp\left[{i\over \hbar}\int dt \;{\cal L}(\dot{x}_+,\dot{x}_-,x_+,x_-)\right],
\label{(30)}
\end{equation}
with a complex Lagrangian
\begin{equation}
{\cal L}(\dot{x}_+,\dot{x}_-,x_+,x_-)={M\over 2}(\dot{x}_+^2-\dot{x}_-^2)
+{R\over 2}(\dot{x}_+x_- - \dot{x}_-x_+)\,+ \,i
{k_BTR\over \hbar}(x_+-x_-)^2. \label{(31)}
\end{equation}
In evaluating Eq.(\ref{(29)})
we employed Eq.(\ref{(23)}) and the Gaussian nature
of the random force. Considered as a statistical probability in the
coordinate $y$, the right hand side of Eq.(\ref{(29)})
represents a Gaussian process with a correlation
function given in Eq.(\ref{(8)}).
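The algebra behind the transformation Eq.(\ref{(28)}) can also be confirmed symbolically: substituting $x=(x_++x_-)/2$, $y=x_+-x_-$ into the deterministic ($f=0$) part of Eq.(\ref{(26)}) reproduces the real, $T$-independent part of Eq.(\ref{(31)}). A sympy sketch:

```python
import sympy as sp

t = sp.symbols('t', real=True)
M, R = sp.symbols('M R', positive=True)
xp = sp.Function('x_p')(t)   # x_+(t)
xm = sp.Function('x_m')(t)   # x_-(t)

# Eq. (28): x = (x_+ + x_-)/2, y = x_+ - x_-
x = (xp + xm) / 2
y = xp - xm

# f = 0 part of Eq. (26)
L_f = M * sp.diff(x, t) * sp.diff(y, t) \
    + (R / 2) * (x * sp.diff(y, t) - y * sp.diff(x, t))

# real part of Eq. (31) at T = 0
L_pm = (M / 2) * (sp.diff(xp, t)**2 - sp.diff(xm, t)**2) \
     + (R / 2) * (sp.diff(xp, t) * xm - sp.diff(xm, t) * xp)
```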
Employing the Lagrangian in Eq.(\ref{(31)}) in a path
integral formulation for the density matrix \cite{SW, FEV},
\begin{equation}
(x_+|\rho (t)|x_-)=
\int_{-\infty}^\infty \int_{-\infty}^\infty
K(x_+,x_+^\prime ,x_-,x_-^\prime ,t)\,
(x_+^\prime|\rho_0|x_-^\prime) \;dx_+^\prime dx_-^\prime , \label{(32a)}
\end{equation}
and
\begin{equation}
K(x_+,x_+^\prime ,x_-,x_-^\prime ,t)=
\int_{x_+^\prime}^{x_+} {\cal D}x_+(t^\prime) \int_{x_-^\prime}^{x_-}
{\cal D}x_-(t^\prime)\ \exp\left[{i\over \hbar }\int_0^t
{\cal L}^\prime dt^\prime \right], \label{(32b)}
\end{equation}
yields the equation of motion (\ref{(7c)}).
Note, from Eqs.(\ref{(7a)})-(\ref{(7c)}) the normalization integral
\begin{equation}
N(t)=Tr\rho (t)=\int_{-\infty}^\infty \int_{-\infty}^\infty
\delta (x_+-x_-)\;(x_+|\rho (t)|x_-)\;dx_+ dx_- \label{(34a)}
\end{equation}
obeys
\begin{equation}
\dot{N}(t)=- {R\over 2M}N(t)\ ,\ \ \ \ N(t)=N(0)e^{-\frac{R}{2M}t}.
\label{(35b)}
\end{equation}
The decay of the normalization is a consequence of the customary
procedure of integrating out an infinite number of thermal Brownian motion
bath coordinates in statistical mechanics. This process gives rise
to an effective Hamiltonian ${\cal H}_{Brownian}$ which even in the limit
$T\to 0$ is not self-adjoint; i.e. with
\begin{equation}
{\cal H}_0=\lim_{T\to 0}\ {\cal H}_{Brownian}, \label{(36)}
\end{equation}
the eigenvalues of ${\cal H}_0$ are complex \cite{FE, QD}. These complex
eigenvalues lead to the (temperature independent) decay of $N(t)$. To
keep the probability ``normalized'' one merely uses the average
$\langle A \rangle = {\rm Tr}(\rho A)/{\rm Tr}\,\rho$.
At zero temperature, the equation of motion for the density matrix
is given by
\begin{equation}
i\hbar {\partial (x_+|\rho (t)|x_-) \over \partial t}=
{\cal H}_0\ (x_+|\rho (t)|x_-), \label{(37a)}
\end{equation}
\begin{equation}
{\cal H}_0\ =
\frac{1}{2M}\left(p_+ - \frac{R}{2} \, x_-\right)^2 -
\frac{1}{2M}\left(p_- + \frac{R}{2} \, x_+\right)^2. \label{(37b)}
\end{equation}
The solution to Eq.(\ref{(37a)}) has the form
\begin{equation}
(x_+|\rho (t)|x_-)=
\int_{-\infty}^\infty \int_{-\infty}^\infty
K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t)\,
(x_+^\prime|\rho_0|x_-^\prime) \; dx_+^\prime dx_-^\prime , \label{(38)}
\end{equation}
where
\begin{equation}
K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t)=
e^{-i{\cal H}_0t/\hbar}\delta (x_+-x_+^\prime )\delta (x_--x_-^\prime ),
\label{(39)}
\end{equation}
or, in path integral form
\begin{equation}
K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t)=
\int_{x_+^\prime}^{x_+} {\cal D}x_+(s) \int_{x_-^\prime}^{x_-}
{\cal D}x_-(s)\ \exp\left[{i\over \hbar }\int_0^t
{\cal L}_0(s) ds\right], \label{(40)}
\end{equation}
where
\begin{equation}
{\cal L}_0(s)
={M\over 2}\Big[\dot{x}_+^2(s)-\dot{x}_-^2(s)\Big]
+{R\over 2}\Big[ \dot{x}_+(s)x_-(s)-\dot{x}_-(s)x_+(s)\Big]. \label{(41)}
\end{equation}
From Eqs.(\ref{(40)}), (\ref{(41)}),
a translation of the coordinates by a constant
$(x_+,x_-)\to (x_++a_+,x_-+a_-)$ yields
\begin{eqnarray} \nonumber
&&K_0 (x_++a_+,x_+^\prime +a_+,x_-+a_-,x_-^\prime +a_-,t)= \hspace{8cm}
\\
&& \hspace{4cm}
\exp\left[{iR\over 2\hbar}\Big(a_-(x_+-x_+^\prime)-
a_+(x_--x_-^\prime)\Big)\right]
K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t). \label{(42)}
\end{eqnarray}
From Eq.(\ref{(42)}),
\begin{equation}
K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t)=
e^{i\Phi (x_+,x_-,x_+^\prime ,x_-^\prime )}
{\cal F}_0 (x_+-x_+^\prime ,x_--x_-^\prime ,t), \label{(43a)}
\end{equation}
where
\begin{equation}
\Phi (x_+,x_-,x_+^\prime ,x_-^\prime )=
{R\over 2\hbar }(x_+x_-^\prime -x_-x_+^\prime).
\label{(43b)}
\end{equation}
From
\begin{equation}
i\hbar {\partial K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t)
\over \partial t}={\cal H}_0
\ K_0 (x_+,x_+^\prime ,x_-,x_-^\prime ,t), \label{(44)}
\end{equation}
and Eq.(\ref{(43a)}) one finds
\begin{equation}
i \hbar {\partial {\cal F}_0 (x_+,x_-,t)\over \partial t}=
\left[\frac{1}{2M}\left(p_+- \frac{R}{2} \, x_-\right)^2 \,-
\,\frac{1}{2M}\left(p_- + \frac{R}{2}
\, x_+ \right)^2 \right]{\cal F}_0 (x_+,x_-,t). \label{(46)}
\end{equation}
With
\begin{equation}
\gamma =\frac{R}{2M}, \label{(47a)}
\end{equation}
the solution of Eq.(\ref{(46)}) is given by
\begin{equation}
{\cal F}_0 (x_+,x_-,t)=
{M\gamma \over 2\pi \hbar\ \sinh(\gamma t)}
\exp\left[\frac{i}{2\hbar} M\,\gamma \ \coth(\gamma t)(x_+^2-x_-^2)\right].
\label{(47b)}
\end{equation}
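That Eq.(\ref{(47b)}) indeed solves Eq.(\ref{(46)}) can be verified symbolically. The sketch below substitutes ${\cal F}_0$ into both sides of Eq.(\ref{(46)}) (with $R=2M\gamma$ from Eq.(\ref{(47a)})) and evaluates the residual at rational sample points, which avoids relying on sympy's trigonometric simplifier:

```python
import sympy as sp

xp, xm, t = sp.symbols('x_p x_m t', real=True)
M, g, hbar = sp.symbols('M gamma hbar', positive=True)
R = 2 * M * g                                  # Eq. (47a)

# Eq. (47b)
F0 = (M * g / (2 * sp.pi * hbar * sp.sinh(g * t))
      * sp.exp(sp.I * M * g * sp.coth(g * t) * (xp**2 - xm**2) / (2 * hbar)))

def p(expr, var):                              # p_pm = -i hbar d/dx_pm
    return -sp.I * hbar * sp.diff(expr, var)

def A_plus(e):                                 # (p_+ - (R/2) x_-) e
    return p(e, xp) - R * xm * e / 2

def A_minus(e):                                # (p_- + (R/2) x_+) e
    return p(e, xm) + R * xp * e / 2

lhs = sp.I * hbar * sp.diff(F0, t)             # i hbar dF0/dt
rhs = (A_plus(A_plus(F0)) - A_minus(A_minus(F0))) / (2 * M)
residual = lhs - rhs
```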
\vspace{.5cm}
\section{Phase Coherence and Dissipative Flux}
The above results may be applied to the two slit diffraction
problem as in Eq.(\ref{(14)}).
The general result is that the probability
density in $x$ is given by
\begin{equation}
P(x,t)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
K(x,x_+,x,x_-,t)\,(x_+|\rho_0|x_-)\;dx_+dx_-. \label{(48)}
\end{equation}
In the regime of Eq.(\ref{(16)}) we then obtain from
Eqs.(\ref{(43a)}), (\ref{(43b)}) and (\ref{(47a)})-(\ref{(48)}),
in terms of the renormalized time $\tau $ defined by
\begin{equation}
\gamma \tau =e^{-\gamma t}\sinh(\gamma t), \label{(49a)}
\end{equation}
\begin{equation}
e^{\gamma t}P_0(x,t)\approx
{M\over 2\pi\hbar \tau}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp\left[-\frac{iM}{\hbar \tau} \,x\, (x_+-x_-) \right]
(x_+|\rho_0|x_-)\;dx_+dx_-. \label{(49b)}
\end{equation}
Comparing Eq.(\ref{(16)}) to Eq.(\ref{(49b)})
one finds the following remarkable
result: For a particle in a bath which induces a damping $\gamma =(R/2M)$
at zero temperature, the slit diffraction patterns for
the frictional case can be obtained from those of the zero friction
case. All that is required is to rescale the effective time
according to Eq.(\ref{(49a)}).
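The content of the rescaling Eq.(\ref{(49a)}) is easily made explicit: $\gamma\tau =e^{-\gamma t}\sinh(\gamma t)=(1-e^{-2\gamma t})/2$, so $\tau \approx t$ for $\gamma t\ll 1$ (the frictionless limit) while $\tau $ saturates at $1/2\gamma $ for $\gamma t\gg 1$, freezing the spreading of the pattern. A numerical sketch (illustrative $\gamma $):

```python
import numpy as np

gamma = 0.5
t = np.array([1e-3, 0.1, 1.0, 10.0, 50.0])
tau = np.exp(-gamma * t) * np.sinh(gamma * t) / gamma    # Eq. (49a)
```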
The probability density (at zero temperature) to find a particle in
the interval $dx$ is proportional to
\begin{equation}
P(x,t)=
{M\gamma \over 2\pi \hbar \sinh(\gamma t)}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
\exp \left[i\, \phi (x,x_+-x_-)+ i \,{M\gamma (x_+^2-x_-^2)
\over 2\hbar \tanh(\gamma t) }\right]
(x_+|\rho_0|x_-)\;dx_+dx_-, \label{(50a)}
\end{equation}
where
\begin{equation}
\phi (x,x_+-x_-)=-{M \gamma \,x (x_+-x_-)\over \hbar}
=-{R \,x (x_+-x_-)\over 2\hbar}. \label{(50b)}
\end{equation}
Eqs.(\ref{(50a)}), (\ref{(50b)}) follow from Eqs.(\ref{(43a)}),
(\ref{(43b)})
and (\ref{(47a)})-(\ref{(48)}).
The phase in Eq.(\ref{(43b)}), i.e. $\Phi (x_+,x_-,x_+^\prime ,x_-^\prime )=
(R/ 2\hbar)(x_+x_-^\prime - x_-x_+^\prime )$ represents a ``dissipative
flux'' $2 \hbar \Phi =R\ Area$. With ${\bf X}=(x_+,x_-)$ and
${\bf X}^\prime =(x_+^\prime ,x_-^\prime) $ as vectors in a plane,
$Area={\bf N\cdot }({\bf X \times X}^\prime )$,
where ${\bf N}$ is the vector normal to the plane. The phase
$
\phi (x,x_+^\prime -x_-^\prime )=\Phi (x_+=x,x_-=x,x_+^\prime,x_-^\prime)
$
in Eqs. (\ref{(50a)}) and (\ref{(50b)}) is of the dissipative flux type.
Note the similarity between Eq.(\ref{(37b)})
and the Hamiltonian for a particle
in the $x-y$ plane with a magnetic field in the z-direction; i.e.
$H=({\bf p}-e\,{\bf B\times r}/2c)^2/2M$. For the magnetic field case,
the flux is $B\times Area $ while for the closely analogous case of
${\cal H}_0$ the flux is $R\times Area$.
Magnetic flux induces Aharonov-Bohm phase interference
for the charged particle. Dissipative flux
yields an analogous phase interference between forward and
backward in time motion as expressed by mechanical damping.
The similarity is most easily appreciated in the path integral
formulation as in Eqs.(\ref{(31)}) and (\ref{(32a)}), (\ref{(32b)}).
The resistive part of the action is
\begin{equation}
{\cal S}_R={R\over 2}\int (x_-\dot{x}_+ - x_+\dot{x}_- ) dt=
{R\over 2}\int (x_-dx_+ - x_+dx_-).
\label{(51)}
\end{equation}
To see the phase interference for two different paths $P_1$ and
$P_2$ in the $(x_+,x_-)$ plane with the same endpoints,
one should compute (with $\int_{P_1}-\int_{P_2}=\oint$),
\begin{equation}
{{\cal S}_{R}(\sma{interference})\over \hbar }={R\over 2\hbar }
\oint (x_-dx_+-x_+dx_-)={R\, \Sigma (P_1,P_2)\over \hbar },
\label{(52)}
\end{equation}
where $\Sigma (P_1,P_2)$ is the oriented area between the two paths $P_1$
and $P_2$. Such phase interference $\exp\left[i R \,\Sigma /\hbar\right]$ enters
into the path integral formulation of the problem in Eq.(\ref{(40)}).
The condition of constructive phase interference is thereby
$R\,\Sigma =2\pi n \hbar $ where $n=0,\pm 1,\pm 2,...$ is a
quantization integer.
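That the closed-loop integral in Eq.(\ref{(52)}) measures (twice) the oriented area enclosed in the $(x_+,x_-)$ plane is easy to confirm numerically, e.g.\ for a circular loop traversed counterclockwise (the overall sign fixes the orientation convention for $\Sigma $); a discretized sketch:

```python
import numpy as np

# closed loop in the (x_+, x_-) plane: a circle of radius a, counterclockwise
a = 0.7
theta = np.linspace(0.0, 2.0 * np.pi, 200_001)
xp = a * np.cos(theta)
xm = a * np.sin(theta)

# trapezoidal discretization of the loop integral of Eq. (52):
# closed integral of (x_- dx_+ - x_+ dx_-)
loop = np.sum(0.5 * (xm[1:] + xm[:-1]) * np.diff(xp)
              - 0.5 * (xp[1:] + xp[:-1]) * np.diff(xm))
```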
\section{Conclusions}
In the conventional textbook description of quantum mechanics,
one considers that there is but one coordinate $x$
(or more generally one set of coordinates) which describes a
physical system. As shown by Schwinger (in his seminal work on
Brownian motion \cite{SW}), in quantum mechanical theory it is
often more natural to consider doubling the system coordinates,
in our case from one coordinate $x(t)$ describing motion in time
to two coordinates, say $x_+(t)$ going forward in time
and $x_-(t)$ going backward in time.
In this picture, a system acts in a classical fashion if
the two paths can be identified, i.e.
$x_{classical}(t)\equiv x_{+\ classical}(t)\equiv x_{-\ classical}(t)$.
When the system moves so that the forward in time and backward in time
motions are (at the same time) unequal $x_+(t)\ne x_-(t)$,
then the system is behaving in a quantum mechanical fashion and
exhibits interference patterns in measured position probability densities.
Of course when $x$ is actually measured there is only one {\it classical}
$x=x_+=x_-$.
It is only when you do not look at a coordinate, e.g. do not look at which
slit the electron may have passed, that the quantum picture is valid
$x_+\ne x_-$. In this fascinating regime in which coordinate doubling
and path splitting takes place, we are all under the dictates of Bohr who
finally warns us not to ask what the quantum system is really doing.
When the system is quantum mechanical just add up the amplitudes and
absolute square them. Ask nothing more.
In this work we have concentrated on the low temperature limit, which
means $T \ll T_\gamma $ where
\begin{equation}
k_BT_\gamma=\hbar \gamma={\hbar R\over 2M}. \label{(53)}
\end{equation}
In the high temperature regime $T \gg T_\gamma $, the thermal bath
motion suppresses the probability for $x_+\ne x_-$ via the thermal term
$(k_BTR /\hbar)(x_+-x_-)^2$ in Eq.(\ref{(7a)}). In terms of the diffusion
coefficient in Eqs.(\ref{(1)}) and (\ref{(53)}), i.e.
\begin{equation}
D={T\over T_\gamma }\Big({\hbar \over 2M}\Big), \label{(54)}
\end{equation}
the condition for classical Brownian motion for high mass particles
is that $D \gg (\hbar /2M)$, and the condition for quantum interference
with low mass particles is that $D \ll (\hbar /2M)$. For a single atom
in a fluid at room temperature it is typically true that
$D\sim (\hbar /2M)$, equivalently $T\sim T_\gamma $ so that quantum
mechanics plays an important but perhaps not dominant role in the
Brownian motion. For large particles (in, say, colloidal systems)
classical Brownian motion would appear to dominate the motion.
It is interesting to note that in the formulation of quantum mechanics
known as stochastic quantization, $(\hbar /2M)$ plays the role of a
diffusion coefficient of a sort defined by Nelson \cite{NE} which also
distinguishes forward and backwards in time splitting. In such a
formulation the distinction between low temperature quantum motions
and high temperature classical motions would become the distinction
between Nelson diffusion and Einstein diffusion.
It is remarkable that, although in different contexts and in different
viewpoint frameworks, coordinate doubling has also entered into the
canonical quantization of finite temperature field theoretical systems
\cite{TFD} as well as other dissipative systems \cite{FE, QD} and it
appears to be intimately related to the algebraic properties of the
theory \cite{DM, KH}.
Finally, we note that the ``negative'' kinematic term in the Lagrangian
Eq.(\ref{(31)}) also appears in two-dimensional gravity models leading to (at least)
two different strategies in the quantization method \cite{JA}: the
Schr\"odinger representation approach, where no negative norm appears, and
the string/conformal field theory approach where negative norm states
arise as in Gupta-Bleuler electrodynamics. It appears to be an
interesting question to ask about any deeper connection between the
$(x_+,x_-)$ Schwinger formalism and the subtleties of low
dimensional gravity theory.
We hope that the views discussed in this work have clarified the
nature of the coordinate doubling framework.
\vspace{0.7cm}
\noindent{\bf Acknowledgments}
This work has been partially supported by DOE in USA, by INFN in Italy and
by EU Contract ERB CHRX CT940423.
\newpage
\label{secIntro}
In recent years some interest in the theory of universal coding has
focused on detecting hierarchical structure in compressed data. An
important tool for this task are universal grammar-based codes
\cite{KiefferYang00} which compress strings by transforming them first
into special context-free grammars \cite{CharikarOthers05} and then
encoding the grammars into less redundant strings. This article
presents several bounds for the vocabulary size, i.e., the number of
distinct nonterminal symbols in a~grammar-based compression for
a~string. Indirectly, the bounds concern also the code redundancy,
which can be elucidated as follows.
Let $X_{m:n}:=(X_k)_{m\le k\le n}$ be the blocks of finitely-valued
variables $X_i:\Omega\rightarrow\mathbb{X}=\klam{0,1,...,D-1}$ drawn
from stationary process $(X_k)_{k\in\mathbb{Z}}$ on
$(\Omega,\mathcal{J},P)$. Assuming expectation operator $\sred{}$,
define $n$-symbol block entropy $H(n):=H(X_{1:n})=-\sred{\log
P(X_{1:n})}$ and excess entropy $E(n):=
I(X_{1:n};X_{n+1:2n})=2H(n)-H(2n)$, being mutual information between
adjacent blocks \cite{CrutchfieldFeldman01}.
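As a concrete check of these definitions, note that for a stationary first-order Markov chain $E(n)=I(X_n;X_{n+1})$ is the same for all $n\ge 1$, so $2H(n)-H(2n)$ must not depend on $n$. A sketch with an illustrative binary chain (exhaustive enumeration, base-2 logarithms; the chain parameters are ours):

```python
import itertools
import numpy as np

# illustrative binary Markov transition matrix and its stationary distribution
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
pi = np.array([0.75, 0.25])        # satisfies pi @ P == pi

def block_entropy(n):
    """H(n) = -sum over x of P(X_{1:n} = x) log2 P(X_{1:n} = x)."""
    H = 0.0
    for x in itertools.product((0, 1), repeat=n):
        p = pi[x[0]]
        for a, b in zip(x, x[1:]):
            p *= P[a, b]
        H -= p * np.log2(p)
    return H

def excess_entropy(n):
    # E(n) = 2 H(n) - H(2n)
    return 2 * block_entropy(n) - block_entropy(2 * n)
```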
On the other hand, let $C:\mathbb{X}^+\rightarrow \mathbb{X}^+$ be
a~uniquely decodable code. For code length $\abs{C(\cdot)}$ being an
analog of algorithmic complexity \cite{CharikarOthers05}, define
$$I^C(u:v):=\abs{C(u)}+\abs{C(v)}-\abs{C(uv)}$$ as the analog of
algorithmic mutual information \cite{GrunwaldVitanyi03}. We
will denote the expected normalized code length and its excess as
\begin{align*}
H^C(n)&:=\sred{\abs{C(X_{1:n})}} \log D,
\\
E^C(n)&:=\sred{I^C(X_{1:n}:X_{n+1:2n})} \log D
\end{align*}
For a~uniquely decodable code, noiseless coding inequality $H^C(n)\ge
H(n)$ is satisfied and the code is called universal if compression
rate $\lim_{n} H^C(n)/n$ equals entropy rate $h:=\lim_{n} H(n)/n$ for
any stationary distribution $P((X_k)_{k\in\mathbb{Z}}\in\cdot)$. In
fact, the search for codes having the lowest redundancy on finite
strings can be restated as the task of finding universal codes with
the smallest excess code length $I^C(\cdot:\cdot)$ since
\begin{align}
\label{DiffECE}
\limsup_{n\rightarrow\infty} & \kwad{E^C(n) - E(n)}\ge 0,
\\
\limsup_{n\rightarrow\infty} & \kwad{E^C(n) - E^{C'}(n)}\ge 0
\text{ if $H^C(\cdot)\ge H^{C'}(\cdot)$},
\end{align}
for any universal codes $C$ and ${C'}$, cf.\ \cite{Debowski06,Debowski06c}.
The specific aim of the present note is to justify links between the
vocabulary size and excess code length $I^C(\cdot:\cdot)$ for
certain universal grammar-based codes. A~weaker form of this
connection was mentioned in the context of the following linguistic
investigations, cf.\ \cite{Debowski06,Debowski06b}:
\begin{LaTeXenumerate}
\item Majority of words in a natural language text can be identified
as frequently repeated strings of letters. Grammar-based codes can
be used to detect these repeats. Distinct words of the text happen
to get represented as distinct nonterminal symbols in an
approximately smallest context-free grammar for the text
\cite{Wolff80,DeMarcken96}. The number of different
``significantly'' often repeated substrings in a~typical text can be
100 times greater than in a~comparable realization of a~memoryless
source \cite{Debowski06b}.
\item There is a~hypothesis that excess entropy of a~random natural
language text (imagined as a~stationary stochastic process with
$X_i$ being consecutive letters of the text) obeys $E(n)\asymp
\sqrt{n}$ rather than $E(n)=0$ as for a~memoryless source
\cite{Hilberg90} (cf.\ \cite{Debowski06c} for a~connection of such
an effect with nonergodicity). We asked whether the power-law growth
of $E(n)$ can be linked with the known empirical power-law growth of
the number of distinct words in a~text against the text length
\cite{Herdan64}.
\end{LaTeXenumerate}
In view of observation (i), our question in (ii) could be restated as:
Are excess entropy $E(n)$ and the expected vocabulary size of some
minimal code for string $X_{1:2n}$ approximately equal for every
stationary process? Trying to answer the question, we derived
inequality (\ref{DiffECE}) in \cite{Debowski06} and sought for further
links between the excess code length and the vocabulary size. The
result of \cite{Debowski06} concerning the latter is encouraging but
too weak. It relates the vocabulary size of the smallest grammar in
the sense of \cite{CharikarOthers05} to the Yang-Kieffer excess
grammar length rather than to the excess length of an actual universal
code.
In this article, we will strengthen the connection. We will prove
that excess code length $I^C(u:v)$ for some grammar-based code $C$ is
dominated by the product of the length of the longest repeated
substring in string $w:=uv$ and the vocabulary size of the code for
$w$. To get this inequality, it suffices that $C$ be the shortest
code in an algebraically closed subclass of codes using a~special
grammar-to-string encoder. There exist universal codes satisfying
this requirement.
Besides the mentioned dominance, we will justify an inequality in the
opposite direction and, additionally, show that the vocabulary size of
an irreducible grammar for string $w$ cannot be less than the square
root of the grammar length, cf.\ \cite{Debowski06b,KiefferYang00}.
This pair of inequalities might be used to lower-bound the redundancy
of codes based on irreducible grammars.
The exposition is as follows. Section \ref{secMorph} reviews
grammar-based coding. We construct local grammar-to-string encoders
(\ref{ssecEncoders}) and define minimal codes (\ref{ssecLengths}) with
respect to some classes of grammars (\ref{ssecGrammars}). Subsection
\ref{ssecUniversal} justifies universality of certain minimal codes
which use local encoders. Section \ref{secVocabulary} presents the
upper (\ref{ssecUpperExcess}) and the lower (\ref{ssecLowerExcess})
bounds for the excess lengths of a~minimal code expressed in terms of
its vocabulary size. Section \ref{secConclude} resumes the article.
\section{Grammar-based coding revisited}
\label{secMorph}
Grammar-based compression is founded on the following concept. An
\emph{admissible} grammar is a~context-free grammar which generates
singleton language $\klam{w}$, $w\in\mathbb{X}^+$, and whose
production rules do not have empty right-hand sides
\cite{KiefferYang00}. In such a~grammar, there is one rule per
nonterminal symbol and the nonterminals can be ordered so that the
symbols are rewritten onto strings of strictly succeeding symbols
\cite{KiefferYang00}.
Hence, an admissible grammar is given by its set of production
rules
$
\klam{
A_1\rightarrow\alpha_1,
A_2\rightarrow\alpha_2,
...,
A_n\rightarrow\alpha_n
}$,
where $A_1$ is the start symbol, other $A_i$ are
secondary nonterminals, and the right-hand sides of rules satisfy
$\alpha_i\in (\klam{A_{i+1},A_{i+2},...,A_n}\cup\mathbb{X})^+$. Since
the grammar can be restored also from sequence
\begin{align}
\label{GrammarNotation}
G=(\alpha_1,\alpha_2,...,\alpha_n),
\end{align}
we will call $G$ simply the grammar. Its \emph{vocabulary size}, i.e.,
the number of used nonterminal symbols, will be written
$$\voc{G}:=\card \klam{A_{1},A_{2},...,A_n} =n.$$
Let $\mathbb{X}^*=\mathbb{X}^+\cup\klam{\lambda}$, where $\lambda$ is
the empty word. For any string $\alpha\in
(\klam{A_{2},A_{3},...,A_n}\cup\mathbb{X})^*$, we denote its
\emph{expansion} with respect to $G=(\alpha_1,\alpha_2,...,\alpha_n)$
as $\dzi{\alpha}_G$ \cite{CharikarOthers05}, i.e.,
$\klam{\dzi{\alpha}_G}$ is the language generated by grammar
$(\alpha,\alpha_2,\alpha_3,...,\alpha_n)$. The set of admissible
grammars will be denoted as $\mathcal{G}$ and $\mathcal{G}(w)$ will be
the subset of admissible grammars which generate language $\klam{w}$,
$w\in\mathbb{X}^+$. Function
$\Gamma:\mathbb{X}^+\rightarrow\mathcal{G}$ such that
$\Gamma(w)\in\mathcal{G}(w)$ for all $w\in\mathbb{X}^+$ is called
a~\emph{grammar transform} \cite{KiefferYang00}.
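As a toy illustration of these definitions (a hypothetical encoding of our own, not the encoder used later: nonterminal $A_i$ is represented by the integer $i$, terminals by characters, and a grammar by the tuple of right-hand sides as in (\ref{GrammarNotation})):

```python
def expand(alpha, G):
    """Expansion of a string alpha of symbols w.r.t. G = (alpha_1,...,alpha_n).

    Terminates because the right-hand side of A_i may only use A_j with j > i.
    """
    out = []
    for s in alpha:
        if isinstance(s, int):          # nonterminal A_s
            out.append(expand(G[s - 1], G))
        else:                           # terminal symbol
            out.append(s)
    return "".join(out)

# a grammar in G("abab"):  A_1 -> A_2 A_2,  A_2 -> a b
G = ((2, 2), ("a", "b"))

voc = len(G)                            # vocabulary size voc(G)
yk_length = sum(len(a) for a in G)      # Yang-Kieffer length |G|
```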
If string $w$ contains many repeated substrings then some grammar in
$\mathcal{G}(w)$ can ``factor out'' the repetitions and may be used to
represent $w$ concisely. It is not straightforward, however, how to
quantify the size of a~grammar. In \cite{KiefferYang00} the length of
grammar $G=(\alpha_1,\alpha_2,...,\alpha_{\voc{G}})$ was defined as
\begin{align}
\label{YKlength}
\abs{G}:=\textstyle\sum_i \abs{\alpha_i},
\end{align}
where $\abs{\alpha}$ is the length of $\alpha\in
(\klam{A_{1},A_{2},...,A_n}\cup\mathbb{X})^*$. Function
(\ref{YKlength}) will be called Yang-Kieffer length.
For a grammar transform, ratio $\abs{\Gamma(w)}/\abs{w}$ can be quite
a~biased measure of string compressibility. Precisely, transform
$\Gamma$ is called \emph{asymptotically compact} if
\begin{align}
\label{AC}
\lim_{n\rightarrow\infty} \max_{w\in\mathbb{X}^n} \abs{\Gamma(w)}/n=0
\end{align}
and for each grammar in $\Gamma(\mathbb{X}^+)$ each nonterminal has
a~different expansion. There is plenty of such transforms
\cite{KiefferYang00,CharikarOthers05}.
Since the compression given by (\ref{AC}) is apparent, consider
\emph{grammar-based codes}, i.e., uniquely decodable codes
$C=B(\Gamma(\cdot)):\mathbb{X}^+\rightarrow \mathbb{X}^+$, where
$\Gamma:\mathbb{X}^+\rightarrow\mathcal{G}$ is a~grammar transform and
$B:\mathcal{G}\rightarrow\mathbb{X}^+$ is called a~\emph{grammar
encoder} \cite{KiefferYang00}. We have $\lim_n
\max_{w\in\mathbb{X}^n} \abs{C(w)}/n\ge 1$ necessarily. Nevertheless,
there exists a~grammar encoder
$B_\text{YK}:\mathcal{G}\rightarrow\mathbb{X}^+$ \cite{KiefferYang00}
such that
\begin{LaTeXenumerate}
\item set $B_\text{YK}(\mathcal{G})$ is prefix-free,
\item $\abs{B_\text{YK}(G)}\le \abs{G}(A+\log_D \abs{G})$ for some
$A>0$,
\item $C=B_\text{YK}(\Gamma(\cdot))$ is a~universal code for any
asymptotically compact transform $\Gamma$.
\end{LaTeXenumerate}
\subsection{Local grammar encoders}
\label{ssecEncoders}
It is hard to analyze the excess lengths of grammar-based codes which
use $B_\text{YK}$ given by \cite{KiefferYang00} as their
grammar-to-string encoder. We will define a~more convenient encoder.
It will represent a~grammar as a~string resembling list
(\ref{GrammarNotation}) but, simultaneously, it will constitute nearly
a~homomorphism between some operations on grammars and strings.
\begin{definition}
$\oplus:\mathcal{G}\times\mathcal{G}\rightarrow\mathcal{G}$
is called \emph{grammar joining} if
$$
G_1\in\mathcal{G}(w_1) \land
G_2\in\mathcal{G}(w_2)\implies
G_1\oplus G_2\in\mathcal{G}(w_1w_2).
$$
\end{definition}
\null
It would be convenient to use such grammar joining $\oplus$ and
encoder $B:\mathcal{G}\rightarrow\mathbb{X}^+$ that the edit
distance between $B(G_1\oplus G_2)$ and $B(G_1)B(G_2)$ be small.
Without making the idea too precise, such joining and encoder will be
called \emph{adapted}.
The following example of mutually adapted joining $\oplus$ and
encoders will be used in the next sections. For any function
$f:\mathbb{U}\rightarrow\mathbb{W}$ of symbols, where concatenation
on domains $\mathbb{U}^*$ and $\mathbb{W}^*$ is defined, denote
its extension onto strings as $f^*:\mathbb{U}^*\ni x_1x_2...x_m\mapsto
f(x_1)f(x_2)...f(x_m)\in\mathbb{W}^*$. For grammars
$G_i=(\alpha_{i1},\alpha_{i2},...,\alpha_{in_i})$, $i=1,2$, define
joining
\begin{align*}
G_1\oplus G_2:=(A_2A_{n_1+2},\,
&H_1^*(\alpha_{11}),H_1^*(\alpha_{12}),...,H_1^*(\alpha_{1n_1}),
\\
&H_2^*(\alpha_{21}),H_2^*(\alpha_{22}),...,H_2^*(\alpha_{2n_2})),
\end{align*}
where $H_1(A_j):=A_{j+1}$ and $H_2(A_j):=A_{j+n_1+1}$ for nonterminals
and $H_1(x):=H_2(x):=x$ for terminals $x\in\mathbb{X}$.
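For concreteness, the joining $\oplus$ with the renamings $H_1,H_2$ can be sketched as follows (an illustration of ours, not code from the cited works), representing a grammar as a list of right-hand sides:

```python
# Illustrative sketch of the joining "oplus" above.  A grammar is a list
# of right-hand sides: rules[j-1] defines A_j; nonterminal A_j is the
# string "Aj", terminals are integers 0..D-1.

def rename(rhs, shift):
    """Apply H: shift every nonterminal index by `shift`, keep terminals."""
    return ["A%d" % (int(s[1:]) + shift) if isinstance(s, str) else s
            for s in rhs]

def join(g1, g2):
    """G1 oplus G2: new start rule A_2 A_{n1+2}; G1's rules shifted by 1
    (H_1), G2's rules shifted by n1 + 1 (H_2), where n1 = len(g1)."""
    n1 = len(g1)
    return ([["A2", "A%d" % (n1 + 2)]]
            + [rename(r, 1) for r in g1]
            + [rename(r, n1 + 1) for r in g2])
```

If $G_1$ expands to $w_1$ and $G_2$ to $w_2$, the joined grammar's start symbol expands to $w_1w_2$, as required by the definition.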
\begin{definition}
$B:\mathcal{G}\rightarrow\mathbb{X}^+$ is
a~\emph{local grammar encoder} if
\begin{align}
\label{LocalCoder}
B(G)=B_\text{S}^*(B_\text{N}(G)),
\end{align}
where:
\begin{LaTeXenumerate}
\item function
$B_\text{N}:\mathcal{G}\rightarrow(\klam{0}\cup\mathbb{N})^*$ encodes
grammars as strings of natural numbers so that the encoding of grammar
$G=(\alpha_1,\alpha_2,...,\alpha_n)$ is string
$$B_\text{N}(G):=F_1^*(\alpha_1)DF_2^*(\alpha_2)D...DF_n^*(\alpha_n)(D+1),$$
which employs relative indexing $F_i(A_j):=D+1+j-i$ for nonterminals and
identity transformation $F_i(x):=x$ for terminals
$x\in\mathbb{X}=\klam{0,1,...,D-1}$,
\item $B_\text{S}$ is any function of form $B_\text{S}:\klam{0}\cup\mathbb{N}
\rightarrow\mathbb{X}^+$ (for technical purposes, not necessarily
an injection)---we will call $B_\text{S}$ the natural number encoder.
\end{LaTeXenumerate}
\end{definition}
Indeed, local encoders are adapted to joining operation $\oplus$.
For instance, if $B(G_i)=u_iB_\text{S}(D+1)$ for some grammars $G_i$,
$i=1,2$, then $B(G_1\oplus G_2)=
B_\text{S}(D+2)B_\text{S}(D+2+\voc{G_1})
B_\text{S}(D)u_1B_\text{S}(D)u_2B_\text{S}(D+1)$.
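As an illustration (our sketch, with the same list representation of grammars: rules as lists of right-hand sides, nonterminal $A_j$ written "Aj", terminals as integers), the number encoding $B_\text{N}$ may be computed like this:

```python
# Sketch of the number encoding B_N.  The separator between consecutive
# rules is D, the final end marker is D + 1, and nonterminal A_j inside
# rule i is encoded by the relative index D + 1 + j - i; terminals
# 0..D-1 are copied unchanged.

def encode_numbers(grammar, D):
    out = []
    for i, rhs in enumerate(grammar, start=1):
        for s in rhs:
            out.append(D + 1 + int(s[1:]) - i if isinstance(s, str) else s)
        out.append(D)           # rule separator ...
    out[-1] = D + 1             # ... replaced by the end marker D + 1
    return out
```

For a single-rule grammar the output is just $F_1^*(\alpha_1)$ followed by $D+1$, matching the formula above.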
There exist many prefix-free local encoders. Obviously, set
$B_\text{N}(\mathcal{G})$ itself is prefix-free. Therefore, encoder
(\ref{LocalCoder}) is prefix-free (and uniquely decodable) if
$B_\text{S}$ is also prefix-free, i.e., if $B_\text{S}$ is an
injection and set $B_\text{S}(\klam{0}\cup\mathbb{N})$ is prefix-free.
\subsection{Encoder-induced grammar lengths}
\label{ssecLengths}
Let us generalize the concept of grammar length.
\begin{definition}
For a~grammar encoder $B$, function $|B(\cdot)|$
will be called the \emph{$B$-induced grammar length}.
\end{definition}
For example, Yang-Kieffer length $\abs{\,\cdot\,}$ is
$B$-induced for a~local grammar encoder
$B=B_\text{S}^*(B_\text{N}(\cdot))$, where
\begin{align}
\label{YKCoder}
\text{$B_\text{S}(x)=\lambda$ for $x\in\klam{D,D+1}$
and $B_\text{S}(x)\in\mathbb{X}$ else}.
\end{align}
In the same spirit, we can extend the idea of the smallest grammar
with respect to the Yang-Kieffer length, discussed in
\cite{CharikarOthers05}. Subclass $\mathcal{J}\subset\mathcal{G}$ of
admissible grammars will be called \emph{sufficient} if there exists
a~grammar transform $\Gamma:\mathbb{X}^+\rightarrow\mathcal{J}$, i.e.,
if $\mathcal{J}\cap \mathcal{G}(w)\not=\emptyset$ for all
$w\in\mathbb{X}^+$. Conversely, we will call grammar transform
$\Gamma$ a~$\mathcal{J}$-grammar transform if
$\Gamma(\mathbb{X}^+)\subset\mathcal{J}$.
\begin{definition}
For grammar length $\aabs{\cdot}$, $\mathcal{J}$-grammar transform
$\Gamma$ will be called \emph{$(\aabs{\cdot},\mathcal{J})$-minimal
grammar transform} if $\aabs{\Gamma(w)}\le \aabs{G}$ for all
$G\in\mathcal{G}(w)\cap \mathcal{J}$ and $w\in\mathbb{X}^+$.
\end{definition}
\begin{definition}
Code $B(\Gamma(\cdot))$ will be called
\emph{$(B,\mathcal{J})$-minimal} if $\Gamma$ is
$(\aabs{\cdot},\mathcal{J})$-minimal for a~$B$-induced grammar length
$\aabs{\cdot}$.
\end{definition}
\begin{definition}
For a~grammar length $\aabs{\cdot}$, grammar subclasses
$\mathcal{J},\mathcal{K}\subset\mathcal{G}$ are called
\emph{$\aabs{\cdot}$-equivalent} if
$$\min_{G\in\mathcal{G}(w)\cap\mathcal{J}}
\aabs{G}=\min_{G\in\mathcal{G}(w)\cap\mathcal{K}} \aabs{G}\qquad \text{for
all $w\in\mathbb{X}^+$}.$$
\end{definition}
\subsection{Subclasses of grammars}
\label{ssecGrammars}
In section \ref{secVocabulary}, we will bound the excess lengths for
$(B,\mathcal{J})$-minimal codes, where $B$ are local encoders and
$\mathcal{J}$ are some sufficient subclasses. In subsection
\ref{ssecUniversal}, we will show that several of these codes are
universal. Prior to this, we have to define some necessary subclasses
of grammars.
First, we will say that $(\alpha_1,\alpha_2,...,\alpha_n)$ is
a~\emph{flat grammar} if $\alpha_i\in \mathbb{X}^+$ for $i>1$. The set
of flat grammars will be denoted as $\mathcal{F}$. Symbol
$\mathcal{D}_k\subset\mathcal{F}$ will denote the class of
\emph{$k$-block interleaved grammars}, i.e., flat grammars
$(\alpha_1,\alpha_2,...,\alpha_n)$, where $\alpha_i\in \mathbb{X}^k$
for $i>1$. On the other hand, $\mathcal{B}_k\subset\mathcal{D}_k$ will
stand for the set of \emph{$k$-block grammars}, i.e., $k$-block
interleaved grammars $(uw,\alpha_2,...,\alpha_n)$, where string
$u\in(\klam{A_{2},A_{3},...,A_n})^*$ contains occurrences of all
$A_{2},A_{3},...,A_n$ and string $w\in\mathbb{X}^*$ has length
$|w|<k$, cf.\ \cite{NeuhoffShields98}. Of course, classes
$\mathcal{B}_k$, $\mathcal{D}_k$, $\mathcal{B}:=\bigcup_{k\ge 1}
\mathcal{B}_k$, $\mathcal{D}:=\bigcup_{k\ge 1} \mathcal{D}_k$, and
$\mathcal{F}$ are sufficient.
Next, grammar $(\alpha_1,\alpha_2,...,\alpha_n)$ is called
\emph{irreducible} if
\begin{LaTeXenumerate}
\item each string $\alpha_i$ has a~different expansion
$\dzi{\alpha_i}_G$ and satisfies $\abs{\alpha_i}>1$,
\item each secondary nonterminal appears in string
$\alpha_1\alpha_2...\alpha_n$ at least twice,
\item each pair of consecutive symbols in strings
$\alpha_1,\alpha_2,...,\alpha_n$ appears at most once at
nonoverlapping positions \cite{KiefferYang00}.
\end{LaTeXenumerate}
The set of irreducible grammars will be denoted as $\mathcal{I}$. Any
$\mathcal{I}$-grammar transform is asymptotically compact
\cite{KiefferYang00} so it yields a~universal code when combined with
grammar encoder $B_\text{YK}$.
Starting with any grammar $G_1\in\mathcal{G}(w)$, one can construct an
irreducible grammar $G_2\in\mathcal{G}(w)$ by applying a~sequence of
certain reduction rules until the local minimum of functional
$2\abs{\,\cdot\,}-\voc{\cdot}$ is achieved \cite{KiefferYang00}. This
leads to the following lemma.
\begin{lemma}
\label{theoLengthIrred}
Classes $\mathcal{I}$ and $\mathcal{G}$ are
$\abs{\,\cdot\,}$-equivalent.
\end{lemma}
\begin{proof}
The only reduction rule applicable to a~grammar minimizing
$\abs{\,\cdot\,}$ is the introduction of a~new nonterminal denoting
a~pair of symbols which appears exactly twice on the right-hand side
of the grammar, cf. section VI in \cite{KiefferYang00}. This
reduction conserves Yang-Kieffer length.
\end{proof}
Additionally, we will say that grammar
$(\alpha_1,\alpha_2,...,\alpha_n)$ is \emph{partially irreducible} if
it satisfies conditions (i) and (ii) of irreducibility, as well as,
each pair of consecutive symbols in string $\alpha_1$ appears at most
once at nonoverlapping positions.
Let $\mathcal{P}$ stand for the set of partially irreducible
grammars. Of course, $\mathcal{I}\subset\mathcal{P}\subset\mathcal{G}$
and $\mathcal{P}$ is sufficient.
Although $\mathcal{F}\cap\mathcal{P}$ and $\mathcal{F}$ are not
$\abs{\,\cdot\,}$-equivalent, class $\mathcal{F}\cap\mathcal{P}$ is
sufficient and relates to $\mathcal{F}$ partially like $\mathcal{I}$
relates to $\mathcal{G}$. Some $\mathcal{F}\cap\mathcal{P}$-grammar
transform $\Gamma$ is a~modification of the longest matching
$\mathcal{I}$-grammar transform \cite{KiefferYang00,CharikarOthers05}.
In order to compute $\Gamma(w)$, we start with grammar
$\klam{A_1\rightarrow w}$ and we replace iteratively the longest
repeated substrings $u$ in the start symbol definition with new
nonterminals $A_i\rightarrow u$ until there is no repeat of length
$|u|\ge 2$. $\Gamma(w)$ is the modified grammar.
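A naive sketch (ours) of this modified transform follows; to keep the resulting grammar flat, each new rule stores the terminal expansion of the replaced repeat:

```python
# Naive sketch of the longest-matching transform: repeatedly replace the
# longest nonoverlapping repeat (length >= 2) in the start rule by a
# fresh nonterminal whose rule is the repeat's terminal expansion.
# Nonterminals are strings "Aj"; terminals are single characters.

def is_nt(s):
    return isinstance(s, str) and len(s) > 1 and s[0] == "A" and s[1:].isdigit()

def longest_repeat(seq):
    """Longest sublist with two nonoverlapping occurrences, length >= 2."""
    n = len(seq)
    for L in range(n // 2, 1, -1):
        first = {}
        for i in range(n - L + 1):
            key = tuple(seq[i:i + L])
            if key in first and i - first[key] >= L:
                return list(key)
            first.setdefault(key, i)
    return None

def expand(seq, rules):
    """Terminal expansion of a mixed string; rules[j-1] defines A_j."""
    out = []
    for s in seq:
        if is_nt(s):
            out += expand(rules[int(s[1:]) - 1], rules)
        else:
            out.append(s)
    return out

def longest_match_transform(w):
    rules = [list(w)]
    while True:
        u = longest_repeat(rules[0])
        if u is None:
            return rules
        name, L = "A%d" % (len(rules) + 1), len(u)
        new, i = [], 0
        while i < len(rules[0]):
            if rules[0][i:i + L] == u:
                new.append(name)
                i += L
            else:
                new.append(rules[0][i])
                i += 1
        rules[0] = new
        rules.append(expand(u, rules))
```

Each fresh nonterminal replaces at least two nonoverlapping occurrences, so condition (ii) of partial irreducibility holds for the secondary rules introduced here.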
\subsection{Universal codes for local encoders}
\label{ssecUniversal}
Neuhoff and Shields proved that any
$(B_\text{NS},\mathcal{B})$-minimal code is universal for some encoder
$B_\text{NS}$ and the class of block grammars $\mathcal{B}$
\cite{NeuhoffShields98}. Encoder $B_\text{NS}$ resembles a~local
encoder. The main difference is encoding nonterminals $A_i$ as
strings of length $\floor{\log_D \voc{G}}+ 1$ rather than strings of
length $|B_\text{S}(D+i)|$. Therefore we can establish the following
theorem.
\begin{theorem}
\label{theoUni}
Let $B_\text{S}$ be such a~prefix-free natural number encoder
that $|B_\text{S}(\cdot)|$ is growing and
\begin{align}
\label{UniCoder}
\limsup_{n\rightarrow\infty} |B_\text{S}(n)|/\log_D n =1.
\end{align}
Then for any sufficient subclass of grammars
$\mathcal{J}\supset\mathcal{B}$, every
$(B_\text{S}^*(B_\text{N}(\cdot)),\mathcal{J})$-minimal code $C$ is
universal, that is, $\lim_{n} H^C(n)/n=h$ and $\limsup_{n}
K^C(X_{1:n})/n\le h$ almost surely for every stationary process
$(X_k)_{k\in\mathbb{Z}}$.
\end{theorem}
\begin{proof}
Consider $\mathcal{B}_k$-grammar transforms $\Gamma_k$.
For $\epsilon>0$ and stationary process
$(X_k)_{k\in\mathbb{Z}}$ with entropy rate $h$, let $k(n)$ be the largest
integer $k$ satisfying $k2^{k(h+\epsilon)}\le n$. We have
\begin{align*}
\limsup_{n\rightarrow\infty} \max_{w\in\mathbb{X}^n}
\frac{\log_D \voc{\Gamma_{k(n)}(w)}}{k(n)} &\le h+2\epsilon
,
\\
\lim_{n\rightarrow\infty}
\sred{\voc{\Gamma_{k(n)}(X_{1:n})}}\cdot k(n)/n &= 0
,
\\
\lim_{n\rightarrow\infty}
\voc{\Gamma_{k(n)}(X_{1:n})}\cdot k(n)/n &= 0
\text{ almost surely, cf.\ \cite{NeuhoffShields98}}
.
\end{align*}
Since $\lim_n k(n)=\infty$, a~$(B,\mathcal{J})$-minimal
code is universal if
\begin{align*}
|B(\Gamma_k(w))|\le \alpha k\voc{\Gamma_{k}(w)} + \gamma(k)\frac{n}{k}
\log_D \voc{\Gamma_{k}(w)},
\end{align*}
where $\alpha >0$ and $\lim_k \gamma(k)=1$. In particular, this
inequality holds for (\ref{LocalCoder}), (\ref{UniCoder}), and growing
$|B_\text{S}(\cdot)|$.
\end{proof}
The prefix-free natural number encoder $B_\text{S}$ satisfying
(\ref{UniCoder}) can be chosen, e.g., as the $D$-ary representation
$\omega:\mathbb{N}\rightarrow\mathbb{X}^*$ \cite{Elias75},
$|\omega(n)|=\ell(n)$, where $$\ell(n) :=
\begin{cases}
1 &\text{if } n < D, \\
\ell(\floor{\log_D n}) + \floor{\log_D n} + 1 &\text{if } n \ge D.
\end{cases}$$
Alternatively, we can use the $D$-ary representation
$\delta:\mathbb{N}\rightarrow\mathbb{X}^*$ \cite{Elias75},
$|\delta(n)|=
1
+2\floor{\log_D (1+\floor{\log_D n})}
+\floor{\log_D n}$.
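Both length functions can be evaluated exactly; the following sketch (ours) counts digits to obtain $\floor{\log_D n}$ rather than relying on floating-point logarithms:

```python
# Sketch: exact computation of the omega- and delta-code lengths above.

def log_floor(n, D):
    """floor(log_D n), computed by counting D-ary digits."""
    k = 0
    while n >= D:
        n //= D
        k += 1
    return k

def omega_len(n, D):
    """ell(n): length of the D-ary omega representation (recursion above)."""
    if n < D:
        return 1
    m = log_floor(n, D)
    return omega_len(m, D) + m + 1

def delta_len(n, D):
    """Length of the D-ary delta representation."""
    m = log_floor(n, D)
    return 1 + 2 * log_floor(1 + m, D) + m
```

For $D=2$ these reproduce the familiar binary Elias $\omega$ and $\delta$ codeword lengths, e.g. $|\omega(4)|=6$ and $|\delta(4)|=5$.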
\section{Bounds involving the vocabulary size}
\label{secVocabulary}
We will derive several inequalities for the vocabulary size of certain
minimal grammar-based codes. Strictly speaking, code universality is
irrelevant to the proofs. It is important, however, that the codes
use local grammar encoders.
\subsection{Upper bounds for the excess lengths}
\label{ssecUpperExcess}
We will begin with defining several operations on grammars. For
strings $u,v\in\mathbb{X}^*$ with $n=\abs{u}$, $m=\abs{v}$, and $w=uv$,
define the \emph{left} and \emph{right croppings} of grammar
$G=(\alpha_1,\alpha_2,...,\alpha_n)\in\mathcal{G}(w)$ as
\begin{align*}
\mathbb{L}_n G:=(x_Ly_L,\alpha_2,...,\alpha_n)\in\mathcal{G}(u),
\\
\mathbb{R}_m G:=(y_Rx_R,\alpha_2,...,\alpha_n)\in\mathcal{G}(v),
\end{align*}
where exactly one of the following conditions holds:
\begin{LaTeXenumerate}
\item $\alpha_1=x_Lx_R$ and $y_Ly_R=\lambda$,
\item $\alpha_1=x_LA_ix_R$ for some nonterminal $A_i$, $2\le i\le n$,
with expansion $\dzi{A_i}_G=y_Ly_R$.
\end{LaTeXenumerate}
Next, for $G=(\alpha_1,\alpha_2,...,\alpha_n)$, define its
\emph{flattening} $\mathbb{F}G:=
(\alpha_1,\dzi{\alpha_2}_G,\dzi{\alpha_3}_G,...,\dzi{\alpha_n}_G)$.
The secondary part of the grammar will be denoted as $\mathbb{S}G:=
(\lambda,\alpha_2,\alpha_3,...,\alpha_n)$. Additionally, we will use
a~notation for the maximal length of a~nonoverlapping repeat in string
$w\in\mathbb{X}^*$, i.e.,
\begin{align*}
\mathbf{L}(w):=\max_{u,x,y,z\in\mathbb{X}^*:\, w=xuyuz} |u|.
\end{align*}
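For illustration, $\mathbf{L}(w)$ can be computed by direct search (a simple sketch of ours; suffix structures would be faster):

```python
# Sketch: L(w), the maximal length of a nonoverlapping repeat in w.
# If w contains a nonoverlapping repeat of length L + 1, its length-L
# prefix is a nonoverlapping repeat too, so we may scan lengths upward
# and stop at the first length with no repeat.

def max_repeat(w):
    n, best = len(w), 0
    for L in range(1, n // 2 + 1):
        first, found = {}, False
        for i in range(n - L + 1):
            key = w[i:i + L]
            if key in first and i - first[key] >= L:
                found = True
                break
            first.setdefault(key, i)
        if not found:
            break
        best = L
    return best
```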
Now we can generalize Theorem 3 from \cite{Debowski06}. We will show
that the lengths of some minimal codes are almost subadditive.
Moreover, the excess lengths are dominated by the vocabulary size
multiplied by the length of the longest repeat.
\begin{theorem}
\label{theoUpper}
Let $B$ be local encoder (\ref{LocalCoder}). Introduce constants
\begin{align*}
W_m&:=\max_{0\le n\le D+2+m}|B_\text{S}(n)|.
\end{align*}
Let $\Gamma$ be a~$(\aabs{\cdot},\mathcal{J})$-minimal grammar
transform for the $B$-induced grammar length $\aabs{\cdot}$. Consider
code $C=B(\Gamma(\cdot))$, strings $u,v,w\in\mathbb{X}^+$, and
a~grammar class $\mathcal{K}$ which is $\aabs{\cdot}$-equivalent to
$\mathcal{J}$.
\begin{LaTeXenumerate}
\item If $G_1,G_2\in\mathcal{J}\implies G_1\oplus G_2\in\mathcal{K}$
then
\begin{align}
\label{UpperLower}
\abs{C(u)}+\abs{C(v)}-\abs{C(uv)} \ge -3W_0 - W_{\voc{\Gamma(u)}}.
\end{align}
\item If $G\in\mathcal{J}\implies \mathbb{L}_n G,\, \mathbb{R}_n
G\in\mathcal{K}$ for all valid $n$ then
\begin{align}
\label{UpperLeftRight}
\hspace{-2em}
\abs{C(u)},\, \abs{C(v)}
&\le \abs{C(uv)} + W_0\mathbf{L}(uv),
\\
\label{UpperUpper}
\hspace{-2em}
\abs{C(u)}+\abs{C(v)}-\abs{C(uv)}
&\le \aabs{\mathbb{S}\Gamma(uv)}+W_0\mathbf{L}(uv).
\end{align}
\item If $G\in\mathcal{J}\implies \mathbb{F}G\in\mathcal{K}$ then
\begin{align}
\label{UpperVoc}
\aabs{\mathbb{S}\Gamma(w)}+W_0\mathbf{L}(w)\le W_0\voc{\Gamma(w)}(1+\mathbf{L}(w)).
\end{align}
\end{LaTeXenumerate}
\emph{Remark 1:} In particular, (\ref{UpperLower}) holds for
$\mathcal{J}=\mathcal{G},\mathcal{P},\mathcal{I}$ while inequalities
(\ref{UpperLeftRight})--(\ref{UpperVoc}) hold for
$\mathcal{J}=\mathcal{G},\mathcal{P},\mathcal{I},\mathcal{F},\mathcal{D},\mathcal{D}_k$.
Moreover, (\ref{UpperUpper}) and (\ref{UpperVoc}) imply together bound
\begin{align}
\label{UpperVocII}
\hspace{-0.5em}
\abs{C(u)}+\abs{C(v)}-\abs{C(uv)}
&\le W_0\voc{\Gamma(uv)}(1+\mathbf{L}(uv)),
\end{align}
which we have mentioned in the introduction.
\\
\emph{Remark 2:} Theorem 3 in \cite{Debowski06} is a~restriction of
Theorem \ref{theoUpper} to $B_\text{S}$ given by (\ref{YKCoder}) and
$\aabs{\cdot}$ equal to Yang-Kieffer length $\abs{\,\cdot\,}$.
\end{theorem}
\begin{proof}
\begin{LaTeXenumerate}
\item The result is implied by $\aabs{\Gamma(uv)}\le
\aabs{\Gamma(u)\oplus\Gamma(v)}$ and $$\aabs{G_1\oplus G_2}
\le\aabs{G_1}+\aabs{G_2}+|B_\text{S}(D+2+\voc{G_1})|+3W_0,$$
where $G_1=\Gamma(u)$ and $G_2=\Gamma(v)$.
\item Set $n=\abs{u}$, $m=\abs{v}$, and $w=uv$. The inequalities follow from
\begin{align*}
\aabs{\Gamma(w)}+W_0\mathbf{L}(w)&\ge
\aabs{\mathbb{L}_n\Gamma(w)}\ge \aabs{\Gamma(u)},
\\
\aabs{\Gamma(w)}+W_0\mathbf{L}(w)&\ge
\aabs{\mathbb{R}_m\Gamma(w)}\ge \aabs{\Gamma(v)},
\end{align*}
and
$$\aabs{\mathbb{L}_n\Gamma(w)}+\aabs{\mathbb{R}_m\Gamma(w)}\le
\aabs{\Gamma(w)}+ \aabs{\mathbb{S}\Gamma(w)}+W_0\mathbf{L}(w).$$
\item The thesis is entailed by $\aabs{\mathbb{S}\Gamma(w)}\le
\aabs{\mathbb{S}\mathbb{F}\Gamma(w)}$ and
$\aabs{\mathbb{S}\mathbb{F}\Gamma(w)}\le
W_0\okra{\voc{\Gamma(w)}-1}(1+\mathbf{L}(w))+W_0$.
\end{LaTeXenumerate}
\end{proof}
\subsection{Lower bounds for the excess lengths}
\label{ssecLowerExcess}
For Yang-Kieffer length function, the excess lengths can be lower-bounded
by another quantity related to vocabulary size. Firstly, for grammars
$G_i=(\alpha_{i1},\alpha_{i2},...,\alpha_{in_i})$, $i=1,2$, denote the
number of their common nonterminal expansions
\begin{align*}
\voc{G_1;G_2}:= \card \bigcap_{i=1,2}
\klam{\dzi{\alpha_{i2}}_{G_i},\dzi{\alpha_{i3}}_{G_i},...,
\dzi{\alpha_{in_i}}_{G_i}}
\end{align*}
and introduce a~new kind of grammar joining
\begin{align*}
G_1\otimes G_2:=(\alpha_{11}\alpha_{21},\,
&Q_1^*(\alpha_{12}),...,Q_1^*(\alpha_{1n_1}),
\\
&Q_2^*(\alpha_{22}),...,Q_2^*(\alpha_{2n_2})),
\end{align*}
where $Q_1(A_j):=A_{j}$ and $Q_2(A_j):=A_{j+n_1-1}$ for nonterminals
and $Q_1(x):=Q_2(x):=x$ for terminals $x\in\mathbb{X}$.
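The quantity $\voc{G_1;G_2}$ is easy to compute from nonterminal expansions (our sketch, using the same list representation of grammars as before: rules[j-1] defines $A_j$, nonterminals are strings "Aj"):

```python
# Sketch: the number of common secondary expansions |G1; G2|.

def expansion(j, rules):
    """Terminal expansion of nonterminal A_j in grammar `rules`."""
    out = []
    for s in rules[j - 1]:
        if isinstance(s, str) and s[:1] == "A" and s[1:].isdigit():
            out += expansion(int(s[1:]), rules)
        else:
            out.append(s)
    return tuple(out)

def common_vocab(g1, g2):
    """Count expansions shared by the secondary nonterminals of g1, g2."""
    e1 = {expansion(j, g1) for j in range(2, len(g1) + 1)}
    e2 = {expansion(j, g2) for j in range(2, len(g2) + 1)}
    return len(e1 & e2)
```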
Recall also Grammar Reduction Rule 5 from \cite{KiefferYang00}, which
deletes useless nonterminals from the grammar and, for all
nonterminals sharing the same expansion, substitutes one of them. Let
$\mathbb{I}G$ be the result of applying the rule to grammar $G$.
\begin{theorem}
\label{theoLower}
Let $\Gamma$ be a~$(\abs{\,\cdot\,},\mathcal{J})$-minimal grammar
transform. If
$G_1,G_2\in\mathcal{K}\implies \mathbb{I}G_1, G_1\otimes
G_2\in\mathcal{K}$ for some grammar class $\mathcal{K}$ being
$\abs{\,\cdot\,}$-equivalent to $\mathcal{J}$ then
\begin{align}
\label{LowerLower}
\abs{\Gamma(u)}+\abs{\Gamma(v)}-\abs{\Gamma(uv)}
&\ge
\voc{\Gamma(u);\Gamma(v)}.
\end{align}
\emph{Remark:} In particular, (\ref{LowerLower}) holds for
$\mathcal{J}=\mathcal{G},\mathcal{P},\mathcal{I},\mathcal{F},\mathcal{D}_k$.
\end{theorem}
\begin{proof}
Since $\mathcal{K}$ is closed against operation $\mathbb{I}$, there
exist $G_1\in\mathcal{K}\cap\mathcal{G}(u)$ and
$G_2\in\mathcal{K}\cap\mathcal{G}(v)$ such that
$\abs{G_1}=\abs{\Gamma(u)}$, $\abs{G_2}=\abs{\Gamma(v)}$, and
$\mathbb{I}G_i=G_i$. Hence $\abs{\alpha_{ij}}\ge 1$ for
$(\alpha_{i1},\alpha_{i2},...,\alpha_{in_i})=G_i$ and, consequently,
\begin{align}
\abs{\mathbb{I}(G_1\otimes G_2)}
&\le
\abs{G_1\otimes G_2} -
\voc{G_1;G_2}\min_{ij} \abs{\alpha_{ij}}
\nonumber
\\
\label{PreLowerLower}
&\le
\abs{G_1\otimes G_2} -
\voc{G_1;G_2}
.
\end{align}
Notice that $\abs{G_1\otimes G_2}=\abs{G_1}+\abs{G_2}$. Thus
(\ref{LowerLower}) follows from (\ref{PreLowerLower}) and from
$\abs{\Gamma(uv)}\le \abs{\mathbb{I}(G_1\otimes G_2)}$.
\end{proof}
The next proposition suggests that the size of common vocabulary
$\voc{\Gamma(u);\Gamma(v)}$ for irreducible grammar transforms may grow
quite fast with the length of strings $u$ and $v$.
\begin{theorem}
\label{theoIrred}
\begin{LaTeXenumerate}
\item
If $\Gamma$ is a~$\mathcal{F}\cap\mathcal{P}$-grammar transform then
\begin{align}
\label{PIrred}
\voc{\Gamma(w)}\mathbf{L}(w)> \sqrt{\abs{\Gamma(w)}/2}-D-1.
\end{align}
\item
If $\Gamma$ is an $\mathcal{I}$-grammar transform then
\begin{align}
\label{Irred}
\voc{\Gamma(w)}> \sqrt{\abs{\Gamma(w)}/2}-D-1.
\end{align}
\end{LaTeXenumerate}
\emph{Remark:} Bound (ii) was mentioned in \cite{Debowski06b}.
\end{theorem}
\begin{proof}
Write $G=\Gamma(w)$ and $V=\voc{\Gamma(w)}$ for brevity.
Notice that $x+a+1> \sqrt{y/2}$ follows from
$(y-x)/2 \le (x+a)^2$ for $x,y,a\ge 0$.
\begin{LaTeXenumerate}
\item At every second position of the start symbol definition of $G$,
a~pair of symbols can occur only once. Thus (\ref{PIrred})
follows by $[\abs{G}-V\mathbf{L}(w)]/2 \le \okra{V+D}^2 \le
\okra{V\mathbf{L}(w)+D}^2$.
\item In this case, any pair of symbols occurs at most once at
every second position of all right-hand sides of $G$. Hence,
$(\abs{G}-V)/2 \le \okra{V+D}^2$, which implies (\ref{Irred}).
\end{LaTeXenumerate}
\end{proof}
\section{Conclusion}
\label{secConclude}
We have shown that the vocabulary size of certain minimal universal
grammar-based codes is greater than the excess code length divided by
the length of the longest repeated substring $\mathbf{L}(\cdot)$.
Recall that $\mathbf{L}(X_{1:n})$ cannot be upper-bounded almost
surely by a~universal function $o(n)$ for a~block of $n$ symbols drawn
from an arbitrary stationary stochastic process \cite{Shields92b}.
Nevertheless, $\mathbf{L}(X_{1:n})=O(\log n)$ if
$(X_i)_{i\in\mathbb{Z}}$ is a~finite-energy process \cite{Shields97}.
Hence, an extended Hilberg hypothesis \cite{Hilberg90}, stating that
a~good model for texts in natural languages is a~finite-energy process
with excess entropy $E(n)\asymp\sqrt{n}$, seems consistent with
observations asserting that vocabulary size for certain text
compressions is $\Omega(\sqrt{n}/\log n)$ where $n$ is the text length
\cite[Figure 3.12 (b), p.\ 69]{NevillManning96}.
While some premises appealing to ergodic decomposition make Hilberg's
hypothesis plausible even without the evidence of grammar-based
compression \cite{Debowski06c}, there remains an important theoretical
problem. Can we use the vocabulary size or the excess length of
a~grammar-based code to estimate excess entropy accurately?
Inequality (\ref{DiffECE}) gives a~lower bound for
$E^C(n)-E(n)$ but the upper bounds are less recognized. Although
$\abs{E^C(n)-E(n)}=O(\log n)$ when the length of code $C$ equals
prefix algorithmic complexity and block distribution $P(X_{1:n})$ is
recursively computable \cite{Debowski06c,GrunwaldVitanyi03}, some
results in ergodic theory indicate that there is no universal bound
for $\abs{E^C(n)-E(n)}$ in the class of stationary processes
\cite{Debowski06c,Shields93}.
Simpler arguments could be used to infer that difference $E^C(n)-E(n)$
is large for certain codes and stochastic processes. Consider
compressing a~memoryless source with entropy rate $h>0$. We have
$E(n)=0$. On the other hand, let code $C$ be formed by a~local encoder
satisfying (\ref{UniCoder}) and an irreducible transform $\Gamma$.
Then $E^C(n)= \Omega(\sqrt{hn/\log n})$ would be implied by
Theorems \ref{theoLower} and \ref{theoIrred} if relation
$\voc{\Gamma(X_{1:n});\Gamma(X_{n+1:2n})}\asymp \voc{\Gamma(X_{1:n})}$
held.
Let us notice that the bound for $E^C(n)$ conjectured for memoryless
sources and irreducible grammar-based codes is almost the same as the
inequality established for general minimal codes and sources with
$E(n)\asymp\sqrt{n}$. This should not obscure the fact that there is
a~huge variation of vocabulary size for different information sources
and a~fixed code \cite{Debowski06b}, an empirical fact not yet fully
understood theoretically.
\section*{Acknowledgment}
This work was supported by the Australian Research Council, grant no.\
DP0210999, during the author's visit to the University of New South
Wales, Sydney, Australia. The author wishes to thank Prof.\ Arthur
Ramer of the UNSW.
\section{Acknowledgement}
We thank Ishaan Singh for help in drafting this paper.
\section{Appendix} \label{Appendix}
\begin{table}[]
\begin{tabular}{|l|l|l|l|l|}
\hline
\multirow{2}{*}{state} & \multicolumn{2}{l|}{MAE} & \multicolumn{2}{l|}{nMAE (\%)} \\ \cline{2-5}
& Local & Global & Local & Global \\ \hline
ak & 20.57 & 34.487 & 27.85 & 46.70 \\ \hline
al & 407.77 & 373.965 & 37.15 & 34.07 \\ \hline
ar & 157.33 & 316.5366 & 29.30 & 58.96 \\ \hline
az & 226.52 & 464.88 & 31.62 & 64.9 \\ \hline
ca & 1338.76 & 1601.93 & 18.82 & 22.52 \\ \hline
co & 65.25 & 94.07 & 19.49 & 28.10 \\ \hline
dc & 18.56 & 23.43 & 34.61 & 43.69 \\ \hline
de & 50.81 & 80.96 & 55.77 & 88.87 \\ \hline
fl & 1275.58 & 2025.70 & 32.62 & 51.81 \\ \hline
ga & 510.90 & 919.80 & 22.36 & 40.26 \\ \hline
hi & 82.98 & 357.41 & 37.40 & 161.11 \\ \hline
ia & 277.22 & 426.23 & 37.63 & 57.86 \\ \hline
id & 96.08 & 146.48 & 30.18 & 45.47 \\ \hline
il & 381.80 & 1000.20 & 19.30 & 50.57 \\ \hline
in & 183.15 & 291.35 & 20.46 & 32.55 \\ \hline
ks & 425.32 & 447.91 & 76.25 & 80.30 \\ \hline
ky & 214.76 & 305.70 & 31.33 & 44.60 \\ \hline
la & 409.52 & 495.45 & 53.85 & 65.15 \\ \hline
ma & 366.00 & 425.01 & 571.28 & 663.38 \\ \hline
md & 149.23 & 203.96 & 25.33 & 34.63 \\ \hline
me & 7.79 & 31.70 & 34.83 & 141.74 \\ \hline
mi & 208.71 & 323.62 & 28.26 & 43.83 \\ \hline
mn & 149.04 & 319.29 & 21.57 & 46.22 \\ \hline
mo & 210.50 & 379.88 & 16.99 & 30.67 \\ \hline
ms & 181.35 & 234.99 & 26.92 & 34.88 \\ \hline
mt & 34.42 & 117.85 & 29.86 & 102.27 \\ \hline
nc & 331.10 & 526.08 & 23.15 & 36.78 \\ \hline
nd & 75.68 & 128.36 & 36.17 & 61.35 \\ \hline
ne & 90.33 & 177.09 & 33.68 & 66.03 \\ \hline
nh & 7.81 & 12.63 & 35.36 & 57.13 \\ \hline
nj & 123.61 & 164.06 & 37.74 & 50.09 \\ \hline
nm & 49.90 & 195.38 & 38.72 & 148.96 \\ \hline
nv & 168.63 & 502.96 & 31.84 & 94.97 \\ \hline
ny & 145.12 & 169.44 & 22.05 & 25.75 \\ \hline
oh & 189.54 & 282.73 & 18.25 & 27.23 \\ \hline
ok & 159.91 & 341.58 & 22.32 & 47.68 \\ \hline
or & 61.49 & 134.40 & 26.04 & 56.92 \\ \hline
pa & 141.93 & 246.73 & 19.46 & 33.83 \\ \hline
ri & 68.91 & 89.77 & 72.43 & 94.36 \\ \hline
sc & 240.80 & 372.59 & 28.35 & 43.88 \\ \hline
sd & 81.35 & 97.73 & 42.08 & 50.56 \\ \hline
tn & 418.23 & 569.35 & 28.91 & 39.36 \\ \hline
tx & 1231.87 & 1831.06 & 23.09 & 34.32 \\ \hline
ut & 63.38 & 166.70 & 16.83 & 44.28 \\ \hline
va & 188.00 & 476.48 & 18.47 & 46.81 \\ \hline
vt & 3.48 & 12.87 & 53.59 & 197.96 \\ \hline
wa & 135.77 & 209.61 & 25.77 & 39.80 \\ \hline
wi & 151.44 & 385.41 & 19.84 & 50.50 \\ \hline
wv & 41.85 & 104.18 & 31.64 & 78.78 \\ \hline
wy & 15.32 & 23.38 & 43.65 & 66.60 \\ \hline
\textbf{Entire} & \textbf{226.301} & 368.26 & \textbf{27.09} & 44.08\\ \hline
\end{tabular}
\caption{MAE and nMAE of the local-level 1D ResNet vs the global-level 1D ResNet for all states. Local models outperform the global model in 49 out of 50 states.}
\label{tab: appendix_global_local}
\end{table}
\begin{table}[]
\begin{tabular}{|l|l|l|}
\hline
\textbf{Feature} & \textbf{Top 5} & \textbf{Top 15}\\ \hline
cmnty cli & 45 & 46 \\ \hline
avoid contact all or most time & 37 & 48 \\ \hline
runny nose & 20 & 40 \\ \hline
worked outside home & 19 & 35 \\ \hline
hh cough & 15 & 34 \\ \hline
self cough & 13 & 29 \\ \hline
anosmia ageusia & 10 & 29 \\ \hline
hh sore throat & 9 & 27 \\ \hline
none of above & 8 & 28 \\ \hline
self sore throat & 6 & 27 \\ \hline
multiple symptoms & 6 & 18 \\ \hline
nasal congestion & 5 & 28 \\ \hline
other & 5 & 23 \\ \hline
high blood pressure & 5 & 16 \\ \hline
hh shortness of breath & 5 & 18 \\ \hline
hh difficulty breathing & 5 & 15 \\ \hline
hh cli & 5 & 18 \\ \hline
self difficulty breathing & 4 & 18 \\ \hline
heart disease & 4 & 15 \\ \hline
persistent pain pressure in chest & 3 & 13 \\ \hline
muscle joint aches & 3 & 13 \\ \hline
hh fever & 3 & 20 \\ \hline
self shortness of breath & 2 & 20 \\ \hline
multiple medical conditions & 2 & 15 \\ \hline
kidney disease & 2 & 10 \\ \hline
diarrhea & 2 & 22 \\ \hline
chronic lung disease & 2 & 20 \\ \hline
tiredness or exhaustion & 1 & 16 \\ \hline
self fever & 1 & 11 \\ \hline
no above medical conditions & 1 & 12 \\ \hline
cancer & 1 & 12 \\ \hline
asthma & 1 & 9 \\ \hline
nausea vomiting & 0 & 14 \\ \hline
diabetes & 0 & 16 \\ \hline
autoimmune disorder & 0 & 15 \\ \hline
\end{tabular}
\caption{All features and how often they occur among the top 5 and top 15 features in local XGB models. Here, we can see that cmnty cli was among the top 5 features in 45 out of 50 models and among the top 15 features in 46 out of 50 models. cmnty - community, cli - COVID-Like Illness, anosmia - loss of smell, ageusia - loss of taste, hh - household, self - about the person taking the survey (individual level), none of above - none of the mentioned symptoms, other - other symptoms.}
\label{tab:appendix_feature}
\end{table}
\section{Conclusion and Future Work}
\frenchspacing
Forecasting the epidemic and its spread is an important tool in pandemic response. In this work, we assess the possibility of building an outbreak prediction system using crowd-sourced symptoms data. Our experiments demonstrate that self-reported symptoms can predict actual cases with low error. Furthermore, a small number of features (symptoms) are sufficient to predict the total number of cases with reasonable accuracy. The analysis suggests that learning models at the state level improves prediction performance, possibly because the top features vary across states. In other words, COVID-19 affects every state differently. This information can be used to create state-specific and shorter surveys. Consequently, with increasing social media and internet penetration, this method can be scaled to complement physical testing facilities in estimating the number of cases, especially in low-resource areas. Future directions worth exploring include improving the prediction capability of the model by incorporating meta-learning and transfer learning, which could improve performance for states with relatively few samples, and forecasting cases as a time series. As the data collected here is health data, privacy-preserving machine learning could improve the adoption of such systems.
\section{Data}\label{Data}
\textbf{CMU dataset} - We use the symptoms survey data from CMU \cite{CMUDataset}. This survey was collected from approximately 70,000 respondents on a daily basis. It consists of questions about symptoms (cough, fever, etc.), behavioral patterns (working outside the home, avoiding contact, etc.), medical conditions (cancer, heart disease, etc.), and more. The responses are aggregated and provided as the percentage of respondents who reported having each symptom or behavior. The data is available at the county and state level with a total of 104 features (as of October 2020), including weighted (adjusted for sampling bias) and unweighted signals and demographic columns (age, gender, etc.). We use the state-level data from Apr. 4, '20 to Sep. 11, '20.
\textbf{Daily Cases} - NY Times \cite{nytimes} reports the cumulative number of cases in a state on a given date, as provided by WHO. From this data, we compute and use the daily new cases in a state.
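The differencing step can be sketched as follows (clipping occasional negative corrections in the cumulative series to zero is our assumption, not stated by the source):

```python
# Sketch: daily new cases from a cumulative case-count series.

def daily_new_cases(cumulative):
    """Difference the cumulative series; clip negative corrections to 0."""
    daily, prev = [], 0
    for total in cumulative:
        daily.append(max(total - prev, 0))
        prev = total
    return daily
```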
\section{Experiments}\label{Experiments}
\section{Introduction}\label{Introduction}
\frenchspacing
The COVID-19 pandemic has created turmoil in the lives of millions of people across the globe. Even with the advent of vaccines, their distribution remains a challenge \cite{bae2020challenges,samalvacc2021,mills2021challenges}. We need policies and solutions that go beyond pharmaceutical interventions (vaccines, therapeutics, etc.). Testing and identifying COVID-19 cases is important for assessing how well we are doing against the virus. Yet even with rapid tests and at-home testing facilities, fewer than 5.61 per 1,000 people are being tested in the US \cite{ourworldindata,morales2021covid,gandhi2020clinical}. Furthermore, obtaining test results can take hours, which can delay medical treatment. This calls for a non-invasive and scalable method for estimating cases to complement the traditional testing infrastructure. In this paper, we harness the power of deep learning and crowd-sourced symptoms data for the prediction of COVID-19 cases on a daily basis. For this purpose, we use the data collected by CMU via Facebook surveys \cite{CMUDataset}. This data of self-reported symptoms is available at the state level for the US. We integrate this self-reported symptoms data with the actual COVID-19 cases reported by WHO. Here, we answer an important question: \textit{``Can we predict daily COVID-19 cases at the granularity of a state, using self-reported symptoms from a fraction of the population?''}
We predict the actual daily case count, using self-reported symptoms data that is aggregated at the population level (to obfuscate personal information), by training machine learning (ML) and deep learning (DL) algorithms. We train models at two levels of data granularity - global and local. At the global level, we combine the data from all the states into a single dataset and train a single model on it, whereas at the local level, we train a separate model for each state. At the global level, the best model has a mean absolute error (MAE) of 368.26 and a normalized MAE (nMAE) of 44.08\%. Further, when we take an ensemble of all the local models, the \textbf{MAE reduces to 226.30 per state} (nMAE = 27.09\%); that is, the model is off by about 226 cases per state when predicting daily cases. The success of our experiments highlights the fact that our model can serve as a low-resource way of estimating COVID-19 cases. We observe that the top features contributing to predictions vary from state to state, alluding to the benefits of local models. Our model may be used to detect hot spots and can act as a non-obtrusive, economical, and effective screening method in low-resource environments.
\section{Methodology and Experiments}\label{Methodology}
We predict the daily cases of COVID-19 in the US states \cite{nytimes} using data from 6 April 2020 to 11 September 2020.
\textbf{Input Features} - We use features provided in the CMU \cite{CMUDataset} dataset. We follow a feature selection and ranking process similar to \cite{sukumaran2020covid19}. Further, we prune un-weighted and other signals (age, gender, derived features, etc.), which leaves us with 35 features. We rank these 35 features according to their f\_regression \cite{Freg} scores against the target variable, and then input them to the models. As the demographic-level (age, gender) split of actual daily cases is not available, we drop such data points from the CMU data, using only data points that aggregate over all demographics (gender and age).
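The f\_regression score used for this ranking is a univariate F-statistic derived from each feature's Pearson correlation with the target; a minimal numpy sketch follows (the synthetic data and feature count are purely illustrative):

```python
import numpy as np

def f_regression_scores(X, y):
    """Univariate F-statistic per feature, as computed by f_regression:
    F = r^2 / (1 - r^2) * (n - 2), with r the Pearson correlation."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    n = X.shape[0]
    return r ** 2 / (1.0 - r ** 2) * (n - 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)  # feature 0 drives the target
ranking = np.argsort(-f_regression_scores(X, y))
print(ranking[0])  # feature 0 ranks first
```

Features are then fed to the models in decreasing order of this score.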
\textbf{Data Granularity levels} - We train at 2 levels: \begin{itemize}
\item Global - The data of all the states is combined and a single model is trained.
\item Local - A separate model is trained for every state. For comparison with the global level model, we ensemble the predictions from all the local models: for each state, we take the test predictions from its respective local model, and then compute the error metrics over the entire test set.
\end{itemize}
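The local-level ensemble step amounts to routing each test instance to its own state's model and pooling the predictions before scoring; a minimal sketch with stand-in per-state predictors (the models, rows, and numbers below are purely illustrative):

```python
# Stand-in "local models": one callable per state (illustrative only).
local_models = {
    "ca": lambda row: 2.0 * row[0],
    "tx": lambda row: 3.0 * row[0],
}

# Each test instance carries its state, its features, and the ground truth.
test_set = [
    ("ca", [10.0], 21.0),
    ("tx", [10.0], 29.0),
]

# Route each row to its state's model, then pool predictions for one metric.
preds = [local_models[state](feats) for state, feats, _ in test_set]
truth = [t for _, _, t in test_set]
mae = sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)
print(mae)  # (|20 - 21| + |30 - 29|) / 2 = 1.0
```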
\textbf{Train-Test Split } - We split the entire data into train/test set on the basis of dates. The train data has the initial 80\% of the dates whereas the test data has the last 20\% of the dates. This ensures that the model has not seen future dates while training. It also ensures that the train and test data of every state is the same in the local and global level model.
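A date-based 80/20 split of this kind can be sketched as follows (the rows are illustrative; in our setting each row would carry one state's features for one date):

```python
# Rows: (date, state); dates need not arrive pre-sorted.
rows = [
    ("2020-04-06", "ca"), ("2020-04-07", "ca"), ("2020-04-08", "ca"),
    ("2020-04-09", "ca"), ("2020-04-10", "ca"),
    ("2020-04-06", "tx"), ("2020-04-07", "tx"), ("2020-04-08", "tx"),
    ("2020-04-09", "tx"), ("2020-04-10", "tx"),
]

dates = sorted({d for d, _ in rows})   # unique dates in chronological order
cut = dates[int(0.8 * len(dates))]     # first date belonging to the test set
train = [r for r in rows if r[0] < cut]
test = [r for r in rows if r[0] >= cut]
print(len(train), len(test))  # 8 2
```

Because the cut is made on dates rather than on shuffled rows, every state contributes the same date ranges to train and test, and no future dates leak into training.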
\textbf{Algorithms} - We experiment with five ML baselines - Linear Regression (LR), Decision Tree (DT), Multi Layer Perceptron (MLP), Gradient Boost (GDBT), and XGBoost (XGB). We also implement 2 DL models: \begin{itemize}
\item CNN - An architecture with seven convolutional layers followed by dense layers.
\item \textbf{1d Resnet} - Inspired by the wide success of ResNets \cite{He2015} in various fields, we develop a model similar to ResNet-18 for one-dimensional data. Our proposed architecture comprises 3 blocks.
At the end of the network, a Global Average Pooling (GAP) layer is used, followed by fully connected layers. The dense layers comprise 256, 128, and 1 neuron, respectively. Between each pair of fully connected layers, a dropout layer is employed with a probability of 0.5 (p=0.5).
\end{itemize}
\textbf{Error Metric} - We use two error metrics to evaluate and compare our models: \begin{itemize}
\item Mean Absolute Error (MAE) - The average over all data points of the absolute value of the difference between the predicted value and the actual value. \\
MAE = $\frac{1}{n}\sum_{i=1}^{n}|p_i - t_i|$
where $n$ is the total number of data instances, $p_i$ is the predicted value, and $t_i$ is the actual value (ground truth).
\item Normalized Mean Absolute Error (nMAE) - Normalized error is calculated to capture the variation in the number of daily cases across different dates and states. \\
nMAE = $\frac{\sum_{i=1}^{n}|p_i - t_i|}{\sum_{i=1}^{n}t_i} \times 100\%$
\end{itemize}
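Both metrics can be sketched directly as Python functions (the toy predictions below are illustrative):

```python
def mae(preds, truth):
    """Mean absolute error over all data points."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

def nmae(preds, truth):
    """MAE normalized by the total case count, reported as a percentage."""
    return 100.0 * sum(abs(p - t) for p, t in zip(preds, truth)) / sum(truth)

preds = [90.0, 210.0, 50.0]
truth = [100.0, 200.0, 100.0]
print(mae(preds, truth))   # (10 + 10 + 50) / 3 ~= 23.33
print(nmae(preds, truth))  # 100 * 70 / 400 = 17.5
```

Normalizing by the total case count makes errors comparable across states and dates with very different caseloads.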
To show statistical significance, we also calculate the 95\% confidence interval (CI) over 20 runs (the random seed is changed every time) for our ML based global level models.
\textbf{Implementation} - We use Scikit-learn \cite{scikit-learn}, Keras \cite{chollet2015keras}, and xgboost library \cite{Chen2016xgb} for our implementation. The ML models are trained on an Intel i7 8th generation CPU, while DL models are trained on Google Colab. The code is publicly available at \url{https://github.com/parthpatwa/Can-Self-Reported-Symptoms-Predict-Daily-COVID-19-Cases}.
\section{Related Work}
\frenchspacing
Recently, there has been significant traction in research related to COVID-19, both in terms of clinical and digital innovations, where the scientific community has focused on this disease with near-unprecedented intensity. Along the clinical direction, there were several efforts like vaccine development \cite{KAUR2020198114}, re-purposing known clinically-tested drugs, virtual screening for possible targets using protein structure data \cite{10.3389/frai.2020.00065,PPR:PPR112521}, understanding the efficacy of testing \cite{Jarrom2020.08.10.20171777}, etc. Efforts along clinical and other medical directions are instrumental but can be time-consuming. The drastic increase in cases has challenged the medical infrastructure worldwide in various aspects, including a sharp rise in demand for hospital beds and shortages of medical equipment and personnel \cite{sen2021closer}. At the same time, testing methods are facing an acute shortage in developing countries, causing delays in test results that lead to increased infection rates and delayed preventive measures. Thus, judicious usage of health care resources like testing and vaccines is crucial.
To complement the medical research with computational solutions, efforts have been made on predictive modelling of the disease spread, simulations of vaccine distribution strategies, etc. \cite{Romero-Brufaun1087}. Many efforts aimed at understanding the severity, spread, and unique characteristics of the COVID-19 infection across a broad range of clinical, imaging, and population-level datasets \cite{Gostic,Liang,Menni,Shi,shankar2020proximity}. For instance, various studies have tried to understand the progression of the virus, identify future hot-spots, and estimate the number of cases/deaths/hospitalizations using exposure notification \cite{DBLP:journals/corr/abs-2006-08543} and epidemiological modeling \cite{Romero-Brufaun1087}. Studies have also used mathematical modeling to understand the outbreak under different situations for different demographics \cite{menni2020real,saad2020immune,wilder2020tracking}. Apart from these, several machine learning models were developed to forecast COVID-19 cases in regions like India and Egypt \cite{FAROOQ2021587,AMAR2020622}.
Effective screening enables quick and efficient diagnosis of COVID-19 and can mitigate the burden on public healthcare systems. Prediction models that combine several features (symptoms, testing, mobility, etc.) to estimate the risk of infection have been developed to assist medical staff worldwide in triaging patients. Some of these models use laboratory tests \cite{Feng2020.03.19.20039099,mei2020artificial}, clinical symptoms \cite{tostmann2020strong}, or an integration of both \cite{punn2020covid}. However, most previous models were based on data from hospitalized patients and thus are not effective in screening for COVID-19 in the general population.
Hence, the development of a non-obtrusive system disentangled from the health care infrastructure becomes imperative to accelerate the efforts against COVID-19. \cite{sukumaran2020covid19} used self-reported symptoms to predict outbreaks and is the closest to our work. However, unlike their work, we predict \textit{actual daily cases} instead of \textit{self-reported daily cases}.
\section{Results and Analysis}\label{Results}
\frenchspacing
\begin{table*}[ht!]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
Algorithm & nMAE & MAE & nMAE CI & MAE CI \\ \hline
LR & 98.84\% & 826.05 & (98.84, 98.84) & (826.05, 826.05) \\ \hline
DT & 72.87\% & 609.02 & (72.48, 73.27) & (605.76, 612.28) \\ \hline
MLP & 66.30\% & 554.06 & (64.90, 67.70) & (542.37, 565.74) \\ \hline
GDBT & 60.15\% & 502.73 & (60.13, 60.18) & (502.54, 502.91) \\ \hline
XGB & 54.10\% & 452.12 & (53.92, 54.28) & (450.62, 453.62) \\ \hline
CNN & 54.92\% & 459.09 & - & - \\ \hline
\textbf{1d Resnet} & \textbf{44.08\%} & \textbf{368.26} & - & - \\ \hline
\end{tabular}
\caption{Test data results for prediction of daily cases per state by various models trained on the global level. The 95\% confidence interval (CI) is calculated on 20 runs (random seed changed every time). The models use all the 35 features. }
\label{table:symptoms}
\end{table*}
\begin{table*}[ht]
\begin{tabular}{|l|l|}
\hline
Model & Top 5 features \\ \hline
ca & cmnty cli, avoid contact all or most time, hh fever, hh sore throat, hh cough \\ \hline
tx & cmnty cli, anosmia ageusia, avoid contact all or most time, worked outside home, high blood pressure \\ \hline
fl & cmnty cli, anosmia ageusia, avoid contact all or most time, worked outside home, persistent pain pressure in chest \\ \hline
ak & cmnty cli, worked outside home, avoid contact all or most time, self runny nose, none of above \\ \hline
vt & none of above, hh cough, cancer, nasal congestion, avoid contact all or most time \\ \hline
wy & avoid contact all or most time, self shortness of breath, cmnty cli, other, tiredness or exhaustion \\ \hline
Global & cmnty cli, anosmia ageusia, hh fever, hh cli, hh sore throat \\ \hline
\end{tabular}
\caption{Top 5 features for local and global level XGB models. Notice the lack of uniformity in top features across the states. cmnty - Community, cli - Covid Like Illness, hh - household. }
\label{tab:top_featues}
\end{table*}
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|}
\hline
Algorithm & nMAE & MAE \\ \hline
LR & 45.65\% & 381.51 \\ \hline
DT & 44.24 \% & 369.75 \\ \hline
MLP & 36.70 \% & 306.70 \\ \hline
GDBT & 36.68 \% & 306.55 \\ \hline
XGB & 34.42\% & 287.63 \\ \hline
CNN & 101.00\% & 849.60 \\ \hline
\textbf{1d Resnet} & \textbf{27.09\%} & \textbf{226.30} \\ \hline
\end{tabular}
\caption{Test results for prediction of daily cases per state by models trained on the local level. A local model is trained for each state. The results in this table are of the ensemble of all local level models. }
\label{table:granular}
\end{table}
\begin{figure}[]
\centering
\includegraphics[width = 0.6\linewidth]{images/n_features_vs_actual_cases_xgb.png}
\caption{MAE vs the number of features used for XGB. In general, the error decreases as the number of features increases, though with diminishing returns. }%
\label{fig:combined}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
State & \multicolumn{2}{l|}{MAE} & \multicolumn{2}{l|}{nMAE (\%)} \\ \hline
& local & global & local & global \\ \hline
tx &1231.87 &1831.06 &23.09 &34.32 \\ \hline
ca & 1338.76 & 1601.92 & 18.82 & 22.52 \\ \hline
fl & 1275.58 & 2025.70 & 32.63 & 51.81 \\ \hline
wy & 15.32 & 23.38 & 43.65 & 66.60 \\ \hline
me & 7.79 & 31.70 & 34.83 & 141.75 \\ \hline
vt & 3.48 & 12.87 & 53.59 & 197.96 \\ \hline
\textbf{entire} & \textbf{226.30} & 368.26 & \textbf{27.09} & 44.08 \\ \hline
\end{tabular}
\caption{MAE and nMAE of 1d resnet model trained on the local and global level for few states and the entire USA, on the test data. For all the states, please refer to Section \ref{Appendix} (Appendix).}
\label{tab:local_vs_global_main}
\end{table}
\begin{figure}[]
\centering
\includegraphics[width=\linewidth]{images/Frequency_of_features.png}
\caption{Some features with the frequency of their occurrence in the top 5 and top 15 important features in local models. Most features occur in the top 5 or 15 for only a few states, indicating high variance in top features across states. Data for all the features is reported in section \ref{Appendix} (appendix). }%
\label{fig:feature_count}
\end{figure}
Table \ref{table:symptoms} shows the result of daily cases prediction by different models trained on the global level. Our 1d Resnet performs the best with 368.26 MAE and 44.08\% nMAE per state. CNN performs worse than 1d Resnet. Among ML models, XGB (MAE = 452.12, nMAE = 54.10\%) performs the best, while LR has the worst performance.
Figure \ref{fig:combined} shows the MAE vs the number of features used for XGB. It can be observed that MAE decreases as the number of features increases; however, MAE plateaus after incorporating approximately 15 features. Consequently, we can use fewer (around 15) features to predict daily cases without considerably increasing the error. This observation is similar to that of \cite{sukumaran2020covid19} and may aid in reducing the number of questions in the surveys.
Table \ref{table:granular} shows the MAE of the ensemble of models trained on the local level (one model per state). We see that 1d Resnet performs the best and gives an \textbf{MAE of 226.30 per state across the USA} (nMAE = 27.09\%). CNN performs the worst overall, and LR is the weakest of the ML baselines.
We can see from Table \ref{table:symptoms} and Table \ref{table:granular} that for all algorithms except CNN, the ensemble of local models achieves substantially lower error than a single global model. For example, the MAE of XGB trained on the global level is 452.12 (nMAE = 54.10\%), whereas the MAE of XGB models trained on the local level is 287.63 (nMAE = 34.42\%). The poor results of local CNN models could be due to a lack of data per state or overfitting: CNN fails to capture complex relations that generalize well. The superiority of local models over the global model is further observed in Table \ref{tab:local_vs_global_main}, which shows the state-wise MAE by 1d Resnet trained on the global and local level for a few states. \textbf{Local level models outperform the global level model in 49 states out of 50}. These observations motivate us to analyze the state-wise top contributing features.
Figure \ref{fig:feature_count} shows the number of times a feature was in the top 5 or top 15 important features across the local models, based on the feature importance given by the XGB models. We see that \textit{cmnty cli} (covid like illness in community) and \textit{avoid contact all or most time} are important features for most of the local level models. However, the frequency of the remaining features being in the top 5 or top 15 features is low. 32 of the 35 features are present in the top 5 for at least 1 state, and all the features are in the top 15 for at least 9 states. This shows that the top important features vary considerably across the states. Table \ref{tab:top_featues} shows some states having different sets of top 5 features. As the feature importance (or ranking) varies considerably across the states, local models perform better than the global model. This also means that COVID-19 affects every state differently. Further, we see that the 1st (\textit{cmnty cli}), 2nd (\textit{avoid contact all or most time}), and 4th (\textit{worked outside home}) most frequently occurring top features are related to social distancing, which highlights the contagious nature of COVID-19.
\section{Introduction}
\input{sections/intro}
\section{Background and Notation}
\input{sections/background}
\section{The Role of Global Features}
\label{sec:isglobalnecessary}
\input{sections/isglobalnecessary}
\section{Learning Global Features}
\label{sec:learnglobal}
\input{sections/learningglobalfeatures}
\section{Coreference with Global Features}
\label{sec:fullmod}
\input{sections/corefwithglobal}
\section{Experiments}
\subsection{Methods}
\input{sections/methods}
\subsection{Results}
\input{sections/results}
\section{Related Work}
\input{sections/relatedwork}
\section{Conclusion}
We have presented a simple, state of the art approach to incorporating global information in an end-to-end coreference system, which obviates the need to define global features, and moreover allows for simple (greedy) inference. Future work will examine improving recall, and more sophisticated approaches to global training.
\section*{Acknowledgments}
We gratefully acknowledge the support of a Google Research Award.
\nocite{koehn2004statistical}
\nocite{cort}
\subsection{Full Model and Training}
\label{sec:model}
Recall that our inference objective is to maximize the score of both a local mention ranking term as well as a global term based on the current clusters:
\vspace{-1mm}
{\small
\begin{align*}
\argmax_{y_1,
\ldots, y_N} \sum_{n=1}^N f(x_n, y_n) + g(x_n, y_n, \boldz_{1:n-1})
\end{align*}
}
\vspace{-3mm}
\noindent We begin by defining the local model $f(x_n,y)$ with the two-layer neural network of \newcite{wiseman15learning}, which has a specialization for the non-anaphoric case, as follows:
\vspace{-1mm}
{\small
\begin{align*}
f(x_n,y) &\triangleq \begin{cases} \boldu^\trans \left[ \begin{smallmatrix} \boldh_{\ua}(x_n) \\ \boldh_{\up}(x_n,y) \end{smallmatrix}\right] + u_0 &\mbox{if } y \neq \epsilon \\
\boldv^\trans \boldh_{\ua}(x_n) + v_0 &\mbox{if } y = \epsilon\eqpunc{.} \end{cases}
\end{align*}
}
\vspace{-2mm}
\noindent Above, $\boldu$ and $\boldv$ are the parameters of the model, and $\boldh_{\ua}$ and $\boldh_{\up}$ are learned feature embeddings of the local mention context and the pairwise affinity between a mention and an antecedent, respectively. These feature embeddings are defined similarly to $\boldh_{\uc}$, as
\vspace{-1mm}
{\small
\begin{align*}
\boldh_{\ua}(x_n) &\triangleq \tanh(\boldW_{\mathrm{\ua}} \, \boldsymbol{\phi}_{\mathrm{a}}(x_n) + \boldb_{\mathrm{\ua}}) \\
\boldh_{\up}(x_n,y) &\triangleq \tanh(\boldW_{\mathrm{\up}} \, \boldsymbol{\phi}_{\mathrm{p}}(x_n,y) + \boldb_{\mathrm{\up}})\eqpunc{,}
\end{align*}
}
\vspace{-5mm}
\noindent where $\boldsymbol{\phi}_{\mathrm{a}}$ (mentioned above) and $\boldsymbol{\phi}_{\mathrm{p}}$ are ``raw'' (that is, unconjoined) features on the context of $x_n$ and on the pairwise affinity between mentions $x_n$ and antecedent $y$, respectively~\cite{wiseman15learning}. Note that $\boldh_{\ua}$ and $\boldh_{\uc}$ use the same raw features; only their weights differ.
We now specify our global scoring function $g$ based on the history of
previous decisions. Define $\boldh_{<n}^{(m)}$ as the hidden state
of cluster $m$ before a decision is made for $x_n$ -- that is, $\boldh_{<n}^{(m)}$ is the state of cluster $m$'s RNN after it has consumed all mentions in the cluster \textit{preceding} $x_n$.
We define $g$ as
{\small\begin{align*}
g(x_n, y, &\boldz_{1:n-1}) \hspace{-4mm} &\triangleq \begin{cases} \boldh_{\uc}(x_n)^\trans \boldh^{(z_{y})}_{<n} &\mbox{if } y \neq \epsilon \\
\mathrm{NA}(x_n) &\mbox{if } y = \epsilon\eqpunc{,} \end{cases}
\end{align*}}%
where $\mathrm{NA}$ gives a score for assigning $\epsilon$ based on
a non-linear function of all of the current hidden states:
{\small\begin{align*}
\label{eq:1}
\mathrm{NA}(x_n) = \boldq^\trans \tanh \left( \boldW_s \left[ \begin{smallmatrix} \boldsymbol{\phi}_{\mathrm{a}}(x_n) \\ \sum_{m=1}^{M} \boldh^{(m)}_{<n}\end{smallmatrix}\right] + \boldb_s \right).
\end{align*}}%
See Figure~\ref{fig:hidden} for a diagram. The intuition behind the first case in $g$ is that in considering whether $y$ is a good antecedent for $x_n$, we add a term to the score that examines how well $x_n$ matches with the mentions already in $X^{(z_{y})}$; this matching score is expressed via a dot-product.\footnote{We also experimented with other non-linear functions, but dot-products performed best.} In the second case, when predicting that $x_n$ is non-anaphoric, we add the NA term to the score, which examines the sum of the current states $\boldh^{(m)}_{<n}$ of all clusters. This information is useful both because it allows the non-anaphoric score to incorporate information about potential antecedents, and because the occurrence of certain singleton-clusters often predicts the occurrence of future singleton-clusters, as noted in Section~\ref{sec:isglobalnecessary}.
The whole system is trained end-to-end on coreference using
backpropagation. For a given training document, let $\boldz^{(o)}$
be the oracle mapping from mention to cluster, which induces an oracle
clustering.
While at training time we do have oracle clusters, we do not have oracle antecedents $(y)_{n=1}^N$, so
following past work we treat the oracle antecedent as latent \cite{yu2009learning,fernandes2012latent,Chang:13,DandK:13}.
We train with the following slack-rescaled, margin objective:
\vspace{-5mm}
{\small\begin{align*}
\sum_{n=1}^N \max_{\hat{y} \in \mcY(x_n)} \Delta(x_n,\hat{y}) &(1 + f(x_n,\hat{y}) + g(x_n, \hat{y}, \boldz^{(o)}) \\
&- f(x_n,y_n^{\ell}) - g(x_n, y_n^{\ell}, \boldz^{(o)})),
\end{align*}}
\vspace{-5mm}
\noindent where the latent antecedent $y_n^{\ell}$ is defined as
\vspace{-5mm}
\begin{align*}
\small
y_n^{\ell} \triangleq \argmax_{y \in \mcY(x_n): z^{(o)}_y = z^{(o)}_n} f(x_n,y) + g(x_n, y, \boldz^{(o)})
\end{align*}
if $x_n$ is anaphoric, and is $\epsilon$ otherwise. The term $\Delta(x_n,\hat{y})$ gives different
weight to different error types. We
use a $\Delta$ with 3 different weights
$(\alpha_1, \alpha_2, \alpha_3)$ for ``false link'' (\textsc{fl}), ``false new'' (\textsc{fn}), and ``wrong link'' (\textsc{wl}) mistakes~\cite{DandK:13}, which correspond to predicting an antecedent when non-anaphoric, $\epsilon$ when anaphoric, and the wrong antecedent, respectively.
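For a single anaphoric mention, the slack-rescaled objective above can be sketched in plain Python; the candidate scores and $\alpha$ weights below are purely illustrative, and $\Delta$ is zero for correct candidates, so the maximum is attained at zero or at a mistake:

```python
# Candidate antecedents for one mention: (name, score f+g, is_correct).
# "eps" stands in for the epsilon (non-anaphoric) decision.
candidates = [
    ("eps", 1.0, False),   # mention is anaphoric, so eps is a "false new"
    ("y1",  2.5, True),    # a correct antecedent
    ("y2",  3.0, False),   # a "wrong link"
]
alpha = {"FL": 0.5, "FN": 1.2, "WL": 1.0}  # illustrative error weights

def delta(name, correct):
    """Error-type weight Delta(x_n, y_hat); zero for correct candidates."""
    if correct:
        return 0.0
    return alpha["FN"] if name == "eps" else alpha["WL"]

# Latent antecedent: highest-scoring correct candidate.
s_latent = max(s for _, s, ok in candidates if ok)
loss = max(delta(n, ok) * (1.0 + s - s_latent) for n, s, ok in candidates)
print(loss)  # WL candidate attains the max: 1.0 * (1 + 3.0 - 2.5) = 1.5
```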
Note that in training we use the oracle clusters $\boldz^{(o)}$. Since these are known a priori, we can pre-compute all the hidden states $\boldh_j^{(m)}$ in a document, which makes training quite simple and efficient. This approach contrasts in particular with the work of \newcite{BandK:14} --- who also incorporate global information in mention-ranking --- in that they train against latent \textit{trees}, which are not annotated and must be searched for during training. On the other hand, training on oracle clusters leads to a mismatch between training and test, which can hurt performance.
\subsection{Search}
When moving from a strictly local objective to one with
global features, the test-time search problem becomes intractable. The
local objective requires $O(n^2)$ time, whereas the full clustering problem is NP-Hard. Past work with global features has used integer linear programming solvers for exact search
\cite{Chang:13,peng15a}, or beam search with (delayed) early update training for an
approximate solution \cite{BandK:14}. In contrast, we simply use greedy search at
test time, which also requires $O(n^2)$ time.\footnote{While beam search is a natural way to decrease search error at test time, it may fail to help if training involves a \textit{local} margin objective (as in our case), since scores need not be calibrated across local decisions. We accordingly attempted to train various locally normalized versions of our model, but found that they underperformed.
We also experimented with training approaches and model variants that expose the model to its own predictions~\cite{daume09search,ross11a,bengio15scheduled}, but found that these yielded a negligible performance improvement.} The full algorithm is shown in Algorithm~\ref{alg:greedy}. The greedy search algorithm is identical to a simple mention-ranking system, with the exception of line~11, which updates the current RNN representation based on the previous decision that was made, and line~4, which then uses this
cluster representation as part of scoring.
\begin{algorithm}[t!]
\footnotesize
\begin{algorithmic}[1]
\Procedure{GreedyCluster}{$x_1, \ldots, x_N$}
\State{Initialize clusters $X^{(1)} \ldots$ as empty lists, hidden states $\boldh^{(0)}, \ldots$ as $\mathbf{0}$ vectors in $\reals^D$, $\boldz$ as map from mention to cluster, and cluster counter {$M \gets 0$}}
\For{$n = 2 \ldots N$ }
\State{$\displaystyle y^* \gets \argmax_{y \in \mcY(x_n)} f(x_n, y) + g(x_n, y, \boldz_{1:n-1}) $}
\State{$m \gets z_{y^*}$}
\If{$y^* = \epsilon$}
\State{$M \gets M + 1$}
\State{$m \gets M$}
\EndIf{}
\State{append $x_n$ to $X^{(m)}$}
\State{$z_n \gets m$ }
\State{$\boldh^{(m)} \gets \mathrm{\mathbf{RNN}}(\boldh_{\uc}(x_n), \boldh^{(m)})$}
\EndFor{}
\State{\Return{$X^{(1)}, \ldots, X^{(M)}$}}
\EndProcedure{}
\end{algorithmic}
\caption{\label{alg:greedy} Greedy search with global RNNs}
\end{algorithm}
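Algorithm~\ref{alg:greedy} can be sketched in Python with a toy scorer standing in for $f + g$ and per-cluster member lists standing in for the RNN states (the string-match scorer below is purely illustrative):

```python
def greedy_cluster(mentions, score):
    """Greedy left-to-right clustering; score(n, y, clusters) plays the
    role of f + g, with y = None standing in for epsilon."""
    clusters = []   # each cluster is a list of mention indices
    z = {}          # mention index -> cluster index
    for n in range(len(mentions)):
        cands = [None] + list(range(n))  # epsilon plus all prior mentions
        y = max(cands, key=lambda c: score(n, c, clusters))
        if y is None:
            clusters.append([n])             # start a new cluster
            z[n] = len(clusters) - 1
        else:
            clusters[z[y]].append(n)         # merge into y's cluster
            z[n] = z[y]
    return clusters

# Toy scorer: mentions with the same string corefer; otherwise start new.
mentions = ["Mr. Kaye", "Justin", "his", "Mr. Kaye"]
def score(n, y, clusters):
    if y is None:
        return 0.0
    return 1.0 if mentions[n] == mentions[y] else -1.0

print(greedy_cluster(mentions, score))  # [[0, 3], [1], [2]]
```

In the full model, the branch that extends a cluster would also update that cluster's RNN state (line~11 of the algorithm), and `score` would consult those states through the global term $g$.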
\subsection{Qualitative Analysis}
In this section we consider in detail the impact of the $g$ term in the RNN scoring function on the two error categories that improve most under the RNN model (as shown in Table~\ref{tab:newerrors}), namely, pronominal \textsc{wl} errors and pronominal \textsc{fl} errors. We consider an example from the CoNLL development set in each category on which the baseline MR model makes an error but the greedy RNN model does not.
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\columnwidth]{justinexample2}
\caption{Cluster predictions of greedy RNN model; co-clustered mentions are of the same color, and intensity of mention $x_j$ corresponds to $\boldh_c(x_n)^{\trans} \boldh_{<k}^{(i)}$, where $k \ensuremath{\,{=}\,} j+1$, $i \in \{1,2\}$, and $x_n =$ ``his.'' See text for full description.}
\label{fig:wlviz}
\end{figure}
The example in Figure~\ref{fig:wlviz} involves the resolution of the ambiguous pronoun ``his,'' which is bracketed and in bold in the figure. Whereas the baseline MR model \textit{incorrectly} predicts ``his'' to corefer with the closest gender-consistent antecedent ``Justin'' --- thus making a \textsc{wl} error --- the greedy RNN model correctly predicts ``his'' to corefer with ``Mr. Kaye'' in the previous sentence. (Note that ``the official'' also refers to Mr. Kaye). To get a sense of the greedy RNN model's decision-making on this example, we color the mentions the greedy RNN model has predicted to corefer with ``Mr. Kaye'' in green, and the mentions it has predicted to corefer with ``Justin'' in blue. (Note that the model incorrectly predicts the initial ``I'' mentions to corefer with ``Justin.'') Letting $X^{(1)}$ refer to the blue cluster, $X^{(2)}$ refer to the green cluster, and $x_n$ refer to the ambiguous mention ``his,'' we further shade each mention $x_j$ in $X^{(1)}$ so that its intensity corresponds to $\boldh_c(x_n)^{\trans} \boldh_{<k}^{(1)}$, where $k \ensuremath{\,{=}\,} j+1$; mentions in $X^{(2)}$ are shaded analogously. Thus, the shading shows how highly $g$ scores the compatibility between ``his'' and a cluster $X^{(i)}$ as each of $X^{(i)}$'s mentions is added. We see that when the initial ``Justin'' mentions are added to $X^{(1)}$ the $g$-score is relatively high. However, after ``The company'' is correctly predicted to corefer with ``Justin,'' the score of $X^{(1)}$ drops, since companies are generally not coreferent with pronouns like ``his.''
Figure~\ref{fig:flviz} shows an example (consisting of a telephone conversation between ``A'' and ``B'') in which the bracketed pronoun ``It's'' is being used pleonastically. Whereas the baseline MR model predicts ``It's'' to corefer with a previous ``it'' --- thus making a \textsc{fl} error --- the greedy RNN model does not. In Figure~\ref{fig:flviz} the final mention in three preceding clusters is shaded so its intensity corresponds to the magnitude of the gradient of the $\mathrm{NA}$ term in $g$ with respect to that mention. This visualization resembles the ``saliency'' technique of \newcite{li16viz}, and it attempts to give a sense of the contribution of a (preceding) cluster in the calculation of the $\mathrm{NA}$ score.
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\columnwidth]{bahnexample2}
\caption{Magnitudes of gradients of $\mathrm{NA}$ score applied to bold ``It's'' with respect to final mention in three preceding clusters. See text for full description.}
\label{fig:flviz}
\end{figure}
We see that the potential antecedent ``S-Bahn'' has a large gradient, but also that the initial, obviously pleonastic use of ``it's'' has a large gradient, which may suggest that earlier, easier predictions of pleonasm can inform subsequent predictions.
\subsection{Recurrent Neural Networks}
A recurrent neural network is a parameterized non-linear function $\mathrm{\mathbf{RNN}}$ that recursively maps an input sequence of vectors to a sequence of hidden states.
Let $(\boldm_j)_{j=1}^J$ be a sequence of $J$ input vectors $\boldm_j \ensuremath{\,{\in}\,} \reals^{D}$, and let $\boldh_0 \ensuremath{\,{=}\,} \mathbf{0}$. Applying an RNN to any such sequence yields
\vspace{-5mm}
{\small
\begin{align*}
\boldh_j \gets \mathrm{\mathbf{RNN}}(\boldm_j, \boldh_{j-1}; \btheta)\eqpunc{,}
\end{align*}
}
\vspace{-5mm}
\noindent where $\btheta$ is the set of parameters for the model, which are shared over time.
There are several varieties of RNN, but by far the most commonly used in natural-language processing is the Long Short-Term Memory network (LSTM)
\cite{hochreiter1997lstm}, particularly for language modeling (e.g., \newcite{zaremba14rnn}) and machine
translation (e.g., \newcite{sutskever2014sequence}), and we use LSTMs in all experiments.
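The recurrence can be sketched in numpy with a plain $\tanh$ (Elman-style) cell; our experiments use LSTM cells, but the interface $\boldh_j \gets \mathrm{\mathbf{RNN}}(\boldm_j, \boldh_{j-1})$ is the same (the dimensions and random weights below are illustrative):

```python
import numpy as np

def rnn(ms, Wm, Wh, b):
    """Elman-style recurrence h_j = tanh(Wm m_j + Wh h_{j-1} + b),
    applied to a sequence of input vectors ms with h_0 = 0."""
    h = np.zeros(Wh.shape[0])
    states = []
    for m in ms:
        h = np.tanh(Wm @ m + Wh @ h + b)   # shared parameters at every step
        states.append(h)
    return states

rng = np.random.default_rng(0)
D = 4  # illustrative hidden/input dimension
Wm, Wh, b = rng.normal(size=(D, D)), rng.normal(size=(D, D)), np.zeros(D)
states = rnn([rng.normal(size=D) for _ in range(3)], Wm, Wh, b)
print(len(states), states[-1].shape)  # 3 (4,)
```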
\begin{figure*}
\centering
\vspace{-2mm}
{\small
\begin{quote}
\small
\textbf{DA:} um and [I]$_1$ think that is what's - Go ahead [Linda]$_2$.
\textbf{LW:} Well and thanks goes to [you]$_1$ and to [the media]$_3$ to help [us]$_4$...So [our]$_4$ hat is off to all of [you]$_5$...
\end{quote}
}
\adjustbox{scale=0.8}{
\begin{tikzpicture
\begin{scope}[xshift=-6cm]
\node[xshift=-0.3cm, yshift=1.2cm](X){$X^{(1)}$};
\matrix[ nodes={
line width=1pt, anchor=base, text centered, rounded corners,
minimum width=0.5cm, minimum height=0.4mm }, row sep=0.15cm,
column sep=0.4cm]{
& \node(ha){$\boldh^{(1)}_1$}; & \node(hpa){$\boldh^{(1)}_2$};\\
& & & & & \\
& \node(lexH){[I]}; & \node(toyH){[you]}; \\
};
\draw (ha.east) edge[->] (hpa.west); \draw (lexH) edge[->] (ha);
\draw (toyH) edge[->] (hpa);
\node[draw,dotted,fit=(ha) (toyH) (X)] {};
\end{scope}
\begin{scope}[xshift=-2cm]
\node[xshift=-0.5cm, yshift=1.2cm](X){$X^{(2)}$};
\matrix[ nodes={
line width=1pt, anchor=base, text centered, rounded corners,
minimum width=0.5cm, minimum height=0.4mm }, row sep=0.15cm,
column sep=0.4cm]{
& \node(ha){$\boldh^{(2)}_1$}; \\
& & & & & \\
& \node(lexH){[Linda]}; \\
};
\draw (lexH) edge[->] (ha);
\node[draw,dotted,fit=(ha) (lexH) (X)] {};
\end{scope}
\begin{scope}[xshift=1.5cm]
\node[xshift=-0.5cm, yshift=1.2cm](X){$X^{(3)}$};
\matrix[ nodes={
line width=1pt, anchor=base, text centered, rounded corners,
minimum width=0.5cm, minimum height=0.4mm }, row sep=0.15cm,
column sep=0.4cm]{
& \node(ha){$\boldh^{(3)}_1$}; \\
& & & & & \\
& \node(lexH){[the media]}; \\
};
\draw (lexH) edge[->] (ha);
\node[draw,dotted,fit=(ha) (lexH) (X)] {};
\end{scope}
\begin{scope}[xshift=5cm]
\node[xshift=-0.3cm, yshift=1.2cm](X){$X^{(4)}$};
\matrix[ nodes={
line width=1pt, anchor=base, text centered, rounded corners,
minimum width=0.5cm, minimum height=0.4mm }, row sep=0.15cm,
column sep=0.4cm]{
& \node(ha){$\boldh^{(4)}_1$}; & \node(hpa){$\boldh^{(4)}_2$};\\
& & & & & \\
& \node(lexH){[us]}; & \node(toyH){[our]}; \\
};
\draw (ha.east) edge[->] (hpa.west); \draw (lexH) edge[->] (ha);
\draw (toyH) edge[->] (hpa);
\node[draw,dotted,fit=(ha) (toyH) (X)] {};
\end{scope}
\end{tikzpicture}}
\vspace{0.25cm}
\begin{tikzpicture}[scale=0.5]
\matrix[ nodes={
line width=1pt,
anchor=base,
text centered,
rounded corners,
minimum width=1.5cm, minimum height=8mm
}, row sep=0.15cm]{
\node(lex){[I], $\boldh^{(1)}_2$}; & \node(toy){[Linda], $\boldh^{(2)}_1$}; & \node(lexb){[you], $\boldh^{(1)}_2$}; & \node(t){[the media], $\boldh^{(3)}_1$}; & \node(toyb){[us], $\boldh^{(4)}_2$}; & \node(toyc){[our], $\boldh^{(4)}_2$}; & \node(mention){$x_n$ = [you]}; & \node(none){$\epsilon$, $\mathrm{NA}(x_n)$};\\
};
\draw(mention.north) edge[bend right=10, ->, dashed] (lex.north);
\draw(mention.north) edge[bend right=10, ->, dashed] (toy.north);
\draw(mention.north) edge[bend right=10, ->, dashed] (lexb.north);
\draw(mention.north) edge[bend right=10, ->, dashed] (toyb.north);
\draw(mention.north) edge[bend right=10, ->, dashed] (toyc.north);
\draw(mention.north) edge[bend right=10, ->, dashed] (t.north);
\draw(mention.north) edge[bend left=30, ->, dashed] (none.north);
\end{tikzpicture}
\caption{\small Full RNN example for handling the mention $x_n =$ [you]. There are currently four entity clusters in scope $X^{(1)}, X^{(2)}, X^{(3)}, X^{(4)}$ based on unseen previous decisions $(y)$. Each cluster has a corresponding RNN state, two of which ($\boldh^{(1)}$ and $\boldh^{(4)}$) have processed multiple mentions (with $X^{(1)}$ notably including a singular mention [I]). At the bottom, we show the complete mention-ranking process. Each previous mention is considered as an antecedent, and the global term considers the antecedent clusters' current hidden state. Selecting $\epsilon$ is treated with a special case $\mathrm{NA}(x_n)$.
}
\label{fig:hidden}
\end{figure*}
\subsection{RNNs for Cluster Features}
Our main contribution will be to utilize RNNs to produce
feature representations of entity clusters, which will provide the basis of the global term $g$. Recall that we view a cluster $X^{(m)}$ as a sequence of mentions $(X^{(m)}_j)_{j=1}^J$ (ordered in linear document order). We therefore propose to embed the state(s) of $X^{(m)}$ by running an RNN over the cluster in order.
In order to run an RNN over the mentions we need an embedding function $\boldh_{\uc}$ to map a mention to a real vector. First, following \newcite{wiseman15learning}, we define $\boldsymbol{\phi}_{\mathrm{a}}:\mcX \rightarrow \{0,1\}^F$ as a standard set of local indicator features on a mention, such as its head word, its gender, and so on. (We elaborate on features below.) We then use a non-linear feature embedding $\boldh_{\uc}$ to map a mention $x_n$ to a vector-space representation. In particular, we define
\vspace{-3mm}
{\small
\begin{align*}
\boldh_{\uc}(x_n) &\triangleq \tanh(\boldW_{\mathrm{\uc}} \, \boldsymbol{\phi}_{\mathrm{a}}(x_n) + \boldb_{\mathrm{\uc}})\eqpunc{,}
\end{align*}
}
\vspace{-5mm}
\noindent where $\boldW_{\mathrm{\uc}}$ and $\boldb_{\mathrm{\uc}}$ are parameters of the embedding.
We will refer to the $j$'th hidden state of the RNN corresponding to $X^{(m)}$ as $\boldh^{(m)}_j$, and we obtain it according to the following formula
{\small
\begin{align*}
\boldh^{(m)}_j \gets \mathrm{\mathbf{RNN}}(\boldh_{\uc}(X_j^{(m)}), \boldh^{(m)}_{j-1}; \btheta)\eqpunc{,}
\end{align*}
}
\vspace{-4mm}
\noindent again assuming that $\boldh^{(m)}_0 \ensuremath{\,{=}\,} \bzero$. Thus, we will effectively run an RNN over each (sequence of mentions corresponding to a) cluster $X^{(m)}$ in the document, and thereby generate a hidden state $\boldh^{(m)}_j$ corresponding to each step of each cluster in the document. Concretely, this can be implemented by maintaining $M$ RNNs -- one for each cluster -- that all share the parameters $\btheta$. The process is illustrated in the top portion of Figure~\ref{fig:hidden}.
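As a concrete illustration, the per-cluster RNN updates described above can be sketched in plain Python. This is a toy sketch only: the 2-dimensional states and the weights \texttt{W}, \texttt{U}, \texttt{b} are hypothetical stand-ins for the shared parameters $\btheta$, and mentions are assumed to be already embedded by $\boldh_{\uc}$.

```python
import math

# Toy sketch: one RNN per entity cluster, all sharing the same parameters.
# Mentions are assumed already embedded (here: 2-d vectors); W, U, b are
# hypothetical stand-ins for the shared parameters theta.
W = [[0.5, 0.0], [0.0, 0.5]]   # input weights
U = [[0.1, 0.0], [0.0, 0.1]]   # recurrent weights
b = [0.0, 0.0]                 # bias

def step(x, h):
    # one Elman-style update: h' = tanh(W x + U h + b)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(2))
                      + sum(U[i][j] * h[j] for j in range(2)) + b[i])
            for i in range(2)]

def cluster_state(mentions):
    h = [0.0, 0.0]             # h_0 = 0 for every cluster
    for x in mentions:         # mentions in linear document order
        h = step(x, h)
    return h

# Each cluster keeps its own hidden state, but all share W, U, b:
h1 = cluster_state([[1.0, 0.0], [0.0, 1.0]])  # cluster with two mentions
h2 = cluster_state([[1.0, 0.0]])              # singleton cluster
```

Because the update is order-sensitive, two clusters containing the same mentions in different orders yield different states, mirroring the role of linear document order above.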
\subsection{Pronoun Problems}
Recent empirical work has shown that the resolution of pronominal mentions accounts for a substantial percentage of the total errors made by modern mention-ranking systems. \newcite{wiseman15learning} show that on the CoNLL 2012 English development set, almost 59\% of mention-ranking precision errors and almost 24\% of recall errors involve pronominal mentions. \newcite{martschat15latent} found a similar pattern in their comparison of mention-ranking, mention-pair, and latent-tree models.
To see why pronouns can be so problematic, consider the following passage from the ``Broadcast Conversation'' portion of the CoNLL development set (bc/msnbc/0000/018); below, we enclose mentions in brackets and give the same subscript to co-clustered mentions. (This example is also shown in Figure~\ref{fig:hidden}.)
\vspace{-1mm}
{\small
\begin{quote}
\small
\textbf{DA:} um and [I]$_1$ think that is what's - Go ahead [Linda]$_2$.
\textbf{LW:} Well and uh thanks goes to [you]$_1$ and to [the media]$_3$ to help [us]$_4$...So [our]$_4$ hat is off to all of [you]$_5$ as well.
\end{quote}
}
\vspace{-1mm}
\noindent This example is typical of Broadcast Conversation, and it is difficult because local systems learn to myopically link pronouns such as [you]$_5$ to other instances of the same pronoun that are close by, such as [you]$_1$. While this is often a reasonable strategy, in this case predicting [you]$_1$ to be an antecedent of [you]$_5$ would result in the prediction of an incoherent cluster, since [you]$_1$ is coreferent with the singular [I]$_1$, and [you]$_5$, as part of the phrase ``all of you,'' is evidently plural. Thus, while there is enough information in the text to correctly predict [you]$_5$, doing so crucially depends on having access to the \textit{history} of predictions made so far, and it is precisely this access to history that local models lack.
More empirically, there are non-local statistical regularities involving pronouns we might hope models could exploit. For instance, in the CoNLL training data over 70\% of pleonastic ``it'' instances and over 74\% of pleonastic ``you'' instances follow (respectively) previous pleonastic ``it'' and ``you'' instances. Similarly, over 78\% of referential ``I'' instances and over 68\% of referential ``he'' instances corefer with previous ``I'' and ``he'' instances, respectively.
Accordingly, we might expect non-local models with access to global features to perform significantly better. However, models incorporating non-local features have a rather mixed track record. For instance, \newcite{BandK:14} found that cluster-level features improved their results, whereas \newcite{martschat15latent} found that they did not. \newcite{clark15entity} found that incorporating cluster-level features \textit{beyond} those involving the pre-computed mention-pair and mention-ranking probabilities that form the basis of their agglomerative clustering coreference system did not improve performance. Furthermore, among recent, state-of-the-art systems, mention-ranking systems (which are completely local) perform at least as well as their more structured counterparts~\cite{DandK:14,clark15entity,wiseman15learning,peng15a}.
\subsection{Issues with Global Features}
We believe a major reason for the relative ineffectiveness of global features in coreference problems is that, as noted by \newcite{clark15entity}, cluster-level features can be hard to define. Specifically, it is difficult to define discrete, fixed-length features on clusters, which can be of variable size (or shape). As a result, global coreference features tend to be either too coarse or too sparse. Thus, early attempts at defining cluster-level features simply applied the coarse quantifier predicates \textit{all}, \textit{none}, \textit{most} to the mention-level features defined on the mentions (or pairs of mentions) in a cluster~\cite{culotta2007first,rahman11narrowing}. For example, a cluster would have the feature `most-female=true' if more than half the mentions (or pairs of mentions) in the cluster have a `female=true' feature.
On the other extreme, \newcite{BandK:14} define certain cluster-level features by concatenating the mention-level features of a cluster's constituent mentions in order of the mentions' appearance in the document. For example, if a cluster consists, in order, of the mentions (\textit{the president}, \textit{he}, \textit{he}), they would define a cluster-level ``type'' feature `C-P-P=true', which indicates that the cluster is composed, in order, of a common noun, a pronoun, and a pronoun. While very expressive, these concatenated features are often quite sparse, since clusters encountered during training can be of any size.
\section{Introduction}
Trapped ions are at the forefront of both digital and analog quantum simulation~\cite{Cirac:1995, Porras_2004, Blatt:2008}. On the digital side, trapped ions are the building blocks of the highest fidelity two-qubit universal gates~\cite{Brown:2011, Ballance:2016, Gaebler:2016}, and the recent demonstration of on-the-fly quantum error correction adds to the robustness of this architecture~\cite{Bohnet:2021}. On the analog side, they have been used to emulate the dynamics and prepare the ground states of quantum magnets, as well as study the dynamics of quantum correlations, quantum information and entanglement in the presence of engineered, variable-range interactions~\cite{Kim:2009, Monroe_Correlations_2014, Monroe_MBL_2016, Rey_MQC_2017, Bermudez:2011}.
Trapped-ion quantum simulators allow one to engineer power-law spin-spin interactions which decay as $1/r^{\xi}$, where $0\leq\xi\leq 3$ and $r$ is the distance between two ions. This is the direct result of the mechanism behind the interactions. The inter-ion interactions are phonon-mediated and as such depend on the spectrum and structure of the collective vibrational modes of the ion crystal~\cite{Britton:2012, Freericks:2015}. So far, experimental efforts utilizing trapped ions as analog simulators have been restricted to the aforementioned power-law interactions.
Recently it has been shown that the addition of optical tweezers to the typical trapped-ion platform produces a highly tunable quantum simulator in terms of connectivity, range, and sign of the interactions in both linear (or 1D) and triangular (2D) ion crystals in Paul traps~\cite{Espinoza:2021, Teoh:2021, Nath:2015, Olsacher:2020}. If a target interaction matrix passes our feasibility criterion, we search for the optimal optical tweezer pattern to manipulate the frequencies and structure of the collective vibrational modes of the crystal.
In this work we study the robustness of our scheme in the presence of typical experimental imperfections: micromotion, tweezer misalignment, and tweezer intensity noise. In Section~\ref{sec:trapped-ion-qsim} we review the radio-frequency (r.f.) Paul trap and the formalism describing the motion (including micromotion) of ion crystals. In Section~\ref{sec:micromotion-spinspin-engineering} we extend previous studies to characterize the effect of small-amplitude micromotion~\cite{Landa:2012, Kaufmann2012, Duan:2015} and correct for it in our tweezer patterns, before including first-order Doppler modulation. Section~\ref{sec:LocalStress} investigates whether local stress due to misalignment of the tweezers can improve the optimization and considers the effect of laser intensity fluctuations.
\section{Trapped-ion quantum simulator}\label{sec:trapped-ion-qsim}
We consider a one or two dimensional crystal of $N$ ions in a Paul trap. The potential energy of the system is given by $V_0 = V_{\rm coulomb}+V_{\rm trap}$. The first term is the contribution due to the Coulomb repulsion between the ions, $V_{\rm coulomb}(\bm{r}_i)=\frac{1}{2} \sum_{i\neq j} \abs{\bm{r}_i - \bm{r}_j}^{-1}$, whilst the second term is the confinement supplied by the external trapping potential
\begin{align}
V_\text{trap}(r_{i,\alpha},t) = \frac{\Omega_\text{rf}^2}{8} \sum_{i,\alpha} [a_\alpha - 2 q_\alpha \cos(\Omega_\text{rf} t)] r_{i,\alpha}^2, \label{eq:generalTrappotential}
\end{align}
generated by DC fields and AC components oscillating at $\Omega_{\rm rf}$. Here $a_\alpha$ and $q_\alpha$ are the (dimensionless) Mathieu parameters and $r_{i,\alpha}$ is the position of the $i$-th ion in the $\alpha=x,y,z$ direction. The ion positions and the oscillation frequency are made dimensionless using the characteristic length scale $d = \left(e^2/(4\pi\epsilon_0 m \bar{\omega}^2)\right)^{1/3}$ and a characteristic frequency $\bar \omega$, respectively. Here $e$ is the electron charge, $\epsilon_0$ is the vacuum permittivity and $m$ is the ion mass. This allows us to define time $t$ in units of $1/\bar{\omega}$. Thus Eq.~\ref{eq:generalTrappotential} is dimensionless with an energy scale $m \bar{\omega}^2 d^2$.
The interplay between the external trapping potential and the Coulomb repulsion results in stable Coulomb crystals. The dimensionality of the crystal depends on the relative strength of the trapping potential along the different axes \cite{Dubin_1993,Enzer_2000}. We focus on the case of a 2D zigzag crystal in the $yz$-plane, as shown in Fig.~\ref{fig:pseudopotentialVSmicromotion}(a). Tight confinement along $x$ ensures the crystal forms in the $yz$-plane, whilst a weaker potential along $z$ compared to $y$ (or vice-versa) leads to the formation of the zigzag structure.
The equilibrium positions of the ions are given by the solutions to $\nabla V_0 = 0$. The full solution yields equilibrium positions with explicit time dependence, $\mathbf R_{i}(t)$, which account for micromotion even at ultra-low temperatures. However, when $\abs{a_\alpha}, q_\alpha^2 \ll 1$, we may make the pseudopotential approximation and replace the time-dependent potential $V_\text{trap}$ with a static harmonic potential~\cite{James:1998}
\begin{align}
V_\text{pseudo}(r_{i,\alpha}) = \frac{1}{2}\sum_{i,\alpha} \Theta_\alpha^2 r_{i,\alpha}^2, \label{eq:PPTrapPotential}
\end{align}
where $\Theta_\alpha = \gamma_\alpha \Omega_\text{rf}/2$ are effective frequencies determined by the characteristic exponents of the Mathieu equation, $\gamma_\alpha \approx \sqrt{a_\alpha + q_\alpha^2/2}$ \cite{McLachlan1947}. Note that although the Mathieu exponents are usually denoted by $\beta$, we use $\gamma$ to avoid confusion with a later use of $\beta$.
The emergence of effective spin-spin interactions, mediated by the collective oscillations (phonon modes) of the crystal, has been studied previously. The phonon-mediated interactions are generated by applying a spin-dependent force, using a Raman beam pair, to couple the electronic spin of the ion to the collective motion of the crystal. Within this approximation, trapped-ion quantum simulators allow us to engineer spin-spin interactions that decay as $1/r^{\xi}$, with $0\leq \xi \leq 3$ \cite{Richerme2016, Britton:2012, Kim:2009, Porras_2004}. The interaction strength between ions $i$ and $j$ is given by
\begin{align}
J_{i,j} = \sum_m \frac{(\bm{k} \cdot \bm{b}_{i,m})(\bm{k} \cdot\bm{b}_{j,m})}{\mu^2 - \omega_m^2}, \label{eq:spinspincoupling}
\end{align}
where $\bm{b}_{i,m}$ is the $3$-element vector (one element per direction $\alpha$) describing the participation of the $i$-th ion in the $m$-th mode, $\bm{k}$ the $3$-element wave vector of the Raman beam pair, $\omega_m$ the frequency of the $m$-th mode and $\mu$ the Raman beat-note frequency. Thus the structure of the spin-spin interactions is fully determined by the normal modes of the crystal and the beat-note frequency $\mu$. Here we have assumed that the phase of the Raman beam pair driving the side-band transitions remains constant at the equilibrium position of the ions.
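To make Eq.~\ref{eq:spinspincoupling} concrete, the sketch below evaluates the coupling for a toy two-ion chain with a center-of-mass mode at $\omega=1$ and a stretch mode at $\omega=\sqrt{3}$ (illustrative dimensionless numbers, not taken from the paper; the wave vector is absorbed into the mode amplitudes).

```python
import math

# Toy evaluation of J_ij = sum_m (k·b_im)(k·b_jm) / (mu^2 - w_m^2)
# for a two-ion chain: COM mode at w = 1, stretch mode at w = sqrt(3).
omega = [1.0, math.sqrt(3.0)]
b = [[1 / math.sqrt(2), 1 / math.sqrt(2)],    # b[i][m]: ion i, mode m
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def coupling(i, j, mu):
    return sum(b[i][m] * b[j][m] / (mu**2 - omega[m]**2)
               for m in range(len(omega)))

# Tuning the beat-note mu just above the COM mode makes that mode dominate,
# giving a positive J_12; tuning just below the COM mode flips the sign.
J_above = coupling(0, 1, mu=1.1)
J_below = coupling(0, 1, mu=0.9)
```

This illustrates how the sign and magnitude of the engineered couplings follow from the detuning of $\mu$ relative to the mode frequencies.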
In the absence of any additional control knob, one is limited to the power-law interactions described above. We have previously shown that a wider variety of target spin-spin interactions can be engineered by modifying the mode structure with optical tweezers \cite{Espinoza:2021}. We assume that the tweezers have cylindrical symmetry and supply confinement in the yz-plane only. We also assume that the micromotion amplitude is sufficiently small such that each ion stays near the center of the tweezer beam, and that the tweezer beam is focused on the ion equilibrium positions $\bm{R}_{i}$. Then the tweezer potential can be written as a local harmonic potential for each ion,
\begin{align}
V_\text{tweezer}(r_{i,\alpha}) = \frac{1}{2} \sum_{i=1}^{N}\sum_{\alpha = y,z}\nu_i^2 (\tilde{r}_{i,\alpha})^2,
\end{align}
where $\nu_i$ is the pinning frequency on the $i$th ion and $\bm{\tilde{r}}_i = \bm{r}_{i} - \bm{R}_{i}$ are the ion positions relative to their equilibrium. In the pseudopotential approximation the equilibrium positions are natively time-independent; when including micromotion we average the time-dependent equilibrium positions over one r.f. period. We denote the total potential, including tweezers, by $V_\text{total} = V_\text{trap} + V_\text{coulomb} + V_\text{tweezer}$.
\subsection{Equilibrium Positions with Micromotion}
When optical tweezers are added to the system, in principle the solution to $\nabla V_\text{total} = 0$ gives the equilibrium positions. However, for simplicity we assume that the equilibrium positions are unaffected by the tweezer potentials, which we justify in Section~\ref{sec:micromotion-spinspin-engineering} by showing that our engineered coupling matrix is unaffected by this approximation.
The equilibrium positions are thus given by the solution to $\nabla V_0 = 0$. We set the characteristic frequency $\bar{\omega} = \Omega_\text{rf}/2$, and re-scale time accordingly, $t \rightarrow \Omega_\text{rf} t / 2$, to make the micromotion $\pi$-periodic. The $3N$ coupled equations of motion (eoms) are then \cite{Leibfried:2003}
\begin{align}
\ddot{r}_{i,\alpha} + [a_\alpha - 2 q_\alpha \cos(2 t)] r_{i,\alpha} - \sum_{i\neq j} \frac{ r_{i,\alpha} - r_{j,\alpha}}{\abs{\bm{r}_i - \bm{r}_j}^{3}} = 0.
\label{eq:ionEOMs}
\end{align}
Adding a friction term $f(t)\, \dot{\bm{r}}_{i}$ to the left-hand side of Eq.~\ref{eq:ionEOMs}, where $f(t)$ is a time-dependent cooling profile that ramps from $f(0)=1$ to $f(t_{\rm max})=0$, allows us to start from an initial guess and relax to the equilibrium configuration at $t_{\rm max}$. We then evolve the positions for one more period with $f(t>t_{\rm max})=0$ to determine the time-dependent equilibrium positions $\bm{R}_{i}(t)$.
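A minimal sketch of this relaxation idea, for a two-ion 1D crystal in dimensionless units (using a constant damping coefficient rather than the ramped profile $f(t)$, and simple semi-implicit Euler integration):

```python
# Find the equilibrium of a two-ion 1D crystal (dimensionless units:
# harmonic trap V = x^2/2 per ion plus Coulomb repulsion 1/|x1 - x2|)
# by integrating damped equations of motion. The analytic equilibrium
# is x = ±(1/4)^(1/3) ≈ ±0.63.
def force(x):
    f = [-xi for xi in x]          # harmonic trap force
    d = x[0] - x[1]
    f[0] += d / abs(d)**3          # Coulomb repulsion
    f[1] -= d / abs(d)**3
    return f

x, v, dt, damping = [-1.0, 1.0], [0.0, 0.0], 1e-3, 1.0
for _ in range(200_000):           # 200 dimensionless time units
    f = force(x)
    v = [vi + (fi - damping * vi) * dt for vi, fi in zip(v, f)]
    x = [xi + vi * dt for xi, vi in zip(x, v)]
# x ≈ [-0.63, 0.63]
```

The damping removes kinetic energy until the crystal settles at $\nabla V_0 = 0$; in the full scheme the ramp $f(t)\to 0$ then lets the time-dependent micromotion reappear around these positions.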
\subsection{Linearized Motion}
To calculate the normal mode structure we follow the steps in Refs.~\cite{Landa:2012,Kaufmann2012}. We linearize the eoms about small oscillations of the equilibrium positions $\bm{\tilde{r}}_{i} = \bm{r}_{i}-\bm{R}_{i}$,
\begin{align}
\ddot{\tilde{r}}_{i,\alpha} + [a_\alpha - 2q_\alpha \cos(2t)] \tilde{r}_{i,\alpha} + \sum_{j,\beta} D_{i,j}^{\alpha,\beta}(t) \tilde{r}_{j,\beta} = 0,
\end{align}
where the time-dependent Hessian is defined as
\begin{align}
D_{i,j}^{\alpha,\beta}(t) = \left. \frac{\partial^2 V_{\rm Coulomb}}{\partial r_{i,\alpha} \partial r_{j,\beta} } \right \vert_{r_{i,\alpha} = R_{i,\alpha}(t)}. \label{eq:Hessian}
\end{align}
The linearized eoms have periodic coefficients and thus can be treated using Floquet theory. Expanding the Hessian matrix in a Fourier series as
\begin{align}
D(t) = D_0 - 2 D_2 \cos(2t) - \hdots,
\end{align}
we define the matrices $A = {\rm diag}(a) + D_0$ and $Q = {\rm diag}(q) + D_2$, and introduce the matrix $\Pi(t)$ and the phase-space vector $\bm{\phi}$ as
\begin{align}
\Pi(t) = \begin{pmatrix}
0 & \mathbbm{1} \\
-A + 2Q\cos(2t) & 0
\end{pmatrix}, \quad
\bm{\phi} = \begin{pmatrix}
\tilde{r}_{i,\alpha} \\ \dot{\tilde{r}}_{i,\alpha}
\end{pmatrix},
\end{align}
where $\mathbbm{1}$ is the $3N$-dimensional identity matrix. The linearized eoms are then written as a first-order system in $6N$-dimensional phase space,
\begin{align}
\dot{\bm{\phi}} = \Pi(t) \, \bm{\phi}. \label{eq:floquet}
\end{align}
We solve the set of differential equations to obtain the Floquet modes and exponents, which are related to the eigenmodes $\bm{b}_m^{f}$ and eigenfrequencies $\omega_m^{f}$ of the linearized ion-crystal motion (using superscript $f$ to denote that the solutions are from the full motion treatment).
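As an illustration of this Floquet treatment, the sketch below specializes to a single ion with no Coulomb term, so the eom reduces to a scalar Mathieu equation. Integrating two independent solutions over one period $\pi$ builds the monodromy matrix, whose trace yields the characteristic exponent; this should be close to the lowest-order estimate $\gamma \approx \sqrt{a + q^2/2}$ quoted above.

```python
import math

# Single-ion Mathieu equation r'' + [a - 2 q cos(2t)] r = 0 (period pi in
# the scaled time of the text). The monodromy matrix over one period has
# trace 2 cos(pi * gamma), from which the exponent gamma follows.
a, q = 0.018704, 0.202780   # the x-direction Mathieu parameters used here

def deriv(t, y):
    r, v = y
    return [v, -(a - 2 * q * math.cos(2 * t)) * r]

def propagate(y, steps=4000):
    dt = math.pi / steps
    t = 0.0
    for _ in range(steps):                     # classical RK4
        k1 = deriv(t, y)
        k2 = deriv(t + dt / 2, [y[i] + dt / 2 * k1[i] for i in range(2)])
        k3 = deriv(t + dt / 2, [y[i] + dt / 2 * k2[i] for i in range(2)])
        k4 = deriv(t + dt, [y[i] + dt * k3[i] for i in range(2)])
        y = [y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += dt
    return y

col1 = propagate([1.0, 0.0])                   # monodromy matrix columns
col2 = propagate([0.0, 1.0])
trace = col1[0] + col2[1]
gamma = math.acos(trace / 2) / math.pi         # close to sqrt(a + q**2/2) ≈ 0.198
```

The full problem replaces this scalar equation with the $6N$-dimensional system of Eq.~\ref{eq:floquet}, but the monodromy construction is the same.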
To obtain the eigenmodes and eigenfrequencies in the pseudopotential approximation we construct the Hessian as defined in Eq.~\ref{eq:Hessian}, but where the partial derivatives are now with respect to the static equilibrium positions $\bm{R}_{i}$. The Hessian is therefore time-independent and can be simply diagonalized to yield the eigenmodes $\bm{b}_m^{p}$ and eigenfrequencies $\omega_m^{p}$ (using superscript $p$ to denote the pseudopotential solutions).
\subsection{Micromotion of a 2D Zigzag Crystal}\label{subsec:EffectMicromotion2DResults}
To characterize the effect of micromotion we study a $N=12$ ion crystal using experimentally relevant trap parameters. Specifically we use $a = \{0.018704, -0.018900, 0.000196\}$, $q = \{0.202780 , -0.202780 , 0 \}$ and $\Omega_\text{rf} = 2\pi\times 20 \text{ MHz}$. The corresponding pseudopotential frequencies are $\Theta_\alpha = 2\pi \times \{2,0.4,0.14\}\text{ MHz}$.
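As a quick numerical check (a sketch using only the parameters quoted above), the relations $\Theta_\alpha = \gamma_\alpha \Omega_\text{rf}/2$ with $\gamma_\alpha \approx \sqrt{a_\alpha + q_\alpha^2/2}$ indeed reproduce the quoted pseudopotential frequencies:

```python
import math

# Reproduce the quoted pseudopotential frequencies Theta_alpha from the
# Mathieu parameters via gamma ≈ sqrt(a + q^2/2), Theta = gamma * Omega_rf / 2.
Omega_rf = 20.0  # in units of 2*pi MHz
a = [0.018704, -0.018900, 0.000196]
q = [0.202780, -0.202780, 0.0]
Theta = [math.sqrt(ai + qi**2 / 2) * Omega_rf / 2 for ai, qi in zip(a, q)]
# Theta ≈ [1.98, 0.41, 0.14] (× 2*pi MHz), matching {2, 0.4, 0.14} MHz
```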
Fig.~\ref{fig:pseudopotentialVSmicromotion}(a) shows the ion equilibrium positions, with blurring to indicate micromotion over one r.f. period. Micromotion occurs only in $y$, with amplitude proportional to the ion's distance from the $y = 0$ trap axis, as described by the first-order approximation $(1/2)q_{\alpha} R_{i,\alpha}$. In Fig.~\ref{fig:pseudopotentialVSmicromotion}(b) we plot the spectrum. Because the micromotion is a breathing-mode oscillation, the center of mass (com) modes are unchanged. The out-of-plane modes (along $x$) are decoupled from the in-plane modes ($y$ and $z$) and have a higher frequency and smaller bandwidth. Fig.~\ref{fig:pseudopotentialVSmicromotion}(c) shows the frequency shift $\Delta \omega_m = \omega_m^{f} - \omega_m^{p}$ normalized to $\omega_m^f$. Although the frequency shift is larger for modes with more breathing or zigzag-like structure, the frequency shifts are all relatively small $(\text{kHz})$ compared to the mode frequencies themselves $(\text{MHz})$. As such, from the mode structure itself we conclude that the pseudopotential approximation is justified.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/combined_fig_lessopaque.pdf}
\caption{The effect of micromotion on a $N=12$ ion zig-zag crystal. (a) Ion positions during one r.f. period with motion indicated by blurring. Micromotion occurs only in the $y$ direction. (b) Mode frequency spectrum for $yz$ plane (orange) and $x$ (blue). The com mode frequencies (vertical black dashed lines) are unchanged by micromotion. (c) Frequency shift $\Delta \omega_m$ normalized to $\omega_m^f$. All shifts are small (kHz) relative to the mode frequencies themselves (MHz).}
\label{fig:pseudopotentialVSmicromotion}
\end{figure}
\section{Engineering spin-spin interactions in Optical Tweezers}\label{sec:micromotion-spinspin-engineering}
In this section we investigate whether micromotion restricts our ability to engineer a target spin-spin interaction. We demonstrate that although tweezer patterns determined in the pseudopotential approximation are unsuitable once micromotion is included, corrected tweezer patterns can be found. However, the Doppler shift of the laser implementing the spin-spin interactions does cause an appreciable degradation in the engineered interaction compared to the target, which is challenging to correct.
\subsection{Naive Inclusion of Micromotion}
We firstly make the pseudopotential approximation and numerically optimize the tweezer frequencies $\nu_i$ and Raman beat-note frequency $\mu$ to engineer a target coupling matrix. To characterize the success of the optimization, we define an error function as
\begin{align}
\epsilon = \frac{\norm{J_T - J_E}}{\norm{J_T}}, \label{eq:errorfunc}
\end{align}
where $J_E$ and $J_T$ are the engineered and target interaction matrices respectively, and where the matrix norm is the Frobenius norm.
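The error metric of Eq.~\ref{eq:errorfunc} is straightforward to implement; a minimal sketch using the Frobenius norm on plain nested lists:

```python
import math

# epsilon = ||J_T - J_E||_F / ||J_T||_F  (Frobenius norm)
def frob(M):
    return math.sqrt(sum(x * x for row in M for x in row))

def error(J_T, J_E):
    diff = [[t - e for t, e in zip(rt, re)] for rt, re in zip(J_T, J_E)]
    return frob(diff) / frob(J_T)

# Halving every coupling gives epsilon = 0.5 (up to rounding).
eps = error([[0.0, 1.0], [1.0, 0.0]], [[0.0, 0.5], [0.5, 0.0]])
```

The normalization by $\norm{J_T}$ makes $\epsilon$ insensitive to the overall scale of the target couplings.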
During the optimization we assume that the equilibrium positions are unchanged by the tweezers. To justify this approximation we find that applying a maximum $2\pi \times 10 \text{ MHz}$ tweezer frequency on all ions causes an ion position change of $\sim 10 $~nm, and that using the corrected ion positions with an optimal set of tweezer frequencies causes a negligible change in $\epsilon$ on the order of $10^{-3}$.
For the target coupling we use a spin-ladder interaction, as shown in Fig.~\ref{fig:mmppresults}(a). We choose the spin-ladder since it is challenging to realize using only the collective modes of the crystal in the absence of the tweezer potentials. It also offers variety via the coupling strength ratio $j_2/j_1$, enabling us to study the interplay of frustration and fluctuations, necessary ingredients for spontaneous continuous or discrete symmetry breaking in condensed matter systems. The ability to tune the range of zig-zag coupling strengths ($|j_2/j_1| \gg 1$) will allow us to study the phase diagram of this well-known frustrated magnetic system with no exact solution.
To perform the numerical optimization we use Simulated Annealing, implemented using \emph{Optim.jl} \cite{mogensen2018optim} version 1.6.1 in \emph{Julia} \cite{bezanson2017julia} version 1.6.2. We limit the maximum tweezer laser power to 30~W and use beam waists of $w = 1 \,\mu\text{m}$. The tweezer frequencies are upper-bounded by $\nu_i/(2\pi) \leq 1.0 \text{ MHz}$ whilst the Raman transition frequency is bounded by $0.3 \text{ MHz} \leq \mu/(2\pi) \leq 1.0 \text{ MHz}$. In addition we demand that $|\mu - \omega_m| > 10 \text{ kHz} $ to ensure the phonon modes are only excited virtually. We implement this final requirement in the optimization routine by adding a large value to the cost function defined in Eq.~\ref{eq:errorfunc} if the condition is not satisfied.
Fig.~\ref{fig:mmppresults}(b) shows the optimal interaction graph and corresponding error $\epsilon_p = 0.304$ that can be realized in the pseudopotential approximation. In Fig.~\ref{fig:mmppresults}(c) we ``naively'' take the optimal tweezer pattern found in the pseudopotential approximation and recalculate the error using the micromotion equilibrium positions and mode structure, finding $\epsilon_m = 0.654$. The difference $\epsilon_{m}-\epsilon_{p} = 0.350$ is significant, with the interaction graph showing little spin-ladder structure. As such, any optimization should include micromotion during the routine.
\begin{figure}
\includegraphics[width=\linewidth]{figs/couplinggraphs.pdf}
\caption{(a) Spin-spin couplings for the target zig-zag coupling $J_T$ with $- j_2/j_1 = 0.5$. (b) Engineered couplings $J_E$ in the pseudopotential approximation. With optical tweezers, the target coupling can be engineered with reasonably low error. (c) ``Naive'' inclusion of micromotion by using the tweezer parameters found in the pseudopotential case. The difference in mode structure results in a large increase in $\epsilon$, making the tweezer solution found in the pseudopotential approximation unsuitable.}
\label{fig:mmppresults}
\end{figure}
\subsection{Including Micromotion during optimization}
Including micromotion during the optimization routine requires re-calculating the time-dependent Hessian with a given set of $\nu_i$ and solving the $6N$ Floquet equations to find the new mode structure. Although this procedure is computationally costly, for larger $N$ the cost can be reduced by using the symmetry of the coupling matrix in the tweezer patterns. For example, the spin-ladder interaction is symmetric about $z = 0$ and thus the tweezer frequencies can be assumed to obey the same symmetry. For the $N=12$ Coulomb crystal we find this is not necessary, and so optimize over all $12$ tweezer frequencies.
In Fig.~\ref{fig:optimresults} we plot $\epsilon$ over the course of the optimization. When micromotion is included in the optimization, $\epsilon$ approaches the pseudopotential result. As such, micromotion itself is not a significant barrier to engineering interactions with optical tweezers.
\begin{figure*}
\includegraphics[width=\linewidth]{{figs/graphspinladder1NIP-trace.pdf}}
\caption{Panels (a), (b) and (c) show the error $\epsilon$ as a function of optimization evaluations ($i$) for wave vectors $\bm{k} = [0,1,0]$, $\bm{k} = [0,0,1]$ and $\bm{k} = [0,1,1]$ respectively. The target coupling is the spin-spin ladder shown in Fig.~\ref{fig:mmppresults}. When including micromotion in the optimization (dark blue line) we obtain a similar $\epsilon$ as in the pseudopotential case (dotted orange line). The $\bm{k} = [0,1,0]$ case (panel a) shows the best performance due to the tighter confinement along $y$. Modulation has a detrimental effect in this scenario, as this is the direction where the micromotion amplitude is largest. Panels (d), (e) and (f) show the native spectrum (yellow) and tweezer-modified spectrum (dark blue) corresponding to (a), (b) and (c) respectively. The black dashed line shows the optimized beatnote frequency $\mu$. Note that $\abs{\omega_m^f - \mu} > 10~\text{kHz}$ to maintain a dispersive spin-phonon coupling. }
\label{fig:optimresults}
\end{figure*}
\subsection{First Order Doppler Modulation}
The first order Doppler shift can have a significant impact on the spin-spin couplings. Following the procedure used in Ref.~\cite{Berkeland:1998} to lowest order in $a$ and $q$ the laser field (up to a phase factor) in the reference frame of the moving ion is
\begin{align}
E_i(t) = \Re \left[ \bm{E}_0 e^{i \bm{k} \cdot \bm{r}_{i}} \sum_{n = -\infty}^{\infty} \mathcal{J}_n(\beta_i) e^{-i \omega t + i n \Omega_\text{rf} t} \right],
\end{align}
where $\omega$ is the laser frequency, $\bm{k}$ the wave vector, and $\mathcal{J}_n(\beta_i)$ the $n$-th order Bessel function of the first kind. The (dimensionless) modulation index $\beta_i$ is given by
\begin{align}
\beta_i = \frac{1}{2} \abs{\sum_{\alpha} k_\alpha R_{i,\alpha} q_\alpha}.
\end{align}
The carrier transition amplitude is modified by $\mathcal{J}_0(\beta_i)$, and thus the interaction matrix element becomes
\begin{align}
J_{i,j}^\text{doppler} = \mathcal{J}_0(\beta_i) \mathcal{J}_0(\beta_j) J_{i,j},
\end{align}
where $J_{i,j}$ is the unmodulated coupling matrix element given in Eq.~\ref{eq:spinspincoupling}. Assuming a $411~\text{nm}$ laser, we include Doppler modulation in the optimization. The resulting $\epsilon$ is shown in Fig.~\ref{fig:optimresults}. Although there is no Doppler modulation in $z$ (because $q_z = 0$) nor $x$ (because $R_{i,x} = 0$), there is significant modulation along $y$. The reduction in coupling strength depends on the distance of each ion from the $y = 0$ r.f. null, which makes it challenging to correct for using optical tweezers. While this ion-dependent source of error can be compensated by tuning the intensity of the Raman beams on each ion, the extra infrastructure cost is prohibitive.
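The carrier suppression can be estimated directly from the ion positions. The sketch below (illustrative positions, not fitted to the crystal of Fig.~\ref{fig:pseudopotentialVSmicromotion}) computes $\beta_i$ and evaluates $\mathcal{J}_0$ from its integral representation, since the Python standard library has no Bessel functions:

```python
import math

# Carrier suppression J0(beta_i) * J0(beta_j) from first-order Doppler
# modulation. J0 is evaluated via its integral representation
# (1/pi) * integral_0^pi cos(beta sin t) dt (midpoint rule).
def J0(beta, n=1000):
    h = math.pi / n
    return sum(math.cos(beta * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def modulation_index(k, R, q):
    # beta_i = (1/2) |sum_a k_a R_ia q_a|  (dimensionless units)
    return 0.5 * abs(sum(ka * Ra * qa for ka, Ra, qa in zip(k, R, q)))

# Example: wave vector along y, an ion sitting 3 length units off the r.f. null.
beta = modulation_index(k=[0.0, 1.0, 0.0], R=[0.0, 3.0, 1.0],
                        q=[0.202780, -0.202780, 0.0])
suppression = J0(beta) ** 2     # coupling reduction for a pair of such ions
```

Since $\beta_i$ grows linearly with the ion's displacement from the r.f. null, ions at the edge of the crystal suffer the strongest suppression, consistent with the ion-dependent error discussed above.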
\section{Local Stress}\label{sec:LocalStress}
In Section~\ref{sec:micromotion-spinspin-engineering} we used tweezer beams centered on the average equilibrium positions of the ions to more accurately engineer spin-ladder interactions. However, if the tweezer beams are offset from the equilibrium positions, the tweezers add not only a local trapping potential but also supply a force. In this section we investigate whether this local stress enables further improvements to our engineered couplings. We show that tweezer offsets of up to $0.25\,\mu\text{m}$ offer only small improvements to $\epsilon$.
\subsection{First Order Approximation}
For simplicity we assume that we have a geometry in which micromotion does not play a role. As before we assume that the tweezers have cylindrical symmetry and supply confinement in the $yz$-plane only. The tweezer potential including an offset is then given by
\begin{equation}
V_{\text{tweezer}}(r_{i,\alpha}) = \frac{1}{2}\sum_{i=1}^{N}\sum_{\alpha=y,z} \nu_i^2 (\tilde{r}_{i,\alpha}-\delta r_{i,\alpha})^2,
\label{v_tw}
\end{equation}
where $\bm{\tilde{r}}_{i} = \bm{r}_{i} - \bm{R}_{i}$ are the positions of the ions relative to their equilibrium, $\bm{\delta r}_{i}$ is the offset of the tweezer focus from the equilibrium position $\bm{R}_{i}$, and the characteristic frequency is now set to $\bar{\omega}=\Theta_z$.
Offsetting the tweezers changes the equilibrium positions of the ions. To find the new equilibrium positions $\mathbf{R}_i + \bm{\rho}_i$ we need to solve $\nabla_{\mathbf{\tilde{r}}} V_\text{total} =0$. This is computationally costly for large crystals, particularly when included in an optimization routine. Instead, as a first approximation we assume that the tweezers pull lightly on the ions, $\abs{\bm{\rho}_{i}}/\abs{\bm{\delta r}_{i}} \ll 1$. This is equivalent to treating the tweezers as a small perturbation compared to the Paul trap and Coulomb interactions. For simplicity we omit the $x$-direction, which is justified when the laser implementing the spin-spin interactions has no effective wave vector in the $x$-direction and the sound wave modes in the $x$-direction decouple, such as in a 2D ion crystal in the $yz$-plane. These prerequisites can be easily obtained by design. Denoting the Hessian matrix of $V_0 = V_\text{trap} + V_\text{coulomb}$ by $D_0$, we expand $\nabla_{\mathbf{\tilde{r}}} V_{\text{tot}}(\bm{\rho})$ to first order,
\begin{align}
\nonumber \nabla_{\mathbf{\tilde{r}}} V_{\text{tot}}(\mathbf{\rho}) &\approx \left(\nabla_{\mathbf{\tilde{r}}}\left(\nabla_{\mathbf{\tilde{r}}} V_{0}(\mathbf{\tilde{r}})\right)\right)_{\mathbf{\tilde{r}}=0}\text{ }\bm{\rho}+\bm{\nu}^2(\bm{\rho} - \bm{\delta r})\\
&= D_0(0)\bm{\rho}+\bm{\nu}^2(\bm{\rho} - \bm{\delta r}),
\label{notwsimp}
\end{align}
where $\bm{\nu}$ is a $2N\times 2N$ diagonal matrix with diagonal elements $\nu_i$. Note the zeroth order term drops out since $(\nabla V_0)_{\mathbf{\tilde{r}}=0}=0$ by definition. The lowest order shifts in the equilibrium positions are therefore
\begin{equation}
\bm{\rho} \approx \left(D_\text{tot}(0)\right)^{-1}\bm{\nu}^2 \delta \bm{r},
\label{neweqpos}
\end{equation}
where $D_\text{tot}(0)=D_0(0) + D_{\text{tw}}$ and $D_{\text{tw}}=\bm{\nu}^2$.
Having approximated the new equilibrium positions, we now calculate the change in the Hessian matrix. To avoid calculating the Hessian $D_{\text{tot}}(\bm{\rho})$ directly from the new potential, we use an approximation to further reduce the computational cost,
\begin{equation}
D_{\text{tot}}(\bm{\rho})\approx D_\text{tot}(0) + \left(\nabla_{\mathbf{\tilde{r}}}(D_0)\right)_{\mathbf{\tilde{r}}=0}\text{ }\bm{\rho}+\hdots.
\label{dapprox}
\end{equation}
$D_{\text{tot}}(\bm{\rho})$ has new eigenfrequencies $\tilde{\omega}_m^{\text{str}}$ and eigenvectors $\tilde{\mathbf{b}}_m^{\text{str}}$, resulting in new spin-spin interactions as defined by Eq.~(\ref{eq:spinspincoupling}). Although only approximate, this equation gives insight into the effect of the local stress on the mode spectrum. Because both $D_{\text{tw}}$ and $D_\text{trap}$ are constant diagonal matrices, the derivatives of $D_\text{tot}(0)$ originate from the Coulomb interaction alone. Due to the long-range character of the Coulomb interaction, we expect local stress to ease the simulation of long-range interactions. On the other hand, the local stress terms are of higher order than the tweezer curvature terms, so their capability to significantly alter the mode spectrum should be limited. Although this suggests that local stress will not improve our engineered couplings, it also implies that errors due to misaligned tweezers are suppressed.
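As a concreteness check, the first-order shift of Eq.~\ref{neweqpos} can be compared against an exact solution of $\nabla V_\text{total}=0$ on a toy system. The sketch below (Python/NumPy rather than the Julia code used in this work) treats a two-ion chain in a dimensionless harmonic trap; the pinning frequencies and offsets are illustrative values, not optimized ones.

```python
import numpy as np
from scipy.optimize import fsolve

def hessian(z):
    """Dimensionless Hessian D_0 of trap + Coulomb for a 1D ion chain."""
    N = len(z)
    D = np.eye(N)                            # harmonic trap curvature
    for i in range(N):
        for j in range(N):
            if i != j:
                c = 2.0 / abs(z[i] - z[j])**3
                D[i, j] -= c
                D[i, i] += c
    return D

# two-ion equilibrium at +/- (1/4)^(1/3) in dimensionless units
z0 = np.array([-0.25**(1 / 3), 0.25**(1 / 3)])
nu = np.array([0.3, 0.2])                    # tweezer pinning frequencies (illustrative)
dr = np.array([0.05, -0.03])                 # tweezer offsets (illustrative)

Dtot = hessian(z0) + np.diag(nu**2)
rho = np.linalg.solve(Dtot, nu**2 * dr)      # first-order shift, Eq. (neweqpos)

def grad_total(z):
    """Exact gradient of V_trap + V_coulomb + offset tweezers."""
    g = z.copy()                             # trap force
    for i in range(len(z)):
        for j in range(len(z)):
            if i != j:
                g[i] -= np.sign(z[i] - z[j]) / (z[i] - z[j])**2
    return g + nu**2 * (z - z0 - dr)         # offset tweezer force

z_exact = fsolve(grad_total, z0 + rho)
# the first-order shift agrees with the exact equilibrium up to higher orders in rho
```

The agreement to well below the offset scale illustrates why the perturbative formula suffices inside an optimization loop.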
\subsection{Optimization}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/graphstress.pdf}
\caption{Error $\epsilon$ as a function of the spin-ladder coupling strength ratio $-j_2/j_1$. The error is smallest at $-j_2/j_1 = 1.5$, as this most closely resembles the power-law interactions that can be engineered natively. The addition of tweezers offers a significant improvement at all ratios. As predicted by our approximate expression Eq.~\ref{dapprox}, tweezer offsets of up to $0.25\,\mu\text{m}$ offer only a small improvement, suggesting that the couplings are robust to tweezer misalignments.}
\label{fig:spinladderstressvsnostress}
\end{figure}
We investigate numerically whether it is possible to improve on the results obtained in the previous section if we allow the tweezers to supply local stress on the ions. For the $N=12$ ion crystal we fix the tweezer pattern to the optimal solution found in Sec.~\ref{sec:micromotion-spinspin-engineering} and optimize the tweezer offsets $0 \leq \bm{\delta r_i} \leq 0.25\,\mu\text{m}$. The offset bounds ensure that the tweezers remain well approximated as harmonic. Since the tweezer parameters are fixed, we only need to optimize over the $2N$ offset parameters, and can therefore calculate the new equilibrium positions $\bm{R}_i + \bm{\rho}_i$ and the Hessian directly in the optimization routine. Note that optimization over the full parameter set (including the tweezer parameters) is also possible, particularly with a two-step routine that first uses the approximate calculations of Eqs.~\ref{neweqpos} and \ref{dapprox} to determine whether a parameter set is promising, and then, once the error falls below a set threshold, uses the exact calculation to fine-tune the parameters and obtain the true error. We also optimize the full parameter set in this manner and find no difference to our fixed-tweezer optimization.
In Fig.~\ref{fig:spinladderstressvsnostress} we vary the ratio $-j_2/j_1$ in the $12$-ion spin-ladder and calculate the error as defined in Eq.~\ref{eq:errorfunc}. As expected, the inclusion of tweezers results in significant improvements in engineering the target spin-spin interactions. However, applying local stress to the ion crystal results in only minimal further improvement. We conclude that in the perturbative regime local stress offers little benefit; this is reassuring, however, since it implies that the interactions are robust to tweezer misalignments.
\subsection{Intensity noise}\label{sec:IntensityNoise}
Finally, we study the effect of tweezer intensity fluctuations. We consider a worst-case shot-to-shot noise scenario, whereby each of the optimal tweezer frequencies $\nu_i$ is subject to a fluctuation $\delta \nu$. Note that $\delta \nu \propto \sqrt{\delta P}$, where $\delta P$ is the power fluctuation, since the square of the tweezer frequency is proportional to the laser power. To simulate the noise we multiply an optimal tweezer pattern by a random fluctuation sampled from a normal distribution with standard deviation $\delta P$. We repeat the calculation $N_\text{repeat} = 10^4$ times and take the average. In Fig.~\ref{fig:spinladdertweezerfluctuations} we plot $\epsilon$ as a function of the percentage noise in the tweezer power $\delta P$. We find that for typical experimental parameters, intensity noise on the order of $1\%$ already has a noticeable impact on the engineered couplings. As such, sub-percent intensity stabilization is required to accurately engineer the target spin-ladder coupling.
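The shot-to-shot averaging procedure can be illustrated on a minimal two-ion model. The sketch below (Python, with assumed toy numbers) applies multiplicative Gaussian noise to the laser power, which rescales $\nu_i^2$, recomputes the modes, and averages the resulting coupling error over many repetitions.

```python
import numpy as np

rng = np.random.default_rng(1)

def J_matrix(nu2, mu):
    """Spin-spin couplings of Eq. (eq:spinspincoupling) for a pinned 2-ion chain."""
    D = np.array([[2.0, -1.0], [-1.0, 2.0]]) + np.diag(nu2)  # Hessian + pinning
    w2, b = np.linalg.eigh(D)
    J = np.zeros((2, 2))
    for m in range(2):
        J += np.outer(b[:, m], b[:, m]) / (mu**2 - w2[m])
    return J

nu2 = np.full(2, 0.3**2)        # "optimal" pinning (illustrative value)
mu = 1.3                        # beat-note detuned from both modes (illustrative)
J_ideal = J_matrix(nu2, mu)

def avg_error(dP, n_rep=2000):
    """Average coupling error under fractional power noise of size dP."""
    errs = np.empty(n_rep)
    for k in range(n_rep):
        noisy = nu2 * (1.0 + rng.normal(0.0, dP, size=2))  # power noise scales nu^2
        errs[k] = (np.linalg.norm(J_matrix(noisy, mu) - J_ideal)
                   / np.linalg.norm(J_ideal))
    return errs.mean()
# sub-percent noise yields a markedly smaller average error than percent-level noise
```

The same loop, with the full $N$-ion mode calculation in place of the toy Hessian, reproduces the averaging used for Fig.~\ref{fig:spinladdertweezerfluctuations}.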
\begin{figure}[t!]
\includegraphics[width=1\linewidth]{figs/graphtwnoise.pdf}
\caption{Error $\epsilon$ when tweezer intensity fluctuations $\delta P$ are included for two different spin-ladder coupling strength ratios. The error is calculated assuming random Gaussian noise in the laser generating the optical tweezers at frequencies much slower than the coupling time.}
\label{fig:spinladdertweezerfluctuations}
\end{figure}
\section{Conclusions}\label{sec:Conclusion}
Local optical potentials, supplied by optical tweezers, allow us to create analog trapped-ion quantum simulators with an unprecedented level of flexibility concerning the possible spin-spin interaction patterns. In this work we studied the robustness of this approach in a typical experimental setup. In particular, we focused on three sources of error: (i) micromotion, (ii) tweezer misalignment, and (iii) tweezer intensity noise. We used the ferromagnetic zig-zag model, with $j_1>0$ and $j_2<0$, to quantify the adverse effect of each source of error. Our choice of model is motivated by the fact that tweezers play a fundamental role in generating the target connectivity and the range of interactions. Hence this model provides us with an upper bound on the sensitivity of the scheme to the three sources of error listed above.
We showed that the effect of micromotion is two-fold. First, it shifts the motional modes of the crystal, and second, it causes a first-order Doppler shift and in turn modulates the spin-spin couplings for each ion. We showed that the shift in the motional modes is at the level of a few percent, justifying the use of the pseudopotential approximation. However, the first-order Doppler shift may be a major source of error along the weaker confinement direction, where micromotion is largest. In contrast, we find that in the limit where the tweezer potential is perturbative compared to the Paul trap and the Coulomb interactions, any additional stress and strain force on the ions due to the misalignment of the tweezers is negligible. Finally, we find that the intensity noise should be controlled to the sub-percent level, as this shot-to-shot noise severely impacts the fidelity with which the target interactions can be realized.
\begin{acknowledgments}
We thank Juan Diego Arias-Espinoza for sharing code. We acknowledge Rima Sch\"{u}ssler, Henrik Hirzler and Matteo Mazzanti for fruitful discussions. This work was supported by the Netherlands Organization for Scientific Research (Grant Nos. 680.91.120 and 680.92.18.05, R.G.). A.S.N is supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037).
\end{acknowledgments}
\section{Introduction}
Trapped ions are at the forefront of both digital and analog quantum simulation~\cite{Cirac:1995, Porras_2004, Blatt:2008}. On the digital side, trapped ions are the building blocks of the highest fidelity two-qubit universal gates~\cite{Brown:2011, Ballance:2016, Gaebler:2016}, and the recent demonstration of on-the-fly quantum error correction adds to the robustness of this architecture~\cite{Bohnet:2021}. On the analog side, they have been used to emulate the dynamics and prepare the ground states of quantum magnets, as well as study the dynamics of quantum correlations, quantum information and entanglement in the presence of engineered, variable-range interactions~\cite{Kim:2009, Monroe_Correlations_2014, Monroe_MBL_2016, Rey_MQC_2017, Bermudez:2011}.
Trapped-ion quantum simulators allow one to engineer power-law spin-spin interactions which decay as $1/r^\alpha$, where $0<\alpha<3$ and $r$ is the distance between two ions. This is a direct result of the mechanism behind the interactions: the inter-ion interactions are phonon-mediated and as such depend on the spectrum and structure of the collective vibrational modes of the ion crystal~\cite{Britton:2012, Freericks:2015}. So far, experimental efforts utilizing trapped ions as analog simulators have been restricted to the aforementioned power-law interactions.
Recently it has been shown that the addition of optical tweezers to the typical trapped-ion platform produces a highly tunable quantum simulator in terms of connectivity, range, and sign of the interactions in both linear (or 1D) and triangular (2D) ion crystals in Paul traps~\cite{Espinoza:2021, Teoh:2021, Nath:2015, Olsacher:2020}. If a target interaction matrix passes our feasibility criterion, we search for the optimal optical tweezer pattern to manipulate the frequencies and structure of the collective vibrational modes of the crystal.
In this work we study the robustness of our scheme in the presence of typical experimental imperfections: micromotion, tweezer misalignment, and tweezer intensity noise. In Section~\ref{sec:trapped-ion-qsim} we review the radio-frequency (r.f.) Paul trap and the formalism describing the motion (including micromotion) of ion crystals. In Section~\ref{sec:micromotion-spinspin-engineering} we extend previous studies to characterize the effect of small-amplitude micromotion~\cite{Landa:2012, Kaufmann2012, Duan:2015} and correct for it in our tweezer patterns, before including first-order Doppler modulation. Section~\ref{sec:LocalStress} investigates whether local stress due to misalignment of the tweezers can improve the optimization, and considers the effect of laser intensity fluctuations.
\section{Trapped-ion quantum simulator}\label{sec:trapped-ion-qsim}
We consider a one or two dimensional crystal of $N$ ions in a Paul trap. The potential energy of the system is given by $V_0 = V_{\rm coulomb}+V_{\rm trap}$. The first term is the contribution due to the Coulomb repulsion between the ions, $V_{\rm coulomb}(\bm{r}_i)=\frac{1}{2} \sum_{i\neq j} \abs{\bm{r}_i - \bm{r}_j}^{-1}$, whilst the second term is the confinement supplied by the external trapping potential
\begin{align}
V_\text{trap}(r_{i,\alpha},t) = \frac{\Omega_\text{rf}^2}{8} \sum_{i,\alpha} [a_\alpha - 2 q_\alpha \cos(\Omega_\text{rf} t)] r_{i,\alpha}^2, \label{eq:generalTrappotential}
\end{align}
generated by DC fields and AC components oscillating at $\Omega_{\rm rf}$. Here $a_\alpha$ and $q_\alpha$ are the (dimensionless) Mathieu parameters and $r_{i,\alpha}$ is the position of the $i$-th ion in the $\alpha=x,y,z$ direction. The ion positions and the oscillation frequency are made dimensionless using the characteristic length scale $d = \left(e^2/(4\pi\epsilon_0 m \bar{\omega}^2)\right)^{1/3}$ and a characteristic frequency $\bar{\omega}$, respectively. Here $e$ is the electron charge, $\epsilon_0$ is the vacuum permittivity and $m$ is the ion mass. This allows us to define time $t$ in units of $1/\bar{\omega}$. Thus Eq.~\ref{eq:generalTrappotential} is dimensionless, with an energy scale of $m \bar{\omega}^2 d^2$.
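For concreteness, the characteristic length scale $d$ can be evaluated numerically. The sketch below assumes a $^{171}$Yb$^+$ ion (the species is an assumption) and takes $\bar\omega$ equal to the weakest trap frequency used below, $2\pi\times 0.14$ MHz (an assumed choice).

```python
import numpy as np

e = 1.602176634e-19           # electron charge (C)
eps0 = 8.8541878128e-12       # vacuum permittivity (F/m)
m = 171 * 1.66053906660e-27   # ion mass (kg), assuming 171Yb+
wbar = 2 * np.pi * 0.14e6     # characteristic frequency (rad/s), assumed = Theta_z

d = (e**2 / (4 * np.pi * eps0 * m * wbar**2))**(1 / 3)
print(f"d = {d * 1e6:.1f} um")   # ~10 um: the typical inter-ion distance scale
```

Dimensionless positions in the text are thus measured in units of roughly ten micrometers for these parameters.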
The interplay between the external trapping potential and the Coulomb repulsion results in stable Coulomb crystals. The dimensionality of the crystal depends on the relative strength of the trapping potential along the different axes \cite{Dubin_1993,Enzer_2000}. We focus on the case of a 2D zigzag crystal in the $yz$-plane, as shown in Fig.~\ref{fig:pseudopotentialVSmicromotion}(a). Tight confinement along $x$ ensures the crystal forms in the $yz$-plane, whilst a weaker potential along $z$ compared to $y$ (or vice-versa) leads to the formation of the zigzag structure.
The equilibrium positions of the ions are given by the solutions to $\nabla V_0 = 0$. In general, the equilibrium positions carry an explicit time dependence, $\mathbf{R}_{i}(t)$, which accounts for micromotion even at ultra-low temperatures. However, when $\abs{a_\alpha}, q_\alpha^2 \ll 1$ we can make the pseudopotential approximation and replace the time-dependent potential $V_\text{trap}$ with a static harmonic potential~\cite{James:1998}
\begin{align}
V_\text{pseudo}(r_{i,\alpha}) = \frac{1}{2}\sum_{i,\alpha} \Theta_\alpha^2 r_{i,\alpha}^2, \label{eq:PPTrapPotential}
\end{align}
where $\Theta_\alpha = \gamma_\alpha \Omega_\text{rf}/2$ are effective frequencies determined by the characteristic exponents of the Mathieu equation, $\gamma_\alpha \approx \sqrt{a_\alpha + q_\alpha^2/2}$ \cite{McLachlan1947}. Note that although the Mathieu exponents are usually denoted by $\beta$, we use $\gamma$ to avoid confusion with a later use of $\beta$.
The emergence of effective spin-spin interactions, mediated by the collective oscillations (phonon modes) of the crystal, has been studied previously. The phonon-mediated interactions are generated by applying a spin-dependent force, using a Raman beam pair, to couple the electronic spin of the ion to the collective motion of the crystal. Within this approximation, trapped-ion quantum simulators allow us to engineer spin-spin interactions that decay as $1/r^{\xi}$, with $0\leq \xi \leq 3$ \cite{Richerme2016, Britton:2012, Kim:2009, Porras_2004}. The interaction strength between ions $i$ and $j$ is given by
\begin{align}
J_{i,j} = \sum_m \frac{(\bm{k} \cdot \bm{b}_{i,m})(\bm{k} \cdot\bm{b}_{j,m})}{\mu^2 - \omega_m^2}, \label{eq:spinspincoupling}
\end{align}
where $\bm{b}_{i,m}$ is a $3$-element vector (each element describing a direction $\alpha$) of the $m$-th mode and the $i$-th ion, $\bm{k}$ the $3$-element wave vector of the Raman beam pair, $\omega_m$ the frequency of the $m$-th mode and $\mu$ the Raman beat-note frequency. Thus the structure of the spin-spin interactions is fully determined by the normal modes of the crystal and the beat-note frequency $\mu$. Here we have assumed that the phase of the Raman beam pair driving the side-band transitions remains constant at the equilibrium position of the ions.
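Equation~(\ref{eq:spinspincoupling}) can be evaluated directly once the mode structure is known. The sketch below is a one-dimensional Python toy with a scalar wave-vector projection set to one and a hypothetical beat-note $\mu=2$ in dimensionless units; it computes $J_{i,j}$ from the axial modes of a three-ion chain.

```python
import numpy as np

u = (5 / 4)**(1 / 3)                  # analytic 3-ion equilibrium spacing
z = np.array([-u, 0.0, u])            # dimensionless equilibrium positions

def hessian(z):
    """Trap + Coulomb Hessian of a 1D chain in dimensionless units."""
    N = len(z)
    D = np.eye(N)
    for i in range(N):
        for j in range(N):
            if i != j:
                c = 2.0 / abs(z[i] - z[j])**3
                D[i, j] -= c
                D[i, i] += c
    return D

# mode frequencies squared (1, 3, 29/5) and mode vectors
w2, b = np.linalg.eigh(hessian(z))

mu = 2.0                              # beat-note between the two highest modes (choice)
J = np.zeros((3, 3))
for m in range(3):
    J += np.outer(b[:, m], b[:, m]) / (mu**2 - w2[m])   # Eq. (eq:spinspincoupling)
```

Tuning $\mu$ relative to the mode frequencies changes the sign and range of the resulting couplings, which is the basic knob exploited throughout this work.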
In the absence of any additional control knob, one is limited to the power-law interactions described above. We have previously shown that a wider variety of target spin-spin interactions can be engineered by modifying the mode structure with optical tweezers \cite{Espinoza:2021}. We assume that the tweezers have cylindrical symmetry and supply confinement in the $yz$-plane only. We also assume that the micromotion amplitude is sufficiently small such that each ion stays near the center of the tweezer beam, and that the tweezer beam is focused on the ion equilibrium positions $\bm{R}_{i}$. Then the tweezer potential can be written as a local harmonic potential for each ion,
\begin{align}
V_\text{tweezer}(r_{i,\alpha}) = \frac{1}{2} \sum_{i=1}^{N}\sum_{\alpha = y,z}\nu_i^2 (\tilde{r}_{i,\alpha})^2,
\end{align}
where $\nu_i$ is the pinning frequency on the $i$th ion and $\bm{\tilde{r}}_i = \bm{r}_{i} - \bm{R}_{i}$ are the ion positions relative to their equilibrium. In the pseudopotential approximation the equilibrium positions are natively time-independent; when including micromotion we average the time-dependent equilibrium positions over one r.f. period. We denote the total potential, including tweezers, by $V_\text{total} = V_\text{trap} + V_\text{coulomb} + V_\text{tweezer}$.
\subsection{Equilibrium Positions with Micromotion}
When optical tweezers are added to the system, in principle the solution to $\nabla V_\text{total} = 0$ gives the equilibrium positions. However, for simplicity we assume that the equilibrium positions are unaffected by the tweezer potentials, which we justify in Section~\ref{sec:micromotion-spinspin-engineering} by showing that our engineered coupling matrix is unaffected by this approximation.
The equilibrium positions are thus given by the solution to $\nabla V_0 = 0$. We set the characteristic frequency $\bar{\omega} = \Omega_\text{rf}/2$, and re-scale time accordingly, $t \rightarrow \Omega_\text{rf} t / 2$, to make the micromotion $\pi$-periodic. The $3N$ coupled equations of motion (eoms) are then \cite{Leibfried:2003}
\begin{align}
\ddot{r}_{i,\alpha} + [a_\alpha - 2 q_\alpha \cos(2 t)] r_{i,\alpha} - \sum_{i\neq j} \frac{ r_{i,\alpha} - r_{j,\alpha}}{\abs{\bm{r}_i - \bm{r}_j}^{3}} = 0.
\label{eq:ionEOMs}
\end{align}
The addition of a cooling (friction) term $f(t)\, \dot{\bm{r}}_{i}$ to the eoms, where $f(t)$ is a time-dependent cooling profile that ramps from $f(0)=1$ to $f(t_{\rm max})=0$, allows us to start from an initial guess and relax to the equilibrium configuration at $t_{\rm max}$. We then evolve the positions for one more period with $f(t>t_{\rm max})=0$ to determine the time-dependent equilibrium positions $\bm{R}_{i}(t)$.
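The damped-relaxation trick can be sketched as follows. This is a Python toy with a static trap rather than the full rf drive: symplectic-Euler integration of the eoms with a linearly ramped friction term relaxes an initial guess to the known three-ion equilibrium.

```python
import numpy as np

def accel(z):
    """Dimensionless force: static harmonic trap plus Coulomb repulsion (1D)."""
    a = -z
    for i in range(len(z)):
        for j in range(len(z)):
            if i != j:
                a[i] += np.sign(z[i] - z[j]) / (z[i] - z[j])**2
    return a

z = np.array([-2.0, 0.1, 1.5])        # initial guess for three ions
v = np.zeros(3)
dt, t_max = 0.01, 200.0
for step in range(int(t_max / dt)):
    f = 1.0 - step * dt / t_max       # cooling profile ramped from 1 to 0
    v += (accel(z) - f * v) * dt      # symplectic Euler with friction
    z += v * dt
# z relaxes to the analytic equilibrium -(5/4)^(1/3), 0, +(5/4)^(1/3)
```

With the time-dependent trap of Eq.~\ref{eq:ionEOMs} in place of the static force, the same loop yields the $\pi$-periodic equilibrium orbits $\bm{R}_i(t)$.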
\subsection{Linearized Motion}
To calculate the normal mode structure we follow the steps in Refs.~\cite{Landa:2012,Kaufmann2012}. We linearize the eoms about small oscillations of the equilibrium positions $\bm{\tilde{r}}_{i} = \bm{r}_{i}-\bm{R}_{i}$,
\begin{align}
\ddot{\tilde{r}}_{i,\alpha} + [a_\alpha - 2q_\alpha \cos(2t)] \tilde{r}_{i,\alpha} + \sum_{j,\beta} D_{i,j}^{\alpha,\beta}(t) \tilde{r}_{j,\beta} = 0,
\end{align}
where the time-dependent Hessian is defined as
\begin{align}
D_{i,j}^{\alpha,\beta}(t) = \left. \frac{\partial^2 V_{\rm Coulomb}}{\partial r_{i,\alpha} \partial r_{j,\beta} } \right \vert_{r_{i,\alpha} = R_{i,\alpha}}. \label{eq:Hessian}
\end{align}
The linearized eoms have periodic coefficients and thus can be treated using Floquet theory. Expanding the Hessian matrix in a Fourier series as
\begin{align}
D = D_0 - 2 D_2 \cos(2t) - \hdots
\end{align}
and defining the matrices $A = {\rm diag}(a) + D_0$ and $Q = {\rm diag}(q) + D_2$, we introduce the matrix $\Pi(t)$ and phase-space vector $\bm{\phi}$ as
\begin{align}
\Pi(t) = \begin{pmatrix}
0 & \mathbbm{1} \\
-A + 2Q\cos(2t) & 0
\end{pmatrix}, \quad
\bm{\phi} = \begin{pmatrix}
\tilde{r}_{i,\alpha} \\ \dot{\tilde{r}}_{i,\alpha}
\end{pmatrix},
\end{align}
where $\mathbbm{1}$ is the $3N$-dimensional identity matrix. The linearized eoms can then be written as a first-order system in $6N$-dimensional phase space,
\begin{align}
\dot{\bm{\phi}} = \Pi(t) \, \bm{\phi}. \label{eq:floquet}
\end{align}
We solve the set of differential equations to obtain the Floquet modes and exponents, which are related to the eigenmodes $\bm{b}_m^{f}$ and eigenfrequencies $\omega_m^{f}$ of the linearized ion-crystal motion (using superscript $f$ to denote that the solutions are from the full motion treatment).
To obtain the eigenmodes and eigenfrequencies in the pseudopotential approximation we construct the Hessian as defined in Eq.~\ref{eq:Hessian}, but where the partial derivatives are now with respect to the static equilibrium positions $\bm{R}_{i}$. The Hessian is therefore time-independent and can be simply diagonalized to yield the eigenmodes $\bm{b}_m^{p}$ and eigenfrequencies $\omega_m^{p}$ (using superscript $p$ to denote the pseudopotential solutions).
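For a single ion the Floquet problem reduces to the Mathieu equation, and the monodromy matrix over one period directly yields the characteristic exponent. The sketch below (Python/SciPy, using the $x$-axis Mathieu parameters of the crystal studied below and no Coulomb term) recovers $\gamma$ and compares it with the lowest-order estimate $\sqrt{a + q^2/2}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, q = 0.018704, 0.202780            # Mathieu parameters (x-axis values)

def rhs(t, phi):
    # phi = (r, rdot); Mathieu equation r'' + [a - 2 q cos(2t)] r = 0
    return [phi[1], -(a - 2 * q * np.cos(2 * t)) * phi[0]]

# monodromy matrix over one period T = pi (time rescaled so micromotion is pi-periodic)
cols = [solve_ivp(rhs, (0.0, np.pi), phi0, rtol=1e-10, atol=1e-12).y[:, -1]
        for phi0 in ([1.0, 0.0], [0.0, 1.0])]
M = np.array(cols).T

# in the stable region the eigenvalues are exp(+/- i pi gamma), so tr M = 2 cos(pi gamma)
gamma = np.arccos(np.trace(M) / 2) / np.pi
print(gamma, np.sqrt(a + q**2 / 2))   # numerically exact vs lowest-order estimate
```

For the full crystal the same monodromy construction is applied to the $6N$-dimensional system of Eq.~\ref{eq:floquet}.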
\subsection{Micromotion of a 2D Zigzag Crystal}\label{subsec:EffectMicromotion2DResults}
To characterize the effect of micromotion we study an $N=12$ ion crystal using experimentally relevant trap parameters. Specifically, we use $a = \{0.018704, -0.018900, 0.000196\}$, $q = \{0.202780 , -0.202780 , 0 \}$ and $\Omega_\text{rf} = 2\pi\times 20 \text{ MHz}$. The corresponding pseudopotential frequencies are $\Theta_\alpha = 2\pi \times \{2,0.4,0.14\}\text{ MHz}$.
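The quoted pseudopotential frequencies follow directly from the Mathieu parameters via $\Theta_\alpha = \gamma_\alpha \Omega_\text{rf}/2$ with $\gamma_\alpha \approx \sqrt{a_\alpha + q_\alpha^2/2}$; a quick numerical check (an illustrative Python sketch):

```python
import numpy as np

Omega_rf = 2 * np.pi * 20e6                 # rf drive frequency (rad/s)
a = np.array([0.018704, -0.018900, 0.000196])
q = np.array([0.202780, -0.202780, 0.0])

gamma = np.sqrt(a + q**2 / 2)               # lowest-order Mathieu exponents
Theta = gamma * Omega_rf / 2                # pseudopotential frequencies

print(Theta / (2 * np.pi * 1e6))            # ~ [2, 0.4, 0.14] MHz, as quoted
```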
Fig.~\ref{fig:pseudopotentialVSmicromotion}(a) shows the ion equilibrium positions, with blurring to indicate micromotion over one r.f. period. Micromotion occurs only in $y$, with an amplitude proportional to the ion's distance from the $y = 0$ trap axis, as described by the first-order approximation $(1/2)q_{y} R_{i,y}$. In Fig.~\ref{fig:pseudopotentialVSmicromotion}(b) we plot the mode spectrum. Because the micromotion is a breathing-mode oscillation, the center of mass (com) modes are unchanged. The out-of-plane modes (along $x$) are decoupled from the in-plane modes ($y$ and $z$) and have a higher frequency and smaller bandwidth. Fig.~\ref{fig:pseudopotentialVSmicromotion}(c) shows the frequency shift $\Delta \omega_m = \omega_m^{f} - \omega_m^{p}$ normalized to $\omega_m^f$. Although the frequency shift is larger for modes with more breathing- or zigzag-like structure, the frequency shifts are all relatively small $(\text{kHz})$ compared to the mode frequencies themselves $(\text{MHz})$. As such, from the mode structure itself we conclude that the pseudopotential approximation is justified.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/combined_fig_lessopaque.pdf}
\caption{The effect of micromotion on a $N=12$ ion zig-zag crystal. (a) Ion positions during one r.f. period with motion indicated by blurring. Micromotion occurs only in the $y$ direction. (b) Mode frequency spectrum for $yz$ plane (orange) and $x$ (blue). The com mode frequencies (vertical black dashed lines) are unchanged by micromotion. (c) Frequency shift $\Delta \omega_m$ normalized to $\omega_m^f$. All shifts are small (kHz) relative to the mode frequencies themselves (MHz).}
\label{fig:pseudopotentialVSmicromotion}
\end{figure}
\section{Engineering spin-spin interactions in Optical Tweezers}\label{sec:micromotion-spinspin-engineering}
In this section we investigate whether micromotion restricts our ability to engineer a target spin-spin interaction. We demonstrate that although tweezer patterns determined in the pseudopotential approximation are unsuitable once micromotion is included, corrected tweezer patterns can be found. However, the Doppler shift of the laser implementing the spin-spin interactions does cause an appreciable degradation of the engineered interaction compared to the target, which is challenging to correct.
\subsection{Naive Inclusion of Micromotion}
We first make the pseudopotential approximation and numerically optimize the tweezer frequencies $\nu_i$ and Raman beat-note frequency $\mu$ to engineer a target coupling matrix. To characterize the success of the optimization, we define an error function as
\begin{align}
\epsilon = \frac{\norm{J_T - J_E}}{\norm{J_T}}, \label{eq:errorfunc}
\end{align}
where $J_E$ and $J_T$ are the engineered and target interaction matrices respectively, and where the matrix norm is the Frobenius norm.
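The error function of Eq.~\ref{eq:errorfunc} is straightforward to implement; a minimal Python sketch with toy matrices:

```python
import numpy as np

def coupling_error(J_T, J_E):
    """Normalized Frobenius-norm distance between target and engineered couplings."""
    return np.linalg.norm(J_T - J_E) / np.linalg.norm(J_T)

# toy example: a uniform 10% shortfall in the engineered coupling gives eps = 0.1
J_T = np.array([[0.0, 1.0], [1.0, 0.0]])
J_E = 0.9 * J_T
print(coupling_error(J_T, J_E))   # 0.1 (up to rounding)
```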
During the optimization we assume that the equilibrium positions are unchanged by the tweezers. To justify this approximation we find that applying a maximum $2\pi \times 10 \text{ MHz}$ tweezer frequency on all ions causes an ion position change of $\sim 10 $~nm, and that using the corrected ion positions with an optimal set of tweezer frequencies causes a negligible change in $\epsilon$ on the order of $10^{-3}$.
For the target coupling we use a spin-ladder interaction, as shown in Fig.~\ref{fig:mmppresults}(a). Here we choose the spin-ladder since it is challenging to realize in ion crystals utilizing only the collective modes of the crystal in the absence of the tweezer potentials. It also offers variety via the coupling strength ratio $j_2/j_1$, enabling us to study the interplay of frustration and fluctuations, necessary ingredients for spontaneous continuous or discrete symmetry breaking in condensed-matter systems. The ability to tune the range of zig-zag coupling strengths ($|j_2/j_1| \gg 1$) will allow us to study the phase diagram of this well-known frustrated magnetic system, which has no exact solution.
To perform the numerical optimization we use Simulated Annealing, implemented using \emph{Optim.jl} \cite{mogensen2018optim} version 1.6.1 in \emph{Julia} \cite{bezanson2017julia} version 1.6.2. We limit the maximum tweezer laser power to 30~W and use beam waists of $w = 1 \,\mu\text{m}$. The tweezer frequencies are upper-bounded by $\nu_i/(2\pi) \leq 1.0 \text{ MHz}$, whilst the Raman transition frequency is bounded by $0.3 \text{ MHz} \leq \mu/(2\pi) \leq 1.0 \text{ MHz}$. In addition we demand that $|\mu - \omega_m| > 10 \text{ kHz}$ to ensure the phonon modes are only excited virtually. We implement this final requirement in the optimization routine by adding a large value to the cost function defined in Eq.~\ref{eq:errorfunc} if the condition is not satisfied.
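The resonance-avoidance constraint can be folded into the cost function as described; a Python sketch with a toy mode spectrum and an illustrative dimensionless gap (standing in for the 10 kHz used in the text):

```python
import numpy as np

w = np.array([1.0, np.sqrt(3.0), np.sqrt(29 / 5)])   # toy axial mode frequencies

def cost(J_T, J_E, mu, w, gap=0.01, penalty=1e3):
    """Error of Eq. (eq:errorfunc) plus a penalty if mu sits too close to a mode."""
    eps = np.linalg.norm(J_T - J_E) / np.linalg.norm(J_T)
    if np.min(np.abs(mu - w)) < gap:   # phonon modes must only be excited virtually
        eps += penalty
    return eps

J_T, J_E = np.eye(3), 0.9 * np.eye(3)
ok = cost(J_T, J_E, mu=1.4, w=w)       # well detuned: plain error
bad = cost(J_T, J_E, mu=1.001, w=w)    # near-resonant: heavily penalized
```

Because the penalty dominates the cost near any mode, the annealer is driven back into the dispersive regime without hard constraints.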
Fig.~\ref{fig:mmppresults}(b) shows the optimal interaction graph and corresponding error $\epsilon_p = 0.304$ that can be realized in the pseudopotential approximation. In Fig.~\ref{fig:mmppresults}(c) we ``naively'' take the optimal tweezer pattern found in the pseudopotential approximation and recalculate the error using the micromotion equilibrium positions and mode structure, finding $\epsilon_m = 0.654$. The difference $\epsilon_{m}-\epsilon_{p} = 0.350$ is significant, with the interaction graph showing little spin-ladder structure. As such, any optimization should include micromotion during the routine.
\begin{figure}
\includegraphics[width=\linewidth]{figs/couplinggraphs.pdf}
\caption{(a) Spin-spin couplings for the target zig-zag coupling $J_T$ with $- j_2/j_1 = 0.5$. (b) Engineered couplings $J_E$ in the pseudopotential approximation. With optical tweezers, the target coupling can be engineered with reasonably low error. (c) ``Naive'' inclusion of micromotion by using the tweezer parameters found in the pseudopotential case. The difference in mode structure results in a large increase in $\epsilon$, making the tweezer solution found in the pseudopotential approximation unsuitable.}
\label{fig:mmppresults}
\end{figure}
\subsection{Including Micromotion during optimization}
Including micromotion during the optimization routine requires re-calculating the time-dependent Hessian with a given set of $\nu_i$ and solving the $6N$ Floquet equations to find the new mode structure. Although this procedure is computationally costly, for larger $N$ the cost can be reduced by using the symmetry of the coupling matrix in the tweezer patterns. For example, the spin-ladder interaction is symmetric about $z = 0$ and thus the tweezer frequencies can be assumed to obey the same symmetry. For the $N=12$ Coulomb crystal we find this is not necessary, and so optimize over all $12$ tweezer frequencies.
In Fig.~\ref{fig:optimresults} we plot the optimization of $\epsilon$. When micromotion is included in the optimization, $\epsilon$ approaches the pseudopotential result. As such, micromotion itself is not a significant barrier to engineering interactions with optical tweezers.
\begin{figure*}
\includegraphics[width=\linewidth]{{figs/graphspinladder1NIP-trace.pdf}}
\caption{Panels (a), (b) and (c) show the error $\epsilon$ as a function of optimization evaluations ($i$) for wave vectors $\bm{k} = [0,1,0]$, $\bm{k} = [0,0,1]$ and $\bm{k} = [0,1,1]$ respectively. The target coupling is the spin-spin ladder shown in Fig.~\ref{fig:mmppresults}. When including micromotion in the optimization (dark blue line) we obtain a similar $\epsilon$ as in the pseudopotential case (dotted orange line). The $\bm{k} = [0,1,0]$ case (panel a) shows the best performance due to the tighter confinement along $y$. Doppler modulation has a detrimental effect in this scenario, as this is the direction where the micromotion amplitude is largest. Panels (d), (e) and (f) show the native spectrum (yellow) and tweezer-modified spectrum (dark blue) corresponding to (a), (b) and (c) respectively. The black dashed line shows the optimized beat-note frequency $\mu$. Note that $\abs{\omega_m^f - \mu} > 10 \text{ kHz}$ to maintain a dispersive spin-phonon coupling.}
\label{fig:optimresults}
\end{figure*}
\subsection{First Order Doppler Modulation}
The first-order Doppler shift can have a significant impact on the spin-spin couplings. Following the procedure of Ref.~\cite{Berkeland:1998}, to lowest order in $a$ and $q$ the laser field (up to a phase factor) in the reference frame of the moving ion is
\begin{align}
E_i(t) = \Re \left[ \bm{E}_0 e^{i \bm{k} \cdot \bm{r}_{i}} \sum_{n = -\infty}^{\infty} \mathcal{J}_n(\beta_i) e^{-i \omega t + i n \Omega_\text{rf} t} \right],
\end{align}
where $\omega$ is the laser frequency and $\bm{k}$ its wave vector, and $\mathcal{J}_n(\beta_i)$ is the $n$-th order Bessel function of the first kind. The (dimensionless) modulation index $\beta_i$ is given by
\begin{align}
\beta_i = \frac{1}{2} \abs{\sum_{\alpha} k_\alpha R_{i,\alpha} q_\alpha}.
\end{align}
The carrier transition amplitude is modified by $\mathcal{J}_0(\beta_i)$, and thus the interaction matrix element becomes
\begin{align}
J_{i,j}^\text{doppler} = \mathcal{J}_0(\beta_i) \mathcal{J}_0(\beta_j) J_{i,j},
\end{align}
where $J_{i,j}$ is the unmodulated coupling matrix element given in Eq.~\ref{eq:spinspincoupling}. Assuming a $411\,\text{nm}$ laser, we include Doppler modulation in the optimization. The resulting $\epsilon$ is shown in Fig.~\ref{fig:optimresults}. Although there is no Doppler modulation in $z$ (because $q_z = 0$) nor in $x$ (because $R_{i,x} = 0$), there is significant modulation along $y$. The reduction in coupling strength depends on the distance of each ion from the $y = 0$ r.f. null, which makes it challenging to correct for using optical tweezers. While this ion-dependent source of error can be compensated by tuning the intensity of the Raman beams on each ion, the extra infrastructure cost is prohibitive.
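The size of the Doppler suppression is easy to estimate from the modulation index. The sketch below (Python/SciPy) uses the single-photon wave number of a 411 nm beam and illustrative ion distances from the rf null; the actual effective Raman wave vector and ion positions depend on the beam geometry and are assumptions here.

```python
import numpy as np
from scipy.special import j0

k_y = 2 * np.pi / 411e-9                # single-photon wave number at 411 nm (1/m)
q_y = 0.202780                          # Mathieu q along y
R_y = np.array([0.0, 0.5e-6, 1.0e-6])   # example distances from the rf null (m)

beta = 0.5 * np.abs(k_y * q_y * R_y)    # modulation index beta_i per ion
suppression = j0(beta)                  # carrier reduction J_0(beta_i)
print(suppression)                      # decreases with distance from the null
```

The ion-dependent factor $\mathcal{J}_0(\beta_i)\mathcal{J}_0(\beta_j)$ then multiplies each coupling matrix element, which is why ions far from the null are the most affected.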
\section{Local Stress}\label{sec:LocalStress}
In Section~\ref{sec:micromotion-spinspin-engineering} we used tweezer beams centered on the average equilibrium positions of the ions to more accurately engineer spin-ladder interactions. However if the tweezer beams are offset from the equilibrium positions, the tweezers add not only a local trapping potential but also supply a force. In this section we investigate if this local stress enables further improvements to our engineered couplings. We show that tweezer offsets of up to $0.25\,\mu\text{m}$ offer only small improvements to $\epsilon$.
\subsection{First Order Approximation}
For simplicity we assume that we have a geometry in which micromotion does not play a role. As before we assume that the tweezers have cylindrical symmetry and supply confinement in the $yz$-plane only. The tweezer potential including an offset is then given by
\begin{equation}
V_{\text{tweezer}}(r_{i,\alpha}) = \frac{1}{2}\sum_{i=1}^{N}\sum_{\alpha=y,z} \nu_i^2 (\tilde{r}_{i,\alpha}-\delta r_{i,\alpha})^2,
\label{v_tw}
\end{equation}
where $\bm{\tilde{r}}_{i} = \bm{r}_{i} - \bm{R}_{i}$ are the positions of the ions relative to their equilibrium, $\bm{\delta r}_{i}$ is the offset of the tweezer center from the equilibrium position $\bm{R}_{i}$, and the characteristic frequency is now set to $\bar{\omega}=\Theta_z$.
Offsetting the tweezers changes the equilibrium positions of the ions. To find the new equilibrium positions $\mathbf{R}_i + \bm{\rho}_i$ we need to solve $\nabla_{\mathbf{\tilde{r}}} V_\text{total} =0$. This is computationally costly for large crystals, particularly when included in an optimization routine. Instead, as a first approximation we assume that the tweezers pull only lightly on the ions, $\bm{\rho}_{i}/\bm{\delta r}_{i} \ll 1$. This is equivalent to treating the tweezers as a small perturbation compared to the Paul trap and Coulomb interactions. For simplicity we omit the $x$-direction, which is justified when the laser implementing the spin-spin interactions has no effective wave vector in the $x$-direction and the sound-wave modes in the $x$-direction decouple, such as in a 2D ion crystal in the $yz$-plane. Both conditions can readily be met by design. Denoting the Hessian matrix of $V_0 = V_\text{trap} + V_\text{coulomb}$ by $D_0$, we expand $\nabla_{\mathbf{\tilde{r}}} V_{\text{tot}}(\bm{\rho})$ to first order,
\begin{align}
\nonumber \nabla_{\mathbf{\tilde{r}}} V_{\text{tot}}(\mathbf{\rho}) &\approx \left(\nabla_{\mathbf{\tilde{r}}}\left(\nabla_{\mathbf{\tilde{r}}} V_{0}(\mathbf{\tilde{r}})\right)\right)_{\mathbf{\tilde{r}}=0}\text{ }\bm{\rho}+\bm{\nu}^2(\bm{\rho} - \bm{\delta r})\\
&= D_0(0)\bm{\rho}+\bm{\nu}^2(\bm{\rho} - \bm{\delta r}),
\label{notwsimp}
\end{align}
where $\bm{\nu}$ is a $2N\times 2N$ diagonal matrix with diagonal elements $\nu_i$. Note the zeroth order term drops out since $(\nabla V_0)_{\mathbf{\tilde{r}}=0}=0$ by definition. The lowest order shifts in the equilibrium positions are therefore
\begin{equation}
\bm{\rho} \approx \left(D_\text{tot}(0)\right)^{-1}\bm{\nu}^2 \delta \bm{r},
\label{neweqpos}
\end{equation}
where $D_\text{tot}(0)=D_0(0) + D_{\text{tw}}$ and $D_{\text{tw}}=\bm{\nu}^2$.
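As a sanity check on Eq.~(\ref{neweqpos}), the first-order shift can be compared against a direct numerical solution of $\nabla_{\mathbf{\tilde{r}}} V_\text{tot}=0$. The following minimal sketch does this for a hypothetical dimensionless two-ion chain (a 1D stand-in for the transverse problem; the toy values of $\nu$ and $\bm{\delta r}$ are our assumptions, not the parameters used in the text):

```python
import numpy as np

# Toy model (dimensionless units, our assumption): two ions on a line in a
# harmonic trap with Coulomb repulsion, V0 = (x1^2 + x2^2)/2 + 1/(x2 - x1).
# Equilibrium at X = (-d, d) with d = (1/4)^(1/3).
d = 0.25 ** (1.0 / 3.0)
X = np.array([-d, d])

def grad_V0(x):
    r = x[1] - x[0]
    return np.array([x[0] + 1.0 / r**2, x[1] - 1.0 / r**2])

def hess_V0(x):
    r = x[1] - x[0]
    c = 2.0 / r**3
    return np.array([[1.0 + c, -c], [-c, 1.0 + c]])

nu = 0.3                       # tweezer frequency, perturbative vs. trap (=1)
dr = np.array([0.10, 0.05])    # tweezer offsets delta r_i (toy values)
Dtw = nu**2 * np.eye(2)

# First-order shift, rho ~ (D_tot(0))^(-1) nu^2 delta r  [Eq. (neweqpos)]
rho_approx = np.linalg.solve(hess_V0(X) + Dtw, nu**2 * dr)

# "Exact" shift: Newton iteration on grad(V0 + V_tweezer) = 0
x = X.copy()
for _ in range(50):
    g = grad_V0(x) + nu**2 * (x - X - dr)
    x = x - np.linalg.solve(hess_V0(x) + Dtw, g)
rho_exact = x - X
```

For these toy numbers the first-order shift agrees with the Newton solution to better than $10^{-4}$, and the ions are pulled by much less than the offset, consistent with the perturbative assumption $\bm{\rho}_{i}/\bm{\delta r}_{i} \ll 1$.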
Having approximated the new equilibrium positions, we now calculate the change in the Hessian matrix. To avoid calculating the Hessian $D_{\text{tot}}(\bm{\rho})$ directly from the new potential, we use an approximation to further reduce the computational cost,
\begin{equation}
D_{\text{tot}}(\bm{\rho})\approx D_\text{tot}(0) + \left(\nabla_{\mathbf{\tilde{r}}}(D_0)\right)_{\mathbf{\tilde{r}}=0}\text{ }\bm{\rho}+\hdots.
\label{dapprox}
\end{equation}
$D_{\text{tot}}(\bm{\rho})$ has new eigenfrequencies $\tilde{\omega}_m^{\text{str}}$ and eigenvectors $\tilde{\mathbf{b}}_m^{\text{str}}$ resulting in new spin-spin interactions as defined by Eq.~(\ref{eq:spinspincoupling}). Although only approximate, this equation gives insight into the effect of the local stress on the mode spectrum. Because both $D_{\text{tw}}$ and $D_\text{trap}$ are constant diagonal matrices, the derivatives of $D_\text{tot}(0)$ originate from the Coulomb interaction alone. Due to the long-range character of the Coulomb interactions we expect that the local stress should ease the simulation of long-range interactions. On the other hand, the local stress terms are higher order than the tweezer curvature terms, so we expect the capability of local stress to significantly change the mode spectrum to be limited. Although this suggests local stress will not offer improvements to our engineered couplings, the benefit is that errors due to misaligned tweezers are suppressed.
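Equation~(\ref{dapprox}) can likewise be checked numerically: evaluate the third-derivative tensor $\nabla_{\mathbf{\tilde{r}}} D_0$ at the unperturbed equilibrium (here by finite differences) and compare the resulting mode frequencies with those of the Hessian recomputed at the shifted equilibrium. The sketch below again uses a hypothetical dimensionless two-ion chain; the shift $\bm{\rho}$ is simply assumed:

```python
import numpy as np

# Same toy chain as in the dimensionless sketch above (our assumption):
# V0 = (x1^2 + x2^2)/2 + 1/(x2 - x1), equilibrium at (-d, d), d = (1/4)^(1/3)
d = 0.25 ** (1.0 / 3.0)
X = np.array([-d, d])

def hess_V0(x):
    r = x[1] - x[0]
    c = 2.0 / r**3
    return np.array([[1.0 + c, -c], [-c, 1.0 + c]])

nu = 0.3
Dtw = nu**2 * np.eye(2)
rho = np.array([0.005, -0.010])          # assumed small equilibrium shift

# Exact Hessian at the shifted equilibrium ...
D_exact = hess_V0(X + rho) + Dtw

# ... versus Eq. (dapprox): D_tot(rho) ~ D_tot(0) + (grad D_0)|_0 . rho,
# with the derivative tensor evaluated by central finite differences.
eps = 1e-6
gradD = np.zeros((2, 2, 2))
for k in range(2):
    e = np.zeros(2)
    e[k] = eps
    gradD[:, :, k] = (hess_V0(X + e) - hess_V0(X - e)) / (2 * eps)
D_approx = hess_V0(X) + Dtw + gradD @ rho

w_exact = np.sqrt(np.linalg.eigvalsh(D_exact))    # mode frequencies
w_approx = np.sqrt(np.linalg.eigvalsh(D_approx))
```

The first-order Hessian agrees with the exact one to second order in $\bm{\rho}$; note that only the Coulomb part contributes to $\nabla_{\mathbf{\tilde{r}}} D_0$, since the trap and tweezer terms are constant, in line with the discussion above.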
\subsection{Optimization}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/graphstress.pdf}
\caption{Error $\epsilon$ as a function of the spin-ladder coupling strength ratio $-j_2/j_1$. The error is smallest at $-j_2/j_1 = 1.5$ as this most closely resembles power-law interactions that can be well-engineered natively. The addition of tweezers offers significant improvement at all ratios. As predicted by our approximate expression Eq.~(\ref{dapprox}), tweezer offsets of up to $0.25\,\mu\text{m}$ offer only a small improvement. This suggests that the couplings are robust to tweezer misalignments. }
\label{fig:spinladderstressvsnostress}
\end{figure}
We investigate numerically whether it is possible to improve on the results obtained in the previous section if we allow the tweezers to supply local stress on the ions. For the $N=12$ ion crystal we fix the tweezer pattern to the optimal solution found in Sec.~\ref{sec:micromotion-spinspin-engineering} and optimize the tweezer offsets $0 \leq \bm{\delta r_i} \leq 0.25\,\mu\text{m}$. The offset bounds enable us to approximate the tweezers as harmonic. By fixing the tweezer parameters we only need to optimize over the $2N$ offset parameters, and can therefore calculate the new equilibrium positions $\bm{R}_i + \bm{\rho}_i$ and the Hessian directly in the optimization routine. Note that optimization over the full parameter set (including the tweezer parameters) is possible, particularly with a two-step optimization routine that first uses the approximate calculations of Eqs.~(\ref{neweqpos}) and (\ref{dapprox}) to determine whether the parameters are promising, and then, once the error falls below a set threshold, uses the exact calculation to fine-tune the parameters and obtain the true error. We also optimize the full parameter set in this manner and find no difference to our fixed-tweezer optimization.
In Fig.~\ref{fig:spinladderstressvsnostress} we vary the ratio $-j_2/j_1$ in the $12$-ion spin-ladder and calculate the error as defined in Eq.~(\ref{eq:errorfunc}). As expected, the inclusion of tweezers results in significant improvements in engineering the target spin-spin interactions. However, applying local stress to the ion crystal results in only minimal further improvement. We conclude that in the perturbative regime local stress offers little benefit; reassuringly, this also means that the engineered interactions are robust to tweezer misalignments.
\subsection{Intensity noise}\label{sec:IntensityNoise}
Finally, we study the effect of tweezer intensity fluctuations. We consider a worst-case shot-to-shot noise scenario, whereby each tweezer frequency in an optimal set $\nu$ is subject to a fluctuation $\delta \nu$. Note that $\delta \nu \propto \sqrt{\delta P}$, where $\delta P$ is the power fluctuation, since the squares of the tweezer frequencies are proportional to the laser power. To simulate the noise we multiply an optimal tweezer pattern by a random fluctuation sampled from a normal distribution with standard deviation $\delta P$. We repeat the calculation $N_\text{repeat} = 10^4$ times and take the average. In Fig.~\ref{fig:spinladdertweezerfluctuations} we plot $\epsilon$ as a function of the percentage noise in the tweezer power $\delta P$. We find that for typical experimental parameters intensity noise on the order of $\lesssim 1\%$ can have a noticeable impact on the engineered coupling. As such, sub-percent intensity stabilization is required to accurately engineer the target spin-ladder coupling.
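The shot-to-shot averaging described above can be sketched in a few lines. The toy model below (our own construction, not the production code) pins a single unit-frequency mode with a tweezer of stiffness $\nu^2$ and propagates Gaussian power noise into the mode frequency; for small $\delta P$ the relative frequency spread is $\approx \tfrac{1}{2}\,\frac{\nu^2}{1+\nu^2}\,\delta P$, which illustrates why percent-level power noise already matters for couplings that depend sensitively on the mode frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single pinned mode (toy assumption): omega^2 = omega_trap^2 + nu^2, with
# omega_trap = 1 and the tweezer stiffness nu^2 proportional to laser power.
nu2 = 0.09                 # nominal nu = 0.3
omega0 = np.sqrt(1.0 + nu2)
sigma_P = 0.01             # 1% rms shot-to-shot power noise
N_repeat = 10_000

# Each shot: multiply the power (hence nu^2) by (1 + dP), dP ~ N(0, sigma_P)
dP = sigma_P * rng.standard_normal(N_repeat)
omega = np.sqrt(1.0 + nu2 * (1.0 + dP))
rel_spread = np.std(omega) / omega0

# Small-noise prediction for the relative frequency spread
rel_spread_pred = 0.5 * nu2 / (1.0 + nu2) * sigma_P
```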
\begin{figure}[t!]
\includegraphics[width=1\linewidth]{figs/graphtwnoise.pdf}
\caption{Error $\epsilon$ when tweezer intensity fluctuations $\delta P$ are included for two different spin-ladder coupling strength ratios. The error is calculated assuming random Gaussian noise in the laser generating the optical tweezers at frequencies much slower than the coupling time.}
\label{fig:spinladdertweezerfluctuations}
\end{figure}
\section{Conclusions}\label{sec:Conclusion}
Local optical potentials, supplied by optical tweezers, allow us to create analog trapped-ion quantum simulators with an unprecedented level of flexibility concerning the possible spin-spin interaction patterns. In this work we studied the robustness of this approach in a typical experimental setup. In particular, we focused on three sources of error: (i) micromotion, (ii) tweezer misalignment, and (iii) tweezer intensity noise. We used the ferromagnetic zig-zag model, with $j_1>0$ and $j_2<0$, to quantify the adverse effect of each source of error. Our choice of model is motivated by the fact that tweezers play a fundamental role in generating the target connectivity and the range of interactions. Hence this model provides us with an upper bound on the sensitivity of the scheme to the three sources of error listed above.
We showed that the effect of micromotion is two-fold. First, it shifts the motional modes of the crystal, and second, it causes a first-order Doppler shift which in turn modulates the spin-spin couplings for each ion. We showed that the shift in the motional modes is at the level of a few percent, justifying the use of the pseudopotential approximation. However, the first-order Doppler shift may be a major source of error along the weaker confinement direction, where micromotion is the largest. In contrast, we find that in the limit where the tweezer potential is perturbative compared to the Paul trap and the Coulomb interactions, any additional stress and strain force on the ions due to the misalignment of the tweezers is negligible. Finally, we find that the intensity noise should be controlled to the sub-percent level, as this shot-to-shot noise severely impacts the fidelity with which the target interactions can be realized.
\begin{acknowledgments}
We thank Juan Diego Arias-Espinoza for sharing code. We acknowledge Rima Sch\"{u}ssler, Henrik Hirzler and Matteo Mazzanti for fruitful discussions. This work was supported by the Netherlands Organization for Scientific Research (Grant Nos. 680.91.120 and 680.92.18.05, R.G.). A.S.N is supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium programme (project number 024.003.037).
\end{acknowledgments}
\section{Introduction}
In \cite{LSY-1, LSY-2} and \cite{Y}, the concepts called
\it holomorphic-homogeneous-regular \rm and
\it uniformly-squeezing\rm, respectively, have been introduced for complex
manifolds. These concepts were essential for the estimation of
several invariant metrics. See the above cited papers for details.
\medskip
Let $\Omega$ be a complex manifold of dimension $n$. The {\it squeezing function}
$\sigma_\Omega:\Omega \to {\mathbb R}$ of $\Omega$ is defined as follows:
for each $p \in \Omega$ let
$$
{\mathcal F}(p,\Omega) := \{f\colon \Omega \to \mathbb B^n, \hbox{ 1-1 holomorphic},
f(p)=0\},
$$
where:
\begin{itemize}
\item $\mathbb B^n (p;r) = \{ z \in \mathbb C^n \colon \|z-p\|<r \}$, and
\item $\mathbb B^n = \mathbb B^n (0;1)=\mathbb B^n ((0,\ldots,0); 1)$.
\end{itemize}
Then
$$
\sigma_\Omega (p) = \sup \{r \colon \mathbb B^n (0;r)\subset f(\Omega),
\hbox{ for some }f \in {\mathcal F}(p,\Omega)\}.
$$
Furthermore, the {\it squeezing constant} $\hat \sigma_\Omega$ for $\Omega$ is defined by
$$
\hat\sigma_\Omega := \inf_{p\in\Omega} \sigma_\Omega (p) .
$$
\begin{definition}[Liu-Sun-Yau \cite{LSY-1, LSY-2}; Yeung \cite{Y}]
\rm A complex manifold $\Omega$ is called {\it holomorphic homogeneous regular}
(HHR), or equivalently {\it uniformly squeezing} (USq), if $\hat\sigma_\Omega>0$.
\end{definition}
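To illustrate the definition in the simplest case $n=1$: for the unit disk, the M\"obius automorphism $f(z) = (z-p)/(1-\bar p z)$ belongs to ${\mathcal F}(p, \mathbb B^1)$ and maps the disk onto itself, so $\sigma_{\mathbb B^1}(p) = 1$ for every $p$, and the disk is HHR/USq with squeezing constant $1$. A small numerical check (our own illustration, not taken from the sources cited above):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z, p):
    # Moebius automorphism of the unit disk sending p to 0
    return (z - p) / (1.0 - np.conj(p) * z)

def f_inv(w, p):
    # Its inverse, showing that f maps the disk ONTO the whole disk
    return (w + p) / (1.0 + np.conj(p) * w)

p = 0.3 + 0.5j
z = rng.uniform(-1, 1, 400) + 1j * rng.uniform(-1, 1, 400)
z = z[np.abs(z) < 1]          # random sample points of the unit disk
w = f(z, p)                   # stays in the disk, and f_inv recovers z
```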
Notice that the property HHR (i.e., USq) is preserved by biholomorphisms. The
squeezing function and the squeezing constant are also biholomorphic invariants.
These concepts have been developed for the study of completeness and other
geometric properties, such as the equivalence of the invariant metrics including the
Carath\'eodory, Kobayashi-Royden, Teichm\"uller, Bergman, and K\"ahler-Einstein
metrics. It is obvious that the examples of HHR/USq manifolds include bounded
homogeneous domains. In case the manifold is biholomorphic to a bounded domain and
the holomorphic automorphism orbits accumulate at every boundary point, such as in the
case of the Bers embedding of the Teichm\"uller space, the USq/HHR property again holds. A
less obvious class of examples is that of bounded strongly convex domains (as the majority
of them do not possess any holomorphic automorphisms except the identity map),
proved by S.-K. Yeung \cite{Y}. But there, some of the most standard examples,
such as bounded convex domains and bounded strongly pseudoconvex domains,
were left untouched.
Indeed, the starting point of this article is to show
\begin{theorem} \label{thm-1}
All bounded convex domains in $\mathbb C^n$ ($n\ge 1$) are HHR (i.e., USq).
\end{theorem}
The concept of the squeezing function $\sigma_\Omega$ defined above plays an important
role, and moreover it appeals to us that further investigations of this function should
be worthwhile. One immediate observation is that if $\sigma_\Omega (p)=1$ for some
$p \in \Omega$, then $\Omega$ is biholomorphic to the unit open ball (\cite{DGZ}). In
light of studies on the asymptotic behavior of several invariant metrics on
strongly pseudoconvex domains, perhaps the following question is natural to pose:
\begin{question} \label{q1}
If $\Omega$ is a bounded strongly pseudoconvex domain in $\mathbb C^n$, would
$\displaystyle{\lim_{\Omega\ni q\to p} \sigma_\Omega(q) = 1}$ hold for
every boundary point $p \in \partial\Omega$?
\end{question}
While we do not know the solution at the time of this writing, fortunately, we are able to
present the following result.
\begin{theorem}
\label{thm-2}
If $\Omega$ is a bounded domain in $\mathbb C^n$ with a ${\mathcal C}^2$ strongly convex
boundary, then $\displaystyle{\lim_{\Omega\ni q\to p} \sigma_\Omega (q) = 1}$
for every $p \in \partial\Omega$.
\end{theorem}
The proof-arguments also clarify and simplify some previously-known theorems;
those shall be mentioned in the final section as remarks.
\medskip
\it Acknowledgements. \rm This research is supported in part by SRC-GaiA
(Center for Geometry and its Applications), the Grant 2011-0030044 from The Ministry of
Education, and the research of the first named author is also supported in part by National
Research Foundation Grant 2011-0007831, of South Korea.
\section{Bounded convex domains are HHR/USq manifolds}
The aim of this section is to establish Theorem \ref{1-2} stated below. Not only does this
theorem cover the case left untreated in \cite{Y}, but also our method is different. (See also
\cite{DGZ} on this matter). Our method uses a version of the ``scaling method in several
complex variables'' initiated by S. Pinchuk \cite{Pinchuk}. In fact, we use the version
presented in \cite{K}, modified for the purpose of studying the asymptotic boundary
behavior of holomorphic invariants.
\medskip
\begin{theorem} \label{1-2}
Every convex Kobayashi hyperbolic domain in ${\mathbb C}^n$ is \break
HHR/USq.
\end{theorem}
Note that all bounded domains are Kobayashi hyperbolic, and every convex Kobayashi
hyperbolic domain is biholomorphic to a bounded domain. But the bounded realization
may not in general be convex. In that sense this theorem is more general than
Theorem \ref{thm-1}.
\medskip
\noindent\bf Proof. \rm We proceed in five steps.
\medskip
{\bf Step 1. \it Set-up}. Let $\Omega$ be a convex hyperbolic domain in $\mathbb{C}^n$.
Suppose that $\Omega$ is not HHR/USq. Then there exists a sequence $\{q_j\}$ in
$\Omega$ converging to a boundary point, say $q\in \partial\Omega$ such that
$$
\lim_{j \to \infty} \sigma_\Omega (q_j) = 0.
$$
Needless to say, it suffices to show that such a sequence cannot exist.
\medskip
{\bf Step 2. \it The $j$-th orthonormal frame}.
Let $\langle ~, ~ \rangle$ represent the standard Hermitian inner product of $\mathbb C^n$,
and let $\|v\| = \sqrt{\langle v, v \rangle}$. For every $q \in \mathbb C^n$ and a complex linear
subspace $V$ of $\mathbb C^n$, denote by
$$
B^V (q, r) = \{ p \in \mathbb C^n \colon p-q \in V \hbox{ and } \|p-q\|<r\}.
$$
Now let $q \in \Omega$ and define the positive number $\lambda(q, V)$ by
$$
\lambda (q,V) = \max \{r>0 \colon B^V (q, r) \subset \Omega \}.
$$
This number is finite for each $(q,V)$, whenever $\dim V > 0$,
since $\Omega$ is Kobayashi hyperbolic.
Fix the index $j$ momentarily. Then we choose an orthonormal basis for $\mathbb C^n$,
with respect to the standard Hermitian inner product $\langle ~, ~ \rangle$.
First consider
$$
\lambda_j^1 := \lambda (q_j, \mathbb C^n).
$$
Then there exists $q_j^{1*} \in \partial\Omega$ such that $\|q_j^{1*} - q_j\| =
\lambda_j^1$. Let
$$
e_j^1 = \frac{q_j^{1*} - q_j}{\|q_j^{1*} - q_j\|}.
$$
Then consider the complex span $\hbox{Span}_\mathbb C \{e_j^1\}$, and let $V^1$ be its
orthogonal complement in $\mathbb C^n$. Then take
$$
\lambda_j^2 := \lambda (q_j, V^1)
$$
and $q_j^{2*} \in \partial\Omega$ such that $q_j^{2*}-q_j \in V^1$ and
$\|q_j^{2*} -q_j\|=\lambda_j^2$. Then let
$$
e_j^2 := \frac{q_j^{2*} - q_j}{\|q_j^{2*} - q_j\|}.
$$
With $e_j^1, e_j^2, \ldots, e_j^{\ell}$ and $\lambda_j^1, \lambda_j^2, \ldots,
\lambda_j^\ell$ chosen,
the next element $e_j^{\ell+1}$ is selected as follows. Denote by $V^{\ell}$ the
complex orthogonal complement of
$\hbox{Span}_\mathbb C \{e_j^1, e_j^2, \ldots, e_j^\ell \}$. Then
$$
\lambda_j^{\ell+1} := \lambda (q_j, V^\ell)
$$
and $q_j^{\ell+1*} \in \partial\Omega$ such that $q_j^{\ell+1*}-q_j \in V^\ell$ and
$\|q_j^{\ell+1*} -q_j\|=\lambda_j^{\ell+1}$. Let
$$
e_j^{\ell+1} := \frac{q_j^{\ell+1*} - q_j}{\|q_j^{\ell+1*} - q_j\|}.
$$
By induction, this process yields an orthonormal set $e_j^1, \ldots, e_j^n$ for $\mathbb C^n$
and the positive numbers $\lambda_j^1, \ldots, \lambda_j^n$.
\medskip
{\bf Step 3. \it Stretching complex linear maps}. Let $\hat e^1, \ldots, \hat e^n$ denote
the standard orthonormal basis for $\mathbb C^n$, i.e.,
$$
\hat e^1 = (1,0,\ldots, 0), \hat e^2 = (0,1,0,\ldots, 0), \ldots, \hat e^n = (0, \ldots, 0, 1).
$$
Define the {\it stretching linear map} $L_j: \mathbb C^n \to \mathbb C^n$ by
$$
L_j (z) = \sum_{k=1}^n \frac{\langle z-q_j, e_j^k \rangle}{\lambda_j^k}~ {\hat e}^k
$$
for every $z \in \mathbb C^n$. Note that, for each $j$, $L_j$ maps $\Omega$
biholomorphically onto its image.
\medskip
{\bf Step 4. Supporting hyperplanes.} Notice that
$$
L_j (q_j) = 0 = (0,\ldots,0), L_j (q_j^{1*}) = \hat e^1, \ldots, L_j (q_j^{n*}) = \hat e^n.
$$
We shall consider the supporting hyperplanes, say $\Pi_j^k$ ($k=1,\ldots,n$), of
$L_j(\Omega)$ at points $L_j(q_j^{k*})$, $k=1,\ldots,n$, respectively.
\medskip
{\it Substep 4.1. The supporting hyperplane $\Pi_j^1$}: Recall that
$L_j (q_j^{1*}) = \hat e^1 =(1,0,\ldots, 0)$. Due
to the choice of $q_j^{1*}$ the supporting hyperplane of $\Omega$ at $q_j^{1*}$
must also support the sphere tangent to the boundary $\partial\Omega$.
Consequently the supporting hyperplane $\Pi_j^1$ of
$L_j(\Omega)$ must support a smooth surface (an ellipsoid) tangent to
$L_j(\partial\Omega)$ at $\hat e^1$. Thus the equation for this hyperplane $\Pi_j^1$ is
$$
\hbox{\rm Re}\, (z_1-1) = 0
$$
(independent of $j$; in particular, it is perpendicular to $\hat e^1$).
We also note that
$$
L_j (\Omega) \subset \{ (z_1, \ldots, z_n) \in \mathbb C^n \colon \hbox{\rm Re}\, z_1 < 1\}.
$$
\medskip
{\it Substep 4.2. The rest of supporting hyperplanes $\Pi_j^k$, for $k\ge 2$}: First
consider the case $k=2$. Then the supporting hyperplane $\Pi_j^2$
passes through $L_j (q_j^{2*}) = \hat e^2 =(0,1,0,\ldots, 0)$. Since the restriction of
$\Omega$ to $V^1$ contains the sphere in $V^1$ tangent to the restriction of
$\partial\Omega$ at the point $\hat e^2$, the supporting hyperplane $\Pi_j^2$ restricted
to $L_j(V^1)$ takes the equation $\{(z_2,\ldots,z_n) \in \mathbb C^{n-1}\colon
\hbox{\rm Re}\, (z_2 -1) = 0\}$.
Hence
$$
\Pi_j^2 = \{(z_1, \ldots, z_n)\in\mathbb C^n \colon \hbox{\rm Re}\, (a_j^{2,1} z_1 + a_j^{2,2} (z_2 -1))
= 0\}
$$
for some $(a_j^{2,1}, a_j^{2,2}) \in \mathbb C^2$ with $\Big|a_j^{2,1}\Big|^2 +
\Big|a_j^{2,2}\Big|^2=1$ and $a_j^{2,2} >0 $.
We also have that
$$
L_j(\Omega) \subset \{ (z_1, \ldots, z_n) \in \mathbb C^n \colon \hbox{\rm Re}\, (a_j^{2,1} z_1 +
a_j^{2,2} (z_2 -1))<0\}.
$$
For $k \in \{3, \ldots, n\}$, one deduces inductively that the supporting hyperplane
$\Pi_j^k$
passes through the point $\hat e^k$, and that
\begin{multline*}
\Pi_j^k = \{(z_1, \ldots, z_n)\in\mathbb C^n \colon \\ \hbox{\rm Re}\, (a_j^{k,1} z_1 + \cdots +
a_j^{k,k-1}
z_{k-1} + a_j^{k,k} (z_k -1))=0\},
\end{multline*}
with $a_j^{k,k} >0$ and $\sum_{\ell=1}^k \Big|a_j^{k,\ell}\Big|^2=1$.
Also,
\begin{multline*}
L_j (\Omega) \subset \{(z_1, \ldots, z_n)\in\mathbb C^n \colon \\
\hbox{\rm Re}\, (a_j^{k,1} z_1 + \cdots + a_j^{k,k-1} z_{k-1} + a_j^{k,k} (z_k -1))<0\}.
\end{multline*}
\medskip
{\it Substep 4.3. Polygonal envelopes}: We add this small substep for convenience.
From the discussion so far in this Step, we have the $j$-th polygonal envelope (of
$L_j(\Omega)$)
\begin{eqnarray*}
\Sigma_j & := & \{(z_1, \ldots, z_n) \in \mathbb C^n : \\
& & \qquad \qquad \quad \hbox{\rm Re}\, z_1 < 1, \\
& & \qquad \qquad \hbox{\rm Re}\, (a_j^{2,1} z_1 + a_j^{2,2} (z_2 -1))<0, \\
& & \qquad \qquad \qquad \qquad \vdots \\
& & \qquad \quad \hbox{\rm Re}\, (a_j^{n,1} z_1 + \cdots + a_j^{n,n-1} z_{n-1} + a_j^{n,n}
(z_n -1))<0\}.
\medskip
{\bf Step 5. Bounded realization}.
Notice that, for every $k \in \{1,\ldots,n\}$, the disc
$$
D_j^k := \{z=(z_1, \ldots, z_n) \in \mathbb C^n \colon \langle z-q_j, e_j^\ell \rangle = 0,
\forall \ell \neq k; \|z-q_j\|<\lambda_j^k \}
$$
is contained in $\Omega$. Hence, every $L_j (\Omega)$ contains the discs $D^k := \{
\zeta \hat e^k \colon \zeta \in \mathbb C, |\zeta|<1\}$ for every $k = 1,\ldots, n$. Since
$\Omega$ is convex and since $L_j$ is linear, $L_j (\Omega)$ is also convex.
Therefore, the ``unit acorn''
$$
A := \{ (z_1, \ldots, z_n) \in \mathbb C^n \colon |z_1|+ \cdots + |z_n| < 1 \}
$$
is contained in $L_j (\Omega)$. This restricts the unit normal vectors \break $n_j^k :=
(a_j^{k,1}, \ldots, a_j^{k,k}, 0, \ldots, 0) \in \mathbb C^n$ for every $k=2,\ldots, n$. Namely,
there is a positive constant $\delta>0$ independent of $j$ and $k$ such that $a_j^{k,k}
\ge \delta$ for every $j, k$.
Now taking a subsequence (of $q_j$), we may assume that the sequence of unit vectors
$\{n_j^k\}_{j=1}^\infty$ converges for every $k\in \{2,\ldots,n\}$. Let us write
$$
\lim_{j\to\infty} n_j^k = n^k = (a^{k,1}, \ldots, a^{k,k}, 0,\ldots, 0)
$$
for each $k = 1,2,\ldots, n$.
Consider the maps
$$
B_j (z_1, \ldots, z_n) = (\zeta_1, \ldots, \zeta_n)
$$
defined by
\begin{eqnarray*}
\zeta_1 & = & z_1, \\
\zeta_2 & = & a_j^{2,1} z_1 + a_j^{2,2} z_2, \\
& \vdots & \\
\zeta_n & = & a_j^{n,1} z_1 + \ldots + a_j^{n,n} z_n. \\
\end{eqnarray*}
Then it follows that
\begin{eqnarray*}
B_j \circ L_j (\Omega) & \subset & B_j (\Sigma_j) \\
& = &
\{(\zeta_1, \ldots, \zeta_n) \in \mathbb C^n \colon \hbox{\rm Re}\, \zeta_1<1, \hbox{\rm Re}\,\zeta_2 < a_j^{2,2}, \ldots,
\hbox{\rm Re}\,\zeta_n < a_j^{n,n}\}
\end{eqnarray*}
Now we consider the Cayley transformation, for each $j$,
$$
\Phi_j (z_1, \ldots, z_n) = \Big( \frac{z_1}{2-z_1}, \frac{z_2}{2a_j^{2,2} - z_2},
\ldots, \frac{z_n}{2a_j^{n,n} - z_n} \Big).
$$
Then $\Phi_j \circ B_j (\Sigma_j) \subset D^n$,
where $D^n$ denotes the unit polydisc in $\mathbb C^n$ centered at the origin. Also, there
exists a positive constant $\delta' \in (0,\delta)$ such that $\Phi_j \circ B_j \circ L_j (\Omega)$
contains the ball of radius $\delta'$ centered at the origin $0$.
Since $\Phi_j \circ B_j \circ L_j (q_j) = (0,\ldots,0)$ for every $j$, we now conclude
that the squeezing function satisfies
$$
\sigma_\Omega (q_j) \ge \frac{\delta'}{\sqrt{n}}.
$$
This estimate, which holds for every sequence $q_j$ approaching the boundary, yields
the desired contradiction at last. Thus the proof is complete. \hfill $\Box$
\section{Boundary behavior of squeezing function on strongly convex domains}
Consider first the following
\begin{definition}
Let $\Omega$ be a domain in $\mathbb C^n$. A boundary point $p \in \partial\Omega$ is said
to be {\it spherically-extreme} if
\begin{itemize}
\item[(1)] the boundary $\partial\Omega$ is ${\mathcal C}^2$ smooth in an open neighborhood
of $p$, and
\item[(2)] there exists a ball $\mathbb B^n (c(p);R)$ in $\mathbb C^n$ of some radius $R$, say, centered
at some point $c(p)$ such that $\Omega \subset \mathbb B^n(c(p);R)$ and $p \in \partial\Omega \cap
\partial \mathbb B^n(c(p);R)$.
\end{itemize}
\end{definition}
The main goal of this section is to establish
\begin{theorem}
\label{thm-2g}
If a domain $\Omega$ in $\mathbb C^n$ admits a
spherically-extreme boundary point $p$, say, in a neighborhood of which the
boundary $\partial\Omega$ is ${\mathcal C}^2$ smooth, then
$$
\lim_{\Omega\ni q \to p} \sigma_\Omega (q)=1.
$$
\end{theorem}
\bf Proof. \rm
Since every boundary point of a ${\mathcal C}^2$ strongly convex bounded domain is
spherically-extreme, this theorem implies Theorem \ref{thm-2}. The rest of this section is devoted
to the proof of Theorem \ref{thm-2g}, which we shall carry out in seven steps.
\medskip
{\bf Step 1: Sphere Envelopes.}
Let $\Omega$ be a bounded domain in $\mathbb C^n$ with a boundary point
$p \in \partial\Omega$ such that
\begin{itemize}
\item[(\romannumeral 1)]$\partial\Omega \cap B^n (p; r_0)$ is ${\mathcal C}^2$-smooth
for some $r_0>0$, and
\item[(\romannumeral 2)] $p$ is a spherically-extreme boundary point of $\Omega$.
\end{itemize}
Then there exist positive constants $r_1, r_2$ and $R$ with $r_0>r_1>r_2$ such that
every $q \in \Omega \cap \mathbb B^n (p; r_2)$ admits points
$b(q) \in \partial\Omega \cap \mathbb B^n (p;r_1)$ and $c(q) \in \mathbb C^n$
satisfying the conditions
\begin{itemize}
\item[(\romannumeral 3)] $\|q-b(q)\|<\|q-z\|$ for any
$z \in \partial\Omega - \{b(q)\}$, and
\item[(\romannumeral 4)] $\|c(q)-b(q)\|=R$ and $\Omega \subset \mathbb B^n (c(q);R)$.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[height=1.9in, width=2.00in]{pic1.eps}
\caption{\sf Sphere envelopes}
\end{figure}
Notice that (\romannumeral 3) says that $b(q)$ is the unique boundary point closest to $q$, and that the constant $R$ in (\romannumeral 4) is independent of the choice of $q \in \mathbb B^n(p;r_2)$.
\bigskip
{\bf Step 2: Centering.}
From this stage we shall exploit the familiar notation
\begin{eqnarray} \label{T1}
z &=& (z_1,\ldots, z_n), \nonumber \\
z' &=& (z_2, \ldots, z_n), \\
u &=& \hbox{\rm Re}\, z_1, \nonumber\\
v &=& \hbox{\rm Im}\, z_1.\nonumber
\end{eqnarray}
\smallskip
For each $q \in \Omega \cap B^n (p, r_2)$, choose a unitary transform $U_q$
of $\mathbb C^n$ such that the map $A_q (z) := U_q (z-b(q))$
satisfies the following conditions:
\begin{equation} \label{T2}
A_q (q) = (\lambda_q, 0, \ldots, 0)
\end{equation}
for some $\lambda_q > 0$, and
\begin{equation} \label{T3}
A_q (\Omega) \subset \mathbb B^n ((R,0,\ldots,0); R)
= \{ z \in \mathbb C^n \colon |z_1-R|^2 + \|z'\|^2 < R^2 \}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[height=1.3in, width=3.70in]{pic2.eps}
\caption{\sf The Centering Process}
\end{figure}
Then there exists a positive constant $r_3<r_2$ such that
\begin{multline} \label{T4}
z \in A_q (\Omega) \cap B^n (0, r_3)
\\ \Leftrightarrow \|z\| < r_3 ~\hbox{ and }~
2 u > H_{b(q)} (z') + {\mathcal K}_{b(q)} (v, z') + {\mathcal R}_{b(q)} (v, z')
\end{multline}
where:
\begin{itemize}
\item $H_{b(q)}$ is a quadratic positive-definite Hermitian form such that there
exists a constant $c_0 > 0$, independent of $q$, satisfying
\begin{equation} \label{T5}
H_{b(q)} (z') \ge c_0 \|z'\|^2
\end{equation}
and
\item there exists a constant $C > 0$, independent of $q \in \mathbb B^n (p; r_3) \cap \Omega$, such that
\begin{equation} \label{T6}
|{\mathcal K}_{b(q)} (v, z')| \le C(|v|^2 +|v|\|z'\|),
\end{equation}
whenever $z \in \mathbb B^n (0, r_3)$. Furthermore, we have
$$
|{\mathcal R}_{b(q)} (v, z')| = o(|v|^2 + \|z'\|^2).
$$
In particular, the choice of $r_3$ can be made so that
$$
|{\mathcal R}_{b(q)} (v, z')| \le \frac{c_0}{2}(|v|^2 + \|z'\|^2).
$$
\end{itemize}
Notice that
$$
\lim_{\Omega \ni q \to p} b(q) = p,
\qquad
\lim_{\Omega \ni q \to p} H_{b(q)}(z') = H_p(z'),
$$
and
$$
\lim_{\Omega \ni q \to p} A_q = I \hbox{ (the identity map)}.
$$
This last observation and an inductive construction yield that for each integer $m>2$ there exists a strictly-increasing integer-valued function $k(m)$ such that
\begin{equation} \label{T7}
\mathbb B^n (0; r_3/(2k(m))) \subset A_q\big(\mathbb B^n (p; r_3/k(m))\big) \subset \mathbb B^n (0; r_3/m),
\end{equation}
whenever $q \in \mathbb B^n (p, \frac{r_3}{2k(m)})$.
\bigskip
\bf Step 3: The Cayley transform. \rm
The {\it Cayley transform} considered here is the map
\begin{equation} \label{T8}
\kappa (z) := \Big( \frac{1-z_1}{1+z_1}, \frac{\sqrt{2} z_2}{1+z_1}, \ldots, \frac{\sqrt{2} z_n}{1+z_1}\Big),
\end{equation}
well-defined except at points of $Z= \{z \in \mathbb C^n \colon z_1=-1\}$.
Notice that this transform maps the open unit ball $\mathbb B^n (0;1)$ biholomorphically onto the
Siegel half space
\begin{equation} \label{T9}
{\mathcal S}_0 := \{z \in \mathbb C^n \colon 2 \hbox{\rm Re}\, z_1 > \|z'\|^2 \}.
\end{equation}
Moreover, $\kappa \circ \kappa = \hbox{id}$ and consequently $\kappa ({\mathcal S}_0) = \mathbb B^n (0;1)$. Notice also that, if we denote ${\bf 1}=(1,0,\ldots,0)$ and $-{\bf 1}=(-1,0,\ldots,0)$, then we have
$\kappa({\bf 1}) =(0,\ldots,0)$, $\kappa((0,\ldots,0))={\bf 1}$,
$\kappa(-{\bf 1}) = \infty$ and $\kappa(\infty) = -{\bf 1}$.
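The stated properties of $\kappa$ can be verified directly. The short check below (in $\mathbb C^2$, with our own sample points) confirms that $\kappa$ is an involution and maps random points of $\mathbb B^2$ into the Siegel half space ${\mathcal S}_0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def cayley(z):
    # kappa(z) = ((1 - z1)/(1 + z1), sqrt(2) z'/(1 + z1)) for z = (z1, z2)
    z1, z2 = z
    return np.array([(1 - z1) / (1 + z1), np.sqrt(2) * z2 / (1 + z1)])

def in_ball(z):       # open unit ball B^2(0;1)
    return np.abs(z[0])**2 + np.abs(z[1])**2 < 1

def in_siegel(z):     # S_0 = {2 Re z1 > |z'|^2}
    return 2 * z[0].real > np.abs(z[1])**2

samples = []
while len(samples) < 100:
    z = rng.uniform(-1, 1, 2) + 1j * rng.uniform(-1, 1, 2)
    if in_ball(z):    # rejection-sample points of the unit ball
        samples.append(z)
```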
\bigskip
\bf Step 4: Stretching. \rm
Let $q \in \Omega \cap \mathbb B^n (p; \frac{r_3}{2k(m)})$. If we let $m$ tend to infinity, then of course $A_q (q) = (\lambda_q, 0, \ldots, 0)$ approaches $A_q(b(q))=(0,\ldots,0)$, and so $\lambda_q$ approaches zero. For simplicity we write $\lambda = \lambda_q$, suppressing the notation $q$; but $\lambda$ is still dependent upon $q$. Note that
\begin{equation} \label{T10}
A_q (\mathbb B^n(c(q); R)) = \{z \in \mathbb C^n \colon 2 R \ \hbox{\rm Re}\, z_1 > \|z\|^2 \}.
\end{equation}
Define the map $\Lambda_\lambda\colon \mathbb C^n \to \mathbb C^n$ by
\begin{equation} \label{T11}
\Lambda_\lambda (z) := \Big( \frac{z_1}{\lambda}, \frac{z_2}{\sqrt{\lambda}}, \cdots,
\frac{z_n}{\sqrt{\lambda}} \Big),
\end{equation}
the {\it stretching map}, introduced originally by Pinchuk (cf.\ \cite{Pinchuk}).
Recall (\ref{T6}). This stretching map transforms $A_q(\Omega)\cap \mathbb B^n (0; \frac{r_3}{3})$ to the domain
$
\Lambda_\lambda \big(A_q(\Omega) \cap \mathbb B^n (0;\frac{r_3}{3})\big)
$
so that
\begin{eqnarray}
& z & \in \Lambda_\lambda \circ A_q(\Omega)
\cap \mathbb B^n \Big(0;\frac{r_3}{\sqrt{\lambda}k(3)}\Big) \label{T12} \\
& & \Leftrightarrow \|z\|<\frac{r_3}{\sqrt{\lambda}k(3)} \text{ and } \nonumber\\
& &\qquad 2u > H_{b(q)}(z') + \frac1\lambda {\mathcal K}_{b(q)} (\lambda v, \sqrt{\lambda}z') +
\frac1\lambda {\mathcal R}_{b(q)} (\lambda v, \sqrt{\lambda}z'). \nonumber
\end{eqnarray}
On the other hand, notice that
$$
\Big\|\frac1\lambda {\mathcal K}_{b(q)} (\lambda v, \sqrt{\lambda}z')\Big\|
\le
C\sqrt{\lambda}(\sqrt{\lambda} |v|^2 + |v|\|z'\|)
$$
and that
$$
\Big\|\frac1\lambda {\mathcal R}_{b(q)} (\lambda v, \sqrt{\lambda}z')\Big\|
\le \frac1\lambda\, o (|\lambda v|^2 + \lambda\|z'\|^2) = \frac1\lambda\, o(\lambda)
$$
on $\mathbb B^n(0;\rho)$ for any fixed constant $\rho>0$. Notice that both terms
approach zero as $\lambda$ tends to zero. Thus, these terms can become sufficiently
small if we limit $q$ to be contained in $\mathbb B^n (p; \frac{r_3}{2k(m)})$ for some
sufficiently large $m$.
\medskip
\bf Step 5: Set-convergence. \rm
This step is in part heuristic; strictly speaking, the heuristics appearing in this step,
especially those concerning set-convergences, are not used in the proof.
We include this step because it seems to help us grasp the logical structure of the
proof. On the other hand, the constructions in (\ref{T13})--(\ref{T15}) shall be used in
the proof-arguments, especially in Step 7.
The main role of the stretching map $\Lambda_\lambda$, as $\lambda \searrow 0$, is to
rescale the domains successively, letting them converge to their set-limits.
For instance if one considers
$$
\Lambda_\lambda ( A_q (\Omega) \cap B^n (0,r_3))
$$
then, one can see that $\Lambda_\lambda (B^n (0, r_3))$ contains $B^n (0, r_3/\sqrt{\lambda})$,
a very large ball, which exhausts $\mathbb C^n$ successively as $\lambda$ approaches zero.
Meanwhile, within that large ball, $\Lambda_\lambda (A_q (\Omega))$ is restricted only by
the inequality
$$
2 u > H_{b(q)} (z') + \tilde K_\lambda (v,z')
$$
where $\tilde K_\lambda$ tends to $0$ as $\lambda \searrow 0$ and hence is negligible. One can
imagine that indeed the ``limit domain'' of this procedure should be
\begin{equation} \label{T13}
\widehat\Omega := \{z \in \mathbb C^n \colon 2u > H_p (z')\}.
\end{equation}
Here, of course, $H_p(z')$ is the quadratic positive-definite Hermitian form which appears in the defining inequality of $\Omega$ about the boundary point $p$ (understood as the origin):
$$
2 \hbox{\rm Re}\, z_1 > H_p (z') + o(|\hbox{\rm Im}\, z_1|+ \|z'\|^2).
$$
Notice that
$$
\kappa(\widehat\Omega) = \{z \in \mathbb C^n \colon |z_1|^2 + H_p (z') < 1\},
$$
and hence there is a $\mathbb C$-linear isomorphism
\begin{equation}\label{T14}
L\colon \mathbb C^n\to \mathbb C^n
\end{equation}
that maps $\kappa(\widehat\Omega)$ biholomorphically onto the unit ball $\mathbb B^n(0;1)$ with $L({\bf 1})= {\bf 1}$.
Before leaving this step we remark that, since $\Omega \subset \mathbb B^n(c(q);R)$ whenever
$q \in \mathbb B^n (p;r_2)$, $A_q (\Omega) \subset A_q (\mathbb B^n(c(q);R))
= \mathbb B^n ((R,0,\ldots,0);R)$. This in turn implies that
\begin{eqnarray}
\Lambda_\lambda \circ A_q(\Omega)
& \subset & \Lambda_\lambda\big(\mathbb B^n ((R,0,\ldots,0);R)\big)
\label{T15} \\
& \subset & {\mathcal E} := \{z\in\mathbb C^n \colon 2R~ \hbox{\rm Re}\, z_1 > \|z'\|^2\}. \nonumber
\end{eqnarray}
The last inclusion follows by (\ref{T10}).
\bigskip
\bf Step 6: Auxiliary domains. \rm
Let $\delta > 0$ be given. Consider the domains
\begin{equation}\label{T16}
{\mathcal G}_\delta := \{z \in \mathbb C^n \colon 2 u > -\delta |v| + (1-\delta) H_{b(q)} (z') \},
\end{equation}
\begin{equation}\label{T17}
{\mathcal F}_\delta := \{z \in \mathbb C^n \colon 2 u > \delta |v| + (1+\delta) H_{b(q)} (z') \}
\end{equation}
and
\begin{equation}\label{T18}
{\mathcal H}_q := \{z\in\mathbb C^n \colon 2 u > H_{b(q)}(z') \},
\end{equation}
in addition to $\widehat\Omega$ and ${\mathcal E}$ introduced in (\ref{T13}) and (\ref{T15}).
\begin{figure}[h]
\centering
\includegraphics[height=1.8in, width=2in]{pic4.eps}
\caption{\sf Auxiliary domains ${\mathcal G}_\delta$ and ${\mathcal F}_\delta$}
\end{figure}
\smallskip
\noindent
A straightforward computation checks that the image $\kappa ({\mathcal G}_\delta)$ of ${\mathcal G}_\delta$ via the Cayley transform $\kappa$ introduced earlier is
\begin{equation} \label{T19}
\kappa({\mathcal G}_\delta) = \{z \in \mathbb C^n \colon |z_1|^2 - \frac{\delta}2 |z_1-\bar z_1| + (1-\delta)H_{b(q)} (z') < 1\}.
\end{equation}
Hence, there exists $\delta_0 > 0$ such that, for every $\delta$ with $0<\delta<\delta_0$,
$\kappa({\mathcal G}_\delta)$ is a bounded domain. Notice also that this domain becomes arbitrarily
close to the domain $\kappa ({\mathcal H}_{q})$ as $\delta$ becomes arbitrarily small.
It follows therefore that, for every $\epsilon > 0$, there exists $\delta_0>0$ such that
\begin{equation} \label{T20}
L\circ \kappa({\mathcal G}_\delta) \subset \mathbb B^n (0;1+\epsilon)
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[height=1.40in, width=2.90in]{pic3.eps}
\caption{$G(\Omega)=L\circ \kappa \circ \Lambda_\lambda \circ A_q(\Omega)$ for $q \sim p$}
\end{figure}
whenever $0<\delta<\delta_0$. Moreover, observe that the stretching map
$\Lambda_\lambda$ preserves each of the domains
$$
{\mathcal F}_\delta, {\mathcal G}_\delta, \widehat\Omega, {\mathcal E} \text{ and } {\mathcal H}_q.
$$
Let us now define the expression
\begin{equation} \label{T21}
G(z) := L \circ \kappa \circ \Lambda_\lambda \circ A_q (z)
\end{equation}
for $z \in \mathbb C^n - (\Lambda_\lambda \circ A_q)^{-1} (Z)$. [The set $Z$ has
been defined in (\ref{T8}). Notice that this expression $G$ depends upon
$q \in \mathbb B^n (0; r_2)$, for instance; see Figure 3 in Step 4 for an illustration.]
In particular, this $G$ maps $\Omega$ onto its image $G(\Omega)$ biholomorphically.
\bigskip
\bf Step 7: Proof of Theorem \ref{thm-2g}. \rm
Our present goal is to show the following
\medskip
\noindent
\bf Claim. \it For any $\epsilon$ with $0<\epsilon<1/2$, there exists an integer $m >0$ such that
\begin{equation} \label{T22}
\mathbb B^n (0; 1-\epsilon) \subset G(\Omega) \subset \mathbb B^n (0; 1+\epsilon)
\end{equation}
whenever $q \in \Omega \cap B^n (p, \frac{r_3}{2k(m)})$. \rm
\medskip
Since $G(q)=0$, this implies that the squeezing function $\sigma_\Omega$ satisfies
$$
\sigma_\Omega (q) \ge \frac{1-\epsilon}{1+\epsilon}.
$$
Notice that this completes the proof of Theorem \ref{thm-2g}.
\medskip
Therefore it remains only to establish this claim.
\medskip
Start with $\mathbb B^n(0; 1-\epsilon)$. Notice first, by the definition of ${\mathcal F}_\delta$, that
for every $\delta>0$ there exists $m_1>0$ such that
$$
{\mathcal F}_\delta \cap \mathbb B^n (0;r_2/m) \subset A_q(\Omega) \cap \mathbb B^n (0;r_2/m),
$$
for any $m>m_1$.
Also,
$$
\kappa^{-1}\circ L^{-1}( \mathbb B^n(0; 1-\epsilon)) \subset\subset \kappa^{-1}\circ L^{-1}( \mathbb B^n(0; 1)) = \widehat\Omega.
$$
As discussed in (\ref{T4})--(\ref{T7}), $L \circ \kappa ({\mathcal H}_q)$ is sufficiently close to
$L\circ \kappa (\widehat\Omega)$, which is the unit ball,
whenever $q \in \mathbb B^n (p;\frac{r_3}{2k(m)})$ and $m$ is sufficiently large.
Therefore there exists an integer $m_2 > m_1$ such that
$(L\circ \kappa)^{-1}(\mathbb B^n(0; 1-\epsilon)) \subset\subset {\mathcal H}_q$ whenever $q \in \mathbb B^n (p;r_3/m_2)$.
As in (\ref{T19}), a direct computation yields
\begin{equation} \label{T23}
\kappa({\mathcal F}_\delta) = \{z \in \mathbb C^n \colon |z_1|^2 + \frac{\delta}2 |z_1-\bar z_1| + (1+\delta)H_{b(q)} (z') < 1\}.
\end{equation}
Now, consider the set $L\circ \kappa \circ \Lambda_\lambda ({\mathcal F}_\delta)$ for each
$\delta > 0$. (Recall that $\Lambda_\lambda ({\mathcal F}_\delta)={\mathcal F}_\delta$ as remarked
in the line below (\ref{T20}).) These domains increase monotonically as $\delta \searrow 0$
(since ${\mathcal F}_\delta$'s do) in such a way that
the union $\bigcup_{0<\delta <\delta_0} L \circ \kappa ({\mathcal F}_\delta)$ becomes arbitrarily close to $\mathbb B^n (0; 1)$ when $m$ is sufficiently large.
\begin{figure}[h]
\centering
\includegraphics[height=2in, width=2.70in]{pic5.eps}
\caption{$\mathbb B^n (0;1-\epsilon) \subset G(\Omega)$}
\end{figure}
Consequently there exists a constant $\delta>0$ such that $\mathbb B^n(0;1-\epsilon) \subset\subset L \circ \kappa ({\mathcal F}_\delta)$. Moreover there is an integer $m_3 > m_2$ such that
\begin{equation} \label{T24}
\Lambda_\lambda^{-1} \big(\kappa^{-1}\circ L^{-1}(\mathbb B^n(0; 1-\epsilon))\big) \subset \mathbb B^n (0;r_3/k(m_1)),
\end{equation}
as $\Lambda_\lambda^{-1}$ (with $\lambda < r_3/m_2$ sufficiently small) scales compact subsets down to a small neighborhood of the origin.
Hence, we have
$$
\Lambda_\lambda^{-1} \big(\kappa^{-1}\circ L^{-1}(\mathbb B^n(0; 1-\epsilon))\big) \subset {\mathcal F}_\delta \cap \mathbb B^n (0;r_3/k(m_1)) \subset \Omega.
$$
Consequently,
\begin{eqnarray}
\mathbb B^n (0; 1-\epsilon)
& \subset & L \circ \kappa \circ \Lambda_\lambda
({\mathcal F}_\delta \cap \mathbb B^n (0;r_3/k(m_1)) ) \nonumber \\
& \subset & L \circ \kappa \circ \Lambda_\lambda (A_q (\Omega)) \label{T25} \\
& = &G(\Omega), \nonumber
\end{eqnarray}
as long as $q \in \mathbb B^n (p; \frac{r_3}{2k(m_3)})$.
\medskip
Now we show that $G(\Omega) \subset \mathbb B^n (0; 1+\epsilon)$. Consider
$$
\Omega' := \Omega - \mathbb B^n (p, r_2).
$$
Notice that there exists an integer $\ell \gg 1$ such that
\begin{equation} \label{T26}
A_q (\Omega') \subset A_q(\Omega) - \mathbb B^n (0; r_2/\ell) \subset {\mathcal E} - \mathbb B^n (0; r_2/\ell).
\end{equation}
Now, there exists an integer $m_4>3$ such that, if $m>m_4$ and $q \in \mathbb B^n (p,\frac{r_3}{2k(m)})$, then
$$
\Lambda_\lambda ({\mathcal E} - \mathbb B^n (0; r_2/\ell)) \subset \{z \in {\mathcal E} \colon \hbox{\rm Re}\, z_1 >
\frac{r_2}{r_3}\cdot \frac{m_4}{\ell} \}.
$$
This in turn implies that
$$
G(\Omega') \subset L \circ \kappa (\{z \in {\mathcal E} \colon \hbox{\rm Re}\, z_1 >
\frac{r_2}{r_3}\cdot \frac{m_4}{\ell} \}) \subset \mathbb B^n (-{\bf 1}; \rho(m_4))
$$
for some $\rho(m)$ which approaches zero as $m$ tends to infinity; a direct computation with the Cayley transform and the choice of $L$ (cf.\ (\ref{T14})) verify this immediately. Therefore, choosing $m_4$ sufficiently large, we arrive at
\begin{equation} \label{T27}
G(\Omega') \subset \mathbb B^n (-{\bf 1}; \epsilon).
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[height=2in, width=3.40in]{pic6.eps}
\caption{$G(\Omega') \subset \mathbb B^n (-{\bf 1};\epsilon)$}
\end{figure}
For the $\epsilon$ given above, there exists $\delta > 0$ such that
\begin{equation} \label{T28}
L\circ \kappa ({\mathcal G}_\delta) \subset \mathbb B^n (0;1+\epsilon).
\end{equation}
Fix this $\delta$. Then, recall how the auxiliary domain ${\mathcal G}_\delta$ was defined
in (\ref{T16}). Given any $\delta > 0$, according to (\ref{T4})--(\ref{T6}),
there exists $\rho >0$ such that
$$
A_q (\Omega) \cap \mathbb B^n(0;\rho) \subset {\mathcal G}_\delta.
$$
\begin{figure}[h]
\centering
\includegraphics[height=2in, width=3.40in]{pic6a.eps}
\caption{$G(\Omega) \subset \mathbb B^n (0; 1+\epsilon)$}
\end{figure}
On the other hand, we can go back to (\ref{T26}) and require that $r_2/\ell < \rho/2$. Then
we have
\begin{equation} \label{T29}
A_q (\Omega) \cap \mathbb B^n(0;2r_2/\ell) \subset {\mathcal G}_\delta.
\end{equation}
Since there exists an integer $m_5>0$ such that
$A_q (\mathbb B^n (p; r_2/\ell)) \subset \mathbb B^n (0; 2 r_2/\ell)$, we have that
$$
G(\Omega - \Omega') \subset L \circ \kappa \circ \Lambda_\lambda
\big( A_q(\Omega) \cap \mathbb B^n (0; 2 r_2/\ell) \big).
$$
This implies
\begin{eqnarray}
G(\Omega - \Omega')
& \subset & L \circ \kappa \circ \Lambda_\lambda
\big( A_q(\Omega) \cap \mathbb B^n (0; 2 r_2/\ell) \big) \nonumber \\
& \subset & L \circ \kappa \circ \Lambda_\lambda({\mathcal G}_\delta)
\qquad\qquad\qquad\qquad \text{by (\ref{T29})} \label{T30}\\
& \subset & L \circ \kappa ({\mathcal G}_\delta)
\qquad\qquad\text{by the sentence following (\ref{T20})}\nonumber\\
& \subset & \mathbb B^n (0;1+\epsilon).\nonumber
\end{eqnarray}
By (\ref{T27}) and (\ref{T30}) we have that
$$
G(\Omega) \subset \mathbb B^n (0; 1+\epsilon).
$$
This completes the proof of the Claim and of Theorem \ref{thm-2g}. \hfill $\Box$
\medskip
\section{Remarks}
In this final section we present several remarks.
\subsection{On the spherically-extreme points}
Pertaining to Question \ref{q1}, one naturally arising question is whether
one may re-embed (the closure of) the bounded strongly pseudoconvex domain so that
the pre-selected boundary point becomes spherically extreme. A recent paper
by Diederich-Fornaess-Wold \cite{DFW} shows that the answer to this question is
affirmative. Owing to this new result, Theorem \ref{thm-2g} now implies the following
\begin{theorem} If $\Omega$ is a bounded domain in $\mathbb C^n$ with a ${\mathcal C}^2$-smooth strongly pseudoconvex boundary, then $\lim_{\Omega\ni z \to \partial\Omega} \sigma_\Omega (z) = 1$.
\end{theorem}
On the other hand, a more ambitious attempt would be to re-embed
the domain using the
automorphisms of $\mathbb C^n$ to achieve the same goal. But this cannot work. Here is a
counterexample to such an attempt:
\begin{example} \rm Consider the domain $U$ which is the open $1/10$-
tubular neighborhood of the circle $S:= \{(e^{it}, 0) \in \mathbb C^2 \colon t \in {\mathbb R} \}$. This
domain is strongly pseudoconvex. Let $p = (9/10, 0)$. Clearly $p \in \partial U$. If
there were $\psi \in \hbox{\rm Aut}\,(\mathbb C^2)$ that makes $\psi(p)$ spherically-extreme for
$\psi(U)$, then consider the analytic disc $\Sigma := \psi(\Delta)$, where $\Delta :=
\{(z,0) \colon |z|\le 1\}$. Since $\Delta$ crosses $\partial U$ transversally at $p$,
$\Sigma$ crosses the sphere envelope at $\psi(p)$ and extends to the exterior of the
sphere. On the other hand the boundary of $\Sigma$ remains inside $\psi(U)$ and hence
inside the sphere. Now let the sphere expand radially from its center, and let it stop at the
largest radius beyond which it no longer intersects the holomorphic disc $\Sigma$. Then
the sphere is tangent to $\Sigma$ at an interior point, keeping the whole disc
inside the sphere. The maximum principle now implies that $\Sigma$ should lie entirely
on the sphere. But the boundary of $\Sigma$ is strictly inside the sphere, which is a
contradiction. This implies that $p$ cannot be made spherically-extreme via any
re-embedding by an automorphism of $\mathbb C^2$.
\end{example}
\smallskip
\it Acknowledgement: \rm This example was obtained after a valuable discussion between
the first named author and Josip Globevnik. The first named author would like to
express his thanks to Josip Globevnik for pointing out such a possibility.
\subsection{On the exhaustion theorem by Fridman-Ma}
The main theorem by Buma Fridman and Daowei Ma in \cite{FridMa} obtained the conclusion
of Theorem \ref{thm-2g} in the special case $\Omega \ni q \to p$ {\it transversally} to the
boundary $\partial \Omega$. However, that is not sufficient to prove Theorem \ref{thm-2g}; it
is indeed necessary to consider all possible sequences approaching the boundary.
In \cite{FridMa} they did not need to consider the point sequences approaching the
boundary tangentially, as their interest
was only on the holomorphic exhaustion of the ball by the biholomorphic images of a
bounded strongly pseudoconvex domain. On the other hand, our proof of Theorem \ref{thm-2g}
gives a proof of their theorem as well; one only needs to use $(1+\epsilon)^{-1} G(z)$ instead of
$G$. [Recall that $G$ depends upon $q$. Letting $q$ converge to $p$ and $\epsilon$
tend to zero, one gets a sequence of maps that exhausts the unit ball holomorphically.]
\subsection{Plane domain cases}
For domains in $\mathbb C$, several theorems have been obtained by F. Deng, Q. Guan and L.
Zhang in \cite{DGZ}. Theorem \ref{thm-2g} obviously includes many of those
results, as every boundary point of a plane domain with ${\mathcal C}^2$-smooth boundary is
spherically-extreme.
\section{Introduction}
\label{secintro}
Tales of freak waves by lucky survivors used to be taken with a large grain of salt. Were the sailors making excuses for bad seamanship? The first such wave to be measured directly was the famous New Year's wave in 1995~\cite{trulsen97}. With modern cameras and video, not to mention satellites~\cite{dankert, schulz04}, it is no longer controversial that freak or rogue extreme waves exist on the world's great oceans~\cite{dysthe08,mallory,kharif}.
Any realistic seaway (an irregular, moderate to rough sea) is comprised of a superposition of waves differing in wavelength and direction, with random relative phases. Supposing that the dispersion in wavelengths is not too large, and assuming uniform sampling, Longuet-Higgins~\cite{lh} exploited the central limit theorem to derive a large number of statistical properties of such wave superpositions, including of course wave height distributions. From this viewpoint, extreme waves are the result of unlucky coherent addition of plane waves corresponding to the tail of the Gaussian distribution (see Eq.~(\ref{prayleigh}) below). As explained below, wave heights greater than about $4 \,\sigma$ in the tail of the Gaussian are classified as extreme. The problem has become how to explain why the observed number of rogue wave events is greater than the number $4\,\sigma$ out in the Longuet-Higgins theory.
For the following discussion, it is important to understand why a 20 meter wave in a sea where the significant wave height (SWH, defined as the average height of the highest one third of the waves) is 18 meters is far less onerous than a 20 meter wave where the SWH is 8 meters. An established seaway of uniform energy density (uniform if averaged over an area large compared to the typical wavelength) is ``accommodated'' over time and distance, through nonlinear energy transfer mechanisms. Seaways of higher energy density develop correspondingly longer wave periods and wavelengths, even with no further wind forcing, keeping wave steepness under control as a result.
This accommodation process is one of the ways nonlinear processes are implicitly lurking behind ``linear'' theories, in that the input into the linear theories, i.e., the SWH, the period, dispersion in direction, and dispersion in wavelength are all the result of prior nonlinear processes. A 20 meter wave in a sea of SWH 8 meters is necessarily very steep, possibly breaking, with a deep narrow trough preceding it. The tendency for steep waves to break is an often devastating blow just as the ship is sailing over an unusually deep trough before meeting the crest.
Observational evidence has shown that the linear Longuet-Higgins theory is too simplistic~\cite{dysthe08}. Recent advances in technology have allowed multiple wave tank experiments and field observations to be conducted, confirming the need for a more realistic theory to explain the results~\cite{forristall00, onorato04}. An obvious correction is to incorporate nonlinear wave evolution at every stage, rather than split the process into an implicit nonlinear preparation of the seaway followed by linear propagation. Clearly the exact evolution is always nonlinear to some extent, but the key is to introduce nonlinearities at the right moment and in an insightful and computable way. Realistic fully nonlinear computations wave by wave over large areas are very challenging, but initial attempts have been made to simulate the ocean surface using the full Euler equation both on large scales~\cite{tanaka01} and over smaller areas~\cite{gibson05, gibson07}.
Surprisingly, investigation of nonlinear effects is actually not the next logical step needed to improve upon the Longuet-Higgins model. Indeed the full {\em linear} statistical theory had not been given, for the reason that uniform sampling assumed by Longuet-Higgins is not justified. A nonuniform sampling theory, which does not assume that the energy density is uniformly distributed over all space, is possible and is still ``linear.'' Moreover, the parameters governing a nonuniform sampling are knowable. Inspired by the work of White and Fornberg~\cite{wf}, the present authors showed that current eddies commonly present in the oceans are sufficient to cause the time-averaged wave intensity to be spatially non-uniform, and to exhibit ``hot spots'' and ``cold spots'' some tens to hundreds of kilometers down flow from the eddies. We emphasize that in terms of wave evolution, the refraction leading to the patchy energy density is purely linear evolution. The key ideas are (1) that waves suddenly entering a high energy patch are not accommodated to it and grow steep and dangerous, and (2) the process is still probabilistic and the central limit theorem still applies, with the appropriate sampling over a nonuniform distribution. The high-energy patches will skew the tails of the wave height distribution, perhaps by orders of magnitude. This was the main point in reference~\cite{hkd}.
There is no denying the importance of nonlinear effects in wave evolution, and a full theory should certainly include them. On the other hand a nonlinear theory that fails to account for patchy energy density is missing an important, even crucial effect. The linear theory needs to be supplemented by nonlinear effects however, since the accommodation of the waves to the presence of patchy energy density needs to be considered. It is our goal here to review progress along these lines and point the way to a more complete theory. We first review the nonuniform sampling linear theory and then discuss newer simulations using the nonlinear Schr\"odinger equation (NLSE). Finally, we show that even larger rogue wave formation probabilities are predicted when linear and nonlinear formation mechanisms are acting in concert.
Rogue wave modeling would benefit greatly from a comprehensive, accurate, and unbiased global record of extreme wave events, supplemented by data on local ocean conditions, including current strength, SWH, steepness, and the angular and spectral spread of the sea state. Such a record, not available at present, would allow for direct statistical tests of linear and nonlinear theories of rogue wave formation. Anecdotal evidence does suggest that rogue waves may be especially prevalent in regions of strong current, including the Gulf Stream, the Kuroshio Current, and especially the Agulhas Current off the coast of South Africa. Consequently, the Agulhas Current in particular has attracted much attention in rogue wave research~\cite{mallory,agulhas}. However, anecdotal evidence from ships and even oil platform measurements cannot provide a systematic, unbiased, and statistically valid record that would support a correlation between possibly relevant variables and rogue wave formation probability. Instead, satellite-based synthetic aperture radar (SAR)~\cite{dankert, schulz04} is currently the only method that shows potential for monitoring the ocean globally with single-wave resolution, but validating the surface elevations obtained by SAR is a challenge. The SAR imaging mechanism is nonlinear, and may yield a distorted image of the ocean wave field; the nonlinearity is of course of particular concern for extreme events~\cite{ja}. Recently, an empirical approach has been proposed that may accurately obtain parameters such as the SWH from SAR data~\cite{sskl}.
\section{Linear Wave Model in Presence of Currents}
\label{seclinear}
\subsection{Ray Density Statistics}
\label{secrayinten}
To understand the physics of linear rogue wave formation in the presence of currents, it is very helpful to begin with a ray, or eikonal, approximation for wave evolution in the ocean~\cite{wf,hkd},
\begin{equation}
{d\vec k\over dt} = -{\partial \omega(\vec r,\vec k)\over \partial \vec r}; \ \ \ \ \ \ \ {d\vec r\over dt} = {\partial \omega(\vec r,\vec k)\over \partial \vec k} \,,
\label{eikonal}
\end{equation}
where $\vec r$ is the ray position, $\vec k$ is the wave vector, and $\omega$ is the frequency. For surface gravity waves in deep water, the dispersion relation is
\begin{equation}
\omega(\vec r,\vec k) = \sqrt{g\vert \vec k\vert} + \vec k \cdot \vec U(\vec r) \,,
\label{dispers}
\end{equation}
where $\vec U(\vec r)$ is the current velocity, assumed for simplicity to be time-independent, and $g=9.81~{\rm m/s^2}$ is the acceleration due to gravity. The validity of the ray approximation depends firstly on the condition $|\vec k|\xi \gg 1$, where $\xi$ is the length scale on which the current field $\vec U(\vec r)$ is varying, physically corresponding to the typical eddy size. This condition is well satisfied in nature, since wave numbers of interest in the deep ocean are normally of order $k\sim 2 \pi/(100 \, {\rm m})$, while the typical eddy size may be $\xi \sim 5\,{\rm km}$ or larger. Secondly, scattering of the waves by currents is assumed to be weak, i.e., the second term in equation~(\ref{dispers}) should be small compared to the free term. This again is well justified since eddy current speeds $|\vec U|$ are normally less than $0.5\, {\rm m/s}$, whereas the wave speed $v=\partial\omega/\partial k\approx \sqrt{g/4k}$ is greater than $5\, {\rm m/s}$. In section~\ref{seclinnumerical} below, we will explicitly compare
the ray predictions with results obtained by exact integration of the corresponding wave equation.
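To make the ray picture concrete, the following minimal sketch (our own illustration, not code from the cited references) integrates the eikonal equations (\ref{eikonal}) with the dispersion relation (\ref{dispers}) using centered finite differences; with zero current, a ray travels in a straight line at the group speed $\sqrt{g/4k}$:

```python
import numpy as np

g = 9.81  # m/s^2

def omega(r, k, U):
    """Deep-water dispersion with a current: w = sqrt(g|k|) + k . U(r)."""
    return np.sqrt(g * np.linalg.norm(k)) + k @ U(r)

def ray_step(r, k, U, dt=1.0, hr=1.0, hk=1e-6):
    """One Euler step of the eikonal equations, with the gradients of
    omega approximated by centered finite differences."""
    e = np.eye(2)
    dw_dr = np.array([(omega(r + hr * ei, k, U) - omega(r - hr * ei, k, U)) / (2 * hr)
                      for ei in e])
    dw_dk = np.array([(omega(r, k + hk * ei, U) - omega(r, k - hk * ei, U)) / (2 * hk)
                      for ei in e])
    return r + dt * dw_dk, k - dt * dw_dr

# Sanity check: with zero current a ray moves straight at the group speed.
U0 = lambda r: np.zeros(2)
r, k = np.zeros(2), np.array([0.04, 0.0])   # k ~ 2 pi / (157 m)
for _ in range(100):
    r, k = ray_step(r, k, U0)
v_group = np.sqrt(g / (4 * 0.04))           # ~7.8 m/s
```

A nonzero current field `U` deflects the wave vector through the $-\partial\omega/\partial\vec r$ term, which is the mechanism behind the branched ray densities of figure~\ref{figrayimage}.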
\begin{figure}[ht]
\centerline{\includegraphics[width=2.8in,angle=0]{image_th10b.eps}\hskip 0.2in \includegraphics[width=2.8in,angle=0]{image_th20b.eps}}
\caption{A ray density map $I(x,y)$ is calculated for rays moving through a $640$ km by $640$ km random eddy field, with rms eddy current $u_{\rm rms}=0.5$ m/s and eddy correlation length $\xi=20$ km. Here bright regions represent high density. The rays are initially distributed uniformly
along the left edge of each panel, with angular spread $\Delta \theta$ around the $+x$ (rightward) direction, and with frequency $\omega=2 \pi/(10\,{\rm sec})$, corresponding to velocity $v=7.81\,{\rm m/s}$ in the absence of currents.
The left and right panels illustrate $\Delta \theta=10^\circ$ and $\Delta \theta=20^\circ$, respectively.}
\label{figrayimage}
\end{figure}
In the numerical simulations shown in figure~\ref{figrayimage}, we follow White and Fornberg~\cite{wf} in considering a random incompressible current field in two dimensions,
with zero mean current velocity, generated as
\begin{equation}
U_x(\vec r) = -{\partial \psi(\vec r)}/{\partial y}\,; \;\;\;\;\;\; U_y(\vec r) = {\partial \psi(\vec r)}/{\partial x}
\end{equation}
from the scalar stream function $\psi(\vec r)$. The stream function itself is Gaussian distributed with Gaussian decay of spatial correlations:
\begin{equation}
\overline{\psi(\vec r)} = 0\,; \;\;\;\;\;\; \overline{\psi(\vec r)\,\psi(\vec r')} \sim e^{-(\vec r-\vec r')^2/2\xi^2}\,,
\end{equation}
and the overall current strength is described by $u_{\rm rms}^2=\overline{|\vec U(\vec r)|^2}$. The specific choice of a Gaussian distribution for the stream function is made for convenience only. The detailed structure of the individual eddies on the scale $\xi$ has no effect on the final rogue wave statistics as long as the current is weak ($u_{\rm rms} \ll v$), since each ray must travel a distance $\gg \xi$ before being appreciably scattered. Each panel in figure~\ref{figrayimage} represents a $640$ km by $640$ km random eddy field, with rms eddy current $u_{\rm rms}=0.5$ m/s and eddy correlation length $\xi=20$ km.
The initial swell, entering from the left in each panel, is characterized by a single frequency $\omega=2\pi/(10\,{\rm sec})$ (and thus a single wave number $k=\omega^2/g=0.04\,{\rm m}^{-1}$ and a single wave speed $v=\sqrt{g/4k}=7.81$ m/s). As discussed in Ref.~\cite{hkd}, within the context of a linear model, a nonzero frequency spread affects rogue wave formation only at second order in the spread $\Delta \omega$, and may be neglected for all practical purposes. In contrast, the {\it angular} spread of the incoming sea is very important in determining rogue wave statistics. In this figure, we consider an initially Gaussian angular distribution $p(\theta) \sim e^{-\theta^2/2(\Delta \theta)^2}$, where $\theta$ is the wave vector direction relative to the mean direction of wave propagation. Here all rays begin at the left edge of each panel, uniformly distributed in the $y$ direction, and the mean direction of wave propagation is rightward. The left and right panels illustrate the behavior for two different values of the initial angular spread $\Delta \theta$.
In both panels we observe bright streaks or branches, corresponding to regions of larger than average ray density $I(x,y)$, and thus larger than average wave intensity.
The branches may be understood by considering briefly the limiting (unphysical) case of a unidirectional initial sea state ($\Delta \theta=0$), corresponding to a single incoming plane wave. In the ray picture, and in the coordinates of figure~\ref{figrayimage}, the initial conditions are in this limit characterized by a one-dimensional phase space manifold $(x, y, k_x, k_y) = (0,y,k,0)$, where $k$ is the fixed wave number, and $y$ varies over all space. As this incoming plane wave travels through the random current field, it undergoes small-angle scattering, with scattering angle $\sim u_{\rm rms}/v$ after traveling one correlation length $\xi$ in the forward direction. Eventually, singularities appear that are characterized in the surface of section map $[y(0),k_y(0)]\to [y(x),k_y(x)]$ by $\delta y(x) / \delta y(0)=0$, i.e., by local focusing of the manifold of initial conditions at a point $(x,y)$.
The currents leading to such a focusing singularity may be thought of as forming a `bad lens.' Whereas a lens without aberration focuses all parallel incoming rays to one point, a bad lens only focuses at each point an infinitesimal neighborhood of nearby rays, so that different neighborhoods get focused at different places as the phase-space manifold evolves forward in $x$, resulting in lines, or branches, of singularities. The typical pattern is an isolated cusp singularity, $\delta^2 y(x) / \delta y(0)^2=0$, followed by two branches of fold singularities, as shown in figure~\ref{figcusp}.
\begin{figure}[ht]
\centerline{\includegraphics[width=4in,angle=0]{figcusp.ps}}
\caption{A cusp singularity, followed by two branches of fold singularities, is formed as initially
parallel rays pass through a focusing region. The two branches appear because the focal
distance varies with the distance of approach from the center, as in a `bad' lens with strong spherical aberration. After
averaging over incident directions, the singularities will
be softened but not washed away completely~\cite{hkd}.}
\label{figcusp}
\end{figure}
A simple scaling argument~\cite{wf,lkbranch,mfg} shows that the first singularities occur after a median distance $L \sim \xi (u_{\rm rms}/v)^{-2/3} \gg \xi$ along the direction of travel,
when the typical ray excursion in the transverse direction becomes of order $\xi$. Thus, each ray passes through many uncorrelated eddies before a singularity occurs, and a statistical description is well justified. For realistic parameters, $L \sim 100$ km or more is typical. The cusp singularities formed in this way are separated by a typical distance $\xi$ in the transverse direction, and thus the rms deflection angle by the time these singularities appear scales as
\begin{equation}
\delta \theta \sim \xi/L \sim (u_{\rm rms}/v)^{2/3} \,.
\label{delkx}
\end{equation}
We note that the typical deflection angle $\delta \theta$ does not depend on the eddy size but only on the velocity ratio $u_{\rm rms}/v$: faster currents cause larger deflection. For the input parameters used in figure~\ref{figrayimage}, the median distance to the first singularity is $L=7.5\xi=150$~km, and the rms deflection at the point of
singularity is $\delta \theta=18^\circ$.
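These scaling relations are easy to evaluate. The short calculation below uses the parameters of figure~\ref{figrayimage}; note that the scaling laws omit order-one prefactors, which is why the quoted values $L=7.5\xi=150$~km and $\delta\theta=18^\circ$ in the text include numerically determined constants:

```python
xi = 20e3      # eddy correlation length [m]
u_rms = 0.5    # rms eddy current speed [m/s]
v = 7.81       # deep-water group speed [m/s]

L = xi * (u_rms / v) ** (-2.0 / 3.0)    # distance to first singularities, ~125 km
dtheta = (u_rms / v) ** (2.0 / 3.0)     # rms deflection angle, ~0.16 rad
# The two scalings are consistent: delta_theta ~ xi / L exactly,
# and doubling u_rms increases the deflection by a factor 2^(2/3).
```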
\begin{figure}[ht]
\centerline{\includegraphics[width=3.5in,angle=0]{sendai.eps}}
\caption{Predicted tsunami wave heights from the T\=ohoku earthquake, a 9.0 magnitude undersea earthquake that occurred on March 11, 2011, off the coast of Japan. A branching structure is clearly visible as the waves move outward from the epicenter. (Source: NOAA Center for Tsunami Research.)}
\label{figtsunami}
\end{figure}
Similar phenomenology can give rise to wave focusing and rogue wave formation in shallow water,
where the dispersion relation of equation~(\ref{dispers}) is replaced with $\omega(
{\vec r},{\vec k})=\sqrt{gk \tanh(k h({\vec r}))}$, and varying depth
$h({\vec r})$ takes the place of the varying current $U({\vec r})$ as the
origin of scattering~\cite{tucker}. The same mechanism can lead to amplification of tsunami waves~\cite{berrytsunami,Dobrokhotov} where because of the long wavelength, shallow water equations apply. Fig.~\ref{figtsunami} shows a striking recent example of a predicted tsunami wave height map, in which the branched flow structure is unmistakably present.
More generally,
singularities and branched flow due to focusing in random media have been investigated in contexts as diverse as
electron flow in a two-dimensional electron gas~\cite{2deg}, ocean acoustics~\cite{tomsovic}, twinkling of starlight~\cite{twinkling},
and rain shower activation in turbulent clouds~\cite{rainshower}. Recently, universal
expressions have been obtained describing the branching statistics
for a large class of such systems, and valid at all distances from a source~\cite{mfg}.
\begin{figure}[ht]
\centerline{\includegraphics[width=3.5in,angle=270]{hist10.ps}}
\caption{The ray density distribution, for an initial sea state of uniform density scattered by a random eddy current field, is shown for several values of the freak index $\gamma$. The input parameters are chosen as in figure~\ref{figrayimage}, with initial angular spread $\Delta \theta=25^\circ$, $15^\circ$, and $5^\circ$ corresponding to freak index $\gamma=0.72$, $1.20$, and $3.60$, respectively. The mean intensity is normalized to unity in each case. The dashed curves are fits to the $\chi^2$ distribution of Eq.~(\ref{chisq}).}
\label{figraychisq}
\end{figure}
For finite initial angular spread $\Delta \theta$, the singularities are softened, and the finite contrast between the peak ray density in the branches and the background intensity is governed for $\Delta \theta \ll 1$ and $\delta \theta \ll 1$ by the ratio
\begin{equation}
\gamma = {\delta \theta \over \Delta \theta} \sim {(u_{\rm rms}/v)^{2/3} \over \Delta \theta} \,,
\label{gammadef}
\end{equation}
which we refer to as the freak index~\cite{hkd}. Of particular interest is the regime of small $\gamma$, where the scattering characterized by $\delta \theta$ is weak compared to the initial angular spread $\Delta \theta$ of the incoming sea. In this limit, the scattering produces only small perturbations of order $\gamma$ in the ray density $I(x,y)$, in units where the initial (uniform) density is $I_0=1$~\cite{hkd}. Specifically,
as seen in figure~\ref{figraychisq},
the distribution of ray intensities in this regime may be well described
by a $\chi^2$ distribution~\cite{microwave},
\begin{equation}
g(I)=\chi_N^2(I)=\left(\frac{N}{2}\right)^{\frac{N}{2}}\frac{I^{\frac{N}{2}-1}}{\Gamma\left(\frac{N}{2}\right)}
e^{-NI/2} \,,
\label{chisq}
\end{equation}
where the number of degrees of freedom $N$ scales with the freak index as $\gamma^{-2}$. The proportionality constant may be obtained numerically by a fit to the data:
\begin{equation}
N={\alpha \over \gamma^2}={45 \over \gamma^2} \,.
\label{nval}
\end{equation}
In the limit $\gamma \to 0$ associated with zero current, we have $N \to \infty$, and we recover as expected the uniform ray density distribution
$g(I)=\delta(I-1)$.
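The $\chi^2$ ray-density distribution of equation~(\ref{chisq}) can be checked numerically; the sketch below evaluates the mean-one $\chi^2_N$ density for the value $\gamma=1.2$ quoted in figure~\ref{figraychisq} and verifies its normalization, unit mean, and variance $2/N$ (the integration grid is an ad hoc numerical choice):

```python
import math
import numpy as np

def ray_density_dist(I, N):
    """Mean-one chi^2_N density of Eq. (chisq), evaluated in log space."""
    return np.exp((N / 2) * math.log(N / 2) + (N / 2 - 1) * np.log(I)
                  - N * I / 2 - math.lgamma(N / 2))

def trapezoid(y, x):
    """Simple trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

gamma_freak = 1.2                    # freak index, as in figure (figraychisq)
N = 45.0 / gamma_freak**2            # Eq. (nval)
I = np.linspace(1e-4, 12.0, 200_000)
g = ray_density_dist(I, N)
norm = trapezoid(g, I)               # should be 1 (normalized density)
mean = trapezoid(I * g, I)           # should be 1 (unit mean intensity)
var = trapezoid((I - 1.0)**2 * g, I) # should be 2/N for a mean-one chi^2_N
```

As $N \to \infty$ the variance $2/N$ vanishes and the density collapses onto $\delta(I-1)$, consistent with the uniform-density limit above.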
\subsection{Implications for Wave Statistics}
\label{linwavestat}
In the Longuet-Higgins random seas model~\cite{lh}, the sea surface elevation above the average elevation is given by ${\rm Re} \, \zeta(x,y,t)$, where $\zeta$ is a random superposition of many plane waves with differing directions and frequencies. By the central limit theorem, $\zeta$ is distributed as a complex Gaussian random variable with standard deviation $\sigma$. Furthermore, for a narrow-banded spectrum ($\delta \omega \ll \omega$) the wave crest height $H$ is equal to the wave function amplitude $|\zeta|$, and the probability of encountering a wave crest of height $H$ or larger is
\begin{equation}
P_{\rm Rayleigh}(H) = e^{-H^2/2 \sigma^2} \,.
\label{prayleigh}
\end{equation}
Due to an exact symmetry between crests and troughs in a linear wave model, a crest height of $H$ corresponds to a wave height (crest to trough) of $2H$. Conventionally, a rogue wave is defined as $2H \ge 2.2\,{\rm SWH}$, where the significant wave height ${\rm SWH}$ is the average of the largest one third of wave heights in a time series, or approximately ${\rm SWH} \approx 4.0 \sigma$. Thus the condition for a rogue wave is $H \ge 4.4 \sigma$, and the random seas model predicts such waves to occur with probability $P_{\rm Rayleigh}(4.4\sigma) =6.3 \cdot 10^{-5}$. Similarly, extreme rogue waves may be defined by the condition $2H \ge 3\,{\rm SWH}$ or $H \ge 6.0\sigma$, and these are predicted to occur with probability $P_{\rm Rayleigh}(6.0\sigma) =1.5 \cdot 10^{-8}$ within the random seas model. As discussed in section~\ref{secintro}, the random seas model greatly underestimates the occurrence probability of extreme waves, when compared with observational data~\cite{dankert}.
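The two Rayleigh tail probabilities quoted above follow directly from equation~(\ref{prayleigh}); a one-line arithmetic check:

```python
import math

# Rayleigh tail probabilities of Eq. (prayleigh), in units where sigma = 1
p_rogue = math.exp(-4.4**2 / 2)     # rogue wave:          H >= 4.4 sigma
p_extreme = math.exp(-6.0**2 / 2)   # extreme rogue wave:  H >= 6.0 sigma
```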
What are the implications of scattering by currents, as discussed in section~\ref{secrayinten}, on the wave height statistics? Within the regime of validity of the ray approximation, we have at any spatial point $(x,y)$ a correspondence between the ray density $I(x,y)$ and the time-averaged wave intensity $H^2=|\zeta(x,y,t)|^2$. Thus, in contrast with the original Longuet-Higgins model, the time-averaged wave intensity is not uniform over all space but instead exhibits ``hot spots'' and ``cold spots'' associated with focusing and defocusing in the corresponding ray equations. At each point in space (assuming of course that the currents are stationary), the central limit theorem and thus the Rayleigh distribution still apply, and we have
\begin{equation}
P_{(x,y)}(H) = e^{-H^2/2\sigma^2 I(x,y)} \,,
\label{plocal}
\end{equation}
where $I(x,y)$ is the local ray density, normalized so that the spatial average is unity, and
$\sigma^2$ is the variance of the surface elevation in the incoming sea state, before scattering by currents. This is the situation a ship experiences at a given position.
Now averaging over space, or over an ensemble of random eddy fields with a given rms current speed, we obtain a total cumulative height distribution
\begin{equation}
P_{\rm total}(H) = \int_0^\infty dI\, g(I) \, e^{-H^2/2\sigma^2 I} \,.
\label{ptotal}
\end{equation}
In equation~(\ref{ptotal}), the full cumulative distribution of wave heights for a given sea state has been expressed as a convolution of two factors: (i) the local density distribution $g(I)$, which can be extracted from the ray dynamics, and (ii) the universal Longuet-Higgins distribution of wave heights for a given local density. Similar decompositions of chaotic wave function statistics into non-universal and universal components have found broad applicability in quantum chaos, including for example in the theory of scars~\cite{scar,baecker}. In the context of rogue waves, a similar approach was adopted by Regev {\it et al.} to study wave statistics in a one-dimensional inhomogeneous sea,
where the inhomogeneity arises from the interaction of an initially homogeneous sea with a
(deterministic) long swell~\cite{regev}.
Using the previously obtained ray density distribution in the presence of currents, equation~(\ref{chisq}), we obtain the K-distribution~\cite{kdistr}
\begin{equation}
P_{\rm total}(H)=2
\frac{\;\;\left({\sqrt{N} H/2\sigma}\right)^{\frac{N}{2}}}{\Gamma(N/2)}
K_{N/2}\left(\sqrt{N} H/\sigma\right)\,,
\label{kbess}
\end{equation}
where $K_n(y)$ is a modified Bessel function.
Defining the dimensionless variable $x=2H/{\rm SWH} \approx 2H/(4 \sigma)$, so that a rogue wave is given by $x=2.2$ and an extreme rogue wave by $x=3.0$, we find
the probability of a wave height exceeding $x$ significant wave heights:
\begin{equation}
P_{\rm total}(x)=2
\frac{\;\;\left(\sqrt{N}x\right)^{\frac{N}{2}}}{\Gamma(N/2)}
K_{N/2}\left(2\sqrt{N}x\right)\,,
\label{kbess2}
\end{equation}
to be compared with the random seas prediction
\begin{equation}
P_{\rm Rayleigh}(x)=e^{-2 x^2}
\label{prayleighx}
\end{equation} in the same dimensionless units.
We recall that $N$ in equation (\ref{kbess}) or (\ref{kbess2})
is a function of the freak index $\gamma$, as given by equation (\ref{nval}).
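The closed form (\ref{kbess2}) can be checked by evaluating the convolution (\ref{ptotal}) directly by numerical quadrature; a minimal sketch (the integration limits and grid size are ad hoc numerical choices):

```python
import math
import numpy as np

def p_total(x, N, Imax=80.0, npts=400_000):
    """P_total(x) of Eq. (kbess2), computed from the convolution of
    Eq. (ptotal): a mean-one chi^2_N ray-density distribution times the
    local Rayleigh factor, with H expressed as x = 2H/SWH = H/(2 sigma)."""
    I = np.linspace(1e-4, Imax, npts)
    # chi^2_N density in log space, to avoid overflow at large N
    log_g = ((N / 2) * math.log(N / 2) + (N / 2 - 1) * np.log(I)
             - N * I / 2 - math.lgamma(N / 2))
    f = np.exp(log_g - 2 * x**2 / I)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(I)))

def p_rayleigh(x):
    """Random-seas baseline of Eq. (prayleighx)."""
    return math.exp(-2 * x**2)
```

For $N=10$ this quadrature reproduces the order of magnitude of the enhancements tabulated below, while for very large $N$ it approaches the Rayleigh baseline, as expected.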
To examine the predicted enhancement in the probability of rogue wave formation, as compared with random seas model (\ref{prayleigh}), we may consider two limiting cases. Keeping the wave height of interest fixed, and taking the limit $\gamma \to 0$, i.e. $N \to \infty$, we obtain the perturbative result
\begin{eqnarray}
P_{\rm perturb}(x)&=&\left[ 1+\frac{4}{N}(x^4-x^2)\right]P_{\rm Rayleigh}(x) \nonumber \\
&=& \left[ 1+\frac{4 \gamma^2}{\alpha}(x^4-x^2)\right]P_{\rm Rayleigh}(x) \,,
\label{pperturb}
\end{eqnarray}
valid for $x^4 \ll N$, or equivalently $x^2 \gamma \ll 1$. Thus, in the limit of small freak index, the distribution reduces, as expected, to the prediction of the random seas model.
Analogous perturbative corrections appear for quantum wave function intensity distributions in the presence of weak disorder or weak scarring by periodic orbits~\cite{mirlin,damborsky}.
Much more dramatic enhancement is observed if we consider the tail of the intensity distribution ($x \to \infty$) for a given set of sea conditions (fixed $\gamma$ or $N$). Then for $x \gg N^{3/2}$, or equivalently $x \gamma^3 \gg 1$, we obtain the asymptotic form
\begin{eqnarray}
P_{\rm asymptotic}(x)&=&
\sqrt{\pi}
\frac{\;\;\left(\sqrt{N}x\right)^{\frac{N-1}{2}}}{\Gamma(N/2)}
e^{-2x \sqrt{N}}
\nonumber \\
&=&
\sqrt{\pi}
\frac{\;\;\left(\sqrt{N}x\right)^{\frac{N-1}{2}}}{\Gamma(N/2)}
e^{2x(x- \sqrt{N})} P_{\rm Rayleigh}(x)
\,,
\label{asympt}
\end{eqnarray}
i.e., the probability enhancement over the random seas model is manifestly super-exponential in the wave height $x$.
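The asymptotic form (\ref{asympt}) can be tested against a direct quadrature of the convolution (\ref{ptotal}); in the sketch below, $N=4$ and $x=20$ are arbitrary test values chosen deep in the regime $x \gg N^{3/2}$:

```python
import math
import numpy as np

def p_total_quad(x, N, Imax=120.0, npts=400_000):
    """Exact tail probability from the chi^2 / Rayleigh convolution,
    Eq. (ptotal), by trapezoid quadrature over the local ray density I."""
    I = np.linspace(1e-3, Imax, npts)
    log_g = ((N / 2) * math.log(N / 2) + (N / 2 - 1) * np.log(I)
             - N * I / 2 - math.lgamma(N / 2))
    f = np.exp(log_g - 2 * x**2 / I)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(I)))

def p_asymptotic(x, N):
    """Large-x asymptotic form of Eq. (asympt)."""
    return (math.sqrt(math.pi) * (math.sqrt(N) * x)**((N - 1) / 2)
            * math.exp(-2 * x * math.sqrt(N)) / math.gamma(N / 2))

N, x = 4, 20.0                # deep in the tail: x >> N**1.5
rel_err = abs(p_total_quad(x, N) / p_asymptotic(x, N) - 1.0)
```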
Predicted enhancements in the probability of rogue wave and extreme rogue wave formation, based on equations (\ref{kbess2}) and (\ref{prayleighx}), are shown in table~\ref{enhancement}. We notice in particular that enhancements of an order of magnitude or more are predicted in the extreme tail, even for moderate values of the input parameters, corresponding to $\gamma \sim 0.72\,-\,1.2$ or $N \sim 30 \, - \, 85$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| c | c | c | c | c |}
\hline
\;\;\;\;\;$\Delta \theta$\;\;\;\;\; & \;\;\;\;\;$\gamma$\;\;\;\;\; & \;\;\;\;\;$N$\;\;\;\;\; & \;\;\;\;\;$E(2.2)$\;\;\;\;\; & \;\;\;\;\;$E(3.0)$\;\;\;\;\; \\
\hline
5 & 3.6 & 3.46 & 57 & 16800 \\
10 & 1.8 & 13.9 & 10.4 & 570 \\
15 & 1.2 & 31.2 & 4.3 & 76 \\
20 & 0.90 & 55.4 & 2.7 & 22 \\
25 & 0.72 & 86.6 & 2.0 & 9.8 \\
30 & 0.60 & 125 & 1.7 & 5.7 \\
\hline
\end{tabular}
\end{center}
\caption{The $N$ parameter of equation~(\ref{kbess2}), and the associated enhancement in the probability of rogue wave formation (wave height $2H=2.2\, {\rm SWH}$) as well as the enhancement of the probability of extreme rogue wave formation
(wave height $2H=3.0\, {\rm SWH}$) are calculated for several values of the incoming angular spread $\Delta \theta$
using equations~(\ref{gammadef}), (\ref{nval}), and (\ref{kbess2}). Here $E(x)=P_{\rm total}(x)/P_{\rm Rayleigh}(x)$. In all cases we fix the rms current speed $u_{\rm rms}=0.5$~m/s and mean wave speed $v=7.8$~m/s, so $\delta \theta=18^\circ$.
}
\label{enhancement}
\end{table}
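The $N$ values in table~\ref{enhancement} follow mechanically from equations~(\ref{gammadef}) and (\ref{nval}) once $\delta \theta=18^\circ$ is fixed by the quoted $u_{\rm rms}$ and $v$; a quick reproduction:

```python
# scattering angle delta-theta in degrees, as quoted in the table caption
# for u_rms = 0.5 m/s and v = 7.8 m/s
delta_theta_scatter = 18.0

rows = {}
for dtheta in [5, 10, 15, 20, 25, 30]:       # incoming angular spread (deg)
    gamma = delta_theta_scatter / dtheta      # freak index, Eq. (gammadef)
    rows[dtheta] = 45.0 / gamma**2            # N parameter, Eq. (nval)
```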
\subsection{Numerical Results for Linear Wave Equation}
\label{seclinnumerical}
The theoretical predictions of equation~(\ref{kbess2}) are based on several approximations, including the assumption of local Rayleigh statistics. To see whether the approximations we have made are valid, we compare the theoretical predictions with direct numerical integration of the current-modified linear Schr\"odinger equation, which is obtained from the third-order current-modified nonlinear Schr\"odinger (CNLS) equation~\cite{stocker} by setting the nonlinear term to zero. CNLS governs the modulations of weakly nonlinear water waves around a mean frequency and mean wave vector, incorporating the effect of currents, and is presented in full in Sec.~\ref{secnlse} below. In dimensionless variables, the linear equation for the wave envelope describing the wave modulations is expressed as~\cite{stocker}
\begin{equation}
iA_T -\frac{1}{8}A_{XX}+ \frac{1}{4}A_{YY}-k_0 U_x A=0 \,.
\label{cnls}
\end{equation}
Here $A(X,Y,T)$ is the wave envelope, defined by separating out the carrier wave propagating
with mean wave vector ${\vec k}= k_0 {\hat x}$,
\begin{eqnarray}
\zeta(X,Y,T)&=&k_0 A(X,Y,T)e^{i k_0 x-i\sqrt{gk_0}t} \nonumber \\
&=& k_0 A(X,Y,T)e^{i X-iT/2}\,,
\end{eqnarray}
and
\begin{equation}
(X,Y,T)=(k_0 x-\frac{1}{2}\sqrt{gk_0}\,t,\,k_0 y,\,\sqrt{g k_0}\,t)
\label{dimspacetime}
\end{equation}
are dimensionless space and time coordinates. We also note that Eq.~(\ref{cnls}) may be obtained directly from the dispersion relation (\ref{dispers}), by expanding $\omega$ and $\vec k$ around $\omega_0$ and $k_0 {\hat x}$, respectively.
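Equation~(\ref{cnls}) has the form of a linear Schr\"odinger equation and can be integrated by the split-operator Fourier method used below; a minimal Strang-splitting sketch (the function name and default grid spacings are our own illustrative choices):

```python
import numpy as np

def split_step_envelope(A, Ux, k0, dt, nsteps, dX=1.0, dY=1.0):
    """Strang split-operator Fourier integrator for Eq. (cnls):
    i A_T - (1/8) A_XX + (1/4) A_YY - k0 Ux A = 0."""
    Nx, Ny = A.shape
    KX = 2 * np.pi * np.fft.fftfreq(Nx, d=dX)[:, None]
    KY = 2 * np.pi * np.fft.fftfreq(Ny, d=dY)[None, :]
    # "kinetic" factor: i A_T = (1/8) A_XX - (1/4) A_YY, diagonal in Fourier space
    kin = np.exp(1j * dt * (KX**2 / 8 - KY**2 / 4))
    # "potential" factor from the current term, applied in half steps
    pot = np.exp(-1j * k0 * Ux * dt / 2)
    for _ in range(nsteps):
        A = pot * A
        A = np.fft.ifft2(kin * np.fft.fft2(A))
        A = pot * A
    return A
```

Both factors are unitary, so the scheme conserves the wave norm exactly, up to FFT round-off.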
\begin{figure}[ht]\centerline{\includegraphics[width=3.0in,angle=270]{fig9.ps}}
\caption{The probability of exceeding wave height $2H$, in units of the significant wave height SWH, is shown for an incoming wave speed $v=7.8$~m/s, incoming angular spread $\Delta \theta=5.7^\circ$, and rms current speed $u_{\rm rms}=0.5$~m/s. The solid curve shows the results of a numerical simulation performed on a $20$~km by $40$~km field, with typical eddy size $\xi=800$~m, while the dashed curve represents equation~(\ref{kbess2}) with $N=6.8$. The Rayleigh (random seas) prediction of equation (\ref{prayleighx}) is shown for comparison.
}
\label{figkdistr}
\end{figure}
The calculations are performed on a rectangular field measuring 40 km along the mean direction of propagation and 20 km in the transverse direction, with typical eddy size $\xi=800$~m. (We note that a very small value for the eddy size is chosen to maximize the statistics collected; this is also a ``worst case'' scenario for the theory, as the ray approximation is expected to work ever better as the ratio of eddy size to the wavelength increases.) Equation~(\ref{cnls}) is integrated numerically using a split-operator Fourier transform method~\cite{weidman}, on a 1024 by 512 grid. The incoming wave is a random superposition of a large number of monochromatic waves with
directions normally distributed around the mean direction $\theta=0$ with standard deviation $\Delta \theta$. Without loss of generality, the incoming wave number is fixed at $k_0=2 \pi/(156\,{\rm m})$, corresponding to a frequency $\omega_0=\sqrt{gk_0}=2 \pi/ (10\,{\rm sec})$ and a group velocity $v=7.81$~m/s.
Each run simulates wave evolution for $5 \cdot 10^5$~sec or $5\cdot 10^4$ wave periods, sufficient for the wave height statistics to converge.
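The quoted carrier frequency and group velocity follow from the deep-water dispersion relation $\omega=\sqrt{gk}$; a quick arithmetic check:

```python
import math

g = 9.81                      # gravitational acceleration (m/s^2)
k0 = 2 * math.pi / 156.0      # carrier wave number, wavelength 156 m
omega0 = math.sqrt(g * k0)    # deep-water dispersion: omega = sqrt(g k)
v_group = omega0 / (2 * k0)   # group velocity: d(omega)/dk = omega/(2k)
```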
The results for $\Delta \theta=5.7^\circ$, corresponding to a very large freak index $\gamma=3.15$, are shown in figure~\ref{figkdistr}. These are compared both with the theoretical prediction of equation~(\ref{kbess2}) (here $N=6.8$) and with the baseline Rayleigh distribution of equation~(\ref{prayleighx}). This is an extreme scenario, in which the occurrence probability of extreme rogue waves ($3$ times the significant wave height) is enhanced by more than three orders of magnitude. Even better agreement with the theoretical model of equation~(\ref{kbess2}) is obtained for more moderate values of $\gamma$, corresponding to larger $N$.
\begin{figure}[ht]\centerline{\includegraphics[width=3.0in,angle=270]{fig13.ps}}
\caption{The wave height distribution for a random incoming sea scattered by random currents is obtained numerically for four values of the rms current speed $u_{\rm rms}$ and four values of the incoming angular spread $\Delta \theta$. In each case, a fit to the K-distribution (equation~(\ref{kbess})) yields the number-of-degrees-of-freedom parameter $N$ (describing deviations from Rayleigh statistics), which is plotted as a function of the freak index $\gamma$ (defined in equation~(\ref{gammadef})). As in the previous figures, the wave speed is fixed at $v=7.81$~m/s. The solid line is the theoretical prediction of equation~(\ref{nval}).
}
\label{figlinscaling}
\end{figure}
In figure~\ref{figlinscaling} we repeat the numerical simulation for four different values of the incoming angular spread $\Delta \theta$ and four different values of the rms current speed $u_{\rm rms}$. In each case, the numerically obtained wave height distribution is fit to a K-distribution (equation~(\ref{kbess})), and the resulting value of $N$ (which fully describes the strength of deviations from Longuet-Higgins statistics) is plotted as a function of the freak index $\gamma$. Excellent agreement is observed with the power-law prediction of equation~(\ref{nval}) all the way up to $\gamma \approx 2$ (corresponding to $N \approx 10$), even though the analytic prediction was obtained in a small-$\gamma$ approximation. The regime in which the analytic formula (\ref{nval}) works well includes most conditions likely to be found in nature (e.g., all but the first row of table~\ref{enhancement}). Referring again to table~\ref{enhancement}, we observe that the theory accurately describes enhancements of up to three orders of magnitude in the formation probability of extreme rogue waves. Modest deviations from the analytic formula are observed numerically at very large values of $\gamma$ (corresponding to even larger enhancements).
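The fitting procedure can be mimicked on synthetic data by sampling heights from the local-Rayleigh model and recovering $N$ from the intensity moments; in this sketch, the moment estimator and sample size are our own choices, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
N_true, nsamp = 20.0, 1_000_000
# local mean intensity: mean-one chi^2_N, i.e. Gamma(shape=N/2, scale=2/N)
I = rng.gamma(shape=N_true / 2, scale=2.0 / N_true, size=nsamp)
# local Rayleigh statistics: u = H^2/(2 sigma^2) is exponential given I
u = I * rng.exponential(size=nsamp)
# moment estimator: <u^2> = 2 (1 + 2/N)  =>  N = 4 / (<u^2> - 2)
N_est = 4.0 / (np.mean(u**2) - 2.0)
```

With $10^6$ samples the estimator recovers the input $N$ to within a few percent; a maximum-likelihood fit of the K-distribution, as used for figure~\ref{figlinscaling}, behaves similarly.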
\subsection{Experimental Demonstration of Linear Rogue Wave Formation}
\begin{figure}[ht]\centerline{\includegraphics[width=3.0in,angle=0]{fig6.eps} \includegraphics[width=3.0in,angle=0]{fig7.eps}}
\caption{The left panel shows the distribution of the experimentally measured time-averaged microwave intensity $s$, found for $780$ positions in the random wave field, compared with the $\chi^2$ distribution of
equation~(\ref{chisq}), with $N=32$. The inset shows the same data on a logarithmic scale. The right panel shows a time series of the wave intensity at a single point, including a ``rogue wave'' event. The inset shows a snapshot of the wave intensity field near this point at the moment of the extreme event.
}
\label{figmicrowave}
\end{figure}
Direct experimental verification in the ocean of the statistical predictions made analytically in section~\ref{linwavestat} and confirmed numerically in section~\ref{seclinnumerical} is obviously highly desirable. Unfortunately no observational data set exists at this time that would allow the tail of the wave height distribution to be studied as a function of the freak index $\gamma$, i.e., as a function of the rms current speed $u_{\rm rms}$ and of the angular spread $\Delta \theta$. Recently, however, experiments in open quasi-two-dimensional microwave cavities with disorder~\cite{microwave} have found a strong enhancement in the occurrence probability of high-amplitude waves, which may be interpreted as ``rogue waves'' in this analog system. In the microwave system, randomly placed brass cones play the role of random ocean currents, and a movable source antenna enables incoming waves to arrive from different directions. A movable drain antenna acts as a weak probe, and allows for a spatial mapping of the wave fields within the scattering arrangement. A great advantage of the microwave system is that the electromagnetic wave equation is linear, so that the observed enhancement in the tail of the wave height distribution may serve in principle as a direct experimental test of the theory developed in the previous sections.
In the left panel of figure~\ref{figmicrowave}, the time-averaged wave intensity $s$ is found for different positions of the probe, and the probability distribution $g(s)$ is shown. Most of the distribution is well described by the $\chi^2$ distribution of equation~(\ref{chisq}). We note that in the absence of disorder, the time-averaged intensity would be position-independent and $g(s)$ would reduce to $\delta(s-1)$ ($N \to \infty$ in equation~(\ref{chisq})). The inset in the left panel shows additional rare events in the far tail that are not described by the $\chi^2$ distribution~\cite{microwave}. The right panel in figure~\ref{figmicrowave} shows time series data of the wave intensity at a single point, including an extreme event observed in the experiment, and the inset shows a snapshot of the wave intensity in the region at the moment corresponding to this extreme event. The event presented here has wave height $2H=5.3$~SWH, and events of this magnitude or greater are observed with probability $1.3 \times 10^{-9}$ in the experiment, which is an enhancement of 15 orders of magnitude compared to the Rayleigh distribution.
These results confirm that linear scattering is a sufficient mechanism for a large enhancement in the tail of the wave height distribution, even when nonlinearity is entirely absent from the physical system being studied.
\section{Nonlinear Wave Model}
We have already seen (e.g., in table~\ref{enhancement}) that under physically realistic sea conditions, linear wave dynamics, with nonlinearity only in the corresponding ray equations, are sufficient to
enhance the incidence of extreme rogue waves by several orders of magnitude. At the same time, the true equations for ocean wave evolution are certainly nonlinear, and furthermore the nonlinear terms, which scale as powers of the wave height, manifestly become ever more important in the tail of the wave height distribution. Thus, a fully quantitative theory of rogue wave statistics must necessarily include nonlinear effects, which we address in the following.
\subsection{Nonlinear Schr\"odinger Equation}
\label{secnlse}
The original Nonlinear Schr\"odinger Equation (NLSE) for surface gravity waves in deep water was derived by Zakharov using a spectral method~\cite{zakharov}, and is valid
to third order in the steepness $\varepsilon =k_0 \overline{H}$, where $\overline{H}$ is the
mean wave height. Subsequently, the NLSE was extended to fourth order in $\varepsilon$
by Dysthe~\cite{dysthe} and then to higher order in the bandwidth $\Delta \omega/\omega$ by Trulsen and Dysthe~\cite{trulsen}. The Trulsen-Dysthe equations include frequency downshifting~\cite{trulsendownshift}, the experimentally observed reduction in average frequency over time~\cite{lake}; however the physics of frequency downshifting may not yet be fully understood~\cite{segur}.
In our simulations we implement the current-modified
$O(\varepsilon^4)$ NLSE, as derived by Stocker and Peregrine in dimensionless form~\cite{stocker}:
\begin{eqnarray}\label{cnls4}
& &i{B}_T -\frac{1}{8}({B}_{{X}{X}}-2{B}_{{Y}{Y}})-\frac{1}{2}{B}|{B}|^2-{B}\Phi_{c{X}} \nonumber \\
& &\quad= \frac{i}{16} ({B}_{{X}{X}{X}}-6{B}_{{Y}{Y}{X}})+\bar{\Phi}_{{X}}{B}+\frac{i}{4}{B}({B}{B^*}_{{X}}-6{B^*}{B}_{{X}}) \nonumber \\
& &\quad\;\;+i\left(\frac{1}{2}\Phi_{c{X}T}-\Phi_{cZ}\right){B}-i\bar{\nabla}_h\Phi_c\cdot\bar{\nabla}_h{B} \,,
\end{eqnarray}
where the linear and third-order terms are collected on the left-hand side of equation~(\ref{cnls4}).
Here $\bar{\Phi}$, $\Phi_c$, and $B$ represent the mean flow, surface current, and oscillatory parts, respectively, of
the velocity potential $\phi$:
\begin{equation}\label{vpexpand}
\phi=\sqrt{\frac{g}{k_0^3}}\left[\bar{\Phi}+\Phi_c+ \frac{1}{2}\left( Be^{k_0z+i\theta}+B_2e^{2(k_0z+i\theta)}+{\rm c.c.} \right)\right]\,,
\end{equation}
where the second-harmonic term $B_2$ is a function of $B$ and its derivatives,
$({X},{Y},T)$ are dimensionless space and time coordinates defined previously in equation~(\ref{dimspacetime}),
and $\theta=k_0x-\sqrt{gk_0}t={X}-T/2$ is the phase. The surface elevation, which is the quantity of interest
for our purposes, is similarly expanded as
\begin{equation}
\zeta={k_0}^{-1}\left[\bar{\zeta}+\zeta_c+ \frac{1}{2}\left( Ae^{i\theta}+A_2e^{2i\theta}+A_3e^{3i\theta}+{\rm c.c.} \right)\right]\,,
\label{Aexpand}
\end{equation}
where the expansion coefficients may be obtained from the velocity potential as
\begin{eqnarray}
A&=&i B+\frac{1}{2k_0} B_x+\frac{i}{8 k_0^2}(B_{xx}-2B_{yy})+\frac{i}{8} B|B|^2 \nonumber \\
A_2&=&-\frac{1}{2}B^2 +\frac{i}{k_0}BB_x \\
A_3&=&-\frac{3i}{8} B^3 \nonumber \,.
\end{eqnarray}
Here both $B$ and $A$ are of order $\varepsilon$, so the steepness of the incoming sea can be set by adjusting the magnitude of $B$ or $A$ in the initial condition.
In the simulations, for simplicity, we work in the frame of reference moving with velocity $v_0=(c_0+U_0,V_0)$, so that $\bar{\Phi}$ and $\Phi_c$ in equation~(\ref{vpexpand}) vanish. The incoming wave is a random superposition of a large number of monochromatic waves with different frequencies and propagation directions. The initial wave field can therefore be prepared analytically as a linear superposition of plane waves,
\begin{equation}\label{eq:iniwave}
\psi(\vec{r},t)=\sum_{i=1}^{N_w}\phi_i=\sum_{i=1}^{N_w}A_ie^{i\vec{k}_i\cdot\vec{r}} \,,
\end{equation}
where $N_w$ is the number of component waves and $\vec{k}_i$ is the random wave vector of each monochromatic wave. For our setup, the wave vector can be expressed as
\begin{equation}\label{eq:wavenumber}
\vec k=(k_0+k')\left(\cos\theta'\,\hat x+\sin\theta'\,\hat y\right)
\end{equation}
where $k'$ is a random variation in the wave number, following a Gaussian distribution with standard deviation $\Delta k$, and $\theta'$ is the propagation angle, normally distributed with standard deviation $\Delta \theta$.
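Equations~(\ref{eq:iniwave}) and (\ref{eq:wavenumber}) translate directly into code; a sketch with illustrative parameter values (the wave count, amplitudes, and sampling points are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
Nw = 300                                  # number of component plane waves
k0 = 2 * np.pi / 156.0                    # carrier wave number (1/m)
dk, dtheta = 0.1 * k0, np.deg2rad(2.6)    # spectral and angular spreads
kmag = k0 + rng.normal(0.0, dk, Nw)       # k' Gaussian with std dk
theta = rng.normal(0.0, dtheta, Nw)       # theta' Gaussian with std dtheta
kvec = np.stack([kmag * np.cos(theta), kmag * np.sin(theta)], axis=1)
# equal-amplitude components with random phases, normalized to unit power
amps = np.exp(2j * np.pi * rng.random(Nw)) / np.sqrt(Nw)
# evaluate psi = sum_i A_i exp(i k_i . r) at random points of a 20 km box
r = rng.uniform(0.0, 2.0e4, size=(10_000, 2))
psi = (np.exp(1j * (r @ kvec.T)) * amps[None, :]).sum(axis=1)
```

By the central limit theorem the superposed field is complex Gaussian, so the intensity $|\psi|^2$ is exponentially distributed with unit mean, i.e., the incoming sea obeys Longuet-Higgins statistics before any scattering or nonlinear evolution.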
In the following examples, equation~(\ref{cnls4}) is integrated numerically with the current set to zero, in order to investigate systematically and quantitatively the effect of nonlinear focusing. In nature, the interplay between linear and nonlinear mechanisms is also of great interest, and may give rise to even stronger enhancement in the probability of rogue wave occurrence than either effect individually, as demonstrated below in section~\ref{seccombined}~(see also~\cite{janssenherbers,yingkaplan}).
\subsection{Height Distribution}
\label{secnlheight}
As in the linear case, the split-operator Fourier transform method is used to integrate equation~(\ref{cnls4}) numerically. The rectangular field measuring 20 km along the mean direction of propagation and 10 km in the transverse direction is discretized using a 1024 by 512 grid. The incoming state is a random superposition of plane waves with wave numbers normally distributed around $k_0$ with standard deviation $\Delta k$, and
directions normally distributed around the mean direction $\theta=0$ with standard deviation $\Delta \theta$. Without loss of generality we fix the mean incoming wave number at $k_0=2 \pi/(156\,{\rm m})$, as in section~\ref{seclinear}. The steepness $k_0 \overline{H}$ is adjusted by varying the mean height $\overline{H}$ of the incoming sea. Each run simulates wave evolution for $4 \cdot 10^6$~sec or $4\cdot 10^5$ wave periods.
Typical results are represented by solid curves in figure~\ref{fignldistr}, where we fix $\Delta k/k_0=0.1$ and $\Delta \theta=2.6^\circ$ (as
we will see below, the values of $\Delta \theta$ required to see very strong effects from nonlinear focusing are typically smaller than those needed to observe significant deviations from Rayleigh by linear scattering). The cumulative probability distribution of the wave height $2H$, in units of the significant wave height SWH, is shown for three nonzero values of the wave steepness $\varepsilon$. As expected, the Rayleigh probability distribution of equation~(\ref{prayleighx}) is recovered
in the limit $\varepsilon \to 0$, and ever stronger enhancement in the tail is observed as the steepness of the incoming sea increases. The occurrence probability of extreme rogue waves, $2H/{\rm SWH}=3.0$, is enhanced by one to three orders of magnitude for the parameters shown.
To understand the functional form of the distributions in figure~\ref{fignldistr}, we again make use of the local Rayleigh approximation discussed above in section~\ref{linwavestat}. Here the wave height distribution is given locally in space and time by a Rayleigh distribution around the local mean height (corresponding to a locally random superposition of plane waves), while the local mean height itself varies slowly on the scale of the mean wavelength and mean period. This approximation is well justified, since the envelope $A(X,Y,T)$ in equation~(\ref{Aexpand}) is slowly varying for $\Delta k/k_0 \ll 1$ and $\Delta \theta \ll 1$, while the higher harmonics $A_2(X,Y,T)$ and $A_3(X,Y,T)$ are suppressed by factors of $\varepsilon$ and $\varepsilon^2$, respectively. Taking the local mean intensity to be $\chi^2$ distributed, and convolving the $\chi^2$ distribution of the mean intensity with the Rayleigh distribution around the mean intensity, we obtain as in the linear case a K-distribution (\ref{kbess2}) for the total distribution of wave heights.
\begin{figure}[ht]
\centerline{\includegraphics[width=4in,angle=0]{8.eps}}
\caption{The distribution of wave heights, in units of the significant wave height, is calculated for three nonzero values of the steepness $\varepsilon$ (upper three solid curves), and compared with the random seas model of equation~(\ref{prayleighx}) (lowest solid curve). In each case, the dashed or dotted curve is a best fit to the K-distribution of equation~(\ref{kbess2}). Here we fix the angular spread $\Delta \theta=2.6^\circ$ and wave number spread $\Delta k/k_0=0.1$ of the incoming sea.
}
\label{fignldistr}
\end{figure}
In figure~\ref{fignldistr}, each data set is fit to the K-distribution of equation~(\ref{kbess2}), arising from the local Rayleigh approximation. We see that the fits, indicated by dashed and dotted lines, perform adequately for probabilities down to $10^{-6}$, where statistical noise begins to dominate. In particular, we clearly observe the crossover between the Gaussian behavior (\ref{prayleighx}) at small to moderate heights
and the asymptotic exponential behavior (\ref{asympt}) at large heights. However, systematic deviations do exist, which are especially visible at larger values of $\varepsilon$, corresponding to smaller values of the $N$ (degrees of freedom) parameter. These systematic deviations are in large part due to the fact that the true wave height distribution for any given set of input parameters exhibits spatial dependence, evolving from the original Rayleigh distribution imposed by incoming boundary conditions to the broader K-distribution, and then gradually back to a Rayleigh distribution as the wave energy is transferred to longer wavelengths and the steepness decreases~\cite{janssenherbers}. An example of this spatial dependence appears below in figure~\ref{figspatial}. Thus, a more accurate model
for the total wave height distribution consists of a sum of several K-distributions, or equivalently the tail of the
full distribution may be modeled by a K-distribution multiplied by a prefactor $C<1$, as discussed in reference~\cite{yingkaplan}. Nevertheless, as seen in figure~\ref{fignldistr}, equation~(\ref{kbess2}) correctly describes wave height probabilities at the $\pm 20\%$ level of accuracy, allows for an extremely simple one-parameter characterization of the wave height distribution, and facilitates easy comparison between the effects of linear and nonlinear focusing.
\subsection{Scaling with Input Parameters}
Given the single-parameter approximation of equation~(\ref{kbess2}), it is sufficient to explore the dependence of the parameter $N$ on the input variables describing the incoming sea, specifically the initial angular spread $\Delta \theta$, the initial wave number spread $\Delta k/k_0$, and the initial steepness $\varepsilon$. In the two panels of figure~\ref{fignlscaling}, we fix the steepness at $\varepsilon=0.032$ and show the scaling of $N$ with $\Delta \theta$ and $\Delta k/k_0$, respectively. Given that the Benjamin-Feir instability for a monochromatic wave in one dimension~\cite{bf} is at the root of the nonlinear instability in the general case, it is not surprising that stronger deviations from the Rayleigh model, as indicated by smaller values of $N$, occur as $\Delta \theta$ or $\Delta k$ is reduced, consistent with earlier results~\cite{onorato01, dysthe00, socquet05, gramstad07}. Specifically, we find
\begin{equation}
N \sim (\Delta \theta)^a \left(\frac{\Delta k}{k_0}\right)^b
\label{nonlindtdk}
\end{equation}
where $a$, $b \approx 1$.
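The exponents $a$ and $b$ are extracted by a log-log fit across an ensemble of runs; the procedure can be sketched on synthetic, noise-free data (the grid of parameter values and the constant $C$ below are hypothetical, chosen only to illustrate the fit):

```python
import numpy as np

# hypothetical grid of runs obeying N = C * dtheta^a * (dk/k0)^b exactly
a_true, b_true, C = 1.0, 1.0, 400.0
dtheta = np.deg2rad(np.array([1.3, 2.6, 5.2, 10.4]))   # radians
dk_over_k0 = np.array([0.05, 0.10, 0.15, 0.20])
TH, DK = np.meshgrid(dtheta, dk_over_k0)
N = C * TH**a_true * DK**b_true
# linear least squares in log space: log N = a log dtheta + b log dk + log C
X = np.column_stack([np.log(TH).ravel(), np.log(DK).ravel(),
                     np.ones(TH.size)])
coef, *_ = np.linalg.lstsq(X, np.log(N).ravel(), rcond=None)
a_fit, b_fit = coef[0], coef[1]
```

Applied to the simulated $N$ values, the same regression yields the best-fit exponents quoted in figure~\ref{fignlscaling}.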
\begin{figure}[ht]
\centerline{\includegraphics[width=3.5in,angle=0]{dn.eps}
\includegraphics[width=3.5in,angle=0]{kn.eps}}
\caption{The best-fit $N$ value (equation~(\ref{kbess2})) describing the wave height probability distribution is shown as a function of the initial angular spread $\Delta \theta$ and initial wave number spread $\Delta k/k_0$ of the incoming sea. The steepness is fixed at $\varepsilon=0.032$.
The left panel shows the scaling of $N$ with $\Delta \theta$, with the line showing the best-fit scaling $N\sim (\Delta \theta)^{1.04}$ for $\Delta k/k_0=0.15$. The right panel shows the scaling of $N$ with $\Delta k/k_0$, with the line showing the best-fit scaling $N \sim (\Delta k/k_0)^{1.15}$ for $\Delta \theta=2.6^\circ$.
}
\label{fignlscaling}
\end{figure}
We note that the scaling of $N$ with incoming angular spread $\Delta \theta$ for nonlinear focusing, $N \sim \Delta \theta$, is only half as strong as the scaling $N \sim (\Delta \theta)^2$ arising from linear wave scattering by currents, as implied by equations~(\ref{gammadef}) and (\ref{nval}). Thus, smaller angular spreads $\Delta \theta$ are needed for the nonlinear focusing mechanism to be effective, as compared with linear focusing by currents.
This is easily seen by comparing the range of $\Delta \theta$ in figure~\ref{fignlscaling} with the corresponding range in table~\ref{enhancement} for the linear mechanism.
On the other hand, figure~\ref{fignlscaling} and equation~(\ref{nonlindtdk}) both imply that the nonlinear mechanism exhibits significant sensitivity to the spectral width $\Delta k/k_0$, consistent with previous findings~\cite{kharif, clamond02, henderson99,lake77,tanaka90,zak06}. This is to be contrasted with the linear mechanism of rogue wave formation, which is insensitive to the spectral width at leading order in $\Delta k/k_0$~\cite{hkd}.
\begin{figure}[ht]
\centerline{\includegraphics[width=4in,angle=0]{sn.eps}}
\caption{The best-fit $N$ value (equation~(\ref{kbess2})) describing the wave height probability distribution is shown as a function of the steepness $\varepsilon$ for several values of the initial angular spread $\Delta \theta$. The
initial wave number spread is fixed at $\Delta k/k_0=0.1$, as in figure~\ref{fignldistr}. The line shows the best-fit scaling $N \sim \varepsilon^{-2.9}$ for $\Delta \theta=2.6^\circ$.
}
\label{fignlsteep}
\end{figure}
Finally, in figure~\ref{fignlsteep}, we fix the wave number spread $\Delta k/k_0=0.1$, as in figure~\ref{fignldistr}, and examine the scaling of the $N$ value with the steepness $\varepsilon$, for several values of the initial angular spread $\Delta \theta$. As $\varepsilon$ grows, $N$ decreases, indicating greater deviations from the Rayleigh distribution. Again, we observe good power-law scaling with the steepness in the range of parameters considered here. We have
\begin{equation}
N \sim \varepsilon^c
\label{nonlineps}
\end{equation}
where $c \approx -3$. At larger values of the steepness (not shown), saturation occurs.
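The exponents quoted here and for figure~\ref{fignlscaling} (e.g., $c\approx -3$, $1.04$, $1.15$) are best-fit slopes on log-log axes. The following sketch illustrates that fitting procedure on synthetic data obeying an exact power law; the data values are illustrative only, not the simulation output.

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of log(y) = c*log(x) + log(a); returns (a, c)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    c = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) \
        / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - c * mx)
    return a, c

# Synthetic data obeying N = 5 * eps^(-2.9) exactly (illustrative values only)
eps_vals = [0.02, 0.025, 0.032, 0.04, 0.05]
N_vals = [5.0 * e ** (-2.9) for e in eps_vals]
a, c = fit_power_law(eps_vals, N_vals)
print(round(c, 3))  # → -2.9
```

On noisy simulation data the same fit returns the quoted approximate exponents together with their prefactors.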
\begin{table}[ht]
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
$\;\;\;\;\;N\;\;\;\;\;$ & \;\;\;\;\;$E(2.2)$\;\;\;\;\; & \;\;\;\;\;$E(3.0)$\;\;\;\;\; \\
\hline
2 & $1.1 \cdot 10^2$ & $5.2\cdot 10^4$ \\
5 & 37 & $7.3\cdot 10^3$ \\
10 & 16 & $1.3 \cdot 10^3$ \\
20 & 6.8 & $2.2 \cdot 10^2$ \\
50 & 2.9 & 27 \\
100 & 1.8 & 7.8 \\
\hline
\end{tabular}
\end{center}
\caption{The enhancement in the probability of rogue wave formation (wave height $2H=2.2\, {\rm SWH}$) as well as the enhancement of the probability of extreme rogue wave formation
(wave height $2H=3.0\, {\rm SWH}$) are calculated for several values of the $N$ parameter, as in table~\ref{enhancement}.
}
\label{enhnl}
\end{table}
Table~\ref{enhnl},
calculated analogously to table~\ref{enhancement} in the previous section, aids in extracting the implications
of figures~\ref{fignlscaling} and \ref{fignlsteep} by indicating the quantitative relationship between the $N$ value and the enhancement in rogue wave and extreme rogue wave occurrence probabilities. We note that even at $N$ values between $50$ and $100$, corresponding to the upper range of values in figures~\ref{fignlscaling} and \ref{fignlsteep}, the occurrence of extreme rogue waves is enhanced by an order of magnitude. Exponentially larger enhancement is predicted for parameters associated with smaller values of $N$.
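The qualitative content of table~\ref{enhnl} can be reproduced from the compound construction underlying the K-distribution: a locally Rayleigh sea whose mean intensity fluctuates with a gamma distribution of shape parameter $N$ and unit mean. The sketch below uses this construction with illustrative normalization conventions, so it tracks the trend of the table (enhancement growing as $N$ decreases and as taller waves are considered) rather than reproducing its exact entries.

```python
import math

def enhancement(N, h, steps=4000, gmax=30.0):
    """Tail enhancement E(h) = P(height > h*SWH) / P_Rayleigh(height > h*SWH)
    for a locally Rayleigh sea whose mean intensity g is Gamma(N, mean 1)."""
    dg = gmax / steps
    p = 0.0
    for i in range(1, steps + 1):
        g = i * dg
        # gamma pdf with shape N and unit mean, in log form to avoid overflow
        log_pdf = (N * math.log(N) + (N - 1) * math.log(g)
                   - N * g - math.lgamma(N))
        p += math.exp(log_pdf - 2.0 * h * h / g) * dg
    return p / math.exp(-2.0 * h * h)

for N in (2, 10, 50):
    print(N, round(enhancement(N, 2.2), 1), round(enhancement(N, 3.0), 1))
```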
\section{Combined Effect of Nonlinear and Linear Focusing}
\label{seccombined}
Finally, we discuss the possibility of even greater enhancement in the rogue wave formation probability when linear and nonlinear mechanisms are acting together~\cite{janssenherbers,yingkaplan}. In this context, it is important to consider again the spatial scales associated with rogue wave development in the two mechanisms. We recall that when an incoming random sea is linearly scattered by strong currents, the first singularities in the ray dynamics occur after a distance scale $L \sim \xi (u_{\rm rms}/v)^{-2/3}$, as discussed in section~\ref{secrayinten}. These first singularities are the ones associated with the highest probability of rogue wave formation, as subsequent random scattering exponentially stretches the phase space manifold and leads to ever smaller density associated with each second- and higher-order singularity~\cite{lkbranch,hkd}. At distances $\gg L$, the pattern of hot and cold spots becomes less and less prominent, the ray density again becomes nearly uniform (see figure~\ref{figrayimage}), and the wave height distribution asymptotically approaches again the Rayleigh limit.
Similarly, nonlinear evolution as described by the NLSE without current (equation~(\ref{cnls4}) with $\Phi_c=0$) occurs on a typical distance scale $1/k\varepsilon^2$. On distance scales larger than $1/k\varepsilon^2$, energy transfer from smaller to larger wavelengths (i.e., the frequency downshifting effect mentioned previously in section~\ref{secnlse}) results eventually in a decline in the steepness and again an approach towards the limiting Rayleigh distribution~\cite{janssen09,tanaka01,gibson07}.
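For the parameter values used in figure~\ref{figspatial} ($\xi=800$~m, $u_{\rm rms}=0.2$~m/s, $v=7.81$~m/s, $\varepsilon=0.032$), the two distance scales can be compared directly. The sketch below assumes the deep-water dispersion relation $k_0=g/v^2$, treating $v$ as the phase speed; that identification is an assumption made here purely for this estimate.

```python
import math

grav = 9.81      # m/s^2
xi = 800.0       # eddy correlation length (m)
u_rms = 0.2      # rms current speed (m/s)
v = 7.81         # mean wave speed (m/s)
eps = 0.032      # steepness

# First ray-dynamics singularities (linear scattering by currents)
L_lin = xi * (u_rms / v) ** (-2.0 / 3.0)

# Nonlinear (NLSE) evolution scale, assuming deep-water dispersion k0 = g/v^2
k0 = grav / v ** 2
L_nl = 1.0 / (k0 * eps ** 2)

print(round(L_lin), round(L_nl))  # both on the kilometre scale
```

Both scales come out at several kilometres, consistent with the statement below that they are comparable for these parameters.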
\begin{figure}[ht]
\centerline{\includegraphics[width=4in,angle=0]{spatial_rr.eps}}
\caption{The fourth moment of the wave height distribution is shown as a function of evolution distance, starting in each case from a Longuet-Higgins random sea with mean
wave speed $v=7.81\,{\rm m/s}$, initial angular spread $\Delta \theta=5.2^\circ$, and wave number spread $\Delta k/k_0=0.1$. The three situations considered are:
(a) linear scattering by random currents of rms speed $u_{\rm rms}=0.2$~m/s and eddy correlation length $\xi=800$~m,
(b) nonlinear evolution with initial steepness $\varepsilon=0.032$ and without currents,
and (c) nonlinear evolution in the presence of currents.
}
\label{figspatial}
\end{figure}
This behavior is illustrated in figure~\ref{figspatial} for linear evolution with random currents, nonlinear evolution in the absence of currents, and for a scenario in which the two mechanisms are both active. Here we use the fourth moment $\overline{H^4}$ as a convenient measure of the size of the tail of the wave height distribution. Note that for the chosen parameters, the distance scales associated with linear and nonlinear rogue wave formation are comparable. Clearly, in this case currents have a greater effect than nonlinear focusing, but the strongest deviations from Rayleigh statistics are observed when linear scattering and nonlinear interaction are both present.
\begin{figure}[ht]\centerline{\includegraphics[width=3.0in,angle=0]{nonlin_current_1_rr.eps} \includegraphics[width=3.0in,angle=0]{nonlin_current_2_rr.eps}}
\caption{Left panel: The cumulative distribution of wave heights, in units of the significant wave height, is obtained for the same three scenarios as are considered in figure~\ref{figspatial}. In each case, the solid curve is a fit to a K-distribution with (a) $N=16$ for linear scattering by currents, (b) $N=29$ for nonlinear evolution, and (c) $N=5.1$ when linear and nonlinear focusing are acting in concert. The Rayleigh distribution ($N= \infty$) is shown for reference. Right panel: In each of the three scenarios, the probability enhancement factor $P_{\rm total}(H)/P_{\rm Rayleigh}(H)$ is obtained from the data.
}
\label{fignonlincurrent}
\end{figure}
The total wave height distributions for these same three scenarios, and the probability enhancement over the predictions of the Longuet-Higgins model, are shown in figure~\ref{fignonlincurrent}. As noted above in section~\ref{secnlheight}, when wave height data is collected over a large spatial field that includes some areas of very strong deviations from Rayleigh statistics and other areas where such deviations have not yet had an opportunity to develop, the full distribution may not be well approximated by a single K-distribution, but the tail may still be well approximated in this way, since it is dominated by data from those areas where deviations are strongest~\cite{yingkaplan}. This is indeed what we clearly observe in figure~\ref{fignonlincurrent}, for the scenario where nonlinearity and currents are both present.
Again we see from figure~\ref{fignonlincurrent} that deviations from Rayleigh statistics become ever more pronounced as taller and taller waves are considered, as expected from the asymptotic form of the K-distribution (equation~(\ref{asympt})). In particular, in this example we see that the probability of forming an extreme rogue wave (wave height $=3$~SWH) is enhanced by a factor $90$ due to nonlinear interaction, by a factor of $380$ due to focusing by currents, and by a factor of $2600$ when the two mechanisms are combined.
\section{Conclusions and Outlook}
It will take some time to sort out the mechanisms of rogue wave formation with complete certainty. All potentially important factors and mechanisms ought to be included in the discourse, which we hope will someday lead to agreement about the several formation mechanisms and their interactions. More importantly, predictive tools leading to safer navigation should eventually emerge. One of the seemingly important factors, which might be called ``statistical focusing,'' is highlighted here. In terms of wave propagation, statistical focusing is a linear effect (although it is nonlinear dynamics at the level of ray tracing). It leads to large enhancements in the frequency of rogue wave formation under reasonable sea state assumptions.
Statistical focusing combines the effects of deterministic wave refraction by current eddies with Longuet-Higgins statistical ideas under realistic conditions. The key notion is that the focusing effects of eddies, which would be very dramatic on an (unrealistic) monochromatic and unidirectional sea, are not altogether washed out when realistic frequency and directional dispersion are included. Essentially, deterministic caustics present in the unrealistic idealization are smoothed into hot spots, which are then treated statistically within Longuet-Higgins theory. The hot spots dominate the statistics in the tail of the wave height distribution. This amounts to a nonuniform sampling version of Longuet-Higgins theory, with a solid basis for the nonuniform energy density distributions used.
Since nonlinear effects are also important, we have examined them alone within the popular fourth-order nonlinear Schr\"odinger equation (NLSE) approximation for nonlinear wave evolution under realistic seaway conditions. Finally, we have investigated the combined effect of nonlinear wave evolution and statistical focusing. We find that the strongest deviations from Rayleigh statistics are observed when linear scattering (statistical focusing) and nonlinear interaction (NLSE) are both present. However, for the parameters chosen here at least, the linear scattering due to eddies was more important than the nonlinear effects, which require large steepness or a very narrow range of propagation directions to become significant.
We have presented a measure closely related to the probability of rogue wave formation, the freak index $\gamma$. This could conceivably become the basis for a probabilistic forecast of rogue wave formation, in the spirit of rainfall forecasts.
There are at least three clear directions for future development of the work presented here. First, both the computer simulations and the theory must be developed further to explore fully and systematically the combined effects of nonlinear and linear focusing. This will also involve investigating in depth the underlying mechanism through which the formation of hot and cold spots is aided by nonlinear focusing. Secondly, a better understanding is needed of the stability of the hot spot patterns under slow changes in the current field or in the spectrum or directionality of the incoming sea. The strength of what might be called scintillation or twinkling~\cite{twinkling} in analogy with the case of light traveling through the atmosphere will have important consequences for the predictive power of the model. Thirdly, and most importantly, there is a clear need to compare the model simulations with observations and experiments. Although comprehensive global data are not available at this point, it may be possible to compare the results to local observations where data are more readily available, e.g., in the North Sea.
Whatever the final word is on rogue wave formation (or final words, because there may be more than one mechanism), it must involve a reallocation of energy from a larger area to a smaller one. Waves cannot propagate and increase in height at no expense to their neighbors: the energy has to come from somewhere, and the effect must be to reduce the wave energy somewhere else. The focusing mechanism is clear in this respect: hot spots form and cold spots do too, according to a ray tracing analysis, maintaining energy balance~\cite{brethertongarrett}.
\ack This work was supported in part by the US
NSF under Grant PHY-0545390.
\section*{References}
\begin{document}
\title{Structures and Transformations for Model Reduction of Linear Quantum Stochastic Systems\footnote{Research supported by the Australian Research Council}}
\author{Hendra~I.~Nurdin
\thanks{Hendra I. Nurdin is with the School of Electrical Engineering and Telecommunications, The University of New South Wales (UNSW),
Sydney NSW 2052, Australia. Email: [email protected]}}
\date{}
\maketitle \thispagestyle{empty}
\begin{abstract}
The purpose of this paper is to develop a model reduction theory for linear quantum stochastic systems that are commonly encountered in quantum optics and related fields, modeling devices such as optical cavities and optical parametric amplifiers, as well as quantum networks composed of such devices. Results are derived on subsystem truncation of such systems and it is shown that this truncation preserves the physical realizability property of linear quantum stochastic systems. It is also shown that the property of complete passivity of linear quantum stochastic systems is preserved under subsystem truncation. A necessary and sufficient condition for the existence of a balanced realization of a linear quantum stochastic system under symplectic transformations is derived. Such a condition turns out to be very restrictive and will not be satisfied by generic linear quantum stochastic systems, thus necessary and sufficient conditions for relaxed notions of simultaneous diagonalization of the controllability and observability Gramians of linear quantum stochastic systems under symplectic transformations are also obtained. The notion of a quasi-balanced realization is introduced and it is shown that all asymptotically stable completely passive linear quantum stochastic systems have a quasi-balanced realization. Moreover, an explicit bound for the subsystem truncation error on a quasi-balanceable linear quantum stochastic system is provided. The results are applied in an example of model reduction in the context of low-pass optical filtering of coherent light using a network of optical cavities.
\end{abstract}
\begin{keywords}
Linear quantum stochastic systems, model reduction, symplectic transformations, quantum optical systems, open Markov quantum systems
\end{keywords}
\section{Introduction}
\label{sec:intro}
The class of linear quantum stochastic systems \cite{BE08,JNP06,NJD08,GJN10} represents multiple distinct open quantum harmonic oscillators that are coupled linearly to one another and also to external Gaussian fields, e.g., coherent laser beams, and whose dynamics can be conveniently and completely summarized in the Heisenberg picture of quantum mechanics in terms of a quartet of matrices $A,B,C,D$, analogous to those used in modern control theory for linear systems. As such, they can be viewed as a quantum analogue of classical linear stochastic systems and are encountered in practice, for instance, as models for optical parametric amplifiers \cite[Chapters 7 and 10]{GZ04}. However, due to the constraints imposed by quantum mechanics, the matrices $A,B,C,D$ in a linear quantum stochastic system cannot be arbitrary, a restriction not encountered in the classical setting. In fact, as derived in \cite{JNP06} for the case where $D$ is of the form $D=[\begin{array}{cc} I & 0 \end{array}]$, with $I$ denoting an identity matrix, it is required that $A$ and $B$ satisfy a certain non-linear equality constraint, and $B$ and $C$ satisfy a linear equality constraint. These constraints on the $A,B,C,D$ matrices are referred to as {\em physical realizability} constraints \cite{JNP06}. Due to the analogy with classical linear stochastic systems, linear quantum stochastic systems provide a particularly tractable class of quantum systems with which to discover and develop fundamental ideas and principles of quantum control, just as classical linear systems played a fundamental role in the early development of systems and control theory.
In control problems involving linear quantum stochastic systems such as $H^{\infty}$ control \cite{JNP06} and LQG control \cite{NJP07b}, the important feature of the controller is its transfer function rather than the system matrices $(A,B,C,D)$. The controller may have many degrees of freedom, which may make it challenging to realize. Therefore it is of interest to have a method to construct an approximate controller with a smaller number of degrees of freedom whose transfer function approximates that of the full controller. In systems and control theory, this procedure is known as model reduction and is an important part of a controller design process, see, e.g., \cite{ZDG95}.
Model reduction methods for linear quantum stochastic systems have been limited to singular perturbation techniques \cite{BvHS07,GNW10,Pet10} and an eigenvalue truncation technique that is restricted to a certain sub-class of completely passive linear quantum stochastic systems \cite{Pet12}. These methods cannot be applied to general linear quantum stochastic systems and the current paper contributes towards filling this important gap by developing new results on subsystem truncation for general linear quantum stochastic systems. Moreover, the paper studies the feasibility of performing model reduction by balanced truncation for linear quantum stochastic systems and derives a necessary and sufficient condition under which it can be carried out. It is shown that balanced truncation is {\em not} possible for generic linear quantum stochastic systems. Therefore, this paper also considers other realizations in which the system controllability and observability Gramians are simultaneously diagonal, and introduces one such realization which is referred to as a quasi-balanced realization. The results are illustrated in an example that demonstrates an instance where quasi-balanced truncation can be applied.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Notation}
We will use the following notation: $\imath=\sqrt{-1}$, $^*$ denotes the adjoint of a linear operator
as well as the conjugate of a complex number. If $A=[a_{jk}]$ then $A^{\#}=[a_{jk}^*]$, and $A^{\dag}=(A^{\#})^{\top}$, where $(\cdot)^{\top}$ denotes matrix transposition. $\Re\{A\}=(A+A^{\#})/2$ and $\Im\{A\}=\frac{1}{2\imath}(A-A^{\#})$.
We denote the identity matrix by $I$ whenever its size can be
inferred from context and use $I_{n}$ to denote an $n \times n$
identity matrix. Similarly, $0_{m \times n}$ denotes a $m \times n$ matrix with zero
entries but drop the subscript when its dimension can be determined from context. We use
${\rm diag}(M_1,M_2,\ldots,M_n)$ to denote a block diagonal matrix with
square matrices $M_1,M_2,\ldots,M_n$ on its diagonal, and ${\rm diag}_{n}(M)$
denotes a block diagonal matrix with the
square matrix $M$ appearing on its diagonal blocks $n$ times. Also, we
will let $\mathbb{J}=\left[\begin{array}{rr}0 & 1\\-1&0\end{array}\right]$ and $\mathbb{J}_n=I_{n} \otimes \mathbb{J}={\rm diag}_n(\mathbb{J})$.
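The key algebraic facts behind these definitions can be checked numerically. The short sketch below (an illustration of ours, not part of the development) verifies that $\mathbb{J}_n$ is skew-symmetric with $\mathbb{J}_n^2=-I$, and that both a planar rotation and an ideal squeezing matrix ${\rm diag}(r,1/r)$ are symplectic, the latter without being orthogonal.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def Jmat(n):
    """Block-diagonal J_n: n copies of [[0, 1], [-1, 0]]."""
    J = [[0.0] * (2 * n) for _ in range(2 * n)]
    for k in range(n):
        J[2 * k][2 * k + 1], J[2 * k + 1][2 * k] = 1.0, -1.0
    return J

def is_symplectic(D, m):
    """Check D J_m D^T = J_{rows(D)/2} to numerical tolerance."""
    lhs = matmul(matmul(D, Jmat(m)), transpose(D))
    rhs = Jmat(len(D) // 2)
    return all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
               for i in range(len(D)) for j in range(len(D)))

JJ = matmul(Jmat(2), Jmat(2))  # equals -I_4
theta, r = 0.3, 2.0
rot = [[math.cos(theta), math.sin(theta)],
       [-math.sin(theta), math.cos(theta)]]   # orthogonal and symplectic
squeeze = [[r, 0.0], [0.0, 1.0 / r]]          # symplectic but not orthogonal
print(is_symplectic(rot, 1), is_symplectic(squeeze, 1))  # → True True
```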
\subsection{The class of linear quantum stochastic systems}
\label{sec:linear-summary}
Let $x=(q_1,p_1,q_2,p_2,\ldots,q_n,p_n)^T$ denote a vector of the canonical position and
momentum operators of a {\em many degrees of freedom quantum
harmonic oscillator} satisfying the canonical commutation
relations (CCR) $xx^T-(xx^T)^T=2\imath \mathbb{J}_n$. A {\em linear quantum stochastic
system} \cite{JNP06,NJP07b,NJD08} $G$ is a quantum system defined by three {\em parameters}:
(i) A quadratic Hamiltonian $H=\frac{1}{2} x^T R x$ with $R=R^T \in
\mathbb{R}^{2n \times 2n}$, (ii) a coupling operator $L=Kx$, where $K$ is
an $m \times 2n$ complex matrix, and (iii) a unitary $m \times m$
scattering matrix $S$. For shorthand, we write $G=(S,L,H)$ or $G=(S,Kx,\frac{1}{2} x^TRx)$. The
time evolution $x(t)$ of $x$ in the Heisenberg picture ($ t
\geq 0$) is given by the quantum stochastic differential equation
(QSDE) (see \cite{BE08,JNP06,NJD08}):
\begin{align}
dx(t) &= A_0x(t)dt+ B_0\left[\small \begin{array}{c} d\mathcal{A}(t)
\\ d\mathcal{A}(t)^{\#} \end{array} \normalsize \right];
x(0)=x, \notag\\
dY(t) &= C_0 x(t)dt+ D_0 d\mathcal{A}(t), \label{eq:qsde-out}
\end{align}
with $A_0=2\mathbb{J}_n(R+\Im\{K^{\dag}K\})$, $B_0=2\imath \mathbb{J}_n [\begin{array}{cc}
-K^{\dag}S & K^TS^{\#}\end{array}]$,
$C_0=K$, and $D_0=S$. Here $Y(t)=(Y_1(t),\ldots,Y_m(t))^{\top}$ is a vector of
continuous-mode bosonic {\em output fields} that results from the interaction of the quantum
harmonic oscillators and the incoming continuous-mode bosonic quantum fields in the $m$-dimensional vector
$\mathcal{A}(t)$. Note that the dynamics of $x(t)$ is linear, and
$Y(t)$ depends linearly on $x(s)$, $0 \leq s \leq t$. We refer to $n$ as the {\em
degrees of freedom} of the system or, more simply, the {\em degree} of the system.
Following \cite{JNP06}, it will be convenient to write the dynamics in quadrature form as
\begin{align}
dx(t)&=Ax(t)dt+Bdw(t);\, x(0)=x. \nonumber\\
dy(t)&= C x(t)dt+ D dw(t), \label{eq:qsde-out-quad}
\end{align}
with
\begin{eqnarray*}
w(t)&=&2(\Re\{\mathcal{A}_1(t)\},\Im\{\mathcal{A}_1(t)\},\Re\{\mathcal{A}_2(t)\},\Im\{\mathcal{A}_2(t)\},\ldots,\Re\{\mathcal{A}_m(t)\},\Im\{\mathcal{A}_m(t)\})^{\top}; \\
y(t)&=&2(\Re\{Y_1(t)\},\Im\{Y_1(t)\},\Re\{Y_2(t)\},\Im\{Y_2(t)\}, \ldots,\Re\{Y_m(t)\},\Im\{Y_m(t)\})^{\top}.
\end{eqnarray*}
The real matrices $A,B,C,D$ are in a one-to-one correspondence
with $A_0,B_0,C_0,D_0$. Also, $w(t)$ is taken to be in a vacuum state where it
satisfies the It\^{o} relationship $dw(t)dw(t)^{\top} = (I+\imath \mathbb{J}_m)dt$; see \cite{JNP06}. Note that in this form it follows that $D$ is a real unitary symplectic matrix. That is, it is both unitary (i.e., $DD^{\top}=D^{\top} D=I$) and symplectic (a real $2m \times 2m$ matrix $D$ is symplectic if $D \mathbb{J}_m D^{\top} =\mathbb{J}_m$). However, in the most general case, $D$ can be generalized to a symplectic matrix that represents a quantum network that includes ideal squeezing devices acting on the incoming field $w(t)$ before interacting with the system \cite{GJN10,NJD08}. The matrices $A$, $B$, $C$, $D$ of a linear quantum stochastic system cannot be arbitrary and are not independent of one another. In fact, for the system to be physically realizable \cite{JNP06,NJP07b,NJD08}, meaning it represents a meaningful physical system, they must satisfy the constraints (see \cite{WNZJ12,JNP06,NJP07b,NJD08,GJN10})
\begin{eqnarray}
&&A\mathbb{J}_n + \mathbb{J}_n A^{\top} + B\mathbb{J}_mB^{\top}=0, \label{eq:pr-1}\\
&& \mathbb{J}_n C^{\top} +B\mathbb{J}_mD^{\top}=0, \label{eq:pr-2}\\
&&D \mathbb{J}_m D^{\top} = \mathbb{J}_{m}. \label{eq:pr-3}
\end{eqnarray}
The above are the physical realizability constraints for systems for which the (even) dimension of the output $y(t)$ is the same as that of the input $w(t)$, i.e., $n_y=2m$. However, for the purposes of the model reduction theory to be developed in this paper, it is pertinent to consider the case where $y(t)$ has an even dimension possibly less than that of $w(t)$. The reason for this and the physical realizability constraints for systems with fewer outputs than inputs are given in the next section.
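As a concrete check of the constraints (\ref{eq:pr-1})-(\ref{eq:pr-3}), consider the standard single-mode optical cavity with decay rate $\kappa$: with $R=0$ and $L=\sqrt{\kappa}\,(q+\imath p)/2$, the quadrature-form matrices are $A=-(\kappa/2)I_2$, $B=-\sqrt{\kappa}\,I_2$, $C=\sqrt{\kappa}\,I_2$, $D=I_2$. This worked example is ours, added for illustration; a minimal numerical verification:

```python
import math

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tp(A):
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def iszero(A, tol=1e-12):
    return all(abs(x) < tol for row in A for x in row)

J = [[0.0, 1.0], [-1.0, 0.0]]   # J_1 (one mode, one field)
kappa = 2.0
s = math.sqrt(kappa)
A = [[-kappa / 2, 0.0], [0.0, -kappa / 2]]
B = [[-s, 0.0], [0.0, -s]]
C = [[s, 0.0], [0.0, s]]
D = [[1.0, 0.0], [0.0, 1.0]]

pr1 = madd(madd(mm(A, J), mm(J, tp(A))), mm(mm(B, J), tp(B)))  # eq. (pr-1)
pr2 = madd(mm(J, tp(C)), mm(mm(B, J), tp(D)))                  # eq. (pr-2)
DJD = mm(mm(D, J), tp(D))                                      # eq. (pr-3)
pr3 = [[DJD[i][j] - J[i][j] for j in range(2)] for i in range(2)]
print(iszero(pr1), iszero(pr2), iszero(pr3))  # → True True True
```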
Following \cite{GJ07}, we denote a linear quantum stochastic system having an equal number of inputs and outputs, and Hamiltonian $H$, coupling vector $L$, and scattering matrix $S$, simply as $G=(S, L,H)$ or $G=(S,Kx,\frac{1}{2} x^{\top}Rx)$. We also recall the {\em concatenation product} $\boxplus$ and {\em series product} $\triangleleft$ for open Markov quantum systems \cite{GJ07} defined by $G_1 \boxplus G_2=({\rm diag}(S_1,S_2),(L_1^{\top},L_2^{\top})^{\top},H_1+H_2)$, and $G_2 \triangleleft G_1=(S_2S_1, L_2+S_2 L_1,H_1+H_2+\Im\{L_2^{\dag}S_2 L_1\})$. Since both products are associative, the products $G_1
\boxplus G_2 \boxplus \ldots \boxplus G_n$ and $G_n \triangleleft G_{n-1} \triangleleft \ldots \triangleleft G_1$ are unambiguously defined.
\subsection{Linear quantum stochastic systems with fewer outputs than inputs}
\label{sec:pr-less-out}
In general one may not be interested in all outputs of the system but only in a subset of them, see, e.g., \cite{JNP06}. That is, one is often only interested in certain pairs of the output field quadratures in $y(t)$. Thus, in the most general scenario, $y(t)$ can have an even dimension $n_y <2m$ and $D$ is a $n_y \times 2m$ matrix satisfying $D \mathbb{J}_m D^{\top}=\mathbb{J}_{n_y/2}$. Thus, more generally we can consider outputs $y(t)$ of form
\begin{equation}
y(t) = C x(t) + D w(t), \label{eq:y-e}
\end{equation}
with $C \in \mathbb{R}^{n_y \times 2n}$, $D \in \mathbb{R}^{n_y \times 2m}$ with $n_y$ even and $n_y < 2m$. In this case, generalizing the notion developed in \cite{JNP06}, we say that a linear quantum stochastic system with output (\ref{eq:y-e}) is physically realizable if and only if there exist matrices $C' \in \mathbb{R}^{(2m-n_y) \times 2n}$ and $D' \in \mathbb{R}^{(2m-n_y) \times 2m}$ such that the system
\begin{align}
dx(t)&=Ax(t)dt+Bdw(t);\, x(0)=x. \nonumber\\
dy'(t)&= \left[\begin{array}{c} C \\ C' \end{array}\right]x(t)dt+ \left[\begin{array}{c} D \\ D' \end{array}\right]dw(t), \label{eq:qsde-out-quad-e}
\end{align}
is a physically realizable linear quantum stochastic system with the same number of inputs and outputs. That is, the matrices $A$, $B$, $[\begin{array}{cc} C^{\top} & (C')^{\top} \end{array}]^{\top}$, and $[\begin{array}{cc} D^{\top} & (D')^{\top} \end{array}]^{\top}$ satisfy the constraints (\ref{eq:pr-1})-(\ref{eq:pr-3}) when $C$ and $D$ in (\ref{eq:pr-2}) and (\ref{eq:pr-3}) are replaced by $[\begin{array}{cc} C^{\top} & (C')^{\top} \end{array}]^{\top}$ and $[\begin{array}{cc} D^{\top} & (D')^{\top} \end{array}]^{\top}$, respectively. A necessary and sufficient condition for physical realizability of general linear quantum stochastic systems is the following \cite{WNZJ12}:
\begin{theorem}
A linear quantum stochastic system with fewer outputs than inputs is physically realizable if and only if
\begin{eqnarray}
&&A\mathbb{J}_n + \mathbb{J}_n A^{\top} + B\mathbb{J}_mB^{\top}=0, \label{eq:pr-1e}\\
&& \mathbb{J}_n C^{\top} +B\mathbb{J}_mD^{\top}=0, \label{eq:pr-2e}\\
&&D \mathbb{J}_m D^{\top} = \mathbb{J}_{n_y/2}. \label{eq:pr-3e}
\end{eqnarray}
\end{theorem}
A proof of this theorem had to be omitted in \cite{WNZJ12} due to page limitations, so a short independent proof is provided below.
\begin{proof}
The necessity of (\ref{eq:pr-1e})-(\ref{eq:pr-3e}) follows immediately from the definition of a physically realizable system with fewer outputs than inputs (as given previously) and from the physical realizability constraints for systems with the same number of inputs and outputs. As for the sufficiency, first note that for $D$ satisfying (\ref{eq:pr-3e}), it follows from an analogous construction to that given in the proof of \cite[Lemma 6]{Nurd11} that a matrix $D' \in \mathbb{R}^{(2m-n_y)\times 2m}$ can be constructed such that the matrix $\tilde D = [\begin{array}{cc} D^{\top} & (D')^{\top} \end{array}]^{\top}$ is symplectic. Now, define $C' = D' \mathbb{J}_m B^{\top} \mathbb{J}_n$, so that $\mathbb{J}_n (C')^{\top} + B\mathbb{J}_m (D')^{\top}=0$, and set $\tilde C=[\begin{array}{cc} C^{\top} & (C')^{\top} \end{array}]^{\top}$. Consider now a system $\tilde G$ with an equal number of inputs and outputs, and system matrices $(A,B,\tilde C, \tilde D)$. From the physical realizability conditions (\ref{eq:pr-1e})-(\ref{eq:pr-3e}) and the definition of $C'$ and $\tilde C$, it follows that $\tilde G$ satisfies (\ref{eq:pr-1})-(\ref{eq:pr-3}) and is therefore physically realizable with the same number of inputs and outputs. It now follows from the definition that the original system, with output $y(t)$ of smaller dimension than $w(t)$, is physically realizable. This completes the proof. $\hfill \Box$
\end{proof}
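The completion step in the proof can be illustrated on a concrete example of our own devising (not taken from the text): a single cavity mode ($n=1$) coupled to two input fields ($m=2$) with rates $\kappa_1,\kappa_2$, of which only the first output field is observed, so $n_y=2<2m=4$. The sketch completes $D=[\begin{array}{cc}I_2 & 0\end{array}]$ to a symplectic $\tilde D$ and chooses $C'$ to solve $\mathbb{J}_n (C')^{\top}+B\mathbb{J}_m(D')^{\top}=0$, then verifies that the augmented system satisfies the physical realizability constraints with equal numbers of inputs and outputs.

```python
import math

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tp(A):
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def iszero(A, tol=1e-12):
    return all(abs(x) < tol for row in A for x in row)

def Jmat(n):
    J = [[0.0] * (2 * n) for _ in range(2 * n)]
    for k in range(n):
        J[2 * k][2 * k + 1], J[2 * k + 1][2 * k] = 1.0, -1.0
    return J

# One cavity mode (n=1) coupled to two fields (m=2); only the first
# output field is observed (n_y = 2 < 2m = 4).
k1, k2 = 1.0, 3.0
s1, s2 = math.sqrt(k1), math.sqrt(k2)
A = [[-(k1 + k2) / 2, 0.0], [0.0, -(k1 + k2) / 2]]
B = [[-s1, 0.0, -s2, 0.0], [0.0, -s1, 0.0, -s2]]
C = [[s1, 0.0], [0.0, s1]]
D = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]

# Completion: D' makes [D; D'] symplectic, and C' solves
# J_n C'^T + B J_m D'^T = 0 for the augmented system.
Dp = [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
Cp = mm(mm(mm(Dp, Jmat(2)), tp(B)), Jmat(1))
Ct, Dt = C + Cp, D + Dp   # stack rows of the augmented output matrices

pr1 = madd(madd(mm(A, Jmat(1)), mm(Jmat(1), tp(A))), mm(mm(B, Jmat(2)), tp(B)))
pr2 = madd(mm(Jmat(1), tp(Ct)), mm(mm(B, Jmat(2)), tp(Dt)))
pr3 = msub(mm(mm(Dt, Jmat(2)), tp(Dt)), Jmat(2))
print(iszero(pr1), iszero(pr2), iszero(pr3))  # → True True True
```

Here the construction returns $C'=\sqrt{\kappa_2}\,I_2$, which is exactly the coupling to the second (unobserved) output field, as one would expect physically.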
\section{Model reduction of linear quantum stochastic systems by subsystem truncation}
\label{sec:subsys-truncation}
\subsection{Preservation of quantum structural constraints in subsystem truncation}
\label{sec:pr-preserve-truncation}
In this section we show that physically realizable linear quantum stochastic systems possess the convenient property that any subsystem defined by a collection of arbitrary pairs $(q_j,p_j)$ in $x$ and obtained via a simple truncation procedure inherits the physical realizability property.
Let $\pi$ be any permutation map on $\{1,2,\ldots,n\}$, i.e., a bijective map of $\{1,2,\ldots,n\}$ to itself. Let $x_{\pi}=(q_{\pi(1)},
p_{\pi(1)},q_{\pi(2)},p_{\pi(2)},\ldots,q_{\pi(n)},p_{\pi(n)})^{\top}$, and $P$ be the permutation matrix representing this permutation of the elements of $x$, i.e., $Px=x_{\pi}$. Then the permuted system $G_{\pi}$ will have system matrices
$(A_{\pi},B_{\pi},C_{\pi},D_{\pi})$ with $A_{\pi}=PA P^{\top}$, $B_{\pi}=PB$, $C_{\pi}=CP^{\top}$, $D_{\pi}=D$. Since $G_{\pi}$ involves a mere rearrangement of the degrees of freedom $x$ of $G$, it represents the same physically realizable system as $G$, up to a reordering of the components of $x$. Thus the system matrices $(A_{\pi},B_{\pi},C_{\pi},D_{\pi})$ of $G_{\pi}$ trivially satisfy the physical realizability constraints (\ref{eq:pr-1})-(\ref{eq:pr-3}). Partition $x_{\pi}$ as $x_{\pi}=(x_{\pi,1}^{\top},x_{\pi,2}^{\top})^{\top}$ where $x_{\pi,1}=(q_{\pi(1)},p_{\pi(1)},\ldots,q_{\pi(r)},p_{\pi(r)})^{\top}$ and $x_{\pi,2}=(q_{\pi(r+1)},p_{\pi(r+1)},\ldots,q_{\pi(n)},p_{\pi(n)})^{\top}$, with $r<n$. Partition the matrices $A_{\pi}$, $B_{\pi}$, and $C_{\pi}$ compatibly with the partitioning of $x_{\pi}$ into $x_{\pi,1},x_{\pi,2}$. That is,
\begin{eqnarray}
A_{\pi}&=&\left[\begin{array}{cc} A_{\pi,11} & A_{\pi,12} \\ A_{\pi,21} & A_{\pi,22} \end{array} \right],\; B_{\pi} =\left[\begin{array}{c} B_{\pi,1} \\ B_{\pi,2} \end{array} \right], \label{eq:mat-part-1}\\
C_{\pi}&=&\left[\begin{array}{cc} C_{\pi,1} & C_{\pi,2} \end{array}\right]. \label{eq:mat-part-2}
\end{eqnarray}
From the fact that $A_{\pi}$, $B_{\pi}$, $C_{\pi}$, and $D_{\pi}$ satisfy the physical realizability constraints (\ref{eq:pr-1e})-(\ref{eq:pr-3e}) we immediately obtain for $j=1,2$:
\begin{eqnarray}
&&A_{\pi,jj}\Theta_{j}+\Theta_{j}A_{\pi,jj}^{\top} + B_{\pi,j}\mathbb{J}_m B_{\pi,j}^{\top} =0, \label{eq:part-AB}\\
&& \Theta_j C_{\pi,j}^{\top} +B_{\pi,j} \mathbb{J}_m D_{\pi}^{\top}=0,\label{eq:part-BC} \\
&& D_{\pi} \mathbb{J}_m D_{\pi}^{\top} = \mathbb{J}_{n_y/2}, \label{eq:part-D}
\end{eqnarray}
where $\Theta_1 =\mathbb{J}_r$ and $\Theta_2=\mathbb{J}_{n-r}$. Therefore, the subsystems $G_{\pi,j}=(A_{\pi,jj},B_{\pi,j},C_{\pi,j},D_{\pi})$ with $x_{\pi,j}$ as canonical internal variable are physically realizable systems in their own right for $j=1,2$. Thus, we can state the following theorem.
\begin{theorem}
\label{thm:pr-sub-systems} For any given permutation map $\pi$ of the indices $\{1,2,\ldots,n\}$ and any partitioning of $x_{\pi}=(q_{\pi(1)},
p_{\pi(1)},q_{\pi(2)},p_{\pi(2)},\ldots,q_{\pi(n)},p_{\pi(n)})^{\top}$ as $x_{\pi}=(x_{\pi,1}^{\top},x_{\pi,2}^{\top})^{\top}$, with $x_{\pi,1}=(q_{\pi(1)},p_{\pi(1)},\ldots,q_{\pi(r)},p_{\pi(r)})^{\top}$ and $x_{\pi,2}=(q_{\pi(r+1)},p_{\pi(r+1)},\ldots,q_{\pi(n)},p_{\pi(n)})^{\top}$, with $r<n$, the subsystems $G_{\pi,j}=(A_{\pi,jj},B_{\pi,j},C_{\pi,j},D_{\pi})$ with canonical position and momentum operators in $x_{\pi,j}$ are physically realizable for $j=1,2$.
\end{theorem}
From a model reduction perspective, the theorem says that if one truncates a subsystem $x_{\pi,j}$ according to any partitioning of $x_{\pi}$ in which each partition $x_{\pi,1}$ and $x_{\pi,2}$ contain distinct pairs of conjugate position and momentum quadratures, then the remaining subsystem after the truncation (i.e., $x_{\pi,1}$ if $x_{\pi,2}$ is truncated, and $x_{\pi,2}$ if $x_{\pi,1}$ is truncated) is automatically guaranteed to be a physically realizable linear quantum stochastic system. This is rather fortunate as the physical realizability constraints are quite formidable to deal with (see, e.g., \cite{NJP07b} in the context of coherent-feedback LQG controller design) and at a glance one would initially expect that physically realizable reduced models would not be easily obtained.
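Theorem~\ref{thm:pr-sub-systems} can be checked numerically on a simple example of our own: a cascade of two single-mode cavities ($n=2$, $m=1$) is physically realizable, and so is the one-mode subsystem left after truncating either oscillator.

```python
import math

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tp(A):
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def Jmat(n):
    J = [[0.0] * (2 * n) for _ in range(2 * n)]
    for k in range(n):
        J[2 * k][2 * k + 1], J[2 * k + 1][2 * k] = 1.0, -1.0
    return J

def is_physically_realizable(A, B, C, D, n, m, tol=1e-12):
    """Check equations (pr-1)-(pr-3) to numerical tolerance."""
    ny2 = len(D) // 2
    pr1 = madd(madd(mm(A, Jmat(n)), mm(Jmat(n), tp(A))),
               mm(mm(B, Jmat(m)), tp(B)))
    pr2 = madd(mm(Jmat(n), tp(C)), mm(mm(B, Jmat(m)), tp(D)))
    DJD = mm(mm(D, Jmat(m)), tp(D))
    pr3 = [[DJD[i][j] - Jmat(ny2)[i][j] for j in range(2 * ny2)]
           for i in range(2 * ny2)]
    return all(abs(x) < tol for M in (pr1, pr2, pr3) for row in M for x in row)

# Cascade of two single-mode cavities: n = 2 degrees of freedom, m = 1 field.
k1, k2 = 1.0, 2.0
s1, s2, s12 = math.sqrt(k1), math.sqrt(k2), math.sqrt(k1 * k2)
A = [[-k1 / 2, 0.0, 0.0, 0.0],
     [0.0, -k1 / 2, 0.0, 0.0],
     [-s12, 0.0, -k2 / 2, 0.0],
     [0.0, -s12, 0.0, -k2 / 2]]
B = [[-s1, 0.0], [0.0, -s1], [-s2, 0.0], [0.0, -s2]]
C = [[s1, 0.0, s2, 0.0], [0.0, s1, 0.0, s2]]
D = [[1.0, 0.0], [0.0, 1.0]]
full = is_physically_realizable(A, B, C, D, 2, 1)

# Truncate the second oscillator, keeping x_{pi,1} = (q_1, p_1):
A11 = [row[:2] for row in A[:2]]
B1 = B[:2]
C1 = [row[:2] for row in C]
trunc = is_physically_realizable(A11, B1, C1, D, 1, 1)
print(full, trunc)  # → True True
```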
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.5]{sysdecom}
\caption{Cascade realization of $G_{\pi}$ with direct interaction Hamiltonians $H^d_{\pi(j)\pi(k)}$ between sub-systems $G_{\pi(j)}$ and $G_{\pi(k)}$ for $j,k=1,2,\ldots,n$, following \cite{NJD08}. Illustration is for $n>3$.}
\label{fig:sysdecom}
\end{figure}
An alternative proof of the theorem proceeds via the main network synthesis result of \cite{NJD08} -- this viewpoint of Theorem \ref{thm:pr-sub-systems} will be especially useful in the next section. It is shown in \cite[Theorem 5.1]{NJD08} that any (physically realizable) linear quantum stochastic system of degree $n$ such as $G_{\pi}$ can be decomposed into a cascade or series connection of $n$ one degree of freedom linear quantum stochastic systems $G_{\pi(j)}$, $j=1,2,\ldots,n$, together with a direct quadratic coupling Hamiltonian between (at most) every pair of the $G_{\pi(j)}$'s; see Fig.~\ref{fig:sysdecom}. Here $G_{\pi(j)}$ is a one degree of freedom linear quantum stochastic system with $x_{\pi(j)}=(q_{\pi(j)},p_{\pi(j)})^{\top}$ as its canonical position and momentum operators. In the figure, $H^d_{\pi(j)\pi(k)}$ indicates the quadratic coupling Hamiltonian between $G_{\pi(j)}$ and $G_{\pi(k)}$. The decomposition shows that if we
\begin{enumerate}
\item remove the $n-r$ one degree of freedom subsystems $G_{\pi(r+1)}$, $G_{\pi(r+2)}$, $\ldots$, $G_{\pi(n)}$ from this cascade connection,
\item remove all Hamiltonian coupling terms associated with each of the subsystems that have been removed,
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.5]{truncsubsysdecom}
\caption{Cascade realization of $G_{\pi,1}$ with direct interaction Hamiltonians $H^d_{\pi(j)\pi(k)}$ between sub-systems $G_{\pi(j)}$ and $G_{\pi(k)}$ for $j,k=1,2,\ldots,r$, following \cite{NJD08}. Illustration is for $r>3$.}
\label{fig:truncsubsysdecom}
\end{figure}
\item reconnect the remaining $r$ subsystems in a cascade connection in the same order in which they appeared in the original cascade connection, and keeping the coupling Hamiltonians between each pair of remaining one degree of freedom sub-systems, as shown in Fig.~ \ref{fig:truncsubsysdecom},
\end{enumerate}
we recover a physically realizable linear quantum stochastic system of degree $r$ as constructed in Theorem \ref{thm:pr-sub-systems}.
The theorem may also be applied to the case where we allow certain transformations of $x(t)$, namely symplectic transformations that preserve the canonical commutation relations (recall from Section \ref{sec:linear-summary} that a $2n \times 2n$ matrix $V$ is symplectic if $V \mathbb{J}_n V^{\top}=\mathbb{J}_n$; if $V$ is symplectic then so are $V^{\top}$ and $V^{-1}$). That is, we can transform the internal variable from $x(t)$ to $z(t)=Vx(t)$, with $V$ symplectic, so that $z(t)z(t)^{\top}-(z(t)z(t)^{\top})^{\top} = V(x(t)x(t)^{\top}-(x(t)x(t)^{\top})^{\top})V^{\top} =2\imath V \mathbb{J}_n V^{\top} =2 \imath \mathbb{J}_n$. Thus, $z(t)$ satisfies the same canonical commutation relations as $x(t)$. The dynamics of a system with $z(t)$ as the internal variable are then given by
\begin{eqnarray*}
dz(t)&=&VAV^{-1} z(t) dt + VB dw(t),\; z(0)=z_0=Vx_0, \\
dy(t)&=& CV^{-1} z(t) dt + D dw(t),
\end{eqnarray*}
and again represents a physically realizable system. However, strictly speaking, a linear quantum stochastic system with $x(t)$ as the internal variable and another system with $z(t)$ as the internal variable represent physically inequivalent quantum mechanical systems, although they have the same transfer function given by $C(sI-A)^{-1}B+D$. This physical subtlety, not encountered in the classical setting when similarity transformations are applied, has been discussed in some detail in \cite{Nurd10b}. In particular, the two systems do not have the same $S,L,H$ parameters.
If we are only interested in the steady-state input-output evolution of $y(t)$ in relation to the driving noise $w(t)$ as $t \rightarrow \infty$ (assuming that the matrix $A$ is Hurwitz) then how the canonical position and momentum operators in $x(t)$ evolve is inconsequential. Thus, in this case we can allow a similarity transformation of the matrices $(A,B,C,D)$ to $(VAV^{-1}, VB, CV^{-1},D)$ with a symplectic $V$; see \cite{Nurd10b}.
The advantage of such a transformation when we are mainly interested in steady-state input-output phenomena is that the transformed system matrices may be of a more convenient form for analysis and computation, possibly allowing simplified formulas. Since $G'=(VAV^{-1}, VB, CV^{-1},D)$ is also a physically realizable system we can again apply Theorem \ref{thm:pr-sub-systems} to truncate certain sub-systems of $G'$.
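The invariance of the physical realizability constraint under symplectic similarity can likewise be checked numerically. In the sketch below (illustrative only), the symplectic $V$ is generated as the exponential of a Hamiltonian matrix $\mathbb{J}_n H$ with $H$ symmetric, which is one convenient way to sample the symplectic group:

```python
import numpy as np
from scipy.linalg import expm

def Jmat(n):
    return np.kron(np.eye(n), np.array([[0., 1.], [-1., 0.]]))

rng = np.random.default_rng(1)
n, m = 3, 2
Jn, Jm = Jmat(n), Jmat(m)

# A pair (A, B) satisfying the realizability constraint, as in the earlier sketch.
B = rng.standard_normal((2 * n, 2 * m))
S = rng.standard_normal((2 * n, 2 * n)); S = S + S.T
A = -(S - 0.5 * B @ Jm @ B.T) @ Jn

# A random symplectic V: exp(Jn H) with H symmetric lies in Sp(2n, R).
H = rng.standard_normal((2 * n, 2 * n)); H = 0.1 * (H + H.T)
V = expm(Jn @ H)
assert np.allclose(V @ Jn @ V.T, Jn)    # V is symplectic

# The transformed realization (V A V^{-1}, V B) is again physically realizable.
Av, Bv = V @ A @ np.linalg.inv(V), V @ B
assert np.allclose(Av @ Jn + Jn @ Av.T + Bv @ Jm @ Bv.T, 0, atol=1e-9)
```

Algebraically, the residual equals $V(A\mathbb{J}_n + \mathbb{J}_n A^{\top} + B\mathbb{J}_m B^{\top})V^{\top}$ once $V\mathbb{J}_nV^{\top}=\mathbb{J}_n$ is used, so it vanishes identically.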
\subsection{Application to completely passive linear quantum stochastic systems}
\label{sec:cp-systems}
We now specialize to a class that will be referred to as {\em completely passive} linear quantum stochastic systems \cite{GJN10,Pet10,Nurd10a,Nurd10b}.
Following \cite{Nurd10a}, a physically realizable linear quantum stochastic system (\ref{eq:qsde-out-quad}) with an equal number of inputs and outputs is said to be completely passive if (i) $H$ can be written as $H = \frac{1}{2}a^{\dag} \tilde R a +c$, (ii) $L$ can be written as $L=\tilde Ka$ with $a=\frac{1}{2}(q_1+ \imath p_1,q_2+\imath p_2,\ldots,q_n + \imath p_n)^{\top}$ for some complex Hermitian matrix $\tilde R \in \mathbb{C}^{n \times n}$, a real constant $c$, and some $\tilde K \in \mathbb{C}^{m \times n}$, and (iii) $D$ is unitary symplectic. On the other hand, if the system is of the form (\ref{eq:qsde-out-quad-e}) with fewer outputs than inputs, besides the same requirements (i) and (ii) of $H$ and $L$, for complete passivity we require that there exists a real matrix $E \in \mathbb{R}^{(2m-n_y)\times 2m}$ such that the matrix $\tilde D = [\begin{array}{cc} D^{\top} & E^{\top}\end{array}]^{\top}$ is unitary symplectic. Note that the latter systems are merely completely passive systems with an equal number of inputs and outputs in which certain pairs of output quadratures are ignored.
It has been shown in \cite{Nurd10a} that any completely passive system can be synthesized using purely passive devices, that is, devices that do not need an external source of quanta/energy. In quantum optics this means that they can be constructed using only optical cavities, beam splitters, and phase shifters. We now show that the property of complete passivity is also preserved under subsystem truncation. The proof is similar to that of \cite[Theorem 7]{Nurd10b}.
\begin{lemma}
\label{lem:cp-preservation} If $G$ is completely passive then so is the truncated system $G_{\pi,1}$ for any permutation $\pi$.
\end{lemma}
\begin{proof}
Since $G$ is completely passive, so is $G_{\pi}$ for any permutation $\pi$ because they represent the same physical system up to a permutation of the position and momentum operators. It suffices to consider completely passive systems with the same number of inputs and outputs, as any completely passive system with fewer outputs than inputs can be obtained from the former simply by disregarding pairs of output quadratures that are of no interest. To this end, assume that the system has an equal number of inputs and outputs and $S=I$ (i.e., $D=I$). Let $\tilde K=[\begin{array}{cccc} \tilde K_{1} & \tilde K_2 & \ldots & \tilde K_n\end{array}]$ and $\tilde R=[\tilde R_{jk}]_{j,k=1,2,\ldots,n}$, where $\tilde K_j \in \mathbb{C}^{m \times 1}$, and $\tilde R_{jk}$ are complex numbers with $\tilde R_{kj}=\tilde R_{jk}^*$.
Let $\tilde K_{\pi}=[\begin{array}{cccc} \tilde K_{\pi(1)} & \tilde K_{\pi(2)} & \ldots & \tilde K_{\pi(n)} \end{array}]$, $a_{\pi(j)}=\frac{1}{2}(q_{\pi(j)}+\imath p_{\pi(j)})$, $G_{\pi(j)}=(I,\tilde K_{\pi(j)} a_{\pi(j)}, \frac{1}{2}\tilde R_{jj}a_{\pi(j)}^* a_{\pi(j)}+\tilde R_{jj}/4)$, and $H^d_{\pi(k)\pi(j)}=\tilde R_{kj}a_{\pi(k)}^*a_{\pi(j)}+\tilde R_{kj}^*a_{\pi(k)}a_{\pi(j)}^*+\frac{\imath}{2}(\tilde K_{\pi(k)}^{\dag}\tilde K_{\pi(j)}a_{\pi(k)}^*a_{\pi(j)}-\tilde K_{\pi(j)}^{\dag}\tilde K_{\pi(k)}a_{\pi(k)}a_{\pi(j)}^*)$ for all $k>j$ and $j=1,2,\ldots,n$. Then by \cite[Theorem 5.1]{NJD08}, we have that $G_{\pi}=(G_{\pi(n)} \triangleleft \cdots \triangleleft G_{\pi(2)} \triangleleft G_{\pi(1)}) \boxplus (0,0,\sum_{j=1}^{n-1}\sum_{k=j+1}^n H^d_{\pi(k)\pi(j)})$ (recall the definition of the series product $\triangleleft$ and the concatenation product $\boxplus$ from Section \ref{sec:linear-summary}); see Fig.~\ref{fig:sysdecom}. Note that by construction all the $G_{\pi(j)}$'s are completely passive. Following the discussion in Sec. \ref{sec:pr-preserve-truncation}, we can write $G_{\pi,1}=(G_{\pi(r)} \triangleleft \cdots \triangleleft G_{\pi(2)} \triangleleft G_{\pi(1)}) \boxplus (0,0,\sum_{j=1}^{r-1}\sum_{k=j+1}^r H^d_{\pi(k)\pi(j)})$. Since $G_{\pi(r)} \triangleleft \cdots \triangleleft G_{\pi(2)} \triangleleft G_{\pi(1)}$ is by inspection completely passive, it is now apparent that $G_{\pi,1}$ is completely passive. Evidently this holds true for any permutation map $\pi$ since the choice of $\pi$ was arbitrary to begin with.
If $S$ is unitary but not equal to the identity matrix (this means that $D$ is a unitary symplectic matrix different from the identity matrix), then one simply inserts a static passive network that implements $S$ between the input fields and $G_{\pi(1)}$; see \cite[Section 3]{NJD08}. The same argument as above then goes through. $\hfill \Box$
\end{proof}
A truncation method has been proposed for a class of completely passive linear quantum stochastic systems in \cite{Pet12} based on an algorithm developed in \cite{Pet11}. This algorithm is not guaranteed to be applicable to all completely passive linear quantum stochastic systems, but only to a ``generic'' subclass. Theorem \ref{thm:pr-sub-systems} and Lemma \ref{lem:cp-preservation} of this paper show that a quantum structure preserving subsystem truncation method can be developed for the entire class of completely passive systems, with the guarantee that the truncation is also completely passive. The idea in \cite{Pet11}, later proved to hold for all completely passive linear quantum stochastic systems in \cite{Nurd10b}, is that if we allow symplectic similarity transformations, the transfer function of these systems can always be realized by a purely cascade connection of completely passive systems without the need for any direct interaction Hamiltonians $H^d_{jk}$ between any sub-systems $j$ and $k$. The model reduction strategy proposed in \cite{Pet12} is then to truncate some tail components in this cascade. Using the results of \cite{Nurd10b} and Theorem \ref{thm:pr-sub-systems} of this paper, a truncation strategy similar to that of \cite{Pet12} can thus be applied to all completely passive systems provided that $G_{\pi}$ and the truncated subsystem $G_{\pi,1}$ are both asymptotically stable.
\section{Co-diagonalizability of the controllability and observability Gramians and model reduction by quasi-balanced truncation}
\label{sec:qb-reduction}
In this section we will consider the question of when it is possible to have a balanced or an ``almost'' balanced realization of a linear quantum stochastic system under the restriction of similarity transformation by a symplectic matrix. That is, we will derive conditions under which there is a symplectic similarity transformation of the system matrices $A,B,C,D$ such that the transformed system has controllability Gramian $P$ and observability Gramian $Q$ that are diagonal. Then we say that the Gramians $P$ and $Q$ are {\em co-diagonalizable}, a notion made precise below. In the classical setting, if the system is minimal (i.e., it is controllable and observable) it is always possible not only to have the Gramians $P$ and $Q$ simultaneously diagonal but to make them diagonal and equal. The idea for model reduction by balanced truncation is to remove subsystems that are associated with the smallest positive diagonal entries of $P$ and $Q$; heuristically, these correspond to the system modes that are least controllable as well as least observable. As will be shown, the restriction to a symplectic transformation somewhat limits what is achievable with linear quantum stochastic systems. Nonetheless, in Theorem \ref{thm:bt-q-lin} of this section precise conditions are deduced under which a symplectic transformation exists such that the transformed system will have $P$ and $Q$ simultaneously diagonal (though not necessarily equal).
Consider a physically realizable $n$ degree of freedom linear quantum stochastic system (\ref{eq:qsde-out-quad}), so that the system matrices satisfy (\ref{eq:pr-1e})-(\ref{eq:pr-3e}), with $n_y$ possibly less than $2m$ (i.e., possibly fewer outputs than inputs). We have seen that similarity transformations for linear quantum stochastic systems are restricted to symplectic matrices $T$ to preserve physical realizability. We assume that the system matrix $A$ is Hurwitz (all its eigenvalues lie in the open left half plane). As for classical linear systems, we can define the controllability and observability matrices as
$$
[\begin{array}{ccccc} B & AB & A^2 B & \ldots & A^{2n-1}B \end{array}],
$$
and
$$
[\begin{array}{ccccc} C^{\top} & A^{\top}C^{\top} & (A^{\top})^2 C^{\top} & \ldots & (A^{\top})^{2n-1}C^{\top} \end{array}]^{\top},
$$
respectively. Since $A$ is Hurwitz, there exists a unique $0 \leq P = P^{\top} \in \mathbb{R}^{2n \times 2n}$ and $0 \leq Q=Q^{\top} \in \mathbb{R}^{2n \times 2n}$ satisfying the Lyapunov equations
\begin{eqnarray*}
&AP + P A^{\top} + B B^{\top} =0,\\
&A^{\top}Q +Q A + C^{\top} C =0,
\end{eqnarray*}
respectively, and, moreover, if the system is controllable (i.e., controllability matrix is full rank) and observable (i.e., observability matrix is full rank) then $P>0$ and $Q>0$; see, e.g., \cite{ZDG95}. Using standard terminology, the matrices $P$ and $Q$ are referred to as the controllability and observability Gramian of the system, respectively. The transfer function $G(s)$ of the system is defined as $G(s)=C(sI-A)^{-1}B+D$. In this section, we investigate a necessary and sufficient condition under which there is a {\em symplectic} matrix $T \in \mathbb{R}^{2n \times 2n}$ such that the transformed system with system matrices $(T A T^{-1}, TB, CT^{-1},D)$ has controllability and observability Gramians that are simultaneously diagonal. If there exists such a $T$ then we say that the Gramians $P$ and $Q$ are {\em co-diagonalizable}. A more convenient way to express co-diagonalizability is that there exists a symplectic matrix $T$ such that $TPT^{\top}=\Sigma_P$ and $T^{-\top}Q T^{-1}=\Sigma_Q$, with $\Sigma_P$ and $\Sigma_Q$ nonnegative and diagonal. In analogy with balanced realization for classical linear time-invariant systems \cite{ZDG95}, the case where $\Sigma_P=\Sigma_Q$ will be of particular interest. That is, when $P$ and $Q$ are co-diagonalizable to the same diagonal matrix.
Before stating the main results, let us introduce some formal definitions. Two matrices $M_1,M_2 \in \mathbb{R}^{2n \times 2n}$ are said to be {\em symplectically congruent} if there exists a symplectic matrix $T \in \mathbb{R}^{2n \times 2n}$ such that $TM_1T^{\top}=M_2$. Two matrices $M_1,M_2 \in \mathbb{R}^{2n \times 2n}$ are said to be {\em symplectically similar} if there exists a symplectic matrix $T \in \mathbb{R}^{2n \times 2n}$ such that $TM_1T^{-1}=M_2$. Our first result is the following:
\begin{lemma}
\label{lem:sym-cong-sim} A real $2n \times 2n$ matrix $P=P^{\top} \geq 0$ is symplectically congruent to a real diagonal $2n \times 2n$ matrix $\Sigma \geq 0$ if and only if $\mathbb{J}_n P$ is symplectically similar to $\mathbb{J}_n \Sigma$. If the symplectic congruence holds and $\mathbb{J}_n P$ is diagonalizable then its eigenvalues come in imaginary conjugate pairs $\pm \imath \sigma_i$, $i=1,2,\ldots,n$. In particular, if $P>0$ then $P$ is symplectically congruent to a diagonal matrix $\Sigma >0$, and $\mathbb{J}_n P$ is diagonalizable and symplectically similar to $\mathbb{J}_n \Sigma$.
\end{lemma}
\begin{proof}
See Appendix \ref{app:sym-cong-sim}.
\end{proof}
\textbf{Remark.} If $P \geq 0$ and $\mathbb{J}_n P$ is diagonalizable then the $n$ largest nonnegative eigenvalues $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n$ of $\imath \mathbb{J}_n P$ are referred to as the {\em symplectic eigenvalues} of $P$. In particular, by Williamson's Theorem \cite{Will36}, \cite[Lemma 2]{PSL09}, $\mathbb{J}_nP$ is always diagonalizable when $P>0$ and in this case $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n>0$.
\begin{lemma}
\label{lem:sym-eig-struct} Let $P=P^{\top} \geq 0$ be a real $2n \times 2n$ matrix with $\mathbb{J}_nP$ diagonalizable (in particular, whenever $P>0$). Define $\mathbb{K}_n=P_s \mathbb{J}_n P_s^{\top}=\left[\begin{array}{cc} 0 & I_n \\ -I_n & 0 \end{array} \right]$ and $\tilde P=P_s P P_s^{\top}$, where $P_s$ is a $2n \times 2n$ permutation matrix acting as $$P_s(q_1,p_1,q_2,p_2,\ldots,q_n,p_n)^{\top} =(q_1,q_2,\ldots,q_n,p_1,p_2,\ldots,p_n)^{\top}.$$ Suppose that $P$ has symplectic eigenvalues $\sigma_1,\sigma_2,\ldots,\sigma_n$, with $\sigma_k \geq 0$ for $k=1,2,\ldots,n$. Then there exist $2n$ linearly independent eigenvectors $v_1$, $v_1^{\#}$, $v_2$, $v_2^{\#}$, $\ldots$, $v_n$, $v_n^{\#}$ of $\mathbb{K}_n \tilde P$ satisfying $\mathbb{K}_n \tilde P v_k=\imath \sigma_k v_k$ and $\mathbb{K}_n \tilde P v_k^{\#}=-\imath \sigma_k v_k^{\#}$ for $k=1,2,\ldots,n$ such that the complex $2n \times 2n$ matrix
\begin{equation}
V=[\begin{array}{cccccccc} v_1 & v_2 & \ldots & v_n & v_1^{\#} & v_2^{\#} & \ldots & v_n^{\#} \end{array}] \label{eq:V-transform}
\end{equation}
satisfies
\begin{eqnarray}
-\imath V^{-1}\mathbb{K}_n \tilde P V &=& {\rm diag}(\sigma_1,\sigma_2,\ldots,\sigma_n,-\sigma_1,-\sigma_2,\ldots,-\sigma_n), \label{eq:V-prop-1}\\
-\imath V^{\dag} \mathbb{K}_n V &=& {\rm diag}(I_n,-I_n). \label{eq:V-prop-2}
\end{eqnarray}
\end{lemma}
\begin{proof}
Note that $\mathbb{K}_n \tilde P = P_s \mathbb{J}_n P_s^{\top} \tilde P = P_s \mathbb{J}_n P P_s^{\top}$. Therefore, $\mathbb{K}_n \tilde P$ and $\mathbb{J}_n P$ are similar to one another. Since $\mathbb{J}_n P$ is diagonalizable by hypothesis (in particular, whenever $P>0$), from Lemma \ref{lem:sym-cong-sim} it follows that $-\imath \mathbb{K}_n \tilde P$ is diagonalizable with real eigenvalues $\pm \sigma_1$, $\pm \sigma_2$, $\ldots$, $\pm \sigma_n$ with the corresponding eigenvectors in $V$. Thus $-\imath \mathbb{K}_n \tilde P$ satisfies (\ref{eq:V-prop-1}). The lemma now follows immediately from the following result of \cite{Xiao09}:
\begin{lemma}
\cite[Lemma 71, Section VI, pp. 32-34]{Xiao09} If $\mathbb{K}_n \tilde P$ is diagonalizable then the matrix $V$ defined in (\ref{eq:V-transform}) satisfies (\ref{eq:V-prop-2}).
\end{lemma}
$\hfill \Box$
\end{proof}
Based on the above lemma we can prove the following:
\begin{theorem}
\label{thm:sym-diag} Let $P=P^{\top} \geq 0$ be a real $2n \times 2n$ matrix with $\mathbb{J}_n P$ diagonalizable (in particular, whenever $P>0$), and suppose that the symplectic eigenvalues of $P$ are $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n$. Define $V$ and $P_s$ as in Lemma \ref{lem:sym-eig-struct}. Also, define the $2n \times 2n$ unitary matrix
$$
U=\frac{1}{\sqrt{2}}{\rm diag}_n\left(\left[\begin{array}{cc} 1 & -\imath \\ 1 & \imath \end{array} \right]\right),
$$
and the $2n \times 2n$ matrix $T=(P_s^{\top} V P_s U)^{\top}$. Then $T$ is symplectic, $T^{-\top} \mathbb{J}_n P T^{\top}=\mathbb{J}_n \Sigma =\Sigma \mathbb{J}_n$, and $T P T^{\top}=\Sigma$, with $\Sigma={\rm diag}(\sigma_1 I_2, \sigma_2I_2,\ldots,\sigma_n I_2)$.
\end{theorem}
\begin{proof}
See Appendix \ref{app:sym-diag}.
\end{proof}
\begin{theorem}
\label{thm:bt-q-lin} Let $G$ be a $n$ degree of freedom linear quantum stochastic system with system matrices $(A,B,C,D)$ with $A$ Hurwitz. Let $P=P^{\top}\geq 0$ and $Q=Q^{\top} \geq 0$ be, respectively, the controllability and observability Gramians of the system which are, respectively, the unique solution to the Lyapunov equations
\begin{eqnarray*}
AP+PA^{\top} + B B^{\top}=0,\\
A^{\top} Q + Q A + C^{\top} C=0.
\end{eqnarray*}
Suppose that $\mathbb{J}_nP$ and $\mathbb{J}_nQ$ are diagonalizable (in particular, whenever $P>0$ and $Q>0$) then the following holds:
\begin{enumerate}
\item There exists a symplectic matrix $T$ such that $TPT^{\top}=\Sigma$, $T^{-\top}QT^{-1}=\Sigma$, and $\Sigma= {\rm diag}(\sigma_1 I_2, \sigma_2 I_2,\ldots,\sigma_n I_2)$ for some $\sigma_1,\sigma_2,\ldots,\sigma_n \geq 0$, if and only if $\mathbb{J}_n P= Q \mathbb{J}_n$. In this case, $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n$ are the coinciding symplectic eigenvalues of $P$ and $Q$ as well as the Hankel singular values of the system.
\item There exists a symplectic matrix $T$ such that $TPT^{\top}=\Sigma_P$, $T^{-\top}QT^{-1}=\Sigma_Q$, with $\Sigma_X$ ($X \in \{P,Q\}$) of the form $\Sigma_X= {\rm diag}(\sigma_{X,1} I_2, \sigma_{X,2} I_2,\ldots,\sigma_{X,n} I_2)$, with $\sigma_{X,1}$, $\sigma_{X,2}$, $\ldots$, $\sigma_{X,n}\geq 0$ the symplectic eigenvalues of $X$ (symplectic eigenvalues of $P$ need not be the same as those of $Q$), if and only if $[\mathbb{J}_n P,Q \mathbb{J}_n]=0$.
\item There exists a symplectic matrix $T$ such that $TPT^{\top}=\Sigma_P$, $T^{-\top}QT^{-1}=\Sigma_Q$, for some real positive semidefinite diagonal matrices $\Sigma_X$ ($X \in \{P,Q\}$), if and only if there exist symplectic matrices $\tilde T_P$, $\tilde T_Q$, and diagonal symplectic matrices $D_P$ and $D_Q$ such that (i) $\tilde T_P P \tilde T_P^{\top}={\rm diag}(\sigma_{P,1}I_2,\sigma_{P,2}I_2,\ldots,\sigma_{P,n}I_2)$ with $\sigma_{P,1}$, $\sigma_{P,2}$, $\ldots$, $\sigma_{P,n}$ the symplectic eigenvalues of $P$, (ii) $\tilde T_Q^{-\top} Q \tilde T_Q^{-1}={\rm diag}(\sigma_{Q,1}I_2,\sigma_{Q,2}I_2,\ldots,\sigma_{Q,n}I_2)$ with $\sigma_{Q,1}$, $\sigma_{Q,2}$, $\ldots$, $\sigma_{Q,n}$ the symplectic eigenvalues of $Q$, and (iii) $D_P^{-1}\tilde T_P = D_Q \tilde T_Q$.
\end{enumerate}
\end{theorem}
\begin{proof}
See Appendix \ref{app:bt-q-lin}.
\end{proof}
A discussion of the contents of the theorem is now in order.
Point 1 of the theorem is the best possible outcome and yields a direct quantum analogue of balanced realization. This is for two reasons. If $\mathbb{J}_n P = Q \mathbb{J}_n$ is satisfied then the Gramians $P$ and $Q$ can be co-diagonalized to the same diagonal matrix $\Sigma$. Moreover, the diagonal entries of $\Sigma$ come in identical pairs for each pair of conjugate position and momentum operators in the transformed system. This is desirable since when we discard oscillators from the model, we must simultaneously remove pairs of conjugate position and momentum operators, not just one operator of a pair. If the coefficients of $\Sigma$ were different for the position and momentum operators of the same oscillator, it would not be possible to simply remove the operator corresponding to the larger of the two associated diagonal elements of $\Sigma$. However, this ideal scenario is only achievable under the extremely restrictive condition that $\mathbb{J}_n P=Q\mathbb{J}_n$. Generic linear quantum stochastic systems will not satisfy this condition; indeed, it is easy to generate random examples of linear quantum stochastic systems that fail to meet it.
Point 2 of the theorem shows that it is possible to have co-diagonalization of $P$ and $Q$ to diagonal matrices $\Sigma_X$ of the form $\Sigma_X={\rm diag}(\sigma_{X,1} I_2,\sigma_{X,2} I_2,\ldots, \sigma_{X,n} I_2)$, $X\in\{P,Q\}$, but $\Sigma_P$ and $\Sigma_Q$ will not necessarily coincide. This weaker co-diagonalization is achievable under the weaker requirement (compared to the requirement of Point 1) that $[\mathbb{J}_n P,Q \mathbb{J}_n]=0$. Since $\Sigma_P$ and $\Sigma_Q$ need not coincide, their diagonal elements may not be ordered in the same way. However, it will be shown in the next section that for quasi-balanceable systems there is a natural strategy for truncating subsystems. Moreover, as will be demonstrated by an example in Section \ref{sec:cp-systems-bt}, there exists a class of linear quantum stochastic systems that have a quasi-balanced realization.
Point 3 of the theorem is the weakest possible co-diagonalization result for $P$ and $Q$. This form of diagonalization can be achieved under a weaker condition than that of Points 1 and 2. It states that $P$ and $Q$ can be co-diagonalized by a symplectic matrix to, respectively, diagonal matrices $\Sigma_P$ and $\Sigma_Q$ which need not have the special form stipulated in Points 1 and 2.
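The conditions in Points 1 and 2 are straightforward to test numerically. In the sketch below (illustrative only; the Gramians are generated directly rather than computed from a system), given a random $P>0$ the matrix $Q_1 = -\mathbb{J}_n P \mathbb{J}_n$ is symmetric positive definite and satisfies the Point 1 condition exactly, while a generic positive definite $Q_2$ fails both conditions:

```python
import numpy as np

def Jmat(n):
    return np.kron(np.eye(n), np.array([[0., 1.], [-1., 0.]]))

def point1_holds(P, Q, J, tol=1e-9):
    """Condition of Point 1 of Theorem thm:bt-q-lin: J P = Q J."""
    return np.allclose(J @ P, Q @ J, atol=tol)

def point2_holds(P, Q, J, tol=1e-9):
    """Condition of Point 2 of Theorem thm:bt-q-lin: [J P, Q J] = 0."""
    JP, QJ = J @ P, Q @ J
    return np.allclose(JP @ QJ - QJ @ JP, 0, atol=tol)

rng = np.random.default_rng(5)
n = 3
J = Jmat(n)
M = rng.standard_normal((2 * n, 2 * n))
P = M @ M.T + np.eye(2 * n)

# Q1 := -J P J is symmetric positive definite and satisfies J P = Q1 J exactly,
# so the pair (P, Q1) meets the Point 1 (balanced) condition ...
Q1 = -J @ P @ J
assert point1_holds(P, Q1, J) and point2_holds(P, Q1, J)

# ... whereas a generic positive definite Q2 fails both conditions.
N = rng.standard_normal((2 * n, 2 * n))
Q2 = N @ N.T + np.eye(2 * n)
assert not point1_holds(P, Q2, J)
assert not point2_holds(P, Q2, J)
```

That $Q_1=-\mathbb{J}_nP\mathbb{J}_n$ works follows from $Q_1\mathbb{J}_n=-\mathbb{J}_nP\mathbb{J}_n^2=\mathbb{J}_nP$, and Point 1 implies Point 2 since a matrix commutes with itself.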
\section{Truncation error bound in model reduction of quasi-balanceable systems}
\label{sec:truncation-error-bound}
In this section we shall derive a bound on the magnitude of the error transfer function due to subsystem truncation of a quasi-balanceable linear quantum stochastic system. The error bound will be presented in Theorem \ref{thm:qb-error-bound} of this section. Let us introduce the notation $\bar{\sigma}(\cdot)$ and $\lambda_{\rm max}(\cdot)$ to denote the largest singular value and eigenvalue of a matrix, respectively, with the matrix being square for the latter, and recall that the $H^{\infty}$ norm of a transfer function $H(s)$ is $\|H \|_{\infty}=\mathop{\sup}_{\omega \in \mathbb{R}} \bar{\sigma}(H(\imath \omega))$. We begin with the following lemma.
\begin{lemma}
\label{lm:zero-error} Let $G=(A,B,C,D)$ be a linear quantum stochastic system of degree $n$ with $A$ Hurwitz, $\mathbb{J}_n P$ and $\mathbb{J}_n Q$ diagonalizable, and $[\mathbb{J}_n P,Q\mathbb{J}_n]=0$. Let $\Xi_G(s)=C(sI-A)^{-1}B+D$ be the transfer function of $G$,
and let $T$ be a symplectic transformation such that $\tilde G =(TAT^{-1},TB,CT^{-1},D)$ is a quasi-balanced linear quantum stochastic system with diagonal positive semidefinite controllability and observability Gramians $\Sigma_P={
\rm diag}(\sigma_{P,1}I_{2},\sigma_{P,2}I_{2},\ldots,\sigma_{P,n}I_{2})$ and $\Sigma_Q={\rm diag}(\sigma_{Q,1}I_{2},\sigma_{Q,2}I_{2},\ldots,\sigma_{Q,n}I_{2})$, respectively. Partition the Gramian $\Sigma_X$ ($X \in \{Q,P\}$) as $\Sigma_X ={\rm diag}(\sigma_{X,r1},\sigma_{X,r2})$ with $\sigma_{X,r1} \in \mathbb{R}^{2r \times 2r}$ and $r<n$, and partition $\tilde A =TAT^{-1}$, $\tilde B=TB$, and $\tilde C = CT^{-1}$ compatibly as
$$
\tilde A=\left[\begin{array}{cc} \tilde A_{r,11} & \tilde A_{r,12} \\ \tilde A_{r,21} & \tilde A_{r,22} \end{array}\right];\,
\tilde B=\left[\begin{array}{c} \tilde B_{r,1} \\ \tilde B_{r,2} \end{array}\right];\, \tilde C=\left[\begin{array}{cc} \tilde C_{r,1} & \tilde C_{r,2} \end{array}\right].
$$
Let $\Xi_{\tilde G_r}(s)= \tilde C_{r,1}(sI-\tilde A_{r,11})^{-1} \tilde B_{r,1}+D$. If $\tilde A_{r,11}$ is Hurwitz then for all $\omega \in \mathbb{R}$
\begin{eqnarray*}
\lefteqn{\bar{\sigma} (\Xi_G(\imath\omega)-\Xi_{\tilde G_{r}}(\imath\omega) )}\\
&=&\sqrt{\lambda_{\rm \max}((\Sigma_{P,r2}+\Delta_r(\imath \omega)^{-1}\Sigma_{P,r2}\Delta_r(\imath \omega)^{*})( \Delta_r(\imath \omega)^{-*}\Sigma_{Q,r2}\Delta_r(\imath\omega) + \Sigma_{Q,r2}))},
\end{eqnarray*}
with $\Delta_r(s)=sI-\tilde A_{r,22}-\tilde A_{r,21}(sI-\tilde A_{r,11})^{-1}\tilde A_{r,12}$. In particular, if either of, or both of, $P$ and $Q$ are singular with ${\rm rank}(PQ)=2\nu<2n$, and $T$ has been chosen such that $\Sigma_{P,\nu 1}\Sigma_{Q, \nu 1} >0$\footnote{If $T$ does not already satisfy this then pairs of consecutive odd and even indexed rows of $T$ can always be permuted to get a new symplectic $T$ that does satisfy it to replace the original $T$.}, and
$\tilde A_{r,11}$ is Hurwitz for $r=\nu,\nu+1,\ldots,n-1$, then $\|\Xi_{G}-\Xi_{\tilde G_{\nu}}\|_{\infty}=0$.
\end{lemma}
\begin{proof}
The expression for $\bar{\sigma}(\Xi_G(\imath \omega)-\Xi_{\tilde G_r}(\imath \omega))$ in the lemma follows mutatis mutandis from the derivation in Section 3 of \cite{Enns84}. Now, by the hypothesis of the latter part of the lemma on $P$, $Q$, and $T$, we have that $\Sigma_{P,r2}\Sigma_{Q,r2}=0$ for all $r=\nu,\nu+1,\ldots,n-1$. Taking $r=n-1$, by the hypothesis that $\tilde A_{n-1,11}$ is Hurwitz we then get that $\bar{\sigma}(\Xi_G(\imath \omega)-\Xi_{\tilde G_{n-1}}(\imath \omega))=0$ for all $\omega$, therefore $\| \Xi_G -\Xi_{\tilde G_{n-1}}\|_{\infty}=0$. Since $\Xi_{\tilde G_{n-1}}$ has again, by construction, a quasi-balanced realization, the assumption that $\tilde A_{n-2,11}$ is Hurwitz implies analogously that $\|\Xi_{\tilde G_{n-1}} -\Xi_{\tilde G_{n-2}}\|_{\infty}=0$. Repeating this argument, we obtain $\|\Xi_{\tilde G_{r}}-\Xi_{\tilde G_{r-1}}\|_{\infty}=0$ for $r=n-2,n-3,\ldots,\nu+1$. Therefore, with $\Xi_{\tilde G_n}=\Xi_G$,
$$
\| \Xi_G-\Xi_{\tilde G_{\nu}}\|_{\infty} = \left\| \sum_{r=\nu+1}^{n}(\Xi_{\tilde G_r} - \Xi_{\tilde G_{r-1}})\right\|_{\infty}\leq \sum_{r=\nu+1}^{n} \|\Xi_{\tilde G_r} - \Xi_{\tilde G_{r-1}}\|_{\infty}=0.
$$
$\hfill \Box$
\end{proof}
The above lemma states that we can always discard subsystems corresponding to position and momentum pairs in the quasi-balanced realization that correspond to vanishing products $\sigma_{P,r} \sigma_{Q,r}$ without incurring any approximation error, provided the submatrices $\tilde A_{\nu,11},\tilde A_{\nu+1,11},\ldots, \tilde A_{n-1,11}$ are all Hurwitz, where $2\nu$ is the rank of $PQ$. Therefore, to simplify the exposition, from this point on we consider only the case where $G$ is {\em minimal} in the usual sense that $(A,B)$ is a controllable pair (i.e., the matrix $[\begin{array}{cccc} B & AB & \ldots & A^{2n-1}B\end{array}]$ is full rank) and $(A,C)$ is an observable pair (i.e., the matrix $[\begin{array}{cccc} C^{\top} & A^{\top} C^{\top} & \ldots & (A^{\top})^{2n-1}C^{\top} \end{array}]$ is full rank). In this case we will have that $P>0$, $Q>0$ and $PQ$ is nonsingular. We now show that when $[\mathbb{J}_n P, Q \mathbb{J}_n]=0$, a quasi-balanced realization of $G$ is similar to a non-physically realizable balanced realization of $G$ by a simple diagonal similarity transformation. This is stated precisely in the next lemma.
\begin{lemma}
\label{lm:qb-to-b} Let $\tilde G=(\tilde A,\tilde B,\tilde C,D)$ be a quasi-balanced realization of $G=(A,B,C,D)$ as defined in Lemma \ref{lm:zero-error}, and suppose that $G$ is minimal. Let $T_b ={\rm diag}(T_{b,1}I_2,T_{b,2}I_2,\ldots,T_{b,n}I_2)$ be a diagonal matrix with $T_{b,j}=\left(\frac{\sigma_{Q,j}}{\sigma_{P,j}}\right)^{1/4}$. Then $\tilde G_b=(\tilde A_b,\tilde B_b,\tilde C_b,D)$ with $\tilde A_b=T_b \tilde A T_b^{-1}$, $\tilde B_b=T_b \tilde B$, and $\tilde C_b=\tilde C T_b^{-1}$, is a non-physically realizable balanced realization of $G$ (in particular, $\Xi_{\tilde G_b}(s)=\Xi_G(s)$) with diagonal and identical controllability and observability Gramians, $\Sigma_{P,b}=\Sigma_{Q,b}=\Sigma_b$, with $$\Sigma_b={\rm diag}(\sqrt{\sigma_{P,1}\sigma_{Q,1}}I_2,\sqrt{\sigma_{P,2}\sigma_{Q,2}}I_2,\ldots, \sqrt{\sigma_{P,n}\sigma_{Q,n}}I_2),$$ where $\Sigma_{P,b}$ and $\Sigma_{Q,b}$ denote the controllability and observability Gramians of $\tilde G_b$, respectively. Moreover, let $\tilde A$, $\tilde B$, $\tilde C$ be partitioned according to Lemma \ref{lm:zero-error}, and partition $\tilde A_b$, $\tilde B_b$, $\tilde C_b$ compatibly as
$$
\tilde A_b=\left[\begin{array}{cc} \tilde A_{b,r,11} & \tilde A_{b,r,12} \\ \tilde A_{b,r,21} & \tilde A_{b,r,22} \end{array}\right];\,
\tilde B_b=\left[\begin{array}{c} \tilde B_{b,r,1} \\ \tilde B_{b,r,2} \end{array}\right];\, \tilde C_b=\left[\begin{array}{cc} \tilde C_{b,r,1} & \tilde C_{b,r,2} \end{array}\right],
$$
and define $\Xi_{\tilde G_{b,r}}(s) = \tilde C_{b,r,1}(sI-\tilde A_{b,r,11})^{-1} \tilde B_{b,r,1}+D$, then
\begin{equation}
\Xi_G(s) -\Xi_{\tilde G_r}(s) = \Xi_{\tilde G_b}(s) -\Xi_{\tilde G_{b,r}}(s), \label{eq:qb-n-b-equal-error}
\end{equation}
where $\tilde G_r$ is as defined in Lemma \ref{lm:zero-error}.
\end{lemma}
\begin{proof}
Note that from the given definitions of $T_b$ and $\Sigma_b$, $T_b$ is invertible and we easily verify that $T_b \Sigma_P T_b^{\top}=\Sigma_b$ and $T_b^{-\top} \Sigma_Q T_b^{-1}=\Sigma_b$.
Since $TPT^{\top}=\Sigma_P$ and $T^{-\top} QT^{-1}=\Sigma_Q$, defining $\tilde T_b=T_b T$ it follows that $\tilde T_b P \tilde T_b^{\top} = T_b \Sigma_P T_b^{\top}=\Sigma_b$ and $\tilde T_b^{-\top} Q\tilde T_b^{-1}=T_b^{-\top}\Sigma_Q T_b^{-1}=\Sigma_b$.
Hence the system $\tilde G_b$ as defined in the lemma is similar to $G$ (via the transformation $\tilde T_b$) and has balanced Gramians $\Sigma_{P,b}=\Sigma_{Q,b}=\Sigma_b$; it is therefore a balanced realization of $\Xi_G(s)$, although it is not physically realizable. Since $T_b$ is a diagonal matrix, we can partition it conformably with the partitioning of $\tilde A_b$, $\tilde B_b$ and $\tilde C_b$ given in the lemma as $T_b={\rm diag}(T_{b,r1},T_{b,r2})$, with $T_{b,r1}$ a diagonal and invertible $2r \times 2r$ matrix. By the diagonal form of $T_b$ we easily verify that $\tilde A_{b,r,11}= T_{b,r1} \tilde A_{r,11}T_{b,r1}^{-1}$, $\tilde B_{b,r,1}= T_{b,r1} \tilde B_{r,1}$, and $\tilde C_{b,r,1}= \tilde C_{r,1}T_{b,r1}^{-1}$. We conclude that $\tilde G_{b,r}=(\tilde A_{b,r,11},\tilde B_{b,r,1}, \tilde C_{b,r,1},D)$ is similar to $\tilde G_r=(\tilde A_{r,11},\tilde B_{r,1}, \tilde C_{r,1},D)$ (via the transformation $T_{b,r1}$) and thus $\Xi_{\tilde G_{b,r}}(s)=\Xi_{\tilde G_{r}}(s)$. From this and the fact established earlier that $\Xi_G(s)=\Xi_{\tilde G_b}(s)$, (\ref{eq:qb-n-b-equal-error}) therefore holds. $\hfill \Box$
\end{proof}
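The diagonal rescaling in Lemma \ref{lm:qb-to-b} is easy to check numerically. The following Python sketch (an informal illustration only, not part of the formal development; numpy and the example symplectic eigenvalues are assumptions) verifies that $T_b$ maps the diagonal Gramians to the common Gramian $\Sigma_b=(\Sigma_P\Sigma_Q)^{1/2}$ while failing to be symplectic, which is why $\tilde G_b$ is not physically realizable.

```python
import numpy as np

# assumed symplectic eigenvalue pairs of a 3-mode quasi-balanced realization
# (illustrative numbers, not data from the paper)
sigma_P = np.array([2.0, 1.5, 0.4])
sigma_Q = np.array([0.9, 0.3, 0.1])
n = len(sigma_P)

Sigma_P = np.diag(np.repeat(sigma_P, 2))   # diag(sigma_{P,j} I_2)
Sigma_Q = np.diag(np.repeat(sigma_Q, 2))
T_b = np.diag(np.repeat((sigma_Q / sigma_P) ** 0.25, 2))

# both transformed Gramians equal Sigma_b = (Sigma_P Sigma_Q)^{1/2}
Sigma_b = np.diag(np.repeat(np.sqrt(sigma_P * sigma_Q), 2))
lhs_P = T_b @ Sigma_P @ T_b.T
lhs_Q = np.linalg.inv(T_b).T @ Sigma_Q @ np.linalg.inv(T_b)

# T_b is not symplectic (unless Sigma_P = Sigma_Q), so the balanced
# realization it produces is not physically realizable
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jn = np.kron(np.eye(n), J2)
symplectic_defect = np.linalg.norm(T_b @ Jn @ T_b.T - Jn)
```

Only when $\Sigma_P=\Sigma_Q$ does $T_b$ reduce to the identity, in which case the quasi-balanced realization is already balanced.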
The identity (\ref{eq:qb-n-b-equal-error}) together with the fact that $\tilde G_b$ is a balanced realization of $\Xi_G(s)$ (although not physically realizable) allows us to immediately obtain bounds for the approximation error $\| \Xi_G -\Xi_{\tilde G_r}\|_{\infty}$ using standard proofs for results on error bounds for truncation of balanced realizations of classical linear systems, see, e.g., \cite[Theorem 7.3]{ZDG95}. This is stated as the following theorem.
\begin{theorem}
\label{thm:qb-error-bound} Let $G=(A,B,C,D)$ be a minimal linear quantum stochastic system of degree $n$ with $A$ Hurwitz, $\mathbb{J}_n P$ and $\mathbb{J}_n Q$ diagonalizable, and $[\mathbb{J}_n P,Q\mathbb{J}_n]=0$. Let $\Xi_G(s)=C(sI-A)^{-1}B+D$ be the transfer function of $G$, and let $T$ be a symplectic transformation such that $\tilde G =(TAT^{-1},TB,CT^{-1},D)$ is a quasi-balanced linear quantum stochastic system
with diagonal positive definite controllability and observability Gramians $\Sigma_P$ and $\Sigma_Q$, respectively,
and $\Sigma_b=(\Sigma_{P}\Sigma_{Q})^{1/2}={\rm diag}(\sigma_{b,1} I_{2i_1},\sigma_{b,2}I_{2i_2},\ldots,\sigma_{b,\mu}I_{2i_{\mu}})$ with $\sigma_{b,1}>\sigma_{b,2}>\ldots>\sigma_{b,\mu}>0$ for some positive integers $\mu\leq n$ and $i_1,i_2,\ldots,i_{\mu}$ such that $\sum_{r=1}^{\mu} i_r =n$. Let $\tilde A_{r,11}$, $\tilde G_{r}$, and $\Sigma_{X,r1}$ ($X \in \{Q,P\}$) be as defined in Lemma \ref{lm:zero-error}, and let $j_r = \sum_{k=1}^r i_k$. Then for any $r < \mu$, $\tilde A_{j_r,11}$ is Hurwitz, and
$$
\| \Xi_G - \Xi_{\tilde G_{j_r}} \|_{\infty} \leq 2 \sum_{k=r+1}^{\mu} \sigma_{b,k},
$$
with the bound being achieved for $r=\mu-1$: $\| \Xi_G - \Xi_{\tilde G_{j_{\mu-1}}} \|_{\infty}=2\sigma_{b,\mu}$.
\end{theorem}
The error bound given by the above theorem gives a recipe for truncating the subsystems in a quasi-balanced realization of $G$. That is, one should truncate those subsystems in $\tilde G$ associated with position-momentum operator pairs that correspond to pairs $(\sigma_{P,r},\sigma_{Q,r})$ with the smallest geometric means $\sqrt{\sigma_{P,r}\sigma_{Q,r}}$. Furthermore, since $\tilde G_b$ is a balanced realization of $\Xi_G$, it turns out, rather nicely, that for quasi-balanced realizations of linear quantum stochastic systems these geometric means in fact coincide with the Hankel singular values of $G$.
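This recipe can be sketched numerically. In the Python fragment below (an informal illustration; the Gramian values are made-up assumptions and numpy is assumed available), the Hankel singular values are recovered as the geometric means of the diagonal Gramian entries, and the truncation bound of Theorem \ref{thm:qb-error-bound} is assembled from the discarded modes.

```python
import numpy as np

# assumed diagonal Gramians of a quasi-balanced 4-mode realization
# (illustrative numbers only, not taken from the paper)
sigma_P = np.array([1.8, 1.1, 0.6, 0.2])
sigma_Q = np.array([1.2, 0.5, 0.3, 0.1])

# Hankel singular values: the geometric means, one identical pair per mode
hsv = np.sqrt(sigma_P * sigma_Q)

# they coincide with the square roots of the eigenvalues of P Q
Sigma_P = np.diag(np.repeat(sigma_P, 2))
Sigma_Q = np.diag(np.repeat(sigma_Q, 2))
hsv_from_PQ = np.sqrt(np.sort(np.linalg.eigvals(Sigma_P @ Sigma_Q).real)[::-1])

# truncation recipe: drop the modes with the smallest geometric means;
# keeping r modes gives the error bound 2 * (sum of the discarded means)
r = 2
order = np.argsort(hsv)[::-1]
bound = 2 * hsv[order[r:]].sum()
```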
\section{Quasi-balanced truncation of completely passive linear quantum stochastic systems}
\label{sec:cp-systems-bt}
We now consider model reduction for the special class of completely passive linear quantum stochastic systems as defined in Sec. \ref{sec:cp-systems}. The key result in this section is that members of this distinguished class have the property that, provided the $A$ matrix is Hurwitz, they always satisfy Point 2 of Theorem \ref{thm:bt-q-lin} and thus always have a quasi-balanced realization. Therefore, subsystem truncation can be performed on quasi-balanced realizations of this class of systems by removing subsystems associated with the smallest geometric means of the product of the diagonal controllability and observability Gramians, with an error bound given by Theorem \ref{thm:qb-error-bound}.
It has been shown in \cite{Nurd10b} that for completely passive systems the matrix $R$ has the block form $R=[R_{jk}]_{j,k=1,2,\ldots,n}$, where $R_{jk}$ is a $2 \times 2$ diagonal matrix of the form $R_{jk}=r_{jk} I_2$ for some $r_{jk} \in \mathbb{R}$. Also, if $\tilde K=[\tilde K_{ij}]_{i=1,2,\ldots,m,j=1,2,\ldots,n}$ with $\tilde K_{ij}=e^{\imath \theta_{ij}} \sqrt{\gamma_{ij}}$ with $\theta_{ij},\gamma_{ij} \in \mathbb{R}$ and $\gamma_{ij} >0$, then by some straightforward algebra (see \cite[proof of Theorem 3.4]{JNP06}) we find that $B$ has the block form
$B=[B_{ij}]_{i=1,2,\ldots,n,j=1,2,\ldots,m}$ with
\begin{equation}
B_{ij}=-\sqrt{\gamma_{ji}}\left[\begin{array}{cc} \cos(\theta_{ji}) & \sin(\theta_{ji}) \\ -\sin(\theta_{ji}) & \cos(\theta_{ji}) \end{array}\right]. \label{eq:B_ij}
\end{equation}
That is, $B_{ij}$ is a scaled rotation matrix on $\mathbb{R}^2$. These special structures of the matrices of completely passive linear quantum stochastic systems lead to the following results.
\begin{lemma}
\label{lem:cp-trans} If $G=(A,B,C,D)$ is a completely passive linear quantum stochastic system and $T$ is a unitary symplectic matrix, then the transformed system $\tilde G= (TAT^{-1},TB,CT^{-1},D)$ is also completely passive.
\end{lemma}
\begin{proof}
See Appendix \ref{app:cp-trans}.
\end{proof}
The above lemma essentially states that the complete passivity property is invariant under unitary symplectic similarity transformations. Also, we have that
\begin{theorem}
\label{thm:cp-qbr} For any completely passive system that is asymptotically stable (i.e., the $A$ matrix is Hurwitz), $P=I$ and $[\mathbb{J}_n P,Q\mathbb{J}_n]=0$. That is, any such system has a quasi-balanced realization. In this case, the quasi-balancing transformation $T$ is unitary symplectic and can be determined by applying Theorem \ref{thm:sym-diag} to the observability Gramian $Q$ such that $T^{-\top}QT^{-1}=\Sigma_Q$. Moreover, any reduced system obtained by truncating a subsystem of the quasi-balanced realization is again completely passive.
\end{theorem}
\begin{proof}
See Appendix \ref{app:cp-qbr}.
\end{proof}
We are now ready to proceed to an example illustrating the use of Theorems \ref{thm:sym-diag}, \ref{thm:bt-q-lin}, and \ref{thm:cp-qbr}.
\begin{example}
Consider a two mirror optical cavity $G_1$ (the mirrors being labelled M1 and M2) with resonance frequency $\omega_c$ (say, in the order of GHz, its exact value not being critical here) and each mirror having decay rate $\gamma=12 \times 10^6$ Hz (typically a much smaller value than $\omega_c$ for a high Q cavity). The mirror M2 is driven by coherent field $d\mathcal{A}_{\rm in}(t) = \alpha(t) e^{\imath \omega_c t} dt+d\mathcal{A}_{2}(t)$, where $\alpha(t)$ is a complex-valued signal and $\mathcal{A}_2(t)$ a vacuum annihilation field. For sufficiently large $t$, the light $\mathcal{A}_{\rm out}(t)$ reflected from M1 will be a filtered version (by the cavity) of $\mathcal{A}_{\rm in}(t)$ of the form $d\mathcal{A}_{\rm out}(t) = \tilde \alpha(t) e^{\imath\omega_c t} dt + d\mathcal{A}_1(t)$, where $\tilde \alpha(t)$ is a low-pass filtered version of $\alpha(t)$ (with some inherent vacuum fluctuations) and $\mathcal{A}_1(t)$ a vacuum annihilation field.
Note that the light reflected from M2 (the other cavity output) is of no interest here since it contains a feedthrough of the unfiltered signal due to the cavity being driven through this mirror, so we opt to ignore it. Working in a rotating frame with respect to the cavity resonance frequency $\omega_c$ (see, e.g., \cite{NJD08}), this two mirror cavity is described by a one degree of freedom, 4 input, and 2 output linear quantum stochastic system with Hamiltonian matrix $R=0_{2 \times 2}$, coupling matrix $K=\frac{1}{2}\left[\begin{array}{cc} \sqrt{\gamma} & \imath\sqrt{\gamma} \\ \sqrt{\gamma} & \imath \sqrt{\gamma} \end{array} \right]$, and scattering matrix $S=I$, with the output from mirror M2 neglected.
It is possible to obtain a high roll-off rate and realize a sharper low-pass cut-off by connecting several identical cavities together in a particular way, as shall now be described. Suppose that $G_2,G_3,\ldots,G_N$ are additional cavities all identical to $G_1$. For $j=2,3,\ldots,N$, connect the output from mirror M1 of cavity $G_{j-1}$ as input to mirror M2 of $G_{j}$. The signal to be filtered will drive mirror M2 of cavity $G_1$ and the output of interest will be the filtered light reflected off mirror M1 of cavity $G_N$. The optical low-pass filtering network $G_{{\rm net},N}$ composed of this interconnection of $G_1,G_2,\ldots,G_N$ is a linear quantum stochastic system with $N$ degrees of freedom, $2(N+1)$ inputs (with a pair of quadratures being driven by the signal to be filtered), and 2 outputs\footnote{Physically there are actually $2(N+1)$ outputs but $2N$ of them are of no interest as they feed through the original unfiltered signal and are thus ignored.}. This network is completely passive since it is composed of completely passive cavities, and the $A$ matrix of the network is Hurwitz. For the case $N=5$, the Hankel singular values of the network\footnote{By Lemma \ref{lm:qb-to-b} and Theorem \ref{thm:cp-qbr}, they coincide with the square root of the symplectic eigenvalues of $Q$ and come in identical pairs.} are 0.9028, 0.5826, 0.2632, 0.0812, and 0.0154 (each appearing twice). This suggests that modes corresponding to the two smallest Hankel singular values may be removed without excessive truncation error.
Transforming this system into quasi-balanced form by applying Theorems \ref{thm:bt-q-lin} and \ref{thm:sym-diag}, and truncating the two modes corresponding to the two smallest Hankel singular values 0.0812 and 0.0154 gives a physically realizable asymptotically stable reduced model $G_{\rm red,3}$ with three degrees of freedom, 12 inputs, and 2 outputs, with error bound $\|\Xi_{G_{\rm net,5}}-\Xi_{G_{\rm red,3}}\|_{\infty} \leq 2(0.0812+0.0154)=0.1932$. Here the driven input quadratures are labelled as the last two inputs $w_{2N+1}$ and $w_{2N+2}$, and the frequency responses of interest will be the ones from $w_{2N+1}$ and $w_{2N+2}$ to the filtered output quadratures $y_1$ and $y_2$, respectively, with all other inputs only contributing vacuum fluctuations to the filtered signal\footnote{Note the (steady-state) vacuum fluctuations experienced by the $2N$ cavity quadratures will only be of unity variance, independently of $N$. This is because the steady-state (symmetrized) covariance of the fluctuations is given by the controllability Gramian $P$ and by Theorem \ref{thm:cp-qbr} we have that for completely passive systems $P=I$.}. Due to decoupling and symmetries in the cavity equations, the single input single output transfer functions $w_{2N+1} \rightarrow y_1$ and $w_{2N+2} \rightarrow y_2$ are in fact identical, and their magnitude and phase frequency responses are shown in Fig.~\ref{fig:lp-modred}. It can be seen from the figure that the reduced model approximates the magnitude response quite well at lower frequencies but has a slower roll-off rate than the full network, as can be expected, and also captures the phase response of the full model very well.
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.6]{lpmodred2}
\caption{Magnitude (top) and phase (bottom) frequency responses from the driven quadrature $w_{2N+1}$ to the filtered output $y_1$ (or, identically, from $w_{2N+2}$ to the filtered output $y_2$) of the full optical network and a three degree of freedom reduced model for $N=5$. The responses of the full network and reduced model are indicated by solid blue lines and dashed red lines, respectively.}
\label{fig:lp-modred}
\end{figure}
\end{example}
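The balanced-truncation machinery behind the example can also be sketched numerically. The Python code below is a simplified classical stand-in (an assumption for illustration: a cascade of $N$ identical one-pole low-pass sections replacing the exact quantum cavity network, with numpy assumed available); it computes the Gramians, a square-root balancing transformation, and checks the sampled frequency-response error of a $k$-state truncation against the classical bound $2\sum_{j>k}\sigma_j$.

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorization (small systems only)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    X = np.linalg.solve(K, -Q.flatten(order='F')).reshape((n, n), order='F')
    return (X + X.T) / 2

# simplified classical stand-in for the cavity chain: N identical one-pole
# sections g(s) = gamma/(s + gamma) in cascade (illustrative assumption)
gamma, N, k = 1.0, 5, 3
A = -gamma * np.eye(N) + gamma * np.eye(N, k=-1)
B = gamma * np.eye(N, 1)          # drive enters the first section
C = np.eye(N)[-1:, :]             # read the output of the last section

P = lyap(A, B @ B.T)              # controllability Gramian
Q = lyap(A.T, C.T @ C)            # observability Gramian

# square-root balancing: T P T^T = T^{-T} Q T^{-1} = diag(hsv)
L = np.linalg.cholesky(P)
w, U = np.linalg.eigh(L.T @ Q @ L)
idx = np.argsort(w)[::-1]
hsv, U = np.sqrt(w[idx]), U[:, idx]
T = np.diag(hsv ** 0.5) @ U.T @ np.linalg.inv(L)
Ab, Bb, Cb = T @ A @ np.linalg.inv(T), T @ B, C @ np.linalg.inv(T)

# truncate to k states and compare against the bound 2*(sum of discarded hsv)
bound = 2 * hsv[k:].sum()
def tf(Aa, Ba, Ca, s):
    return (Ca @ np.linalg.solve(s * np.eye(Aa.shape[0]) - Aa, Ba))[0, 0]
freqs = np.logspace(-2, 2, 200)
err = max(abs(tf(A, B, C, 1j * f) - tf(Ab[:k, :k], Bb[:k], Cb[:, :k], 1j * f))
          for f in freqs)
```

The sampled error can only underestimate the true $H^{\infty}$ norm, so the inequality against the bound must hold; the quantum case differs only in that the balancing transformation must additionally be symplectic, as in the quasi-balanced construction.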
\section{Conclusion}
\label{sec:conclusion} This paper has developed several new results on model reduction of linear quantum stochastic systems. It is shown that the physical realizability and complete passivity properties of linear quantum stochastic systems are preserved under subsystem truncation. The paper also studied the co-diagonalizability of the controllability and observability Gramians of a linear quantum stochastic system. It is found that a balanced realization of the system, where the Gramians are diagonal and equal, exists if and only if a strong condition is satisfied, typically not satisfied by generic linear quantum stochastic systems. Necessary and sufficient conditions for weaker realizations with simultaneously diagonal controllability and observability Gramians were also obtained. The notion of a quasi-balanced realization of a linear quantum stochastic system was introduced and it is shown that the special class of asymptotically stable completely passive linear quantum stochastic systems always possess a quasi-balanced realization. An explicit bound for the truncation error of model reduction on a quasi-balanceable linear quantum stochastic system was also derived, in analogy with the classical setting. An example of an optical cavity network for optical low-pass filtering was developed to illustrate the application of the results of this paper to model reduction of quasi-balanceable linear quantum stochastic systems.
\section*{Appendices}
\appendices
\section{Proof of Lemma \ref{lem:sym-cong-sim}}
\label{app:sym-cong-sim} Suppose that there is a symplectic matrix $T$ such that $TPT^{\top}=\Sigma$. Then we have (since $T^{\top}$ and $T^{-1}$ are also symplectic) that
\begin{eqnarray*}
T^{-\top}\mathbb{J}_n P T^{\top} = (T^{-\top} \mathbb{J}_n T^{-1}) TP T^{\top} = \mathbb{J}_n TPT^{\top} = \mathbb{J}_n \Sigma,
\end{eqnarray*}
therefore $\mathbb{J}_n P$ is symplectically similar to $\mathbb{J}_n \Sigma$.
Conversely, suppose that there is a symplectic matrix $T$ such that $T^{-\top} \mathbb{J}_nPT^{\top}=\mathbb{J}_n \Sigma$ for a real diagonal matrix $\Sigma$. Then we have
\begin{eqnarray*}
T P T^{\top} = -T\mathbb{J}_n \mathbb{J}_n P T^{\top} &=& -(T\mathbb{J}_n T^{\top}) T^{-\top} \mathbb{J}_nP T^{\top},\\
&=& -\mathbb{J}_n T^{-\top} \mathbb{J}_n P T^{\top},\\
&=& -\mathbb{J}_n\mathbb{J}_n \Sigma,\\
&=& \Sigma.
\end{eqnarray*}
Therefore, $P$ is symplectically congruent to $\Sigma$.
Suppose that $P \geq 0$ is symplectically congruent to $\Sigma$, and $\mathbb{J}_n P$ is diagonalizable. Then, by the above, $\mathbb{J}_n \Sigma$ is also diagonalizable. Furthermore, since $\Sigma \geq 0$, the matrix $\mathbb{J}_n \Sigma$ has eigenvalues of the form $\pm \imath \sigma_1$, $\pm \imath \sigma_2$, $\ldots$, $\pm \imath \sigma_n$ (for some $\sigma_i \geq 0$, $i=1,2,\ldots,n$). It follows that $\mathbb{J}_n P$ is also diagonalizable with the same set of eigenvalues. In the special case that $P>0$, we have $\Sigma>0$, $\mathbb{J}_n P$ is diagonalizable by Williamson's Theorem \cite{Will36}, \cite[Lemma 2]{PSL09}, and $\sigma_i >0$ for $i=1,2,\ldots,n$. $\hfill \Box$
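The core identity of this proof, namely that for any symplectic $T$ one has $T^{-\top}\mathbb{J}_n P T^{\top} = \mathbb{J}_n (TPT^{\top})$, can be spot-checked numerically. The Python sketch below (an informal illustration; numpy is assumed, and a random symplectic $T$ is generated as $e^{\mathbb{J}_n S}$ for symmetric $S$, a standard construction not used in the paper itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jn = np.kron(np.eye(n), J2)

def expm_taylor(M, terms=40):
    """Truncated Taylor series; adequate for the small, well-scaled matrices here."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

# a random symplectic T = exp(Jn S) with S symmetric (Hamiltonian generator)
S = rng.standard_normal((2 * n, 2 * n))
S = 0.2 * (S + S.T)
T = expm_taylor(Jn @ S)

# a random positive semidefinite P
M0 = rng.standard_normal((2 * n, 2 * n))
P = M0 @ M0.T

# symplectic similarity of Jn P corresponds to symplectic congruence of P
Tinv = np.linalg.inv(T)
lhs = Tinv.T @ (Jn @ P) @ T.T
rhs = Jn @ (T @ P @ T.T)
```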
\section{Proof of Theorem \ref{thm:sym-diag}}
\label{app:sym-diag} Define $\tilde P$ as in Lemma \ref{lem:sym-eig-struct}. From the proof of Lemma \ref{lem:sym-eig-struct} we know that $\mathbb{K}_n \tilde P$ has eigenvalues $\pm \imath \sigma_1$, $\pm \imath \sigma_2$,$\ldots$,$\pm \imath \sigma_n$. Now, let
$$
W=P_s^{\top} V P_s = P_s^{\top} [\begin{array}{ccccccc} v_1 & v_1^{\#} & v_2 & v_2^{\#} &\ldots & v_n & v_n^{\#} \end{array}].
$$
Since $P=P^{\top} \geq 0$ and $\mathbb{J}_n P$ is assumed to be diagonalizable, we have from Lemma \ref{lem:sym-eig-struct}
\begin{eqnarray*}
-\imath W^{-1} \mathbb{J}_n P W &=& -\imath P_s^{\top} V^{-1} P_s \mathbb{J}_n P P_s^{\top} V P_s,\\
&=& -\imath P_s^{\top} V ^{-1}(P_s \mathbb{J}_n P_s^{\top}) (P_s P P_s^{\top}) V P_s,\\
&=& P_s^{\top} (-\imath V^{-1} \mathbb{K}_n \tilde P V) P_s,\\
&=& P_s^{\top} {\rm diag}(\sigma_1,\sigma_2,\ldots,\sigma_n,-\sigma_1,-\sigma_2,\ldots,-\sigma_n) P_s,\\
&=& {\rm diag}(\sigma_1,-\sigma_1,\sigma_2,-\sigma_2,\ldots,\sigma_n, -\sigma_n).
\end{eqnarray*}
Equivalently,
$
W^{-1}\mathbb{J}_n P W = {\rm diag}(\imath\sigma_1,-\imath\sigma_1,\imath \sigma_2,-\imath\sigma_2,\ldots,\imath \sigma_n, -\imath \sigma_n).
$
Moreover, we also have
\begin{eqnarray*}
-\imath W^{\dag} \mathbb{J}_n W &=& -\imath P_s^{\top} V^{\dag} (P_s \mathbb{J}_n P_s^{\top}) V P_s,\\
&=& P_s^{\top} (-\imath V^{\dag} \mathbb{K}_n V)P_s,\\
&=& P_s^{\top} {\rm diag}(I_n,-I_n) P_s,\\
&=& {\rm diag}_n \left({\rm diag}(1,-1) \right).
\end{eqnarray*}
Note that the unitary matrix $U$ in the statement of the theorem satisfies
$$
U^{\dag} {\rm diag}_n\left({\rm diag}(1,-1) \right) U = -\imath \mathbb{J}_n,
$$
and also the matrix $ T_0=WU$ is real since
\begin{eqnarray*}
WU &=& \frac{1}{\sqrt{2}}P_s^{\top}[\begin{array}{ccc} v_1+ v_1^{\#} & -\imath v_1+ \imath v_1^{\#} & v_2+ v_2^{\#} \end{array} \\
&&\quad \begin{array}{cccc} -\imath v_2+ \imath v_2^{\#} &\ldots & v_n+ v_n^{\#} & -\imath v_n+ \imath v_n^{\#} \end{array}].
\end{eqnarray*}
Thus we have that
\begin{eqnarray*}
T_0^{\top} \mathbb{J}_n T_0 = T_0^{\dag} \mathbb{J}_n T_0 = U^{\dag} W^{\dag} \mathbb{J}_n W U = \imath U^{\dag} (-\imath W^{\dag} \mathbb{J}_n W)U=\imath U^{\dag}{\rm diag}_n \left({\rm diag}(1,-1)\right) U =\mathbb{J}_n,
\end{eqnarray*}
and
\begin{eqnarray*}
T_0^{-1} \mathbb{J}_n P T_0 &=& U^{\dag} W^{-1} \mathbb{J}_n P W U,\\
&=&\imath U^{\dag} (-\imath W^{-1} \mathbb{J}_n PW)U,\\
&=& \imath U^{\dag} {\rm diag}(\sigma_1,- \sigma_1, \sigma_2,- \sigma_2,\ldots,\sigma_n, - \sigma_n) U,\\
&=& {\rm diag}_n(\sigma_1 \mathbb{J},\sigma_2\mathbb{J},\ldots,\sigma_n \mathbb{J}),\\
&=& \mathbb{J}_n \Sigma,\\
&=& \Sigma \mathbb{J}_n,
\end{eqnarray*}
with $\Sigma ={\rm diag}(\sigma_1 I_2,\sigma_2 I_2,\ldots,\sigma_n I_2)$. Thus we have constructed a symplectic matrix $T_0$ such that $T_0^{-1} \mathbb{J}_n P T_0=\mathbb{J}_n \Sigma = \Sigma \mathbb{J}_n$ (the second identity follows from the specific form of $\Sigma$). Defining $T=T_0^{\top}$ we have that $ T^{-\top} \mathbb{J}_n P T^{\top} =\mathbb{J}_n \Sigma = \Sigma \mathbb{J}_n$ and from the proof of Lemma \ref{lem:sym-cong-sim} we also conclude that $ T P T^{\top} = \Sigma$, as claimed. $\hfill \Box$
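The construction of this proof can be mirrored numerically. The sketch below (an informal numpy illustration, following a standard route via the antisymmetric matrix $P^{-1/2}\mathbb{J}_n P^{-1/2}$ rather than the eigenvector bookkeeping of the proof verbatim; this is an assumption of the sketch) builds a symplectic $T$ with $TPT^{\top}$ diagonal with entries in identical pairs, the symplectic eigenvalues being $\sigma_j = 1/d_j$ where $\pm \imath d_j$ are the eigenvalues of $P^{-1/2}\mathbb{J}_n P^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jn = np.kron(np.eye(n), J2)

# random positive definite P (symplectic eigenvalues generically distinct)
M = rng.standard_normal((2 * n, 2 * n))
P = M @ M.T + 0.1 * np.eye(2 * n)

# S = P^{-1/2}; then K = S Jn S is real antisymmetric, eigenvalues +/- i d_j
w, U = np.linalg.eigh(P)
S = U @ np.diag(w ** -0.5) @ U.T
K = S @ Jn @ S

vals, vecs = np.linalg.eig(K)
cols, d = [], []
for j in np.argsort(-vals.imag)[:n]:        # one eigenvector per +i d_j
    v = vecs[:, j]
    x, y = v.real, v.imag                   # K x = -d y, K y = d x
    cols += [np.sqrt(2) * x, np.sqrt(2) * y]
    d.append(vals[j].imag)
O, d = np.column_stack(cols), np.array(d)

# T = D^{-1/2} O^T S is symplectic and T P T^T = diag((1/d_j) I_2)
T = np.diag(np.repeat(d ** -0.5, 2)) @ O.T @ S
Sigma = T @ P @ T.T
sympl_eigs = np.repeat(1.0 / d, 2)
```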
\section{Proof of Theorem \ref{thm:bt-q-lin}}
\label{app:bt-q-lin} We first prove the only if part of Point 1. Suppose that there is a symplectic matrix $T$ such that
$TPT^{\top}=\Sigma$, $T^{-\top}QT^{-1}=\Sigma$, and $\Sigma= {\rm diag}(\sigma_1 I_2, \sigma_2 I_2,\ldots,\sigma_n I_2)$ for some $\sigma_1,\sigma_2,\ldots,\sigma_n \geq 0$. Then we have from Lemma \ref{lem:sym-cong-sim} that $T^{-\top}\mathbb{J}_n P T^{\top} = \mathbb{J}_n \Sigma$ and $T\mathbb{J}_n Q T^{-1}=\mathbb{J}_n\Sigma$. Now, note from Theorem \ref{thm:sym-diag} that $\mathbb{J}_n \Sigma= \Sigma \mathbb{J}_n$ (due to the specific form of $\Sigma$), from which it follows that $T^{-\top}Q \mathbb{J}_n T^{\top} = \mathbb{J}_n \Sigma$. Thus, we have that $T^{-\top}\mathbb{J}_n P T^{\top}=\mathbb{J}_n \Sigma = T^{-\top}Q \mathbb{J}_n T^{\top}$. It follows that $\mathbb{J}_n P=Q \mathbb{J}_n$.
For the if part of Point 1, suppose that $\mathbb{J}_n P =Q \mathbb{J}_n$. Let $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n$ be the symplectic eigenvalues of $P$ and define $\Sigma={\rm diag}(\sigma_1 I_2,\sigma_2 I_2,\ldots,\sigma_n I_2)$. Then by Theorem \ref{thm:sym-diag} there exists a symplectic matrix $T$ such that $T^{-\top} \mathbb{J}_nP T^{\top} =\mathbb{J}_n \Sigma$ and $T P T^{\top}=\Sigma$. Since $\mathbb{J}_n P =Q \mathbb{J}_n$, we also have that $T^{-\top} Q\mathbb{J}_n T^{\top}=\mathbb{J}_n \Sigma$ or, equivalently, $T \mathbb{J}_n Q T^{-1}=\mathbb{J}_n \Sigma$ (again using $\mathbb{J}_n \Sigma = \Sigma \mathbb{J}_n$). From this last equality it follows from Lemma \ref{lem:sym-cong-sim} that also $T^{-\top}Q T^{-1}=\Sigma$.
Finally, we prove the last part of Point 1. It is apparent from the above that $P$ and $Q$ must have the same symplectic eigenvalues. Also note that $TPQT^{-1}= (TPT^{\top})(T^{-\top}QT^{-1})=\Sigma^2$. Since the eigenvalues of $PQ$ are squares of the Hankel singular values of $G$ and they are defined independently of the particular similarity transformation $T$ \cite{ZDG95}, $\sigma_1$, $\sigma_2$, $\ldots$, $\sigma_n$ are therefore also Hankel singular values of $G$.
The proof of the only if part of Point 2 is similar to the proof of the only if part of Point 1, so we will leave the details for the reader. For the if part of Point 2, note that since $\sigma_{X,1}$, $\sigma_{X,2}$, $\ldots$, $\sigma_{X,n}$ are the symplectic eigenvalues of $X$ for $X \in \{P,Q\}$, $[\mathbb{J}_nP,Q\mathbb{J}_n]=0$ is, by Lemma \ref{lem:sym-cong-sim}, equivalent to $\mathbb{J}_n P$ and $Q\mathbb{J}_n$ being simultaneously diagonalizable by some complex matrix $W$ as
\begin{eqnarray*}
W^{-1} \mathbb{J}_n P W &=& \imath {\rm diag}(\sigma_{P,1},-\sigma_{P,1},\ldots,\sigma_{P,n},-\sigma_{P,n}),\\
W^{-1} Q \mathbb{J}_n W &=& \imath {\rm diag}(\sigma_{Q,1},-\sigma_{Q,1},\ldots,\sigma_{Q,n},-\sigma_{Q,n}).
\end{eqnarray*}
In particular, the columns of $W$ are simultaneously eigenvectors of $\mathbb{J}_n P$ and $Q \mathbb{J}_n$. Following the corresponding steps in the proof of Theorem \ref{thm:sym-diag}, we can therefore establish that there is a symplectic matrix $T$ such that (again exploiting the specific form of $\Sigma_Q$ to commute it with $\mathbb{J}_n$)
\begin{eqnarray*}
T^{-\top} \mathbb{J}_n P T ^{\top} &=& \mathbb{J}_n \Sigma_P ,\\
T^{-\top} Q \mathbb{J}_n T^{\top} &=& \mathbb{J}_n \Sigma_Q = \Sigma_Q\mathbb{J}_n \Leftrightarrow T \mathbb{J}_n Q T^{-1}=\mathbb{J}_n \Sigma_Q.
\end{eqnarray*}
Therefore, from Lemma \ref{lem:sym-cong-sim} we conclude that $TPT^{\top}=\Sigma_P$ and $T^{-\top} Q T^{-1}=\Sigma_Q$.
Finally, we move on to proving Point 3. We first deal with the only if part. Suppose that there is a symplectic matrix $T$ such that
$$TPT^{\top}=\Sigma_P={\rm diag}(\omega_{P,1},\omega_{P,2},\ldots,\omega_{P,2n-1},\omega_{P,2n}),$$
for some nonnegative numbers $\omega_{P,1},\omega_{P,2},\ldots,\omega_{P,2n-1},\omega_{P,2n}$, and
$$T^{-\top} QT^{-1}=\Sigma_Q={\rm diag}(\omega_{Q,1},\omega_{Q,2},\ldots,\omega_{Q,2n-1},\omega_{Q,2n}),$$
for some nonnegative numbers $\omega_{Q,1},\omega_{Q,2},\ldots,\omega_{Q,2n-1},\omega_{Q,2n}$. Since $\mathbb{J}_n X$ is assumed to be diagonalizable for $X \in \{P,Q\}$, by Lemma \ref{lem:sym-cong-sim} so is the matrix $\mathbb{J}_n \Sigma_X$. Moreover, since $\Sigma_X$ is real positive semidefinite, it follows (recall the proof of Lemma \ref{lem:sym-cong-sim}) that $\omega_{X,2i}=0$ if and only if $\omega_{X,2i-1}=0$ for $X\in \{P,Q\}$ and $i=1,2,\ldots,n$; for if this were not true then $\mathbb{J}_n \Sigma_X$ would have zero as an eigenvalue with geometric multiplicity less than its algebraic multiplicity, contradicting the assumption that $\mathbb{J}_n X$ is diagonalizable. Now, for $X \in \{P,Q\}$, define
$$
d_{X,2j-1}=\left\{ \begin{array}{cc} (\omega_{X,2j}/\omega_{X,2j-1})^{1/4} & \hbox{if $\omega_{X,2j-1} \neq 0$ and $\omega_{X,2j}\neq 0$}\\
1 & \hbox{if $\omega_{X,2j-1} = 0$ and $\omega_{X,2j} = 0$}
\end{array} \right.,
$$
and
$d_{X,2j}=\frac{1}{d_{X,2j-1}}$ for $j=1,2,\ldots,n$. Also, define $$D_{X}={\rm diag}(d_{X,1},d_{X,2},\ldots,d_{X,2n-1},d_{X,2n}),\; X \in \{P,Q\}.$$ Then notice that, by construction, $D_P$ and $D_Q$ are diagonal symplectic matrices. Moreover, $$D_P T P T^{\top} D_P = {\rm diag}(e_{P,1}I_2,e_{P,2}I_2,\ldots,e_{P,n}I_2),$$
with $e_{P,i}=\sqrt{\omega_{P,2i-1}\omega_{P,2i}}$ for $i=1,2,\ldots,n$, and
$$
D_Q T^{-\top} Q T^{-1} D_Q= {\rm diag}(e_{Q,1}I_2,e_{Q,2}I_2,\ldots,e_{Q,n}I_2),
$$
with $e_{Q,i}=\sqrt{\omega_{Q,2i-1}\omega_{Q,2i}}$ for $i=1,2,\ldots,n$. Define $\tilde T_P= D_P T$ and $\tilde T_Q= D_Q^{-1} T$ and note that by definition $D_P^{-1} \tilde T_P = D_Q \tilde T_Q$. Again, it follows from Lemma \ref{lem:sym-cong-sim} that
\begin{eqnarray*}
\tilde T_P^{-\top} \mathbb{J}_n P \tilde T_P^{\top} &=&\mathbb{J}_n {\rm diag}(e_{P,1}I_2,e_{P,2}I_2,\ldots,e_{P,n}I_2),\\
\tilde T_Q \mathbb{J}_n Q \tilde T_Q^{-1}&=&\mathbb{J}_n {\rm diag}(e_{Q,1}I_2,e_{Q,2}I_2,\ldots,e_{Q,n}I_2).
\end{eqnarray*}
That is, $e_{X,1},e_{X,2},\ldots,e_{X,n}$ are the symplectic eigenvalues of $X$ for $X \in \{P,Q\}$. This completes the proof of the only if part.
Conversely, to prove the if part of Point 3, let $\sigma_{X,1},\sigma_{X,2},\ldots,\sigma_{X,n}$ be symplectic eigenvalues of $X \in \{P,Q\}$, and let $\tilde \Sigma_X={\rm diag}(\sigma_{X,1}I_2,\sigma_{X,2}I_2,\ldots,\sigma_{X,n}I_2)$. Suppose that there exist symplectic matrices $\tilde T_P$ and $\tilde T_Q$, and diagonal symplectic matrices $D_P$ and $D_Q$, such that $\tilde T_P P \tilde T_P^{\top}=\tilde \Sigma_P$ and $\tilde T_Q^{-\top} Q \tilde T_Q^{-1}=\tilde \Sigma_Q$, and $D_P^{-1} \tilde T_P = D_Q \tilde T_Q$. Let $\Sigma_P= D_P^{-1} \tilde \Sigma_P D_P^{-1}$ and $\Sigma_Q=D_Q^{-1} \tilde \Sigma_Q D_Q^{-1}$, and note that both are diagonal since $D_Q$ and $D_P$ are diagonal. Define $T= D_P^{-1} \tilde T_P$, so then also $T=D_Q \tilde T_Q$. It follows that $T P T^{\top} = \Sigma_P$ and $T^{-\top} Q T^{-1}=\Sigma_Q$. $\hfill \Box$
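The quartic-root rescaling used in the proof of Point 3 can be illustrated on a single mode: the diagonal matrix $D={\rm diag}(d,1/d)$ with $d=(\omega_2/\omega_1)^{1/4}$ is symplectic and equalizes an unequal diagonal pair at its geometric mean. A minimal Python sketch (numpy assumed; the numerical values are illustrative assumptions):

```python
import numpy as np

# assumed diagonal Gramian entries of one mode with unequal quadratures
w1, w2 = 3.0, 0.75
Sigma = np.diag([w1, w2])
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])

# the quartic-root rescaling from the proof of Point 3
d = (w2 / w1) ** 0.25
D = np.diag([d, 1.0 / d])

# D is symplectic and equalizes the pair at the geometric mean sqrt(w1 w2)
balanced = D @ Sigma @ D
```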
\section{Proof of Lemma \ref{lem:cp-trans}}
\label{app:cp-trans} In this part, we show that a completely passive system remains completely passive after a unitary symplectic transformation. Let $T \in \mathbb{R}^{2n \times 2n}$ be unitary symplectic and let $\tilde x=T x$, with $\tilde x=(\tilde q_1, \tilde p_1, \ldots, \tilde q_n, \tilde p_n)^{\top}$. Since $T$ is symplectic, the operators $\tilde q_1, \tilde p_1, \ldots, \tilde q_n, \tilde p_n$ satisfy the same canonical commutation relations as $q_1, p_1, \ldots, q_n, p_n$. Define the annihilation operators $\tilde a_i=\frac{1}{2}(\tilde q_i + \imath \tilde p_i)$, $i=1,2,\ldots,n$, and let $\tilde a =(\tilde a_1,\tilde a_2,\ldots,\tilde a_n)^{\top}$. Also define $D(a) = [\begin{array}{cc} a^{\top} & a^{\dag} \end{array}]^{\top}$ and $D(\tilde a) = [\begin{array}{cc} \tilde a^{\top} & \tilde a^{\dag} \end{array}]^{\top}$. We can write
$$
D(\tilde a) = [\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}\tilde x= [\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}Tx,
$$
with
$$
\Sigma=\frac{1}{2}\left[\begin{array}{ccccccc} 1 & \imath & 0 & 0 & \ldots & 0 & 0 \\
0 & 0 & 1 & \imath & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \ldots & 1 & \imath \end{array} \right].
$$
Since $\sqrt{2} [\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}$ is unitary (see, e.g., \cite{Nurd10b}), we have that
$$
[\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{-\top}=2 [\begin{array}{cc} \Sigma^{\dag} & \Sigma^{\top}\end{array}],
$$
and therefore, since $x = [\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{-\top} D(a) = 2 [\begin{array}{cc} \Sigma^{\dag} & \Sigma^{\top}\end{array}] D(a)$,
$$
D(\tilde a) = 2[\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}T[\begin{array}{cc} \Sigma^{\dag} & \Sigma^{\top}\end{array}]D(a).
$$
The matrix $W=2[\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}T[\begin{array}{cc} \Sigma^{\dag} & \Sigma^{\top}\end{array}]$ is necessarily Bogoliubov \cite{GJN10}, but it is also complex unitary since $T$ is real unitary and $\sqrt{2}[\begin{array}{cc} \Sigma^{\top} & \Sigma^{\dag}\end{array}]^{\top}$ and $\sqrt{2}[\begin{array}{cc} \Sigma^{\dag} & \Sigma^{\top}\end{array}]$ are both unitary. In particular, $W$ has the doubled up form \cite{GJN10}
$$
W=\left[\begin{array}{cc} W_1 & W_2 \\ W_2^{\#} & W_1^{\#} \end{array} \right],
$$
for some matrices $W_1,W_2 \in \mathbb{C}^{n \times n}$. Since $W$ satisfies $WW^{\dag}=I=W^{\dag}W$ (unitarity) and $W^{\dag}{\rm diag}(I,-I)W={\rm diag}(I,-I)=W{\rm diag}(I,-I)W^{\dag}$ (the Bogoliubov property \cite{GJN10}), it follows by straightforward algebra that
$W_1^{\dag}W_1 +W_2^{\top}W_2^{\#} = I$, $W_1^{\dag}W_2+W_2^{\top}W_1^{\#}=0$, $W_1^{\dag}W_1-W_2^{\top}W_2^{\#}=I$, and $W_1^{\dag}W_2-W_2^{\top}W_1^{\#} = 0$, implying that $W_2=0$ and $W_1$ is unitary. That is, $W= {\rm diag}(W_1,W_1^{\#})$. Therefore, it follows that $\tilde a = W_1 a \Leftrightarrow a = W_1^{\dag} \tilde a$. Since the system was originally completely passive with Hamiltonian $H= \frac{1}{2} a^{\dag}\tilde R a$ and coupling vector $L= \tilde K a$, the transformed system after the application of $T$ has Hamiltonian operator $\tilde H = \frac{1}{2}\tilde a^{\dag} (W_1 \tilde R W_1^{\dag}) \tilde a$ and $\tilde L = (\tilde K W_1^{\dag})\tilde a$. Since $D$ is unchanged when $T$ is applied, the form of $\tilde H$, $\tilde L$, and $D$ implies that the transformed system is again completely passive. $\hfill \Box$
\section{Proof of Theorem \ref{thm:cp-qbr}}
\label{app:cp-qbr} The proof will be split into three main parts: Parts A, B, and C.
{\bf Part A}. Note that due to the diagonal form of $R_{jk}$ (see Section \ref{sec:cp-systems-bt}) we can straightforwardly verify that $\mathbb{J}_n R + (\mathbb{J}_n R)^{\top}=0$. Moreover, from the physical realizability criterion we also have that
$$
4\mathbb{J}_n \Im\{K^{\dag}K\}\mathbb{J}_n + B\mathbb{J}_m B^{\top}=0.
$$
If $B$ satisfies $B=\mathbb{J}_n B \mathbb{J}_m^{\top}=-\mathbb{J}_n B \mathbb{J}_m$ then we get that
\begin{equation}
\left. \begin{array}{c} 4\mathbb{J}_n \Im\{K^{\dag}K\} + B B^{\top} =0 \\
4 \Im\{K^{\dag}K\} \mathbb{J}_n + B B^{\top} =0 \end{array} \right\} \Leftrightarrow 2 \mathbb{J}_n \Im\{K^{\dag}K\}+ 2 \Im\{K^{\dag}K\} \mathbb{J}_n + BB^{\top}=0. \label{eq:K-B-id}
\end{equation}
Using the fact that $A=2\mathbb{J}_n(R+\Im\{K^{\dag}K\})$ and $\mathbb{J}_n R + (\mathbb{J}_n R)^{\top}=0$, (\ref{eq:K-B-id}) implies that $A+A^{\top} + BB^{\top}=0$. That is, if $-B=\mathbb{J}_n B \mathbb{J}_m$ then the Lyapunov equation $AP +PA^{\top} + BB^{\top} =0$ has the unique solution $P=I$, uniqueness following from the assumption that $A$ is Hurwitz. Now, it is a straightforward exercise to verify from the form of $B$ given in Sec. \ref{sec:cp-systems-bt} for a completely passive linear quantum stochastic system that indeed $B=-\mathbb{J}_n B \mathbb{J}_m$. We conclude that for a completely passive system with $A$ Hurwitz the controllability Gramian is $P=I$.
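The identity $B=-\mathbb{J}_n B \mathbb{J}_m$ used above follows from the scaled-rotation block structure (\ref{eq:B_ij}) and is easy to confirm numerically; Part A then gives $P=I$. A Python sketch (numpy assumed; the decay rates and phases are random illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jn, Jm = np.kron(np.eye(n), J2), np.kron(np.eye(m), J2)

def rot_block(gamma, theta):
    """A scaled 2x2 rotation block, as in the B matrix of a completely
    passive linear quantum stochastic system."""
    c, s = np.cos(theta), np.sin(theta)
    return -np.sqrt(gamma) * np.array([[c, s], [-s, c]])

# assemble B from random scaled-rotation blocks (illustrative values)
B = np.block([[rot_block(rng.uniform(0.5, 2.0), rng.uniform(0.0, 2 * np.pi))
               for _ in range(m)] for _ in range(n)])
```

Since each block satisfies $B_{ij}=-\mathbb{J} B_{ij} \mathbb{J}$ and $\mathbb{J}_n$, $\mathbb{J}_m$ act blockwise, the identity holds for the full matrix.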
{\bf Part B.} For a completely passive system the matrix $K$ in the coupling vector $L=Kx$ has the special form
$$
K=\left[\begin{array}{ccccccccc} M_1 & \imath M_1 & M_2 & \imath M_2 & \ldots & M_{n-1} & \imath M_{n-1} & M_n & \imath M_n \end{array} \right],
$$
for some column vectors $M_i \in \mathbb{C}^{m}$, $i=1,2,\ldots,n$. From this structure, direct inspection shows that $\Im\{K^{\dag}K\}$ has the block form $\Im\{K^{\dag}K\}=[Z_{ij}]_{i,j=1,2,\ldots,n}$, with $Z_{ij}$ real $2 \times 2$ matrices of the special form
$$
Z_{ij} = \left[\begin{array}{cc} z_{1,ij} & z_{2,ij} \\ -z_{2,ij} & z_{1,ij}\end{array}\right].
$$
From this block structure and the block structure of $\mathbb{J}_n$ we have that $\Im\{K^{\dag}K\} \mathbb{J}_n - \mathbb{J}_n \Im\{K^{\dag}K\}=0$. Using this identity and the property of $\mathbb{J}_nR$ exploited in Part A, it follows that $\mathbb{J}_n A \mathbb{J}_n + A=0 \Leftrightarrow \mathbb{J}_n A \mathbb{J}_n = - A$.
Let us proceed to consider the case where $D=[\begin{array}{cc} I_{n_y} & 0_{n_y \times (2m-n_y)}\end{array}]$ (if $n_y=2m$ then $D=I_{2m}$). Consider the Lyapunov equation $A^{\top}Q+QA +C^{\top}C=0$. Since $C^{\top}=-\mathbb{J}_n B \mathbb{J}_m D^{\top}$ (by physical realizability of the system) we have that $C^{\top}C=\mathbb{J}_n B \mathbb{J}_m D^{\top} D \mathbb{J}_m B^{\top} \mathbb{J}_n$.
Also, due to the special form assumed for $D$ we have that $D^{\top}\mathbb{J}_{n_y/2} = \mathbb{J}_m D^{\top}$ and it follows that
\begin{eqnarray*}
\mathbb{J}_n C^{\top} C \mathbb{J}_n &=&\mathbb{J}_n B D^{\top} D B^{\top} \mathbb{J}_n,\\
&=& \mathbb{J}_n B D^{\top} \mathbb{J}_{n_y/2} \mathbb{J}_{n_y/2}^{\top} D B^{\top} \mathbb{J}_n,\\
&=& -\mathbb{J}_n B \mathbb{J}_m D^{\top} D \mathbb{J}_m B^{\top} \mathbb{J}_n,\\
&=& -C^{\top}C.
\end{eqnarray*}
Now consider the Lyapunov equation $A^{\top} Q+ Q A + C^{\top} C=0$. Multiplying this equation on the left and the right by $\mathbb{J}_n$, it can be rewritten as the Lyapunov equation $(\mathbb{J}_nA\mathbb{J}_n)^{\top} \bar{Q} + \bar{Q} (\mathbb{J}_nA\mathbb{J}_n) + \mathbb{J}_n C^{\top}C \mathbb{J}_n=0$, with $\bar{Q}=-\mathbb{J}_n Q \mathbb{J}_n$. Using the facts established earlier that $\mathbb{J}_n A \mathbb{J}_n=-A$ and $\mathbb{J}_n C^{\top} C \mathbb{J}_n=-C^{\top}C$, we see that the Lyapunov equation may be rewritten as $A^{\top} \bar{Q} + \bar{Q} A + C^{\top} C=0$. That is, $Q$ and $\bar{Q}$ are solutions of the same Lyapunov equation. Since $A$ is Hurwitz, the solution to this equation is unique and therefore $Q= \bar{Q} \Leftrightarrow Q=-\mathbb{J}_n Q \mathbb{J}_n$. Since we have established that $P=I$, we thus conclude that $[\mathbb{J}_nP,Q\mathbb{J}_n]=\mathbb{J}_n Q\mathbb{J}_n +Q=0$ when $D=[\begin{array}{cc} I_{n_y} & 0 \end{array}]$. Moreover, note in passing that since $\mathbb{J}_n P$ is diagonalizable and $Q\mathbb{J}_n$ commutes with $\mathbb{J}_n P$, we have that $Q\mathbb{J}_n$ is also diagonalizable and therefore so is $\mathbb{J}_n Q$.
{\bf Part C.} Now, consider the general case where there exists a matrix $E$ such that the square matrix $\tilde D=[\begin{array}{cc} D^{\top} & E^{\top} \end{array}]^{\top}$ is unitary and symplectic. We note that the unitarity of $\tilde D$ implies that $DE^{\top}=0$ and $\tilde D^{-1}=\tilde D^{\top}$. Also, the symplectic property of $\tilde D$ implies that $\tilde D^{-1}$ and $\tilde D^{\top}$ are symplectic. Define $\tilde B = B \tilde D^{-1}=B\tilde D^{\top}$. Then we have that $\tilde B \tilde B^{\top} = BB^{\top}$ and $\tilde B \mathbb{J}_m \tilde B^{\top}= B \mathbb{J}_m B^{\top}$. It follows from this that $P=I$ is also the unique solution to the Lyapunov equation $A P+ P A^{\top} + \tilde B \tilde B^{\top}=0$ and, since the system is physically realizable, $A \mathbb{J}_n + \mathbb{J}_n A^{\top} + \tilde B \mathbb{J}_m \tilde B^{\top}=0$. Let $D_0=[\begin{array}{cc} I_{n_y} & 0_{n_y \times (2m-n_y)} \end{array}]$. We now show that $\mathbb{J}_n C^{\top} = \tilde B \mathbb{J}_m D_0^{\top}$. Indeed, we have
\begin{eqnarray*}
\mathbb{J}_n C^{\top} = B\mathbb{J}_m D^{\top}=\tilde B \tilde D \mathbb{J}_m D^{\top} = \tilde B (\tilde D \mathbb{J}_m \tilde D^{\top}) \tilde D D^{\top} = \tilde B \mathbb{J}_m D_0^{\top},
\end{eqnarray*}
where the last equality follows from the fact that $\tilde D D^{\top}=D_0^{\top}$ (by the unitarity of $\tilde D$). We thus conclude that the system $\tilde G$ with system matrices $(A,\tilde B,C,D_0)$ is a physically realizable linear quantum stochastic system whose controllability Gramian $P=I_{2n}$ and observability Gramian $Q$ coincide with those of the original system $G$ with system matrices $(A,B,C,D)$. Due to the special form of $D_0$, we conclude from Part B that $[\mathbb{J}_n P,Q\mathbb{J}_n]=0$.
Finally, that the quasi-balancing transformation $T$ can be obtained by applying Theorem \ref{thm:sym-diag} to $Q$ such that $T^{-\top}Q T^{-1}=\Sigma_Q$ follows from the fact that $[\mathbb{J}_nP,Q\mathbb{J}_n]=0$ along the lines of the proof of Point 2 of Theorem \ref{thm:bt-q-lin}. Moreover, that $T$ is also unitary follows from the observation that $TT^{\top}=I$, since $P=I$ and $\Sigma = I$ (i.e., all the symplectic eigenvalues of $P$ are ones). Also, by Lemma \ref{lem:cp-trans}, the quasi-balanced realization obtained after applying $T$ is again completely passive. Therefore, from Lemma \ref{lem:cp-preservation} it now follows that the reduced system obtained after applying subsystem truncation is completely passive. $\hfill \Box$
\bibliographystyle{ieeetran}
\section{Introduction}
Ultra-hot Jupiters (UHJs) are the hottest giant exoplanets and they are extensively irradiated by their host stars. These planets are ideal laboratories to study the chemistry and physics of planetary atmospheres under extreme conditions.
Theoretical modelling of UHJ atmospheres \citep[e.g.,][]{Lothringer2018,Parmentier2018,Kitzmann2018,Helling2019} suggests that the day-sides as well as the terminators of UHJs are extremely hot and probably dominated by atoms and ions instead of molecules due to thermal dissociation and ionisation.
The thermal emission spectra of several UHJs, including HAT-P-7b \citep{Mansfield2018}, WASP-12b \citep{Stevenson2014}, WASP-18b \citep{Arcangeli2018}, and WASP-103b \citep{Kreidberg2018}, have been observed with the \textit{Hubble Space Telescope} (\textit{HST}). These thermal spectra exhibit a lack of $\mathrm{H_2O}$ features, which is probably due to thermal dissociation \citep{Parmentier2018}.
On the other hand, emission spectroscopy with high-resolution spectrographs has revealed the existence of neutral Fe in KELT-9 \citep{Pino2020}, WASP-189b \citep{Yan2020}, and WASP-33b \citep{Nugroho2020W33}.
In addition to the thermal emission spectra, phase curve observations have been performed for several UHJs, including WASP-33b \citep{Zhang2018, Essen2020}, WASP-121b \citep{Daylan2019, Bourrier2020-TESS}, and KELT-9b \citep{Wong2020,Mansfield2020}. These observations suggest that these UHJs have relatively low day-night temperature contrasts and relatively high heat transport efficiencies.
The increased heat transport efficiency in UHJs could be explained by a new physical mechanism -- thermal dissociation and recombination of $\mathrm{H_2}$ \citep{Bell2018, Komacek2018}.
Transmission spectroscopy has also been widely used in probing the atmospheres of UHJs, and various atomic and ionic species have been detected. In the atmosphere of KELT-9b -- the hottest exoplanet discovered so far -- the hydrogen Balmer lines as well as multiple metal lines (including \ion{Fe}{i}, \ion{Fe}{ii}, \ion{Ti}{ii}, \ion{Mg}{i}, and \ion{Ca}{ii}) have been detected \citep{Yan2018, Hoeijmakers2018, Cauley2019, Hoeijmakers2019, Yan2019,Turner2020}.
Various metals as well as the Balmer lines have been detected in KELT-20b/MASCARA-2b \citep{Casasayas-Barris2018,Casasayas-Barris2019, Stangret2020, Nugroho2020, Hoeijmakers2020}.
\ion{Ca}{ii} is detected in WASP-33b \citep{Yan2019}.
The Balmer lines and metals including \ion{Na}{i}, \ion{Mg}{ii}, \ion{Fe}{i}, \ion{Fe}{ii}, \ion{Cr}{i}, and \ion{V}{i} have been detected in WASP-121b \citep{Sing2019, Bourrier2020, Gibson2020,Cabot2020,Ben-Yami2020}.
$\mathrm{H\alpha}$ and \ion{Mg}{ii} have been discovered in WASP-12b \citep{Fossati2010, Jensen2018}.
Neutral Fe has also been detected at the terminator of WASP-76b \citep{Ehrenreich2020}.
Planets experiencing strong stellar irradiation are thought to undergo hydrodynamic atmospheric escape \citep[e.g.,][and references therein]{Owen2019}. The hydrodynamic escape in hydrogen-dominated atmospheres is normally driven by the absorption of stellar extreme-ultraviolet (EUV) flux \citep{Yelle2004,Tian2005,Salz2016}.
However, \cite{Fossati2018} found that heating due to atomic absorption of the stellar UV and optical flux drives the atmospheric escape of UHJs orbiting early-type stars. \cite{Garcia-Munoz2019} further proposed that the absorption of the hydrogen Balmer line series can enhance and even drive the atmospheric escape of UHJs orbiting hot stars.
Observations of atmospheric escape have been performed with the hydrogen $\mathrm{Ly{\alpha}}$ line \citep{Vidal-Madjar2003, Etangs2012, Ehrenreich2015} as well as metal lines in the ultraviolet \citep{Fossati2010, Sing2019, Cubillos2020}, using the STIS spectrograph on \textit{HST}.
The helium 10833 $\mathrm{\AA}$ line has recently been used in probing escaping atmosphere of planets orbiting active stars \citep[e.g.,][]{Spake2018, Nortmann2018, Allart2018, Salz2018, Lampon2020, Palle2020}.
The hydrogen Balmer lines can also be used to probe high-altitude atmospheres and study atmospheric escape. For example, \cite{Yan2018} estimated the Jeans escape rate of KELT-9b with the $\mathrm{H{\alpha}}$ absorption line. Recently, \citet{Wyttenbach2020} modelled the Balmer lines with a hydrodynamic model and retrieved the mass-loss rate of KELT-9b.
The Balmer lines have been detected in four UHJs so far: KELT-9b \citep{Yan2018,Cauley2019,Turner2020}, KELT-20b \citep{Casasayas-Barris2018, Casasayas-Barris2019}, WASP-12b \citep{Jensen2018}, and WASP-121b \citep{Cabot2020}.
Besides, the $\mathrm{H{\alpha}}$ line has been detected in two non-UHJ planets -- HD 189733b \citep{Jensen2012, Barnes2016, Cauley2016, Cauley2017, Cauley2017-HD189} and WASP-52b \citep{Chen2020}. The $\mathrm{H{\alpha}}$ absorption in these two planets probably originates from the excitation of hydrogen atoms due to stellar $\mathrm{Ly{\alpha}}$ line and Lyman continuum irradiation \citep{Christie2013,Huang2017}.
Here, we present the discovery of the Balmer line absorption during the transit of WASP-33b -- a UHJ (equilibrium temperature $\sim$ 2710 K) orbiting an A5 star \citep{Cameron2010}. Several species have previously been detected in its planetary atmosphere, including TiO \citep{Haynes2015,Nugroho2017}, \ion{Ca}{ii} \citep{Yan2019}, \ion{Fe}{i} \citep{Nugroho2020W33}, and evidence of AlO \citep{Essen2019} and FeH \citep{Kesseli2020}.
The paper is organised as follows. We describe the transit observations and data analysis in Sect. 2. The observational results are presented in Sect. 3. In Sect. 4, we present the hydrodynamic model of the Balmer lines and discuss the atmospheric escape of WASP-33b. The conclusions are summarised in Sect. 5.
\section{Data and analysis}
\subsection{Observations}
We observed four transits of \object{WASP-33b} with two spectrographs. The observation logs are summarised in Table \ref{obs_log}.
Two transits were observed with the CARMENES spectrograph \citep{Quirrenbach2018}, installed at the 3.5 m telescope of the Calar Alto Observatory, on 5 January 2017 and 16 January 2017. The visual channel of CARMENES has a resolution of \textit{R} $\sim$ 94\,600 and a wavelength coverage of 520--960\,nm.
The first night was photometric (i.e., ideal weather conditions during the observation) and the second night was partially cloudy.
Another two transits were observed with the HARPS-North (HARPS-N) spectrograph mounted on the Telescopio Nazionale Galileo telescope on 17 October 2018 and 8 November 2018. The instrument has a resolution of \textit{R} $\sim$ 115\,000 and a wavelength coverage of 383--690\,nm. We used the order-merged one-dimensional spectra from the HARPS-N pipeline (Data Reduction Software). The spectra have an over-sampled wavelength step of 0.01 $\mathrm{\AA}$. We re-binned the spectrum every 3 wavelength points by averaging so that each wavelength point corresponds to 0.03 $\mathrm{\AA}$, which is similar to the CARMENES pixel size at the $\mathrm{H\alpha}$ line centre (0.030 $\mathrm{\AA}$).
Both nights were photometric. However, the spectral flux during the first-night observation showed a large drop when the telescope was pointing close to the zenith. Such a phenomenon also occurred during transit observations in \cite{Casasayas-Barris2019}, which was probably caused by a problem with the atmospheric dispersion corrector (ADC).
The signal-to-noise ratio (S/N) per wavelength point ($\sim$ 0.03 $\mathrm{\AA}$) at the $\mathrm{H\alpha}$ line centre is plotted in Fig.~\ref{SNR}.
Among the four transits, night-1 from CARMENES and night-2 from HARPS-N observations have much higher S/N and, therefore, were used in \cite{Yan2019} for the detection of ionised calcium. In this work, we use and combine all four transits.
\begin{table*}
\caption{Observation logs.}
\label{obs_log}
\centering
\begin{threeparttable}
\begin{tabular}{l c c c c c c c}
\hline\hline \noalign{\smallskip}
Instrument & & Date & Observing Time (UT) & Airmass change & Exposure time [s] & $N_\mathrm{spectra}$ \\
\hline \noalign{\smallskip}
CARMENES & Night-1 & 2017-01-05 & 19:28--23:49 & 1.00--1.54 & 120 \tablefootmark{a} & 93 \\
CARMENES & Night-2 & 2017-01-16 & 19:25--00:07 & 1.01--2.03 & 120 & 66 \\
\hline \noalign{\smallskip}
HARPS-N & Night-1 & 2018-10-17 & 21:39--05:46 & 1.64--1.01--1.54 & 200 & 124 \\
HARPS-N & Night-2 & 2018-11-08 & 19:59--05:01 & 1.74--1.01--1.87 & 200 & 141 \\
\hline \noalign{\smallskip}
\end{tabular}
\tablefoot{
\tablefoottext{a}{The first 19 spectra had exposure time below 120 s. }
}
\end{threeparttable}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{SNR.png}
\caption{Signal-to-noise ratio per wavelength point ($\sim 0.03 \mathrm{\AA}$) at the $\mathrm{H\alpha}$ line centre. The dashed lines indicate the beginning and end of transit.
}
\label{SNR}
\end{figure}
~\\
\subsection{Obtaining the transmission spectral matrix}
We investigated the $\mathrm{H\alpha}$ line (6562.79$\,\mathrm{\AA}$) using both the CARMENES and HARPS-N observations and the $\mathrm{H\beta}$ (4861.35$\,\mathrm{\AA}$) and $\mathrm{H\gamma}$ (4340.47$\,\mathrm{\AA}$) lines from the HARPS-N observations. The data reduction method is similar to the method in \cite{Yan2018}.
The spectra were first normalised and shifted into the Earth's rest frame. We then removed the telluric absorption lines using a theoretical transmission spectral model of the Earth's atmosphere \citep{Yan2015b}.
The spectra were subsequently aligned into the stellar rest frame by correcting the barycentric radial velocity and the stellar systemic velocity \citep[-3.0$\,\mathrm{km\,s^{-1}}$, ][]{Nugroho2017}. We obtained an out-of-transit master spectrum by adding up all the out-of-transit spectra with the squared S/N as weight. We then divided each spectrum by the master spectrum in order to remove the stellar lines.
The residual spectrum was subsequently filtered with a Gaussian high-pass filter ($\sigma \sim$ 300 points) to remove large-scale features on the continuum spectrum, which may be attributed to the stability of the HARPS-N ADC, stellar pulsation, or the imperfect normalisation of the blaze variation.
We combined the two CARMENES observations as well as the two HARPS-N observations by binning the spectra with an orbital phase step of 0.005. The binning was performed by averaging the spectra within each phase bin with the squared S/N as weight.
By applying these procedures, we obtained a transmission spectral matrix for each of the $\mathrm{H\alpha}$, $\mathrm{H\beta}$, and $\mathrm{H\gamma}$ lines from the HARPS-N observations and an $\mathrm{H\alpha}$ spectral matrix from the CARMENES observations (upper panels in Figs.~\ref{Ha-CAR+HAR-map} and \ref{Hb+Hc}).
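The master-spectrum division and high-pass filtering described above can be sketched as follows; synthetic arrays stand in for the actual order-merged spectra, so only the structure of the steps is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
n_spec, n_wave = 60, 2000
spectra = 1.0 + 0.005 * rng.standard_normal((n_spec, n_wave))  # normalised spectra
snr = rng.uniform(50.0, 150.0, n_spec)                         # per-spectrum S/N
out_of_transit = np.r_[np.arange(0, 20), np.arange(40, 60)]    # hypothetical phases

# master out-of-transit spectrum, weighted by the squared S/N
w = snr[out_of_transit] ** 2
master = (w[:, None] * spectra[out_of_transit]).sum(axis=0) / w.sum()

residuals = spectra / master               # divide out the stellar spectrum

# Gaussian high-pass filter: divide out smooth continuum trends (sigma ~ 300 points)
smooth = gaussian_filter1d(residuals, sigma=300, axis=1, mode="nearest")
residuals_hp = residuals / smooth
```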
\begin{figure*}
\centering
\includegraphics[width=0.90\textwidth]{Ha-CAR+HAR.pdf}
\caption{Transmission spectral matrices for the $\mathrm{H\alpha}$ line from the CARMENES observations (left) and the HARPS-N observations (right). The color bar indicates the value of relative flux.
\textit{(a)} The observed transmission spectra. The x axis is wavelength expressed in RV relatively to the $\mathrm{H\alpha}$ line centre (6562.79 $\mathrm{\AA}$) in the stellar rest frame. The horizontal dashed lines indicate the four contacts of transit.
\textit{(b)} The best-fit model from the MCMC analysis. The model includes the $\mathrm{H\alpha}$ transmission spectrum and the stellar line profile change (i.e. the CLV and RM effects).
The blue dashed line indicates the RV of the planetary orbital motion plus a constant shift ($V_\mathrm{centre}$). Although the models extend into the ingress and egress regions on the matrices, the fit was only performed on the fully in-transit data.
\textit{(c)} The observed transmission spectra with the RM and CLV effects corrected.
\textit{(d)} The residual between the observation and the model.}
\label{Ha-CAR+HAR-map}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.90\textwidth]{Hb+Hc.pdf}
\caption{Same as Fig.~\ref{Ha-CAR+HAR-map} but for the $\mathrm{H\beta}$ line (left) and the $\mathrm{H\gamma}$ line (right) from the HARPS-N observations.
}
\label{Hb+Hc}
\end{figure*}
\subsection{Model of stellar RM and CLV effects}
The stellar line profile varies during the transit due to the Rossiter-McLaughlin (RM) effect \citep{Queloz2000} and the centre-to-limb variation (CLV) effect \citep{Yan2015a, Czesla2015, Yan2017}. We modelled the RM and CLV effects simultaneously following the method described in \cite{Yan2018}. We used the same stellar and planetary parameters as in \cite{Yan2019}.
The planetary orbit of WASP-33b undergoes a nodal precession \citep{Johnson2015, Iorio2016, Watanabe2020}. We adopted the orbital change rates from \cite{Johnson2015} and calculated the expected orbital inclination ($i$) and spin-orbit angle ($\lambda$) at the dates of our observations. Because the observation dates are very close for the two CARMENES transits as well as for the two HARPS-N transits, the changes of the orbital parameters between them are negligible. Therefore, we set $i = 89.50$ deg and $\lambda = -114.05$ deg for the combined CARMENES transmission matrix; $i = 90.14$ deg and $\lambda = -114.93$ deg for the combined HARPS-N transmission matrix.
\subsection{Fitting the observed spectral matrix}
We fitted the observed transmission spectral matrix with a model consisting of two components: the planetary absorption and the stellar line profile change.
We assumed that the planetary absorption has a Gaussian profile described by the full width at half maximum (FWHM), the absorption depth ($h$), and the radial velocity (RV) shift of the observed line centre compared to the theoretical value ($V_\mathrm{centre}$).
The semi-amplitude of the planetary orbital motion ($K_\mathrm{p}$) is fixed to the expected $K_\mathrm{p}$ value ($231\pm3$ $\mathrm{km\,s^{-1}}$), which is calculated with the planetary orbital parameters.
The stellar line profile change caused by the RM and CLV effects is fixed to the results as calculated in Sect. 2.3.
We sampled from the posterior probability distribution using the Markov Chain Monte Carlo (MCMC) simulations with the \texttt{emcee} tool \citep{Mackey2013}.
We only used the fully in-transit data (i.e. excluding the ingress and egress phases).
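As a rough consistency check on the adopted $K_\mathrm{p}$, the circular-orbit relation $K_\mathrm{p}=2\pi a \sin i/P$ can be evaluated with nominal literature values for WASP-33b; the numbers below are illustrative, not the exact inputs of the fit.

```python
import numpy as np

AU_KM = 1.495978707e8                  # astronomical unit [km]
DAY_S = 86400.0

a = 0.0259 * AU_KM                     # semi-major axis [km] (nominal value)
P_orb = 1.2198675 * DAY_S              # orbital period [s]
incl = np.radians(89.5)                # orbital inclination

K_p = 2.0 * np.pi * a * np.sin(incl) / P_orb   # [km/s], circular orbit assumed
```

This evaluates to roughly 231 $\mathrm{km\,s^{-1}}$, consistent with the value quoted above.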
\section{Results and discussion}
\subsection{$\mathrm{H\alpha}$ transmission spectrum}
The transmission spectral matrices are shown in Fig.~\ref{Ha-CAR+HAR-map}a. The $\mathrm{H\alpha}$ absorption is clearly detected in both CARMENES and HARPS-N data. The best-fit models of the planetary absorption feature as well as the stellar CLV and RM effects are shown in Fig.~\ref{Ha-CAR+HAR-map}b, and the best-fit parameters are summarised in Table \ref{Tab-fit-reuslt-tran}.
In order to obtain the one-dimensional transmission spectra, we first corrected the CLV and RM effects. Then, the residual spectra were shifted into the planetary rest frame. We subsequently averaged all the fully in-transit spectra to derive the final one-dimensional transmission spectra, which are presented in Fig.~\ref{Spec-Ha}. The $\mathrm{H\alpha}$ transmission spectrum of each night is shown in Fig.~\ref{App-Ha-individual}.
The obtained FWHM is 31.6$_{-3.6}^{+4.1}$ $\mathrm{km\,s^{-1}}$ for HARPS-N and 35.6$_{-2.0}^{+2.2}$ $\mathrm{km\,s^{-1}}$ for CARMENES. These values are smaller than those of KELT-9b \citep{Yan2018, Cauley2019, Turner2020} while slightly higher than those of KELT-20b \citep{Casasayas-Barris2018, Casasayas-Barris2019}. The large FWHM indicates that the $\mathrm{H\alpha}$ absorption is optically thick \citep{Huang2017}.
The measured RV shift of the line centre is 2.0$\pm$1.9 $\mathrm{km\,s^{-1}}$ for HARPS-N and 0.8$\pm$1.1 $\mathrm{km\,s^{-1}}$ for CARMENES. The RV shift has been used to measure high-altitude winds at the planetary terminator \citep{Snellen2010, Wyttenbach2015,Louden2015,Brogi2016}.
Nevertheless, the measured $V_\mathrm{centre}$ is relative to the stellar systemic RV and there is a large uncertainty of the stellar RV of WASP-33. This is because precisely measuring the absolute RV of fast-rotating A-type stars is intrinsically challenging. For example, the reported systemic RVs of WASP-33 deviate by several $\mathrm{km\,s^{-1}}$ \citep{Cameron2010, Lehmann2015, Nugroho2017, Cauley2020-W33}. Therefore, we conclude that we do not detect any significant winds at the terminator of WASP-33b considering the uncertainties in the measured $V_\mathrm{centre}$ values and the stellar systemic RV.
We further combined the CARMENES and HARPS-N transmission spectral matrices using the binning method as described in Section 2.2. The stellar CLV and RM effects were already corrected before the averaging. The combined matrix is presented in Fig.~\ref{Ha-combine} and the best-fit parameters are listed in Table \ref{Tab-fit-reuslt-tran}.
We calculated the equivalent width of the absorption line ($W_\mathrm{H\alpha}$) using the same method as in \cite{Yan2018}, except that the integration range was set as $\pm 35\,\mathrm{km\,s^{-1}}$ to match the observed FWHM. Fig.~\ref{LC-Ha} shows the time series of $W_\mathrm{H\alpha}$. There is no obvious pre- or post-transit absorption, although the absorption depth is slightly stronger during the first half of the transit.
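The equivalent-width measurement can be illustrated in velocity space by integrating a Gaussian absorption profile over the $\pm 35\,\mathrm{km\,s^{-1}}$ window; the profile below uses the combined best-fit depth and FWHM as a stand-in for a measured spectrum.

```python
import numpy as np

v = np.linspace(-200.0, 200.0, 4001)   # velocity grid [km/s], dv = 0.1
depth, fwhm = 0.0099, 34.3             # combined H-alpha best-fit values
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
absorption = depth * np.exp(-0.5 * (v / sigma) ** 2)

window = np.abs(v) <= 35.0             # integration range of +-35 km/s
W_v = absorption[window].sum() * (v[1] - v[0])   # equivalent width [km/s units]
```

The $\pm 35\,\mathrm{km\,s^{-1}}$ window spans about $\pm 2.4\sigma$, so it captures nearly all of the Gaussian line area.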
In general, the fitted parameters between CARMENES and HARPS-N are consistent. However, the CARMENES $\mathrm{H\alpha}$ absorption is somewhat stronger than the HARPS-N absorption. Such a slight difference between the two instruments is also observed for the $\mathrm{H\alpha}$ line in KELT-20b \citep{Casasayas-Barris2019} and KELT-9b \citep{Yan2018,Wyttenbach2020,Turner2020}.
Although the difference could be from random variations, there may also be systematic residuals, which could be due to instrumental effects (e.g., non-linearity, the stability of the HARPS-N ADC) or data reduction procedures (e.g., imperfect normalisation, removal of stellar and telluric lines).
For the case of WASP-33b, the slight difference between the CARMENES and HARPS-N results could be caused by the stellar pulsations, which can affect the stellar $\mathrm{H\alpha}$ line profile.
The host star is a known $\mathrm{\delta}$ Scuti star with pronounced pulsations \citep{Cameron2010, Essen2014, Kovacs2013}.
For the CARMENES spectrum in Fig.~\ref{Spec-Ha}, there is a bump feature on the left of the planetary absorption line. A weaker bump feature is also present in the HARPS-N spectrum.
Such a bump feature is also observed in the transmission spectrum of the \ion{Ca}{ii} infrared triplet lines \citep{Yan2019}, which are obtained using the same transit data as in this work.
On the combined transmission spectral matrix in Fig.~\ref{Ha-combine}, there are bright stripes on the left and right sides of the planetary absorption signal and these stripes extend beyond the transit. These stripes are probably the stellar pulsation signatures, which generate the bump features as observed on the transmission spectra in Fig.~\ref{Spec-Ha}.
For the individual CARMENES and HARPS-N observations, the position and strength of the pulsation features were not the same during the transits. Therefore, the stellar pulsation could introduce a difference between the CARMENES and HARPS-N results.
In a preprint posted while the present paper was under review, \cite{Cauley2020-W33} reported the detection of the Balmer lines in WASP-33b using the PEPSI spectrograph mounted on the Large Binocular Telescope.
The absorption line strengths they obtained differ quantitatively from, but are generally consistent with, our CARMENES and HARPS-N results, considering possible effects of the stellar pulsation.
Although the detection of the $\mathrm{H\alpha}$ line is unambiguous, its strength is potentially affected by the stellar pulsation. Correcting for the pulsation would require a detailed analysis of the variation in the stellar Balmer lines; such an analysis requires data taken with high S/N and is beyond the scope of this paper. However, considering that the pulsation periods are not synchronous with the planetary orbital motion \citep{Essen2014}, the pulsation contribution to the transmission spectrum should be statistically reduced when combining the four transit spectra together. To evaluate the effect of pulsation, we combined the out-of-transit spectra of the spectral matrix in Fig.~\ref{Ha-combine} (i.e., phases --0.10 to --0.05 and +0.05 to +0.10), assuming that the in-transit planetary orbital velocity extends to the out-of-transit phases. The obtained out-of-transit spectrum (Fig.~\ref{spec-OOT}) shows ripple-like features with a semi-amplitude of $\sim$0.2\%, which are most likely the result of the stellar pulsation. The effect of the pulsation on the $\mathrm{H\alpha}$ transmission spectrum should be of a similar order to these ripple features.
We note that \cite{Valyavin2018} analysed the transit light curve of WASP-33b observed with the $\mathrm{H\alpha}$ filter. These latter authors found that the $\mathrm{H\alpha}$ transit depth is significantly deeper than the transit depths measured in broad bands.
Since we detect the strong $\mathrm{H\alpha}$ absorption line with high-resolution spectroscopy, we confirm that the photometric result of \cite{Valyavin2018} is evidence of the $\mathrm{H\alpha}$ absorption in the planetary atmosphere.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Spec-Ha-com.png}
\caption{Transmission spectra of the $\mathrm{H\alpha}$ line. The black circles are spectra binned every five points ($\sim$ 0.15 $\AA$) and the grey lines are the original spectra (i.e., $\sim$ 0.03 $\AA$ per point). The red lines are the best-fit Gaussian functions. The vertical dashed line indicates the rest wavelength line centre. The CLV and RM effects are corrected.
An offset of the y-axis is applied to the spectra for clarity.
}
\label{Spec-Ha}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, height=0.3\textwidth]{Ha-final-map.png}
\caption{Combined transmission spectral matrix of the CARMENES and HARPS-N results for the $\mathrm{H\alpha}$ line. The stellar line profile change due to the CLV and RM effects has been removed before averaging. The horizontal dashed lines indicate the four contacts of transit and the diagonal dashed lines denote the planetary orbital RV.
}
\label{Ha-combine}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{LC-Ha-com.png}
\caption{Time series of the $\mathrm{H\alpha}$ equivalent width. The values are measured on the combined transmission spectral matrix in Fig.~\ref{Ha-combine}. The vertical dashed lines indicate the first ($\mathrm{T_1}$), second ($\mathrm{T_2}$), third ($\mathrm{T_3}$), and fourth ($\mathrm{T_4}$) contacts of the transit. The horizontal line denotes $W_\mathrm{H\alpha}$ = 0.
}
\label{LC-Ha}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{spec-OOT.png}
\caption{Average out-of-transit spectrum of the $\mathrm{H\alpha}$ transmission spectral matrix in Fig.\ref{Ha-combine}. The vertical dashed line denotes the line centre. These spectral features likely originate from the stellar pulsation.
}
\label{spec-OOT}
\end{figure}
\subsection{$\mathrm{H\beta}$ and $\mathrm{H\gamma}$ transmission spectrum}
The $\mathrm{H\beta}$ and $\mathrm{H\gamma}$ lines are only covered by the HARPS-N spectrograph. The absorption signals are relatively weak compared to the $\mathrm{H\alpha}$ line. The best-fit parameters are shown in Table \ref{Tab-fit-reuslt-tran} with the final transmission spectra in Fig.~\ref{Spec-Hb+Hc}. The detection of $\mathrm{H\beta}$ is clear while the $\mathrm{H\gamma}$ signal is less prominent.
The line depth of the $\mathrm{H\beta}$ absorption is smaller than that of the $\mathrm{H\alpha}$ line, but their FWHM values are relatively similar to each other. This is also the case for the $\mathrm{H\alpha}$ and $\mathrm{H\beta}$ lines in KELT-9b \citep{Cauley2019,Wyttenbach2020} and KELT-20b \citep{Casasayas-Barris2019}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Spec-Hb.png}
\includegraphics[width=0.5\textwidth]{Spec-Hga.png}
\caption{Transmission spectra of the $\mathrm{H\beta}$ line (upper panel) and the $\mathrm{H\gamma}$ line (lower panel).
}
\label{Spec-Hb+Hc}
\end{figure}
\begin{table*}
\caption{Fit results of the transmission spectral matrices.}
\label{Tab-fit-reuslt-tran}
\centering
\begin{tabular}{l l c c c c c}
\hline\hline\noalign{\smallskip}
~ & ~ & $V_\mathrm{centre}$ [$\mathrm{km\,s^{-1}}$] & FWHM [$\mathrm{km\,s^{-1}}$] & Line depth [\%] & $R_\mathrm{eff}$ [$R_\mathrm{p}$] & Detection significance\\
\hline \noalign{\smallskip}
~ & CARMENES & 0.8$\pm$1.1 & 35.6$_{-2.0}^{+2.2}$ & 1.11$\pm$0.07 & 1.34$\pm$0.02 & 16 $\mathrm{\sigma}$\\
$\mathrm{H\alpha}$ & HARPS-N & 2.0$\pm$1.9 & 31.6$_{-3.6}^{+4.1}$ & 0.81$\pm$0.09 & 1.26$\pm$0.03 & 9 $\mathrm{\sigma}$\\
~ & combination & 1.2$\pm$0.9 & 34.3$\pm$1.6 & 0.99$\pm$0.05 & 1.31$\pm$0.01 & 20 $\mathrm{\sigma}$\\
\hline \noalign{\smallskip}
$\mathrm{H\beta}$ & HARPS-N & 2.2$\pm$1.7 & 30.6$_{-4.3}^{+4.9}$ & 0.54$\pm$0.07 & 1.18$\pm$0.02 & 8 $\mathrm{\sigma}$\\
\hline \noalign{\smallskip}
$\mathrm{H\gamma}$ & HARPS-N & 7$\pm$15 & 49$_{-26}^{+25}$ & 0.28$_{-0.15}^{+0.09}$ & 1.10$_{-0.05}^{+0.03}$ & 2.3 $\mathrm{\sigma}$\\
\hline
\end{tabular}
\end{table*}
\subsection{Model of the Balmer lines}
\subsubsection{Estimation of the atmospheric conditions}
In order to obtain a rough estimate of the possible atmospheric conditions, and before modelling the lines, one can use the \citet{Lecavelier2008} formula to estimate the atmospheric scale-height in the region probed by the Balmer lines. Indeed, the altitude of absorption $z$ varies linearly with the hydrogen Balmer line oscillator strengths $\ln(gf)$: $\Delta z = H\Delta \ln(gf)$, where $H=k_BT/\mu g$ is the pressure scale-height. The $\ln(gf)$ values are 1.635, -0.046, and -1.029 for the H$\alpha$, H$\beta$, and H$\gamma$ lines, respectively.
Taking into account all our measurements from CARMENES and HARPS-N, we computed $H=9\,200\pm1\,200$~km. Considering the decrease of the gravity $g$ with altitude, we estimated that $T/\mu\simeq15\,100\pm2\,100$~[K/u] (at $z\sim1.2$\,${\rm R_P}$). As it is likely that molecular hydrogen is dissociated under these conditions, we can further assume that $\mu$ is between 0.66 and 1.26 (the atmosphere is dominated by a mixture of ionised and neutral hydrogen and helium). Hence, we estimated the upper atmosphere temperature to be between 8\,600 and 21\,700 K. The lower end of the $T$ range should be preferred since, when the temperature increases, the amount of ionised hydrogen increases, making $\mu$ decrease as well.
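The arithmetic behind this estimate can be reproduced as follows; the planet mass and radius are nominal literature values used purely for illustration, and gravity is evaluated at $z\sim1.2\,R_\mathrm{p}$ as in the text.

```python
import numpy as np

G_grav = 6.674e-11                     # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.381e-23                        # Boltzmann constant [J/K]
u = 1.661e-27                          # atomic mass unit [kg]
M_J, R_J = 1.898e27, 7.149e7           # Jupiter mass [kg] and radius [m]

M_p = 2.16 * M_J                       # nominal planet mass
R_p = 1.68 * R_J                       # nominal planet radius
H = 9.2e6                              # measured scale height [m] (9200 km)

g = G_grav * M_p / (1.2 * R_p) ** 2    # gravity at z ~ 1.2 R_p [m/s^2]
T_over_mu = H * g * u / k_B            # [K per atomic mass unit]
```

With these inputs, $T/\mu$ comes out near $15\,000$ K/u, consistent with the quoted $15\,100\pm2\,100$ K/u given the uncertainty in $H$ and in the adopted planet parameters.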
\subsubsection{Model set-up}
To interpret the observed Balmer lines in WASP-33b, we employed the \texttt{PAWN} model (PArker Winds and Saha-BoltzmanN atmospheric model) developed by \citet{Wyttenbach2020}. This tool is a 1-D model of an exoplanet upper atmosphere linked to an MCMC retrieval algorithm. Its purpose is to retrieve parameters of the thermosphere regions (e.g., the temperature, mass-loss rate) from high-resolution transmission spectra. Key features of the \texttt{PAWN} model are summarised here.
First, we can choose the atmospheric structure to be hydrostatic (barometric law) or hydrodynamic (Parker wind transonic solution), with the base density or the mass-loss rate being a free parameter, respectively. The atmospheric profile is assumed to be isothermal in both cases, with the temperature being an additional free parameter. We also assume the atmosphere to be in chemical equilibrium, with Solar abundances. We use a chemical grid calculated with the equilibrium chemistry code presented in \citet{Molliere2017}, from which we interpolate the volume mixing ratios and other useful quantities according to the atmospheric structure.
As we detected Balmer lines, we focus on the neutral hydrogen. In local thermodynamic equilibrium (LTE), the number densities of the different electronic states follow the Boltzmann distribution.
Then, the opacities have a Voigt profile and follow the prescriptions of \cite{Kurucz1979,Kurucz1992,Sharp2007}. Finally, the transmission spectrum is computed following \citet{Molliere2019}. The line profiles are broadened taking into account the planetary rotation (tidally locked solid body rotation perpendicular to the orbital plane). The model is also convolved, binned and normalised in order to be comparable to the data.
On top of the atmospheric model parameters presented above, each line centre is a free parameter. For other planetary parameters (e.g., mass and radius), we used the same values as presented in Table 2 of \cite{Yan2019}. For every MCMC chain, we used 10 walkers for each parameter during 2500 steps, with a burn-in size of 500 steps. For each parameter, we used a uniform or log-uniform prior. For the mass-loss rate we put a lower boundary for the prior at $\log_{10}({\rm \dot{M}}$ [g\,s$^{-1}$]) = 9, as it is expected that WASP-33b is undergoing strong atmospheric escape \citep{Fossati2018}. We tried to fit hydrostatic and hydrodynamic structures to see if one structure would be preferred. The Bayesian information criterion (BIC) allows us to compare the results of different models, and to choose the best-fitting model.
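The model comparison described above relies on the BIC. A minimal sketch, with hypothetical $\chi^2$ values and data count, and assuming Gaussian errors so that the constant in $-2\ln L_{\rm max}$ cancels in $\Delta{\rm BIC}$, is:

```python
import math

def bic(chi2, n_params, n_data):
    """BIC = chi^2 + k ln(n) for Gaussian errors, where
    -2 ln(L_max) = chi^2 + const (the constant cancels in Delta BIC)."""
    return chi2 + n_params * math.log(n_data)

# Hypothetical fit results: hydrostatic vs hydrodynamic (Parker wind)
# structures with the same number of free parameters
n_data = 400                       # number of spectral points (placeholder)
bic_static = bic(352.0, 4, n_data)
bic_parker = bic(351.4, 4, n_data)
delta_bic = bic_static - bic_parker
print(delta_bic)                   # |Delta BIC| < 1: neither model preferred
```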
\subsubsection{Model results}
We performed MCMC chains on each individual Balmer absorption line from HARPS-N and CARMENES. We also fitted the three Balmer lines simultaneously, using the combined HARPS-N and CARMENES result. It is important to note that since the depth of the H$\alpha$ line is not the same for the two instruments, we would expect some differences in the retrieved parameters.
The results from the MCMC model fitting are summarised in Table~\ref{Tab-PAWN-MCMC} for the case of a hydrodynamic atmosphere in LTE. For each detection, the retrieved parameters are compatible. The combined fit (all Balmer lines from HARPS-N and CARMENES) points toward a thermospheric temperature of $T=12\,200^{+1300}_{-1000}$ K and a mass-loss rate of ${\rm \dot{M}}=10^{11.8^{+0.6}_{-0.5}}$ g\,s$^{-1}$. The best-fit spectra and the correlation diagram of the combined fit are presented in Fig.~\ref{MCMC-PW-LTE-TS} and Fig.~\ref{MCMC-PW-LTE}, respectively. Before interpreting this result, we mention here that for each scenario (line or instrument), the absorption line was fitted equally well by a hydrostatic structure ($\Delta \mathrm{BIC} < 1$). This is because, for a hot Jupiter, it is often possible to find a very similar atmospheric structure for both cases, especially when the temperature is high \citep{Wyttenbach2020}. Nevertheless, an evaporating scenario could be preferred for WASP-33b as suggested by forward modelling \citep{Fossati2018}. This latter study predicted a mass-loss rate of about $10^{11}$ g\,s$^{-1}$ for WASP-33b, which is well in line with our retrieved mass-loss rate.
\begin{table}
\caption{MCMC results of the \texttt{PAWN} modeling for an atmosphere in hydrodynamic expansion and in LTE.}
\centering
\begin{tabular}{l l c c}
\hline\hline \noalign{\smallskip}
~ & ~ & $T$ [$10^3$K] & $\log_{10}({\rm \dot{M}}$ [g\,s$^{-1}$])\\
\hline \noalign{\smallskip}
$\mathrm{H\alpha}$ & CARMENES & 14.6$_{-2.1}^{+2.4}$ & 12.8$_{-0.8}^{+0.6}$\\
\hline \noalign{\smallskip}
$\mathrm{H\alpha}$ & HARPS-N & 12.6$_{-2.6}^{+4.0}$ & 11.8$_{-1.4}^{+1.3}$\\
$\mathrm{H\beta}$ & HARPS-N & 12.8$_{-3.3}^{+3.8}$ & 12.1$_{-2.0}^{+1.3}$\\
$\mathrm{H\gamma}$ & HARPS-N & 12.7$_{-3.2}^{+4.7}$ & 12.0$_{-1.9}^{+1.6}$\\
\hline \noalign{\smallskip}
All & Combination & 12.2$_{-1.0}^{+1.3}$ & 11.8$_{-0.5}^{+0.6}$\\
\hline \noalign{\smallskip}
\label{Tab-PAWN-MCMC}
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.97\textwidth]{pawn_emcee_WASP-33_best_fit_compare_zoom_all_vel3.pdf}
\caption{Best-fit \texttt{PAWN} models (blue lines) and the observed transmission spectra (grey lines and black points) in the planetary rest frame. The \texttt{PAWN} models are for the case of a hydrodynamically expanding atmosphere in LTE.
}
\label{MCMC-PW-LTE-TS}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pawn_emcee_many_Hgamma_rotbroad_PW_SB_Chem_noDep_25_corner3.pdf}
\caption{Correlation diagram of the MCMC posterior distributions in the case of a hydrodynamic atmosphere in LTE. The result is for the combined fit of all Balmer lines from HARPS-N and CARMENES. The retrieved parameters are the thermospheric temperature ($T=12\,200^{+1300}_{-1000}$ K) and the atmospheric mass-loss rate (${\rm \dot{M}}=10^{11.8^{+0.6}_{-0.5}}$ g\,s$^{-1}$).
}
\label{MCMC-PW-LTE}
\end{figure}
Our retrieved mass-loss rate of ${\rm \dot{M}}=10^{11.8^{+0.6}_{-0.5}}$ g\,s$^{-1}$ is close to the maximum energy-limited mass-loss rate \citep{Fossati2018}. This could suggest that the heating efficiency is high (on the order of 10-100\,\%), meaning that most of the irradiation energy goes into expansion ($PdV$ work) and escape. However, according to \citet{Salz2016}, when a hot Jupiter has a relatively high gravitational potential (such as WASP-33b), the heating efficiency should decrease by several orders of magnitude. This hints that the energy-limited computation of \citet{Fossati2018} may not be complete and that some energy sources are not taken into consideration. Indeed, \citet{Garcia-Munoz2019} suggested that hot Jupiters orbiting early type stars could undergo a ``Balmer-driven'' evaporation. This mechanism has been proposed for the ultra-hot Jupiter KELT-9b and is supported by the observations of the Balmer series in its thermosphere \citep{Yan2018,Wyttenbach2020}. This ``Balmer-driven'' mechanism takes place when a sufficient quantity of excited hydrogen is present in the thermosphere and the planet is irradiated with intense stellar near-ultraviolet radiation. In that case, the energy absorbed in the thermosphere from the stellar Balmer irradiation exceeds the one absorbed from the stellar high-energy EUV irradiation. Thus, the thermosphere undergoes a stronger heating and expansion, leading to a higher mass-loss rate, even if the heating efficiency stays moderate. The measured mass-loss rate for WASP-33b is ${\rm \dot{M}}=10^{11.8^{+0.6}_{-0.5}}$ g\,s$^{-1}$, while that of KELT-9b is ${\rm \dot{M}}=10^{12.8\pm0.3}$ g\,s$^{-1}$ \citep{Wyttenbach2020}. These measurements are compatible with a ``Balmer-driven'' evaporation, since WASP-33b orbits an A5 star, while KELT-9b orbits an A0V star, where the Balmer flux is extremely high.
\section{Conclusions}
We observed four transits of the ultra-hot Jupiter WASP-33b with the CARMENES and HARPS-N spectrographs. After the correction of the RM and CLV effects, we detected the Balmer H$\alpha$, H$\beta$, and H$\gamma$ transmission spectra of the planetary atmosphere.
The combined H$\alpha$ transmission spectrum has a large absorption depth of 0.99$\pm$0.05\,\%, indicating that the line probes neutral hydrogen atoms in the high-altitude thermosphere. Although the detection of the Balmer lines is unambiguous, the strengths of the lines are affected by the stellar pulsation. Future modelling and correction of the spectral pulsation feature will enable better constraints on the line strength.
We fitted the observed Balmer lines using the \texttt{PAWN} model assuming that the atmosphere is hydrodynamic and in LTE. The model fit returns a thermospheric temperature of $T=12200^{+1300}_{-1000}$ K and a mass-loss rate ${\rm \dot{M}}=10^{11.8^{+0.6}_{-0.5}}$ g\,s$^{-1}$. The high mass-loss rate is consistent with theoretical predictions for UHJs orbiting early type stars \citep[e.g.,][]{Fossati2018, Garcia-Munoz2019}.
The Balmer lines have so far been detected in five ultra-hot Jupiters (KELT-9b, KELT-20b/MASCARA-2b, WASP-12b, WASP-121b, and WASP-33b). Balmer absorption is probably a common spectral feature in the transmission spectra of ultra-hot Jupiters because their hot atmospheres are intensively irradiated by their host stars, which could produce a large number of hydrogen atoms in the excited state. However, for some UHJs, their low atmospheric scale heights \citep[see e.g. the case of WASP-189b,][]{Cauley2020} or the Rossiter-McLaughlin effect \citep[e.g.,][]{Casasayas-Barris2020} could hamper the detection of the Balmer features.
Extending the observations to a larger UHJ sample will enable a systematic study of the Balmer lines and the thermospheric conditions.
\begin{acknowledgements}
We thank the referee for the useful comments.
F.Y. acknowledges the support of the DFG priority program SPP 1992 ``Exploring the Diversity of Extrasolar Planets (RE 1664/16-1)''.
CARMENES is an instrument for the Centro Astron\'omico Hispano-Alem\'an (CAHA) at Calar Alto (Almer\'{\i}a, Spain), operated jointly by the Junta de Andaluc\'ia and the Instituto de Astrof\'isica de Andaluc\'ia (CSIC).
CARMENES was funded by the Max-Planck-Gesellschaft (MPG),
the Consejo Superior de Investigaciones Cient\'{\i}ficas (CSIC),
the Ministerio de Econom\'ia y Competitividad (MINECO) and the European Regional Development Fund (ERDF) through projects FICTS-2011-02, ICTS-2017-07-CAHA-4, and CAHA16-CE-3978,
and the members of the CARMENES Consortium
(Max-Planck-Institut f\"ur Astronomie,
Instituto de Astrof\'{\i}sica de Andaluc\'{\i}a,
Landessternwarte K\"onigstuhl,
Institut de Ci\`encies de l'Espai,
Institut f\"ur Astrophysik G\"ottingen,
Universidad Complutense de Madrid,
Th\"uringer Landessternwarte Tautenburg,
Instituto de Astrof\'{\i}sica de Canarias,
Hamburger Sternwarte,
Centro de Astrobiolog\'{\i}a and
Centro Astron\'omico Hispano-Alem\'an),
with additional contributions by the MINECO,
the Deutsche Forschungsgemeinschaft through the Major Research Instrumentation Programme and Research Unit FOR2544 ``Blue Planets around Red Stars'',
the Klaus Tschira Stiftung,
the states of Baden-W\"urttemberg and Niedersachsen,
and by the Junta de Andaluc\'{\i}a.
Based on data from the CARMENES data archive at CAB (CSIC-INTA).
We acknowledge financial support from the Agencia Estatal de Investigaci\'on of the Ministerio de Ciencia, Innovaci\'on y Universidades and the ERDF through projects PID2019-109522GB-C51/2/3/4, PGC2018-098153-B-C33, AYA2016-79425-C3-1/2/3-P, ESP2016-80435-C2-1-R and the Centre of Excellence ``Severo Ochoa'' and ``Mar\'ia de Maeztu'' awards to the Instituto de Astrof\'isica de Canarias (SEV-2015-0548), Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709), and Centro de Astrobiolog\'ia (MDM-2017-0737), and the Generalitat de Catalunya/CERCA programme.
A.W. acknowledges the financial support of the SNSF by grant number P400P2\_186765.
P.M. acknowledges support from the European Research Council under the Horizon 2020 Framework Program via ERC grant 832428.
I.S. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No 694513.
M.L. acknowledges funding from the project ESP2017-87143-R.
This work is based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundaci\'on Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Entanglement and quantum coherence are at the core of quantum information technologies. Among the existing qubit platforms for quantum information processing, nitrogen-vacancy (NV) centers in diamond have attracted significant attention due to their long spin-coherence time, quantum state controllability, and the ability to initialize and readout the spin state optically~\cite{jelezko2004observation,gaebel2006room,hanson2006polarization,hanson2008coherent,fuchs2009gigahertz,bar2013solid,herbschleb2019ultra}. Although there are remarkable applications of NV centers in the areas of quantum sensing and quantum communication~\cite{taylor2008high,sipahigil2012quantum,bernien2013heralded,pfaff2014unconditional,loopfree2015,reiserer2016robust,degen2017quantum,casola2018probing,awschalom2018quantum,mittiga2018imaging,humphreys2018deterministic,bartling2021,Pompili2021}, quantum computation using NV centers remains challenging due to the difficulty of engineering useful long-distance gates, \textit{i.e.} over an optically resolvable distance on the order of micrometers~\cite{jelezko2004observationN,childress2006coherent,neumann2008multipartite,neumann2010quantum,van2012decoherence,dolde2013room} which entangle qubits faster than decoherence rates. Once this long-distance two-NV gate is established, NV centers will be a scalable platform of quantum computation enabled by their nanoscale localization and on-chip integratability~\cite{toyli2010chip}.
Recently, several potential solutions to this challenge have been proposed by making use of boson modes as an information mediator. While photon-mediated NV-NV entanglement has been experimentally demonstrated over meter and kilometer length scales~\cite{bernien2013heralded,pfaff2014unconditional,loopfree2015,humphreys2018deterministic,Pompili2021}, based on indistinguishable single photon detection, its extension to two-qubit gates is still challenging due to its slow entangling rate as a result of its low success probability. It has been proposed, however, that the long-distance two-qubit gates can be realized by harnessing such entangled NV-center pair generation under both single-shot readout and local gates based on the measurement outcome~\cite{Perlin_2018}. This is possible if NV centers have access to quantum memories in the decoherence-free subspace~\cite{lidar1998decoherence}, which survive during the multiple entangling attempts of NV centers that cause decoherence~\cite{reiserer2016robust,Perlin_2018,humphreys2018deterministic,bartling2021,Pompili2021}. Alternatively, as a means for extending NV-NV interaction on a wafer without needing single boson detection and with faster gate operations, hybrid quantum systems have been extensively studied where NV centers interface other bosonic systems~\cite{li2015hybrid,li2016hybrid,lemonde2018phonon,NoriPolariton2018}. In a carbon-nanotube-NV-center hybrid system~\cite{li2016hybrid}, for example, it has been proposed to couple NV centers and phonon modes in a suspended carbon nanotube by injecting an electric current through the nanotube.
\begin{figure}[b]
\includegraphics[scale=1.0]{fig1_TNR.eps}
\caption{Schematic of NV centers in diamond placed on top of an infinitely long magnon waveguide and a finite length magnetic bar made of YIG.}
\label{fig1}
\end{figure}
Hybrid quantum systems composed of NV centers and magnons in ferromagnets have emerged and attracted attention as another highly promising platform to extend such NV-NV interaction~\cite{trifunovic2013long,flebus2018quantum,flebus2019entangling,muhlherr2019magnetic,zou2020tuning,candido2020predicted,neumanprl2020,rustagi2020,ballestero,wang,solanki}, where NV spins are intrinsically coupled to magnon modes through their dynamical fringe magnetic fields. Taking advantage of virtual-magnon exchange in one-dimensional spin chains~\cite{trifunovic2013long} or transduction of energy quanta in ferromagnetic discs~\cite{candido2020predicted}, NV-NV entanglement has been investigated theoretically~\cite{candido2020predicted,trifunovic2013long}, thus stimulating a variety of experiments on the NV-magnon hybrid system~\cite{wolfe2014off,van2015nanometre,wolfe2016spatially,andrich2017long,du2017control}. Nonetheless, optimal device geometries and gate protocols suitable for entangling separated NV centers have yet to be explored. Moreover, several important practical aspects and entangling schemes of these systems have not been fully addressed theoretically, e.g., realistic ferromagnetic structures, relevant magnetic interactions~\cite{Kalinikos_1986,stancil2009spin,serga2010yig}, finite temperatures, and possible entanglement protocols.
Here we present a practical and realistic hybrid quantum system to engineer NV-NV entanglement over micron length scales via on- and off-resonant magnon excitations at low temperatures ($T\lesssim 150$~mK). The entanglement protocol in this hybrid quantum system is based on the strong coupling of NV spins to the magnon modes in yttrium-iron-garnet (YIG) nanodevices. Under a realistic geometry and accurately taking into account both dipole and exchange interactions, we obtain strong NV-magnon interactions and high entangling gate to decoherence ratio (GDR) in both an infinitely long YIG waveguide and a finite length YIG bar structure (see Fig.~\ref{fig1}). Especially for the latter, we obtain NV-magnon cooperativity ${\cal C}\gtrsim 10^4$ for on-resonance conditions and NV-NV GDR $\approx 10^3$ under off-resonant magnon excitations for two NV centers separated by more than $2$~$\mu$m.
This leads to a usefully fast entangling gate (relative to the qubit decoherence rate) at optically resolvable NV-NV separations. These values of GDR greatly exceed fidelities that were sufficient to demonstrate error correction on other platforms~\cite{errorcorrection}.
All of our results are obtained within a Hamiltonian formalism~\cite{colpa1978diagonalization,nguyen2005spectral}, which allows for semi-analytical expressions for the coupling in terms of the relevant experimental and geometrical quantities.
Finally, we explore and compare the calculated entanglement quality of both on-resonant transduction and off-resonant virtual-magnon exchange entangling gate protocols, which we regard as another major focus in this work. We achieve this comparison by means of a numerical simulation of the Lindblad master equation taking into account two NV centers and a magnon mode near the resonance condition at finite temperatures. More specifically, we analyze and compare the entanglement negativity, fidelity, and degree of the Bell inequality violation for both cases under different parameters of the NV-magnon hybrid system. Notably, our results show that although the off-resonant protocols are robust at temperatures up to $T \approx 150$~mK due to the absence of magnon occupation decay, the transduction protocol outperforms it due to its faster gate operations at lower temperatures if the magnon damping parameter is sufficiently small $\alpha \lesssim (\Delta\omega/\omega_\mu)(1/4g_\mu T_2^{*})[\pi/(\pi-1)]$, with magnon frequency $\omega_\mu$, NV center coherence time $T_2^{*}$, NV-magnon detuning frequency $\Delta \omega$, and NV-spin-magnon-mode coupling $g_\mu$. Our calculations and analysis serve as a guide for future experiments to engineer on-chip long-distance entangling gates between NV centers mediated by magnons in ferromagnetic nanostructures.
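The crossover condition on the magnon damping quoted above can be evaluated numerically. The parameter values below (detuning, magnon frequency, NV-magnon coupling, $T_2^*$) are placeholders of roughly the right order for the devices considered here, not fitted numbers.

```python
import math

def alpha_threshold(delta_omega, omega_mu, g_mu, t2_star):
    """Gilbert damping below which the on-resonant transduction gate
    outperforms the off-resonant virtual-magnon gate:
    alpha < (d_omega/omega_mu) * 1/(4 g_mu T2*) * pi/(pi - 1)."""
    return (delta_omega / omega_mu) / (4.0 * g_mu * t2_star) \
        * math.pi / (math.pi - 1.0)

TWO_PI = 2.0 * math.pi
alpha_max = alpha_threshold(delta_omega=TWO_PI * 10e6,  # ~10 MHz detuning
                            omega_mu=TWO_PI * 2.9e9,    # ~2.9 GHz magnon mode
                            g_mu=TWO_PI * 0.5e6,        # ~0.5 MHz coupling
                            t2_star=1e-3)               # T2* = 1 ms
print(alpha_max)   # damping threshold, dimensionless
```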
In this article, we begin in Sec.~II with the description of the Hamiltonian formalism for the dipole-exchange magnons coupled to NV centers. In Sec.~III we calculate the full magnonic properties of a YIG waveguide interacting with NV centers. We obtain the NV-NV coupling strength, the entanglement rate, and the gate to decoherence ratio under the off-resonant NV-magnon interaction condition. Similarly, in Sec.~IV we first calculate the magnonic properties of a finite length YIG bar. Secondly, we evaluate both NV-magnon on-resonant coupling strength and its cooperativity as well as the NV-NV coupling strength under the off-resonant condition. We provide for the latter the entanglement rate and the gate to decoherence ratio. Finally, in Sec.~V we present a complete comparison between the transduction and virtual-magnon-exchange protocols in detail under different system parameters and physical conditions.
\section{Hamiltonian formalism of dipole-exchange magnons and NV-magnon interaction}
\label{secII}
Here we outline the Hamiltonian formalism of dipole-exchange magnons coupled to NV centers providing a complete and accurate treatment of both magnetic dipole and quantum exchange interactions between the spins in YIG waveguides and bars with finite cross section. This is crucial in our study as the NV centers have eigenfrequencies typically on the order of gigahertz, thus interacting with the so-called dipole-exchange magnons in ferromagnets~\cite{Kalinikos_1986}; using simpler, less accurate magnon dispersion relations as in Ref.~[\onlinecite{trifunovic2013long}] leads to a substantial overestimation of the NV-magnon coupling. As illustrated in Fig.~\ref{fig1}, we consider hybrid quantum devices where NV centers are placed on top of the YIG structures. Whereas multiple NV centers can be placed on top of the infinitely long YIG waveguide in a scalable fashion as shown in Fig.~\ref{fig1}, in the following calculations we only focus on coupling two NV centers. The total Hamiltonian of our hybrid system is written as $\mathcal{H}=\mathcal{H}_\text{NV}+\mathcal{H}_\text{m}+\mathcal{H}_\text{int}$, where $\mathcal{H}_\text{NV}$ is the NV Hamiltonian, $\mathcal{H}_\text{m}$ is the magnon Hamiltonian, and $\mathcal{H}_\text{int}$ is the interaction Hamiltonian,
\begin{eqnarray}
&&\mathcal{H}_\mathrm{NV}=\sum_{i=1,2} D_{\mathrm{NV}}\left(\hat{n}_{\mathrm{NV}} \cdot \mathbf{S}_{\mathrm{NV}_{i}}\right)^{2}+\gamma \mu_{0} \mathbf{S}_{\mathrm{NV}_{i}} \cdot \mathbf{H}_{\mathrm{ext}},\\
&&\mathcal{H}_{\mathrm{m}}=-\mu_{0} \int d \mathbf{r} \mathbf{H}_{\mathrm{ext}} \cdot \mathbf{M} (\mathbf{r})+\frac{\mu_{0}}{2} \int d \mathbf{r} \alpha_\mathrm{ex}(\mathbf{r}) \nabla \mathbf{M}: \nabla \mathbf{M}\nonumber\\
&&\ \ \ \ \ \ \ \ +\frac{\mu_{0}}{2} \int d \mathbf{r} d \mathbf{r}^{\prime}(\nabla \cdot \mathbf{M}(\mathbf{r})) G\left(\mathbf{r}-\mathbf{r}^{\prime}\right)\left(\nabla^{\prime} \cdot \mathbf{M}\left(\mathbf{r}^{\prime}\right)\right),\label{Hmag}\\
&&\mathcal{H}_{\mathrm{int}}=\sum_{i=1,2} \gamma \mu_{0} \mathbf{S}_{\mathrm{NV}_{i}} \cdot \left.\nabla \int d \mathbf{r}^{\prime} G\left(\mathbf{r}-\mathbf{r}^{\prime}\right) \nabla'\cdot \mathbf{M}\left(\mathbf{r}^{\prime}\right)\right|_{\mathbf{r}=\mathbf{r}_{i}}.\nonumber\\\label{Hint}
\end{eqnarray}
Here, $D_\text{NV}=2\pi\times2.877\text{ GHz}$ is the zero-field splitting of the NV center, $\hat{n}_\text{NV}$ is the unit vector along the NV main symmetry axis, $\mathbf{S}_{\text{NV}_i}$ is the spin-$1$ operator of the NV center labeled by $i\in\{1,2\}$, $\gamma=2\pi\times28\text{ MHz/mT}$ is the absolute value of the electron gyromagnetic ratio, $\mu_0$ is the vacuum permeability, $\bf{H}_\text{ext}$ is the external magnetic field, $\bf{M}(\bf{r})$ is the magnetization with the constraint $|\mathbf{M}(\mathbf{r})|=M_s(\mathbf{r})=M_s\mathcal{F}(\mathbf{r})$, $M_s=245.8\ \mathrm{mT}/\mu_0$ is the YIG saturation magnetization, $\mathcal{F}(\mathbf{r})=1$ ($0$) inside (outside) the ferromagnetic structure, ${\alpha_\mathrm{ex}}(\mathbf{r})={\alpha_\mathrm{ex}}\mathcal{F}(\mathbf{r})$, ${\alpha_\mathrm{ex}}=\lambda^2_\text{ex}=D_\text{ex}/\gamma\mu_0M_s$ is the exchange-length squared, $D_\text{ex}=5.39\times 10^{-2}\ \gamma\ \mathrm{mT}\ \mu\mathrm{m}^2$ is the YIG exchange constant, the double-dot product is defined as $\nabla{\bf{M}}:\nabla{\bf{M}}=\partial_a M_b\partial^a M^b$, ${\bf{r}}_i$ is the position of $\text{NV}_i$, $G(\mathbf{r}-\mathbf{r}')=1/4\pi|\mathbf{r}-\mathbf{r}'|$ is the Green's function, and we set $\hbar=1$. We note that the first term in Eq.~(\ref{Hmag}) is the Zeeman energy, the second term is the exchange energy, and the third term is the magnetic dipole energy. Inclusion of both the second and the third term in Eq.~(\ref{Hmag}) results in the dipole-exchange magnons in ferromagnets.
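Two derived scales used throughout follow directly from the constants quoted above; a quick numerical check of the exchange length $\lambda_\text{ex}$ and of $\omega_M=\gamma\mu_0M_s$ (a sketch using only the values given in the text):

```python
import math

TWO_PI = 2.0 * math.pi
GAMMA = TWO_PI * 28e9          # electron gyromagnetic ratio, rad s^-1 T^-1
MU0_MS = 245.8e-3              # YIG saturation mu0*Ms, tesla
# D_ex = 5.39e-2 gamma mT um^2 converted to SI (rad m^2 s^-1)
D_EX = 5.39e-2 * GAMMA * 1e-3 * 1e-12

omega_M = GAMMA * MU0_MS               # magnetization frequency scale
lambda_ex = math.sqrt(D_EX / omega_M)  # exchange length, m

print(omega_M / TWO_PI / 1e9)  # -> about 6.88 GHz
print(lambda_ex * 1e9)         # -> about 14.8 nm
```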
\section{Infinitely long ferromagnetic waveguide}
\label{secIII}
Here we consider the case of an infinitely long YIG waveguide with thickness, width, and length given by $d$, $w$, and $l(\rightarrow\infty)$, respectively. The external magnetic field is applied along the YIG waveguide, $\mathbf{H}_\text{ext}=H_\text{ext}\hat{z}$, and NV centers are positioned at height $h$ from its top surface [see illustration in Fig.~\ref{fig2}(a)]. The equilibrium magnetization is $\mathbf{M}_0(\mathbf{r})=M_s\hat{z}\mathcal{F}(\mathbf{r})$, for which its contribution in the interaction Hamiltonian Eq.~(\ref{Hint}) vanishes. The NV main symmetry axis is set to be parallel to the external magnetic field, $\hat{n}_\text{NV}=\hat{z}$, for geometrical simplicity. We further define the deviation from the equilibrium magnetization $\delta\mathbf{M}(\mathbf{r})=\mathbf{M}(\mathbf{r})-\mathbf{M}_0(\mathbf{r})\approx\mathbf{m}(\mathbf{r})-[|\mathbf{m}(\mathbf{r})|^2/2M_s(\mathbf{r})]\hat{z}$, where $\mathbf{m}(\mathbf{r})=m_x(\mathbf{r})\hat{x}+m_y(\mathbf{r})\hat{y}$ is a small two-dimensional magnetization deviation. The linearized magnetization dynamics~\cite{shindou2013topological} are governed by the Hamiltonian equation of motion for $m^-(\mathbf{r})= [2\gamma M_s(\mathbf{r})]^{1/2}a(\mathbf{r})$ and $m^+(\mathbf{r})= [2\gamma M_s(\mathbf{r})]^{1/2}a^*(\mathbf{r})$ using the magnon Hamiltonian $\mathcal{H}_\mathrm{m}$ up to quadratic order in the complex canonical variables $a(\mathbf{r})$ and $a^*(\mathbf{r})$, where we have performed the Holstein-Primakoff approximation~\cite{stancil2009spin} and $m^{\pm}(\mathbf{r})=m_x(\mathbf{r})\pm i m_y(\mathbf{r})$.
To obtain the normal magnon mode frequencies and the dynamical fringe field spatial profiles, we diagonalize the magnon Hamiltonian Eq.~(\ref{Hmag}) by expanding the complex canonical variables assuming totally unpinned surface spins, i.e.,
\begin{equation}
a(\mathbf{r})=\int\frac{dk}{2\pi}e^{ikz}\sum_{nm}f_n^X(x)f_m^Y(y)a_{k,(n,m)}.
\end{equation}
Here, the basis functions are
\begin{eqnarray}
f_n^X(x)&=&\left[\frac{2\mathcal{F}^X(x)}{(1+\delta_{n,0})d}\right]^{\frac{1}{2}}\cos(\kappa_n^Xx),\\ f_m^Y(y)&=&\left[\frac{2\mathcal{F}^Y(y)}{(1+\delta_{m,0})w}\right]^{\frac{1}{2}}\cos(\kappa_m^Yy),
\end{eqnarray}
where $\kappa_n^X=n\pi/d$, $\kappa_m^Y=m\pi/w$, $\mathcal{F}^X(x)=\Theta(x)\Theta(d-x)$, $\mathcal{F}^Y(y)=\Theta(y)\Theta(w-y)$, and $\Theta$ is the Heaviside step function. As we consider the case where both the thickness and the width of the YIG waveguide are small, we restrict our discussion to the magnon mode subspace with $(n,m)=(0,0)$, which presents uniform magnetization deviations across the $x$-$y$ plane and gives the lowest energy magnon band in the dispersion relation.
After writing $\mathcal{H}_\mathrm{m}$ up to the quadratic order in the complex canonical variables, applying the Bogoliubov transformation, and promoting the complex canonical variables to the quantum creation and annihilation operators, we obtain the diagonalized Hamiltonian (see Appendix B1)
\begin{eqnarray}
\mathcal{H}_\mathrm{m}=\int\frac{dk}{2\pi}\omega_{k,(0,0)}\beta^\dag_{k,(0,0)}\beta_{k,(0,0)},
\end{eqnarray}
where $\omega_{k,(0,0)}$ is the magnon energy and $\beta_{k,(0,0)}$ ($\beta_{k,(0,0)}^{\dagger}$) is the magnon annihilation (creation) operator satisfying $[\beta_{k,(0,0)},\beta^\dag_{k',(0,0)}]=2\pi\delta(k-k')$.
\begin{figure}[t]
\includegraphics[scale=1]{fig2_TNR.eps}
\caption{(a) Schematic and coordinates of NV centers placed on top of an infinitely long YIG waveguide with applied external magnetic field $\bf{H}_\text{ext}$. (b) NV center's transition frequencies and magnon spectrum as a function of external field $H_\text{ext}$ for $d=20$~nm and $w=120$~nm. Shaded area represents continuum of magnon modes. The lowest magnon frequency $\omega_\text{min}$ and the NV transition frequency $\omega_\text{NV}$ of $|g\rangle\leftrightarrow|e\rangle$ are detuned by $\Delta f=3\text{ MHz}$ at $H_\text{ext}=H_\text{c}$. (c) Dispersion relation $f(k)=\omega_{k,(0,0)}/2\pi$ of magnons and the dimensionless coupling $g(k)=g(\bm{\rho},k)$ between magnons and the NV center at $H_\text{ext}=H_\text{c}$. The NV center is positioned at $\bm{\rho}=(x,y)=(d+h,w)$ with $h=25\text{ nm}$ [see the white cross mark in (d)]. The minimum frequency $\omega_\text{min}$ and its respective wavenumber $k_\text{min}$ are shown. (d) Spatial density plot of the dimensionless coupling $g(k_\text{min})$ at $H_\text{ext}=H_\text{c}$ with contours at $|g(k_\text{min})|=0.05,0.1,0.15$ and $0.2$. (e) Effective NV-NV coupling strength $g_\text{eff}$ [Eq.~(\ref{EffInt})] as a function of the NV-NV distance under $\Delta f=3\text{ MHz}$ and $\Delta f=10\ \mathrm{MHz}$. The gray curve shows the coupling due to the direct magnetic dipole-dipole interaction between NV centers. The entanglement rate and the gate to decoherence ratio are shown on the right axis for $T_2^*=1\text{ ms}$. Inset shows the time $\tau$ evolution of the entanglement negativity at $T=0$ from the initial state $|g\rangle_1|e\rangle_2$ scaled by the Bell state negativity $\mathcal{N}_\text{B}$.
}
\label{fig2}
\end{figure}
The coupling strength between magnon modes and NV centers can be obtained by applying the same Bogoliubov transformation in the interaction Hamiltonian Eq.~(\ref{Hint}). As we focus on external magnetic field values $\gamma H_\text{ext}<D_\text{NV}$, the NV center's ground state and the first excited state are $|g\rangle=|S_\text{NV}^z=0\rangle$ and $|e\rangle=|S_\text{NV}^z=-1\rangle$, respectively. Up to the linear order in magnon creation and annihilation operators and using the rotating wave approximation ($|\omega_{k,(0,0)}-\omega_{\rm{NV}} | \ll \omega_{k,(0,0)}+\omega_{\rm{NV}}$), we obtain the interaction Hamiltonian (see Appendix B2)
\begin{eqnarray}
\mathcal{H}_\mathrm{int}=\frac{\sqrt{\omega_M\omega_d}}{\sqrt{w/d^2}}\sum_{i=1,2}\int\frac{dk}{2\pi}g({\bm{\rho}}_i,k)\sigma_{\mathrm{NV}_i}^+\beta_{k,(0,0)}e^{ikz_i}+\mathrm{H.c.},\nonumber\\
\label{Hint0}\end{eqnarray}
in the NV centers' subspaces spanned by $\{|g\rangle_{i},|e\rangle_{i} \}$, where $\omega_M=\gamma\mu_0M_s$, $\omega_d=\mu_0\gamma^2/d^3$, $g({\bm{\rho}}_i,k)$ is the dimensionless coupling between the NV center spin and the $k$-magnon mode, ${\bm{\rho}}_i$ is the $\mathrm{NV}_i$'s position in the $x$-$y$ plane, $\sigma^+_{\mathrm{NV}_i}=|e\rangle_i \langle g|$, and $\sigma^-_{\mathrm{NV}_i}={(\sigma^+_{\mathrm{NV}_i})^{\dagger}}$. The virtual-magnon-mediated NV-NV interaction can be obtained via the Schrieffer-Wolff transformation~\cite{bravyi2011schrieffer} as $\mathcal{H}^{\mathrm{NV}-\mathrm{NV}}_{\mathrm{eff}}=-\left(g_\text{eff}\sigma_{\mathrm{NV}_{1}}^{+} \sigma_{\mathrm{NV}_{2}}^{-}+\text{H.c.}\right)$ with (see Appendix B3)
\begin{eqnarray}\label{GeffWG}
g_\text{eff}=\frac{\omega_{M} \omega_{d}}{w/d^2} \int \frac{d k}{2 \pi}\left|g(k)\right|^{2} \frac{\exp [ik(z_1-z_2)]}{\omega_{k,(0,0)}-\omega_{\mathrm{NV}}},\label{EffInt}
\end{eqnarray}
where $g_\mathrm{eff}$ is the effective NV-NV coupling strength, $\omega_\mathrm{NV}=D_\mathrm{NV}-\gamma H_\mathrm{ext}$ is the transition frequency of $|g\rangle\leftrightarrow|e\rangle$, and we write $g(k)=g({\bm{\rho}}_i,k)$ assuming ${\bm\rho}_1={\bm{\rho}}_2$. The above expression is valid when $(\omega_{M}\omega_{d}d^2/2\pi w)\int dk |g(k)|^2(\omega_{k,(0,0)}-\omega_{\rm{NV}})^{-2} \ll 1$. We note that this effective coupling strength $g_{\rm{eff}}$ for the off-resonant configuration does not depend on the temperature: it is independent of the initial magnon number state $|n_{\rm{m}} \rangle$, as seen from second-order perturbation theory, even though the NV-magnon coupling strength matrix element is proportional to $\sqrt{n_{\rm{m}} +1}$ (see Appendix B4).
In Fig.~\ref{fig2}(b) we plot the NV center's transition frequencies and magnon mode frequencies as a function of the external magnetic field $H_\text{ext}$, where we have assumed $(d,w)=(20\text{ nm}, 120\text{ nm})$ for the waveguide dimensions~\cite{wang2019spin}. As we take the limit where the length of the YIG waveguide is infinity ($l\rightarrow\infty$), the magnon mode frequencies form a continuum with its minimum denoted as $\omega_\text{min}$. At field $H_\text{ext}=H_c$, the NV center's lower transition frequency $\omega_\text{NV}$ is detuned from the magnon dispersion minimum $\omega_\text{min}$ by $\Delta\omega=\omega_\text{min}-\omega_\text{NV}=2\pi\Delta f=2\pi\times 3\text{ MHz}$. Figure~\ref{fig2}(c) shows the magnon dispersion relation near $\omega_\text{min}$ and the wavenumber dependence of the dimensionless coupling strength $g(k)$ at $H_\text{ext}=H_\text{c}$, ${\bm \rho}_i=(d+h)\hat{x}+w\hat{y}$, and $h=25\text{ nm}$ [see the cross marker in Fig.~\ref{fig2}(d)]. The coupling strength also depends on the spatial position of the NV center relative to the YIG waveguide, which is shown in Fig.~\ref{fig2}(d). As the dynamical fringe magnetic field generated by a single magnon is confined near the YIG device, the coupling strength is larger if the NV center is positioned near the YIG waveguide.
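For orientation, a dispersion with a minimum at finite $k$, qualitatively like Fig.~\ref{fig2}(c), can be sketched from the lowest ($n=0$) Kalinikos--Slavin backward-volume mode of an in-plane magnetized thin film with unpinned surface spins. This thin-film form neglects the finite waveguide width included in our full calculation, and the applied field below is a placeholder:

```python
import math

TWO_PI = 2.0 * math.pi
GAMMA = TWO_PI * 28e9                            # rad s^-1 T^-1
OMEGA_M = GAMMA * 245.8e-3                       # gamma * mu0 * Ms
LAMBDA_EX2 = 5.39e-2 * GAMMA * 1e-15 / OMEGA_M   # exchange length squared, m^2
D = 20e-9                                        # film thickness, m

def omega_bv(k, mu0_h):
    """Lowest dipole-exchange backward-volume mode of a thin film
    (Kalinikos-Slavin form); finite-width effects are neglected."""
    omega_h = GAMMA * mu0_h + OMEGA_M * LAMBDA_EX2 * k * k
    form = (1.0 - math.exp(-k * D)) / (k * D)    # dipolar form factor
    return math.sqrt(omega_h * (omega_h + OMEGA_M * form))

MU0_H = 15e-3                                    # placeholder applied field, T
ks = [i * 1e5 for i in range(1, 2000)]           # k up to ~200 rad/um
freqs = [omega_bv(k, MU0_H) for k in ks]
i_min = min(range(len(ks)), key=lambda i: freqs[i])
print(ks[i_min] * 1e-6, freqs[i_min] / TWO_PI / 1e9)  # k_min [rad/um], f_min [GHz]
```

The competition between the decreasing dipolar term and the growing exchange term places the minimum at nonzero $k_\text{min}$.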
Under the off-resonant condition shown in Fig.~\ref{fig2}(c), the NV centers on top of the YIG waveguide interact with each other via the exchange of virtual magnons. In Fig.~\ref{fig2}(e), we plot the effective NV-NV coupling strength $g_\text{eff}$ [Eq.~(\ref{GeffWG})] as a function of the NV-NV distance \hbox{$\delta z=|z_1-z_2|$} for both $\Delta f=3$~MHz and $\Delta f=10$~MHz cases represented by the red and blue dots, respectively. The coupling decays rapidly with detuning, which allows the entangling interaction to be switched off by increasing the external magnetic field from $H_{\rm{ext}}=H_c$ by $\approx 0.1$~mT. We show that the calculated coupling strength is well explained by the analytical formula
\begin{equation}
g_\text{eff}\approx\frac{\omega_M\omega_{\bar{d}}}{\Delta\omega}|g(k_\text{min})|^2\cos(k_\text{min}\delta z)e^{-\delta z/\xi_0}
\end{equation}
as shown by the solid red and blue curves in Fig.~\ref{fig2}(e), where $\xi_\text{0}=\sqrt{D_\text{ex}/\Delta\omega}$ is the spin correlation length and $\omega_{\bar{d}}=\mu_0\gamma^2/(\xi_0 wd)$. The entangling gate rate $\mathrm{ER}=4g_\mathrm{eff}/\pi$ and the gate to decoherence ratio $\mathrm{GDR}=4g_\mathrm{eff}T_2^*/\pi$ are shown on the right axis, where a coherence time $T_2^*=1\text{ ms}$ of the NV center is used~\cite{herbschleb2019ultra}. As we obtain $\mathrm{GDR}>10$ for $1\ \mu\text{m}$ separated NV centers, we predict a useful and practical entangling gate.
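As a rough numerical illustration of the analytical formula above, the sketch below evaluates the oscillatory exponential decay of $g_\text{eff}(\delta z)$ together with $\mathrm{ER}=4g_\mathrm{eff}/\pi$ and $\mathrm{GDR}=4g_\mathrm{eff}T_2^*/\pi$. All parameter values are placeholders chosen for illustration, not fitted values from this work.

```python
import numpy as np

# g_eff oscillates with cos(k_min * dz) and decays on the spin correlation
# length xi_0; the scale g0, k_min, and xi0 below are assumed placeholders.
g0 = 2 * np.pi * 20e3        # overall coupling scale (rad/s), assumed
k_min = 2 * np.pi / 400e-9   # wavenumber at the dispersion minimum (1/m), assumed
xi0 = 500e-9                 # spin correlation length (m), assumed
T2_star = 1e-3               # NV coherence time (s), from the text

dz = np.linspace(0.1e-6, 2e-6, 400)                  # NV-NV separation (m)
g_eff = g0 * np.cos(k_min * dz) * np.exp(-dz / xi0)

ER = 4 * np.abs(g_eff) / np.pi                       # entangling gate rate
GDR = ER * T2_star                                   # gate to decoherence ratio
```

The sign changes of $g_\text{eff}$ correspond to the nodes visible in Fig.~\ref{fig2}(e), and the overall envelope decays on the scale $\xi_0$.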
To show that this system can manipulate the NV-NV entanglement, we perform a simulation using the Lindblad master equation. In the inset of Fig.~\ref{fig2}(e) we plot the entanglement negativity~\cite{vidal2002computable} $\mathcal{N}$ at $T=0$ as a function of the NV-NV interaction time after the preparation of the initial spin state in $|g\rangle_1|e\rangle_2$, where the negativity is normalized by the Bell state's negativity $\mathcal{N}_\mathrm{B}$. As we obtain $\mathcal{N}>0$, we clearly demonstrate that the NV centers are entangled. If multiple NV centers are placed on top of the YIG waveguide (see Fig.~\ref{fig1}), neighboring two-NV gates can thus be performed by locally changing the external magnetic field around the two NV centers to shift their transition frequencies relative to the minimum magnon mode frequency in the range $\Delta\omega>0$. Alternatively, local electric field~\cite{electricfieldNV2011} or strain~\cite{PhysRevLett.113.020503} can be used to shift NV centers' transition frequencies to avoid applying a local magnetic field at the underlying YIG location, the effect of which is discussed in Appendix K.
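For reference, the normalization $\mathcal{N}_\mathrm{B}$ used above can be reproduced by computing the negativity of the ideal target state $(|g\rangle_1|e\rangle_2+|e\rangle_1|g\rangle_2)/\sqrt{2}$ via the partial transpose; this is a generic check, not the paper's full Lindblad simulation.

```python
import numpy as np

# Negativity of a Bell-type state via the Peres-Horodecki partial transpose;
# a maximally entangled two-qubit state gives N = 1/2.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # basis |gg>, |ge>, |eg>, |ee>
rho = np.outer(psi, psi)

# partial transpose with respect to the second spin
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
eigvals = np.linalg.eigvalsh(rho_pt)
negativity = float(-eigvals[eigvals < 0].sum())      # sum of |negative eigenvalues|
```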
In Fig.~\ref{fig3} we plot the NV-NV entanglement rate and the gate to decoherence ratio as a function of the waveguide thickness $d$ for different waveguide dimensions and NV centers' heights $h$. We assume a fixed NV-NV distance of $1\ \mu\text{m}$, $(x_i,y_i)=(d+h,w)$, and $\Delta\omega=2\pi\times3\text{ MHz}$. The red (blue) solid curve shows the waveguide thickness $d$ dependence of the $\mathrm{ER}$ and the $\mathrm{GDR}$ under the fixed aspect ratio $w/d=6$ at $h=25\text{ nm}$ ($5\text{ nm}$), and the red (blue) dashed curve shows the dependence where the waveguide width is kept constant with $w=120\text{ nm}$ at $h=25\text{ nm}$ ($5\text{ nm}$). From these graphs we see that in order to make the entangling gate faster, one can either have the NV center closer to the YIG waveguide (diminishing $h$) or make the waveguide's cross-sectional area smaller. As for placing NV centers in proximity to the YIG waveguide, we note the common challenge of making high coherence NV centers near the diamond surface due to the surface noise known in the area of NV-based quantum sensing~\cite{ohno2012engineering}.
\begin{figure}[t]
\includegraphics[scale=1]{fig3_TNR.eps}
\caption{The entanglement rate and the gate to decoherence ratio between two NV centers separated by $1\ \mu\text{m}$ as a function of the waveguide thickness $d$. NV centers are placed on the YIG waveguide as drawn in Figs. 2(a) and 2(d). Red curves and blue curves are calculated for $h=25 \text{ nm}$ and $5 \text{ nm}$, respectively. Solid curves and dashed curves are calculated for a fixed aspect ratio $w/d$ and width $w$ of the waveguide, respectively. Sharp dips correspond to the nodes in the oscillation of $g_\text{eff}$ as shown in Fig.~\ref{fig2}(e). Calculation is performed for detuning $\Delta\omega/2\pi=3 \text{ MHz}$.
}
\label{fig3}
\end{figure}
\section{Finite length ferromagnetic bar}
\label{secIV}
In this section we show that the NV-magnon coupling strength can be strongly enhanced under the magnon confinement effect of a finite length ferromagnetic bar. As the magnon mode frequencies are discretized for this case, the system allows us to control the NV levels to be on- and off-resonant to the magnon levels. Here, the interaction Hamiltonian Eq.~(\ref{Hint0}) can be transformed into the form of the Jaynes-Cummings model~\cite{candido2020predicted,raimond2006exploring}, and the entangling gate schemes used in both quantum optics and circuit quantum electrodynamics can now be implemented in our hybrid quantum system~\cite{sillanpaa2007coherent,ansmann2009violation,manovitz2017fast}.
We first obtain the NV-magnon interaction Hamiltonian for a finite length YIG bar using a similar procedure as done in Sec.~III. For that, we first take the equilibrium magnetization to be $\mathbf{M}_0=M_s\mathcal{F}(\mathbf{r})\hat{z}$ and approximate the $x,y$ component of the resulting static demagnetization field in Eq.~(\ref{Hmag}) to be negligible compared to its $z$ component. Although there is also a finite static demagnetization field contribution in the interaction Hamiltonian~Eq.~(\ref{Hint}), we verified that its value is small under the geometry parameters and NV center positions we consider.
Accordingly, we diagonalize the magnon Hamiltonian through the following expansion of the complex canonical variable
\begin{equation}
a(\mathbf{r})=\sum_{nmp}f_n^X(x)f_m^Y(y)f_p^Z(z)a_{(nmp)},
\end{equation}
where the $z$-directional basis function is \begin{equation}
f_p^Z(z)=\left[\frac{2\mathcal{F}^Z(z)}{(1+\delta_{p,0})l}\right]^{\frac{1}{2}}\cos(\kappa_p^Zz),
\end{equation}
$\kappa_p^Z=p\pi/l$, and $\mathcal{F}^Z(z)=\Theta(z)\Theta(l-z)$. As we consider the case with $d,w\ll l$, we restrict our discussion to the magnon mode subspace with $(n,m)=(0,0)$. Considering $z$-directional modes with $p=0,1,\cdots,N$, where $p=N$ labels the highest $z$-directional wavenumber mode to be taken into account, and keeping terms up to the quadratic order in the complex canonical variables, we obtain a $2(N+1)\times2(N+1)$ non-diagonal quadratic boson Hamiltonian. After applying the Bogoliubov transformation with the paraunitary matrix~\cite{colpa1978diagonalization,shindou2013topological} and promoting the complex canonical variables to the quantum creation and annihilation operators, we obtain (see Appendix C1)
\begin{equation}
\mathcal{H}_\mathrm{m}=\sum_{p=0,1,\cdots}\omega_{(00p)}\beta^\dag_{(00p)}\beta_{(00p)}.
\end{equation}
In a similar way as in Sec.~III, the NV-magnon interaction Hamiltonian can be mapped into the form of the Jaynes-Cummings model~\cite{candido2020predicted,raimond2006exploring} (see Appendix C2)
\begin{equation}
\mathcal{H}_\mathrm{int}=\sum_{i=1,2}\sum_{\mu=(00p)}g_\mu(\mathbf{r}_i)\sigma_{\mathrm{NV}_i}^+\beta_\mu+\mathrm{H.c.},
\end{equation}
where $g_\mu(\mathbf{r}_i)\propto\sqrt{\omega_M\omega_{dwl}}$ [$\omega_{dwl}=\mu_0\gamma^2/(dwl)$] is the coupling strength between the NV center spin and the $\mu$-magnon mode in the unit of energy. {As the magnon creation operator ${\beta_\mu^{\dagger}}$ applied to the magnon number state $|n_{\mu} \rangle$ gives rise to a factor of $\sqrt{n_{\mu}+1}$, we expect the on-resonant NV-magnon configuration to have $\sqrt{n_{\mu}+1}$ faster energy-transfer oscillations between the NV-center spin and the $\mu$-magnon mode. However, at finite temperature, which can be thought of as a statistical mixture of different magnon-number states, these different-period oscillations will average out incoherently. Therefore, finite temperature does not improve the quality of NV-NV entanglement via magnon modes even though the mean magnon number $\langle n_{\mu}\rangle$ is larger, indicating that magnon-mediated NV-NV entanglement needs to be performed at low temperatures $T \lesssim 150 $~mK (see Sec.~V).}
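The incoherent averaging argument above can be illustrated numerically: at fixed magnon number $n$ the exchange oscillation goes as $\cos(\sqrt{n+1}\,g t)$, and a thermal (geometric) mixture of number states dephases these different-period oscillations. The mean occupation below is a placeholder value.

```python
import numpy as np

# Thermal average of cos(sqrt(n+1)*g*t) over a geometric number distribution;
# n_bar is an assumed illustrative mean magnon number.
n_bar = 2.0
x = n_bar / (1.0 + n_bar)
n = np.arange(200)
p = (1.0 - x) * x**n                       # thermal number distribution

gt = 2 * np.pi                             # one full period of the n = 0 term
single_n = np.cos(np.sqrt(0 + 1) * gt)     # coherent: returns to +1
thermal = float(np.sum(p * np.cos(np.sqrt(n + 1) * gt)))   # washed out
```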
\begin{figure}[t]
\includegraphics[scale=1]{fig4_TNR.eps}
\caption{(a) NV center's transition frequencies and magnon spectrum as a function of external field $H_\text{ext}$ for $(d,w,l)=(5\text{ nm},30\text{ nm},3\ \mu\text{m})$. The dark gray and red lines represent frequencies $\omega_{(00,p=5)}$ and $\omega_\mathrm{NV}$, respectively. (b) Zoom-in of the crossing region between $\omega_{(005)}$ and $\omega_\text{NV}$. (c) Spatial plot of the coupling strength $g=g_{(005)}$ at $H_\text{ext}=H_\text{c}$ and $h=5\text{ nm}$. The white rectangle delimits the bar dimension, and the white cross mark represents the position of $\text{NV}_1$ referred to in (d). The corresponding cooperativity ${\cal C}_{(005)}$ is shown on the right axis. (d) Effective NV-NV coupling strength $g_{\rm{eff}}$ between two NV centers as a function of the NV-NV distance, where $\text{NV}_1$ and $\text{NV}_2$ are placed at $\mathbf{r}_1=(d+h)\hat{x}+w\hat{y}+(400\ \mathrm{nm})\hat{z}$ and $\mathbf{r}_2=\mathbf{r}_1+\delta z \hat{z}$, respectively. The red (blue) curve is calculated for $(d,p)=(5\text{ nm},5)$ [$(d,p)=(20\text{ nm},12)$]. The entanglement gate rate (ER) and the gate to decoherence ratio (GDR) are shown on the right axis. In both cases aspect ratio is $w/d=6$, length of the magnetic bar is $l=3 \ \mu\text{m}$, and detuning is $\Delta f=\Delta\omega/2\pi=(\omega_{(00p)}-\omega_\text{NV})/2\pi=3 \text{ MHz}$.}
\label{fig4}
\end{figure}
In Fig.~\ref{fig4}(a) we plot the external magnetic field $H_\text{ext}$ dependence of the discretized magnon mode frequencies of a YIG bar with dimensions $(d,w,l)=(5\text{ nm}, 30\text{ nm}, 3\ \mu\text{m})$. The neighboring magnon mode frequencies are separated from each other by over $2\pi\times 10\text{ MHz}$ for modes with $p\geq5$, as shown in Fig.~\ref{fig4}(b). At field $H_\text{ext}=H_\text{c}$, the NV center's transition frequency $\omega_\text{NV}$ and the magnon mode frequency $\omega_{(005)}$ are on-resonant. We plot in Fig.~\ref{fig4}(c) the spatial distribution of the NV-magnon coupling strength $g_{(005)}$ at $H_\text{ext}=H_\text{c}$ for a fixed NV center height $h=5\text{ nm}$ [see Fig.~\ref{fig2}(a)], and obtain $g_{(005)}\approx 2\pi\times 0.5$~MHz depending on the NV center positions. With the Gilbert damping parameter of YIG $\alpha=10^{-5}$~\cite{tabuchi2014hybridizing} and the coherence time of NV centers $T_2^*=1\text{ ms}$~\cite{herbschleb2019ultra}, we show on the right axis of Fig.~\ref{fig4}(c) the corresponding {single magnon $\mu$-mode} cooperativity\cite{li2015hybrid,NoriPolariton2018}
\begin{equation}
{{\cal C}_\mu=\frac{|g_\mu(\mathbf{r})|^2}{\alpha\omega_\mu /T_2^{*}}},
\end{equation}
which is a dimensionless measure of the coupling. {We emphasize that because this represents the single-magnon-mode cooperativity, the temperature dependence only appears in $\alpha$ and $T_{2}^*$ which for the purpose of our low-temperatures analysis are assumed to be independent of temperature.} We find ${\cal C}_{(005)}\gtrsim 10^4$ over a wide range of NV center positions, achieving the strong coupling regime for our hybrid quantum system. In contrast to Sec.~III, where we have a translationally invariant infinitely long waveguide, here the position of the NV center along $z$-direction plays a major role in the coupling strength. Our calculations enable us to optimize both the coupling strength and the cooperativity in order to increase NV-NV entanglement efficiency in our system.
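The quoted cooperativity ${\cal C}_{(005)}\gtrsim 10^4$ can be checked directly from the numbers in the text; the mode frequency below is an assumed representative value of order the NV transition frequency, and only its magnitude matters here.

```python
import numpy as np

# Single-magnon-mode cooperativity C = |g|^2 / (alpha * omega_mu / T2*).
g = 2 * np.pi * 0.5e6          # NV-magnon coupling (rad/s), from the text
alpha = 1e-5                   # YIG Gilbert damping parameter
T2_star = 1e-3                 # NV coherence time (s)
omega_mu = 2 * np.pi * 2.9e9   # magnon mode frequency (rad/s), assumed

C = g**2 / (alpha * omega_mu / T2_star)   # ~ 5e4, consistent with C >~ 1e4
```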
The virtual-magnon-mediated NV-NV interaction is calculated in a similar way as in Eq.~(\ref{GeffWG}) under the condition $|g_{\mu}(\bf{r})|\ll |\omega_{\mu}-\omega_{\rm{NV}} |$, and we obtain
\begin{equation}
g_\mathrm{eff}=\frac{g_{\mu}\left(\mathbf{r}_{1}\right)g_{\mu}^{*}\left(\mathbf{r}_{2}\right)}{\omega_\mu-\omega_\mathrm{NV}}
\end{equation}
with $\mu=(005)$ (see Appendix C3). {In the same way as in the waveguide case, this virtual-magnon-mediated coupling strength is independent of temperature.} Here, the two NV centers are placed at \hbox{$\mathbf{r}_1=(d+h)\hat{x}+w\hat{y}+(400\ \mathrm{nm})\hat{z}$} [see a cross mark in Fig.~\ref{fig4}(c)] and $\mathbf{r}_2=\mathbf{r}_1+\delta z \hat{z}$, where $\delta z$ is the NV-NV distance along the bar length. In Fig.~\ref{fig4}(d) we plot $g_\mathrm{eff}$ as a function of $\delta z$ for the detuning \hbox{$\Delta\omega=\omega_{(005)}-\omega_\text{NV}=2\pi\times3\mathrm{ MHz}$}, which could be produced by electric field~\cite{electricfieldNV2011}, strain~\cite{PhysRevLett.113.020503} or magnetic field deviation from $H_{\rm{ext}}=H_c$. The corresponding entangling gate rate and the gate to decoherence ratio are shown on the right axis. Surprisingly, useful entangling gates for $2.2\ \mu\text{m}$ separated NV centers with $g_{\rm{eff}}=2\pi\times90$~kHz and $\rm{GDR}>700$ are predicted for this YIG bar system. This makes experiments more accessible in terms of the independent optical initialization and the readout of NV centers than the waveguide case.
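The gate figures quoted above follow directly from the definitions $\mathrm{ER}=4g_\mathrm{eff}/\pi$ and $\mathrm{GDR}=4g_\mathrm{eff}T_2^*/\pi$; a quick check with the bar-geometry coupling reproduces $\mathrm{GDR}>700$.

```python
import numpy as np

# ER and GDR for g_eff = 2*pi*90 kHz and T2* = 1 ms, both from the text.
g_eff = 2 * np.pi * 90e3    # effective NV-NV coupling (rad/s)
T2_star = 1e-3              # NV coherence time (s)

ER = 4 * g_eff / np.pi
GDR = 4 * g_eff * T2_star / np.pi   # = 720 for these numbers
```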
We have also calculated these quantities for a YIG geometry that is less challenging to fabricate, with \hbox{$(d,w,l,h)=(20\text{ nm}, 120\text{ nm}, 3\ \mu\text{m}, 5\text{ nm})$}. The result is plotted as a blue curve in Fig.~\ref{fig4}(d), for which we obtain $\text{GDR}>100$ for the $2.2\ \mu\text{m}$ separated NV centers. This result clarifies the significance of using the YIG bar structures to entangle two NV centers separated by a few micrometers. Moreover, the discretized magnon mode frequencies allow for controlling the NV center frequencies to be on-resonant to one of the magnon mode frequencies, which enables the entanglement of two NV centers via the transduction of energy quanta that we discuss in the next section. We also comment that it would be possible to control the NV-magnon coupling strength via parametric driving of the discretized magnon modes as studied in cavity quantum electrodynamics~\cite{Leroux2018} (see Appendix I).
\section{Transduction and virtual-magnon exchange protocols}
\label{secV}
In this section, we explore and compare two entangling gate protocols for our hybrid quantum system, on-resonant transduction and off-resonant virtual-magnon exchange. Entanglement via the transduction protocol is simulated by controlling the NV center frequencies independently as illustrated in the left schematic of Fig.~\ref{fig5}(a). For this case, the NV spins are initially prepared in the state $|g\rangle_1|e\rangle_2$, i.e., $\text{NV}_1$ ($\text{NV}_2$) is in its ground (excited) state. We first make $\omega_{\text{NV}_2}$ on-resonant to the $\mu$-magnon mode frequency $\omega_{{\mu}}$ for a certain time $\tau_\mathrm{var}$ during which $\omega_{\text{NV}_1}$ is detuned from $\omega_{{\mu}}$ by ${\delta\omega=}2\pi\times5\text{ MHz}$. Second we swap the $\mathrm{NV}_1$ spin state and the magnon state by making $\omega_{\rm{NV_1}}=\omega_{\mu}$ for the swap gate time $\tau_\mathrm{SWAP}$ during which $\omega_{\mathrm{NV}_2}$ is detuned from $\omega_{\mu}$ by ${\delta\omega}$. The total interaction time in this protocol is $\tau_\mathrm{int}=\tau_\mathrm{var}+\tau_\mathrm{SWAP}$ and is varied by changing $\tau_\mathrm{var}$. The control of the NV centers' transition frequencies can be performed by applying a local magnetic field, electric field~\cite{electricfieldNV2011}, or strain~\cite{PhysRevLett.113.020503}. An alternative possibility of controlling the transition frequencies would be to use a periodic modulation of the external magnetic field~\cite{Oliver1653,XufengPRL2020} (see Appendix J). In contrast, in the virtual-magnon exchange protocol the NV centers' frequencies are both detuned from the $\mu$-magnon mode frequency by $\Delta \omega=\omega_{\mu}-\omega_{\mathrm{NV}_{1,2}}=2\pi\times 3\ \mathrm{MHz}$ [see the right schematic of Fig.~\ref{fig5}(a)]. After the preparation of the NV centers' spin state in $|g\rangle_1|e\rangle_2$, the whole system evolves over the interaction time $\tau_\mathrm{int}$.
\begin{figure}[t]
\includegraphics[scale=1]{fig5_TNR.eps}
\caption{(a) Schematic of on-resonant transduction (left) and off-resonant virtual-magnon exchange (right) entanglement protocols. (b) Comparison of the two protocols at $T=70 \text{ mK}$. The top two figures show NV center's excited state population $p_{ie}$ ($i=1,2$) and magnon population $\langle n \rangle=\langle \hat{n}_\mu\rangle$ [$\mu=(005)$] at the end of the gate operations as a function of the total system interaction time. NV centers are separated by $2.2 \ \mu\text{m}$ on top of the YIG bar [see Fig.~\ref{fig4}(c)]. For the transduction protocol, NV center frequencies are modulated as illustrated in the inset, where each line represents the frequency of NV centers or the magnon mode. The bottom two figures show entanglement measures as a function of the interaction time. The red, sky blue, and gray curves are the entanglement negativity scaled by the Bell-state's negativity, the degree of the Bell inequality violation (violated if the curve is above zero), and the fidelity to the target pure entangled states, respectively.}
\label{fig5}
\end{figure}
The time evolution of our hybrid quantum system for both protocols is simulated using the Lindblad master equation~\cite{lindblad1976,breuer2002theory,li2015hybrid} at a finite temperature $T$ considering two NV centers and a magnon mode $\mu$,
\begin{eqnarray}
\dot{\rho}=&&-i[\mathcal{H}(t), \rho]+2 \kappa\left(1+\bar{n}_\mathrm{m}^\mathrm{th}\right) \mathcal{D}[a] \rho\nonumber\\
&&+2 \kappa \bar{n}_\mathrm{m}^\mathrm{th} \mathcal{D}\left[a^{\dagger}\right] \rho+\frac{\gamma_{2}}{2}\sum_{i=1,2} \mathcal{D}\left[\sigma^z_{\mathrm{NV}_i}\right] \rho,\label{mainLindblad}
\end{eqnarray}
where $\mathcal{D}[\mathcal{O}] \rho=\mathcal{O} \rho \mathcal{O}^{\dagger}-\frac{1}{2}(\mathcal{O}^{\dagger} \mathcal{O}\rho+\rho\mathcal{O}^{\dagger} \mathcal{O})$, $\kappa=\alpha\omega_\mu$, $\gamma_2=1/T_2^*$, $a=\beta_{\mu}$, $a^\dagger=\beta^\dagger_{\mu}$, $\bar{n}_\mathrm{m}^\mathrm{th}=(\exp[\omega_\mu/k_\mathrm{B}T]-1)^{-1}$ is the thermal magnon population, $T$ is temperature, $k_\mathrm{B}$ is the Boltzmann constant, and $\rho$ is the density operator. Here, the magnon damping parameter $\kappa=\alpha\omega_{\mu}$ is based on the dissipation term in the Landau–Lifshitz–Gilbert equation $\left.\partial_t\mathbf{M}\right|_{\mathrm{diss}}=(\alpha/M_{\mathrm{s}})\mathbf{M}\times \partial_t\mathbf{M}$, resulting in $\left.\partial_t\beta_\mu\right|_\mathrm{diss}\approx-\alpha\omega_\mu\beta_\mu$ under the assumption $\partial\omega_{\mu}/\partial H_{\mathrm{ext}}\approx \mu_0 \gamma$, which is verified by Fig.~\ref{fig4}(b) (see Appendix D1). For the magnon mode contribution in the total Hamiltonian $\mathcal{H}(t)$, we only take into account the magnon mode with $\mu=(005)$, as this mode produces the dominant contribution in the NV-NV interaction as well as the magnon induced decoherence of NV centers in both protocols. As the NV center's longitudinal decay rate is much smaller than the transverse decoherence rate~\cite{bar2013solid}, we assume it to be zero in the simulation. The two NV centers are separated by $2.2 \ \mu\text{m}$ along the YIG bar length with $\mathbf{r}_1=(d+h)\hat{x}+w\hat{y}+(400\ \mathrm{nm})\hat{z}$ and $\mathbf{r}_2=\mathbf{r}_1+(l-800\ \mathrm{nm})\hat{z}$. We use the Gilbert damping parameter $\alpha=10^{-5}$ of YIG~\cite{tabuchi2014hybridizing} and the NV center coherence time $T_2^*=1 \text{ ms}$~\cite{herbschleb2019ultra}.
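The relative size of the rates entering this master equation can be checked numerically at $T=70$~mK; the mode frequency below is an assumed representative value for the $(005)$ mode, of order a few GHz.

```python
import numpy as np

# Rates in the Lindblad equation: magnon damping kappa = alpha*omega_mu, NV
# dephasing gamma_2 = 1/T2*, and thermal magnon number n_th (Bose factor).
hbar, kB = 1.054571817e-34, 1.380649e-23
alpha, T2_star, T = 1e-5, 1e-3, 70e-3
omega_mu = 2 * np.pi * 2e9          # magnon mode frequency (rad/s), assumed

kappa = alpha * omega_mu            # magnon damping rate (rad/s)
gamma2 = 1.0 / T2_star              # NV decoherence rate (1/s)
n_th = 1.0 / (np.exp(hbar * omega_mu / (kB * T)) - 1.0)
```

For these values the magnon damping rate exceeds the NV decoherence rate, which is the regime discussed in the protocol comparison below, and the thermal occupation is below one magnon.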
In the upper two panels of Fig.~\ref{fig5}(b), we plot the NV centers' excited state population $p_{ie}$ ($i=1,2$) and the magnon population $\langle n \rangle=\langle \hat{n}_{\mu}\rangle$ [$\mu=(005)$] at the end of the transduction {(on resonant)} and the virtual-magnon exchange {(off resonant)} protocols as a function of the total system interaction time $\tau_\mathrm{int}$ at $T=70 \text{ mK}$. In the lower two panels we plot three different entanglement measures as a function of the interaction time $\tau_\mathrm{int}$ for each protocol. More specifically, we plot the entanglement negativity normalized by the Bell-state's negativity, the degree of the Bell inequality violation, and the fidelity to the target pure entangled states, which are given by the red, sky blue, and gray curves, respectively. The resulting states are entangled if $\mathcal{N}>0$, and one expects to observe the violation of the Clauser-Horne-Shimony-Holt (CHSH) form of Bell inequality if $\text{CHSH Violation}>0$~\cite{horodecki1995violating,bartkiewicz2013entanglement} (see Appendix D1).
In Fig.~\ref{fig5}(b) we first find that the transduction protocol is faster in gate operation as compared to the virtual-magnon exchange protocol. This is because the NV-magnon on-resonant coupling rate $g_{\mu}\approx 2\pi \times 0.5$~MHz is larger than the off-resonant NV-NV coupling rate $g_{\rm{eff}}\approx 2\pi \times 90$~kHz. On the other hand, it is observed that the virtual-magnon exchange protocol results in larger amplitude oscillations in the NV centers' excited state populations and higher fidelity under the parameters and the temperature used in the simulation. This result is understood by a combination of two factors. First, the virtual-magnon exchange protocol only creates magnons virtually (with magnon population suppressed by $g_\mu/\Delta\omega$ due to the energy mismatch), thus being approximately insensitive to the magnon damping parameter. Secondly, the magnon damping rate $\alpha\omega_\mu$ is faster than the NV center's decoherence rate $1/T_2^*$, and therefore there is more loss of information if a real magnon is excited. Nonetheless, in both protocols we predict entangled states can be manipulated and the violation of the Bell inequality will be observed.
To further compare the two entanglement protocols, we have performed simulations under multiple temperatures and have observed that the virtual-magnon-exchange protocol is more robust at higher temperatures up to $\approx150$~mK (see Appendix D2 and E). Moreover, we show that both protocols do not produce useful entanglement for $T\gtrsim150$~mK due to the NV centers' dephasing from magnon number fluctuations of modes with $\mu\neq(005)$. We have also evaluated the decay contribution due to these magnon modes and have verified that this is negligible for temperatures $T\leq150$~mK for both upper and lower transitions of NV centers (see Appendix H). Interestingly, the transduction protocol improves more drastically at lower temperatures than the virtual-magnon exchange protocol. Based on the zero temperature analysis, we find an inequality for which the transduction protocol performs better (see Appendix D2)
\begin{eqnarray}
\alpha \lesssim \frac{\Delta\omega/g_\mu}{4(1-1/\pi)}\frac{1}{\omega_\mu T_2^{*}}.
\end{eqnarray}
For the parameters used in this section, the transduction protocol is shown to outperform the virtual-magnon exchange protocol (with $\Delta \omega=2\pi\times 3\ \mathrm{MHz}$) if $\alpha \lesssim 10^{-7}$. In Appendix D2 we provide phase diagrams in ($\alpha$, $1/T_2^*$)-space for which protocol gives higher fidelity under multiple detuning values. Analytical expressions for the fidelity in the limit $\alpha\omega_\mu/g_\mu\ll1$ and $T_2^{*-1}/g_\mu\ll1$ are also provided. To show that the magnon-mediated entanglement scheme can directly be extended to two-qubit entangling gates, we have also calculated an average gate fidelity $\bar{F}$~\cite{nielsen2002simple} as a square-root-of-$i$SWAP gate for the off-resonance protocol, and have obtained $\bar{F}\approx0.88$ at $T=70$~mK (see Appendix F).
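Evaluating the crossover inequality above with the parameters of this section reproduces the quoted threshold of order $10^{-7}$; the mode frequency is again an assumed representative value.

```python
import numpy as np

# Crossover Gilbert damping below which the on-resonant transduction protocol
# outperforms the virtual-magnon exchange protocol.
Delta_omega = 2 * np.pi * 3e6       # detuning (rad/s), from the text
g_mu = 2 * np.pi * 0.5e6            # NV-magnon coupling (rad/s), from the text
T2_star = 1e-3                      # NV coherence time (s)
omega_mu = 2 * np.pi * 2.9e9        # magnon mode frequency (rad/s), assumed

alpha_max = (Delta_omega / g_mu) / (4 * (1 - 1 / np.pi)) / (omega_mu * T2_star)
```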
As for keeping the system at low temperatures $T\lesssim 150$~mK, we note that the laser illumination and microwave irradiation on the system for the initialization, manipulation, and readout of NV centers may cause unwanted heating. Although YIG has been studied under microwave irradiations in superconducting qubit platforms~\cite{Lachance-Quirion425} and color centers have been studied under laser illuminations in dilution refrigerator temperatures $T<100$~mK~\cite{Evans662,PhysRevLett.120.053603,singh2020epitaxial,PhysRevB.102.104114}, it would be important to minimize the average microwave irradiation and laser illumination power on the system to maintain the required low temperatures. Here, of particular interest is the possibility of cooling down the target magnon mode to its ground state in analogy to cavity optomechanics techniques~\cite{PhysRevLett.99.093901,PhysRevLett.99.093902,CoolingMech2011,NVmechcooling2013,NVmechcooling2017}, e.g., via the optomagnonic interaction~\cite{PhysRevLett.121.087205} or via the coupling to NV centers~\cite{NVmechcooling2013,NVmechcooling2017}. For example in Fig.~\ref{fig5}(b), we have observed that the mean magnon occupation number at the end of the on-resonant protocol is smaller than its thermal level [see $\langle n(\tau_{\rm{int}}=0)\rangle$ in the off-resonant protocol], which is reminiscent of the ground-state cooling of magnons and motivates future studies on the alternative cooling methods of the NV-magnon hybrid quantum system.
We also note that the small Gilbert damping parameter $\alpha= 10^{-5}$ used in the current study may be optimistic for small YIG structures as the value is obtained from bulk YIG samples~\cite{tabuchi2014hybridizing}. This is partially due to the nontrivial magnetic behavior at millikelvin temperatures of the gadolinium-gallium-garnet (GGG) substrates on which YIG is typically grown~\cite{kosen2019microwave}, which would be mitigated by employing a free-standing structure~\cite{awschalom2021quantum}, and also due to the impurity relaxation mechanism in YIG~\cite{jermain2017increased}. However, with remarkable advances in recent magnonics research, it has been shown that the damping of thin YIG films can be improved considerably, e.g., with techniques based on a recrystallization of amorphous YIG into single crystals~\cite{Recrystalization2016}. Additionally, we obtain a high cooperativity $\mathcal{C}\approx 500$ even with the larger Gilbert damping parameter $\alpha=10^{-3}$ as calculated from Fig.~\ref{fig4}(c). We have further performed simulations with $\alpha=10^{-3}$ in Appendix G, and find that the entangled state can still be produced at $T=70$~mK for the off-resonant protocol, although further optimization on the detuning frequency is needed to improve the quality of the entanglement in order to avoid the overlap of the NV centers' transition frequencies with the now broader linewidth of the magnon mode resonance (see Appendix G).
\section{Conclusion}
\label{secVI}
We study hybrid quantum systems consisting of NV centers in diamond and magnons in ferromagnetic bar and waveguide structures. Based on the Hamiltonian formalism of the dipole-exchange magnons, we predict useful two-NV entangling gates over $1$-$2\ \mu\text{m}$ NV-NV separations at finite temperatures. Transduction and virtual-magnon exchange protocols of entanglement are explored and compared under realistic experimental conditions. Although the transduction protocol is faster in gate operation, the virtual-magnon exchange protocol results in higher fidelity as the typical Gilbert damping parameter of YIG makes the magnons less coherent than the NV centers. We have obtained entangled-state fidelities $F\approx0.81$ for the transduction protocol and $F\approx0.95$ for the virtual-magnon exchange protocol at $T=70$~mK. The virtual-magnon exchange protocol is also found to be robust against thermal magnon fluctuations, although the transduction protocol outperforms it close to zero temperature for $\alpha\omega_\mu T_2^*\lesssim {(\Delta\omega/g_\mu)}/{[4(1-1/\pi)]}$. Calculations presented in this study help to implement optimal device geometries and entangling gate protocols in future experiments trying to entangle spatially separated NV centers using magnons in ferromagnets.
\section*{Acknowledgement}
This work is supported by the Vannevar Bush Faculty Fellowship ONR N00014-17-1-3026, the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Science and Engineering Division (M.~F, D.~D.~A.), the U.S. Department of Energy, Office of Basic Energy Sciences under Award Number DE-SC0019250 (D.~C. and M.~E.~F.), and the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers (D. D. A.). The authors thank H.-S. Chang, J.C. Karsch, G. Smith, P.C. Jerger, A. Crook, Y. Tsaturyan, L.R. Weiss, and S.E. Sullivan for useful discussions.
\normalem
\newcommand{\ssection}[1]{\section[#1]{\centering #1}}
\newcommand{\ssubsection}[1]{\subsection[#1]{\centering #1}}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{proposition}[thm]{Proposition}
\theoremstyle{definition}
\newtheorem{example}[thm]{Example}
\newtheorem{remark}[thm]{Remark}
\renewcommand{\P}{P_\infty}
\renewcommand{\S}{\sigma}
\newcommand{\e}{\eta}
\renewcommand{\a}{\alpha}
\renewcommand{\b}{\beta}
\renewcommand{\r}{\gamma}
\newcommand{\Q}{\mathbb Q}
\newcommand{\Z}{\mathbb Z}
\newcommand{\C}{\mathbb C}
\newcommand{\R}{\mathbb R}
\newcommand{\PP}{\mathbb P}
\newcommand{\HH}{\mathbb H}
\renewcommand{\k}{\mathfrak k}
\renewcommand{\o}{\omega}
\newcommand{\sm}{\left(\smallmatrix}
\newcommand{\esm}{\endsmallmatrix\right)}
\begin{document}
{\thanks{\scriptsize This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education(NRF-2016R1D1A1B03934504).}}
\maketitle
\noindent
\begin{abstract}
Let $E$ be an elliptic curve defined over $\Q$, and let $G$ be the torsion group $E(K)_{tors}$ for some cubic field $K$, where $G$ is a group that does not occur over $\Q$.
In this paper, we determine over which types of cubic number fields (cyclic cubic, non-Galois totally real cubic, complex cubic or pure cubic) $G$ can occur, and if so, whether it can occur infinitely often or not.
Moreover, if it occurs, we provide elliptic curves $E/\Q$ together with cubic fields $K$ so that $G= E(K)_{tors}$.
\end{abstract}
\noindent
{\bf Key Words:} elliptic curve, modular curve, torsion subgroup, cubic field. \\
{\it 2010 Mathematics Subject Classification}. {Primary: 11G05;
Secondary: 11G18}.
\section{Introduction}\label{sec:Introduction}
A celebrated theorem, finally proved by Mazur \cite{M}, states that the torsion group $E(\Q)_{tors}$ of an elliptic curve $E$ over the rational numbers must be isomorphic to one of the following 15 types:
\begin{equation}\label{eq:rt}
\begin{array}{ll}
\Z/n\Z,&n=1,2,3,\dots,10,12\\
\Z/2\Z\times\Z/2n\Z,&n=1,2,3,4
\end{array}
\end{equation}
Let $E$ be an elliptic curve over $\Q$, and $K$ be a cubic number field.
Najman \cite{N} determined that $E(K)_{tors}$ is one of the following 20 types:
\begin{equation}\label{eq:ct}
\begin{array}{ll}
\Z/n\Z,&n=1,2,3,\dots,10,12,13,14,18,21\\
\Z/2\Z\times\Z/2n\Z,&n=1,2,3,4,7
\end{array}
\end{equation}
Moreover, he showed that the elliptic curve 162B1 over $\Q(\zeta_9)^+$ is the unique rational elliptic curve
with torsion $\Z/21\Z$ over a cubic field, and for all the other groups $G$ in the list \eqref{eq:ct}, there exist infinitely many rational elliptic curves that have torsion $G$ over some cubic field.
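The two classifications can be encoded directly, with cyclic groups written as $(n,)$ and products $\Z/2\Z\times\Z/2n\Z$ as $(2,2n)$; a quick check confirms the counts of 15 and 20 and that Mazur's list is contained in Najman's.

```python
# Mazur's 15 rational torsion groups and Najman's 20 cubic torsion groups.
rational = [(n,) for n in list(range(1, 11)) + [12]] + \
           [(2, 2 * n) for n in range(1, 5)]
cubic = [(n,) for n in list(range(1, 11)) + [12, 13, 14, 18, 21]] + \
        [(2, 2 * n) for n in [1, 2, 3, 4, 7]]
```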
Later, Gonz\'alez-Jim\'enez, Najman and Tornero \cite{GNT} determined the set of possible torsion structures over a cubic field of a rational elliptic curve such that $E(\Q)_{tors}= G$ for each $G$ listed in \eqref{eq:rt}. Also, they studied the number of cubic fields $K$ such that $E(\Q)_{tors}\neq E(K)_{tors}$.
Recently, Derickx and Najman \cite{DN} determined all the possible torsion groups of elliptic curves over cyclic cubic fields, over non-cyclic totally real cubic fields and over complex cubic fields.
Also Gonz\'alez-Jim\'enez \cite{G} gave an explicit description of the possible torsion growth of rational elliptic curves with complex multiplication over cubic fields.
Let $K$ be a cubic number field.
Then $K=\Q(\alpha)$ for some $\alpha$ whose minimal polynomial is a cubic polynomial $f(x)$.
If all three roots of $f(x)$ are real, $K$ is called a {\it totally real cubic field}, and if $f(x)$ has a non-real root, it is called a {\it complex cubic field}.
Moreover, if $K$ contains all three roots of $f(x)$, i.e., $K$ is a Galois extension of $\Q$, then $K$ is called a {\it cyclic cubic field}. Indeed, a cyclic cubic field must be a totally real cubic field.
Finally, if $K$ can be obtained by adjoining the cube root $\sqrt[3]{n}$ of a positive integer $n$, then $K$ is called a {\it pure cubic field}.
In this paper, we determine over which types of cubic fields a torsion group $G$ not occurring over $\Q$ can occur, and if so, whether it can occur infinitely often or not.
Moreover, if it occurs, we provide elliptic curves $E$ together with cubic fields $K$ so that $G= E(K)_{tors}$.
Finally, we note that $K$-{\it rational} means defined over a field $K$, and {\it rational} without $K$ means $\Q$-rational.
\section{Results}
In this section, we treat each torsion group $G$ not occurring over $\Q$ case by case.
As stated in the Introduction, the case $E(K)_{tors}=\Z/21\Z$ needs no further treatment.
Similar to Cremona and Watkins \cite{CW}, for a positive integer $N$, we will describe models for elliptic curves with $N$-isogenies as
$$y^2 = x^3 + A_N(t,U)x + B_N(t,U)$$
where $A_N(t,U)=f(t)U^2$ and $B_N(t,U)=g(t)U^3$, $t$ and $U$ are parameters and $f$ and $g$ are functions of $t$.
A fixed value of $t$ corresponds to a family of quadratic twists of an elliptic curve (all with the same $j$-invariant), a fixed value of $t$ and a value of $U$ up to squares defines an elliptic curve,
and a $(t,U)$-pair defines a model.
\begin{lemma} \label{lem:GNT} {}\
\begin{itemize}
\item[(a)] \cite[Lemma 2.5]{GNT} Let $p$ be prime, $f$ a $p$-isogeny on $E$ over $\Q$, and let ${\rm ker}(f)$ be generated by $P$.
Then the field of definition $\Q(P)$ of $P$ (and all of its multiples) is a cyclic (Galois) extension of $\Q$ of order dividing $p-1$.
\item[(b)] \cite[Lemma 18]{N} Let $E$ be an elliptic curve over $\Q$ and $K$ a cubic number field. If $P$ is a $K$-rational $n$-torsion point
of $E$ where $n$ is odd and not divisible by $3$, then $P$ generates a $\Q$-rational $n$-isogeny of $E$.
\end{itemize}
\end{lemma}
\begin{center}
2.1.\,$E(K)_{tors}=\Z/13\Z$
\end{center}
\begin{lemma}\label{lem:13}
Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/13\Z$ for some cubic field $K$.
Then $K$ is a cyclic cubic field.
\end{lemma}
\begin{proof}
Since $E(\Q)_{tors}\subseteq E(K)_{tors}=\Z/13\Z$, Mazur's theorem shows that $E(\Q)_{tors}=\{O\}$ is trivial.
Moreover, a $K$-rational $13$-torsion point $P$ generates a $\Q$-rational $13$-isogeny by Lemma \ref{lem:GNT}(b), and then Lemma \ref{lem:GNT}(a) shows that $K=\Q(P)$ is a cyclic extension of $\Q$.
\end{proof}
Now we will show that $\Z/13\Z$ occurs infinitely often over cyclic cubic fields and construct an infinite family of elliptic curves whose torsion group is $\Z/13\Z$ over cyclic cubic fields.
By the computation in \cite{CW}, we have a family of elliptic curves $E_{t,U}$ which have $13$-isogenies over cyclic cubic fields $K_{t,U}$ as follows:
\begin{equation}\label{eq:iso13}
E_{t,U}:\ \ y^2 = x^3 + A_{13}(t,U)x + B_{13}(t,U),
\end{equation}
where
\begin{align*}
A_{13}(t,U)=&-27(t^4-t^3+5t^2+t+1)(t^2+1)^2(t^8-5t^7+7t^6-5t^5+5t^3+7t^2\\
&+5t+1)U^2,\\
B_{13}(t,U)=&-54(t^4-t^3+5t^2+t+1)(t^2+1)^4(t^{12}-8t^{11}+25t^{10}-44t^9+40t^8\\
&+18t^7-40t^6-18t^5+40t^4+44t^3+25t^2+8t+1)U^3
\end{align*}
with $t\neq 0$.
$K_{t,U}=\Q(\alpha_{t,U})$ where $\alpha_{t,U}$ is a root of an irreducible cubic polynomial $a_3(t,U)x^3+a_2(t,U)x^2 +a_1(t,U)x+a_0(t,U)$ for some rational numbers $t$ and $U$ where
\begin{align*}
a_3(t,U)=&t^{12},\\
a_2(t,U)=&9t^8(t-1)^2(t^2+1)(t^4-t^3+5t^2+t+1)U,\\
a_1(t,U)=&27t^4(t^2+1)^2(t^4-t^3+5t^2+t+1)(t^8-5t^7+15t^6-29t^5+16t^4-3t^3\\
&-9t^2-3t+1)U^2,\\
a_0(t,U)=&27(t^2+1)^3(t^4-t^3+5 t^2+t+1)(t^{14}-8 t^{13}+38 t^{12}-124 t^{11}+245 t^{10}\\
&-326 t^9+228 t^8+120 t^7+12 t^6+38 t^5-43 t^4-80 t^3-34 t^2-4 t+1)U^3.
\end{align*}
For simplicity, let $E_t$, $K_t$ and $\alpha_t$ denote $E_{t,1}$, $K_{t,1}$ and $\alpha_{t,1}$, respectively.
Now we will find a quadratic twist $E_{t,U}$ of $E_t$ that has a $K_t$-rational 13-torsion point.
Note that $\alpha_t$ is the $x$-coordinate of a $13$-torsion point on $E_t$.
Let $\beta_t$ denote its $y$-coordinate.
Write $U=d^2$. Under the quadratic twist by $U$, $E_t$ becomes $E_{t,U}$ and $(\alpha_t,\,\beta_t)$ maps to $(d^2\alpha_t,\, d^3\beta_t)$.
For the image point to be $K_t$-rational we need $d^3\beta_t\in K_t$; since $U=d^2\in\Q$, this is equivalent to $d\beta_t\in K_t$.
One can easily check that $E_{t,U}$ has a 13-torsion point over $K_t$ if and only if $U=d^2\in\Q$ and $d\beta_t\in K_t$.
Now $\beta_t^2=\alpha_t^3+A_{13}(t,1)\alpha_t+B_{13}(t,1)=a_2(t)\alpha_t^2+a_1(t)\alpha_t+a_0(t)\in K_t$ where
\begin{align*}
a_2(t)=&-9(t^2+1)(t^4-t^3+5t^2+t+1)(t^5-t^4)^2/t^{12},\\
a_1(t)=&-18(t^2+1)(t^4-t^3+5t^2+t+1)(3t^9-12t^8+24t^7-42t^6+15t^5-33t^4\\
&-12t^3-6t^2-6t-3)(t^5-t^4)/t^{12},\\
a_0(t)=&-9(t^2+1)(t^4-t^3+5t^2+t+1)(3t^9-12t^8+24t^7-42t^6+15t^5-33t^4\\
&-12t^3-6t^2-6t-3)^2/t^{12}.
\end{align*}
Hence we have
\begin{equation}\label{eq:beta1}
d^2\beta_t^2=d^2a_2(t)\alpha_t^2+d^2a_1(t)\alpha_t+d^2a_0(t).
\end{equation}
On the other hand, $d^2\beta_t^2=(d\beta_t)^2$ is a square in $K_t$, hence we have
\begin{equation}\label{eq:beta2}
d^2\beta_t^2=(b_2(t)\alpha_t^2+b_1(t)\alpha_t+b_0(t))^2=c_2(t)\alpha_t^2+c_1(t)\alpha_t+c_0(t)
\end{equation}
for some $b_i(t),c_i(t)\in \Q(t)$ with $i=0,1,2$.
By comparing \eqref{eq:beta1} and \eqref{eq:beta2} and using {\sc Maple}, we can obtain the following:
\begin{align*}
U =&\frac{-1}{(t^2+1)(t^4-t^3+5t^2+t+1)},\\
b_2(t) =& 0,\\
b_1(t) =& \frac{3(t-1)}{t^2},\\
b_0(t)=&\frac{9(t^2+1)(t^7-4t^6+7t^5-10t^4-2t^3-t^2-2t-1)}{t^6}.
\end{align*}
Finally, by letting $t=u$ and $U =\frac{-1}{(u^2+1)(u^4-u^3+5u^2+u+1)}$ in \eqref{eq:iso13}, we have an infinite family of rational elliptic curves $E_u$ over cyclic cubic fields $K_u$ with $E_u(K_u)_{tors}=\Z/13\Z$ as follows:
$\bullet\, E_u:\ \ y^2=x^3+A(u)x+B(u)$ where
\begin{align*}
A(u)=&-27(u^8-5 u^7+7 u^6-5 u^5+5 u^3+7 u^2+5 u+1)/(u^4-u^3+5u^2+u+1),\\
B(u)=&54(u^2+1)(u^{12}-8 u^{11}+25 u^{10}-44 u^9+40 u^8+18 u^7-40 u^6-18 u^5+40 u^4\\
&+44 u^3+25 u^2+8 u+1)/(u^4-u^3+5u^2+u+1)^2
\end{align*}
with $u(u^4 -u^3 +5u^2 +u+1)\neq 0$.
$\bullet\, K_u=\Q(\alpha_u)$ where $\alpha_u$ is a root of an irreducible polynomial $a_3(u)x^3+a_2(u)x^2 +a_1(u)x+a_0(u)$ for some rational number $u$ where
\begin{align*}
a_3(u)=&u^{12}(u^4-u^3+5u^2+u+1)^2,\\
a_2(u)=&-9u^8(u-1)^2(u^4-u^3+5u^2+u+1)^2,\\
a_1(u)=&27u^4(u^4-u^3+5u^2+u+1)(u^8-5u^7+15u^6-29u^5+16u^4-3u^3-9u^2\\
&-3u+1),\\
a_0(u)=&-27 u^{14}+216 u^{13}-1026 u^{12}+3348 u^{11}-6615 u^{10}+8802 u^9-6156 u^8\\
&-3240 u^7-324 u^6-1026 u^5+1161 u^4+2160 u^3+918 u^2+108 u-27.
\end{align*}
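As a consistency check (our addition, not part of the argument above), the specialization $U =\frac{-1}{(u^2+1)(u^4-u^3+5u^2+u+1)}$ in \eqref{eq:iso13} can be verified with SymPy; the names `q4`, `q8`, `q12` below are ad-hoc abbreviations for the degree-4, 8 and 12 factors.

```python
import sympy as sp

u = sp.symbols('u')
q4 = u**4 - u**3 + 5*u**2 + u + 1
q8 = u**8 - 5*u**7 + 7*u**6 - 5*u**5 + 5*u**3 + 7*u**2 + 5*u + 1
q12 = (u**12 - 8*u**11 + 25*u**10 - 44*u**9 + 40*u**8 + 18*u**7
       - 40*u**6 - 18*u**5 + 40*u**4 + 44*u**3 + 25*u**2 + 8*u + 1)

# Cremona-Watkins coefficients A_13(t,U), B_13(t,U) with t = u and the chosen U
U = -1/((u**2 + 1)*q4)
A13 = -27*q4*(u**2 + 1)**2*q8*U**2
B13 = -54*q4*(u**2 + 1)**4*q12*U**3

# coefficients A(u), B(u) of the specialized family E_u
A = -27*q8/q4
B = 54*(u**2 + 1)*q12/q4**2

assert sp.cancel(A13 - A) == 0
assert sp.cancel(B13 - B) == 0
```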
By this result together with Lemma \ref{lem:13}, we have the following result:
\begin{thm} Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/13\Z$ for some cubic field $K$.
Then $K$ is a cyclic cubic field.
Moreover, there exist infinitely many non-isomorphic rational elliptic curves $E$ and cyclic cubic fields $K$ so that $E(K)_{tors}=\Z/13\Z$.
\end{thm}
\begin{center}
2.2.\, $E(K)_{tors}=\Z/14\Z$
\end{center}
In this subsection, we are interested in the case where $\Z/14\Z$ is the full torsion of $E(K)$.
Elliptic curves with $E(K)_{tors}=\Z/2\Z\times\Z/14\Z$ will be treated in Subsection 2.4.
Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/14\Z$ for some cubic field $K$.
By \cite[Theorem 1.2]{GNT}, $E(\Q)_{tors}=\Z/2\Z$ or $E(\Q)_{tors}=\Z/7\Z$.
First consider the case $E(\Q)_{tors}=\Z/2\Z$. Let $P$ be a $7$-torsion point in $E(K)_{tors}$.
Then $K=\Q(P)$, hence $K$ is a cyclic cubic field by Lemma \ref{lem:GNT}.
Also $E(K)_{tors}$ defines a rational $14$-isogeny on $E$, hence it gives rise to a non-cuspidal rational point on $X_0(14)$.
By \cite{K}, $X_0(14)$ contains only two such points.
Now we explain how to find the rational elliptic curves corresponding to them.
By the method explained in \cite{JKL2}, we can construct a map from $X_1(14)\to X_0(14)$.
Note that $X_1(14)$ and $X_0(14)$ are defined by
\begin{align}\label{eq:14-1}
&X_1(14):\,\, y^2 + (x^2 + x)y + x=0,\\ \label{eq:14-2}
&X_0(14):\,\, v^2+(u+3)v+u^3+6u+8=0.
\end{align}
Then the natural map $\phi:X_1(14)\to X_0(14)$ is defined by
\begin{equation}\label{eq:map14}
(u,v)=\phi(x,y)=\left(\frac{-1-y+y^3}{y^2}, \frac{-1-x^2-x^3-y-y^3-3xy-xy^2}{xy}\right).
\end{equation}
We can find 6 rational points satisfying \eqref{eq:14-2} which correspond to 2 non-cuspidal points and 4 cusps.
The rational points corresponding to non-cuspidal points are $P_1:=(-2,3)$ and $P_2:=(-9,-25)$.
By using \eqref{eq:map14}, we have that the points lying above $P_1$ and $P_2$ which satisfy \eqref{eq:14-1} are $Q_1:=(1-\alpha-\alpha^2,\,\alpha)$ and $Q_2:=\left(-3 + 2 \alpha + 2 \alpha^2,\,1-2\alpha^2\right)$, respectively, where $\alpha$ is a root of the irreducible cubic polynomial $x^3+2x^2-x-1$.
Actually,
$$K=\Q(\alpha)=\Q(\zeta_7)^+,$$
the maximal real subfield of the $7$-th cyclotomic field.
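As an independent check of these point computations, one can verify with SymPy, working modulo the minimal polynomial of $\alpha$, that $Q_1$ lies on $X_1(14)$ and that $\phi(Q_1)=P_1$ (the case of $Q_2$ and $P_2$ is analogous); the helper `red` below is ad-hoc.

```python
import sympy as sp

a = sp.symbols('a')
m = a**3 + 2*a**2 - a - 1            # minimal polynomial of alpha

def red(p):
    # reduce a polynomial expression in a modulo m
    return sp.rem(sp.expand(p), m, a)

x, y = 1 - a - a**2, a               # the point Q_1

# Q_1 satisfies the equation of X_1(14)
assert red(y**2 + (x**2 + x)*y + x) == 0

# phi(Q_1) = (-2, 3) = P_1:  u = (-1 - y + y^3)/y^2 = -2,  v = N/(x*y) = 3
assert red((-1 - y + y**3) + 2*y**2) == 0
N = -1 - x**2 - x**3 - y - y**3 - 3*x*y - x*y**2
assert red(N - 3*x*y) == 0

# P_1 = (-2, 3) lies on X_0(14)
u0, v0 = -2, 3
assert v0**2 + (u0 + 3)*v0 + u0**3 + 6*u0 + 8 == 0
```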
By using the rational maps in Table 7 and p.~1133 of \cite{S}, we obtain the elliptic curves $E_1$ and $E_2$ corresponding to $Q_1$ and $Q_2$, respectively, as follows:
\begin{align*}
E_1:\,& y^2+\left(\frac{5}{7}\alpha^2+\frac{2}{7}\alpha +\frac{3}{7}\right)xy+\left(\alpha^2-\frac{1}{7}\alpha-\frac{3}{7}\right)y=x^3+\left(\alpha^2-\frac{1}{7}\alpha-\frac{3}{7}\right)x^2\\
E_2:\,& y^2-\left(\frac{13}{7}\alpha^2+\frac{22}{7}\alpha -\frac{23}{7}\right)xy-\left(\frac{4}{7}\alpha^2+\frac{12}{7}\alpha+1\right)y=x^3-\left(\frac{4}{7}\alpha^2+\frac{12}{7}\alpha+1\right)x^2
\end{align*}
They are $K$-rational elliptic curves with torsion $\Z/14\Z$ over $K$.
Also the $j$-invariants of $E_1$ and $E_2$ are $-3375$ and $16581375$, respectively.
By using LMFDB\cite{L}, we can find that
$$ 49A3:\ \ \ y^2 +xy = x^3 -x^2 -107x+552$$
and
$$ 49A4:\ \ \ y^2 +xy = x^3 -x^2 -1822x+30393$$
are elliptic curves with $j$-invariants $-3375$ and $16581375$, respectively.
By the computer algebra system {\sc Maple}, we confirm that $E_1$ and $E_2$ are isomorphic to 49A3 and 49A4 over $K$, respectively.
Thus we can conclude that 49A3 and 49A4 are two rational elliptic curves corresponding to the two non-cuspidal rational points on $X_0(14)$.
However, proving that they are up to $\Q$-isomorphisms the only rational elliptic curves with torsion $\Z/14\Z$ over $\Q(\zeta_7)^+$
requires some more justification. A priori, the modular curve only tells us that they are the only such curves up to $\overline{\Q}$-isomorphisms.
We still have to exclude the possibility that a quadratic twist, i.e. a rational elliptic curve isomorphic to 49A3 or 49A4 only over
$\Q(\sqrt{d})$, might also have torsion $\Z/14\Z$ over $K$.
The quadratic twist multiplies the $y$-coordinate of the $7$-torsion point
with a quadratic irrationality, and hence moves it out of $K$. But we have to take into account the possibility that at the same time another
$7$-torsion point might become $K$-rational. This would imply that over the composite field $\Q(\sqrt{d})K$ the curve 49A3 resp. 49A4 has
two independent $7$-torsion points; so by the Weil pairing $\Q(\sqrt{d})K=\Q(\zeta_7)$, and hence $d=-7$.
Now we can invoke \cite[Theorem 1.1]{GL}, which says (among other things) that no rational elliptic curve can
acquire its full $7$-torsion over $\Q(\zeta_7)$. Alternatively, one can use {\sc Magma} to check directly that
over $K=\Q(\zeta_7)^+$ the $\Q(\sqrt{-7})$-twists of 49A3 and 49A4 still only have torsion $\Z/2\Z$.
We also mention that 49A3 and 49A4 are $CM$-curves with complex multiplication by $\Z[\frac{1+\sqrt{-7}}{2}]$ resp. $\Z[\sqrt{-7}]$.
Next consider the case $E(\Q)_{tors}=\Z/7\Z$.
Suppose $E$ is defined by a short Weierstrass form.
In this case, if we adjoin the $x$-coordinate $x(P)$ of a point $P$ of order 2, we have a cubic field $K=\Q(x(P))$ over which $E(K)=\Z/14\Z$.
Note that by \cite[Theorem 1.2]{GNT} (or rather by the proof of \cite[Proposition 29]{N}), $E(\Q)_{tors}\cong\Z/7\Z$ can never give
$E(K)_{tors}\cong\Z/2\Z\times\Z/14\Z$, so $K$ is automatically non-Galois.
Since $X_1(7)$ is a rational curve, it contains infinitely many rational points, hence there exists an infinite family of elliptic curves with $7$-torsion.
One can find the parametrization of such curves in \cite[Table 3]{Ku} as follows:
$\bullet\ \ E_u:\,y^2-(u^2-u-1)xy-(u^3-u^2)y=x^3-(u^3-u^2)x^2$
\noindent with discriminant
$$\Delta_u= u^7(u-1)^7(u^3-8u^2+5u+1)\neq 0.$$
We point out that in \cite{N} the constant term in the cubic factor of this discriminant carries an incorrect sign and that this slip propagates through
that paper (\cite[p.262]{N} and \cite[p.265]{N}). However, it seems that this is merely a typo and that for the computer calculations the correct formula
has been used. For example on page 265 in the proof that $E(\Q)_{tors}\cong\Z/7\Z$ never produces $E(K)_{tors}\cong\Z/2\Z\times\Z/14\Z$ the printed
(i.e. incorrect) formula would lead to a Jacobian of rank $1$. On page 262 (in the proof that $28$-torsion cannot occur) the mistake does not affect the
outcome of the computation.
Note that $E_u$ is isomorphic to the elliptic curve defined by
\begin{align*}
y^2=f_u(x):=&x^3-\frac{1}{3}(u^2-u+1)(u^6-11u^5+30u^4-15u^3-10u^2+5u+1)x\\
&+\frac{2}{27}(u^{12}-18 u^{11}+117 u^{10}-354 u^9+570 u^8-486 u^7+273 u^6\\
&-222 u^5+174 u^4-46 u^3-15 u^2+6 u+1)
\end{align*}
Let $\alpha_u$ be a root of an irreducible polynomial $f_u(x)$ for some rational number $u$.
Then $K_u:=\Q(\alpha_u)$ is a cubic field, so that $E_u(K_u)_{tors}=\Z/14\Z$.
Let $r_1,r_2,r_3$ be the three real roots of $u^3-8u^2+5u+1=0$ with $r_1<r_2<r_3$, then $r_1<0<r_2<1<r_3$.
Put $I:=(-\infty,\,r_1)\cup (0,\,r_2) \cup (1,\, r_3)$, $J:=(r_1,\,0)\cup (r_2,\,1) \cup (r_3,\,\infty)$.
Then $\Delta_u<0$ for $u \in I$ and $\Delta_u>0$ for $u \in J$; hence $E_u$ has torsion $\Z/14\Z$ over complex cubic fields $K_u$ when $u\in I\cap\Q$, and over totally real, but non-Galois, cubic fields $K_u$ when $u\in J\cap\Q$.
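The sign analysis above uses that $K_u=\Q(\alpha_u)$ is totally real or complex according to the sign of the discriminant of $f_u$; in fact, the discriminant of the model $y^2=f_u(x)$ equals $2^{12}\Delta_u$, which the following SymPy computation (a check we add here, not taken from the references) confirms.

```python
import sympy as sp

u = sp.symbols('u')
A = -sp.Rational(1, 3)*(u**2 - u + 1)*(u**6 - 11*u**5 + 30*u**4 - 15*u**3
                                       - 10*u**2 + 5*u + 1)
B = sp.Rational(2, 27)*(u**12 - 18*u**11 + 117*u**10 - 354*u**9 + 570*u**8
                        - 486*u**7 + 273*u**6 - 222*u**5 + 174*u**4 - 46*u**3
                        - 15*u**2 + 6*u + 1)

# discriminant of y^2 = x^3 + A x + B, compared with Delta_u
disc = -16*(4*A**3 + 27*B**2)
Delta = u**7*(u - 1)**7*(u**3 - 8*u**2 + 5*u + 1)
assert sp.expand(disc - 2**12*Delta) == 0
```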
Finally, we consider the torsion of $E_u$ over pure cubic fields.
For that we need the following easy fact.
\begin{lemma}\label{lem:pure} The discriminant of a pure cubic number field is of the form $-27d^2$ for some $d\in\Q$.
\end{lemma}
\begin{proof} Let $K=\Q(\sqrt[3]{n})$ and $f(x)=x^3-n$ for some positive integer $n$.
Then the discriminant of $f$ is given by $D(f)=-R(f,f')$ where $R(f,f')$ is the resultant of $f$ and $f'$.
Note that
$$R(f,f')=\left|\begin{array}{rrrrr}1&0&0&-n&0\\0&1&0&0&-n\\ 3&0&0&0&0
\\0&3&0&0&0 \\ 0&0&3&0&0\end{array} \right|=27n^2.$$
Thus $D(f)=-27n^2$, and since the discriminant of the field $K$ differs from $D(f)$ by a square factor, the result follows.
\end{proof}
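The resultant computation in the proof can be reproduced with SymPy, which also confirms $D(f)=-27n^2$ directly.

```python
import sympy as sp

x, n = sp.symbols('x n')
f = x**3 - n

# the 5x5 Sylvester determinant from the proof: R(f, f') = 27 n^2
R = sp.resultant(f, sp.diff(f, x), x)
assert sp.expand(R - 27*n**2) == 0

# hence D(f) = -R(f, f') = -27 n^2
assert sp.expand(sp.discriminant(f, x) + 27*n**2) == 0
```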
Suppose $u\in I\cap\Q$. By Lemma \ref{lem:pure} a necessary condition for $K_u$ to be a pure cubic field is
\begin{equation}\label{eq:de}
-27k^2=\Delta_u=u^7(u-1)^7(u^3-8u^2+5u+1)
\end{equation}
for some $k\in\Q$.
By letting $v=\frac{9k}{u^3(u-1)^3}$ in \eqref{eq:de}, we have the following equation:
$$v^2=-3u(u-1)(u^3-8u^2+5u+1),$$
which defines a hyperelliptic curve $C$ of genus 2.
Note that the Jacobian $J(C)$ is of rank 0.
Applying the Chabauty method implemented in the computer algebra system {\sc Magma}, we obtain that the only rational points on $C$ are $(0,\,0),\,(1,\,0)$ and $\infty$.
However, these points correspond to $u=0$ and $u=1$, which are excluded since $\Delta_u\neq 0$, hence they cannot give pure cubic fields.
Therefore, the torsion $\Z/14\Z$ cannot occur over pure cubic fields.
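The elimination of $k$ leading to the defining equation of $C$ can be verified with SymPy.

```python
import sympy as sp

u, k = sp.symbols('u k')
Delta = u**7*(u - 1)**7*(u**3 - 8*u**2 + 5*u + 1)

# from -27 k^2 = Delta_u, substitute v = 9k/(u^3 (u-1)^3) and eliminate k
v2 = (81*k**2/(u**6*(u - 1)**6)).subs(k**2, -Delta/27)
assert sp.cancel(v2 - (-3*u*(u - 1)*(u**3 - 8*u**2 + 5*u + 1))) == 0
```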
\begin{thm} Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/14\Z$ for some cubic field $K$.
\begin{itemize}
\item[(a)] If $E(\Q)_{tors}=\Z/2\Z$, then $E$ is one of {\rm 49A3} and {\rm 49A4}, and $K=\Q(\zeta_7)^+$.
\item[(b)] If $E(\Q)_{tors}=\Z/7\Z$, then there exist infinitely many non-isomorphic rational elliptic curves $E$ over both totally real cubic and complex cubic fields $K$ so that $E(K)_{tors}=\Z/14\Z$.
But there is no rational elliptic curve $E$ so that $E(K)_{tors}=\Z/14\Z$ over a pure cubic field $K$.
\end{itemize}
\end{thm}
\begin{center}
2.3.\, $E(K)_{tors}=\Z/18\Z$
\end{center}
Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/18\Z$ for some cubic field $K$.
By \cite[Theorem 1.2]{GNT}, $E(\Q)_{tors}=\Z/6\Z$ or $E(\Q)_{tors}=\Z/9\Z$.
First consider the case $E(\Q)_{tors}=\Z/6\Z$. We have the following result:
\begin{lemma}\label{tor18} Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/18\Z$ for some cubic field $K$.
If $E(\Q)_{tors}=\Z/6\Z$, then $K$ is a cyclic cubic field.
\end{lemma}
\begin{proof}
Let $Q$ be a $K$-rational $18$-torsion point of $E$. Then $P=6Q$ must be one of the two $\Q$-rational $3$-torsion points of $E$.
If not, $E$ would have two independent $K$-rational $3$-torsion points which by the Weil pairing would lead to the contradiction
$\zeta_3 \in K$.
\par
Over $\overline{\Q}$ there are exactly nine $9$-torsion points $R$ of $E$ with $3R=P$. Always three of them are multiples of each
other (and hence generate the same field extension of $\Q$) and lie in the same cyclic $9$-isogeny. So, fixing the $K$-rational
$9$-torsion point $R=2Q$, we see that $K/\Q$ is Galois if and only if the Galois conjugates of $R$ are $4R$ and $7R$. If $K/\Q$
is not Galois, then each of $R$ and its two Galois conjugates must lie in a different one of the three cyclic $9$-isogenies
containing $P$; so in this case none of the cyclic $9$-isogenies containing $P$ can be $\Q$-rational.
\par
Next we note that if $K/\Q$ is not Galois, then $E$ has a $\Q$-rational $3$-isogeny $\langle S\rangle$ different from $\langle P\rangle$.
If not, then by \cite[Proposition 14]{N} $E$ has a $\Q$-rational $9$-isogeny and we can take the $3$-isogeny it contains, which, by what
was just discussed, for non-Galois $K$ is different from $\langle P\rangle$.
\par
Now assume that $K/\Q$ is not Galois, let $L$ be the Galois closure of $K/\Q$ and $Gal(L/\Q)=\langle \sigma, \tau\rangle\cong S_3$
where $\sigma$ is an automorphism of order $3$ and $\tau$ is the involution fixing $K$. Then
$$ \tau(R)=R,\ \ \tau(S)=-S,\ \ \sigma(S)=S\ \ \ \hbox{\rm and}\ \ \ \sigma(R)=\alpha R+\beta S$$
with $\alpha\in\{1,4,7\}$ and $\beta\in\{1,2\}$. Replacing $S$ by $-S$ if necessary, we can assume $\beta=1$. Then the relation
$(\sigma\tau)^2=id$ forces $\alpha=1$.
\par
Now we consider the elliptic curve $\widetilde{E}=E/\langle S\rangle$, that is, the image of $E$ under the $\Q$-rational $3$-isogeny
whose kernel is $\langle S\rangle$. The $Gal(\overline{\Q}/\Q)$-orbit of $R$ consists of the points $R$, $R+S$ and $R+2S$, which all
map to the same point on $\widetilde{E}$. So the image of $R$ on $\widetilde{E}$ is a $\Q$-rational point, and still a $9$-torsion
point (as $3R=P$ is not in the kernel). But $\widetilde{E}$ also inherits a $\Q$-rational $2$-torsion point. So all in all
$\widetilde{E}$ is an elliptic curve over $\Q$ with a $\Q$-rational $18$-torsion point. This finally is the desired contradiction.
\end{proof}
Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/18\Z$ for some cyclic cubic field.
Then ${\rm Gal}(K/\Q)$ acts on $E(K)_{tors}=\Z/18\Z$.
Thus $E(K)_{tors}$ defines a rational cyclic $18$-isogeny on $E$, hence we get a non-cuspidal rational point on $X_0(18)$.
Conversely, suppose there is a non-cuspidal rational point on $X_0(18)$.
This corresponds to a rational elliptic curve $E$ with a rational cyclic $18$-isogeny.
Here we assume $E$ is defined by a short Weierstrass equation.
Note that the underlying 2-torsion point is rational, but some elements of ${\rm Gal}(\overline\Q/\Q)$ might map the underlying 3-torsion point $P$ to its inverse.
Thus $x(P)\in\Q$, but the $y$-coordinate $y(P)$ of $P$ might be quadratic over $\Q$.
After twisting by the square class of $y(P)^2$, we get a new rational elliptic curve $E$ with a rational 3-torsion point.
The 2-torsion point and 18-isogeny are still rational.
Thus $E$ has an 18-isogeny $\phi$ whose kernel ${\rm ker}(\phi)$ contains a rational 6-torsion point.
Let $Q\in{\rm ker}(\phi)$ be a point of order 18 and put $K=\Q(x(Q))$.
If $y(Q)$ is not defined over $K$, then there exists an element $\sigma\in{\rm Gal}(\overline{K}/K)$ which maps $Q$ to $-Q$.
Since $P=6Q$, $\sigma$ maps $P$ to $-P$ which is impossible because $P$ is defined over $\Q$.
Thus $K$ is actually equal to $\Q(Q)$.
Then the pairs $(E,\,Q)$ and $(E,\,\langle Q\rangle)$ correspond to a $K$-rational point on $X_1(18)$ and a rational point on $X_0(18)$, respectively.
Since the natural map $X_1(18)\to X_0(18)$ is a Galois covering of degree 3 and it maps $(E,\,Q)$ to $(E,\,\langle Q\rangle)$, $K$ should be a cyclic cubic field.
Thus we have a rational elliptic curve $E$ over a cyclic cubic field $K$ with $E(K)_{tors}=\Z/18\Z$.
Since $X_0(18)$ is a curve of genus 0 with a rational point, it has infinitely many rational points.
Thus there exist infinitely many rational elliptic curves $E$ over cyclic cubic fields $K$ so that $E(K)_{tors}=\Z/18\Z$.
To obtain such an infinite family, we do not compute $18$-isogenies directly, since this would require heavy computations; instead, we use $9$-isogenies.
By the computation in \cite{CW}, we have a family of elliptic curves $E_{t,U}$ with 9-isogenies as follows:
\begin{equation}\label{eq:9}
E_{t,U}:\ \ y^2 =x^3 + A_{9}(t,U)x + B_{9}(t,U),
\end{equation}
where
\begin{align*}
A_{9}(t,U)&=-2187 (t+1)^3 (9 t^3+27 t^2+27 t+1) U^2,\\
B_{9}(t,U)&=-39366 (t+1)^3 (27 t^6+162 t^5+405 t^4+504 t^3+297 t^2+54 t-1) U^3.
\end{align*}
First, we will find a family of elliptic curves $E_s$ with $9$-isogenies whose underlying $3$-torsion point is rational, by choosing an appropriate $U$.
Using {\sc Magma}, we obtain that a linear factor of the 3-division polynomial of $E_{t,1}$ is $x + 81t^3 + 243t^2 + 243t + 81$.
By putting $x=-81t^3-243t^2-243t-81$ in \eqref{eq:9} with $U=1$, we have that the square of the $y$-coordinate of a 3-torsion point is equal to $-2^4\cdot 3^9(t+1)^3$.
Now we put $t=s$ and $U=-3(s+1)$ in \eqref{eq:9}, then up to a $\Q(s)$-rational isomorphism $E_{t,U}$ becomes the following:
$ E_s:\ \ y^2 =x^3 +A(s)x + B(s),$
where
\begin{align}\label{eq:9-2}
A(s)&=-3(s+1)(9s^3+27s^2+27s+1),\\ \nonumber
B(s)&=2(27s^6+162s^5+405s^4+504s^3+297s^2+54s-1),
\end{align}
which has a rational 3-torsion point for any rational number $s\neq -1$.
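The passage from $E_{s,U}$ to $E_s$, and the resulting rational $3$-torsion point, can be checked with SymPy; here $\lambda=9(s+1)$ is the scaling factor we assume for the isomorphism, and $(3(s+1)^2,\,4)$ is the image of the $3$-torsion point found above.

```python
import sympy as sp

s = sp.symbols('s')
p3 = 9*s**3 + 27*s**2 + 27*s + 1
p6 = 27*s**6 + 162*s**5 + 405*s**4 + 504*s**3 + 297*s**2 + 54*s - 1

# E_{s,U} with U = -3(s+1), and E_s as in the display above
U = -3*(s + 1)
A9, B9 = -2187*(s + 1)**3*p3*U**2, -39366*(s + 1)**3*p6*U**3
A, B = -3*(s + 1)*p3, 2*p6

# the two models differ by (x, y) -> (lam^2 x, lam^3 y) with lam = 9(s+1)
lam = 9*(s + 1)
assert sp.expand(A9 - lam**4*A) == 0
assert sp.expand(B9 - lam**6*B) == 0

# on E_s the 3-torsion point becomes (3(s+1)^2, 4), a rational point
x0 = 3*(s + 1)**2
assert sp.expand(x0**3 + A*x0 + B - 16) == 0
```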
Using {\sc Magma} we compute that a cubic factor of the 9-division polynomial of $E_s$ is given by
\begin{align*}
F_s(x):=&x^3+(-9s^2 - 30s - 33)x^2 + (27s^4 + 180s^3 + 450s^2 + 516s + 219)x\\
&-27s^6 - 270s^5 - 1053s^4 - 2196s^3 - 2565s^2 - 1566s - 323,\end{align*}
and its discriminant is given by $2^{12}3^{4}(s^2+3s+3)^2$, which is a perfect square.
Thus, for a rational number $s$ such that $F_s(x)$ is irreducible, $E_s$ has a 9-torsion point $P_s$ whose $x$-coordinate $x(P_s)$ is contained in the cyclic cubic field $K_s:=\Q(\alpha_s)$, where $\alpha_s$ is a root of $F_s(x)$.
Indeed, $P_s$ is defined over $K_s$, for otherwise there exists an element $\sigma\in{\rm Gal}(\overline{K_s}/K_s)$ which maps $P_s$ to $-P_s$, and then the 3-torsion point $3P_s$ maps to $-3P_s$ which is impossible because $3P_s$ is rational.
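The key fact that the discriminant of $F_s(x)$ is the perfect square $\left(2^6\cdot 3^2(s^2+3s+3)\right)^2$, which makes $K_s$ cyclic, can be re-verified with SymPy.

```python
import sympy as sp

s, x = sp.symbols('s x')
F = (x**3 + (-9*s**2 - 30*s - 33)*x**2
     + (27*s**4 + 180*s**3 + 450*s**2 + 516*s + 219)*x
     - 27*s**6 - 270*s**5 - 1053*s**4 - 2196*s**3 - 2565*s**2 - 1566*s - 323)

D = sp.discriminant(F, x)
assert sp.expand(D - 2**12*3**4*(s**2 + 3*s + 3)**2) == 0
```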
On the other hand, if $E_s$ has a rational 2-torsion point, then $E_s$ must have a rational 6-torsion point because $E_s$ has a rational 3-torsion point.
In general $f_s(x):=x^3+A(s)x+B(s)$ does not have a linear factor in $x$ over $\Q$.
However, substituting $s=s(u):=\frac{u^3-3u^2}{3u-3}$, $f_{s(u)}(x)$ splits into a product of a linear factor and a quadratic factor over $\Q$.
Thus $E_{s(u)}$ has a rational 2-torsion point, hence we finally have an infinite family of elliptic curves $E_u$ over cyclic cubic fields $K_u$ so that $E_u(K_u)_{tors}=\Z/18\Z$ and $E_u(\Q)_{tors}=\Z/6\Z$ as follows:
$\bullet\, E_u:\ \ y^2 =x^3+A(u)x + B(u),$
where
\begin{align*}
A(u)=&-27(u^3-3 u^2+3 u-3) (u^9-9 u^8+36 u^7-90 u^6+162 u^5\\
&-216 u^4+192 u^3-90 u^2+9 u-3),\\
B(u)=&54(u^6-6 u^5+15 u^4-24 u^3+27 u^2-18 u-3)(u^{12}-12 u^{11}\\
&+66 u^{10}-228 u^9+567 u^8-1080 u^7+1596 u^6-1800 u^5\\
&+1503 u^4-900 u^3+378 u^2-108 u+9).
\end{align*}
$\bullet\, K_u=\Q(\alpha_u)$ where $\alpha_u$ is a root of an irreducible polynomial $a_3(u)x^3+a_2(u)x^2 +a_1(u)x+a_0(u)$ for some rational number $u$ where
\begin{align*}
a_3(u)=&27(u-1)^6,\\
a_2(u)=&-27 (u-1)^4 (u^6-6 u^5+19 u^4-40 u^3+63 u^2-66 u+33),\\
a_1(u)=&9(u-1)^2 (u^3-3 u^2+3 u-3) (u^9-9 u^8+44 u^7-146 u^6+354 u^5-648 u^4\\
&+912 u^3-954 u^2+657 u-219),\\
a_0(u)=&-u^{18}+18 u^{17}-165 u^{16}+1020 u^{15}-4716 u^{14}+17172 u^{13}-50904 u^{12}\\
&+125820 u^{11}-263358 u^{10}+470376 u^9-718146 u^8+934740 u^7-1028268 u^6\\
&+939276 u^5-693360 u^4+399924 u^3-173097 u^2+52326 u-8721.
\end{align*}
Next consider the case $E(\Q)_{tors}=\Z/9\Z$.
This case can be treated by exactly the same method as the case $E(K)_{tors}=\Z/14\Z$ with $E(\Q)_{tors}=\Z/7\Z$.
As in that case, $K$ is not a cyclic cubic field.
One can find the parametrization of $E_u$ with $E_u(\Q)_{tors}=\Z/9\Z$ in \cite[Table 3]{Ku} as follows:
$\bullet\ \ E_u:\,y^2-(u^3-u^2-1)xy-u^2(u-1)(u^2-u+1)y=x^3-u^2(u-1)(u^2-u+1)x^2$
\noindent with discriminant $\Delta_u= u^9(u-1)^9(u^2-u+1)^3(u^3-6u^2+3u+1)\neq 0$.
Note that $E_u$ is isomorphic to the elliptic curve defined by
\begin{align*}
y^2=f_u(x):=&x^3-\frac{1}{3}(u^3-3u^2+1)(u^9-9u^8+27u^7-48u^6+54u^5-45u^4+27u^3\\
&-9u^2+1)x+\frac{2}{27}(u^{18}-18 u^{17}+135 u^{16}-570 u^{15}+1557 u^{14}\\
&-2970 u^{13}+4128 u^{12}-4230 u^{11}+3240 u^{10}-2032 u^9+1359 u^8\\
&-1080 u^7+735 u^6-306 u^5+27 u^4+42 u^3-18 u^2+1).
\end{align*}
Let $\alpha_u$ be a root of an irreducible polynomial $f_u(x)$ for some rational number $u$.
Then $K_u:=\Q(\alpha_u)$ is a cubic field, so that $E_u(K_u)_{tors}=\Z/18\Z$.
Let $r_1,r_2,r_3$ be the three real roots of $u^3-6u^2+3u+1=0$ with $r_1<r_2<r_3$, then $r_1<0<r_2<1<r_3$.
Note that $u^2-u+1=0$ has no real root.
Put $I:=(-\infty,\,r_1)\cup (0,\,r_2) \cup (1,\, r_3)$, $J:=(r_1,\,0)\cup (r_2,\,1) \cup (r_3,\,\infty)$.
Then $\Delta_u<0$ for $u \in I$ and $\Delta_u>0$ for $u \in J$; hence $E_u$ has torsion $\Z/18\Z$ over
complex cubic fields $K_u$ when $u\in I\cap\Q$, and over totally real, but non-Galois, cubic fields $K_u$ when $u\in J\cap\Q$.
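As in the $\Z/14\Z$ case, the discriminant of the short model $y^2=f_u(x)$ equals $2^{12}\Delta_u$, which justifies reading off the type of $K_u$ from the sign of $\Delta_u$; the following SymPy check is our addition.

```python
import sympy as sp

u = sp.symbols('u')
A = -sp.Rational(1, 3)*(u**3 - 3*u**2 + 1)*(u**9 - 9*u**8 + 27*u**7 - 48*u**6
                                            + 54*u**5 - 45*u**4 + 27*u**3
                                            - 9*u**2 + 1)
B = sp.Rational(2, 27)*(u**18 - 18*u**17 + 135*u**16 - 570*u**15 + 1557*u**14
                        - 2970*u**13 + 4128*u**12 - 4230*u**11 + 3240*u**10
                        - 2032*u**9 + 1359*u**8 - 1080*u**7 + 735*u**6
                        - 306*u**5 + 27*u**4 + 42*u**3 - 18*u**2 + 1)

# discriminant of y^2 = x^3 + A x + B, compared with Delta_u
disc = -16*(4*A**3 + 27*B**2)
Delta = u**9*(u - 1)**9*(u**2 - u + 1)**3*(u**3 - 6*u**2 + 3*u + 1)
assert sp.expand(disc - 2**12*Delta) == 0
```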
Finally, we consider the torsion of $E_u$ over pure cubic fields.
Suppose $u\in I\cap\Q$. Then $K_u$ can be a pure cubic field only if
\begin{equation}\label{eq:de2}
-27k^2=\Delta_u=u^9(u-1)^9(u^2-u+1)^3(u^3-6u^2+3u+1)
\end{equation}
for some $k\in\Q$ by Lemma \ref{lem:pure}.
By letting $v=\frac{9k}{u^4(u-1)^4(u^2-u+1)}$ in \eqref{eq:de2}, we have the following equation:
$$v^2=-3u(u-1)(u^2-u+1)(u^3-6u^2+3u+1),$$
which defines a hyperelliptic curve $C$ of genus 3.
Since the Chabauty method is implemented in {\sc Magma} only for curves of genus 2, we use another method to find all the rational points on $C$.
By using {\sc Magma}, we compute that the group ${\rm Aut}_\Q(C)$ of rational automorphisms is of order 6, and it has an automorphism $\sigma$ of order 3 as follows:
$$\sigma(u,v)=\left(\frac{u^4-u^3}{u^4},\,\frac{v}{u^4}\right).$$
Then the quotient curve $C/\langle\sigma\rangle$ is an elliptic curve $E$ defined by
$$y^2=x^3 + 1,$$
and the map $\phi: C\to E$ of degree 3 is given by
$$(x,y)=\phi(u,v)=\left(-\frac{u^3-3u^2+1}{3u(u-1)},\,\frac{v(u^2-u+1)}{9u^2(u-1)^2}\right).$$
Note that $E(\Q)=\{O, (-1,0), (0,\pm 1), (2,\pm 3)\}$ is of order 6.
Moreover, $C$ has three obvious rational points $(0,\,0),\,(1,\,0)$ and $\infty$ which are lying above $O$.
Using the map $\phi$, we can compute explicitly the points on $C$ lying above non-trivial points of $E(\Q)$.
They turn out not to be rational; hence $C(\Q)=\{(0,\,0),\,(1,\,0),\infty\}$.
However, these three points correspond to $u=0$ and $u=1$, which are excluded since $\Delta_u\neq 0$, so they cannot give pure cubic fields.
Therefore, the torsion $\Z/18\Z$ cannot occur over pure cubic fields.
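The automorphism $\sigma$, the quotient map $\phi$, and the points listed in $E(\Q)$ can all be checked with SymPy, replacing $v^2$ by the defining polynomial $g(u)$ of $C$.

```python
import sympy as sp

u = sp.symbols('u')
g = -3*u*(u - 1)*(u**2 - u + 1)*(u**3 - 6*u**2 + 3*u + 1)   # C: v^2 = g(u)

# sigma(u, v) = ((u^4 - u^3)/u^4, v/u^4): g(sigma(u)) = g(u)/u^8 preserves C
u1 = (u**4 - u**3)/u**4
assert sp.cancel(g.subs(u, u1) - g/u**8) == 0

# phi(u, v) = (x, y) satisfies y^2 = x^3 + 1 on C: substitute v^2 = g(u)
x = -(u**3 - 3*u**2 + 1)/(3*u*(u - 1))
y2 = g*(u**2 - u + 1)**2/(81*u**4*(u - 1)**4)
assert sp.cancel(y2 - (x**3 + 1)) == 0

# the listed affine points of E(Q) lie on y^2 = x^3 + 1
for (X, Y) in [(-1, 0), (0, 1), (0, -1), (2, 3), (2, -3)]:
    assert Y**2 == X**3 + 1
```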
\begin{thm} Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/18\Z$ for some cubic field $K$.
\begin{itemize}
\item[(a)] If $E(\Q)_{tors}=\Z/6\Z$, then there exist infinitely many non-isomorphic rational elliptic curves $E$ over cyclic cubic fields $K$ so that $E(K)_{tors}=\Z/18\Z$.
\item[(b)] If $E(\Q)_{tors}=\Z/9\Z$, then there exist infinitely many non-isomorphic rational elliptic curves $E$ over both non-Galois totally real cubic and complex cubic fields $K$ so that $E(K)_{tors}=\Z/18\Z$.
But there is no rational elliptic curve $E$ such that $E(K)_{tors}=\Z/18\Z$ over a pure cubic field $K$.
\end{itemize}
\end{thm}
\begin{center}
2.4.\, $E(K)_{tors}=\Z/2\Z\times\Z/14\Z$
\end{center}
Bruin and Najman \cite{BN} proved the following result:
\begin{thm}\label{thm:2-14} \cite[Theorem 1.2]{BN}
If $E$ is an elliptic curve over a cubic field $K$ with torsion subgroup isomorphic to $\Z/2\Z\times \Z/14\Z$,
then $K$ is cyclic over $\Q$ and $E$ is a base change of an elliptic curve over $\Q$.
\end{thm}
Also, they suggested in \cite[Remark 3.10]{BN} a method to find a rational model of $E$ from a model over a cyclic cubic field, as follows:
given an elliptic curve $E$ over a cubic field $K$ with $E(K)_{tors}=\Z/2\Z\times\Z/14\Z$, choose a point $P$ of order $7$ in $E(K)_{tors}$ and write down the unique (long) Weierstrass equation for $E$ such that the points $P$, $2P$ and $4P$ lie on the line $y = 0$ and the points $3P$, $5P$ and $6P$ lie on the line $y =-x$; then this Weierstrass equation has coefficients in $\Q$.
On the other hand, the first author, Kim and Lee \cite{JKL1} provided an infinite family of elliptic curves $E_t$ over cyclic cubic fields $K_t$ with $E_t(K_t)_{tors}=\Z/2\Z\times\Z/14\Z$.
In fact, there is a typo in the family of \cite{JKL1}, which the first author corrected in \cite{J}.
By using the method from \cite[Remark 3.10]{BN}, let us find an infinite family of rational elliptic curves $E_u$ over cyclic cubic fields $K_u$ with $E_u(K_u)_{tors}=\Z/2\Z\times\Z/14\Z$.
Firstly, let us write down the family of \cite{JKL1} in a short Weierstrass form, say,
\begin{equation}\label{eq:2-14}
E_t:\ \ y^2=x^3+A(t)x+B(t).
\end{equation}
Here we do not present the coefficients because they are huge and complicated.
The reason we use a short Weierstrass form is that the computer algebra systems could not find a point of order 7 from the equation given in \cite{JKL1}, though we do not know why.
Secondly, using {\sc Magma}, we compute a linear factor of the 7-division polynomial, then using {\sc Maple} we compute a 7-torsion point $P=(x_1,\,y_1)$ as follows:
{\tiny \begin{align*}
x_1=&-\{(t-1)(t+1)^3(t^{11}+5t^{10}+7t^9-53t^8-150t^7+178t^6+1422t^5-906t^4-379t^3-10823t^2+22651t-14001)\alpha_t^2\\
&+(t+1)^2(t^{14}+6t^{13}+7t^{12}-84t^{11}-271t^{10}+330t^9+2879t^8+168t^7-12821t^6-9926t^5+19677t^4+96236t^3\\
&-174941t^2+68918t+26205)\alpha_t-(t^{16}+13t^{15}+63t^{14}+69t^{13}-549t^{12}-1919t^{11}+1227t^{10}+15593t^9+13329t^8\\
&-49369t^7-81699t^6+79599t^5+234489t^4-166773t^3-202663t^2+90019t+134106)\}/\{6144(t-1)^2(t^3+t^2-9t-1)^2\}\\
y_1=&-\{(t-1)(t+1)^3(t^{11}+5t^{10}+7t^9-45t^8-150t^7+114t^6+782t^5+390t^4-2427t^3-1223t^2+1787t+2807)\alpha_t^2\\
&+(t+1)^2(t^{14}+6t^{13}+7t^{12}-76t^{11}-263t^{10}+226t^9+2135t^8+760t^7-7621t^6-9622t^5+19213t^4+18452t^3\\
&-8501t^2-26130t-4971)\alpha_t-(t^{14}+11t^{13}+40t^{12}-14t^{11}-497t^{10}-847t^9+2218t^8+6764t^7-4225t^6\\
&-23171t^5-2172t^4+43778t^3+14481t^2-26521t-26230)(t+1)^2\}/\{256(t-1)(t^3+t^2-9t-1)^3\}
\end{align*}}
where $\alpha_t$ is a root of the following cubic equation:
\begin{equation}\label{eq:defining}
f(x,t)=(t^2-1)x^3+(t^3+2t^2-9t-2)x^2-9(t^2-1)x-t^3-2t^2+9t+2.
\end{equation}
Put $mP=(x_m,\,y_m)$ for $m=1,2,3,4,5,6$.
Let $L_1$ (resp.\ $L_2$) be the line through $P$, $2P$ and $4P$ (resp.\ $3P$, $5P$ and $6P$), and
let $Q=(x_0,\,y_0)$ denote the intersection point of $L_1$ and $L_2$; in fact, $y_0=0$.
Thirdly, using {\sc Maple}, we find a change of variables bringing $E_t$ into the form described above by solving the following system of equations:
\begin{align}\label{eq:change}\nonumber
&p^3y_0+p^2qx_0+s=0,\\ \nonumber
&p^2x_0+r=0,\\ \nonumber
&p^3y_1+p^2qx_1+s=0,\\
&p^3y_2+p^2qx_2+s=0,\\ \nonumber
&-(p^2x_3+r)=p^3y_3+p^2qx_3+s,\\ \nonumber
&-(p^2x_5+r)=p^3y_5+p^2qx_5+s. \nonumber
\end{align}
Here the first two equations mean that $Q$ maps to $(0,\,0)$, the next two equations mean that $P$, $2P$ and $4P$ lie on the line $y=0$, and the last two equations mean that $3P$, $5P$ and $6P$ lie on the line $y=-x$.
Using {\sc Maple}, we obtain the following:
{\tiny \begin{align*}
p=&\{(t-1)(t+1)(t^5+t^4-6t^3-46t^2+53t+29)\alpha_t^2+(t^8+4t^7-4t^6-60t^5-42t^4+492t^3-228t^2-308t-111)\alpha_t\\
&+(t+1)(t^7-2t^6+t^5-58t^4+259t^3-318t^2-5t-6)\}/\{2(t^2+3)(t^6+4t^5+13t^4-40t^3+19t^2+36t+31)\},\\
q=&-1/2,\\
r=&-(t^{12}+4t^{11}+6t^{10}-36t^9-45t^8+168t^7+804t^6-1608t^5+855t^4+788t^3+2166t^2+684t+309)/\\
&\{12(t^2+3)^2(t^8+4t^7+16t^6-28t^5+58t^4-84t^3+88t^2+108t+93)\},\\
s=&(t^{12}+4t^{11}+6t^{10}-36t^9-45t^8+168t^7+804t^6-1608t^5+855t^4+788t^3+2166t^2+684t+309)/\\
&\{24(t^2+3)^2(t^8+4t^7+16t^6-28t^5+58t^4-84t^3+88t^2+108t+93)\}.
\end{align*}}
Lastly, letting $t=u$ and using this change of variables, we obtain an infinite family of rational elliptic curves $E_u$ over cyclic cubic fields $K_u$ with $E_u(K_u)_{tors}=\Z/2\Z\times\Z/14\Z$ as follows:
$\bullet\, E_u:\ \ y^2+xy=x^3+A_2(u)x^2+A_4(u)x+A_6(u)$ where
\begin{align*}
A_2(u)=&\frac{-4(u^6+2u^5+15u^4-20u^3+15u^2+18u+33)(u-1)^2(u+1)^2}{(u^2+3)^3(u^6+4u^5+13u^4-40u^3+19u^2+36u+31)},\\
A_4(u)=&\frac{64(u^6+2u^5+3u^4-20u^3+39u^2+18u+21)(u-1)^6(u+1)^6}{(u^2+3)^6(u^6+4u^5+13u^4-40u^3+19u^2+36u+31)^2},\\
A_6(u)=&\frac{4096(u-1)^{12}(u+1)^{12}}{(u^6+4u^5+13u^4-40u^3+19u^2+36u+31)^3(u^2+3)^9},
\end{align*}
or transformed into short Weierstrass form
$\bullet\, E_u:\ \ y^2=x^3+A(u)x+B(u)$ where
\begin{align*}
A(u)=&-(u^{12}+4u^{11}-10u^{10}-68u^9+3u^8+552u^7+4u^6-2568u^5+2103u^4\\
&+1684u^3+1958u^2+396u+37)/\{48(u^2+3)^3(u^6+4u^5+13u^4-40u^3\\
&+19u^2+36u+31)\},\\
B(u)=&(u^{24}+8 u^{23}+12 u^{22}-120 u^{21}-518 u^{20}+504 u^{19}+5068 u^{18}+568 u^{17}\\
&-24009 u^{16}-15024 u^{15}+62936 u^{14}+183120 u^{13}-550452 u^{12}-851984 u^{11}\\
&+4384056 u^{10}-3808912 u^9+1467519 u^8-4083672 u^7+3590300 u^6\\
&+5512360 u^5+6945498 u^4+2943128 u^3+893052 u^2+120024 u\\
&+3753)/\{864(u^2+3)^6(u^6+4u^5+13u^4-40u^3+19u^2+36u+31)^2\}.
\end{align*}
$\bullet\, K_u=\Q(\alpha_u)$ where $\alpha_u$ is a root of the irreducible polynomial $f(x,u)$ given in \eqref{eq:defining} for some rational number $u$.
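As a consistency check on the displayed model, one can evaluate the long Weierstrass form at a sample value of $u$ in exact rational arithmetic and verify that its discriminant is nonzero, so the fiber is indeed an elliptic curve. The choice $u=2$ and the use of the standard $b_i$-invariants are ours:

```python
# Evaluate the long Weierstrass model E_u (a1 = 1, a3 = 0) at u = 2 with exact
# rationals and check that its discriminant is nonzero.
from fractions import Fraction as F

u = F(2)
P  = u**6 + 4*u**5 + 13*u**4 - 40*u**3 + 19*u**2 + 36*u + 31
Q1 = u**6 + 2*u**5 + 15*u**4 - 20*u**3 + 15*u**2 + 18*u + 33
Q2 = u**6 + 2*u**5 +  3*u**4 - 20*u**3 + 39*u**2 + 18*u + 21
v  = (u - 1)*(u + 1)                   # the recurring factor (u-1)(u+1)
w  = u**2 + 3

a2 = -4*Q1*v**2 / (w**3 * P)
a4 = 64*Q2*v**6 / (w**6 * P**2)
a6 = 4096*v**12 / (w**9 * P**3)

# standard invariants for y^2 + x*y = x^3 + a2*x^2 + a4*x + a6 (a1 = 1, a3 = 0)
b2 = 1 + 4*a2
b4 = 2*a4
b6 = 4*a6
b8 = (b2*b6 - b4**2) / 4
disc = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
assert disc != 0                       # the fiber at u = 2 is nonsingular
```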
Combining this construction with Theorem \ref{thm:2-14}, we obtain the following:
\begin{thm} Suppose $E$ is a rational elliptic curve with $E(K)_{tors}=\Z/2\Z\times\Z/14\Z$ for some cubic field $K$.
Then $K$ is a cyclic cubic field.
Moreover, there exist infinitely many non-isomorphic rational elliptic curves $E$ and cyclic cubic fields $K$ so that $E(K)_{tors}=\Z/2\Z\times\Z/14\Z$.
\end{thm}
As a by-product of all results above, we have the following:
\begin{thm} No rational elliptic curve gains a torsion group that does not already occur over $\Q$ when the base field is extended to a pure cubic field.
\end{thm}
\begin{center}
{\bf Acknowledgment}
\end{center}
The authors are indebted to the referee for finding a mistake/gap in the first version of the proof of Lemma \ref{tor18}.
The Left-Right Symmetric model (LRSM)~\cite{Pati:1974yy,Mohapatra:1974hk,Mohapatra:1974gc,Senjanovic:1975rk,Senjanovic:1978ev}
remains one of the best motivated high-energy
completions of the Standard Model of Particle Physics (SM).
It ties together the Majorana nature of neutrinos,
their tiny masses in comparison to the electroweak (EW) scale $v_{\rm EW}$,
and the chiral structure of EW interactions, seemingly disparate phenomena,
to the simultaneous breakdown of $(B-L)$ conservation and left-right parity invariance at a scale $v_R\gg v_{\rm EW}$.
Predicting a plethora of observations,
the model is readily testable at current and near-future experiments;
see~\cite{Chen:2011de,Mohapatra:2014cja,Senjanovic:2016bya,Mohapatra:2016twe,Arkani-Hamed:2015vfh,Golling:2016gvc} and references therein.
At the Large Hadron Collider (LHC), searches~\cite{Khachatryan:2014dka,Aad:2015xaa} for
$W_R$ gauge bosons and heavy Majorana neutrinos $N$, if kinematically accessible,
focus on the well-studied, lepton number-violating
$(\Delta L = \pm2)$
Drell-Yan process~\cite{Keung:1983uu},
\begin{equation}
p ~p ~\rightarrow ~W_R^\pm ~\rightarrow N ~\ell^\pm_1 ~\rightarrow ~\ell^\pm_1 ~\ell^\pm_2 ~+nj.
\label{eq:sslljjLRSM}
\end{equation}
As seen in Fig.~\ref{fig:feynman_LRSM_qqWR_Nl_NDecay}, Eq.~(\ref{eq:sslljjLRSM}) proceeds for $m_N<M_{W_R}$
first through the on-shell production of $W_R$, then by its decay to $N$.
Recent investigations~\cite{Ferrari:2000sp,Maiezza:2015lza,Gluza:2016qqv,Mitra:2016kov,Mattelaer:2016ynf},
however, have shown that one can obtain a considerable increase in sensitivity to the LRSM at colliders
by relaxing the requisite charged lepton and jet multiplicities stipulated by Ref.~\cite{Keung:1983uu} for Eq.~(\ref{eq:sslljjLRSM})
and similarly for the related single-top channel~\cite{Simmons:1996ws}.
This is particularly true for $M_{W_R}\gg m_N,~v_{\rm EW}$,
which occurs naturally when $v_R \gtrsim\mathcal{O}(10)$ TeV with neutrino triplet Yukawas $y^{\Delta_R} \lesssim \mathcal{O}(10^{-2})$.
Incidentally, such scenarios are also favored by searches for flavor-changing neutral Higgs (FCNH)
transitions~\cite{Chakrabortty:2012pp,Bertolini:2014sua,Maiezza:2014ala,Maiezza:2016bzp} and neutron EDMs~\cite{Zhang:2007fn,Zhang:2007da}.
Along these lines, we reevaluate the necessity of $W_R$ being kinematically accessible to test LR symmetry at hadron colliders.
In the limit that $M_{W_R}$ is of the order of, or above, the total collider energy $\sqrt{s}$ but $m_N\ll\sqrt{s}$,
Eq.~(\ref{eq:sslljjLRSM}) can still proceed if mediated instead by a far \textit{off-shell} $W_R$.
This is akin to the SM Fermi contact interaction.
For $m_N\lesssim\mathcal{O}(1){\rm ~TeV}$,
8 TeV searches~\cite{Khachatryan:2014dka,Aad:2015xaa} for Eq.~(\ref{eq:sslljjLRSM})
are insensitive to this configuration due to the search premise itself:
resonant $W_R$ production implies that momenta of final-state particles scale with $M_{W_R}$,
justifying the use of TeV-scale selection cuts in~\cite{Khachatryan:2014dka,Aad:2015xaa}.
The choice of cuts is motivated by limits from dijet searches that indicate $M_{W_R}\gtrsim2.5{\rm ~TeV}$~\cite{Khachatryan:2015dcf,ATLAS:2015nsi}.
Non-resonant $W_R$ mediation, however, implies that the partonic scale is naturally $\sqrt{\hat{s}}\sim m_N\lesssim\mathcal{O}(1){\rm ~TeV}$,
and therefore is unlikely to lead to final states satisfying the kinematical criteria.
For $m_N\gtrsim\mathcal{O}(1)$ TeV, present methods are sufficient~\cite{ATLAS:2012ak}.
\begin{figure*}[!t]
\begin{center}
\subfigure[]{\includegraphics[width=.42\textwidth]{fig1a.pdf} \label{fig:feynman_LRSM_qqWR_Nl_NDecay} }
\subfigure[]{\includegraphics[width=.42\textwidth]{fig1b.pdf} \label{fig:feynman_Direct_qqWL_Nl_NDecay} }
\end{center}
\caption{
Born diagrams for heavy Majorana $N$ production and decay via (a) $W_R$ (b) $W_{\rm SM}$ currents. Drawn using JaxoDraw~\cite{Binosi:2003yf}.}
\label{fig:feynmanBorn}
\end{figure*}
Interestingly, while the underlying dynamics differ, for the $(M_{W_R},m_N)$ range in consideration,
the mass scale and topology of Eq.~(\ref{eq:sslljjLRSM}) are identical to the heavy Majorana neutrino direct production (DP) process
\begin{equation}
p ~p ~\to ~W_{\rm SM}^{\pm *} ~\to ~\ell^\pm_1 ~N ~\to ~\ell^\pm_1 ~\ell^\pm_2 ~+nj.
\label{eq:sslljjDirect}
\end{equation}
As shown in Fig.~\ref{fig:feynman_Direct_qqWL_Nl_NDecay}, this process,
which may also be labeled as prompt production, transpires through off-shell SM $W$ bosons
and occurs at the scale $m_N$ for $m_N > M_{W_{\rm SM}}$~\cite{Dicus:1991fk,Pilaftsis:1991ug,Datta:1993nm,Han:2006ip,Atre:2009rg}.
Subsequently, hadron collider searches for Eq.~(\ref{eq:sslljjDirect})
can be interpreted as searches for Eq.~(\ref{eq:sslljjLRSM}) in the $M_{W_R}\gtrsim\sqrt{s}$ limit.
Moreover, despite its off-shell nature, the $W_R$ chiral couplings to quark and leptons remain
encoded in azimuthal and polar distributions of the $\ell^\pm\ell^\pm nj$ system~\cite{Han:2012vk}.
Thus, in principle, the dynamics of Eq.~(\ref{eq:sslljjLRSM}) can still be determined,
even in mixed $W_R^{(*)}-W_{\rm SM}^{(*)}$ scenarios as considered in~\cite{Han:2012vk,Chen:2013fna,Dev:2015kca}.
It follows that this holds too for $ee/pp\rightarrow Z_R^{(*)}\rightarrow NN$.
In the LRSM, heavy $N$ production can in principle also proceed through Eq.~(\ref{eq:sslljjDirect}) and its neutral current equivalent via neutrino mixing.
However, such mixing between left-handed flavor states $\ell$ and heavy mass eigenstate $N$,
which scales as $V_{\ell N} \sim \sqrt{m_\nu / m_N}$, is necessarily small for the $m_N$ under discussion and the observed $m_\nu$.
Subsequently, we neglect the contribution of Eq.~(\ref{eq:sslljjDirect}) in the LRSM throughout this study.
For further discussions, see, e.g., Refs.~\cite{Nemevsek:2012iq,Chen:2013fna,Senjanovic:2016vxw}.
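To make the suppression quantitative, a back-of-the-envelope evaluation with an illustrative light-neutrino mass $m_\nu\sim0.1{\rm ~eV}$ (a benchmark assumption, not a measured input) gives $\vert V_{\ell N}\vert^2\sim5\times10^{-13}$ for $m_N=200{\rm ~GeV}$:

```python
# Seesaw-scaled light-heavy mixing: |V_lN|^2 ~ m_nu / m_N.
# Both inputs are illustrative benchmarks, not measured values.
m_nu = 0.1e-9        # GeV  (0.1 eV)
m_N  = 200.0         # GeV
V2 = m_nu / m_N      # |V_lN|^2, ~5e-13
assert V2 < 1e-10    # negligible next to |Y_lN|^2 ~ O(1)
```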
In this context, we reinterpret $\sqrt{s}=8$ TeV LHC limits
on heavy Majorana neutrino DP cross sections~\cite{Khachatryan:2015gha,Khachatryan:2016olu} for the LRSM.
For $\textcolor{black}{m_N = 200-500}$ GeV and right-left coupling ratio $\kappa_R = g_R/g_L$,
we find $(M_{W_R} / \kappa_R) < \textcolor{black}{1.1-1.8}$ TeV are excluded at 95\% CLs.
While weak, the limits are competitive with searches for resonant $M_{W_R}$-$N$ production~\cite{Aad:2015xaa,ATLAS:2012ak};
however, for such low mass scales, the validity of this approach requires $\kappa_R\gg1$.
Projected sensitivities~\cite{Alva:2014gxa} to DP at the high-luminosity LHC
and a hypothetical 100 TeV Very Large Hadron Collider (VLHC) are recast into projections for the LRSM.
At 14~(100) TeV and with $\mathcal{L}=1~(10)~\text{ab}^{-1}$,
one can probe $(M_{W_R} / \kappa_R) < \textcolor{black}{7.9-8.9~(14-40)}$ TeV for $m_N = \textcolor{black}{100 - 700~(1200)}$ GeV.
We also translate sensitivity to $(M_{W_R} / \kappa_R)$ into sensitivity to coefficients of gauge-invariant dimension-six operators
in an Effective Field Theory with right-handed neutrinos (NEFT)~\cite{delAguila:2008ir}.
This study continues in the following order:
In Sec.~\ref{sec:model}, the components of LRSM and NEFT relevant for this work are reviewed.
We describe our methodology for reinterpreting (V)LHC limits in Sec.~\ref{sec:method},
and report results in Sec.~\ref{sec:results}.
We summarize and conclude in Sec.~\ref{sec:summary}.
\section{Theoretical Framework}\label{sec:model}
We now briefly summarize the main relations of the minimal LRSM and NEFT relevant to this analysis.
\subsection{Minimal Left-Right Symmetric Model}
In the notation of~\cite{Han:2012vk}, $W_R$ quark chiral currents are
\begin{eqnarray}
\mathcal{L}_{W_R-q-q'} = \frac{-\kappa_R^q g_L}{\sqrt{2}}\sum_{i,j=u,\dots}\overline{u}_i V_{ij}^{R}~W_{R \mu}^+ \gamma^\mu P_R~ d_j + \text{H.c.}
\nonumber
\end{eqnarray}
Here, up-(down-)type quarks with flavor $i (j)$ are represented by $u_i (d_j)$;
$P_{R(L)} = \frac{1}{2}(1\pm\gamma^5)$ is the right-hand [RH] (left-hand [LH]) chiral projection operator;
$V_{ij}^{\rm R}$ denotes the RH analog of the Cabibbo-Kobayashi-Maskawa (CKM) matrix $V_{ij}^{\rm L}$; and
$\kappa_{R}^{q}\in\mathds{R}$ is an overall normalization for the $W_R$ interaction strength
with respect to the SM weak coupling $g_L=\sqrt{4\pi\alpha_{\rm EM}}/\sin\theta_W$.
Despite nature maximally violating parity at low energies, $V_{ij}^{\rm R}$ retains its resemblance to $V_{ij}^{\rm L}$,
with $\vert V_{ij}^{\rm R} \vert = \vert V_{ij}^{\rm L}\vert$ for generalized charge conjugation
and $\vert V_{ij}^{\rm R} \vert \approx \vert V_{ij}^{\rm L}\vert+ \mathcal{O}(m_b/m_t)$ for
generalized parity~\cite{Zhang:2007fn,Zhang:2007da,Maiezza:2010ic,Senjanovic:2014pva,Senjanovic:2015yea}.
Throughout this study, we assume five massless quarks and, for simplicity, take $\vert V^L_{ij}\vert,~\vert V^R_{ij}\vert$ to be diagonal with unit entries.
For leptonic coupling to $W_R$, we consider first the decomposition of neutrino chiral states $i,j$ into mass states $m,m'$:
Labeling the LH (light) states by $i~(m) =1,\dots,3$ and the RH (heavy) states by $j~(m')=1,\dots,n$,
we can relate chiral neutrino states and mass eigenstates by the rotation
\begin{eqnarray}
\begin{pmatrix} \nu_{Li} \\ N_{Rj}^c \end{pmatrix}
=
\begin{pmatrix}
U_{3\times3} && V_{3\times n} \\
X_{n\times3} && Y_{n\times n}
\end{pmatrix}
\begin{pmatrix} \nu_{m} \\ N_{m'}^c \end{pmatrix}.
\label{eq:nuMixing}
\end{eqnarray}
Without loss of generality, we take the rotation of the charged leptons into the mass basis as the identity.
The $U_{3 \times3}$ component of Eq.~(\ref{eq:nuMixing}) is then recognized as the observed light neutrino mixing matrix.
In analogy to $U_{\ell m}$, the entry $Y_{\ell m'} (X_{\ell m})$ quantifies the mixing between the heavy (light) mass state $N_{m'}~(\nu_{m})$
and the RH chiral state with corresponding flavor $\ell$.
Hence, the mixing entries scale as
$\vert Y_{\ell m'}\vert^2 \sim \mathcal{O}(1)$ and
$\vert X_{\ell m}\vert^2 \sim 1 - \vert Y_{\ell m'}\vert^2 \sim \mathcal{O}(m_{\nu_m}/m_{N_{m'}})$~\cite{Keung:1983uu}.
Explicitly, the RH flavor state $N_{\ell}$ in the mass basis is then~\cite{Atre:2009rg,Han:2012vk},
\begin{equation}
N_{\ell} = \sum_{m=1}^{3} X_{\ell m}\nu_{m}^c + \sum_{m'=1}^{n}Y_{\ell m'} N_{m'}.
\label{eq:nuRDecomp}
\end{equation}
With this, the $W_R$ chiral currents for leptons are~\cite{Atre:2009rg,Han:2012vk}
\begin{eqnarray}
\mathcal{L}_{W_R-\ell-\nu/N} &=& \frac{-\kappa_R^\ell g_L}{\sqrt{2}}
\sum_{\ell=e}^{\tau}
\overline{N_{\ell}}~W_{R \mu}^+ \gamma^\mu P_R~ \ell^-+\text{H.c.}
\nonumber\\
&=&
\frac{-\kappa_R^\ell g_L}{\sqrt{2}}
\sum_{\ell=e}^{\tau}
\Bigg[
\sum_{m=1}^3 \overline{\nu^{c}_m} X_{\ell m}^\dagger +
\sum_{m'=1}^3 \overline{N_{m'}} Y_{\ell m'}^\dagger
\Bigg]
\nonumber\\
& &~\times ~W_{R \mu}^+ \gamma^\mu P_R~ \ell^-
+\text{H.c.}
\nonumber
\end{eqnarray}
As for quarks, $\kappa_R^\ell\in\mathds{R}$ normalizes the $W_R$ coupling to leptons.
Throughout this analysis, we adopt the conventional benchmark scenario
and consider only the lightest heavy neutrino mass state $N_{m'=1}$, which we denote as $N$.
\subsection{Effective Field Theory with Heavy Neutrinos}
Heavy Neutrino Effective Field Theory (NEFT)~\cite{delAguila:2008ir,Aparici:2009fh,Bhattacharya:2015vja}
is a powerful extension of the SM EFT~\cite{Buchmuller:1985jz,Grzadkowski:2010es}
that allows for a consistent and agnostic parameterization of new, high-scale,
weakly coupled physics when the $N$ mass scale is comparable to $v_{\rm EW}$.
As TeV-scale $L$ violation implies~\cite{Ma:1998dn,Kersten:2007vk} the existence of a particle spectrum
beyond the canonical Type I seesaw~\cite{Minkowski:1977sc,GellMann:1980vs,Yanagida:1979as,Mohapatra:1979ia},
it is natural to consider DP sensitivities in terms of NEFT operators.
After extending the SM by three $N_R$, the most general renormalizable theory that can be constructed
from SM symmetries is the Type I Seesaw Lagrangian,
\begin{equation}
\mathcal{L}_{\rm Type~I} = \mathcal{L}_{\rm SM} + \mathcal{L}_{N~\text{Kin.+Mass}} + \mathcal{L}_{N~\text{Yukawa}}.
\end{equation}
Respectively, the three terms are the SM Lagrangian, the kinetic and Majorana mass terms for $N_R$,
and the Yukawa couplings responsible for Dirac neutrino masses.
From this, the NEFT Lagrangian can be built by further extending $\mathcal{L}_{\rm Type~I}$
before EW symmetry breaking (EWSB) by all SU$(3)$ $\otimes$ SU$(2)_L$ $\otimes$ U$(1)_Y$-invariant, irrelevant (mass dimension $d>4$) operators
containing Type I Seesaw fields:
\begin{eqnarray}
\mathcal{L}_{\rm NEFT} &=& \mathcal{L}_{\rm Type~I} + \sum_{d=5}\sum_{i} \frac{\alpha_i}{\Lambda^{(d-4)}}\mathcal{O}_{i}^{(d)}.
\end{eqnarray}
Here, $\alpha_i<\mathcal{O}(4\pi)$ are dimensionless coupling coefficients,
$\Lambda\gg\sqrt{\hat{s}}$ is the mass scale of the underlying theory,
and $\mathcal{O}_{i}^{(d)}$ are gauge invariant permutations of Type I field operators.
The list of $\mathcal{O}_{i}^{(d)}$ is known explicitly for $d=5$~\cite{Aparici:2009fh},
6~\cite{delAguila:2008ir}, and 7~\cite{Bhattacharya:2015vja},
and can be built for larger $d$ following~\cite{Henning:2015alf,Kobach:2016ami}.
At $d=6$, the four-fermion $\mathcal{O}_i^{(6)}$ giving rise to the same parametric
dependence on $m_N$ in the partonic cross section $\hat{\sigma}$
as both DP and the LRSM for $M_{W_R}\gg\sqrt{\hat{s}}$ are
\begin{eqnarray}
\mathcal{O}_V^{(6)} &=& \left(\overline{d}\gamma^\mu P_R u\right)\left(\overline{e}\gamma_\mu P_R N_R\right)
\quad\text{and}\quad \nonumber\\
\mathcal{O}_{S3}^{(6)} &=& \left(\overline{Q}\gamma^\mu P_R N_R\right)\varepsilon\left(\overline{L}\gamma_\mu P_R d\right).
\label{eq:neftD6Ops}
\end{eqnarray}
In Eq.~(\ref{eq:neftD6Ops}), $\varepsilon$ is the totally antisymmetric tensor.
After EWSB and decomposing $N_R$ according to Eq.~(\ref{eq:nuRDecomp}),
but neglecting $\mathcal{O}(X_{\ell m})$ terms, the operators become
\begin{eqnarray}
\mathcal{O}_V^{(6)} &=& \sum_{m'=1}\left(\overline{d}\gamma^\mu P_R u\right)\left(\overline{\ell}\gamma_\mu P_R~Y_{\ell m'}~N_{m'}\right)
\quad\text{and}\quad\nonumber\\
\mathcal{O}_{S3}^{(6)} &=& \sum_{m'=1}\left(\overline{Q}\gamma^\mu P_R ~Y_{\ell m'} N_{m'} \right)\left(\overline{\ell}\gamma_\mu P_R d\right).
\label{eq:neftD6OpsMix}
\end{eqnarray}
As in the LRSM case, we consider only the $N_{m'=1}$ state with mixing as given in Eqs.~(\ref{eq:mumueeMix})-(\ref{eq:emuMix}).
\section{Mimicking Direct Production with Left-Right Symmetry}\label{sec:method}
In this section we describe our procedure for extracting bounds on LRSM and NEFT quantities
from observed and expected (V)LHC limits on heavy Majorana neutrino DP rates.
Our computational setup is summarized in Sec.~\ref{sec:setup}.
We start by constructing the observable $\varepsilon(M_{W_R})$, which we will ultimately constrain.
The Born-level, partonic heavy $N$ production cross section via (on- or off-shell) $W_R$ currents,
\begin{equation}
q_1\overline{q_2} \to W_R^{\pm (*)} \to N ~\ell^\pm_1,
\end{equation}
with arbitrary lepton mixing is given generically by~\cite{Han:2012vk}
\begin{eqnarray}
\frac{d\hat{\sigma}^{\rm LRSM}}{d\Omega_\ell}
= \frac{3\hat{\sigma}^{\rm LRSM}_{\rm Tot.}}{2^3\pi(2+r_N)}\left[(1-\cos\theta_\ell)^2 + r_N\sin^2\theta_\ell\right]
\label{eq:lrsmDXSec}
\end{eqnarray}
where $r_N \equiv m_N^2/\hat{s}$ and the total cross section is
\begin{eqnarray}
\hat{\sigma}^{\rm LRSM}_{\rm Tot.} &=&
\cfrac{\kappa_R^{q2}\kappa_R^{\ell2} g_L^4}{2^7~3N_c~\pi}
\cfrac{\vert Y_{\ell N}\vert^2~\hat{s}(1-r_N)^2(2+r_N)}{\left[(\hat{s}-M_{W_R}^2)^2 + (M_{W_R} \Gamma_{W_R})^2\right]} ~
\\
&\approx&
\cfrac{\kappa_R^{q2}\kappa_R^{\ell2} g_L^4 }{2^7~3N_c~\pi}\vert Y_{\ell N}\vert^2
\cfrac{\hat{s}}{M_{W_R}^4}(1-r_N)^2(2+r_N).
\label{eq:lrsmTotXSec}
\end{eqnarray}
In the last line we take the $M_{W_R}\gg\sqrt{\hat{s}}$ limit.
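The accuracy of this contact-interaction limit is simple to quantify numerically: for $\sqrt{\hat{s}}=1{\rm ~TeV}$ and $M_{W_R}=20{\rm ~TeV}$ the full propagator and $1/M_{W_R}^4$ agree to better than a percent (the $3\%$ relative width used below is an illustrative guess, not a computed $\Gamma_{W_R}$):

```python
# Compare the full W_R propagator factor with its contact-interaction limit 1/M^4.
def propagator2(shat, M, Gamma):
    return 1.0 / ((shat - M**2)**2 + (M*Gamma)**2)

shat  = 1.0**2           # (1 TeV)^2
M     = 20.0             # TeV
Gamma = 0.03 * M         # hypothetical 3% relative width

full    = propagator2(shat, M, Gamma)
contact = 1.0 / M**4
rel_err = abs(contact - full) / full   # ~0.004
assert rel_err < 0.01
```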
For DP, the analogous partonic cross section is
\begin{eqnarray}
\frac{d\hat{\sigma}^{\rm DP}}{d\Omega_\ell}
= \frac{3\hat{\sigma}^{\rm DP}_{\rm Tot.}}{2^3\pi(2+r_N)}\left[(1-\cos\theta_\ell)^2 + r_N\sin^2\theta_\ell\right]
\label{eq:dpDXSec}
\end{eqnarray}
where the total partonic rate for $\sqrt{\hat{s}}\gg M_{W_{\rm SM}}$ is similarly,
\begin{eqnarray}
\hat{\sigma}^{\rm DP}_{\rm Tot.} &=&
\cfrac{g_L^4}{2^7~3N_c~\pi}
\cfrac{\vert V_{\ell N}\vert^2~\hat{s}(1-r_N)^2(2+r_N)}{\left[(\hat{s}-M_{W}^2)^2 + (M_{W} \Gamma_{W})^2\right]} ~
\label{eq:dpTotXSecFull}
\\
&\approx&
\cfrac{g_L^4 \vert V_{\ell N}\vert^2}{2^7~3N_c~\pi}
\cfrac{1}{\hat{s}}(1-r_N)^2(2+r_N).
\label{eq:dpTotXSec}
\end{eqnarray}
Comparing the differential and integrated expressions,
one sees crucially that the angular and $m_N$ dependence of the two processes are the same.
This follows from the maximally parity violating $V\pm A$ structures of the $W_{\rm SM}/W_R$ couplings.
Na\"ively, one expects the orthogonal chiral couplings to invert the leptons' polarizations with respect to the mediator.
However, as the mediators' polarizations are also relatively flipped with respect to the initial-state quarks,
the outgoing lepton polarization with respect to initial-state quarks, i.e., $\cos\theta_\ell$, is the same.
Hence, universality of $W_R$ chiral couplings to quarks and leptons in the LRSM can be tested without resonantly producing it.
The precise handedness of the couplings can be inferred from azimuthal and polar distributions
of the $\ell^\pm\ell^\pm j j$ final state~\cite{Han:2012vk} as well as the single-top channel~\cite{Gopalakrishna:2010xm}.
As DP searches do not (and should not) rely on forward-backward cuts,
which are sensitive to parity asymmetries, their reinterpretation in terms of the LRSM for non-resonant $W_R$ is justified.
Branching rates of $N$ to a final state $A$ can be expressed in terms of the calculable $N\rightarrow A$ partial widths,
\begin{equation}
\BR{N\rightarrow A} \equiv \cfrac{\Gam{N\rightarrow A}}{\sum_i ~\Gam{N\rightarrow A_i}}.
\label{eq:brDef}
\end{equation}
For $M_{W_R}\gg m_N$, the $M_{W_R}$ dependence in Eq.~(\ref{eq:brDef}) cancels.
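This cancellation is transparent in code: if each partial width carries the common prefactor $\kappa_R^4/M_{W_R}^4$, the branching fractions are numerically identical for any $M_{W_R}$ (the channel coefficients below are placeholders, not computed widths):

```python
# Branching fractions are independent of M_WR when every partial width carries
# the common prefactor kappa_R^4 / M_WR^4; channel coefficients are placeholders.
def widths(M, kappa=1.0, coeffs=(3.0, 1.0, 2.0)):
    pref = kappa**4 / M**4
    return [pref * c for c in coeffs]

def branching(M):
    g = widths(M)
    tot = sum(g)
    return [x / tot for x in g]

b_low, b_high = branching(5.0), branching(50.0)
assert all(abs(x - y) < 1e-12 for x, y in zip(b_low, b_high))
assert abs(sum(b_low) - 1.0) < 1e-12
```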
Hence, the Born-level, partonic same-sign lepton cross section in the LRSM,
\begin{equation}
q_1\overline{q_2} \to W_R^{\pm*} \to N ~\ell^\pm_1 \to \ell^\pm_1 ~\ell^\pm_2 ~X,
\label{eq:sslljjProcess}
\end{equation}
under the narrow width approximation for $N$ is
\begin{eqnarray}
\hat{\sigma}(q_1\overline{q_2} &\to& N ~\ell^\pm_1 \to \ell^\pm_1 ~\ell^\pm_2 ~X)
\nonumber\\
&\approx& \hat{\sigma}^{\rm LRSM}_{\rm Tot.} ~\times~ \BR{N \to \ell^\pm_2 ~X}
\\
&\equiv& \varepsilon^{\ell_1\ell_2}(M_{W_R}) ~\times~ \tilde{\hat{\sigma}}.
\label{eq:epsRedXSecDef}
\end{eqnarray}
In the last line we collect LRSM parameters into the single, dimensionful [TeV$^{-4}$] coefficient
\begin{eqnarray}
\varepsilon^{\ell_1\ell_2}(M_{W_R}) = \cfrac{\kappa_R^{q 2}\kappa_R^{\ell 2}}{M_{W_R}^4}\vert Y_{\ell_1 N}\vert^2 ~ \BR{N \to ~\ell^\pm_2 ~q'_1\overline{q'_2}}.
\quad
\label{eq:epsDef}
\end{eqnarray}
The ``reduced'' partonic cross section $\tilde{\hat{\sigma}}$ contains all kinematical and $m_N$ dependence that
must be convolved with parton distribution functions (PDFs) to build the hadronic cross section.
For the $e^\pm\mu^\pm$ mixed-flavor state, a summation over $ \varepsilon^{e\mu}$ and $\varepsilon^{\mu e}$ is implied.
Inclusive, hadronic level cross sections are obtained from the Collinear Factorization Theorem,
\begin{eqnarray}
& & \sigma(pp \to A+X) = f \otimes f \otimes \hat{\sigma}
\\
& & = \frac{1}{\delta_{ij}+1} \sum_{i,j=u,g,\dots}\int^1_{\tau_0} d\xi_1 \int^1_{\tau_0/\xi_1} d\xi_2
\nonumber\\
& & \Big[f_{i/p}(\xi_1,\mu_f)f_{j/p}(\xi_2,\mu_f) + (1\leftrightarrow2)\Big]\hat{\sigma}(ij\rightarrow A).
\end{eqnarray}
It expresses the production rate of $A$ (and arbitrary beam remnant $X$) in $pp$ collisions
as the convolution $(\otimes)$ of the $ij\rightarrow A$ partonic process rate
and the process-independent PDFs $f_{k/p}(\xi,\mu_f)$,
which for parton species $k$ with longitudinal momentum $p_z = \xi E_p$ resums collinear splittings up to the scale $\mu_f$.
The kinematic threshold $\tau_0$ is the scale below which the process is kinematically forbidden.
For heavy $N$ production, $\tau_0 = m_N^2 / s$.
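The structure of this convolution can be sketched with a toy parton density; the PDF shape and overall normalization below are illustrative only, not a fitted set such as the ones used in the NLO computation:

```python
# Toy illustration of sigma = f (x) f (x) sigma_hat with kinematic threshold
# tau_0 = m_N^2 / s.  PDF shape and normalizations are illustrative only.
def toy_pdf(xi):
    return (1.0 - xi)**3 / xi              # crude valence-like shape

def reduced_partonic(shat, mN):
    rN = mN**2 / shat
    return (1.0 - rN)**2 * (2.0 + rN) / shat if rN < 1.0 else 0.0

def hadronic(mN, sqrt_s, n=200):
    s, tau0 = sqrt_s**2, (mN / sqrt_s)**2
    h1, total = (1.0 - tau0) / n, 0.0
    for i in range(n):                     # midpoint rule in both variables
        x1 = tau0 + (i + 0.5) * h1
        lo = tau0 / x1
        h2 = (1.0 - lo) / n
        for j in range(n):
            x2 = lo + (j + 0.5) * h2
            total += toy_pdf(x1) * toy_pdf(x2) \
                     * reduced_partonic(x1 * x2 * s, mN) * h1 * h2
    return total

sigma_500  = hadronic(0.5, 13.0)   # m_N = 500 GeV at sqrt(s) = 13 TeV (arb. units)
sigma_1000 = hadronic(1.0, 13.0)
assert sigma_500 > sigma_1000 > 0.0   # the rate falls with heavier m_N
```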
In terms of $\varepsilon(M_{W_R})$, the hadronic equivalent of Eq.~(\ref{eq:epsRedXSecDef}) is
\begin{eqnarray}
\sigma(p ~p ~\to N ~\ell^\pm_1 \to \ell^\pm_1 ~\ell^\pm_2 + X)
=
\varepsilon(M_{W_R}) \times \tilde{\sigma}.
\label{eq:loHadronicFactorization}
\end{eqnarray}
Here, $\tilde{\sigma}$ is the ``reduced'' hadronic cross section and is related to $\tilde{\hat{\sigma}}$ by
the convolutions $\tilde{\sigma} = f \otimes f \otimes \tilde{\hat{\sigma}}$.
As the next-to-leading order (NLO) in QCD corrections for arbitrary DY processes largely factorize
from the hard scattering process~\cite{Harris:2001sx,Ruiz:2015zca}, Eq.~(\ref{eq:loHadronicFactorization}) holds at NLO:
\begin{eqnarray}
\sigma^{\rm NLO}(p ~p \to N ~\ell^\pm_1 \to \ell^\pm_1 ~\ell^\pm_2 + X) = \varepsilon(M_{W_R}) \times \tilde{\sigma}^{\rm NLO}.
\nonumber\\
\label{eq:nloHadronicFactorization}
\end{eqnarray}
Given that reported LHC limits on the DP cross section can be applied to the LRSM for kinematically inaccessible $W_R$,
Eq.~(\ref{eq:nloHadronicFactorization}) shows how to translate an upper bound on the rate into an upper bound on $\varepsilon(M_{W_R})$.
For the NEFT operators in Eq.~(\ref{eq:neftD6Ops}), the corresponding partonic scattering rates are given by~\cite{delAguila:2008ir}
\begin{eqnarray}
\hat{\sigma}_{S3}(u\overline{d}\to N\ell^\pm_1 \to \ell^\pm_1\ell^\pm_2 X) &=&
\frac{\alpha_{S3}^2 \vert Y_{\ell_1 N}\vert^2}{2^7~3N_c \pi} \frac{\hat{s}}{\Lambda^4}
\nonumber\\
\times(1-r_N)^2(2+r_N) &\times& \BR{N \to \ell_2 X}, \qquad
\label{eq:eftXsecS3} \\
\hat{\sigma}_V(u\overline{d}\to N\ell^\pm_1 \to \ell^\pm_1\ell^\pm_2 X) &=&
\frac{4\alpha_V^2}{\alpha_{S3}^2}\hat{\sigma}_{S3}.
\label{eq:eftXsecV}
\end{eqnarray}
Comparing to Eqs.~(\ref{eq:lrsmDXSec})-(\ref{eq:dpDXSec}), one finds the mapping
\begin{eqnarray}
\mathcal{O}_{S3}^{(6)} &:& \varepsilon^{\ell_1\ell_2}(M_{W_R}) = \frac{\alpha_{S3}^2}{\Lambda^4}\vert Y_{\ell_1 N}\vert^2\BR{N\to \ell_2 X},~\qquad
\label{eq:od6S3Map}
\\
\mathcal{O}_V^{(6)} &:& \varepsilon^{\ell_1\ell_2}(M_{W_R}) = \frac{4\alpha_V^2}{\Lambda^4}\vert Y_{\ell_1 N}\vert^2 \BR{N\to \ell_2 X}.~\qquad
\label{eq:od6VMap}
\end{eqnarray}
This mapping permits limits on $\varepsilon(M_{W_R})$ to be reinterpreted directly as limits on the NEFT operator coefficients.
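With $\vert Y_{\ell_1 N}\vert=1$, the $\mathcal{O}_V^{(6)}$ mapping inverts to $\Lambda/\sqrt[4]{\alpha_V^2\,{\rm BR}}=(4/\varepsilon)^{1/4}$, which is numerically consistent with the observed 8 TeV $\mu^\pm\mu^\pm$ limits reported in Sec.~\ref{sec:results}:

```python
# Invert eps = 4 alpha_V^2 |Y|^2 BR / Lambda^4 (with |Y| = 1) to obtain the
# NEFT scale Lambda / (alpha_V^2 BR)^(1/4) probed by a limit on eps(M_WR).
def neft_scale(eps_limit):             # eps_limit in TeV^-4, result in TeV
    return (4.0 / eps_limit)**0.25

# observed 8 TeV mu-mu limits quoted in the text:
assert abs(neft_scale(1.95e-1) - 2.1) < 0.05   # m_N = 100 GeV
assert abs(neft_scale(5.44e-2) - 2.9) < 0.05   # m_N = 200 GeV
```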
\begin{figure*}[!t]
\begin{center}
\subfigure[]{\includegraphics[width=.44\textwidth]{fig2a.pdf} \label{fig:lnvWithoutVR_epsMWR_dimuon} }
\subfigure[]{\includegraphics[width=.44\textwidth]{fig2b.pdf} \label{fig:lnvWithoutVR_epsMWR_emuXee} }
\\
\subfigure[]{\includegraphics[width=.44\textwidth]{fig2c.pdf} \label{fig:lnvWithoutVR_mwrKapR_dimuon} }
\subfigure[]{\includegraphics[width=.44\textwidth]{fig2d.pdf} \label{fig:lnvWithoutVR_mwrKapR_emuXee} }
\end{center}
\caption{
(a) As a function of $m_N$, observed 8 TeV LHC upper bound on $\varepsilon^{\mu\mu}(M_{W_R})$ (dash-dot),
expected 14 TeV sensitivity with $\mathcal{L}=100{\rm ~fb^{-1}}$ (solid-triangle) and $1{\rm ~ab^{-1}}$ (dash-dot-diamond),
and expected 100 TeV VLHC sensitivity with $10{\rm ~ab^{-1}}$ (\textcolor{black}{dot-star}).
(b) Same as (a) but with $e^\pm\mu^\pm$ (dash-dot) and $e^\pm e^\pm$ (solid-triangle) at 8 TeV
and $e\mu$ (\textcolor{black}{dot-star}) at 100 TeV.
(c,d) Same as (a,b), respectively, but for lower bounds on $(M_{W_R}/\kappa_R)$.
All limits are obtained at 95\% CL$_s$.
}
\label{fig:lrsmMimicry_Limits}
\end{figure*}
\subsection{Computational Setup}\label{sec:setup}
Practically speaking, the NLO-accurate reduced cross section is determined using
the FeynRules-based~\cite{Christensen:2008py,Alloul:2013bka,Degrande:2014vpa}
NLO-accurate \texttt{Effective Left-Right Symmetric Model} file of~\cite{Mattelaer:2016ynf} and MadGraph5\_aMC@NLO~\cite{Alwall:2014hca}.
The process,
\begin{equation}
p p \to W_R^{\pm *} \to N \mu^\pm ~+X
\end{equation}
is calculated at NLO accuracy assuming test inputs:
\begin{eqnarray}
\{M_{\rm Test}\} &:& M_{W_R} = \textcolor{black}{200}{\rm ~TeV}, ~\kappa_R^{\ell,q}=1,\nonumber\\
& & \vert Y_{\mu N} \vert = 1, ~\BR{N\rightarrow\mu X}=1.\quad
\label{eq:lrsmTestInputs}
\end{eqnarray}
For choice of EW inputs, PDFs, etc., we follow Ref.~\cite{Mattelaer:2016ynf}.
Denoting the $\varepsilon(M_{W_R})$ corresponding to Eq.~(\ref{eq:lrsmTestInputs}) as $\varepsilon(M_{\rm Test})$,
$\tilde{\sigma}^{\rm NLO}$ is obtained from
the relationship
\begin{equation}
\tilde{\sigma}^{\rm NLO} = \frac{\sigma^{\rm NLO}(p ~p ~\to N ~\mu^\pm + X; \{M_{\rm Test}\})}{\varepsilon(M_{\rm Test})}.
\label{eq:reducedNLO}
\end{equation}
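The recasting chain of Eqs.~(\ref{eq:epsDef}) and (\ref{eq:reducedNLO}) can be sketched end to end; the cross-section values below are placeholders, not MadGraph5\_aMC@NLO output:

```python
# Sketch of the recasting chain: test-point rate -> reduced cross section
# -> limit on eps(M_WR) -> lower bound on M_WR / kappa_R.
# sigma_test and sigma_limit are placeholder numbers, not simulation output.
def eps_of(M_over_kappa, Y2=1.0, BR=1.0):
    # eps = kappa^4 |Y|^2 BR / M_WR^4, written in terms of M_WR / kappa_R
    return Y2 * BR / M_over_kappa**4

M_test = 200.0                        # TeV, as in the test inputs above
eps_test = eps_of(M_test)             # 6.25e-10 TeV^-4
sigma_test = 1.0                      # placeholder NLO rate at the test point
sigma_tilde = sigma_test / eps_test   # reduced cross section

sigma_limit = 0.02                    # placeholder 95% CLs cross-section limit
eps_limit = sigma_limit / sigma_tilde
bound = (1.0 / eps_limit)**0.25       # lower bound on M_WR/kappa_R (|Y|^2 BR = 1)

assert abs(sigma_tilde * eps_test - sigma_test) < 1e-12
assert 531.0 < bound < 533.0          # ~532 TeV for these placeholder inputs
```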
\section{Results and Discussion}\label{sec:results}
\begin{figure*}[!t]
\begin{center}
\subfigure[]{\includegraphics[width=.45\textwidth,height=6.3cm]{fig3a.pdf} \label{fig:lnvWithoutVR_MWRmNExcl} }
\subfigure[]{\includegraphics[width=.45\textwidth]{fig3b.pdf} \label{fig:lrsmMimicry_NEFT} }
\end{center}
\caption{
(a) Observed and expected 95\% CL$_s$ sensitivities to the $(M_{W_R},m_N)$ parameter space $(\kappa_R=1)$ for various collider configurations via
direct and indirect searches in the $\mu^\pm\mu^\pm$ final state.
(b) Observed and expected 95\% CL$_s$ sensitivities to the NEFT dimension-six operators $\mathcal{O}^{(6)}_V$ and $\mathcal{O}^{(6)}_{S3}$
in the $\mu^\pm\mu^\pm$ channel for the collider configurations in Fig.~\ref{fig:lnvWithoutVR_epsMWR_dimuon}.
}
\label{fig:lrsmMimicry_recast}
\end{figure*}
We now report the observed sensitivity to the LRSM from DP searches in the $\mu\mu/ee/e\mu$ channels
by the CMS experiment at $\sqrt{s}=8$ TeV with $\mathcal{L}=19.7{\rm ~fb^{-1}}$~\cite{Khachatryan:2015gha,Khachatryan:2016olu}.
We also report expected sensitivities based on 14 TeV projections with $\mathcal{L}=100{\rm ~fb^{-1}}$ and $1{\rm ~ab^{-1}}$~\cite{Alva:2014gxa},
as well as at 100 TeV with $\mathcal{L}=10{\rm ~ab^{-1}}$~\cite{Alva:2014gxa}.
In all cases, 95\% confidence level (CL) limits are obtained/reproduced
via the CL$_s$ method~\cite{Read:2002hq,Junk:1999kv,ATLAS:2011tau},
using the information available in~\cite{Khachatryan:2015gha,Khachatryan:2016olu,Alva:2014gxa},
and assuming Poisson distributions for signal and background processes.
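For Poisson-distributed counts, the CL$_s$ construction reduces to a ratio of cumulative probabilities; a minimal sketch (the event counts below are illustrative):

```python
# Minimal Poisson CLs: CLs = P(n <= n_obs | s+b) / P(n <= n_obs | b).
# A signal hypothesis s is excluded at 95% CLs when cls(n_obs, b, s) < 0.05.
import math

def poisson_cdf(n, mu):
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, b, s):
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

assert abs(cls(5, 5.0, 0.0) - 1.0) < 1e-12        # no signal: CLs = 1
assert cls(5, 5.0, 10.0) < cls(5, 5.0, 1.0) < 1   # CLs shrinks with larger signal
```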
After obtaining the expected (observed) DP cross section limits $\sigma^{{\rm 95\% CL}_s}_{\rm Exp.~(Obs.)}$,
LRSM constraints are determined from the ``reduced'' cross section $\tilde{\sigma}$, as defined in Eq.~(\ref{eq:reducedNLO}),
with the relation
\begin{equation}
\varepsilon^{\ell_1\ell_2}_{\rm Exp.~(Obs.)}(M_{W_R}) = \cfrac{\sigma^{{\rm 95\% CL}_s}_{\rm Exp.~(Obs.)}}{\tilde{\sigma}^{\rm NLO}}.
\end{equation}
In Fig.~\ref{fig:lrsmMimicry_Limits} we plot as a function of $m_N$ the 8 TeV CMS upper bounds on $\varepsilon(M_{W_R})$
for the (a) $\mu\mu$ (dash-dot) as well as (b) $e\mu$ (dash-dot) and $ee$ (\textcolor{black}{upside-down triangle}) channels.
One finds comparable limits for all modes, with
\begin{eqnarray}
\mu^\pm\mu^\pm &:& \varepsilon^{\ell\ell}(M_{W_R}) \lesssim \textcolor{black}{0.05}{\rm ~TeV}^{-4}, \\
e^\pm\mu^\pm,~e^\pm e^\pm &:& \varepsilon^{\ell\ell}(M_{W_R}) \lesssim \textcolor{black}{0.1}{\rm ~TeV}^{-4}.
\end{eqnarray}
For $m_N\lesssim150{\rm ~GeV}$, $W_{\rm SM}$ production greatly diminishes sensitivity.
A weaker limit for $e$-based channels is due to the larger fake and charge misidentification rates for electrons than for muons,
particularly from top quarks.
These features are seen consistently in projections.
In Fig.~\ref{fig:lnvWithoutVR_epsMWR_dimuon}, the expected sensitivity to $\varepsilon^{\mu\mu}(M_{W_R})$
at 14 TeV with $\mathcal{L}=100{\rm ~fb^{-1}}$ (solid-triangle) and $1{\rm ~ab^{-1}}$ (dash-dot-diamond) are shown.
We find that for $m_N = 100-700{\rm ~GeV}$, one can potentially exclude:
\begin{eqnarray}
\mathcal{L}^{14{\rm ~TeV}}_{100{\rm ~fb^{-1}}} &:& \varepsilon^{\mu\mu}(M_{W_R}) \lesssim \textcolor{black}{5\times10^{-4}{\rm ~TeV}^{-4}}, \\
\mathcal{L}^{14{\rm ~TeV}}_{1{\rm ~ab^{-1}}} &:& \varepsilon^{\mu\mu}(M_{W_R}) \lesssim \textcolor{black}{9\times10^{-5}{\rm ~TeV}^{-4}}.
\end{eqnarray}
At a future 100 TeV VLHC, the large increase in parton density coupled with proposed integrated luminosity goals
of 10-20${\rm ~ab^{-1}}$~\cite{Hinchliffe:2015qma} implies a considerable jump in sensitivity to $\varepsilon(M_{W_R})$ for EW-scale $N$.
For $m_N=100-1200{\rm ~GeV}$ and with $10{\rm ~ab^{-1}}$, the $\mu\mu$ (\textcolor{black}{dot-star} in Fig.~\ref{fig:lnvWithoutVR_epsMWR_dimuon}) and
$e\mu$ (\textcolor{black}{dot-star} in Fig.~\ref{fig:lnvWithoutVR_epsMWR_emuXee}) final states can probe:
\begin{eqnarray}
\varepsilon^{\mu\mu}(M_{W_R}) &\lesssim& \textcolor{black}{2\times10^{-7}-1\times10^{-6}{\rm ~TeV}^{-4}}, \\
\varepsilon^{e\mu}(M_{W_R}) &\lesssim& \textcolor{black}{2\times10^{-7}-7\times10^{-6}{\rm ~TeV}^{-4}}.
\end{eqnarray}
\begin{table*}
\centering
\small
\caption{
Observed~\cite{Khachatryan:2015gha,Khachatryan:2016olu} and
expected~\cite{Alva:2014gxa} 95\% CL$_s$ sensitivities to $\varepsilon(M_{W_R})$ and $(M_{W_R}/\kappa_R)$ in the LRSM as well as
$\Lambda/\sqrt[4]{\alpha_V^2 \text{BR}}$ in NEFT assuming various $pp$ collider energies $(\sqrt{s})$ and integrated luminosity caches $(\mathcal{L})$.
}
\begin{tabular}{ c || c || c | c | c | c || c | c | c | c || c | c | c | c }
\hline\hline
\multicolumn{2}{c||}{} & Obs. & \multicolumn{3}{c||}{Exp.} & Obs. & \multicolumn{3}{c||}{Exp.} & Obs. & \multicolumn{3}{c}{Exp.} \\ \hline
$\sqrt{s}$ [TeV] & & 8 & 14 & 14 & 100 & 8 & 14 & 14 & 100 & 8 & 14 & 14 & 100 \\
$\mathcal{L}$ [${\rm ~fb^{-1}}$] & & 19.7 & 100 & $10^3$ & $10^4$ & 19.7 & 100 & $10^3$ & $10^4$ & 19.7 & 100 & $10^3$ & $10^4$ \\ \hline\hline
$m_N$ [GeV] & $\ell^\pm_1\ell^\pm_2$ & \multicolumn{4}{c|}{$\varepsilon(M_{W_R})$~[TeV$^{-4}$]}
& \multicolumn{4}{c|}{$M_{W_R}/\kappa_R$ [TeV]}
& \multicolumn{4}{c}{$\Lambda/\sqrt[4]{\alpha_V^2 \cdot \text{BR}}$ [TeV]}
\\ \hline\hline
\multirow{3}{*}{100} & $\mu\mu$ & $1.95\times10^{-1}$ & $6.45\times10^{-4}$ & $1.00\times10^{-4}$ & $4.96\times10^{-7}$ & 1.3 & 5.8 & 8.4 & 32 & 2.1 & 9.7 & 14 & 53 \\
& $e\mu$ & $8.05\times10^{-1}$ & -- & -- & $1.64\times10^{-6}$ & 0.75 & -- & -- & 20 & 1.5 & -- & -- & 40 \\
& $ee$ & $8.70\times10^{-1}$ & -- & -- & -- & 0.87 & -- & -- & -- & 1.5 & -- & -- & -- \\ \hline
\multirow{3}{*}{200} & $\mu\mu$ & $5.44\times10^{-2}$ & $6.03\times10^{-4}$ & $1.34\times10^{-4}$ & $1.31\times10^{-6}$ & 1.7 & 5.4 & 7.8 & 25 & 2.9 & 9.0 & 13 & 42 \\
& $e\mu$ & $8.19\times10^{-2}$ & -- & -- & $7.49\times10^{-6}$ & 1.3 & -- & -- & 14 & 2.6 & -- & -- & 27 \\
& $ee$ & $7.42\times10^{-2}$ & -- & -- & -- & 1.6 & -- & -- & -- & 2.7 & -- & -- & -- \\ \hline
\multirow{3}{*}{300} & $\mu\mu$ & $4.81\times10^{-2}$ & $6.84\times10^{-4}$ & $9.69\times10^{-5}$ & $9.22\times10^{-7}$ & 1.8 & 5.7 & 8.5 & 27 & 3.0 & 9.5 & 14 & 46 \\
& $e\mu$ & $7.70\times10^{-2}$ & -- & -- & $2.95\times10^{-6}$ & 1.3 & -- & -- & 17 & 2.7 & -- & -- & 34 \\
& $ee$ & $8.42\times10^{-2}$ & -- & -- & -- & 1.6 & -- & -- & -- & 2.6 & -- & -- & -- \\ \hline
\multirow{3}{*}{500} & $\mu\mu$ & $1.06\times10^{-1}$ & $5.74\times10^{-4}$ & $8.04\times10^{-5}$ & $4.79\times10^{-7}$ & 1.5 & 5.4 & 8.9 & 32 & 2.5 & 9.1 & 15 & 54 \\
& $e\mu$ & $1.66\times10^{-1}$ & -- & -- & $5.90\times10^{-7}$ & 1.1 & -- & -- & 26 & 2.2 & -- & -- & 51 \\
& $ee$ & $1.29\times10^{-1}$ & -- & -- & -- & 1.4 & -- & -- & -- & 2.4 & -- & -- & -- \\ \hline
\multirow{2}{*}{1200} & $\mu\mu$ & -- & -- & -- & $1.95\times10^{-7}$ & -- & -- & -- & 40 & -- & -- & -- & 67 \\
& $e\mu$ & -- & -- & -- & $2.09\times10^{-7}$ & -- & -- & -- & 33 & -- & -- & -- & 66 \\ \hline\hline
\end{tabular}
\label{tb:lrsmSensitivity}
\end{table*}
Derived limits on $\varepsilon(M_{W_R})$ hold for rather generic LR scenarios.
Under the strong (but typical) assumptions of a minimal LRSM setting, we can
rewrite constraints as lower bounds on the ratio of $M_{W_R}$ and $\kappa_R^{q,\ell}$.
Specifically, assuming gauge coupling universality, one has
\begin{equation}
\kappa_R \equiv \kappa_R^{q} = \kappa_R^{\ell}.
\end{equation}
For single flavor final-states, we take the aligned lepton mixing limit Eq.~(\ref{eq:mumueeMix}),
whereas for the mixed flavor channel, we take the maximally mixed limit Eq.~(\ref{eq:emuMix}), i.e.,
\begin{eqnarray}
& & \vert Y_{\ell N}\vert \approx 1 \quad\text{and}\quad \BR{N\rightarrow \ell^\pm X} \approx 1, \text{or}\qquad
\label{eq:mumueeMix}
\\
& & \vert Y_{e N}\vert \approx \vert Y_{\mu N}\vert \approx 1/\sqrt{2}
\quad\text{and}\quad \nonumber\\
& & \BR{N \to e^\pm X} \approx \BR{N\rightarrow \mu^\pm X} \approx1/2.\qquad
\label{eq:emuMix}
\end{eqnarray}
While $N$ can decay with equal
likelihood to $\ell_i^+$ and $\ell_i^-$,
the same-sign charge stipulation reduces the effective branching by $1/2$.
With this, we invert $\varepsilon(M_{W_R})$, giving
\begin{equation}
\frac{M_{W_R}}{\kappa_R} = \frac{1}{\sqrt[4]{\eta\times\varepsilon^{\ell_1\ell_2}(M_{W_R})}}, \quad
\eta = \left\{\begin{matrix}
2, & \ell_1 = \ell_2 \\
4, & \ell_1 \neq \ell_2
\end{matrix}\right.,
\end{equation}
where $\eta$ accounts for charge and flavor multiplicities.
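As a numerical cross-check of this inversion, the following minimal Python sketch (the helper is ours; the $\varepsilon$ inputs are the observed 8 TeV, $m_N=100{\rm ~GeV}$ entries of Tbl.~\ref{tb:lrsmSensitivity}) reproduces the corresponding $(M_{W_R}/\kappa_R)$ values:

```python
def mwr_over_kappa(eps_tev4, same_flavor=True):
    """Invert eps(M_WR) = eta * (kappa_R / M_WR)^4 into a lower bound on
    M_WR / kappa_R (in TeV), with eta = 2 (4) for same (mixed) lepton flavors."""
    eta = 2.0 if same_flavor else 4.0
    return (eta * eps_tev4) ** -0.25

# Observed 8 TeV, m_N = 100 GeV limits from the sensitivity table:
print(round(mwr_over_kappa(1.95e-1, same_flavor=True), 2))   # -> 1.27, cf. 1.3 TeV (mu mu)
print(round(mwr_over_kappa(8.05e-1, same_flavor=False), 2))  # -> 0.75 TeV (e mu)
```

The fourth root makes the mass-scale bound only mildly sensitive to the precise cross-section limit.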
In Figs.~\ref{fig:lnvWithoutVR_mwrKapR_dimuon} and ~\ref{fig:lnvWithoutVR_mwrKapR_emuXee}, respectively,
we show the lower bounds on $(M_{W_R}/\kappa_R)$ for the same configurations as (a) and (b).
For all channels, the observed 8 TeV limits span:
\begin{eqnarray}
m_N=100-200{\rm ~GeV} &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \gtrsim \textcolor{black}{0.7-1.8}{\rm ~TeV}, \nonumber\\
m_N=200-700{\rm ~GeV} &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \gtrsim \textcolor{black}{1.1-1.8}{\rm ~TeV}. \nonumber
\end{eqnarray}
At $\sqrt{s}=14$ TeV with $\mathcal{L}=100{\rm ~fb^{-1}}$ and $1{\rm ~ab^{-1}}$, the $\mu\mu$ final state can exclude for $m_N = 100-700{\rm ~GeV}$:
\begin{eqnarray}
\mathcal{L}^{14{\rm ~TeV}}_{100{\rm ~fb^{-1}}} &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \lesssim \textcolor{black}{5.2-5.8}{\rm ~TeV}, \\
\mathcal{L}^{14{\rm ~TeV}}_{1{\rm ~ab^{-1}}} &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \lesssim \textcolor{black}{7.8-8.9}{\rm ~TeV}. \quad
\end{eqnarray}
Comparable sensitivity in the $ee$ and $e\mu$ channels is expected.
At 100 TeV with $10{\rm ~ab^{-1}}$, the $\mu\mu$ and $e\mu$ channels for $m_N=100-1200{\rm ~GeV}$ are sensitive to
\begin{eqnarray}
\mu^\pm\mu^\pm &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \lesssim \textcolor{black}{25-40}{\rm ~TeV}, \\
e^\pm\mu^\pm &:& \left(\frac{M_{W_R}}{\kappa_R}\right) \lesssim \textcolor{black}{14-33}{\rm ~TeV}.
\end{eqnarray}
We note that the sharp cutoffs at $m_N=500,~700$, and $1200{\rm ~GeV}$ for the several scenarios in Fig.~\ref{fig:lnvWithoutVR_MWRmNExcl}
are due to the limited number of mass hypotheses considered in~\cite{Khachatryan:2015gha,Khachatryan:2016olu,Alva:2014gxa}.
A dedicated analysis would show sensitivity to larger $m_N$.
To compare with searches for resonant $W_R$-$N$ production, we plot in Fig.~\ref{fig:lnvWithoutVR_MWRmNExcl}
the region of the $(M_{W_R},m_N)$ parameter space
excluded by the ATLAS experiment at 8 TeV with $20.3{\rm ~fb^{-1}}$~\textcolor{black}{in the $\mu\mu$ channel}~\cite{Aad:2015xaa},
along with our corresponding sensitivities for $\kappa_R=1$.
For $m_N \approx 100-500{\rm ~GeV},$ we find that the reinterpretation of CMS's DP limits is actually
within $\textcolor{black}{1.5\times}$ of present $M_{W_R}$ limits from resonant $W_R$-$N$ and dijet (not shown)
searches~\cite{Aad:2015xaa,Khachatryan:2014dka,Khachatryan:2015dcf,ATLAS:2015nsi}.
However, for such low mass scales, the validity of this approach requires $\kappa_R\gg1$.
With 100${\rm ~fb^{-1}}$ at 14 TeV, projected sensitivities are competitive with
the $\mathcal{O}(5)$ TeV reach from resonant searches using the full HL-LHC dataset~\cite{Ferrari:2000sp,Han:2012vk,Mitra:2016kov}.
With 1${\rm ~ab^{-1}}$ at 14 TeV, and more so with 10${\rm ~ab^{-1}}$ at 100 TeV,
the DP channel can probe super heavy $v_R$ scales favored by
low-energy probes~\cite{Bertolini:2014sua,Maiezza:2014ala,Maiezza:2016bzp,Zhang:2007fn,Zhang:2007da}.
These findings suggest that searches for heavy Majorana neutrinos via off-shell $W_R$ may be valuable
at current and future collider experiments.
For completeness, upper limits on $\varepsilon^{\mu\mu}(M_{W_R})$ are recast in terms of the NEFT operators in Eq.~(\ref{eq:neftD6OpsMix}).
Using Eqs.~(\ref{eq:od6VMap})-(\ref{eq:od6S3Map}), the lower bounds on $\Lambda/\sqrt[4]{\alpha_{V,S3}^2\,{\rm BR}}$ are
\begin{eqnarray}
\cfrac{\Lambda}{\sqrt[4]{\alpha_V^2 \BR{N\rightarrow\mu X}}} &>& \sqrt[4]{\cfrac{4 \vert Y_{\mu N}\vert^2}{\varepsilon^{\mu\mu}_{\rm Exp~(Obs)}(M_{W_R})}},
\\
\cfrac{\Lambda}{\sqrt[4]{\alpha_{S3}^2 \BR{N\rightarrow\mu X}}} &>& \sqrt[4]{\cfrac{\vert Y_{\mu N}\vert^2}{\varepsilon^{\mu\mu}_{\rm Exp~(Obs)}(M_{W_R})}}.
\end{eqnarray}
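These two bounds differ only by the factor of 4 in the numerator; a short illustrative sketch (our own helper names, with $\vert Y_{\mu N}\vert=1$ as in Eq.~(\ref{eq:mumueeMix}) and the branching fraction absorbed into the left-hand side) evaluates both:

```python
def lambda_over_root4_alphaV(eps_tev4, y_muN=1.0):
    """Lower bound (TeV) on Lambda / (alpha_V^2 BR)^(1/4) from eps^{mumu}(M_WR)."""
    return (4.0 * y_muN**2 / eps_tev4) ** 0.25

def lambda_over_root4_alphaS3(eps_tev4, y_muN=1.0):
    """Analogous bound for the O_S3 operator, without the factor of 4."""
    return (y_muN**2 / eps_tev4) ** 0.25

# Observed 8 TeV mu-mu limit at m_N = 100 GeV:
print(round(lambda_over_root4_alphaV(1.95e-1), 1))  # -> 2.1 TeV, cf. the table
```

The $\mathcal{O}_{S3}$ bound is a factor $\sqrt{2}$ weaker for the same $\varepsilon$.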
As a function of $m_N$, the observed and expected sensitivities to $\mathcal{O}_V$
for the several configurations in Fig.~\ref{fig:lnvWithoutVR_epsMWR_dimuon} and mixing choice in Eq.~(\ref{eq:mumueeMix})
are shown in Fig.~\ref{fig:lrsmMimicry_NEFT}. Over the respective ranges of $m_N$, they span approximately
\begin{eqnarray}
\mathcal{L}^{8{\rm ~TeV}}_{19.7{\rm ~fb^{-1}}}
&:& \frac{\Lambda}{\sqrt[4]{\alpha_V^2 \BR{N\rightarrow\mu X}}} > \textcolor{black}{2.1-3.0}{\rm ~TeV}, \quad \quad
\\
\mathcal{L}^{14{\rm ~TeV}}_{100{\rm ~fb^{-1}}}
&:& \frac{\Lambda}{\sqrt[4]{\alpha_V^2 \BR{N\rightarrow\mu X}}} > \textcolor{black}{8.7-9.7}{\rm ~TeV},
\\
\mathcal{L}^{14{\rm ~TeV}}_{1{\rm ~ab^{-1}}}
&:& \frac{\Lambda}{\sqrt[4]{\alpha_V^2 \BR{N\rightarrow\mu X}}} > \textcolor{black}{13-15}{\rm ~TeV},
\\
\mathcal{L}^{100{\rm ~TeV}}_{10{\rm ~ab^{-1}}}
&:& \frac{\Lambda}{\sqrt[4]{\alpha_V^2 \BR{N\rightarrow\mu X}}} > \textcolor{black}{42-68}{\rm ~TeV}.
\end{eqnarray}
We summarize our reported findings in Tbl.~\ref{tb:lrsmSensitivity}.
\section{Summary and Conclusion}\label{sec:summary}
While the LRSM naturally addresses shortcomings of the SM,
it is not guaranteed that its entire particle spectrum lies within the kinematic reach of the LHC or a future 100 TeV VLHC.
Indeed, low-energy probes favor the LR breaking scale to be above the LHC's
threshold~\cite{Chakrabortty:2012pp,Bertolini:2014sua,Maiezza:2014ala,Maiezza:2016bzp,Zhang:2007fn,Zhang:2007da}.
In this context, we argue that when LRSM gauge bosons are too heavy to be produced resonantly,
on-shell production of sub-TeV Majorana neutrinos via the process $pp\to W_R^* \to N\ell^\pm \to \ell^\pm\ell^\pm + nj$
is still possible when mediated by far \textit{off-shell} $W_R$.
In this regime, the process' mass scale and topology are identical to the direct production (DP)
process $pp\to W_{\rm SM}^{*} \to N\ell^\pm \to \ell^\pm\ell^\pm + nj$.
Subsequently, searches for DP of heavy Majorana neutrinos can be translated into searches for LR symmetry.
We have recast current~\cite{Khachatryan:2014dka,Aad:2015xaa}
and projected~\cite{Han:2012vk,Alva:2014gxa} sensitivities to the DP process at $pp$ colliders
into observed and expected sensitivities for the LRSM, in the heavy $M_{W_R}$ limit.
We find the following:
\begin{enumerate}[i)]
\item At the 8 TeV LHC, for $m_N=100-500{\rm ~GeV}$ and right-left coupling ratio $\kappa_R = g_R/g_L$,
searches have excluded at 95\% CL$_s$ ${(M_{W_R}/\kappa_R)< 0.7-1.8{\rm ~TeV}}$.
For $m_N\gtrsim200$ GeV, this is within ${1.5\times}$ of searches for resonant $W_R$ and $W_R$-$N$ production.
\item At 14 TeV with $100{\rm ~fb^{-1}}~(1{\rm ~ab^{-1}})$, one can exclude at 95\% CL$_s$
$\textcolor{black}{(M_{W_R}/\kappa_R) < 5.2-5.8~(7.8-8.9)}$ TeV
for $m_N=100-700{\rm ~GeV}$, well beyond the $\mathcal{O}(5)$ TeV anticipated reach of resonant $W_R$ searches.
\item At 100 TeV with $10{\rm ~ab^{-1}}$, one can probe $(M_{W_R}/\kappa_R) < 14-40{\rm ~TeV}$ at 95\% CL$_s$ for $m_N=100-1200{\rm ~GeV}$,
thereby greatly complementing low-energy probes of $\mathcal{O}(10)$ TeV $v_R$.
\item In terms of an Effective Field Theory featuring heavy neutrinos, we find limits on mass/coupling scales
for gauge invariant, dimension six operators comparable to the aforementioned limits in the LRSM.
\end{enumerate}
\begin{acknowledgements}
Peter Ballett, Lydia Brenner, Luca Di Luzio, Silvia Pascoli, Carlos Fibo Tamarit, and Cedric Weiland are thanked for discussions.
This work was funded in part by the UK Science and Technology Facilities Council, and
the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement 674896 (Elusives ITN).
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Related Works}
\label{sec:related}
\input{related}
\section{Cascade Residual Learning}
\label{sec:method}
\input{method}
\section{Experiments}
\label{sec:results}
\input{results}
\section{Conclusions}
\label{sec:conclude}
\input{conclude}
{\small
\bibliographystyle{ieee}
}
\subsection{Two-stage Disparity Computation}
In general, low-level vision tasks, {\it e.g.}, denoising and deblurring, can be improved with post-facto iterative refinement \cite{milanfar13}, and disparity/flow estimation is no exception \cite{brox11}.
Recently, Ilg~{\it et al.}~\cite{ilg17} introduced FlowNet\,2.0, which stacks CNNs for optical flow refinement and achieves a reasonable gain.
The lessons of the previous works inspire us to employ a two-stage CNN for disparity estimation.
Akin to the proposal of DispNetC (``C'' indicates the network has a correlation layer) \cite{mayer16}, the first stage of our CNN has an hour-glass structure with skip connections.
However, DispNetC outputs disparity image at half the resolution of the input stereo pair.
Differently, our network includes extra deconvolution modules to magnify the disparity, leading to disparity estimates at the same size of the input images.
We call our first stage network \emph{DispFulNet} (``Ful'' means full-resolution).
As shown later in Section\,\ref{sec:results}, our DispFulNet provides extra details and sharp transitions at object boundaries, serving as an ideal starting point for the second-stage refinement.
Note that in our network, the two stages are cascaded in a way recommended by \cite{ilg17}.
Specifically, the first network takes as input the stereo pair $I_L$ and $I_R$ and produces the initial disparity $d_1$ (of the left image).
We then warp the right image $I_R$ according to disparity $d_1$ and obtain a synthesized left image, {\it i.e.},
\begin{equation}
\widetilde{I}_L(x,y) = I_R(x + d_1(x,y), y).
\end{equation}
Then the input to the second network is the concatenation of $I_L$, $I_R$, $d_1$, $\widetilde{I}_L(x,y)$ and the error $e_L = |I_L-\widetilde{I}_L(x,y)|$.
The warping operation is differentiable for bilinear interpolation \cite{ilg17,jaderberg15}, hence our network can be trained end-to-end.
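For concreteness, the warping step can be sketched in a few lines of NumPy (an illustrative stand-in for the differentiable bilinear sampler of \cite{jaderberg15}; function and variable names are ours):

```python
import numpy as np

def warp_right_to_left(I_R, d1):
    """Synthesize the left view, I~_L(x, y) = I_R(x + d1(x, y), y),
    by bilinear interpolation along the x (epipolar) direction."""
    H, W = d1.shape
    xs = np.arange(W)[None, :] + d1                  # source x-coordinates
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    w = np.clip(xs - x0, 0.0, 1.0)                   # interpolation weight
    rows = np.arange(H)[:, None]
    return (1.0 - w) * I_R[rows, x0] + w * I_R[rows, x0 + 1]

# A unit disparity on a horizontal ramp shifts the image by one pixel:
I_R = np.tile(np.arange(6.0), (4, 1))
I_warp = warp_right_to_left(I_R, np.ones((4, 6)))
e_L = np.abs((I_R + 1.0) - I_warp)                   # reconstruction error e_L
```

In the actual network the same sampling is implemented as a layer so that gradients flow through $d_1$.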
\subsection{Mutiscale Residual Learning}
For the second-stage refinement/rectification, we propose to adopt the residual learning scheme of He~{\it et al.}~\cite{he16}.
Particularly, given the initial disparity $d_1$ obtained with the first stage, the second network outputs the corresponding residual signal $r_2$, then the new disparity $d_2$ is given by $d_1 + r_2$.
In this way, we relieve the ``burden'' of the second-stage network, letting it only focus on learning the highly nonlinear residual.
On par with the spirit in \cite{he16}, in the extreme case when the first stage already produces the optimal disparity, the second-stage network only needs to output zero residual to retain the optimality.
The second-stage of our architecture also takes an hour-glass structure, producing residual signals across multiple scales.
We call our second-stage network \emph{DispResNet} (``Res'' means residual).
In the expanding part of DispResNet, the residuals are produced across several scales.
They are denoted as $\{r_2^{(s)}\}_{s=0}^S$ where $0$ denotes the scale of full resolution.
The summation of $r_2^{(s)}$ with the downsampled disparity $d_1^{(s)}$ leads to the new disparity at scale $s$, {\it i.e.},
\begin{equation}
d_2^{(s)} = d_1^{(s)} + r_2^{(s)}, 0\le s\le S.
\end{equation}
To train DispResNet, we supervise the estimated disparities $\{d_2^{(s)}\}_{s=0}^S$ across all $S+1$ scales. Hence, differing from the off-the-shelf residual block structure proposed in \cite{he16}, our network explicitly supervises the residual signals, leading to effective disparity refinement.
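The per-scale bookkeeping can be sketched as follows (a simplified NumPy illustration; average pooling stands in for the bilinear downsampling layer, and in the network the residuals are produced by learned convolutions rather than given):

```python
import numpy as np

def downsample(d, s):
    """Stand-in for the bilinear downsampling layer: average-pool by 2**s."""
    f = 2 ** s
    H, W = d.shape
    return d.reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def second_stage(d1, residuals):
    """Form d2^(s) = d1^(s) + r2^(s) at every scale s (s = 0 is full resolution)."""
    return {s: downsample(d1, s) + r for s, r in residuals.items()}

def l1_loss(pred, gt):
    """The per-scale supervision signal."""
    return float(np.mean(np.abs(pred - gt)))

d1 = np.ones((8, 8))                       # first-stage (DispFulNet) output
residuals = {0: np.zeros((8, 8)), 1: 0.5 * np.ones((4, 4))}
d2 = second_stage(d1, residuals)
# A zero residual retains the first-stage estimate at full resolution:
print(l1_loss(d2[0], d1))                  # -> 0.0
```

This makes the "zero residual retains optimality" property of residual learning explicit.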
In fact, a straightforward application of FlowNet\,2.0\,\cite{ilg17} for disparity estimation is to adopt DispNetS\,\cite{mayer16}---a variation of DispNetC without correlation layer and ``S'' means simple---to \emph{directly} learn the disparity.
Nevertheless, our comparisons in Section\,\ref{sec:results} show that incorporating residual learning brings more gain than its direct learning counterpart, {\it i.e.}, DispNetS.
Furthermore, residual learning also benefits the finetuning of the overall network, as it alleviates the problem of over-fitting~\cite{he16,ilg17}, while using DispNetS harms the performance after overall finetuning.
\subsection{Network Architecture}
Our CRL architecture is illustrated in Fig.\,\ref{fig:arch}, where $d_1=d_1^{(0)}$, and the final disparity output is $d_2^{(0)}$. To obtain the downsampled disparity images $\{d_1^{(s)}\}_{s=0}^{S}$, we have implemented a differentiable bilinear downsampling layer, similar to the sampler module in the spatial transformer networks\,\cite{jaderberg15}.
The first stage, DispFulNet, enlarges the half-resolution disparity estimates of DispNetC \cite{mayer16}.
For a concise presentation, the detailed architecture of DispFulNet is not provided here.
In general, it shares similar spirits with DispNetC.
Differently, we append extra up-convolutions to the last two convolution layers of DispNetC; the outputs of the up-convolutions are then concatenated with the left image.
By applying one more convolution (with one output channel) to the concatenated 3-D array, we arrive at the output of DispFulNet---a full-resolution disparity image.
The full-resolution disparity image, along with the other intermediate disparity images at six different scales, are supervised by the ground-truth through computing the $\ell_1$ loss.
The detailed specification of the second stage, DispResNet, is provided in Table.\,\ref{tab:dispres}.
Note that at a certain scale, say, $1/4$, the bilinear downsampling layer {\tt pr\_s1\_4} shrinks {\tt pr\_s1}, the disparity prediction of DispFulNet, by a factor of 4.
The downsampled disparity is then added to the learned residual {\tt res\_4} by the element-wise summation layer {\tt pr\_s2\_4}, leading to the disparity prediction at scale $1/4$.
We follow the typical supervised learning paradigm and compute an $\ell_1$ loss between the disparity estimate and the ground-truth disparity at each scale.
\begin{table}[t]
\centering\scriptsize
\begin{tabular}{|m{22pt}<{\centering}|m{5pt}<{\centering}m{5pt}<{\centering}m{30pt}<{\centering}|m{5pt}<{\centering}m{5pt}<{\centering}|m{75pt}<{\centering}|}
\hline
\textbf{Layer} & \textbf{K} & \textbf{S} & \textbf{Channels} & \textbf{I} & \textbf{O} & \textbf{Input Channels}\bigstrut\\
\hline
conv1 & 5 & 1 & 13/64 & 1 & 1 & left+right+left\_s+err+pr\_s1 \bigstrut[t]\\
conv2 & 5 & 2 & 64/128 & 1 & 2 & conv1 \\
conv2\_1 & 3 & 1 & 128/128 & 2 & 2 & conv2 \\
conv3 & 3 & 2 & 128/256 & 2 & 4 & conv2\_1 \\
conv3\_1 & 3 & 1 & 256/256 & 4 & 4 & conv3 \\
conv4 & 3 & 2 & 256/512 & 4 & 8 & conv3\_1 \\
conv4\_1 & 3 & 1 & 512/512 & 8 & 8 & conv4 \\
conv5 & 3 & 2 & 512/1024 & 8 & 16 & conv4\_1 \\
conv5\_1 & 3 & 1 & 1024/1024 & 16 & 16 & conv5 \bigstrut[b]\\
\hline
\hline
res\_16 & 3 & 1 & 1024/1 & 16 & 16 & conv5\_1 \bigstrut[t]\\
pr\_s1\_16 & - & - & 1/1 & 1 & 16 & pr\_s1 \\
pr\_s2\_16 & - & - & 1/1 & 16 & 16 & pr\_s1\_16+res\_16 \bigstrut[b]\\
\hline
upconv4 & 4 & 2 & 1024/512 & 16 & 8 & conv5\_1 \bigstrut[t]\\
iconv4 & 3 & 1 & 1025/512 & 8 & 8 & upconv4+conv4\_1+pr\_s2\_16 \\
res\_8 & 3 & 1 & 512/1 & 8 & 8 & iconv4 \\
pr\_s1\_8 & - & - & 1/1 & 1 & 8 & pr\_s1 \\
pr\_s2\_8 & - & - & 1/1 & 8 & 8 & pr\_s1\_8+res\_8 \bigstrut[b]\\
\hline
upconv3 & 4 & 2 & 512/256 & 8 & 4 & iconv4 \bigstrut[t]\\
iconv3 & 3 & 1 & 513/256 & 4 & 4 & upconv3+conv3\_1+pr\_s2\_8 \\
res\_4 & 3 & 1 & 256/1 & 4 & 4 & iconv3 \\
pr\_s1\_4 & - & - & 1/1 & 1 & 4 & pr\_s1 \\
pr\_s2\_4 & - & - & 1/1 & 4 & 4 & pr\_s1\_4+res\_4 \bigstrut[b]\\
\hline
upconv2 & 4 & 2 & 256/128 & 4 & 2 & iconv3 \bigstrut[t]\\
iconv2 & 3 & 1 & 257/128 & 2 & 2 & upconv2+conv2\_1+pr\_s2\_4 \\
res\_2 & 3 & 1 & 128/1 & 2 & 2 & iconv2 \\
pr\_s1\_2 & - & - & 1/1 & 1 & 2 & pr\_s1 \\
pr\_s2\_2 & - & - & 1/1 & 2 & 2 & pr\_s1\_2+res\_2 \bigstrut[b]\\
\hline
upconv1 & 4 & 2 & 128/64 & 2 & 1 & iconv2 \bigstrut[t]\\
res\_1 & 5 & 1 & 129/1 & 1 & 1 & upconv1+conv1+pr\_s2\_2 \\
pr\_s2 & - & - & 1/1 & 1 & 1 & pr\_s1+res\_1 \bigstrut[b]\\
\hline
\end{tabular}%
\vspace{10pt}
\caption{Detailed architecture of the proposed \emph{DispResNet}. Layers with prefix {\tt pr\_s1} are downsampling layers applying on the predictions of the first stage; while layers with prefix {\tt pr\_s2} are element-wise summation layers leading to predictions of the second stage. {\bf K} means kernel size, {\bf S} means stride, and {\bf Channels} is the number of input and output channels. {\bf I} and {\bf O} are the input and output downsampling factor relative to the input. The symbol {\tt +} means summation for element-wise summation layers; otherwise it means concatenation.}
\label{tab:dispres}%
\end{table}%
One may raise a straightforward question about our design: if a two-stage cascade architecture performs well, why not stack more stages?
First, adding more stages translates to higher computational cost and memory consumption, which is unrealistic for many practical applications.
Second, in this paper, we aim at developing a two-stage network, where the first one manages to produce full-resolution initializations; while the second stage tries its best to refine/remedy the initial disparities with residual learning.
The two stages play their own roles and couple with each other to provide satisfactory results.
As to be seen in Section\,\ref{ssec:res_oth}, our two-stage network estimates high-quality disparity images with an acceptable execution time: it takes 0.47\,sec with an Nvidia GTX 1080 GPU to obtain a disparity image in the KITTI 2015 stereo dataset.
\subsection{Experimental Settings}\label{ssec:res_setup}
{\bf Datasets: }Three publicly available datasets are adopted for training and testing in this work:
\begin{enumerate}[(i)]
\item \emph{FlyingThings3D} \cite{mayer16}: a large scale synthetic dataset containing more than 22k synthetic stereo pairs for training and 4k for testing. We found this dataset has a few images with unreasonably large disparities ({\it e.g.}, greater than $10^3$), therefore we perform a simple screening on this dataset before using it. Particularly, for a disparity image, if more than $25\%$ of its disparity values are greater than $300$, this disparity image (and the corresponding stereo pair) is removed.
%
\item \emph{Middlebury 2014} \cite{scharstein14}: a small dataset capturing various high-resolution in-door scenes, which has 23 stereo pairs with given ground-truth. We only use this dataset for testing.
%
\item \emph{KITTI 2015} \cite{menze15}: a real-world dataset with dynamic street views from the perspective of a driving car. It provides 200 stereo pairs with sparse ground-truth disparities and 200 pairs for evaluation through its online leaderboard. Similar to the practice in \cite{gidaris17}, we divide its training set into a training split and a validation split, where the training split occupies $85\%$ of the data and the validation split occupies the rest.
\end{enumerate}
{\bf Training: }The Caffe framework~\cite{jia14} is used to implement our CRL scheme. Generally speaking, we first train DispFulNet; then, fixing its weights, we train DispResNet.
After that, we optionally finetune the overall network.
Depending on the targeting dataset for testing, different training schedules are employed.
For presentation, we hereby encode every training schedule with a string.
A segment of such string contains two characters {\tt ND}, meaning that stage {\tt N} is trained on dataset {\tt D}, with stage {\tt 0} denotes the whole network.
For instance, {\tt 1F-1K} means the first stage is trained on the FlyingThings3D, then it is finetuned on KITTI.
The training schedules for the three datasets are presented in Table.\,\ref{tab:train}.
Note that the networks trained for FlyingThings3D are directly applied to the Middlebury data (at the quarter scale).
We adopt a batch size of 4 when training the first or the second stage, and a batch size of 2 when finetuning the overall network due to limited GPU memory.
We employ the parameters provided in \cite{mayer16} when training the first stage or the second stage on the FlyingThings3D dataset.
During finetuning, we train the model for 200\,K iterations; however, when the target dataset is KITTI 2015, we only optimize for 100\,K iterations to lessen the problem of over-fitting.
Since some of the ground-truth disparities are not available for the KITTI dataset, we neglect them when computing the $\ell_1$ loss.
{\bf Testing:} We test our networks on the aforementioned datasets, with two widely used metrics for evaluation:
\begin{enumerate}[(i)]
\item \emph{Endpoint-error} (EPE): the average Euclidean distance between the estimated disparity and the ground-truth.
%
\item \emph{Three-pixel-error} (3PE): computes the percentage of pixels with an endpoint error of more than 3 pixels. We call it three-pixel-error in this work.
\end{enumerate}
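Both metrics reduce to simple array operations; a minimal NumPy sketch follows (the helper names are ours, and the optional validity mask accommodates the sparse KITTI ground truth):

```python
import numpy as np

def epe(pred, gt, valid=None):
    """Average endpoint error; for 1-D disparity this is the mean |pred - gt|."""
    err = np.abs(pred - gt)
    return float(err[valid].mean() if valid is not None else err.mean())

def three_pixel_error(pred, gt, valid=None):
    """Percentage of pixels whose endpoint error exceeds 3 pixels."""
    err = np.abs(pred - gt)
    if valid is not None:
        err = err[valid]
    return 100.0 * float((err > 3.0).mean())

pred = np.array([1.0, 2.0, 10.0])
gt   = np.array([1.0, 2.0,  2.0])
print(epe(pred, gt))                 # -> 2.666...
print(three_pixel_error(pred, gt))   # -> 33.33...
```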
\subsection{Architecture Comparisons}
We first compare our design with several similar network architectures.
Particularly, we use either DispNetC or DispFulNet as the first-stage network; while at the second stage, we use either DispNetS (with direct learning) or DispResNet (with residual learning) for improving the disparity estimates.
The plain DispNetC and DispFulNet (with only one stage) are also considered in our evaluation. For DispNetC, we adopted the model released by Dosovitskiy~{\it et al.}~\cite{dosovitskiy15}; while DispFulNet is trained in a similar manner to that in \cite{dosovitskiy15} ({\it e.g.}, with multi-scale loss functions).
During the training process, we follow the schedules shown in Table\,\ref{tab:train}, hence 20 different network models are obtained for comparisons.
Objective performance of the networks on the three datasets are presented in Table.\,\ref{tab:res_arch}.
We have the following observations:
\begin{enumerate}[(i)]
\item Using our DispFulNet as the first stage provides \emph{higher accuracy} compared to DispNetC.
\item Though appending a second-stage network improves the results, our DispResNet brings \emph{extra gain} compared to DispNetS (the proposal in \cite{ilg17}).
\item When DispNetS serves as the second stage, the performance deteriorates after overall finetuning, in accordance with \cite{ilg17}. In contrast, when DispResNet is used, overall optimization further \emph{improves} the performance in most cases (except for Middlebury, which is not used for training). As noted in \cite{he16}, this is because residual learning is less prone to over-fitting the training data, making the network more stable for overall optimization.
\end{enumerate}
As a whole, our CRL scheme (DispFulNet+DispResNet) with overall finetuning achieves the best objective qualities in all the three datasets.
In the following, we use this network model for further comparisons.
Fig.\,\ref{fig:res_slf} shows the outputs of our CRL scheme and its first stage, DispFulNet, as well as their absolute differences from the ground-truth disparities.
The three rows are segments taken from the FlyingThings3D, Middlebury and KITTI datasets, respectively.
We see that not only are the disparities at object boundaries greatly improved by the second stage (DispResNet), but some of the occlusion and textureless regions are also rectified.
For instance, the regions within the red boxes (on the ground-truth) are corrected by DispResNet.
\deflen{figresslf}{78pt}
\begin{figure*}[!t]
\centering
\subfloat{\includegraphics[width=\figresslf]{fig_self_lft_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_gth_f_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_sup_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_crl_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_ers_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_erc_f.png}}\\ \vspace{-5pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_lft_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_gth_m_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_sup_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_crl_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_ers_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresslf]{fig_self_erc_m.png}}\\
\addtocounter{subfigure}{-10}\vspace{-5pt}
\subfloat[Left image]{\includegraphics[width=\figresslf]{fig_self_lft_k.png}}\hspace{2pt}
\subfloat[Ground-truth disparity]{\includegraphics[width=\figresslf]{fig_self_gth_k_reg.png}}\hspace{2pt}
\subfloat[First-stage output]{\includegraphics[width=\figresslf]{fig_self_sup_k.png}}\hspace{2pt}
\subfloat[Second-stage output]{\includegraphics[width=\figresslf]{fig_self_crl_k.png}}\hspace{2pt}
\subfloat[First-stage error]{\includegraphics[width=\figresslf]{fig_self_ers_k.png}}\hspace{2pt}
\subfloat[Second-stage error]{\includegraphics[width=\figresslf]{fig_self_erc_k.png}}\\
\caption{Visual comparisons between the first-stage output by DispFulNet and the second-stage output by the whole CRL scheme (DispFulNet+DispResNet). Note that the regions within the red boxes are corrected by DispResNet.}
\label{fig:res_slf}
\end{figure*}
Fig.\,\ref{fig:res_arch} shows the disparity estimates of three different two-stage networks: DispNetC+DispNetS (akin to the proposal of \cite{ilg17}), DispNetC+DispResNet, and DispFulNet+DispResNet (our CRL), where DispNetC+DispNetS uses the model with separate training while DispNetC+DispResNet uses the model after overall finetuning.
Again, the three rows are segments taken from the FlyingThings3D, Middlebury and KITTI datasets, respectively.
We see that, firstly, the proposed CRL provides the sharpest disparity estimates among the three architectures, with the help of its first stage, DispFulNet.
Furthermore, incorporating residual learning in the second stage produces high-quality disparities for ill-posed regions.
Note the disparity estimates within the red boxes are progressively improved from DispNetC+DispNetS and DispNetC+DispResNet, to CRL.
\deflen{figresarc}{95pt}
\begin{figure*}[!t]
\centering
\subfloat{\includegraphics[width=\figresarc]{fig_arch_lft_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_gth_f_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_dcs_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_dcr_f.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_crl_f.png}}\\ \vspace{-5pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_lft_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_gth_m_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_dcs_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_dcr_m.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresarc]{fig_arch_crl_m.png}}\\
\addtocounter{subfigure}{-10}\vspace{-5pt}
\subfloat[Left image]{\includegraphics[width=\figresarc]{fig_arch_lft_k.png}}\hspace{2pt}
\subfloat[Ground-truth disparity]{\includegraphics[width=\figresarc]{fig_arch_gth_k_reg.png}}\hspace{2pt}
\subfloat[DispNetC+DispNetS]{\includegraphics[width=\figresarc]{fig_arch_dcs_k.png}}\hspace{2pt}
\subfloat[DispNetC+DispResNet]{\includegraphics[width=\figresarc]{fig_arch_dcr_k.png}}\hspace{2pt}
\subfloat[CRL]{\includegraphics[width=\figresarc]{fig_arch_crl_k.png}}\\
\caption{Comparisons of three two-stage network architectures. Our proposed CRL delivers the sharpest and finest disparity images. Also note the regions bounded by the red boxes in the different disparity images.}
\label{fig:res_arch}
\end{figure*}
\subsection{Comparisons with Other Methods}\label{ssec:res_oth}
In this experiment, we compare the proposed CRL to several state-of-the-art stereo matching algorithms.
For a fair comparison, the Middlebury dataset is not adopted in this experiment as its amount of data is insufficient for finetuning our end-to-end network.
{\bf FlyingThings3D:} Since our method only takes 0.47 seconds to process a stereo pair in the KITTI 2015 dataset, for a fair comparison, we hereby consider three efficient yet effective methods (with code publicly available), including SPS-St\,\cite{yamaguchi14}, MC-CNN-fst\,\cite{zbontar16}, and DispNetC\,\cite{mayer16}.
We also employ the classic semi-global matching (SGM) algorithm \cite{hirschmuller08} as the baseline.
Note that to compare with MC-CNN-fst, we train its network for 14 epochs, with a dataset containing 17 million samples extracted from the FlyingThings3D dataset.
The performance of the proposed CRL, along with those of the competing methods, is presented in Table\,\ref{tab:res_fly}.
Again, we see that our approach provides the best performance in terms of both evaluation metrics.
In Fig.\,\ref{fig:res_mth}, we show some visual results of different approaches on the FlyingThings3D dataset; note that our CRL provides very sharp disparity estimates. Moreover, our method is the only one that can recover the fine details within the red boxes.
\begin{table}[t!]
\centering\small
\begin{tabular}{m{22pt}<{\centering}||m{22pt}<{\centering}|m{23pt}<{\centering}|m{32pt}<{\centering}|m{35pt}<{\centering}|m{22pt}<{\centering}}
\hline
Metric & SGM & SPS-St & MC-CNN-fst & \hspace{-2pt}DispNetC & CRL\\
\hline
EPE & 4.50 & 3.98 & 3.79 & 1.84 & 1.32\\
3PE & 12.54 & 12.84 & 13.70 & 9.67 & 6.20\\
\hline
\end{tabular}%
\vspace{10pt}
\caption{Objective performance of our work (CRL), along with those of the competing methods on the FlyingThings3D dataset.}
\label{tab:res_fly}%
\end{table}%
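The two evaluation metrics used here can be made concrete with a short sketch. The benchmark definitions differ slightly between datasets (KITTI's D1 score additionally requires the error to exceed 5\% of the true disparity); the version below implements EPE and the plain $>3$-pixel criterion and is only illustrative:

```python
import numpy as np

def epe(pred, gt):
    """End-point error: mean absolute disparity difference in pixels."""
    return float(np.mean(np.abs(pred - gt)))

def three_pixel_error(pred, gt, thresh=3.0):
    """Percentage of pixels whose disparity error exceeds `thresh` pixels."""
    return float(100.0 * np.mean(np.abs(pred - gt) > thresh))

# Toy example: one 1x4 disparity map with a single gross outlier.
gt = np.array([[10.0, 20.0, 30.0, 40.0]])
pred = np.array([[10.5, 24.0, 30.0, 40.0]])
print(epe(pred, gt))                # 1.125
print(three_pixel_error(pred, gt))  # 25.0
```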
\deflen{figresmth}{93pt}
\begin{figure*}[!t]
\centering
\subfloat{\includegraphics[width=\figresmth]{fig_othm_lft_a.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_gth_a_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_mcn_a.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_dpn_a.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_crl_a.png}}\\ \vspace{-5pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_lft_b.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_gth_b_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_mcn_b.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_dpn_b.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_crl_b.png}}\\ \vspace{-5pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_lft_c.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_gth_c_reg.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_mcn_c.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_dpn_c.png}}\hspace{2pt}
\subfloat{\includegraphics[width=\figresmth]{fig_othm_crl_c.png}}\\
\addtocounter{subfigure}{-10}\vspace{-5pt}
\subfloat[Left image]{\includegraphics[width=\figresmth]{fig_othm_lft_d.png}}\hspace{2pt}
\subfloat[Ground truth disparity]{\includegraphics[width=\figresmth]{fig_othm_gth_d_reg.png}}\hspace{2pt}
\subfloat[MC-CNN-fst]{\includegraphics[width=\figresmth]{fig_othm_mcn_d.png}}\hspace{2pt}
\subfloat[DispNetC]{\includegraphics[width=\figresmth]{fig_othm_dpn_d.png}}\hspace{2pt}
\subfloat[CRL]{\includegraphics[width=\figresmth]{fig_othm_crl_d.png}}\\
\caption{Visual results of the proposed CRL, accompanied with those of the competing methods, on the FlyingThings3D dataset. Our method is the only one that successfully estimates the details within the red boxes.}
\label{fig:res_mth}
\end{figure*}
{\bf KITTI 2015 dataset:} Instead of using the training split mentioned in Section\,\ref{ssec:res_setup}, we have also trained our network on all available training data of KITTI 2015 and submitted our results to its online leaderboard.
Table\,\ref{tab:res_kit} shows the leading submission results reported by the KITTI website, where only the three-pixel-error (3PE) values are available.
In the table, ``All'' means all pixels are taken into account when computing 3PE, while ``Noc'' means only the non-occluded pixels are taken into account.
The three columns ``D1-bg,'' ``D1-fg'' and ``D1-all'' denote the 3PE of the background, the foreground, and all the estimates, respectively. As can be seen, our method \emph{ranks first} in the online leaderboard.
Particularly, our overall 3PE is 2.67\%, while the second method, GC-NET\,\cite{kendall17}, has a 3PE of 2.87\%; however, our runtime is only about half of that of GC-NET.
Visual results are not included here for conciseness; we refer the readers to the KITTI website\,\cite{menze15} for more details.
\begin{table*}[htbp]
\vspace{-4pt}
\centering\small
\begin{tabular}{rrrrrrrrr}
& & & & & & & & \bigstrut[b]\\
\cline{2-9} & \multicolumn{1}{c||}{\multirow{2}[4]{*}{Methods}} & \multicolumn{3}{c||}{All} & \multicolumn{3}{c||}{Noc} & \multicolumn{1}{c}{\multirow{2}[4]{*}{Runtime (sec)}} \\
\cline{3-8} & \multicolumn{1}{c||}{} & \multicolumn{1}{c|}{\phantom{bl}D1-bg\phantom{bl}} & \multicolumn{1}{c|}{\phantom{bl}D1-fg\phantom{bl}} & \multicolumn{1}{c||}{\phantom{bl}D1-all\phantom{bl}} & \multicolumn{1}{c|}{\phantom{bl}D1-bg\phantom{bl}} & \multicolumn{1}{c|}{\phantom{bl}D1-fg\phantom{bl}} & \multicolumn{1}{c||}{\phantom{bl}D1-all\phantom{bl}} & \\
\cline{2-9} & \multicolumn{1}{c||}{CRL (Ours)} & \multicolumn{1}{c|}{2.48} & \multicolumn{1}{c|}{\textbf{3.59}} & \multicolumn{1}{c||}{\textbf{2.67}} & \multicolumn{1}{c|}{2.32} & \multicolumn{1}{c|}{\textbf{3.12}} & \multicolumn{1}{c||}{\textbf{2.45}} & \multicolumn{1}{c}{0.47} \\
\cline{2-9} & \multicolumn{1}{c||}{GC-NET\,\cite{kendall17}} & \multicolumn{1}{c|}{\textbf{2.21}} & \multicolumn{1}{c|}{6.16} & \multicolumn{1}{c||}{2.87} & \multicolumn{1}{c|}{\textbf{2.02}} & \multicolumn{1}{c|}{5.58} & \multicolumn{1}{c||}{2.61} & \multicolumn{1}{c}{0.9} \\
\cline{2-9} & \multicolumn{1}{c||}{DRR\,\cite{gidaris17}} & \multicolumn{1}{c|}{2.58} & \multicolumn{1}{c|}{6.04} & \multicolumn{1}{c||}{3.16} & \multicolumn{1}{c|}{2.34} & \multicolumn{1}{c|}{4.87} & \multicolumn{1}{c||}{2.76} & \multicolumn{1}{c}{0.4} \\
\cline{2-9} & \multicolumn{1}{c||}{L-ResMatch\,\cite{shaked17}} & \multicolumn{1}{c|}{2.72} & \multicolumn{1}{c|}{6.95} & \multicolumn{1}{c||}{3.42} & \multicolumn{1}{c|}{2.35} & \multicolumn{1}{c|}{5.74} & \multicolumn{1}{c||}{2.91} & \multicolumn{1}{c}{48*} \\
\cline{2-9} & \multicolumn{1}{c||}{Displets\,v2\,\cite{guney15}} & \multicolumn{1}{c|}{3.00} & \multicolumn{1}{c|}{5.56} & \multicolumn{1}{c||}{3.43} & \multicolumn{1}{c|}{2.73} & \multicolumn{1}{c|}{4.95} & \multicolumn{1}{c||}{3.09} & \multicolumn{1}{c}{265*} \\
\cline{2-9} & \multicolumn{1}{c||}{D3DNet} & \multicolumn{1}{c|}{2.88} & \multicolumn{1}{c|}{6.60} & \multicolumn{1}{c||}{3.50} & \multicolumn{1}{c|}{2.71} & \multicolumn{1}{c|}{6.08} & \multicolumn{1}{c||}{3.26} & \multicolumn{1}{c}{\textbf{0.35}} \\
\cline{2-9} & \multicolumn{1}{c||}{SsSMNet} & \multicolumn{1}{c|}{2.86} & \multicolumn{1}{c|}{7.12} & \multicolumn{1}{c||}{3.57} & \multicolumn{1}{c|}{2.63} & \multicolumn{1}{c|}{6.26} & \multicolumn{1}{c||}{3.23} & \multicolumn{1}{c}{0.8} \\
\cline{2-9} & & & & & & & & \\
\end{tabular}%
\caption{Leading submissions of the KITTI 2015 stereo online leaderboard (as of August 2017). The three-pixel-errors (3PE) of our approach and the other state-of-the-art methods are tabulated; our approach ranks first. The symbol ``*'' denotes runtime on CPU.}
\label{tab:res_kit}%
\end{table*}%
\subsection{Discussions}
Existing end-to-end CNNs for stereo matching, {\it e.g.}, \cite{kendall17,mayer16} and this work, all rely on a vast amount of training data with ground-truth.
However, it is costly to collect depth data in the real physical world, while synthetic data, {\it e.g.}, the FlyingThings3D dataset, cannot fully reflect the properties of the real environment.
A potential solution to the above dilemma is to borrow the wisdom from traditional approaches and embed the left-right consistency check module into the CNNs.
As mentioned in Section\,\ref{sec:related}, this is explored by \cite{godard17,kuznietsov17} for monocular depth estimation, leading to unsupervised (or semi-supervised) methods requiring (very) little data with ground-truth.
However, recent end-to-end CNN-based approaches already produce very accurate disparity estimates, in contrast to the case of monocular depth estimation.
As a result, any new mechanism ({\it e.g.}, the left-right consistency check in this case) introduced to the networks needs to be very reliable/robust; otherwise further improvements cannot be achieved.
We leave the problem of designing a robust left-right consistency check module for future investigation.
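As an illustration of the left-right consistency check discussed above, the following is a minimal sketch: a left-view pixel is kept only if the right-view disparity at its matched location agrees within a threshold. The function name, the rounding to integer columns, and the threshold value are our own simplifications; the schemes in \cite{godard17,kuznietsov17} differ in detail:

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tau=1.0):
    """Mark a left-view pixel consistent if the right-view disparity at the
    matched location x - d_L(x, y) agrees within `tau` pixels."""
    h, w = disp_left.shape
    consistent = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disp_left[y, x]))  # matched column in the right view
            if 0 <= xr < w:
                consistent[y, x] = abs(disp_left[y, x] - disp_right[y, xr]) <= tau
    return consistent

# Toy 1x6 example with constant true disparity 2:
dl = np.full((1, 6), 2.0)
dr = np.full((1, 6), 2.0)
dl[0, 5] = 4.0  # corrupt one left-view estimate (e.g. an occlusion artifact)
mask = lr_consistency_mask(dl, dr)
print(mask)
```

Pixels whose match falls outside the image and the corrupted pixel are flagged as inconsistent; such masks are typically used to discard unreliable estimates before any loss or post-processing.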
\section{Introduction}
The M5 brane conformal anomaly was computed on the gravity side in \cite{Henningson:1998gx} and for the abelian M5 brane in \cite{Bastianelli:2000hi} by extracting it from the Hadamard-Minakshisundaram-DeWitt-Seeley (HMDS) coefficient $a_6$ in the heat kernel expansion. The coefficient $a_6$ was expressed in terms of $46$ local invariants by Gilkey \cite{Gilkey:1975iq} for a smooth compact Riemannian six-manifold $M$ with metric $g_{\mu\nu}$ and for a second order, elliptic, positive definite differential operator of the form
\begin{eqnarray*}
D &=& - g^{\mu\nu} D_{\mu} D_{\nu} - E
\end{eqnarray*}
If there is a gauge bundle over $M$, then $E$ will be matrix valued in that gauge bundle and $D_{\mu}$ will involve both the Christoffel symbol as well as the gauge bundle connection. We follow the notation of the review paper \cite{Vassilevich:2003xt}. The M5 brane conformal anomaly has the general form
\begin{eqnarray}
{\cal{A}} &=& a E_6 + c_1 I_1 + c_2 I_2 + c_3 I_3 + D_i J^i\label{overall}
\end{eqnarray}
where $E_6$ is proportional to the Euler density, $I_i$ are conformal invariants constructed out of the Weyl tensor, and $D_i J^i$ is a scheme-dependent total derivative. On the supergravity side the result is \cite{Henningson:1998gx}
\begin{eqnarray}
{\cal{A}} &=& \frac{4N^3}{(4\pi)^3 7!} \(-\frac{35}{2} E_6 - 1680 I_1 - 420 I_2 + 140 I_3 + D_i J^i\)\label{N}
\end{eqnarray}
For the abelian M5 brane, the result that one gets by applying Gilkey's formula for $a_6$ is \cite{Bastianelli:2000hi}
\begin{eqnarray*}
{\cal{A}} &=& \frac{1}{(4\pi)^3 7!} \(-\frac{245}{8} E_6 - 1680 I_1 - 420 I_2 + 140 I_3 + D_i J^i\)
\end{eqnarray*}
We notice that the $c_i$-coefficients agree up to an overall factor of $4N^3$, while for the coefficient $a$ we would need to add $105/8$ in order to get the same sort of agreement,
\begin{eqnarray}
-\frac{245}{8} + \frac{105}{8} &=& -\frac{35}{2}\label{diff}
\end{eqnarray}
There is however no reason to expect such an agreement for the $a$-coefficient, as was explained in \cite{Bastianelli:2000hi}. Given the match of the $c_i$-coefficients, together with the motivation in \cite{Bastianelli:2000hi} for why such a match should be anticipated, there seems to be little doubt about the correctness of the result for the $c_i$-coefficients for the abelian theory. But there is no such corresponding match for the $a$-coefficient, nor has there been any independent computation of the $a$-coefficient in the literature. We therefore think an independent computation of the $a$-coefficient is well motivated. Only the combination $a E_6$ has an invariant significance, but not $a$ in isolation, since we can always rescale $E_6$ such that $a=1$. The result we get for the integrated anomaly on $S^6$ in a first computation is
\begin{eqnarray*}
\int_{S^6} {\cal{A}} = \frac{2}{105} \cdot \frac{245}{8} - 1
\end{eqnarray*}
But by a careful examination of zero modes, we trace the $-1$ to a zero mode that has been overcounted \cite{Christensen:1979iy}, \cite{Fradkin:1983mq}, \cite{Tseytlin:2013fca}, and our final result is therefore
\begin{eqnarray*}
\int_{S^6} {\cal{A}} = \frac{2}{105} \cdot \frac{245}{8}
\end{eqnarray*}
in agreement with \cite{Bastianelli:2000hi}. There is much indirect evidence suggesting that this value for $a E_6$ is the correct one \cite{M1}, \cite{M2}, \cite{M3}. Since $S^6$ is conformally flat, the Weyl tensor vanishes and so $I_i=0$. The only term that survives in the integrated conformal anomaly on $S^6$ is the term proportional to the Euler characteristic $E_6$.
By taking into account the normalization for the abelian gauge group above, we may then infer from the results in \cite{Maxfield:2012aw}, \cite{Cordova:2015vwa} that for $SU(N)$ gauge group, for any finite $N$, on $S^6$ we have the conformal anomaly
\begin{eqnarray*}
\int_{S^6} {\cal{A}} = \frac{2}{105} \cdot \(4N^3 - \frac{9}{4}N - \frac{7}{4}\) \frac{35}{8}
\end{eqnarray*}
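Both of the numerical statements above are easy to verify symbolically. In particular, the $SU(N)$ polynomial vanishes at $N=1$, as it must for the trivial group $SU(1)$; the explicit factorization shown below is our own observation:

```python
from sympy import symbols, Rational, expand, factor

N = symbols('N')
poly = 4*N**3 - Rational(9, 4)*N - Rational(7, 4)

# The coefficient difference quoted in eq. (diff):
assert Rational(-245, 8) + Rational(105, 8) == Rational(-35, 2)

# The SU(N) polynomial vanishes at N = 1 (trivial group SU(1))
# and factors as (N - 1)(16 N^2 + 16 N + 7)/4:
assert poly.subs(N, 1) == 0
assert expand(poly - (N - 1)*(16*N**2 + 16*N + 7)/4) == 0
print(factor(poly))
```

The leading $4N^3$ term of the polynomial reproduces the overall factor appearing in the supergravity result (\ref{N}).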
Let us now consider dimensions of curvature invariants. If we assign the metric tensor the length dimension $[g_{\mu\nu}] = 2$, then we get
\begin{eqnarray*}
[R^{\lambda}{}_{\mu\nu\rho}] &=& 0\cr
[R_{\mu\nu}] &=& 0\cr
[R] &=& - 2
\end{eqnarray*}
Any product of these quantities or covariant derivatives thereof, such that all indices are contracted in the end, is called a local curvature invariant. We see that any local curvature invariant $K$ has an even dimension. If we integrate such a local curvature invariant over an $n$-dimensional manifold as $\int d^n x \sqrt{g} K$, then this integrated curvature invariant will have a dimension that is even if $n$ is even, and odd if $n$ is odd.
Let us now assume that $[D] = -2$ and write $D = r^{-2} \widehat D$ where $[\widehat D] = 0$. Then we have the expansion
\begin{eqnarray*}
{\mbox{tr}}(e^{-\frac{t}{r^2} \widehat D}) &=& \frac{1}{(4\pi)^3} \(\frac{a_0 r^6}{t^3} + \frac{a_2 r^4}{t^2} + \frac{a_4 r^2}{t} + a_6 + {\cal{O}}(t)\)
\end{eqnarray*}
and we see that the HMDS coefficients have the following dimensions, $[a_0]=-6$, $[a_2]=-4$, $[a_4]=-2$ and $[a_6]=0$. If the HMDS coefficients are given by integrated local curvature invariants on a smooth manifold without boundary, then they must all have even dimensions.
If we perform dimensional reduction down to five dimensions, the heat kernel expansion will acquire the following structure,
\begin{eqnarray*}
K(t) &=& \frac{1}{(4\pi)^{5/2}} \(\frac{a_0 r^5}{t^{5/2}} + \frac{a_2 r^3}{t^{3/2}} + \frac{a_4 r}{t^{1/2}} + a_5 + {\cal{O}}(t^{1/2})\)
\end{eqnarray*}
If again we run the same sort of argument as above, we see that the HMDS coefficients have the dimensions $[a_0]=-5$, $[a_2]=-3$, $[a_4]=-1$ and $[a_5]=0$. If the HMDS coefficients are given by integrated local curvature invariants on a smooth five-manifold without boundary, then they must all have odd dimensions. So as $a_5$ has dimension $0$, which is even, we should find $a_5 = 0$.
It is known how the HMDS coefficients $a_0$, $a_2$, $a_4$ and $a_6$ can be computed from curvature invariants \cite{Seeley:1967ea}, \cite{Gilkey:1975iq}. We can also compute these coefficients directly if we know the spectrum. Our method is based on the Euler-Maclaurin integral formula. This formula is normally used as an approximation method where a discrete sum is approximated by an integral. Our key observation is that this approximation formula gives exact results for these first few heat kernel coefficients. This is a very general result. We then apply this method to the abelian M5 brane on $S^6$, where we can work out both the spectrum and the curvature invariants explicitly. Our first result is as follows. On $S^6$ we find agreement for the heat kernel coefficient $a_6$, for all fields and ghost fields that appear in the quantized $(2,0)$ tensor multiplet, when computed both ways, except for the ghost vector field, where we need to add $1$ to the HMDS coefficient in order to match the result that we get from the spectrum. This $1$ is later traced to an overcounted zero mode and is removed by hand. When we reduce along a singular fiber of $S^6$ down to 5d, we find that $a_5 = 0$ for all fields, except for the ghost vector field where we get $a_5 = 1$, which we again trace back to an overcounted zero mode that we remove by hand, so that $a_5 = 0$ for all fields including the vector ghost. Not only $a_6$ in 6d but also the other heat kernel coefficients have a physical interpretation \cite{Vassilevich:2003xt}. For instance, they contain information about the short-distance behavior of the propagators. Our method gives exact results not only for $a_6$ but also for $a_0,...,a_5$. In section \ref{EML} we describe how we use the Euler-Maclaurin formula to compute $a_0,...,a_6$ exactly if we know the spectrum. In section \ref{6d} we apply this method to compute the heat kernel for the 6d tensor multiplet.
In section \ref{5d} we perform dimensional reduction to 5d along a circle fiber that becomes singular at the north and south poles of $S^6$. In section \ref{discuss} we resolve the mismatch by removing any overcounted zero modes. There are three appendices. In appendix \ref{SO} we obtain the representations of $SO(7)$, Casimir invariants and dimensions corresponding to the various spherical harmonics on $S^6$, along with branching rules as we reduce along the singular fiber down to 5d. In appendix \ref{Tseytlin} we reproduce results in \cite{Bastianelli:2000hi} by applying the general formula in \cite{Gilkey:1975iq} to $S^6$. In appendix \ref{SUSY} we briefly discuss the partition function. We show that there is a huge supersymmetric cancellation of the modes.
This is a revised version in which the mismatch that appeared in the first version has been resolved. I thank Arkady Tseytlin for pointing out relevant references where it was shown that this mismatch was due to an overcounting of zero modes in the heat kernel as we change variables.
\section{The heat kernel expansion}\label{EML}
Let us assume that $D$ is a differential operator on a six-manifold with eigenvalues $\lambda_n$ and degeneracies $d_n$. Let us further assume that we want to regularize the following, possibly divergent, partition function
\begin{eqnarray}
Z_D = \frac{1}{\(\det D\)^{1/2}} = \prod_{n=0}^{\infty} \lambda_n^{-d_n/2}\label{Z}
\end{eqnarray}
There are two different ways we may regularize. One way is to introduce the Minakshisundaram-Pleijel (MP) zeta function
\begin{eqnarray}
\zeta_D(s) &=& \sum_{n=0}^{\infty} d_n \lambda_n^{-s}\label{MPzeta}
\end{eqnarray}
The other way is to introduce the heat kernel
\begin{eqnarray}
K_D(t) &=& \sum_{n=0}^{\infty} d_n e^{-t \lambda_n}\label{Heat}
\end{eqnarray}
We may define the partition function as
\begin{eqnarray*}
Z_D &=& e^{\frac{1}{2} \zeta'(0)}
\end{eqnarray*}
Let us now assume that the eigenvalues take the form
\begin{eqnarray*}
\lambda_n &=& \frac{\widetilde{\lambda}_n}{r^2}
\end{eqnarray*}
where $\widetilde{\lambda}_n$ are dimensionless and $r$ is a length scale characterizing the six-manifold. The zeta function is
\begin{eqnarray*}
\zeta(s) &=& r^{2s} \widetilde{\zeta}(s)\cr
\widetilde{\zeta}(s) &=& \sum_{n=1}^{\infty} d_n \widetilde{\lambda}_n^{-s}
\end{eqnarray*}
and then
\begin{eqnarray*}
\zeta(0) &=& \widetilde{\zeta}(0)\cr
\zeta'(0) &=& \widetilde{\zeta}(0) \ln(r) + \widetilde{\zeta}'(0)
\end{eqnarray*}
and the partition function becomes
\begin{eqnarray*}
Z_D &=& e^{\frac{1}{2} \widetilde{\zeta}'(0)} r^{\frac{1}{2} \widetilde{\zeta}(0)}
\end{eqnarray*}
In order to see how the partition function scales with $r$, that is, the conformal anomaly, we only need to compute $\widetilde{\zeta}(0)$, and not its derivative, which would be a slightly more complicated computation.
The two quantities are related by a Mellin transform as
\begin{eqnarray*}
\zeta_D(s) &=& \frac{1}{\Gamma(s)} \int_0^{\infty} dt t^{s-1} K_D(t)\cr
K_D(t) &=& \frac{1}{2\pi i} \oint_C ds t^{-s} \zeta_D(s) \Gamma(s)
\end{eqnarray*}
where the contour encircles all the poles. If we only know the value of $\zeta_D(s)$ at $s = 0$, then we can only evaluate the inverse Mellin transform at the pole of $\Gamma(s)$ at $s=0$. Thus we can extract the following term in the heat kernel,
\begin{eqnarray*}
K_D(t) &=& \frac{1}{2\pi i} \oint_{s=0} \frac{ds}{s} \zeta(s) t^{-s} + ... = \zeta_D(0) + ...
\end{eqnarray*}
What the above computation shows is that
\begin{eqnarray*}
a_6 &=& (4\pi)^3 \zeta_D(0)
\end{eqnarray*}
The Mellin transform is defined as
\begin{eqnarray*}
{\cal{M}} K_D(s) &=& \int_0^{\infty} dt t^{s-1} K_D(t)
\end{eqnarray*}
We have the relation
\begin{eqnarray*}
{\cal{M}} K_D(s) &=& \zeta_D(s) \Gamma(s)
\end{eqnarray*}
where
\begin{eqnarray*}
\Gamma(s) &=& \int_0^{\infty} dx x^{s-1} e^{-x}
\end{eqnarray*}
is the gamma function. The gamma function has simple poles at $s = - n$ for $n = 0,1,2,...$ with residues
\begin{eqnarray*}
\frac{1}{2\pi i} \oint_{s=- n} ds \Gamma(s) &=& \frac{(-1)^n}{n!}
\end{eqnarray*}
The inverse Mellin transform is
\begin{eqnarray*}
K_D(t) &=& \frac{1}{2\pi i} \oint_C ds t^{-s} {\cal{M}} K_D(s)
\end{eqnarray*}
where the contour encircles all the poles of ${\cal{M}} K_D(s)$. Let us check this formula by an example. Let us take $K_D(t) = e^{-t}$. The Mellin transform is ${\cal{M}} K_D(s) = \Gamma(s)$ and we get
\begin{eqnarray*}
K_D(t) = \frac{1}{2\pi i} \oint ds t^{-s} \Gamma(s) = \sum_{n=0}^{\infty} t^{n} \frac{(-1)^n}{n!} = e^{-t}
\end{eqnarray*}
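This consistency check can also be run through a computer algebra system, which evaluates both the forward and the inverse Mellin transform of this pair in closed form:

```python
from sympy import symbols, exp, gamma, oo
from sympy import mellin_transform, inverse_mellin_transform

t, s = symbols('t s')

# Forward transform of exp(-t) gives the Gamma function...
result, strip, cond = mellin_transform(exp(-t), t, s)
assert result == gamma(s)

# ...and the inverse transform over the fundamental strip recovers exp(-t):
recovered = inverse_mellin_transform(gamma(s), s, t, (0, oo))
assert recovered == exp(-t)
print(result, recovered)
```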
We have
\begin{eqnarray*}
K_D(t) &=& \frac{1}{2\pi i} \oint_C ds t^{-s} \zeta(s) \Gamma(s)\cr
&=& \sum_{n=1}^{\infty} d_n \frac{1}{2\pi i} \oint ds t^{-s} \frac{1}{\lambda_n^s} \Gamma(s)
\end{eqnarray*}
Once the sum has been taken out of the integral, the only poles are those of the gamma function. The integrals can be computed and we get the result
\begin{eqnarray*}
K_D(t) &=& \sum_{n=1}^{\infty} d_n \sum_{k=0}^{\infty} \(t \lambda_n\)^k \frac{(-1)^k}{k!}\cr
&=& \sum_{n=1}^{\infty} d_n e^{-t \lambda_n}
\end{eqnarray*}
This is a nice consistency check.
Let us now return to our general expression for the heat kernel, but instead of using the poles of the gamma function, let us this time use the poles from the MP-zeta function. Let us now suppose that we have the following pole structure for the MP-zeta function,
\begin{eqnarray*}
\zeta_D(s) &=& \frac{a_0}{s-3} + \frac{a_1}{s-2} + \frac{a_2}{s-1} + \zeta_{reg}(s)
\end{eqnarray*}
In particular, there is no pole at $s=0$, and near $s=0$ we have the expansion
\begin{eqnarray*}
\zeta_D(s) &=& \zeta_D(0) + {\cal{O}}(s)
\end{eqnarray*}
Then the heat kernel becomes
\begin{eqnarray*}
K_D(t) &=& a_0 \frac{1}{t^3} \Gamma(3) + a_1 \frac{1}{t^2} \Gamma(2) + a_2 \frac{1}{t} \Gamma(1) + \zeta_D(0) + {\cal{O}}(t)
\end{eqnarray*}
Let us be more specific and let us assume that the eigenvalues and the degeneracies take the following form
\begin{eqnarray*}
\lambda_n &=& n^2 + 2 a n + b\cr
d_n &=& d_p n^p + d_{p-1} n^{p-1} + ...+ d_0
\end{eqnarray*}
Then, by following closely the approach in \cite{Nash:1992sf}\footnote{With some more effort, this approach can also be used to compute the derivative $\zeta'(0)$.}, we may expand
\begin{eqnarray*}
\frac{1}{\lambda_n^s} &=& \frac{1}{\(n^2 + 2 a n + b\)^s}\cr
&=& \frac{1}{n^{2s}} \frac{1}{\(1 + \frac{2a}{n} + \frac{b}{n^2}\)^s}\cr
&=& \frac{1}{n^{2s}} \(a_0 + \frac{a_{-1}}{n} + \frac{a_{-2}}{n^2} + \frac{a_{-3}}{n^3} + ...\)
\end{eqnarray*}
and then we get the expansion
\begin{eqnarray*}
\zeta_D(s) &=& \sum_{n=1}^{\infty} \(d_p n^p + d_{p-1} n^{p-1} + ...+ d_0\) \cr
&& \frac{1}{n^{2s}} \(a_0 + \frac{a_{-1}}{n} + \frac{a_{-2}}{n^2} + \frac{a_{-3}}{n^3} + ...\)
\end{eqnarray*}
We may write this as
\begin{eqnarray*}
\zeta_D(s) &=& \sum_{n=1}^{\infty} \(c_p n^{p-2s} + c_{p-1} n^{p-1-2s} + ... c_0 n^{-2s}+ c_{-1} n^{-1-2s} + ...\)
\end{eqnarray*}
The coefficients are
\begin{eqnarray*}
c_p &=& d_p a_0\cr
c_{p-1} &=& d_p a_{-1} + d_{p-1} a_0\cr
&...&\cr
c_0 &=& d_p a_{-p} + ... + d_0 a_0\cr
c_{-1} &=& d_p a_{-p-1} + ... + d_0 a_{-1}\cr
&...&
\end{eqnarray*}
Now we can perform the sum over $n$ that gives
\begin{eqnarray*}
\zeta_D(s) &=& c_p \zeta(2s-p) + c_{p-1} \zeta(2s-p+1) + ... + c_0 \zeta(2s) + c_{-1} \zeta(2s+1) + ...
\end{eqnarray*}
Finally we can evaluate this at $s=0$.
\begin{eqnarray*}
\zeta_D(0) &=& c_p \zeta(-p) + c_{p-1} \zeta(1-p) + ... + c_0 \zeta(0) + \lim_{s\rightarrow 0} \(\frac{c_{-1}}{2s}\)
\end{eqnarray*}
Let us now assume that $p = 5$. Then we need the following coefficients:
\begin{eqnarray*}
a_0 &=& 1\cr
a_{-1} &=& \(- 2 a\) s + {\cal{O}}\(s^2\)\cr
a_{-2} &=& \(2 a^2 - b\) s + {\cal{O}}\(s^2\)\cr
a_{-3} &=& \(- \frac{8 a^3}{3} + 2 a b\) s + {\cal{O}}\(s^2\)\cr
a_{-4} &=& \(4 a^4 - 4 a^2 b + \frac{b^2}{2}\) s + {\cal{O}}\(s^2\)\cr
a_{-5} &=& \(-\frac{32 a^5}{5} + 8 a^3 b - 2 a b^2\) s + {\cal{O}}\(s^2\)\cr
a_{-6} &=& \(\frac{32 a^6}{3} - 16 a^4 b + 6 a^2 b^2 - \frac{b^3}{3}\) s + {\cal{O}}\(s^2\)\cr
&...&
\end{eqnarray*}
and then we get
\begin{eqnarray*}
\lim_{s\rightarrow 0} \frac{c_{-1}}{s} &=& d_5 \(\frac{32 a^6}{3} - 16 a^4 b + 6 a^2 b^2 - \frac{b^3}{3}\)\cr
&& + d_4 \(-\frac{32 a^5}{5} + 8 a^3 b - 2 a b^2\)\cr
&& + d_3 \(4 a^4 - 4 a^2 b + \frac{b^2}{2}\)\cr
&& + d_2 \(- \frac{8 a^3}{3} + 2 a b\)\cr
&& + d_1 \(2 a^2 - b\)\cr
&& + d_0 \(- 2 a\)
\end{eqnarray*}
Also, since $a_{-n} = 0$ at $s=0$ for $n=1,2,...$, we see that at $s=0$ we have
\begin{eqnarray*}
c_p \zeta(-p) + c_{p-1} \zeta(1-p) + ... + c_0 \zeta(0) &=& d_p \zeta(-p) + d_{p-1} \zeta(1-p) + ... + d_0 \zeta(0)
\end{eqnarray*}
We can now present the final result in a closed formula,
\begin{eqnarray}
\zeta_{D}(0) &=& d_5 \(\zeta(-5) + \frac{16 a^6}{3} - 8 a^4 b + 3 a^2 b^2 - \frac{b^3}{6}\)\cr
&& + d_4 \(\zeta(-4) -\frac{16 a^5}{5} + 4 a^3 b - a b^2\)\cr
&& + d_3 \(\zeta(-3) + 2 a^4 - 2 a^2 b + \frac{b^2}{4}\)\cr
&& + d_2 \(\zeta(-2) - \frac{4 a^3}{3} + a b\)\cr
&& + d_1 \(\zeta(-1) + a^2 - \frac{b}{2}\)\cr
&& + d_0 \(\zeta(0) - a\)\label{zetaatnegative}
\end{eqnarray}
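The closed formula (\ref{zetaatnegative}) is straightforward to implement. As an example we insert the data of the conformally coupled scalar on $S^6$ from section \ref{6d}, for which $a = 5/2$ and $b = 6$. Note that the sum defining the zeta function above starts at $n = 1$, so the $n = 0$ mode must be added back before comparing with the heat kernel trace, which starts at $n = 0$:

```python
from sympy import Rational, zeta

def zeta_D_zero(a, b, d):
    """Eq. (zetaatnegative): zeta_D(0) for eigenvalues n^2 + 2an + b (n >= 1)
    with degeneracies d[0] + d[1]*n + ... + d[5]*n^5."""
    return (d[5]*(zeta(-5) + Rational(16, 3)*a**6 - 8*a**4*b + 3*a**2*b**2 - Rational(1, 6)*b**3)
          + d[4]*(zeta(-4) - Rational(16, 5)*a**5 + 4*a**3*b - a*b**2)
          + d[3]*(zeta(-3) + 2*a**4 - 2*a**2*b + Rational(1, 4)*b**2)
          + d[2]*(zeta(-2) - Rational(4, 3)*a**3 + a*b)
          + d[1]*(zeta(-1) + a**2 - Rational(1, 2)*b)
          + d[0]*(zeta(0) - a))

# Conformally coupled scalar on S^6: lambda_n = n^2 + 5n + 6.
a, b = Rational(5, 2), 6
d = [1, Rational(149, 60), Rational(55, 24), 1, Rational(5, 24), Rational(1, 60)]
z = zeta_D_zero(a, b, d)

# Adding back the n = 0 mode (degeneracy d[0] = 1) reproduces a_6 = 1/756:
assert z + 1 == Rational(1, 756)
print(z + 1)
```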
If the spectrum is discrete, the trace of the heat kernel is a discrete sum. It may be hard to compute the sum exactly. However, a sum can be approximated by an integral using the Euler-Maclaurin formula \cite{M}. If we have a function $f(x)$ whose value and all of whose derivatives vanish at infinity, then the Euler-Maclaurin formula reduces to
\begin{eqnarray*}
\sum_{n=0}^{\infty} f(n) &=& \int_0^{\infty} dx f(x) + \frac{1}{2} f(0) - \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} f^{(2k-1)}(0)
\end{eqnarray*}
where $B_{2k}$ are the Bernoulli numbers. It may seem that not much has been gained, as the infinite sum on the right-hand side appears even more difficult than the sum that we started with. Indeed, this is not the way that the Euler-Maclaurin formula is usually presented. Instead, the sum on the right-hand side is usually truncated at some finite value and an error term is added. Here we are interested in exact results, and therefore it seems more appropriate for us to write the infinite sum without adding an error term. Here the derivatives at zero sit in a Taylor expansion of $f(x)$ around zero as
\begin{eqnarray*}
f(x) = \sum_{k=0}^{\infty} \frac{1}{k!} f^{(k)}(0) x^k = \sum_{k=0}^{\infty} d_k x^k
\end{eqnarray*}
where we define
\begin{eqnarray*}
d_k &=& \frac{1}{k!} f^{(k)}(0)
\end{eqnarray*}
Then we get
\begin{eqnarray*}
\sum_{n=0}^{\infty} f(n) &=& \int_0^{\infty} dx f(x) + \frac{1}{2} f(0) - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k} d_{2k-1}
\end{eqnarray*}
Let us now apply this to the specific case when $D$ has eigenvalues of the form
\begin{eqnarray*}
\lambda_n &=& n^2 + 2 a n + b
\end{eqnarray*}
for some coefficients $a$ and $b$. Let us assume these eigenvalues come with the degeneracies
\begin{eqnarray*}
d_n &=& \sum_{k=0}^p d_k n^k
\end{eqnarray*}
for some polynomial of degree $p$. The trace of the heat kernel is given by the sum
\begin{eqnarray*}
K(t) &=& \sum_{n=0}^{\infty} d_n e^{-t\(n^2 + 2 a n + b\)}
\end{eqnarray*}
We then define the associated Euler-Maclaurin integral as
\begin{eqnarray*}
I(t) &=& \int_0^{\infty} dx d(x) e^{-t\(x^2 + 2 a x + b\)}
\end{eqnarray*}
where
\begin{eqnarray*}
d(x) &=& \sum_{k=0}^p d_k x^k
\end{eqnarray*}
The key point is that the full integrand when evaluated at $t=0$ reduces to $d(x)$, which is a polynomial of finite degree. It is this simple observation that explains how the Euler-Maclaurin formula can give us exact results for the first few coefficients in the small-$t$ expansion. By applying the Euler-Maclaurin formula, we find that
\begin{eqnarray*}
K(t) &=& I(t) + \frac{1}{2} d_0 - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k} d_{2k-1} + {\cal{O}}(t)
\end{eqnarray*}
For $p=5$ we get
\begin{eqnarray*}
K(t) &=& I(t) + \frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 + {\cal{O}}(t)
\end{eqnarray*}
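The claim that the Euler-Maclaurin truncation is exact up to ${\cal{O}}(t)$ can be tested numerically. We take the massless scalar ghost spectrum from section \ref{6d}, for which the Euler-Maclaurin integral is available in closed form, and check that $K(t) - I(t)$ at small $t$ approaches the Bernoulli combination:

```python
import mpmath as mp

mp.mp.dps = 30
t = mp.mpf('0.001')

# Massless scalar ghost on S^6 (section 3.2): lambda_n = n^2 + 5n, p = 5.
def d(x):
    x = mp.mpf(x)
    return x**5/60 + 5*x**4/24 + x**3 + 55*x**2/24 + 149*x/60 + 1

K = mp.fsum(d(n) * mp.exp(-t*n*(n + 5)) for n in range(4000))

# For this spectrum the Euler-Maclaurin integral is elementary (section 3.2):
# I(t) = (1/60)(1/t^3 + 5/t^2 + 12/t).
I_exact = (1/t**3 + 5/t**2 + 12/t) / 60

# K(t) - I(t) should tend to 1/2 d_0 - 1/12 d_1 + 1/120 d_3 - 1/252 d_5:
bern = mp.mpf(1139) / 3780
assert abs((K - I_exact) - bern) < mp.mpf('0.01')
print(K - I_exact)  # close to 1139/3780 = 0.3013...
```

The cutoff $n < 4000$ is far beyond the scale $n \sim t^{-1/2}$ where the terms die off, so the truncation error is negligible at this precision.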
where the integral can be computed. Its series expansion reads
\begin{eqnarray*}
I(t) &=& \sum_{k=0}^{\infty} \frac{a_k}{t^{(6-k)/2}}
\end{eqnarray*}
where
\begin{eqnarray*}
a_0 &=& d_5\cr
a_2 &=& \(6a^2 - b\) d_5 - 2 a d_4 + \frac{1}{2} d_3\cr
a_4 &=& \(8a^4-6a^2b+\frac{1}{2}b^2\)d_5 + \(2ab-4a^3\)d_4 + \(2a^2-\frac{1}{2}b\)d_3 - a d_2 + \frac{1}{2}d_1\cr
a_6 &=& \(\frac{16}{3}a^6 - 8a^4 b + 3 a^2 b^2 - \frac{1}{6} b^3\) d_5 + \(-\frac{16}{5}a^5 + 4 a^3 b - a b^2\) d_4\cr
&& + \(2 a^4 - 2 a^2 b + \frac{1}{4} b^2\) d_3 + \(-\frac{4}{3} a^3 + ab\) d_2\cr
&& + \(a^2 - \frac{1}{2} b\) d_1 - a d_0
\end{eqnarray*}
We also find that
\begin{eqnarray*}
a_1 &=& \frac{3\sqrt{\pi}}{8} \(d_4 - 5 a d_5\)\cr
a_3 &=& \frac{\sqrt{\pi}}{8} \(\(15ab-35a^3\)d_5+\(15a^2-3b\)d_4-6ad_3+2d_2\)\cr
a_5 &=& \frac{\sqrt{\pi}}{16} \Bigg(\(-63a^5+70a^3b-15ab^2\)d_5+\(35a^4-30a^2b+3b^2\)d_4\cr
&&+\(-20a^3+12ab\)d_3+\(12a^2-4b\) d_2 - 8a d_1 + 8 d_0\Bigg)
\end{eqnarray*}
The heat kernel coefficients $a_k$ are integrals of local geometric invariants. No such invariant exists for a smooth manifold without boundary in odd dimensions. Therefore we must have $a_k = 0$ for odd $k$. This puts constraints on the spectrum: the degeneracies must be correlated with the eigenvalues in such a way that $a_k = 0$ for odd $k$. For all the cases that we will encounter, we find that indeed $a_k = 0$ for odd $k$.
By identifying this result with the one that we got in (\ref{zetaatnegative}), we find the identity
\begin{eqnarray*}
\sum_{k=0}^p \zeta(-k) d_k &=& - \frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5
\end{eqnarray*}
which gives us the values of the Riemann zeta function at negative integers \cite{DG}, \cite{AGB}, \cite{KB}
\begin{eqnarray*}
\zeta(-k) &=& (-1)^k \frac{B_{k+1}}{k+1}
\end{eqnarray*}
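These values are easily confirmed with SymPy; we skip $k = 0$ in the Bernoulli comparison since the sign convention for $B_1$ differs between SymPy versions:

```python
from sympy import Rational, bernoulli, zeta

# zeta(-k) = (-1)^k B_{k+1}/(k+1); SymPy evaluates both sides exactly.
assert zeta(0) == Rational(-1, 2)
for k in range(1, 8):
    assert zeta(-k) == (-1)**k * bernoulli(k + 1) / (k + 1)

print([zeta(-k) for k in range(6)])  # [-1/2, -1/12, 0, 1/120, 0, -1/252]
```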
\section{The heat kernel for the tensor multiplet}\label{6d}
The 6d $(2,0)$ tensor multiplet consists of five scalars $\phi^A$, a two-form $B_{MN}$ with selfdual field strength $H_{MNP} = \partial_M B_{NP} + \partial_N B_{PM} + \partial_P B_{MN}$, and four $SO(5)$-Majorana-Weyl fermions $\psi$. The $SO(5)$-Majorana condition means an 11d Majorana spinor reduced to 6d, on which a 6d Weyl projection is imposed. The superconformal Lagrangian on a Lorentzian six-manifold is given by
\begin{eqnarray*}
{\cal{L}} &=& \frac{1}{2\pi} \(-\frac{1}{24} H_{MNP}^2 - \frac{1}{2} (D_M\phi^A)^2 - \frac{R}{10} (\phi^A)^2 + \frac{i}{2} \bar\Psi\Gamma^M D_M\Psi\)
\end{eqnarray*}
where $R$ is the Ricci scalar. In this Lagrangian, $H_{MNP}$ is non-selfdual. For the quantization of the two-form gauge field, we need to supplement this Lagrangian with terms coming from two anticommuting vector-ghosts and three commuting massless scalar ghosts. The partition
function for a non-selfdual two-form, including the ghost contributions, is given by
\begin{eqnarray*}
Z_B &=& \frac{\det\triangle_1}{\det^{\frac{1}{2}}\triangle_2 \det^{\frac{3}{2}}\triangle_0}
\end{eqnarray*}
By Hodge decomposition, any nonzero mode is either exact or coexact. We may then write the partition function as \cite{Bak:2016vpi}
\begin{eqnarray*}
Z_B &=& \frac{\det^{\frac{1}{2}}\triangle_1^{coex}}{\det^{\frac{1}{2}}\triangle_2^{coex} \det^{\frac{1}{2}}\triangle_0^{coex}}
\end{eqnarray*}
where the subscript $coex$ means that we restrict to the space of coexact forms. On $S^6$ it is these coexact forms that correspond to states in irreducible representations of $SO(7)$.
We will now apply the Euler-Maclaurin formula to obtain the heat kernel expansions for each field in the tensor multiplet. The sum that we compute is of the general form
\begin{eqnarray*}
K(t) &=& \sum_{n=0}^{\infty} d_n e^{-t \lambda_n}
\end{eqnarray*}
To compute the small-$t$ expansion of this sum, we use the spectrum ($\lambda_n$ and $d_n$) of the corresponding differential operator acting on that field. We work out the spectrum on $S^6$ using representation theory of its isometry group $SO(7)$ in appendix \ref{SO}.
\subsection{Conformally coupled scalar field}
The Ricci scalar is $R = 30$ on $S^6$ of unit radius. The conformal Laplacian $\triangle_{conformal} = \triangle + 6$ then has the spectrum
\begin{eqnarray*}
\lambda_n &=& n^2 + 5 n + 6\cr
d_n &=& \frac{1}{60} n^5 + \frac{5}{24} n^4 + n^3 + \frac{55}{24} n^2 + \frac{149}{60} n + 1
\end{eqnarray*}
The Euler-Maclaurin integral, where $s(x)$ denotes the degeneracy polynomial $d_n$ continued to a real argument $x$, is
\begin{eqnarray*}
I(t) &=& e^{-6t} \int_0^{\infty} dx \, s(x) \, e^{-t(x^2+5x)}
\end{eqnarray*}
It has the series expansion
\begin{eqnarray*}
I(t) &=& \frac{1}{60} \(\frac{1}{t^3} - \frac{1}{t^2} - 18 + {\cal{O}}(t)\)
\end{eqnarray*}
The Bernoulli part is
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 &=& \frac{1139}{3780}
\end{eqnarray*}
In total therefore
\begin{eqnarray*}
a_6 = \frac{1139}{3780} - \frac{18}{60} = \frac{1}{756}
\end{eqnarray*}
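The rational arithmetic above is easy to verify independently. In the following minimal Python check (our own) we read $d_k$ as the coefficient of $n^k$ in the degeneracy polynomial, which is how it enters the Bernoulli terms, and reproduce both the Bernoulli part and $a_6 = 1/756$.

```python
from fractions import Fraction as F

# coefficients d_k of the degeneracy polynomial d_n = sum_k c[k] n^k
c = [F(1), F(149, 60), F(55, 24), F(1), F(5, 24), F(1, 60)]

# the same polynomial in closed form: dim of the SO(7) irrep (n,0,0)
for n in range(10):
    assert sum(ck * n**k for k, ck in enumerate(c)) == \
        F((2*n + 5) * (n + 1) * (n + 2) * (n + 3) * (n + 4), 120)

bernoulli = F(1, 2)*c[0] - F(1, 12)*c[1] + F(1, 120)*c[3] - F(1, 252)*c[5]
a6 = bernoulli - F(18, 60)   # add the constant term -18/60 of I(t)

print(bernoulli, a6)  # 1139/3780 1/756
```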
\subsection{Massless scalar ghost}
We have
\begin{eqnarray*}
\lambda_n &=& n^2 + 5 n\cr
d_n &=& \frac{1}{60} n^5 + \frac{5}{24} n^4 + n^3 + \frac{55}{24} n^2 + \frac{149}{60} n + 1
\end{eqnarray*}
The Euler-Maclaurin integral is
\begin{eqnarray*}
I(t) &=& \frac{1}{60} \(\frac{1}{t^3} + \frac{5}{t^2} + \frac{12}{t}\)
\end{eqnarray*}
The Bernoulli contribution is
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 &=& \frac{1139}{3780}
\end{eqnarray*}
In total therefore
\begin{eqnarray*}
a_6 = \frac{1139}{3780}
\end{eqnarray*}
\subsection{Vector ghost}
We have
\begin{eqnarray*}
\lambda_n &=& n^2 + 5n + 4\cr
d_n &=& \frac{1}{12} n^5 + \frac{25}{24} n^4 + \frac{14}{3} n^3 + \frac{215}{24} n^2 + \frac{25}{4} n
\end{eqnarray*}
The Euler-Maclaurin integral is
\begin{eqnarray*}
I(t) &=& e^{-4t} \frac{1+3t}{12 t^3}
\end{eqnarray*}
Its series expansion is
\begin{eqnarray*}
I(t) &=& \frac{1}{12 t^3} - \frac{1}{12 t^2} - \frac{1}{3t} + \frac{10}{9} + {\cal{O}}(t)
\end{eqnarray*}
The Bernoulli part is
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 &=& - \frac{1823}{3780}
\end{eqnarray*}
In total therefore
\begin{eqnarray*}
a_6 = \frac{10}{9} - \frac{1823}{3780} = \frac{2377}{3780}
\end{eqnarray*}
From $a_0$ we read off that the vector field here has 5 components (since $1/12 = 5/60$). This can be understood from the fact that coexact one-forms constitute only one part of the vector harmonics; the other part consists of one derivative acting on scalar harmonics. That gives in total $5+1=6$ vector components.
\subsection{Two-form gauge field}
We have
\begin{eqnarray*}
\lambda_n &=& n^2 + 5 n + 6\cr
d_n &=& \frac{n^5}{6} + \frac{25 n^4}{12} + 9 n^3 + \frac{185 n^2}{12} + \frac{25 n}{3}
\end{eqnarray*}
The Euler-Maclaurin integral is
\begin{eqnarray*}
I(t) &=& e^{-6t} \frac{1+2t}{6t^3}
\end{eqnarray*}
whose series expansion is
\begin{eqnarray*}
I(t) &=& \frac{1}{6t^3} - \frac{2}{3t^2} + \frac{1}{t} + {\cal{O}}(t)
\end{eqnarray*}
The Bernoulli contribution is
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 &=& - \frac{586}{945}
\end{eqnarray*}
In total therefore
\begin{eqnarray*}
a_6 &=& - \frac{586}{945}
\end{eqnarray*}
From $a_0$ we read off that the two-form has $10 = 5 \cdot 4/2$ components since $1/6 = 10/60$.
\subsection{Fermion}
The fermionic harmonics correspond to the representation $(n,0,1)$ of $SO(7)$. This representation has the Casimir invariant and the dimension
\begin{eqnarray*}
\lambda^{Casimir}_n &=& (n+3)^2 - \frac{15}{4}\cr
d_n &=& \frac{n^5}{15} + n^4 + \frac{17 n^3}{3} + 15 n^2 + \frac{274 n}{15} + 8
\end{eqnarray*}
According to \cite{Trautman:1995fr}, the eigenvalues of the Dirac operator on $S^6$ are $\pm(n+3)$ where each sign comes with the degeneracy $f_n = \frac{1}{15}(n+1)(n+2)(n+3)(n+4)(n+5)$ for $n=0,1,2,...$. We see that $f_n = d_n$, while the eigenvalues are related to the Casimir invariant through the Lichnerowicz formula
\begin{eqnarray*}
\(i \Gamma^{\mu} D_{\mu}\)^2 &=& - D^{\mu} D_{\mu} - \frac{R}{8}
\end{eqnarray*}
where for $S^6$ we have $R/8 = 15/4$. The partition function is
\begin{eqnarray*}
Z_F = \prod_{n=0}^{\infty} \(n+3\)^{2 f_n} = \prod_{n=0}^{\infty} \(n^2 + 6 n + 9\)^{f_n}
\end{eqnarray*}
The associated heat kernel is
\begin{eqnarray*}
- 2 \sum_{n=0}^{\infty} f_n e^{-t \(n^2 + 6 n + 9\)}
\end{eqnarray*}
The overall factor of $-2$ comes from the way that we map (\ref{Z}) to (\ref{Heat}). We have the Euler-Maclaurin integral
\begin{eqnarray*}
I(t) &=& - e^{-9t} \frac{2+13 t+40t^2}{15 t^3}
\end{eqnarray*}
whose series expansion is
\begin{eqnarray*}
I(t) &=& - \(\frac{2}{15 t^3} - \frac{1}{3 t^2} + \frac{4}{15 t} - \frac{51}{10} + {\cal{O}}(t)\)
\end{eqnarray*}
The Bernoulli contribution is
\begin{eqnarray*}
-2\(\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5\) &=& - \frac{19087}{3780}
\end{eqnarray*}
In total therefore
\begin{eqnarray*}
a^F_6 = \frac{51}{10} - \frac{19087}{3780} = \frac{191}{3780}
\end{eqnarray*}
From $2/15 = 8/60$ we get $8$ spinor components. In the 6d theory we have 4 real Weyl spinors. This has the same number of components as 2 real 8-component Dirac spinors or one complex 8-component Dirac spinor.
\subsection{The $(2,0)$ tensor multiplet}
We summarize the above results. For the conformal scalar ($S$), massless scalar ghost ($S_0$), vector ghost ($V$), two-form ($T$) and fermion ($F$) respectively, we have
\begin{eqnarray*}
a^S_6&=& {\cal{N}} \frac{5}{72}\cr
a^{S_0}_6 &=& {\cal{N}} \frac{1139}{72}\cr
a^V_6 &=& {\cal{N}} \frac{2377}{72}\cr
a^T_6 &=& - {\cal{N}} \frac{586}{18}\cr
a^F_6 &=& {\cal{N}} \frac{191}{72}
\end{eqnarray*}
where we have taken out a common factor
\begin{eqnarray*}
{\cal{N}} &=& \frac{2}{105}
\end{eqnarray*}
The heat kernel coefficient associated to the two-form is
\begin{eqnarray*}
a^B_6 &=& a^T_6 - a^V_6 + a^{S_0}_6
\end{eqnarray*}
because spherical harmonics correspond to co-exact forms \cite{Bak:2016vpi}. We get
\begin{eqnarray*}
a^B_6 &=& {\cal{N}} \frac{221}{4} - 2
\end{eqnarray*}
If we instead count the total number of ghosts in the ghost tower, then we have the relation
\begin{eqnarray*}
a^B_6 &=& a_6^{tot,T} - 2 a_6^{tot,V} + 3 a_6^{tot,S_0}
\end{eqnarray*}
corresponding to $2$ vector ghosts and $3$ massless scalar ghosts. These coefficients are related to those above by
\begin{eqnarray*}
a^{tot,T}_6 &=& a^{T}_6 + a^{V}_6\cr
a^{tot,V}_6 &=& a^{V}_6 + a^{S_0}_6\cr
a^{tot,S_0}_6 &=& a^{S_0}_6
\end{eqnarray*}
We get
\begin{eqnarray*}
a_6^{tot,T} &=& {\cal{N}} \frac{11}{24}\cr
a_6^{tot,V} &=& {\cal{N}} \frac{293}{6}\cr
a_6^{tot,S_0} &=& {\cal{N}} \frac{1139}{72}
\end{eqnarray*}
In the tensor multiplet there are $5$ conformally coupled scalar fields, one selfdual two form and $4$ Majorana-Weyl fermions. The heat kernel coefficient for the tensor multiplet is therefore
\begin{eqnarray*}
a^{M5}_6 = 5a^S_6 + \frac{1}{2} a^B_6 + a^F_6 = - \frac{5}{12}
\end{eqnarray*}
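These rational manipulations can again be checked exactly; the short Python sketch below (our own) assembles $a^{M5}_6$ from the individual coefficients listed above.

```python
from fractions import Fraction as F

N = F(2, 105)                 # common normalization
aS  = N * F(5, 72)            # conformally coupled scalar
aS0 = N * F(1139, 72)         # massless scalar ghost
aV  = N * F(2377, 72)         # vector ghost
aT  = -N * F(586, 18)         # two-form
aF  = N * F(191, 72)          # fermion

aB  = aT - aV + aS0           # two-form including its ghost contributions
aM5 = 5*aS + F(1, 2)*aB + aF  # tensor multiplet

print(aB == N * F(221, 4) - 2, aM5)  # True -5/12
```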
Let us now compare our result with the coefficients\footnote{We ignore the overall factor $-\frac{1}{(4\pi)^3 7!}$ in their expressions.} of the Euler density that were obtained in \cite{Bastianelli:2000hi},
\begin{eqnarray*}
a^S &=& \frac{5}{72}\cr
a^B &=& \frac{221}{4}\cr
a^F &=& \frac{191}{72}
\end{eqnarray*}
For the whole tensor multiplet then
\begin{eqnarray*}
a^{M5} = 5 a^S + \frac{1}{2} a^B + a^F = - \frac{245}{8}
\end{eqnarray*}
We see that we have a perfect match\footnote{We cannot really claim that much because we could always fix say $a^S$ to whatever number we like by changing the overall coefficient. However, we think our match is much stronger than that. One reason to believe so is that $191$ is a large prime number.} for $a^S$ and $a^F$. Then we have a mismatch only for $a^B$. We would now like to track this mismatch a bit further. In \cite{Bastianelli:2000hi}, the explicit expressions for the ghosts associated to the two-form were not written down in the final form where the coefficient of $E_6$ could have been read off. But we can easily extract the value of that coefficient from the expressions presented in \cite{Bastianelli:2000hi} by evaluating the curvature invariants\footnote{The relations between curvature invariants in \cite{Bastianelli:2000hi} and our curvature invariants (\ref{curv}) are $A_{10} = -L_1, A_{11} = -L_2, A_{12} = -L_3$ and $A_{13} = -K_1,A_{14} =K_2,
A_{15} = -K_3, A_{16} = K_4, A_{17} = K_5$. The various minus signs arise from our convention that $R_{ij} = R_{kijk} = - R_{kikj}$} on $S^6$. By using the expression of ${\cal{A}}_B$ written in terms of curvature invariants\footnote{Again we ignore the prefactor $-\frac{1}{(4\pi)^3 7!}$}, we get
\begin{eqnarray*}
{\cal{A}}_B = \frac{442}{7} = \frac{8}{7} \cdot \frac{221}{4}
\end{eqnarray*}
Hence, unsurprisingly, there appears the factor $\frac{8}{7} = 60 {\cal{N}}$, which is common to all expressions in \cite{Bastianelli:2000hi} where curvature invariants appear. We further evaluate the expressions in \cite{Bastianelli:2000hi} for the individual contributions to the two-form on $S^6$, with the following results
\begin{eqnarray*}
a^{T} &=& \frac{8}{7}\cdot \frac{11}{24}\cr
a^{V} &=& \frac{8}{7}\cdot \(-\frac{11}{3}\)\cr
a^{S_0} &=& \frac{8}{7}\cdot \frac{1139}{72}
\end{eqnarray*}
Comparing these numbers with ours, we see that there is a mismatch only for $a^{tot,V}_6$. Re-instating the normalization factor ${\cal{N}}$, this mismatch becomes
\begin{eqnarray}
a^{tot,V}_6 - a^V = {\cal{N}} \cdot \(\frac{293}{6} + \frac{11}{3}\) = 1\label{6dV}
\end{eqnarray}
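The arithmetic behind this mismatch can be confirmed with exact rationals; in the sketch below (our own check) the reference value of $a^V$ is taken with the common factor $\frac{8}{7} = 60{\cal{N}}$ replaced by ${\cal{N}}$, as in the comparison above.

```python
from fractions import Fraction as F

N = F(2, 105)
a_tot_V = N * F(293, 6)   # our coefficient for the vector-ghost tower
a_V_ref = -N * F(11, 3)   # reference value, normalized by N instead of 8/7

print(a_tot_V - a_V_ref)  # 1
```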
\section{Dimensional reduction to five dimensions}\label{5d}
We view $S^6$ as an $S^5$ fibered over an interval such that $S^5$ shrinks to zero size at the end-points, and perform dimensional reduction along the Hopf fiber of $S^5$. Group theoretically this amounts to first re-arranging the $SO(7)$ harmonics in terms of $SO(6)$ harmonics of $S^5$, and subsequently reducing $SO(6) = SU(4) \rightarrow SU(3) \times U(1)_H$ where $U(1)_H$ is the isometry group of the Hopf circle. Dimensional reduction amounts to keeping only the modes that are neutral under $U(1)_H$.
\subsection{Massless scalar ghost}
We will now pick the states with zero $U(1)_H$ charge from the heat kernel of scalar harmonics $(n,0,0)$. Using the branching rules in the appendix \ref{SO}, we see that we shall keep the following representations of $SU(3)$,
\begin{eqnarray*}
R_{2m} &=& \bigoplus_{{{\ell}}=0}^m ({{\ell}},{{\ell}})\cr
R_{2m+1} &=& \bigoplus_{{{\ell}}=0}^m ({{\ell}},{{\ell}})
\end{eqnarray*}
We have the dimension $\dim({{\ell}},{{\ell}}) = ({{\ell}}+1)^3$. Then the heat kernel becomes
\begin{eqnarray*}
K_{S_0}(t) &=& \sum_{m=0}^{\infty} \(e^{- t(4m^2+10m)}+e^{- t(4m^2+14m+6)}\) \sum_{{{\ell}}=0}^m ({{\ell}}+1)^3
\end{eqnarray*}
The sum is
\begin{eqnarray*}
\sum_{{{\ell}}=0}^m ({{\ell}}+1)^3 &=& \frac{1}{4} (m+1)^2 (m+2)^2
\end{eqnarray*}
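This closed form for the cubic sum is elementary; a minimal Python check (our own) confirms it for the first few values of $m$.

```python
def lhs(m):
    # sum of (l+1)^3 for l = 0, ..., m
    return sum((l + 1)**3 for l in range(m + 1))

def rhs(m):
    # claimed closed form; (m+1)(m+2) is even, so the division is exact
    return (m + 1)**2 * (m + 2)**2 // 4

print(all(lhs(m) == rhs(m) for m in range(50)))  # True
```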
The corresponding Euler-Maclaurin integral has the series expansion
\begin{eqnarray*}
&&\int_0^{\infty} dx \frac{1}{4}(x+1)^2(x+2)^2 \(e^{- t(4x^2+10x)} + e^{-t(4x^2+14 x+6)}\) \cr
&=& \frac{3\sqrt{\pi}}{512}\(\frac{1}{t^{5/2}} + \frac{71}{12} \frac{1}{t^{3/2}} + \frac{1747}{96}\frac{1}{t^{1/2}}\) - \frac{21}{40} + {\cal{O}}(t^{1/2})
\end{eqnarray*}
The Bernoulli contribution is obtained from expanding out the summand at $t=0$
\begin{eqnarray*}
\frac{1}{2} (m+1)^2 (m+2)^2 &=& \frac{m^4}{2} + 3m^3 + \frac{13 m^2}{2} +6 m + 2
\end{eqnarray*}
Then
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 - \frac{1}{252} d_5 = \frac{1}{2}\cdot 2 - \frac{1}{12} \cdot 6 + \frac{1}{120} \cdot 3 = \frac{21}{40}
\end{eqnarray*}
Thus we get
\begin{eqnarray*}
a_5^{S_0} = -\frac{21}{40} + \frac{21}{40} = 0
\end{eqnarray*}
\subsection{Conformally coupled scalar}
Let us next consider the conformally coupled scalar. This gives the Euler-Maclaurin integral
\begin{eqnarray*}
I_S(t) &=& \frac{3\sqrt{\pi}}{512} \(\frac{1}{t^{5/2}} - \frac{1}{12t^{3/2}} + \frac{67}{96 t^{1/2}}\) - \frac{21}{40}\cr
&=& \frac{3\sqrt{\pi}}{512} \(\frac{1}{t^{5/2}} + \(\frac{71}{12} - 6\) \frac{1}{t^{3/2}} + \frac{67}{96 t^{1/2}}\) - \frac{21}{40}
\end{eqnarray*}
while the Bernoulli part is the same as for the massless scalar, resulting in
\begin{eqnarray*}
a_5^{S} &=& 0
\end{eqnarray*}
The other heat kernel coefficients are of course interesting to study as well. From $a_0$ and $a_2$ we may deduce that
\begin{eqnarray*}
{\mbox{Vol}} &=& \frac{3\sqrt{\pi}}{512}\cr
R &=& \frac{71}{2}
\end{eqnarray*}
For $a_4$, we may understand the difference by applying a formula for $a_4$ where for a conformal scalar $E = - 6$,
\begin{eqnarray*}
\frac{1}{360} \(60 RE + 180 E^2\) &=& - \frac{35}{2}
\end{eqnarray*}
We next notice that
\begin{eqnarray*}
\frac{67}{96} - \frac{1747}{96} &=& - \frac{35}{2}
\end{eqnarray*}
This explains the difference between the $a_4$ coefficients of the two heat kernels above.
\subsection{Vector ghost}
The representations with zero $U(1)_H$ charge can be extracted from the branching rules in the appendix \ref{SO}. They are
\begin{eqnarray*}
R_{n=2m} &=& \bigoplus_{{{\ell}}=1}^m ({{\ell}},{{\ell}}) \oplus \bigoplus_{{{\ell}}=0}^{m-1} \(({{\ell}},{{\ell}})\oplus ({{\ell}}+2,{{\ell}}-1)\oplus ({{\ell}}-1,{{\ell}}+2)\oplus ({{\ell}}+1,{{\ell}}+1)\)\cr
R_{n=2m+1} &=& \bigoplus_{{{\ell}}=1}^m ({{\ell}},{{\ell}}) \oplus \bigoplus_{{{\ell}}=0}^{m} \(({{\ell}},{{\ell}})\oplus ({{\ell}}+2,{{\ell}}-1)\oplus ({{\ell}}-1,{{\ell}}+2)\oplus ({{\ell}}+1,{{\ell}}+1)\)
\end{eqnarray*}
with the corresponding dimensions
\begin{eqnarray*}
\dim R_{2m} &=& \frac{5}{4}m^4+\frac{11}{2}m^3+\frac{29}{4}m^2+3m\cr
\dim R_{2m+1} &=& \frac{5}{4}m^4+\frac{19}{2}m^3+\frac{101}{4}m^2+27m+9
\end{eqnarray*}
The heat kernel becomes
\begin{eqnarray*}
K(t) &=& \sum_{m=0}^{\infty} \(d^0_{2m} e^{-t \lambda_{2m}} + d^0_{2m+1} e^{-t \lambda_{2m+1}}\)
\end{eqnarray*}
The corresponding Euler-Maclaurin integral becomes
\begin{eqnarray*}
I(t) &=& \frac{3\sqrt{\pi}}{512} \(\frac{5}{t^{5/2}} - \frac{77}{12 t^{3/2}} - \frac{3521}{96 t^{1/2}}\) - \frac{9}{8} + {\cal{O}}(t^{1/2})
\end{eqnarray*}
and the Bernoulli part is extracted from the summand at $t=0$,
\begin{eqnarray*}
d_{2m} + d_{2m+1} &=& \frac{5}{2} m^4 + 15 m^3 + \frac{65}{2} m^2 + 30 m + 9
\end{eqnarray*}
and becomes
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 &=& \frac{17}{8}
\end{eqnarray*}
Thus in total we get
\begin{eqnarray*}
a_5^V = \frac{17}{8} - \frac{9}{8} = 1
\end{eqnarray*}
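As before, the Bernoulli part and the final coefficient follow from exact rational arithmetic. In the check below (our own) we read $d_k$ as the coefficient of $m^k$ in the combined degeneracy polynomial; the quartic coefficient does not enter the Bernoulli terms and is omitted.

```python
from fractions import Fraction as F

# coefficients of 1, m, m^2, m^3 in d_{2m} + d_{2m+1}
c = [F(9), F(30), F(65, 2), F(15)]

bernoulli = F(1, 2)*c[0] - F(1, 12)*c[1] + F(1, 120)*c[3]
a5V = bernoulli - F(9, 8)   # add the constant term -9/8 of the integral

print(bernoulli, a5V)  # 17/8 1
```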
\subsection{Two-form gauge field}
The representations with zero $U(1)_H$ charge can be extracted from the branching rules in the appendix \ref{SO}. They are
\begin{eqnarray*}
R_{n=2m} &=& \bigoplus_{{{\ell}}=2}^m ({{\ell}}+1,{{\ell}}-2)^2 \oplus \bigoplus_{{{\ell}}=1}^m ({{\ell}},{{\ell}})^2 \oplus \bigoplus_{{{\ell}}=0}^m ({{\ell}}-1,{{\ell}}+2)^2\cr
&&\oplus \bigoplus_{{{\ell}}=0}^{m-1} \(({{\ell}},{{\ell}})\oplus ({{\ell}}+2,{{\ell}}-1)\oplus ({{\ell}}+1,{{\ell}}+1) \oplus ({{\ell}}+1,{{\ell}})\)\cr
R_{n=2m+1} &=& \bigoplus_{{{\ell}}=2}^m ({{\ell}}+1,{{\ell}}-2)^2 \oplus \bigoplus_{{{\ell}}=1}^m ({{\ell}},{{\ell}})^2 \oplus \bigoplus_{{{\ell}}=0}^m ({{\ell}}-1,{{\ell}}+2)^2\cr
&&\oplus \bigoplus_{{{\ell}}=0}^{m} \(({{\ell}},{{\ell}})\oplus ({{\ell}}+2,{{\ell}}-1)\oplus ({{\ell}}+1,{{\ell}}+1) \oplus ({{\ell}}+1,{{\ell}})\)
\end{eqnarray*}
with the corresponding dimensions
\begin{eqnarray*}
\dim R_{2m} &=& \frac{9}{4} m^4 + 12 m^3 + \frac{81}{4} m^2 + \frac{21}{2} m\cr
\dim R_{2m+1} &=& \frac{9}{4} m^4 + 15 m^3 + \frac{135}{4} m^2 + 30 m + 9
\end{eqnarray*}
The heat kernel becomes
\begin{eqnarray*}
K(t) &=& \sum_{m=0}^{\infty} \(d^0_{2m} e^{-t \lambda_{2m}} + d^0_{2m+1} e^{-t \lambda_{2m+1}}\)
\end{eqnarray*}
We get
\begin{eqnarray*}
I(t) &=& \frac{27\sqrt{\pi}}{512} \(\frac{1}{t^{5/2}} - \frac{49}{12 t^{3/2}} + \frac{419}{96 t^{1/2}}\) - \frac{27}{20} + {\cal{O}}(t^{1/2})
\end{eqnarray*}
and from the summand at $t=0$
\begin{eqnarray*}
\frac{9}{2} m^4 + 27 m^3 + 54 m^2 + \frac{81}{2} m + 9
\end{eqnarray*}
we read off that
\begin{eqnarray*}
\frac{1}{2} d_0 - \frac{1}{12} d_1 + \frac{1}{120} d_3 &=& \frac{27}{20}
\end{eqnarray*}
Thus
\begin{eqnarray*}
a_5^T = -\frac{27}{20} + \frac{27}{20} = 0
\end{eqnarray*}
\subsection{Fermion}
We see that no states are neutral under $U(1)_H$ and we get
\begin{eqnarray*}
K(t) &=& 0
\end{eqnarray*}
and so trivially we have
\begin{eqnarray*}
a_5^F &=& 0
\end{eqnarray*}
\subsection{The 5d SYM}
The total heat kernel coefficient is obtained by the same formula as in 6d,
\begin{eqnarray*}
a_5^{SYM} &=& 5 a_5^{S} + \frac{1}{2} a_5^B + a_5^F,\cr
a_5^B &=& a_5^T - a_5^V + a_5^{S_0}
\end{eqnarray*}
where, from the above results, we have
\begin{eqnarray*}
a_5^S &=& 0\cr
a_5^{S_0} &=& 0\cr
a_5^{V} &=& 1\cr
a_5^T &=& 0\cr
a_5^F &=& 0
\end{eqnarray*}
Therefore we get
\begin{eqnarray*}
a_5^{SYM} &=& \frac{1}{2}
\end{eqnarray*}
This is not an integer and does not seem to be directly related to the mismatch of $1$ that we got in 6d. But let us notice that for the total heat kernels
\begin{eqnarray*}
a_5^{tot,T} &=& a_5^T + a_5^V\cr
a_5^{tot,V} &=& a_5^V + a_5^{S_0}\cr
a_5^{tot,S_0} &=& a_5^{S_0}
\end{eqnarray*}
we have
\begin{eqnarray*}
a_5^{tot,T} &=& 1\cr
a_5^{tot,V} &=& 1\cr
a_5^{tot,S_0} &=& 0
\end{eqnarray*}
Let us further note that a two-form in 6d reduces to both a two-form and a one-form in 5d. So it seems that we always find this extra contribution of $1$ from the one-forms, wherever they appear.
\section{Resolving the mismatch}\label{discuss}
Let us now resolve the mismatch.
There is a zero mode in the spectrum. The zero mode belongs to the scalar ghost, for which $\lambda_n = n^2 + 5 n = 0$ at $n = 0$, with degeneracy $d_{0} = 1$. Thus there is a zero mode that we need to further gauge fix. In the end that gauge fixing amounts to just removing the $n=0$ mode from the spectrum. It does not concern the vector ghost, and at first sight it does not seem to explain the mismatch. One may argue that we should then shift $a_6^{S_0}$ by one unit. If we do that, however, we do not cure the mismatch; we get one more, making things worse. The heat kernel is well-defined with the zero mode included. In fact, zero modes play an important role in heat kernels when they are applied to index theorems \cite{Gilkey}.
Nevertheless, the key to understanding the mismatch is to realize that the heat kernel includes zero modes, and that new zero modes can arise when we perform the Hodge decomposition of a $p$-form. Any such new zero mode that arises through the Hodge decomposition must be removed by hand, since it was not there originally. This mechanism was first discovered in \cite{Christensen:1979iy}, \cite{Fradkin:1983mq} and it was applied to higher spin fields on $S^6$ in \cite{Tseytlin:2013fca}. I would like to thank Tseytlin for helping me resolve the mismatch puzzle by pointing out the relevant references.
Let us now illustrate how this mechanism resolves the mismatch by considering the vector ghost. The vector field $v_i$ on $S^6$ decomposes into an exact plus a coexact piece as
\begin{eqnarray*}
v_i &=& v'_i + \partial_i v
\end{eqnarray*}
for the non-zero modes. This decomposition amounts to the relation
\begin{eqnarray*}
\det{}' \triangle^{tot}_1 &=& \det{}' \triangle^{coex}_1 \det{}' \triangle_0^{coex}
\end{eqnarray*}
where zero modes are taken out, as indicated by primes. These determinants may be written in terms of the MP zeta functions as
\begin{eqnarray*}
e^{-\zeta'_{\triangle^{tot}_1}(0)} &=& e^{-\zeta'_{\triangle_1^{coex}}(0)} e^{-\zeta'_{\triangle^{coex}_0}(0)}
\end{eqnarray*}
where zero modes are not included. The relation for the heat kernels is different because for the heat kernels we include the zero modes. Therefore, for the heat kernels, the corresponding relation will read
\begin{eqnarray*}
K^{tot}_{\triangle_1}(t) &=& K_{\triangle_1^{coex}}(t) + \(K_{\triangle_0^{coex}}(t) - 1\)
\end{eqnarray*}
where for the scalar $v$ we have to subtract the zero mode, which would otherwise be overcounted, since it is not present on the left-hand side. Namely, this zero mode does not survive when we take the derivative of $v$ to get the exact piece $\partial_i v$ of $v_i$. There is exactly one zero mode for a scalar on $S^6$. This amounts to a correction of our previous claim that $a_6^{tot,V} = a_6^V + a^{S_0}_6$. The corrected relation reads
\begin{eqnarray*}
a_6^{tot,V} = a^V_6 + \(a^{S_0}_6 - 1\)
\end{eqnarray*}
This in turn corrects (\ref{6dV}) to now read
\begin{eqnarray*}
a_6^{tot,V} - a^V &=& 0
\end{eqnarray*}
and thus, after the correction, we find agreement. Let us next consider the two-form. Again Hodge-decomposing the two-form into coexact and exact pieces as
\begin{eqnarray*}
B_{ij} &=& B'_{ij} + \partial_{[i} B_{j]}
\end{eqnarray*}
we find that we need to subtract any zero modes of the vector field $B_i$ on $S^6$ from the heat kernel for the two-form. Now there are no zero modes for the vector field on $S^6$, so we obtained agreement for the two-form without any correction.
As we dimensionally reduce, we again find the same pattern. By correcting for the zero mode of the vector ghost, we find the result $a_5^V = 0$.
Our result cannot be used to exclude the possibility that we may need to add some extra local degrees of freedom at loci where the circle fiber shrinks to zero size, that is, at the north and south poles of $S^6$. Such extra local contributions will not depend on $r$ because the local geometry near the north and south poles is flat $\mathbb{R}^5$ and no $r$-dependence can arise from there. A similar situation occurs when we put the M5 brane on $S^4 \times \Sigma$ where $\Sigma$ is a Riemann surface \cite{Cordova:2016cmu}. If we view $S^4$ as $S^3$ fibered over an interval such that $S^3$ shrinks to zero size at the end points, and reduce along the Hopf fiber of $S^3$, we find new degrees of freedom at the north and south poles corresponding to a D4 brane ending on a D6 brane. We may remove the north pole of $S^4$ by cutting along a small boundary-$S^3$ near the north pole (and similarly for the south pole). In M-theory we then have the boundary manifold $S^3 \times N_5$ where $N_5$ is the five-dimensional normal bundle of $S^4 \times \Sigma$ in eleven dimensions. We see that by cutting out the north pole, the M5 brane will end on an eight-manifold. Upon dimensional reduction along the Hopf fiber of $S^3$ down to Type IIA string theory, this eight-manifold becomes a seven-manifold that will correspond to a D6-brane, as was shown in \cite{Cordova:2016cmu}. For our $S^6$ we may do an analogous thing. Cutting along a small $S^5$ near the north pole we get a boundary manifold $S^5 \times N_5$. This boundary should correspond to an M9 brane in M-theory. We may reduce along the Hopf fiber of $S^5$ down to Type IIA string theory, where we find a D4 brane ending on some nine-manifold, which should be identified with some 8-brane in IIA string theory on which a D4 brane can end.
\subsection*{Acknowledgments}
I would like to thank Arkady Tseytlin for helping me resolve the mismatch puzzle, and Dongsu Bak for discussions. This work was supported in part by NRF Grant 2017R1A2B4003095.
\section{Introduction}
Game theory is a branch of mathematics that formalizes competitions
with rational rules and rational players
\cite{osborne1994course}. This theory has broad applications in a great
number of fields, from biology to social
sciences and economics. Recently, much attention has been focused on
transferring concepts of game theory to the
quantum realm. Of course, quantum games are games in the standard sense but
the approach allows for quantum phenomena in the course of the game
\cite{piotrowski_invitation_2002, multiqubit_entangling}. Some classical game
theoretical issues can be extended to allow quantum strategies. Usually, the
set of quantum strategies is much larger than a ``classical'' one and
entanglement implies more complex behavior of agents than the ``classical
mixing'' of strategies \cite{osborne1994course} in such games. An $N$-player
quantum game can be defined as a 4-tuple
\begin{equation}
\Gamma = (\mathcal{H},\rho,\mathcal{S},\mathcal{P}),
\end{equation}
where $\mathcal{H}$ is a Hilbert space, $\rho$ is a quantum state (i.e. a density
matrix), $\mathcal{S} = \{S_i\}_{i=1}^N$ is the set of possible player's
strategies and $\mathcal{P} = \{P_i\}_{i=1}^N$ is a set of payoff functions for
the players. A quantum strategy $s_i^\alpha \in S_i$ is a completely positive
trace preserving (CPTP) map. The payoff function $P_i$ of the $i$-th player assigns
a real number -- the payoff -- to a given set of players' strategies
$\{s_j^{\alpha_j}\}_{j=1}^N$.
Usually, the set of strategies is limited to unitary operators
and the payoff is determined via a measurement of an appropriate variable.
Access to such rich strategy sets allows for some spectacular results. For
example, it has been shown that if only one player is aware of the quantum
nature of the system, he/she will never lose in some types of games
\cite{eisert1999quantum}. Recently, it has been demonstrated that a player can
cheat by appending additional qubits to the quantum system
\cite{miszczak2011qubit}. Moreover, one can study the impact of random
strategies on the course of the game~\cite{kosik_quantum_2007}.
The seminal works of Axelrod~\cite{axelrod1984evolution} and Nowak and
May~\cite{nowak1992evolutionary} incited the researchers to investigate the
population structures with local interactions that model various real social
structures with sometimes astonishing accuracy. In that way, evolutionary
game theory has been married with network structure analysis. In particular,
the issues of coordination and cooperation with the involved dilemmas and
efficiency problems have been analysed from this point of view
\cite{tomassini2007}. Game theoretical models, although often unrealistic if
applied to complex human behaviour, provide a simple way of understanding
some important aspects of complex human decisions. Quantum game theory
approach extends such analyses in an interesting way \cite{Busemeyer_2006,
abbot, paw_slad, sladkowski2003giffen, edward2003quantum}. Parrondo's
paradox, showing that in some cases a combination of apparently losing games can
result in a winning one, spurred us on to the analysis of Parrondo's paradox in
this context, presented in the present work.
This paper is organized as follows. In Section~\ref{sec:parrondo} we give a
brief description of Parrondo's games, concentrating on the cooperative game.
In Section~\ref{sec:model} we present our model used for simulation. In
Section~\ref{sec:results} we present results obtained from simulation. Finally,
in Section~\ref{sec:conc} we draw the final conclusions.
\section{Parrondo's games}\label{sec:parrondo}
\subsection{Original paradox}
The Parrondo's paradox \cite{parrondo1996eec} was originally discovered in the
following context. Consider two coin tossing games, $A$ and $B$.
Let the first
game be a toss of a biased coin with winning probability $p = \frac12 - \epsilon$. The second game is based on two biased
coins and the choice of the coin depends on the current state (pay-off) of the game. Coin $B_1$ is selected if the capital of
the player is a multiple of 3. This coin has a probability of winning $p_1$.
Otherwise, coin $B_2$ with winning
probability $p_2$ is chosen. Each winning results in a gain of one unit of
capital, while each loss results in a loss
of one unit of capital. Choosing for example:
\begin{equation}
p_1 = \frac{1}{10} - \epsilon, \; p_2 = \frac34 - \epsilon,
\end{equation}
results in a losing game $B$. This happens because the coin $B_1$ is played
more often than $\frac13$ of the time.
However, if games $A$ and $B$ are interwoven in the described way, the probability of selecting the coin $B_1$ approaches
$\frac13$, thus resulting in a winning game. Furthermore, the capital gain from
this game can overcome the small capital loss
resulting from game $A$. This construction can be generalized to
history-dependent games instead of capital-dependent
ones~\cite{parrondo_new_2000}.
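Since the winning probabilities in game $B$ depend only on the capital modulo $3$, the drift per round can be computed exactly from the stationary distribution of a three-state Markov chain. The Python sketch below is our own illustration (not taken from the cited works); it confirms that for $\epsilon = 1/200$ games $A$ and $B$ are each losing while the random mixture $A+B$ is winning.

```python
from fractions import Fraction as F

def drift(q0, q1, q2):
    """Expected capital gain per round, given winning probabilities
    q_s for capital equal to s mod 3 (win: +1, lose: -1)."""
    # stationary distribution: pi_s = q_{s-1} pi_{s-1} + (1 - q_{s+1}) pi_{s+1},
    # solved by fixing pi0 = 1 and eliminating
    pi0 = F(1)
    pi2 = (1 - q0 + q0*q1) / (1 - q1 + q1*q2)
    pi1 = q0 + (1 - q2) * pi2
    tot = pi0 + pi1 + pi2
    return (pi0*(2*q0 - 1) + pi1*(2*q1 - 1) + pi2*(2*q2 - 1)) / tot

eps = F(1, 200)
pA = F(1, 2) - eps                      # game A coin
p1, p2 = F(1, 10) - eps, F(3, 4) - eps  # game B coins

gA  = 2*pA - 1                          # game A alone: drift -2*eps
gB  = drift(p1, p2, p2)                 # game B alone (B_1 at multiples of 3)
gAB = drift((pA + p1)/2, (pA + p2)/2, (pA + p2)/2)  # random alternation A+B

print(gA < 0, gB < 0, gAB > 0)  # True True True
```

For $\epsilon = 0$ the same computation gives a drift of exactly $0$ for game $B$ alone and $18/709$ per round for the random mixture.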
Since its discovery the Parrondo's paradox has been used to describe
situations where losing strategies can combine to win. There exist deep
connections between the paradox and a variety of physical phenomena. The
original Parrondo's games can be described as discrete-time and
discrete-space flashing Brownian ratchets. This fact has been established
using discretization of the Fokker-Planck equation. In the recent years, many
examples from physics to population genetics have been reported in the
literature, showing the generality of the paradox. Generally, the paradox can
occur in the case of nonlinear interactions of random behavior with an
asymmetry. In our case, the nonlinearity is due to switching of the games A
and B. The asymmetry comes from biased coins. A large number of effects, where
randomness plays a constructive role, including but not limited to stochastic
resonance, volatility pumping, the Brazil nut paradox, can be viewed as being
in the class of Parrondian phenomena. For a review of the Parrondo's paradox
see~\cite{abbott2010asymmetry}. For material regarding modeling Parrondo's
paradox as a quantum walk see~\cite{meyer2002parrondo,bulger2008position}.
\subsection{Cooperative Parrondo's games}
Cooperative Parrondo's games were introduced by Toral
\cite{toral_cooperative_2001}. The scheme is as follows. Consider an ensemble
of $N$ players, each with his/her own capital $C_i(t)$, $i = 1, 2, \ldots, N$.
As in the original paradox, we consider two games, A and B. Player $i$ can play
either game A or B according to some rules. The main difference from the
original paradox is that probabilities of winning game B depend on the state
of players
$i-1$ and $i+1$. For simplicity, we only consider the case when the
probabilities of winning at time $t$ depend only on the present state of the
neighbors, hence the probabilities are given by:
\begin{itemize}
\item $p_1$ if player $i-1$ is a winner and player $i+1$ is a winner
\item $p_2$ if player $i-1$ is a winner and player $i+1$ is a loser
\item $p_3$ if player $i-1$ is a loser and player $i+1$ is a winner
\item $p_4$ if player $i-1$ is a loser and player $i+1$ is a loser
\end{itemize}
The game is, by definition, a winning one when the average value of the capital
\begin{equation}
\langle C(t) \rangle = \frac1N \sum_{i=1}^N C_i(t),\label{eq:avpay}
\end{equation}
increases with time. If each agent starts the game with a given capital,
$C_0$, we define the average capital gain as:
\begin{equation}
\langle C_{\rm G}(t)\rangle = \frac1N\sum_{i=1}^N (C_i(t) - C_0).
\end{equation}
\section{The model}\label{sec:model}
\subsection{Preliminaries}
There are several known approaches to quantization of Parrondo's
games~\cite{flitney_quantum_2002,gawron_quantum_2005}.
We model a cooperative quantum Parrondo's game as a multidimensional quantum
random walk (QRW)~\cite{flitney_quantum_2004}. The average position of the walker along each axis
determines each player's payoff. As in the classical case, we consider two games, $A$ and $B$. The first game has a
probability of winning $p_0$, while the second has four associated probabilities
$\{p_i\}_{i=1}^4$. As in the
classical case, the probabilities of winning game $B$ depend on the state of the neighboring players. The following two
possible schemes of alternating between games $A$ and $B$ are considered
\begin{enumerate}
\item random alternation, denoted $A+B$
\item games played in succession $AABBAABB\ldots$, denoted $[2,2]$.
\end{enumerate}
The Hilbert space
associated with the walker
consists of two components: the coin's Hilbert space and the position Hilbert space
\begin{equation}
\mathcal{H} = \mathcal{H}_{\rm c} \otimes \mathcal{H}_{\rm pos}.
\end{equation}
We introduce two basis states in the single coin Hilbert space, the $\ket{L}$ and $\ket{R}$ states. These states
represent the classical coin's heads and tails respectively.
We focus our attention on the three dimensional case (i.e. a three-player
game). This allows us to limit the size of the quantum system under
consideration and to handle it numerically. We
take the state of the walker to be
\begin{equation}
\ket{\Psi} = \ket{C} \otimes \ket{\psi},
\end{equation}
where $C$ is the state of all coins and $\psi$ represents the position of the
walker in a three-dimensional space. Furthermore, the position component of the state of the walker $\ket{\Psi}$,
$\ket{\psi}$, is itself a three-component system, $\ket{\psi} = \ket{\psi_x}\otimes\ket{\psi_y}\otimes\ket{\psi_z}$. The Hilbert space
$\mathcal{H}_c$ is a three-qubit space, hence its dimension is $\textnormal{dim}(\mathcal{H}_c)=8$.
The evolution of the state $\Psi$ is governed by the operator
\begin{equation}
U = U_{\rm pos}U_{\rm c3}U_{\rm c2}U_{\rm c1},
\end{equation}
where $U_{\rm pos}$ is the position update operator. The position update is based
on the current states of the coins of all players, and the operator is given by
\begin{equation}
U_{\rm pos} = \sum_{ (A,B,C) \in \atop \{ P_r, P_l \}^{\times 3}} A
\otimes B \otimes C \otimes f(A) \otimes f(B)
\otimes
f(C),
\end{equation}
where
\begin{equation}
f(X) = \left\{
\begin{array}{ccc}
S & \mathrm{if} & X\equiv P_{\rm r} \\
S^\dagger & \mathrm{if} & X\equiv P_{\rm l} \\
\end{array}\right.
\end{equation}
and $S$ is the shift operator in the position space, $S\ket{x} = \ket{x+1}$,
$P_{\rm r}$ and $P_{\rm l}$ are the
projection operators on the coin states $\ket{R}$ and $\ket{L}$ respectively.
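As an illustration outside the paper (not a claim about its numerics), the operator $U_{\rm pos}$ can be assembled on a position space truncated to $M$ sites with periodic boundary conditions; the truncation, the choice $M=4$, and the basis convention $\ket{L}=(1,0)^{\rm T}$ are assumptions made only to keep the sketch small:

```python
import numpy as np
from itertools import product

M = 4  # position space truncated to M sites with periodic boundary (assumption)

# Shift operator S|x> = |x+1 mod M>; its adjoint shifts in the other direction.
S = np.roll(np.eye(M), 1, axis=0)

# Projectors onto the coin states |L> and |R>, with |L> = (1,0)^T, |R> = (0,1)^T.
P_l = np.diag([1.0, 0.0])
P_r = np.diag([0.0, 1.0])

def f(X):
    """f(X) of the text: shift right for P_r, shift left (adjoint) for P_l."""
    return S if X is P_r else S.conj().T

# U_pos = sum over (A,B,C) in {P_r, P_l}^3 of A (x) B (x) C (x) f(A) (x) f(B) (x) f(C).
U_pos = sum(
    np.kron(np.kron(np.kron(np.kron(np.kron(A, B), C), f(A)), f(B)), f(C))
    for A, B, C in product([P_r, P_l], repeat=3)
)
```

On the truncated space, $U_{\rm pos}$ acts as a coin-controlled shift and is unitary.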
The tossing of the first player's coin when game $A$ is played is given by the operator
\begin{equation}
U_c = U_0 \otimes \1_{\rm c} \otimes \1_{\rm c} \otimes \1_{\rm pos}
\end{equation}
where $\1_{\rm pos}$ is the identity operator on the entire position space and
$\1_{\rm c}$ is the identity operator on a single
coin space. In the case of game $B$, the tossing of the first player's coin is
realized by the operator
\begin{equation}
\begin{split}
U_c &= U_1 \otimes P_{\rm r} \otimes P_{\rm r} \otimes \1_{\rm pos}
+ U_2
\otimes P_{\rm r} \otimes P_{\rm l} \otimes
\1_{\rm pos} +\\ &+ U_3 \otimes P_{\rm l} \otimes P_{\rm r} \otimes
\1_{\rm pos} + U_4 \otimes P_{\rm l}
\otimes P_{\rm l} \otimes \1_{\rm pos},
\end{split}
\end{equation}
where $U_k$ are the operators of tossing a single
coin, given by
\begin{equation}
U_k = \left(
\begin{array}{cc}
\sqrt{\rho_k} & \sqrt{1 - \rho_k}\mathrm{e}^{\mathrm{i}\theta_k} \\
\sqrt{1 - \rho_k}\mathrm{e}^{\mathrm{i}\phi_k} & -\sqrt{\rho_k}\mathrm{e}^{\mathrm{i}(\theta_k +
\phi_k)}
\end{array}
\right),\label{eq:unitary}
\end{equation}
where $k \in \{0,1,2,3,4\}$, $1-\rho_k$ is the classical probability that the
coin changes its state, and $\phi_k$ and $\theta_k$ are phase angles. The
classical probabilities $p_i$ and their quantum counterparts $\rho_i$
parameterize the Parrondo phenomenon in the two settings. In general, there is no
numerical relation between $p_i$ and $\rho_i$; therefore we use different
symbols to avoid confusion. Unless stated otherwise, we assume the phase
angles to be $\phi_k =\theta_k = \pi/2$ for all $k$. However, in the last
paragraph of Section~\ref{sec:results} we show the influence of the phase
angles on the behavior of the game.
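The coin operator of Eq.~\eqref{eq:unitary} is unitary for any $\rho_k \in [0,1]$ and any phase angles, which can be checked directly; a minimal numerical sketch (ours, not from the paper):

```python
import numpy as np

def coin_operator(rho, theta=np.pi / 2, phi=np.pi / 2):
    """Single-coin toss operator of Eq. (eq:unitary); 1 - rho is the classical
    probability that the coin changes its state.  Default angles are the
    pi/2 choice used in most of the paper."""
    return np.array([
        [np.sqrt(rho), np.sqrt(1 - rho) * np.exp(1j * theta)],
        [np.sqrt(1 - rho) * np.exp(1j * phi),
         -np.sqrt(rho) * np.exp(1j * (theta + phi))],
    ])
```

The off-diagonal entry has squared modulus $1-\rho_k$, i.e., the flip probability survives the quantization.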
\subsection{Studied cases}
We fix the probabilities $\rho_0 = \rho_1 = \rho_2 = \rho_3 = 0.5$ and study the impact of
varying the parameter $\rho_4$ on the behavior of the
game. The following special cases of the initial state of the coins are considered:
\begin{enumerate}
\item GHZ state, $\ket{C} = \frac{1}{\sqrt{2}}(\ket{LLL}+\ket{RRR})$
\item W state, $\ket{C} = \frac{1}{\sqrt{3}}(\ket{LLR}+\ket{LRL}+\ket{RLL})$
\item separable state, $\ket{C} = \frac{1}{2\sqrt{2}}(\ket{L} -
\ket{R})^{\otimes 3}$
\item a semi-entangled state, $\ket{C} = J\ket{LLL}$
\end{enumerate}
In the last case, the operator $J$ is given by~\cite{abbot}
\begin{equation}
J(\omega) = \exp(\mathrm{i} \frac{\omega}{2}\sigma_x^{\otimes 3}) = \1^{\otimes
3}\cos\frac{\omega}{2} + \mathrm{i} \sigma_x^{\otimes
3}\sin\frac{\omega}{2}\label{eq:J},
\end{equation}
where $\omega \in [0, \pi/2]$ is a measure of entanglement. In the case of
$\omega = \frac{\pi}{2}$, the resulting maximally entangled state is of the
GHZ class:
\begin{equation}
J\left(\frac{\pi}{2}\right)\ket{LLL} = \frac{1}{\sqrt{2}}\left( \ket{LLL}
+ \mathrm{i} \ket{RRR} \right).
\end{equation}
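This action of $J$ is easy to verify numerically; the following sketch (an illustration, not part of the paper) encodes $\ket{L}=(1,0)^{\rm T}$:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sx3 = np.kron(np.kron(sx, sx), sx)          # sigma_x tensored three times

def J(omega):
    """The operator of Eq. (eq:J): cos(w/2) * 1 + i sin(w/2) * sigma_x^{(x)3}."""
    return np.cos(omega / 2) * np.eye(8) + 1j * np.sin(omega / 2) * sx3

LLL = np.zeros(8, dtype=complex)
LLL[0] = 1.0                                 # |LLL>, with |L> = (1,0)^T
state = J(np.pi / 2) @ LLL                   # (|LLL> + i|RRR>)/sqrt(2)
```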
We investigate the
following scenarios of games:
\begin{enumerate}
\item Game A only, denoted $A$
\item Game B only, denoted $B$
\item Game A and B chosen randomly, denoted $A+B$
\item Game A and B played in the sequence: two games of type A, followed by two games of type B, leading to
AABBAABBAABB\ldots, denoted $[2,2]$
\end{enumerate}
\section{Results and discussion}\label{sec:results}
Figure~\ref{fig:rhos} shows the average capital gains of all players as
defined by Eq.~\eqref{eq:avpay}.
Figures~\ref{fig:coin_normal}, \ref{fig:coin_GHZ} and \ref{fig:coin_W} show results when the initial state of the coin
is separable, the GHZ state and the W state respectively. The capital gains
are taken after 16 rounds of the game. In
each round, each player plays exactly once.
In the case of a separable initial state, the Parrondo Paradox occurs if
$\rho_4\in[0.1, 0.5)$. Game $[2,2]$
exhibits the Paradox in the whole interval, whereas game $A+B$ is a Parrondo
game only for $\rho_4 = 0.4$. Detailed
results for $\rho_4 = 0.4$ are shown in Figure~\ref{fig:normal_det}. Interestingly, when game B becomes winning, game
$[2,2]$ can become a losing game. This happens for $\rho_4\in(0.5,0.9]$.
\begin{figure}[!h]
\subfloat[separable
state]{\label{fig:coin_normal}\includegraphics{pay_normal_rhos}}~
\subfloat[GHZ state]{\label{fig:coin_GHZ}\includegraphics{pay_GHZ_rhos}}\\
\subfloat[W state]{\label{fig:coin_W}\includegraphics{pay_W_rhos}}
\caption{Average capital gains of all players for different initial states of the coin after 16 rounds of the
game. Lines are eye-guides.}\label{fig:rhos}
\end{figure}
When the initial state of players' coins is set to be the GHZ state, the nature
of game B changes significantly: the game becomes a winning one for
$\rho_4\in[0.1,0.5)$. In contrast to the previous case, games $[2,2]$ and $A+B$
are winning as well in this regime. When $\rho_4$ increases further, games $[2,2]$
and $A+B$ are winning once again, whereas game B becomes a losing
game. A comparison of the detailed evolutions of the average capital gains is shown
in Figure~\ref{fig:GHZ_comp}. These plots show that, as $\rho_4$ increases, the
behavior of the capital changes from an oscillatory decrease (increase) to a linear
decrease (increase). Finally, we note that the bigger the average loss
of capital in game B, the greater the capital gain when games $[2,2]$ and
$A+B$ are played.
\begin{figure}[!h]
\centering\includegraphics{pay_normal}
\caption{Average capital gains of all players in the case of separable
initial state, $\rho_4 = 0.4$. Lines are
eye-guides.}\label{fig:normal_det}
\end{figure}
Selecting the W state as the initial one, we find no paradoxical
behavior. This is due to the fact that for this initial state game A becomes a
losing game as well. To test whether this initial state can lead to paradoxical
behavior, we investigated some other game types for this case.
Figure~\ref{fig:W_games} shows the results for games AAABB, AABBB and AAABBB,
denoted $[3,2]$, $[2,3]$ and $[3,3]$ respectively. They also do not exhibit any
paradoxical behavior. Therefore, it may be appropriate to propose a method of
distinguishing between the two maximally entangled three-qubit states.
Such a possibility might be used in quantum state tomography or in initial state
preparation for some configurations.
\begin{figure}[!h]
\centering\includegraphics{parrondo}
\caption{Quantum circuit for distinguishing the W and GHZ
states.}\label{fig:circ}
\end{figure}
Consider the quantum circuit depicted in Figure~\ref{fig:circ}. The input
qubits are the initial state of the coin ($\ket{\mathrm{GHZ}}$ or
$\ket{\mathrm{W}}$) and registers $\ket{p_i}$ holding the payoff of the $i$-th
player. After a measurement is performed on these registers, a payoff of each
player is obtained. Classical addition of these payoffs allows us to
determine, whether the initial coin state was a GHZ state or a W state.
The change in the behavior of game $A$ when changing from the GHZ to the W
state can be explained as follows. The fair coin operator acting on the GHZ
transfers it to the state
\begin{equation}
\ket{\psi} = \frac14 \left[ (1 - \mathrm{i}), (\mathrm{i} - 1), (\mathrm{i} - 1), (\mathrm{i} - 1),
(\mathrm{i} - 1), (\mathrm{i} - 1), (\mathrm{i} - 1), (1 - \mathrm{i})
\right]^{\rm T}.\label{eq:state_step}
\end{equation}
After another application of the coin flip gate, this state becomes a
GHZ state again. In both the GHZ state and the state given by
Eq.~\eqref{eq:state_step}, the probabilities of increasing and decreasing a
player's payoff are equal. This is not the case for the W state. In this state,
the ``fair'' coin flip causes the players to lose capital.
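This transfer can be checked numerically; in the sketch below (ours, not from the paper), $U_0$ is the fair coin, i.e., Eq.~\eqref{eq:unitary} with $\rho_0=1/2$ and $\theta_0=\phi_0=\pi/2$, and the basis convention is $\ket{L}=(1,0)^{\rm T}$:

```python
import numpy as np

# Fair coin U_0: Eq. (eq:unitary) with rho = 1/2 and theta = phi = pi/2.
U0 = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2)
U0_3 = np.kron(np.kron(U0, U0), U0)          # all three coins tossed at once

GHZ = np.zeros(8, dtype=complex)
GHZ[0] = GHZ[7] = 1 / np.sqrt(2)             # (|LLL> + |RRR>)/sqrt(2)

psi = U0_3 @ GHZ      # the state of Eq. (eq:state_step)
psi2 = U0_3 @ psi     # again a GHZ state, up to a global phase
```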
\begin{figure}[!h]
\subfloat[$\rho_4 = 0.7$]{\label{fig:GHZ_comp07}\includegraphics{pay_GHZ_1}}~
\subfloat[$\rho_4 = 0.9$]{\label{fig:GHZ_comp09}\includegraphics{pay_GHZ_2}}
\caption{Comparison of detailed evolutions of capital for the GHZ initial state. Lines are
eye-guides.}\label{fig:GHZ_comp}
\end{figure}
\begin{figure}[!h]
\centering\includegraphics{W_games}
\caption{Average capital gains of all players for different games with the
W state being the initial state of the
coins. Lines are eye-guides.}\label{fig:W_games}
\end{figure}
Figure~\ref{fig:ent} depicts the behavior of the studied games for different
values of the parameter $\omega$ introduced in Eq.~\eqref{eq:J}. In this
setup, games A and B are both losing games when $\omega < \frac{\pi}{2}$.
Furthermore, games [2,2] and A+B do not exhibit paradoxical behavior. When
the parameter $\omega$ reaches its maximum value, two interesting things
happen: game A becomes a fair game again and, more interestingly, the
paradoxical behavior is restored for games [2,2] and A+B.
\begin{figure}[!h]
\subfloat[$\omega =
0$]{\label{fig:ent_0}\includegraphics{pay_GHZ_ent_0}}~
\subfloat[$\omega =
\frac{\pi}{10}$]{\label{fig:ent_1}\includegraphics{pay_GHZ_ent_1}}\\
\subfloat[$\omega =
\frac{2\pi}{10}$]{\label{fig:ent_2}\includegraphics{pay_GHZ_ent_2}}~
\subfloat[$\omega =
\frac{3\pi}{10}$]{\label{fig:ent_3}\includegraphics{pay_GHZ_ent_3}}\\
\subfloat[$\omega =
\frac{4\pi}{10}$]{\label{fig:ent_4}\includegraphics{pay_GHZ_ent_4}}~
\subfloat[$\omega =
\frac{5\pi}{10}$]{\label{fig:ent_5}\includegraphics{pay_GHZ_ent_5}}\\
\caption{Average capital gains of all players for different values of the
entanglement $\omega$}\label{fig:ent}
\end{figure}
Finally, we test the impact of the phase angles $\phi$ and $\theta$ of the
elements of the coin operator, defined in Eq.~\eqref{eq:unitary}. Maps of the
average capital gains are shown in Figures~\ref{fig:map_GHZ} and
\ref{fig:map_separable} for the GHZ and separable initial coin states,
respectively. In the case of the A+B setup, the results were averaged over ten
runs to obtain a smoother picture. The resolution of the plots is $\frac{\pi}{8}$ in
each direction. Results for the GHZ state show that games A and B
are insensitive to the phase changes. Game A always remains a fair game and
game B is always a losing one. The randomness of the selection of a specific game
in the A+B setup is reflected in the map of the payoffs. The highly
structured setup of the [2,2] game results in a highly structured map. The
parameter values for which the paradox occurs are shown in
Figure~\ref{fig:GHZ_possibility}. Next, we move to the separable state. In
this case games A and B show a similar structure in the average capital gains.
This is reflected in games A+B and [2,2] for this initial state.
Figure~\ref{fig:separable_possibility} shows the phase angle values for which the
paradox occurs. Finally, in the case of the W state, games A and B are losing
games and are insensitive to changes of the phase angles. As such, game [2,2]
is also losing and does not exhibit any change in the average capital gain.
Game A+B shows some sensitivity to the phase angle values; however, this is the
effect of random switching between games A and B.
\begin{figure}
\centering{\includegraphics{GHZ_map}}
\caption{Map of the average capital gain of all players for the GHZ state
for different game setups. The color shows the average capital gain
value.}\label{fig:map_GHZ}
\end{figure}
\begin{figure}
\centering{\includegraphics{GHZ_possibility}}
\caption{The gray color marks the values of angles $\phi$ and $\theta$
where the paradox occurs for the GHZ state.}\label{fig:GHZ_possibility}
\end{figure}
\begin{figure}
\centering{\includegraphics{separable_map}}
\caption{Map of the average capital gain of all players for the separable
state
for different game setups. The color shows the average capital gain
value.}\label{fig:map_separable}
\end{figure}
\begin{figure}
\centering{\includegraphics{separable_possibility}}
\caption{The gray color marks the values of angles $\phi$ and $\theta$
where the paradox occurs for the separable
state.}\label{fig:separable_possibility}
\end{figure}
\clearpage
\section{Conclusions}\label{sec:conc}
We investigated quantum cooperative Parrondo's games modeled using
multidimensional quantum walks. We studied different initial states of the
coins of the players: the separable state, the GHZ state and the W state. We
showed that cooperative Parrondo's games can be implemented in the quantum
realm. Furthermore, our analysis shows how the behavior of a game depends on
the initial state of the coins of all players. One interesting result is that
if the initial state of the coins is separable and one game is a winning one,
then the game where games A and B are interwoven can become a losing game. This
effect does not occur when the initial state of the coins is set to be the GHZ
state. In this case games $A+B$ and $[2,2]$ are always non-losing games. This
shows that the choice of the initial state may be crucial for the paradoxical
behavior. However, the most important result of our work is showing that the
Paradox can also be observed in cooperative quantum games. As a by-product, it
has been shown that the quantum Parrondo paradox may be used to easily
distinguish between the GHZ and W states.
\begin{acknowledgements}
Work by J.~S{\l}adkowski was supported by the Polish National
Science Centre under the project number DEC-2011/01/B/ST6/07197. Work by
{\L}.~Pawela was supported by the Polish Ministry of Science and Higher
Education under the project number IP2011 014071. Numerical simulations
presented in this work were
performed on the ``Leming'' and ``\'Swistak'' computing systems of The Institute of Theoretical and Applied
Informatics, Polish Academy of Sciences.
\end{acknowledgements}
\section{Introduction}
The problem of generating frames by iterative
actions of operators \cite{ACMT, ACAMP, AP} has emerged within the research related to the dynamical sampling problem \cite{AADP13}-\cite{ACAMP}.
The
conditions under which a frame generated by iterative
actions of operators
exists for a finite-dimensional or a separable Hilbert space have been stated in \cite{ACMT} and \cite{ACAMP}. If we have a frame, then a linear combination of a dual frame with the dynamically sampled coefficients reproduces the original signal.
The natural follow-up questions to ask in this setup are: whether we can obtain a scalable frame under iterative actions, and if not, whether we can find a dual frame which preserves the dynamical structure.
Let $A $ be an operator on a separable Hilbert space $\mathbb{H}$. We consider a countable set of vectors $G $ in $\mathbb{H}$, and a function $L : G \rightarrow \mathbb Z_+$, where $\mathbb Z_+ = \mathbb N \cup \Set{0}$. Related to the iterated system of vectors \begin{equation}\label{oursystem}
\{A^j {\bf g} \; | \; {\bf g} \in G, \; 0 \leq j \leq L({\bf g}) \},\end{equation}
we answer the following two questions:
\begin{itemize}
\item[(Q1)]
What conditions on $A$, $ G$ and $L$ ensure that \eqref{oursystem}
is a scalable frame for $\mathbb{H}$?
\item[(Q2)] Assuming the system \eqref{oursystem}
is a frame for $\mathbb{H}$, can we obtain a dual frame for \eqref{oursystem}, perhaps by iterative actions of some operator?
\end{itemize}
The motivation for studying systems of type \eqref{oursystem} comes from the
{\it dynamical sampling problem }
(DSP): Find sampling locations
that allow the reconstruction of an unknown function ${\bf f} $
from the scarce
samples of ${\bf f}$,
and its
evolved states $A^n {\bf f}$.
In the DSP, $n$ represents time, and
$A^*$ is an evolution operator; for instance, $A^*$ can represent the heat evolution operator,
${\bf f}$ the temperature at time
$n
= 0$,
and
$(A^*)^n {\bf f}$ the temperature at time
$n$. The DSP for the heat evolution operator was studied in \cite{LV09, RCLV11}; generalizations of the DSP and related applications can be found in \cite{AADP13}-\cite{ACAMP}.
More precisely, the DSP is as follows: Let the initial state of a dynamical system be represented by an unknown element
${\bf f} \in \mathbb{H}$.
Say the initial state ${\bf f} $ is evolving under the action of an
operator $A^*$ to the states
${\bf f}_j = A^*{\bf f}_{j-1}$, where ${\bf f}_0 = {\bf f}$ and $j \in \mathbb Z_+$.
Given a set of vectors $G \subset \mathbb{H}$, one can find conditions on $ A$, $G $ and $ L = L({\bf g})$ which
allow the recovery of the initial state ${\bf f}$ from the set of samples
$\{ \langle A^{* j}
{\bf f} , {\bf g} \rangle \; | \; {\bf g} \in G\}_{ j =0}^{L({\bf g})}$.
In short, the problem of signal recovery via dynamical sampling is solvable if the
set of vectors $F_A^{L}(G) : =\{ A^{ j}
{\bf g} \; | \; {\bf g} \in G\}_{j=0}^{ L({\bf g})}$ is a frame for $\mathbb{H}$, \cite{ACMT}. In frame theory it is known that every frame has at least one dual frame; if $F_A^{L}(G)$ is a frame for $\mathbb{H}$, and its dual frame elements are ${\bf h}_{{\bf g}, j}$, then all ${\bf f} \in \mathbb{H}$ are reconstructed as
\begin{equation}\label{signalreconstructuonDS}
{\bf f} = \sum_{{\bf g} \in G} \sum_{j=0}^{L({\bf g})}\langle {\bf f}, A^j {\bf g} \rangle {\bf h}_{{\bf g}, j}.
\end{equation}If the frame $F_A^{L}(G)$ is \textit{scalable}, then its dual frame elements are $w^2_{j,{\bf g}} A^j {\bf g}$ for some \textit{scaling coefficients} $w_{j, {\bf g}}$, and the reconstruction formula \eqref{signalreconstructuonDS} is
\begin{equation}\label{signalreconDSscalable}
{\bf f} = \sum_{{\bf g} \in G} \sum_{j=0}^{L({\bf g})}w_{j,{\bf g}}^2 \langle {\bf f}, A^j {\bf g} \rangle A^j {\bf g}.
\end{equation}Notice that the frame coefficients in \eqref{signalreconstructuonDS} are exactly the samples
\begin{equation}
\label{sampleseq}
\langle A^{* j} {\bf f} , {\bf g} \rangle=\langle {\bf f} , A^j {\bf g} \rangle.
\end{equation} Thus the set of samples $\{ \langle A^{* j}
{\bf f} , {\bf g} \rangle \; | \; {\bf g} \in G\}_{ j =0}^{L({\bf g})}$ is sufficient for the recovery of ${\bf f}$.
Since \eqref{signalreconstructuonDS} requires that the dual frame of $F_A^{L}(G)$ is known, unless the frame is scalable as in \eqref{signalreconDSscalable}, it is significant to find the answers to questions (Q1) and (Q2).
\subsection{Contribution and organization} In Section \ref{prelim} we recall the notions of frames, scalable frames and, in particular, frames of iterative actions of operators, i.e., dynamical frames.
In Section \ref{allnewstuffbeyongAA}, we illustrate the dynamical nature of the canonical dual frame of \eqref{oursystem} in Theorem \ref{canondualframedysam}, and the fusion frame structure of dynamical frames (Corollary \ref{fusiondyn}). In Section \ref{mainresults} we give a characterization of scalability in Theorem \ref{multiscalablediagonalgen}, under the assumption that $A$ is normal. Section \ref{blockdiagOpsubsection} contains several generalized examples of frames and scalable frames in lower dimensions, and we characterize frame scalability in $\mathbb R^2$ and $\mathbb R^3$. In addition, we provide examples of operators which are not normal, yet generate scalable frames for $\mathbb R^2$ and $\mathbb R^3$.
Motivated by these results, we
study block-diagonal operators, which combine low-dimensional frames into higher-dimensional frames (Theorem \ref{blockresultbig}).
%
In Section \ref{compansection}, we also provide examples of dynamical scalable frames, generated using companion operators \cite{HJ85} and generalized companion operators. In Section \ref{conclusion} we give initial answers to a further question, (Q3): frame scalability when multiple operators are involved.
\section{Preliminaries}\label{prelim}
Frames are a generalization of orthonormal bases.
For an orthonormal basis $ \{{\bf f}_i\}_{i \in I}$ of \( \mathbb{H} \), it holds
\begin{equation}\label{onmbrepr} {\bf f} = \sum_{i \in I} \inpro{{\bf f}, {\bf f}_i} {\bf f}_i \;\; \text{ for all } {\bf f} \in \mathbb{H}. \end{equation}
%
The uniqueness of the representation \eqref{onmbrepr} is not always an advantage.
In applications such as image and signal processing, the loss of a single coefficient during data transmission prevents the recovery of the original signal,
unless redundancy is ensured, which is precisely what frames provide.
Since finding a dual frame can be computationally challenging, one significant direction of current research has been on the construction of tight frames in finite dimensions
\cite{ BM03, STDH07, CMKLT06, CFHWZ12, HKLW07}. A tight frame plays the role of its own dual, and provides a reconstruction formula as in \eqref{onmbrepr} up to a constant.
Recently, the theme of scalable frames has been developed as a method of constructing tight frames from general frames by manipulating the length of frame vectors.
Scalable frames maintain erasure resilience and sparse expansion properties of frames \cite{CC13, CKLMNPS14, KOF13, KOPT13, CKOPW15}.
First, let us review relevant definitions and known results. Throughout this paper $\mathbb{H}$ denotes a separable Hilbert space.
Given an index set $I$, a sequence $F = \{{\bf f}_i\}_{i \in I}$ of nonzero elements of $\mathbb{H}$ is a \textit{frame} for $\mathbb{H}$, if there exist $0<A \leq B < \infty$ such that
\begin{equation}
\label{frameineq}
A\Vert {\bf f} \Vert^2 \leq \sum_{i \in I} \vert \langle {\bf f} , {\bf f} _i \rangle \vert^2 \leq B\Vert {\bf f} \Vert^2 \;\; \text{ for all } {\bf f} \in \mathbb{H}.
\end{equation}
In finite dimensions,
we find it useful to express frames as matrices, so we abuse the notation of $F$ as follows: when $\dim \mathbb{H} = n$, a frame $F=\{{\bf f}_i\}_{i \in I}$ for $\mathbb{H}$ is often represented by an $n \times k$ matrix $F$, whose column vectors are ${\bf f}_i$, $i = 1, \ldots, k$.
The frame operator $S = FF^*$ is then positive, self-adjoint
and invertible.
For each frame $F$ there exists at least one \textit{dual} frame $ G= \{{\bf g}_i\}_{i \in I}$, satisfying
\begin{equation}
\label{framerepr}
{\bf f} = \sum_{i \in I} \langle {\bf f}, {\bf f}_i \rangle {\bf g}_i = \sum_{i \in I} \langle {\bf f}, {\bf g}_i \rangle {\bf f}_i \;\; \text{ for all } {\bf f} \in \mathbb{H}.
\end{equation}
The matrix equation $ F G^* = G F ^* = I$ is an equivalent expression to the frame representation \eqref{framerepr}.
The set $ \{{\bf g}_i=S^{-1}{\bf f}_i\}_{i \in I}$ is called the canonical dual frame.
Finding a dual frame can be computationally challenging; thus it is of interest to work with tight frames. We say that a frame is $A$-\textit{tight} if $A=B$ in \eqref{frameineq}. In this case, the function reconstruction is simplified since the frame operator is the identity operator up to scalar multiplication.
So, for an $A$-tight frame, we only need one frame for both analysis and reconstruction, as \eqref{framerepr} becomes
\begin{equation}
{\bf f} =\frac{1}{A} \sum_{i \in I} \langle {\bf f}, {\bf f}_i \rangle {\bf f}_i = \frac{1}{A}FF^*{\bf f} \;\; \text{ for all } {\bf f} \in \mathbb{H}.
\end{equation}
When $A=1$, we call $F$ a Parseval frame.
If a frame $F=\{ {\bf f}_i\}_{i \in I}$ is not tight, but we can find scaling coefficients $w_i \ge 0$, $i \in I$, such that the scaled frame $F_w=\{w_i{\bf f}_i\}_{i \in I}$ is tight, then we call the original frame $F$ a \textit{scalable} frame.
We note that the notion of scalability of a frame is defined for a unit-norm frame in \cite{CKLMNPS14}, but in this manuscript we do not require a scalable frame to be unit-norm.
For a scalable frame, the scaled frame representation becomes
\begin{equation}\label{defscalablerepr} {\bf f} = \sum_{i \in I} \langle {\bf f}, w_i {\bf f}_i \rangle w_i {\bf f}_i = F_wF_w^*{\bf f} = F D_{w^2}F ^*{\bf f}\;\; \text{ for all } {\bf f} \in \mathbb{H},\end{equation}
where $D_{w^2}$ denotes a diagonal operator with $w_i ^2$ as diagonal entries.
If the scaling coefficients $w_i $ are positive for all $i \in I$, then we call the original frame $F$ a \textit{strictly scalable} frame.
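In finite dimensions, whether given weights scale a frame to a Parseval frame can be checked directly from \eqref{defscalablerepr}, i.e., by testing $F D_{w^2} F^* = I$. The following sketch uses the Mercedes-Benz frame in $\mathbb R^2$ as an example of our own choosing:

```python
import numpy as np

def is_tight_scaling(F, w, tol=1e-10):
    """Return True if the weights w make F_w = {w_i f_i} a Parseval frame,
    i.e. F diag(w^2) F^* = I.  F is n x k with frame vectors as columns."""
    n = F.shape[0]
    D = np.diag(np.asarray(w, dtype=float) ** 2)
    return np.allclose(F @ D @ F.conj().T, np.eye(n), atol=tol)

# The Mercedes-Benz frame in R^2 is strictly scalable with equal weights sqrt(2/3).
F = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5,           -0.5]])
w = np.full(3, np.sqrt(2.0 / 3.0))
```

Unweighted, this frame is merely tight with frame bound $3/2$, so the all-ones weights fail the test.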
Let $I$ denote a finite or countable index set, let $G = \{ {\bf f}_s \}_{s\in I} \subset \mathbb{H}$ and let $A : \mathbb{H} \to \mathbb{H}$ be a bounded operator. We call the collection
\begin{equation}\label{orifdynfr}
F_{G}^{\bf L } (A) = \cup_{s \in I} \{A^j {{\bf f}}_{s} \,:\, j=0,1,\ldots,L_s \}
\end{equation}
a {\it dynamical system}, where $L_s \geq 0$ ($L_s$ may go to $\infty$) and ${\bf L}=(L_s)_{s \in I}$ is a sequence of iterations. The operator $A$, involved in generating the set \eqref{orifdynfr}, is sometimes referred to as a {\it dynamical operator}.
If $A$ is fixed, then we use the notation $F_{G} ^{\bf L}$, and if $G =\{ {\bf f} \}$ and ${\bf L} =\{ L \}$, then we label \eqref{orifdynfr} by $F_{{\bf f}} ^L$.
Note that in \cite{ACMT}, ${\bf f}_{s}$ are chosen to be the standard basis vectors, while in this manuscript, we allow the use of any nonzero vector ${\bf f}_{s} \in \mathbb{H}$. If \eqref{orifdynfr} is a frame for $\mathbb{H}$, then we call $F_{G}^{\bf L } (A)$ a \textit{dynamical frame}, generated by operator $A$, set $G$ and sequence of iterations ${\bf L}$.
\section{New results on dynamical frames}\label{allnewstuffbeyongAA}
As we are about to see in Theorem \ref{canondualframedysam}, the canonical dual frame of a dynamical frame preserves the dynamical structure, just like the canonical duals of wavelet or Gabor frames preserve the corresponding wavelet/Gabor structure \cite{Gro01}.
\begin{thm}\label{canondualframedysam}
Let $G = \{ {{\bf f}}_{s} \}_{s \in I} \subset\mathbb{H}$, where $I$ is a countable index set, and assume that $F_{G}^{\bf L} (A)$ is a frame for $\mathbb{H}$, with frame operator $S$.
The canonical dual frame of $F_{G}^{\bf L} (A)$ is the dynamical frame $F_{ G'} ^{\bf L} (B)$, generated by $B=S^{-1}AS$, $G' = \{ {{\bf g}}_s = S^{-1}{{\bf f}}_{s} \}_{ s \in I}$, and sequence of iterations $\bf L$.
That is, for every ${\bf f} \in \mathbb{H}$ the frame reconstruction formula is
\begin{equation}\label{frdynreprAB}
{\bf f} = \sum_{s\in I}\sum_{j=0} ^{L_s}\langle A^{*j}{\bf f}, {{\bf f}}_{s} \rangle B^j {{\bf g}}_{s}.
\end{equation}
\end{thm}
\begin{proof}
The elements of the canonical dual frame of $F_{G}^{\bf L} (A)$ are computed as $S^{-1} \left(A^j {{\bf f}}_{s}\right)$, $s \in I$, $j=0,1,\ldots,L_s$. Let ${{\bf g}}_s = S^{-1}{{\bf f}}_{s}$, $s \in I$, then for all $j \geq 0$ we have
$$B^j {{\bf g}}_{s} = (S^{-1}AS)(S^{-1}AS)\ldots(S^{-1}AS){{\bf g}}_s = S^{-1}A^j \left(S {{\bf g}}_{s}\right) = S^{-1}\left( A^j {{\bf f}}_{s}\right),$$
and \eqref{frdynreprAB} follows by \eqref{framerepr} and \eqref{sampleseq}.
\end{proof}
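Theorem \ref{canondualframedysam} can be illustrated numerically (a sketch with randomly generated data, not part of the proof): for generic $A$ and ${\bf f}$, the canonical dual elements $S^{-1}A^j{\bf f}$ coincide with $B^j{\bf g}$ for $B=S^{-1}AS$ and ${\bf g}=S^{-1}{\bf f}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 3, 4

# A generic operator and seed vector; for such random choices the iterated
# system {f, Af, ..., A^L f} spans R^n with probability one.
A = rng.standard_normal((n, n))
f = rng.standard_normal(n)

# Frame matrix with columns A^j f, its frame operator S, and the conjugated
# dynamical operator B = S^{-1} A S with g = S^{-1} f, as in the theorem.
F = np.column_stack([np.linalg.matrix_power(A, j) @ f for j in range(L + 1)])
S = F @ F.T
Sinv = np.linalg.inv(S)
B = Sinv @ A @ S
g = Sinv @ f
# Then B^j g equals the canonical dual element S^{-1} A^j f for every j.
```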
It is a known fact in frame theory that an invertible operator preserves the frame inequality. It follows from this that under the action of an invertible operator, the dynamical structure is preserved:
\begin{thm}\label{123}
Let $\mathbb{H}_1$ and $\mathbb{H}_2$ be two separable Hilbert spaces. Let $G = \{ {\bf f}_s \}_{s \in I} \subset \mathbb{H}_1$, where $I$ is a countable index set. Let ${\bf L}=(L_s)_{s \in I}$, $L_s \geq 0$.
Let $A$ be an operator on $\mathbb{H}_1$ and let
$B: \mathbb{H}_1 \rightarrow \mathbb{H}_2$ be an invertible operator. Set ${\bf g}_s = B {\bf f}_s \in \mathbb{H}_2$, $s \in I$, and $C = B AB^{-1}$. TFAE:
\begin{itemize}
\item[(i)] The set
$\displaystyle F=\cup_{s\in I} \{A^j {\bf f}_{s}\}_{j=0}^{L_s}$ is a frame for $\mathbb{H}_1$,
\item[(ii)] The set $\displaystyle BF= \cup_{s\in I} \{C^j {\bf g}_{s}\}_{j=0}^{L_s}$ is a frame for $\mathbb{H}_2$.
\end{itemize}
\end{thm}
\begin{proof}
Let ${\bf g}_s = B{\bf f}_s \in \mathbb{H}_2$, $s \in I$, and set $C = B AB^{-1}$. Note that $C^j = B A^jB^{-1}$, due to $B^{-1}B=I$. For all $A^j {\bf f}_s \in F\subset \mathbb{H}_1$, we have \begin{equation}
BA^j{\bf f}_s =BA^j B^{-1}B{\bf f}_s =BA^j B^{-1} {\bf g}_s = C^j{\bf g}_s\in BF\subset \mathbb{H}_2. \end{equation} The operator $B$ is invertible, thus $BF$ is a frame if and only if $F$ is a frame, so (i) and (ii) are equivalent.
\end{proof}
\begin{com}
If $\mathbb{H}=\mathbb{H}_1=\mathbb{H}_2$, then Theorem \ref{123} is a generalization of the change of basis result. Notice that under the action of an invertible operator $B: \mathbb{H} \rightarrow \mathbb{H} $, the elements of a dynamical frame $F$ for $\mathbb{H}$ preserve the dynamical structure, i.e., $BF$ is also a dynamical frame for $\mathbb{H}$.
\end{com}
Fusion frames \cite{CK04} are frames which decompose into
a union of frames for subspaces of a Hilbert space $\mathbb{H}$.
Given a countable index set $I$, let $\mathcal{W}: = \{W_i \, | \, i \in I \}$ be a family of closed subspaces in $\mathbb{H}$. Let the orthogonal projection
onto $W_i$ be denoted by $P_i$. Then $\mathcal{W}$ is a \textit{fusion frame} for $\mathbb{H}$, if there exist $C, D >0$ such that
\[
C \Vert {\bf f} \Vert^2 \leq \sum_{i \in I} \Vert P_i({\bf f}) \Vert^2 \leq D\Vert {\bf f} \Vert^2 \;\; \text{ for all } {\bf f} \in \mathbb{H}. \]
Let $F_{i}=\{{\bf f}_{ij} \}_{ j \in J_i }$ be a frame for $W_i$, $i \in I$, with frame bounds $A_i$, $B_i$. If $0 < A = \inf_{i \in I} A_i \leq \sup_{i \in I} B_i = B < \infty$, then
\cite{CK04}:
\begin{equation}\label{cassazaff}
\hspace{-2mm} \cup_{i \in I} F_{i} \; \text{is a frame for $\mathbb{H}$ if and only if} \;
\{ W_i\}_{i \in I} \; \text{ is a fusion frame for $\mathbb{H}$.}
\end{equation}
If $F_i$ denotes the frame matrix formed by the frame vectors for each $W_i$, and $G_i$ contains the dual frame elements $\{ {\bf g}_{ij}\}_{j \in J_i}$, then the fusion frame operator $S$ is positive and invertible on $\mathbb{H}$, and for all ${\bf f} \in \mathbb{H}$, we have
\begin{equation}
\label{fusionfrmatrix}
{\bf f} = \sum_{i \in I} F_iG_i ^* {\bf f} = \sum_{i \in I} G_iF_i ^* {\bf f}.
\end{equation}
By \eqref{cassazaff} and \eqref{fusionfrmatrix}, a dynamical frame induces a fusion frame:
\begin{corollary}\label{fusiondyn}
Let
$F= \cup_{s\in I} \{A^j {{\bf f}}_{s} \}_{ j=0}^{ L_s}$ be a frame for $\mathbb{H}$. We introduce subspaces of $\mathbb{H}$ by
\begin{equation}
W_s = \overline{span \{A^j {{\bf f}}_{ s} \,:\, 0 \leq j \leq L_s \}},\;\; \text{ for all } s \in I.
\end{equation}
Then $\{W_s \}_{s \in I}$ is a fusion frame of $\mathbb{H}$.
\end{corollary}
\section{ Scalable frames generated by dynamical operators}\label{mainresults}
Now, we
study the scalability of frames of type \eqref{oursystem}.
A prior result on this topic (see Theorem 8 in \cite{AP}) has restrictive requirements, and delivers a tight frame only when the involved operator $A$ is a contraction in the sense that $A^j {\bf f} \rightarrow 0$ for all elements ${\bf f}$ of the Hilbert space under study.
Our research results illuminate the fact that - in finite dimensions - obtaining a tight or a scalable frame is possible in many cases.
If the operator $B$ occurring in Theorem \ref{123} is unitary, then the property of scalability is preserved, and we have:
\begin{corollary}\label{scalabilityinparalel}
Let $G = \{ {\bf f}_s \}_{s \in I} \subset \mathbb{H}$ and ${\bf L}=(L_s)_{s\in I} $, $L_s \geq 0$.
Let $A$ be a bounded operator on a separable Hilbert space $\mathbb{H}$.
If $B$ is a unitary operator on $\mathbb{H}$,
then
$\cup_{s \in I} \{A^j {\bf f}_{s}\}_{j=0}^{L_s}$ is a scalable frame if and only if
$\cup_{s \in I} \{C^j {\bf g}_{s}\}_{j=0}^{L_s}$ is a scalable frame, where $C =BA B^{*}$ and
${\bf g}_s = B{\bf f}_s$, $s \in I$. \end{corollary}
\begin{corollary}\label{generalSchurstatement}
Let $A, R$ be two operators on a separable Hilbert space $\mathbb{H}$, and let $U$ be a unitary operator on $\mathbb{H}$. Let $ {\bf f}_{s} \in \mathbb{H}$, and set ${\bf v}_s = U^* {\bf f}_{s}$ for all $s \in I$, where $I$ is a countable index set.
If $A=URU^*$, then TFAE:
\begin{itemize}
\item[(i)] $\displaystyle \cup_{s \in I} \{ A^j {\bf f}_{ s} \}_{j=0}^{L_s}$ is a scalable frame for $\mathbb{H}$,
\item[(ii)] $\displaystyle \cup_{s \in I} \{R^j {\bf v}_s \}_{j=0}^{L_s}$ { is a scalable frame for $\mathbb{H}$.}
\end{itemize}
\end{corollary}
Corollary \ref{generalSchurstatement} is relevant to the {\it Schur} decomposition: recall that any
operator $A$ on a finite-dimensional Hilbert space $\mathbb{H}$ has a (non-unique) Schur decomposition of type $A=URU^*$, where $U$ is a unitary $n \times n$ matrix, and $R$ is of Schur form.
When $A$ is normal, i.e., $AA^*=A^*A$, the Schur form $R$ can be taken to be diagonal, and the Schur decomposition reduces to the classical unitary diagonalization. In the next subsection, we exploit the simplicity of the unitary diagonalization of normal operators to give more explicit conditions on the normal operator $A$ in order to ensure scalability of a frame of type $F^{\bf L} _G(A)$.
\subsection{Normal operators }
Let $A$ be a normal operator on $\mathbb{H}$. By the spectral theorem, there exists a unitary operator $U$, and a diagonal operator $D$ such that
$A=UDU^*$; in fact, for each $j \in \mathbb Z_+$, $A^j = UD^jU^*$.
Now, let $\mathcal{G} = \{ {\bf f}_s \}_{s \in I}$ and set ${\bf v}_s=U^*{\bf f}_s$, $s \in I$.
Then for each $j \in \mathbb Z_+$,
\begin{equation}\label{connection}
A^j {\bf f}_s = UD^j U^* {\bf f}_s= UD^j {\bf v}_s = U(D^j {\bf v}_s )
\;\; \text{ for all } {\bf f}_s \in \mathcal{G}.
\end{equation}
Corollary \ref{generalSchurstatement} for normal operators reads as follows:
\begin{corollary}\label{connectSymDiagmulti}
Let $A$ be a normal operator on $\mathbb{H}$, and let $A=UDU^*$ be its unitary diagonalization.
Let $\{ {\bf f}_{s}\}_{ s \in I} \subset \mathbb{H}$, and set ${\bf v}_{s} = U^* {\bf f}_{s}$, $s \in I$.
TFAE
\begin{itemize}
\item[(i)] The set
$\displaystyle \cup_{ s \in I}\{ A^j {\bf f}_{s} | \; j=0,1,\ldots, L_s\}$ is a scalable frame for $\mathbb{H}$.
\item[(ii)] The set
$ \displaystyle \cup_{ s \in I} \{ D^j {\bf v}_{s} | \; j=0,1,\ldots, L_s\}$ is a scalable frame for $\mathbb{H}$.
\end{itemize}
\end{corollary}
We now restrict our attention to a finite dimensional Hilbert space $\mathbb{H} =\mathbb R^n$ or $\mathbb C^n$. Let us first point out that the frame scalability property is preserved under simple manipulations:
\begin{proposition} Let $F = \{{\bf f}_i\}_{i=1}^k $ be a scalable frame for $\mathbb{H}$, $\dim \mathbb{H} = n$.
Then the following are also scalable frames:
\begin{itemize}
\item[(i)] any column or row permutation of the frame matrix of $F$
\item[(ii)] \(\{ U {\bf f}_i\}_{i=1}^k\) for any unitary matrix $U$
\end{itemize}
\end{proposition}
Given a diagonal operator $D$ in a Hilbert space $\mathbb{H}$ with $\dim \mathbb{H} = n$, we first focus our attention on solving the \textit{one-vector problem}: we look for conditions on $D$, and an unknown vector ${\bf v} \in \mathbb{H}$, which generate a scalable frame for $\mathbb{H}$ of type \eqref{oursystem}.
Let $L\geq 0$, let $D$ denote a diagonal $n\times n$ matrix, with diagonal entries $a_1,\ldots,a_n$, and let ${\bf v} = (x(1), \ldots, x(n))^T \in \mathbb{H}$.
Let $ w_j \in \mathbb R_+$, $0\leq j \leq L$, be scaling coefficients such that $ F_W= \{ w_j D^j {\bf v} \}_{ j=0} ^{ L} $ is a Parseval frame for $\mathbb{H}$, i.e.,
\begin{equation}\label{wantedS}
F_W F_W^*= I.
\end{equation}
Note that \eqref{wantedS} is equivalent to the system of equations
\begin{eqnarray}\label{wanteddiagscalable}
|x(i)|^2 \sum_{k=0} ^L w_k ^2 | a_i |^{2k} & =& 1, \quad i=1,\ldots, n; \nonumber \\
\sum_{k=0} ^L w_k ^2 \left( a_i\bar{a_j } \right)^k& =& 0, \quad i \neq j.
\end{eqnarray}
There exist real solutions of \eqref{wanteddiagscalable} when $n \leq 2$. For instance, when $\mathbb{H} = \mathbb R^2$, the choice ${\bf v} = (0.5, 0.5)^T$ and $D= diag(1, -1)$ generates the set $\{{\bf v}, D{\bf v}, D^2 {\bf v} , D^3{\bf v}\}$, which is a Parseval frame for $\mathbb R^2$.
However, when $\mathbb{H} = \mathbb R^3$, the equations $\sum_{k=0} ^L w_k ^2 \left( a_ia_j \right)^k = 0, \; i \neq j$, force the products $a_1a_2$, $ a_1a_3$, and $a_2a_3$ to be negative (assuming not all $w_k$ vanish). This is impossible, since $(a_1a_2)(a_1a_3)(a_2a_3) = (a_1a_2a_3)^2 \geq 0$. The same argument applies in any dimension $n \geq 3$, and thus we have:
\begin{thm}
Let ${\bf v} \in \mathbb R^n$, and $a_1, \dots, a_n \in \mathbb R$.
If $n \ge 3$, then no normal operator on $\mathbb R^n$ with eigenvalues $a_1, \dots, a_n$ can generate a strictly scalable frame from ${\bf v}$.
\end{thm}
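The two-dimensional example preceding the theorem is easy to confirm numerically; the short check below (an illustration only) forms the matrix $F$ with columns $D^j{\bf v}$, $j=0,\ldots,3$, and tests $FF^T = I$:

```python
# v = (0.5, 0.5)^T and D = diag(1, -1): the four iterates D^j v, j = 0..3,
# should form a Parseval frame for R^2, i.e. F F^T = I.
v = [0.5, 0.5]
a = [1.0, -1.0]                                  # diagonal entries of D
cols = [[a[i] ** j * v[i] for i in range(2)] for j in range(4)]
S = [[sum(c[i] * c[k] for c in cols) for k in range(2)] for i in range(2)]
assert all(abs(S[i][k] - (1.0 if i == k else 0.0)) < 1e-12
           for i in range(2) for k in range(2))
```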
In contrast to the real case, there exists a solution to the one-vector problem in $\mathbb C^n$, involving the $k$-th roots of unity:
\begin{ex}
Let $\gamma = e^{2\pi i/ k}$, $k \ge n$.
Then the following dynamical operator $A$ and the vector ${\bf v}$
\begin{equation*}
A = \left( \begin{array}{ccc}
1& 0& 0 \\
0 & \ddots & 0 \\
0 & 0 & \gamma^{n-1}
\end{array}\right), \quad
{\bf v} = \frac{1}{\sqrt{k}} \left( \begin{array}{c}
1 \\
\vdots \\
1
\end{array}\right)
\end{equation*}
generate the harmonic tight frame $F_{{\bf v}}^{k-1}$.
\end{ex}
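For concreteness, the example can be tested numerically with $n=3$ and $k=4$ (so $\gamma = i$); the script below, an illustration with these specific values, checks that the $k$ columns $A^j{\bf v}$ satisfy $FF^{*}=I$:

```python
import cmath

n, k = 3, 4                                      # k >= n, gamma = e^{2 pi i / k}
gamma = cmath.exp(2j * cmath.pi / k)
v = [1 / k ** 0.5] * n                           # v = (1, ..., 1)^T / sqrt(k)
# column j is A^j v, where A = diag(1, gamma, ..., gamma^{n-1})
cols = [[gamma ** (m * j) * v[m] for m in range(n)] for j in range(k)]
S = [[sum(c[i] * c[m].conjugate() for c in cols) for m in range(n)]
     for i in range(n)]
assert all(abs(S[i][m] - (1 if i == m else 0)) < 1e-12
           for i in range(n) for m in range(n))
```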
Next, we consider the multi-generator case:
By \eqref{defscalablerepr}, the scaling coefficients $ w_{s,j}$ related to vectors $D^j {\bf v}_s$, $0\leq j \leq L_s$, where ${\bf v}_s =(x_s(1), \ldots, x_s(n))^T$, $1\leq s \leq p$, need to be solutions to the following system of equations:
\begin{equation}\label{wantedScalDiagmultivrs}
\left\{ \begin{array}{ll}
\sum_{s=1} ^p |{x_{s}(i)}|^2 \left[w_{s,0}^2+ w_{s,1}^2 |{a_i}|^2 + \ldots + w_{s,L_s}^2 |{a_i}|^{2L_s} \right] = 1, \\
\sum_{s=1} ^p x_{s} (i) \bar{x_{s}} (j) \left[ w_{s,0}^2 + w_{s,1}^2 a_i \bar{a_j} + \ldots + w_{s,L_s}^2(a_i \bar{a_j})^{L_s} \right] = 0, \end{array}\right. \\
\end{equation}
for all $i,j=1,\ldots, n$, $i \neq j$.
\begin{proposition}\label{multiscalablediagonalgen}
Let $D$ be a diagonal $n\times n$ matrix with
diagonal entries $a_1,\ldots, a_n \in \mathbb C$, and let ${\bf v}_s = (x_{s}(1), \ldots, x_{s}(n))^T \in \mathbb C^n$, $s \in \{1,\cdots, p\}$, $p \geq 1$.
TFAE:
\begin{itemize}
\item[(i)] The set $\cup_{s=1} ^p \{ D^j {\bf v}_s \; | \; j =0,1,\ldots, L_s\}$ is a scalable frame for $\mathbb{H}$
\item[(ii)] There exist scaling coefficients $w_{s,0}, w_{s,1},\ldots, w_{s,L_s}$, $1\leq s \leq p$, which satisfy conditions \eqref{wantedScalDiagmultivrs}. \end{itemize}
\end{proposition}
By Corollary \ref{connectSymDiagmulti} and Proposition \ref{multiscalablediagonalgen}, the following result holds true for a finite dimensional Hilbert space $\mathbb{H}$:
\begin{thm}\label{symmetriccaseequivalence}
Let $A = UDU^*$ be a normal $n\times n$ matrix, where $U$ is unitary, and $D$ is diagonal, with diagonal entries $a_1, \ldots, a_n \in \mathbb C$.
Let ${\bf f}_s \in \mathbb{H}$, and set ${\bf v}_{s} = U^* {\bf f}_s = (x_s(1), \ldots, x_s(n))^T$, $1\leq s \leq p$.
The set $\cup_{s=1}^p \{ A^j {\bf f}_{s} \; | \; 0\leq j \leq L_s \}$ is a scalable frame of $\mathbb{H}$ if and only if there exists a positive solution $w_{s,0}, w_{s,1},\ldots, w_{s,L_s}$, $1\leq s \leq p$, to the system of equations \eqref{wantedScalDiagmultivrs}, defined with respect to $a_1, \ldots, a_n$ and $x_s(1), \ldots, x_s(n)$, $1\leq s \leq p$.
\end{thm}
\begin{com}
The problem of finding specific conditions under which the set in item (ii) in Corollary \ref{generalSchurstatement} is a scalable frame for $\mathbb{H}$ is still open for operators which do not possess a unitary diagonalization.
For this reason, we further study several operators with special structures, such as block-diagonal operators (Section \ref{blockdiagOpsubsection}) and companion operators (Section \ref{compansection}).
\end{com}
\section{Block-diagonal operators } \label{blockdiagOpsubsection}
In this section,
we explore the case when the operator $A$ is of block-diagonal form. Block-diagonal operators allow us to give a partial answer to (Q1) when no unitary diagonalization is available.
In subsection \ref{blocks} we give examples of
operators which generate scalable frames in Hilbert spaces of dimension $2$ and $3$. Since $\mathbb{H}$ with $\dim \mathbb{H} = n$ can be decomposed as a direct sum of subspaces of dimensions $2$ and $3$, the examples in subsection \ref{blocks} provide infinitely many block-diagonal operators which generate scalable frames for $\mathbb{H}$.
\begin{thm}\label{stackScale}
Let $F_s$ be a scalable frame for $\mathbb{H}_{s}$, with $\dim \mathbb{H}_s = n_s$, $s =1, \ldots p$, and let
\begin{equation}\label{scal333}
G = \left( \begin{array}{ccc}
F_1 & 0 & 0 \\
0 & \ddots & 0 \\
0 & 0 & F_p
\end{array}\right). \end{equation}
Then $G$ is a scalable frame for
$\mathbb{H}= \mathbb{H}_1 \oplus \ldots \oplus \mathbb{H}_p$.
\end{thm}
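A quick numerical illustration of Theorem \ref{stackScale}: stacking a scaled Mercedes-Benz frame (Parseval for $\mathbb R^2$) with the trivial Parseval frame $\{1\}$ for $\mathbb R^1$ yields a Parseval frame for $\mathbb R^3$. The specific frames are our own choices:

```python
import math

s = math.sqrt(2.0 / 3.0)              # scaling making the MB frame Parseval
h = math.sqrt(3.0) / 2.0
F1 = [[s * 1.0, s * -0.5, s * -0.5],  # rows of the scaled Mercedes-Benz frame
      [s * 0.0, s * h,    s * -h]]
# Block-diagonal stacking with F2 = (1), a Parseval frame for R^1
G = [F1[0] + [0.0],
     F1[1] + [0.0],
     [0.0, 0.0, 0.0, 1.0]]
S = [[sum(G[i][k] * G[j][k] for k in range(4)) for j in range(3)]
     for i in range(3)]
assert all(abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
```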
\begin{definition}\label{wellembededvr}
Let $A_s : \mathbb{H}_s \rightarrow \mathbb{H}_s$ be an operator on $\mathbb{H}_s$, with $\dim \mathbb{H}_s = n_s$, $1\leq s \leq p$.
Let $A : \mathbb{H} \rightarrow \mathbb{H} $ be a block-diagonal operator on $\displaystyle \mathbb{H}= \oplus_{s=1}^p \mathbb{H}_s$, constructed as follows:
\begin{equation}\label{blockdiagdynamoperator}
A = \left( \begin{array}{ccc}
A_1 & \ldots & 0\\
\vdots & \ddots & \vdots \\
0 & \ldots & A_p
\end{array} \right).
\end{equation}
Let ${\bf v} \in \mathbb{H}_s$ for some $1\leq s \leq p$. We say that ${\bf v}$ is {\it well-embedded} in ${\bf f} \in \mathbb{H}$ with respect to operator \eqref{blockdiagdynamoperator} if
\begin{equation}
\begin{cases} {\bf f}(j) = {\bf v}(i), &\mbox{if } j = n_1 +\ldots + n_{s-1} +i\\
{\bf f}(j) = 0, & \mbox{otherwise.} \end{cases}
\end{equation}
\end{definition}
Whenever ${\bf v}$ is well-embedded in ${\bf f}$ with respect to \eqref{blockdiagdynamoperator}, we have
$$A{\bf f}=\left( \begin{array}{c}
0 \\
A_s {\bf v} \\
0
\end{array} \right). $$
\begin{thm}\label{blockresultbig} Let $A_s : \mathbb{H}_s \rightarrow \mathbb{H}_s$ be an operator on $\mathbb{H}_s$, with $\dim \mathbb{H}_s = n_s$, $1\leq s \leq p$.
Let $A : \mathbb{H} \rightarrow \mathbb{H} $ be a block-diagonal operator on $\displaystyle \mathbb{H}= \oplus_{s=1}^p \mathbb{H}_s$, constructed as in \eqref{blockdiagdynamoperator}.
Let ${\bf f}_{s,1}, \ldots, {\bf f}_{s,m_s} \in \mathbb{H}$, $1\leq s \leq p$, be the vectors in which ${\bf v}_{s, 1}, \ldots, {\bf v}_{s,m_s} \in \mathbb{H}_s$, $1\leq s\leq p$, are well-embedded.
\begin{equation}\label{bigguy}
\text{The set} \;\;\;\; \; \bigcup_{s=1}^p \{ A^j {\bf f}_{s, k} \;\; | \;\; 1\leq k \leq m_s \}_{j=0}^{ L_{s,k} } \;\;\;\; \; \;\;\;\; \; \;\;\;\; \;
\end{equation}
{ is a (scalable) frame of $\mathbb{H}$}
if and only if $\; \{ A_s^j {\bf v}_{s,k} \; | \; 1\leq k\leq m_s \}_{j=0}^{ L_{s,k} } $ are (scalable) frames of $\ \mathbb{H}_s $ for all $1\leq s \leq p$.
\end{thm}
\begin{proof}
We assume that all $m_s =1$, i.e., ${\bf f}_{s,k} = {\bf f}_s$, ${\bf v}_{s,k} = {\bf v}_s$, and $ L_{s,k} = L_s$, $1\leq s \leq p$, to simplify the presentation of the proof.
The matrix representation of $ \cup_{s=1}^p \{ A^j {\bf f}_s \}_{j=0}^{L_s}$ with scaling coefficients
$w_{s, j}$, $0\leq j \leq L_s$ for each $s=1,\ldots, p$ is of block-diagonal form:
\begin{equation*}
F=\left( \begin{array}{ccccccc}
w_{1,0} {\bf v}_{1} & \ldots & w_{1, L_1}A_1^{L_1} {\bf v}_{1}& &&& \\
&&&\ddots &&& \\
&&& & w_{p, 0}{\bf v}_{p} & \ldots & w_{p, L_p} A_p^{L_p}{\bf v}_{p}
\end{array}\right) . \end{equation*}
If $F$ is a tight frame, then row vectors of $F$ are orthogonal and have the same norm and so does
$ (w_{s, 0} {\bf v}_{s} \ldots w_{s, L_s} A_s^{L_s} {\bf v}_{s})$ for each
$s =1, \ldots, p$. This implies that the system $ \{ A_s^j {\bf v}_{s}\}_{j=0}^{L_s}$ is a scalable frame for $\mathbb{H}_s$ for all $1\leq s \leq p$.
Now, suppose that for each $1\leq s \leq p$, the system $ \{ A_s^j {\bf v}_{s}\}_{j=0}^{L_s}$ is a scalable frame for $\mathbb{H}_s$. Then, there exist some scaling coefficients $w_{s,j}$, $1\leq s \leq p$, $0\leq j\leq L_s$, such that $ \{ w_{s,j}A_s^{j} {\bf v}_{s} \,|\, 0\leq j \leq L_s\}$ is a Parseval frame for each $s=1, \ldots, p$. The resulting scaled matrix $F$ then has orthonormal rows, so the system \eqref{bigguy} is a scalable frame for $\mathbb{H}$.
\end{proof}
\subsection{Scalable dynamical frames for $\mathbb R^2$ and $\mathbb R^3$}\label{blocks}
For the classification of a tight frame in this section, we use the notion of the {\it diagram vector}.
For any \({\bf f} \in\mathbb{R}^n\), we define the diagram vector associated with \({\bf f}\), denoted \(\tilde{{\bf f}}\), by
\begin{equation*}
\tilde{{\bf f}} =
\frac{1}{\sqrt{n-1}}
\left( \begin{array}{c}
{\bf f}(1)^2-{\bf f}(2)^2\\ \vdots \\ {\bf f}(n-1)^2 -{\bf f}(n)^2 \\
\sqrt{2n}{\bf f}(1){\bf f}(2) \\ \vdots \\ \sqrt{2n}{\bf f}(n-1){\bf f}(n)
\end{array} \right)
\in\mathbb{R}^{n(n-1)},
\end{equation*}
where the difference of squares
${\bf f}(i)^2- {\bf f}(j)^2$ and the
product \({\bf f}(i){\bf f}(j)\) occur exactly once for \(i < j, \ i = 1, 2, \cdots, n-1.\)
Analogously, for any vector \({\bf f}\in\mathbb{C}^n\), we define the diagram vector associated with \({\bf f}\), denoted \(\tilde{{\bf f}}\), by
\begin{equation*}
\tilde{{\bf f}} =
\frac{1}{\sqrt{n-1}}
\left( \begin{array}{c}{\bf f}(1) \overline{{\bf f}(1)}-{\bf f}(2)\overline{{\bf f}(2)} \\ \vdots \\ {\bf f}(n-1)\overline{{\bf f}(n-1)}-{\bf f}(n)\overline{{\bf f}(n)} \\
\sqrt{n}{\bf f}(1) \overline{{\bf f}(2)} \\ \sqrt{n} \overline{{\bf f}(1)} {\bf f}(2) \\ \vdots \\ \sqrt{n}{\bf f}(n-1)\overline{{\bf f}(n)}
\\ \sqrt{n} \overline{{\bf f}(n-1)} {\bf f}(n)
\end{array} \right) \in\mathbb{C}^{3n(n-1)/2},
\end{equation*}
where the difference of the form
${\bf f}(i) \overline{{\bf f}(i)} - {\bf f}(j) \overline{{\bf f}(j)}$ occurs exactly once for \(i < j, \ i = 1, 2, \cdots, n-1\) and the
product of the form \({\bf f}(i) \overline{{\bf f}(j)} \) occurs exactly once for \(i \neq j.\)
The diagram vectors give us the following characterizations of tight frames and scalable frames:
\begin{thm}
\label{charTight}\cite{ CKLMNS13, CKLMNPS14}
Let \(\{{\bf f}_i\}_{i=1}^k\) be a sequence of vectors in \( \mathbb{H} \), not all of which are zero. Then \(\{{\bf f}_i\}_{i=1}^k\) is a tight frame if and only if \(\sum_{i=1}^k\tilde{{\bf f}_i}=0\).
\end{thm}
\begin{thm}\label{charScale}\cite{CKLMNS13, CKLMNPS14}
Let \(\{{\bf f}_i\}_{i=1}^k\) be a unit-norm frame for $\mathbb{H}$ and $c_1, \cdots, c_k$ be nonnegative numbers, which are not all zero.
Let $\tilde{G}$ be the Gramian associated to the diagram vectors \(\{ \tilde{{\bf f}}_i\}_{i=1}^k\) .
Then $\{c_i {\bf f}_i\}_{i=1}^k $ is a tight frame for $\mathbb{H}$ if and only if
${\bf c} = \left( c_1^2, \ldots, c^2_k \right)^T$
belongs to the null space of $\tilde{G}$.
\end{thm}
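To illustrate Theorems \ref{charTight} and \ref{charScale} in $\mathbb R^2$, where the diagram vector of ${\bf f}$ reduces to $({\bf f}(1)^2-{\bf f}(2)^2,\, 2{\bf f}(1){\bf f}(2))^T$, the following check uses the unit-norm Mercedes-Benz frame (our choice of test frame); since it is already tight, the all-ones vector should lie in the null space of the Gramian of its diagram vectors:

```python
import math

def diagram2(f):
    # diagram vector in R^2: (f1^2 - f2^2, 2 f1 f2); the 1/sqrt(n-1) factor is 1
    return [f[0] ** 2 - f[1] ** 2, 2.0 * f[0] * f[1]]

h = math.sqrt(3.0) / 2.0
frame = [[1.0, 0.0], [-0.5, h], [-0.5, -h]]   # unit-norm Mercedes-Benz frame
tilde = [diagram2(f) for f in frame]
# Theorem charTight: the diagram vectors of a tight frame sum to zero
total = [sum(t[m] for t in tilde) for m in range(2)]
# Theorem charScale: c = (1, 1, 1)^T lies in the null space of the Gramian
Gt = [[sum(u[m] * w[m] for m in range(2)) for w in tilde] for u in tilde]
img = [sum(Gt[i][j] for j in range(3)) for i in range(3)]
assert all(abs(x) < 1e-12 for x in total + img)
```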
Let $\{{\bf e}_1, \ldots, {\bf e}_n\}$ be the standard orthonormal basis in $\mathbb R^n$ or $\mathbb C^n$.
\begin{proposition}\label{niceR2example}
Let $A= \left( \begin{array}{cc}
a & c \\
b & d
\end{array}\right)$ be an operator in $\mathbb R^2$, where $a, b, c, d$ are not all zeros.
If $a=0$ and $b \neq 0$, then $F_{{\bf e}_1}^1 $ is a scalable frame for $\mathbb R^2$.
\end{proposition}
\begin{proof}
If $a=0$ and $b\neq 0$, then $F_{{\bf e}_1}^1 = \{ (1,0)^T, (0,b)^T\}$. Since the two vectors in $F_{{\bf e}_1}^1 $ are orthogonal, $F_{{\bf e}_1}^1 $ is a strictly scalable frame for $\mathbb R^2$.
\end{proof}
We highlight that, when $b=d \neq 0$ and $c=-d/4$ in Proposition \ref{niceR2example}, the matrix $A$ is non-diagonalizable yet generates a scalable frame for $\mathbb R^2$.
\begin{proposition}\label{2tight}
Let $a, b, c, d $ be real numbers such that
$a \neq -d$, \[b= \frac{\pm 1}{a+d}\sqrt{\frac{a^2(a+d)^2 + (a+d)^2 +a^2}{1+(a+d)^2}}, \text{ and }\]
\[ c = \mp a(ad+a^2+1) \sqrt{\frac{1+(a+d)^2}{(a+d)^2 +a^2(a+d)^2 +a^2}}.\]
Then the operator $A= \left( \begin{array}{cc}
a & c \\
b & d
\end{array}\right)$ in $\mathbb R^2$ generates a tight frame
$$F_{{\bf e}_1}^2 =\left( \begin{array}{ccc}
1 & a & a^2 + bc \\
0 & b & ab+bd
\end{array}\right) . $$
\end{proposition}
\begin{thm}\label{2scale}
Let $a, b, c, d $ be real numbers such that
$a>0$ and $abcd \neq 0$. Then the following two statements are equivalent:
\begin{enumerate}
\item $0< -\frac{ac}{bd} <1$.
\item The system
$$ F= \left( \begin{array}{ccc}
1& a & c \\
0 & b & d
\end{array}\right)$$
is a strictly scalable frame for $\mathbb R^2$.
\end{enumerate}
\end{thm}
\begin{proof}
We first note that the condition
$0< -\frac{ac}{bd} <1$
is equivalent to
($a>0, \, -\frac{b}{c}> \frac{a}{d} >0$) or ($a>0, \, -\frac{d}{a}> \frac{c}{b} >0$). \\
$(1)\Rightarrow(2)$: \quad
The conditions $a>0, \, -\frac{b}{c}> \frac{a}{d} >0$ imply that
$$ d>0, \, ad-bc>0, \, \frac{ac}{bd} > -1$$
and the conditions $a>0, \, -\frac{d}{a}> \frac{c}{b} >0$
imply that $$ d<0, \, ad-bc < 0, \, \frac{ac}{bd} > -1. $$
Then
$$ x=\sqrt{ \frac{ac}{bd}+1}, \, y=\sqrt{ \frac{c}{-b(ad-bc)}}, \, z=\sqrt{ \frac{a}{d(ad-bc)}} $$
are positive numbers and
$$ F= \left( \begin{array}{ccc}
x & ya & zc \\
0 & yb & zd
\end{array}\right)$$
is a Parseval frame for $\mathbb R^2$. \\
$(1)\Leftarrow(2)$: \quad
If the system $F$ is strictly scalable, then the normalized system
$$F'=\left( \begin{array}{ccc}
1 & \frac{a}{\sqrt{a^2 + b^2}} & \frac{c}{\sqrt{c^2+d^2}} \\
0 & \frac{b}{\sqrt{a^2 + b^2}} & \frac{d}{\sqrt{c^2+d^2}}
\end{array}\right)$$
is a unit-norm scalable frame. By Theorem \ref{charScale}, the Gramian matrix of the diagram vectors of $F'$ has a vector with positive entries in its null space, which yields:
\begin{equation}\label{2x3e1}
\frac{a^2cd-abc^2+abd^2-b^2cd}{ab(c^2+d^2)}>0,
\end{equation}
\begin{equation}\label{2x3e2}
\frac{-cd(a^2+b^2)}{ab(c^2+d^2)}>0.
\end{equation}
Inequality (\ref{2x3e2}) implies that $ -\frac{ac}{bd}>0$.
Next we show that $-\frac{ac}{bd}<1$. \\
In case $b>0$, inequality (\ref{2x3e1}) implies that
$$ a^2cd+abd^2 > bc ( ac +bd).$$
If ($c>0$ and $ac +bd \ge 0$) or ($c<0$ and $ac +bd \le 0$), then $ a^2cd+abd^2 >0$, which implies $-\frac{ac}{bd}<1$.
If $c>0$ and $ac +bd < 0$, then $ ac < -bd$, which implies $1 < -\frac{bd}{ac}$ since $ac>0$.
Similarly, if $c<0$ and $ac +bd > 0$, then $ ac > -bd$, which implies $1 < -\frac{bd}{ac}$ since $ac<0$.
This is equivalent to $-\frac{ac}{bd}<1$. \\
In case $b<0$, suppose that $-\frac{ac}{bd} \ge 1$. Multiply both sides by the positive number $-abd^2$. On one hand we have
$ a^2cd \ge -abd^2 $ and on the other hand, from inequality (\ref{2x3e1}), we have
$a^2cd-abc^2 <-abd^2+b^2cd$. Since $ a^2cd \ge -abd^2 $, we have
$-abd^2-abc^2 <-abd^2+b^2cd$, which implies $-\frac{ac}{bd} < 1$. This contradicts our assumption.
\end{proof}
This observation provides the conditions for a dynamical operator $A$ in $\mathbb R^2$ to generate a scalable frame $F_{{\bf e}_1}^2 $ for $\mathbb R^2$.
\begin{corollary}\label{2x3scale}
Let $a, b, c, d $ be real numbers such that
$a>0$ and $0< -\frac{a(a^2+bc)}{b^2(a+d)}<1$.
Then the operator $A= \left( \begin{array}{cc}
a & c \\
b & d
\end{array}\right)$ generates a strictly scalable frame
$$F_{{\bf e}_1}^2 =\left( \begin{array}{ccc}
1 & a & a^2 + bc \\
0 & b & ab+bd
\end{array}\right) .$$
\end{corollary}
If $2 \sin^2(\omega)-1 >0$, then the operator
$$A= \left( \begin{array}{cc}
\cos(\omega) & -\sin(\omega) \\
\sin(\omega) & \cos(\omega)
\end{array}\right)$$
satisfies the condition of Theorem \ref{2scale}. Consequently we have:
\begin{ex}
Let
$$A= \left( \begin{array}{cc}
\cos(\omega) & -\sin(\omega) \\
\sin(\omega) & \cos(\omega)
\end{array}\right),$$
where $2 \sin^2(\omega)-1 >0$.
Then the operator $A$ generates a strictly scalable frame
$$F_{{\bf e}_1}^2 = \left( \begin{array}{ccc}
1& \cos(\omega) & \cos(2\omega) \\
0& \sin(\omega) & \sin(2\omega)
\end{array}\right). $$
\end{ex}
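For a concrete instance, take $\omega = \pi/3$ (so that $2\sin^2\omega - 1 = 1/2 > 0$); for this particular angle the common scaling $\sqrt{2/3}$ on all three vectors happens to work, a choice we verified by hand rather than a general fact:

```python
import math

w = math.pi / 3
assert 2 * math.sin(w) ** 2 - 1 > 0              # hypothesis of the example
cols = [[1.0, 0.0],
        [math.cos(w), math.sin(w)],              # A e1
        [math.cos(2 * w), math.sin(2 * w)]]      # A^2 e1
t = math.sqrt(2.0 / 3.0)                         # one admissible common scaling
S = [[sum((t * c[i]) * (t * c[j]) for c in cols) for j in range(2)]
     for i in range(2)]
assert all(abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```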
\begin{proposition}\label{2x4scale}
Let $a, b, c, d $ be real numbers such that $abcd<0$.
Then the system
$$ F= \left( \begin{array}{cccc}
1 & 0& a & c \\
0 & 1 & b & d
\end{array}\right)$$
is a strictly scalable frame for $\mathbb R^2$.
\end{proposition}
\begin{proof}
We define
$$ p= \sqrt{ \left( \frac{acd}{b} -c^2 \right) s^2 + 1 }, \quad q= \sqrt{ \left(\frac{bcd}{a} -d^2\right) s^2 + 1}, \quad r = s\sqrt{-\frac{cd}{ab}}. $$
For any $a, b, c, d$ such that $abcd<0$, one can select $s>0$ small enough that both radicands are positive, so that $p>0$ and $q>0$. With these choices of $p, q, r, s$, the off-diagonal entry $r^2ab + s^2cd$ vanishes, and
the system
$$ F= \left( \begin{array}{cccc}
p &0 & ra & sc \\
0 & q & rb & sd
\end{array}\right)$$
is a Parseval frame.
\end{proof}
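A numerical spot check of the proposition with $a=b=c=1$, $d=-1$ (so $abcd<0$) and $s=1/2$; here we take $p^2=(acd/b-c^2)s^2+1$, $q^2=(bcd/a-d^2)s^2+1$ and $r=s\sqrt{-cd/(ab)}$, the particular choice of scalings for which the off-diagonal entry $r^2ab+s^2cd$ vanishes:

```python
import math

a, b, c, d, s = 1.0, 1.0, 1.0, -1.0, 0.5         # abcd = -1 < 0
r = s * math.sqrt(-c * d / (a * b))
p = math.sqrt((a * c * d / b - c ** 2) * s ** 2 + 1)
q = math.sqrt((b * c * d / a - d ** 2) * s ** 2 + 1)
F = [[p, 0.0, r * a, s * c],
     [0.0, q, r * b, s * d]]
S = [[sum(F[i][k] * F[j][k] for k in range(4)) for j in range(2)]
     for i in range(2)]
assert all(abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```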
\begin{corollary}\label{2x4scaleCor}
Let $a, b$ be real numbers such that $a+b^2<0$.
Then the operator $A= \left( \begin{array}{cc}
0 & a \\
1 & b
\end{array}\right)$ generates a strictly scalable frame $F_{{\bf e}_1}^3 $ for $\mathbb R^2$.
\end{corollary}
We next explore when a dynamical operator $A$ generates a scalable frame $F_{{\bf e}_1}^3 $ in $\mathbb R^3$.
We first observe the following systems in $\mathbb R^3$ when $ab \neq 0$:
\begin{equation}\label{twosystemsdynscal}
F1 = \left( \begin{array}{ccccc}
1 & 0 & 0 & x & y \\
0 & 1 & 0 & a& c \\
0 & 0 & 1& b & d \\
\end{array}\right), \quad
F2 = \left( \begin{array}{cccc}
1 & 0 & x & y \\
0 & 1 & a& c \\
0 & 0 & b & d \\
\end{array}\right).
\end{equation}
If $F_1$ or $F_2$ is a tight frame, by Theorem \ref{charTight}, we have
\begin{equation} \label{onlytwo}
\begin{array} {ccc}
ax + cy &=& 0\\
bx+dy &=&0 \\
ab+cd&=& 0,
\end{array}
\end{equation}
which implies that $x=y=0$ (since $ab \neq 0$ and $ab+cd=0$ force $ad-bc \neq 0$).
That is, the last two vectors have only two nonzero elements in the same entries.
We note that if the first column of $A$ is ${{\bf e}}_1$, then the system $F_{{\bf e}_1}^3 $ cannot be a frame for $\mathbb R^3$.
Let
\begin{equation}\label{genmatrR3}
A = \left( \begin{array}{ccc}
0 & a & x\\
1 & b & y \\
0 & c & z
\end{array}\right).
\end{equation}
Then the corresponding system $F_{{\bf e}_1}^3 $ has the following entries:
$$
F_{{\bf e}_1}^3 = \left( \begin{array}{cccc}
1 & 0 & a & ab+cx\\
0 & 1 & b & b^2+cy + a \\
0 & 0 & c & bc +cz
\end{array}\right).
$$
By (\ref{onlytwo}), for the system $F_{{\bf e}_1}^3 $ to be a strictly scalable frame, we need to assume
$ a=ab+cx=0$ or $b = b^2+cy + a =0$.
We first consider the case $ a=ab+cx=0$.
\begin{proposition}\label{prop7import}
Let $a, b, c, d $ be real
numbers such that
$a>0$ and $0< -\frac{a(a^2+bc)}{b^2(a+d)}<1$.
Then the operator
\begin{equation}\label{nonhermandherm}
A = \left( \begin{array}{ccc}
0 & 0 & 0\\
1 & a & c \\
0 & b & d
\end{array}\right)
\end{equation}
generates a strictly
scalable frame
\begin{equation}\label{26}
F_{{\bf e}_1}^3 = \left( \begin{array}{cccc}
1 & 0& 0 & 0\\
0 & 1 & a & a^2 + bc \\
0 & 0& b & ab+bd
\end{array}\right).
\end{equation}
\end{proposition}
\begin{proof}
This follows from Theorem \ref{stackScale} and Theorem \ref{2scale}.
\end{proof}
When $b = b^2+cy + a =0$, we have
$$ A = \left( \begin{array}{ccc}
0 & a & x\\
1 & 0 & -a/c \\
0 & c & z
\end{array}\right).
$$
By applying row and column permutations, $F_{{\bf e}_1}^3 $ can be written in the same form as (\ref{26}).
Similarly, the following operator, with a suitable choice of the second and third column:
\begin{equation}\label{genmatrR3b}
A = \left( \begin{array}{ccc}
0 & a & x\\
0 & b & y \\
1 & c & z
\end{array}\right)
\end{equation}
generates a scalable frame $F_{{\bf e}_1}^3 $, which also can be written in the same form as (\ref{26}).
We note that any tight or scalable frame in $\mathbb R^n$ with $n$ frame vectors is an orthogonal basis. A trivial example of a scalable dynamical frame is the following:
\begin{ex}\label{examplelemma} Let
\begin{equation}\label{companion}
A = \left( \begin{array}{cc}
0 & 1 \\
I_{n-1}& 0 \\
\end{array}\right) . \end{equation}
Then the sequence $F_{{\bf e}_1} ^ L $ is a scalable frame of $\mathbb R^n$ if and only if $L \geq n-1$.
\end{ex}
For instance, when $n= L=3$, the resulting frame is $F_{{\bf e}_1}^3 =\{ {\bf e}_1, {\bf e}_2,{\bf e}_3, {\bf e}_1\}$, and the scaled frame $ \{ {2}^{-1/2}{\bf e}_1, {\bf e}_2,{\bf e}_3,2^{-1/2}{\bf e}_1\}$ is a Parseval frame.
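The $n=L=3$ case can be confirmed in a few lines; the weights $2^{-1/2}$ on the two copies of ${\bf e}_1$ are exactly those given above:

```python
import math

w = [1 / math.sqrt(2), 1.0, 1.0, 1 / math.sqrt(2)]    # scalings
cols = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]             # e1, e2, e3, e1
S = [[sum(w[k] ** 2 * cols[k][i] * cols[k][j] for k in range(4))
      for j in range(3)] for i in range(3)]
assert all(abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
```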
Notice that \eqref{companion} is an example of a companion \cite{HJ85} operator. It makes sense to explore the conditions under which a companion operator generates a scalable frame.
\section{Companion operators and generalizations}\label{compansection}
Let $a_1, \ldots, a_n \in \mathbb R$, not all zero. Then
\begin{equation}\label{companiondef}
A = \left( \begin{array}{c|c}
0 & a_1 \\
\hline
& a_2\\
I_{n-1}& \vdots \\
& a_n\\
\end{array}\right) \end{equation}
is called a companion operator \cite{HJ85}.
\begin{proposition}
Let the dynamical operator $A$ be a companion operator \eqref{companiondef} in $\mathbb R^n$. Then we have
\begin{enumerate}
\item $ F_{{\bf e}_1}^{n-1} = I. $
\item for any orthogonal matrix $U$, the operator $UAU^{-1}$ generates the orthonormal basis formed by the columns of $U$.
\end{enumerate}
\end{proposition}
It is known that the standard orthonormal basis $B$ cannot be extended to a scalable frame by adding one vector ${\bf f} \in \mathbb{H} \setminus B$,
\cite{DKN15, KOF13}. Thus we explore when one can generate a dynamical frame by adding two vectors.
Although a companion operator $A$ in general does not generate a scalable frame $F_{{\bf e}_1}^{n} $, it can generate a scalable frame $F_{{\bf e}_1}^{n+1} $ under certain conditions.
Using the companion operator $A$, we have
\begin{equation}\label{thissystemisframe}
F_{{\bf e}_1}^{n} = ({\bf e}_1 \ldots {\bf e}_{n} \, \, {\bf f} ), \quad
F_{{\bf e}_1}^{n+1} = ({\bf e}_1 \ldots {\bf e}_{n} \, \, {\bf f} \,\, {\bf g}),
\end{equation}
where
\[ {\bf f}= \left( \begin{array}{c}
a_1 \\
a_2 \\
a_3\\
\vdots\\
a_{n-1} \\
a_n
\end{array}\right) \text{ and }
{\bf g}= \left( \begin{array}{c}
a_1a_n \\
a_1+ a_2a_n \\
a_2+ a_3a_n\\
\vdots\\
a_{n-2} + a_{n-1} a_n\\
a_{n-1} +a_n^2
\end{array}\right).
\]
Calculations similar to those in observation \eqref{onlytwo} produce the following result:
\begin{proposition}
\label{ext}
\cite{DKN15} Let $\{{{\bf e}}_1, \ldots {{\bf e}}_n\}$ be the standard orthonormal basis in $\mathbb R^n$ with $n \ge 2$. Let ${\bf f}$ and ${\bf g}$ be two unit-norm vectors in $\mathbb R^n$.
If either system $\{{{\bf e}}_1, \ldots {{\bf e}}_n, {\bf f}, {\bf g}\}$ or $\{{{\bf e}}_1, \ldots {{\bf e}}_{n-1}, {\bf f}, {\bf g}\}$ is scalable, then
${\bf f}$ and ${\bf g}$ have only two nonzero elements in the same entries.
\end{proposition}
We now assume that $F_{{\bf e}_1}^{n+1} $ is scalable. Then by Proposition \ref{ext},
$a_m=0$ implies that $a_{m-1}=0$ for $m\ge 2$. This implies that $a_1=\ldots = a_{n-2}=0$.
\begin{proposition}\label{companionstandardresult}
Let $a$ and $b$ be real numbers such that $a+b^2<0$.
Then the companion operator $A$ in $\mathbb R^n$,
\begin{equation}\label{gennonherm}
A = \left( \begin{array}{cccccc}
0 & 0 & ...&0& 0 & 0 \\
1 & 0 & ...&0& 0& 0 \\
&.&.&.&.& \\
0 & 0 & ...&1 & 0 & a\\
0 & 0 & ...&0& 1 & b
\end{array}\right)
\end{equation}
generates a strictly scalable frame $F_{{\bf e}_1}^{n+1} $.
\end{proposition}
\begin{proof}
We have
\begin{equation}\label{nicegen}
F_{{\bf e}_1}^{n+1} =\left( \begin{array}{ccccc}
I_{n-2} &&&\\
& 1& 0 & a & ab \\
& 0& 1& b & a+b^2 \\
\end{array}\right).
\end{equation}
The strict scalability follows from Proposition \ref{2x4scale} and Theorem \ref{stackScale}.
\end{proof}
We note that the operator $A$ in (\ref{gennonherm}) is not diagonalizable. Next, we generalize the structure of $A$ while ensuring that the new matrix generates scalable frames by iterative actions.
\begin{ex}\label{g1}
Let $a, b, c, d$ be real numbers such that $a>0$ and
$0< -\frac{a(a^2+bc)}{b^2(a+d)}<1$.
Then the operator
\begin{equation}\label{nicegenA}
A =\left( \begin{array}{ccccc}
0 & 0 & 0 & \ldots & 0\\
1 & 0 & 0 &\ldots & 0\\
0 & 1 & 0 & \ldots & 0\\
\vdots & \vdots & \vdots &\vdots\\
0 & \ldots & 1 & a & c \\
0 & \ldots & 0 & b & d
\end{array}\right)
\end{equation}
generates a strictly scalable frame $F_{{\bf e}_1}^{n} $ for $\mathbb R^n$.
\end{ex}
\begin{proof}
We have
\begin{equation}\label{nicegenB}
F_{{\bf e}_1}^{n} =\left( \begin{array}{cccc}
I_{n-2} &&&\\
& 1& a & a^2+bc \\
& 0& b & ab+bd \\
\end{array}\right).
\end{equation}
The strict scalability follows by Corollary \ref{2x3scale} and Theorem \ref{stackScale}.
\end{proof}
\begin{ex}
Let $2 \sin^2(\omega)-1 >0$. Then
\begin{equation}\label{nicegenC}
A =\left( \begin{array}{ccccc}
0 & 0 & 0 & \ldots & 0\\
1 & 0 & 0 &\ldots & 0\\
0 & 1 & 0 & \ldots & 0\\
\vdots & \vdots & \vdots &\vdots\\
0 & \ldots & 1 & \cos(\omega) & -\sin(\omega) \\
0 & \ldots & 0 & \sin(\omega) & \cos(\omega)
\end{array}\right)
\end{equation}
generates a strictly scalable frame $F_{{\bf e}_1}^{n}$.
\end{ex}
\begin{ex}
Let $2 \sin^2(\phi)-1 >0$ and let
\begin{equation}\label{realschursimple}
A = \left( \begin{array}{cccccc}
\pm 1 & 0 & 0 & 0 & \ldots & 0 \\
0 & \pm 1 & 0 & 0 & \ldots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0& 0 & \ldots & \pm 1& 0 & 0 \\
0& 0 & \ldots & 0& \cos \phi & -\sin \phi \\
0& 0 & \ldots & 0& \sin \phi & \cos \phi
\end{array}\right).\end{equation}
The set
\begin{equation}
\{ {\bf e}_{n-1}, A{\bf e}_{n-1}, A^2{\bf e}_{n-1}\} \cup \bigcup_{l=1}^{n-2} \{ {\bf e}_l, A {\bf e}_l, \ldots, A^{L_l} {\bf e}_l \}
\end{equation}
is a strictly scalable frame of $\mathbb R^n$. \end{ex}
\section{Concluding remarks and generalizations}\label{conclusion}
We have studied the scalability of dynamical frames in a separable Hilbert space $\mathbb{H}$. Given an operator $A$ on $\mathbb{H}$ and a (at most countable) set $G \subset \mathbb{H}$,
we have explored the relations between $A$, $G$ and the number of iterations that make the system \eqref{oursystem} a scalable frame. When $\dim \mathbb{H}$ is finite, and $A$ is a normal operator, we have fully answered question (Q1).
Since we have not achieved a full answer for operators which are not unitary diagonalizable, we have offered a partial answer by studying block-diagonal operators, which are not necessarily normal. Note that the block-diagonal matrix $A$ in Theorem \ref{blockresultbig} cannot be normal if one of its blocks is not normal. Also, we have established the canonical dual frame for frames of type $F_G(A)$; in particular, we showed that the canonical dual frame has, as anticipated, an iterative set structure. This result holds true in any separable Hilbert space $\mathbb{H}$.
We now pose a new question, which is a generalization of (Q1):
\vspace{2.1mm}
(Q3) \textit{ Given multiple operators $A_s$, $s \in I$ on a separable Hilbert space $\mathbb{H}$, and one fixed vector ${\bf v} \in \mathbb{H}$, when is the system $\cup_{s \in I} \{ A_s ^j {\bf v} \}_{j=0}^{L_s}$ a (scalable) frame for $\mathbb{H}$?}
\vspace{2mm}
The next example shows how to generate a scalable frame for $\mathbb R^3$ using two dynamical operators.
\begin{ex}\label{multigenscalable} Let $\alpha =2\pi/3$,
\[ A_1 = \left( \begin{array}{ccc}
\cos{\alpha } & -\sin{\alpha} &0\\
\sin{\alpha} & \cos{\alpha } &0 \\
0&0&0
\end{array}\right), \; \text{and} \;
A_2 = \left( \begin{array}{ccc}
\cos{\alpha } & 0 &-\sin{\alpha } \\
0 &0&0 \\
\sin{\alpha } &0 & \cos{\alpha} \end{array}\right). \]
$$ \text{Then } \; \{ {\bf e}_1, A_1 {\bf e}_1, A_1^2 {\bf e}_1, A_2 {\bf e}_1, A_2^2 {\bf e}_1\} \; \text{is a strictly scalable frame for $\mathbb R^3$.}$$
\begin{center}
\begin{tikzpicture}[scale=1.75]
\draw [red] (-1,0) arc (180:360:1cm and 0.5cm);
\draw[red, dashed] (-1,0) arc (180:0:1cm and 0.5cm);
\draw [blue](0,1) arc (90:270:0.5cm and 1cm);
\draw[blue, dashed] (0,1) arc (90:-90:0.5cm and 1cm);
\draw (0,0) circle (1cm);
\shade[ball color=blue!10!white,opacity=0.20] (0,0) circle (1cm);
\draw[->] (-1.2,0)--(1.2, 0)node[right] {$y$};
\draw[->] (0,-1.2) -- (0,1.2) node[above] {$z$};
\draw[->] (0.5, 0.5) -- (-0.5, -0.5) node[left, below] {$x$};
\draw[ultra thick,->] (0, 0) -- (-0.45, -0.45) node[above] {${{\bf e}}_1$};
\draw[red, ultra thick, ->] (0, 0)--(0.93, 0.22 )node[right, above] {$A_1{{\bf e}}_1$};
\draw[red, ultra thick, ->] (0, 0)--(-0.67, 0.38 )node[right, above] {$A^2_1{{\bf e}}_1$};
\draw[blue, ultra thick, ->] (0, 0)--(0.28, -0.8 )node[right, above] {$A_2{{\bf e}}_1$};
\draw[blue, ultra thick, ->] (0, 0)--(0.27, 0.88 )node[right, above] {$A^2_2{{\bf e}}_1$};
\end{tikzpicture}
\end{center}
\end{ex}
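The strict scalability claimed in Example \ref{multigenscalable} can be verified numerically. The NumPy sketch below is illustrative only; the weights $1$ (on ${\bf e}_1$) and $\sqrt{2}$ (on the four rotated vectors) are one valid choice of positive scalings, not taken from the text, and they make the frame operator of the rescaled system a multiple of the identity.

```python
import numpy as np

a = 2 * np.pi / 3
A1 = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       0.0]])
A2 = np.array([[np.cos(a), 0.0, -np.sin(a)],
               [0.0,       0.0,  0.0],
               [np.sin(a), 0.0,  np.cos(a)]])
e1 = np.array([1.0, 0.0, 0.0])

vectors = [e1, A1 @ e1, A1 @ A1 @ e1, A2 @ e1, A2 @ A2 @ e1]
weights = [1.0] + [np.sqrt(2.0)] * 4   # one valid choice of positive scalings

# Frame operator of the scaled system: S = sum_i w_i^2 v_i v_i^T.
S = sum(w**2 * np.outer(v, v) for w, v in zip(weights, vectors))
print(np.round(S, 10))   # proportional to the identity, so the scaled frame is tight
```

Since the rescaled frame operator equals $3I$, the system is a strictly scalable frame for $\mathbb R^3$.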
The following proposition is a generalization of the principle introduced in Example \ref{multigenscalable}:
\begin{proposition}
Let $p, q, k, l \in \mathbb N$ be such that $p<k \le n$ and $q <l \le n$, and let $N\in \mathbb N$.
For each $m=1, \ldots, N$, we define $A^{pq}_{kl} (m) =[ a_{ij} (m) ]_{ i, j =1}^n$ by
$$
a_{pq} (m) = a_m,\quad
a_{pl} (m) = b_m,\quad
a_{kq} (m) = c_m,\quad
a_{kl} (m) = d_m,
$$
with all other entries equal to zero.
If for each $m =1, \ldots, N$, $a_m, b_m, c_m$ and $d_m$ satisfy the conditions of Corollary \ref{2scale}, and the system
\begin{equation} \label{nicesystm}
\{{{\bf e}}_1 \} \cup \bigcup_{m=1}^N \{ A^{pq}_{kl} (m) {{\bf e}}_1, (A^{pq}_{kl} (m))^2 {{\bf e}}_1 \}\end{equation} spans $\mathbb R^n$, then
\eqref{nicesystm} is a strictly scalable frame for $\mathbb R^n$.
\end{proposition}
\begin{proof}
By Corollary \ref{2scale}, the set
$ \{{{\bf e}}_1 \} \cup \{ A^{pq}_{kl} (m) {{\bf e}}_1, (A^{pq}_{kl} (m))^2 {{\bf e}}_1 \} $ is a scalable frame for a 2-dimensional subspace for each $m =1, \ldots, N$. Thus, there exist some suitable scaling coefficients $x(m), y(m), z(m)$, and by Theorem \ref{charTight},
$$ \widetilde{x(m){{\bf e}}_1} + \widetilde{y(m)A^{pq}_{kl} (m) {{\bf e}}_1} + \widetilde{z(m) (A^{pq}_{kl} (m))^2 {{\bf e}}_1 }=0 .$$
Since the system \eqref{nicesystm} also spans $\mathbb R^n$, this implies that \eqref{nicesystm}
is a strictly scalable frame for $\mathbb R^n$.
\end{proof}
{For a frame generated by iterative actions of multiple operators, that is, a { \it multi-dynamical} frame, we find that its canonical dual frame is also multi-dynamical:}
\begin{thm}
Let $A_s$, $s \in I$, be operators on a separable Hilbert space $\mathbb{H}$, let $L_s \geq 0$, and fix a vector ${\bf v} \in \mathbb{H}$. If $\cup_{s \in I} \{ A_s ^j {\bf v} \}_{j=0}^{L_s}$ is a frame for $\mathbb{H}$, with frame operator $S$, then its canonical dual frame is
\begin{equation}
\cup_{s \in I} \{ B_s ^j {\bf f} \}_{j=0}^{L_s},
\end{equation}$\text{where} \; {\bf f} = S^{-1}{\bf v}, \; \text{and} \; B_s= S^{-1}A_s S, \, s \in I.$
\end{thm}
\begin{proof}
If $S$ denotes the frame operator of the frame $\cup_{s} \{ A_s ^j {\bf v} \}_{j=0}^{L_s}$ for $\mathbb{H}$, then its canonical dual frame elements are $S^{-1}A_s ^j {\bf v}$. Since $B_s = S^{-1}A_s S$ implies $ B_s^j = S^{-1}A_s^j S$ for every $j \geq 0$, we obtain that the dual frame elements are
$$S^{-1}A_s ^j {\bf v} =S^{-1}A_s ^j S S^{-1} {\bf v} =S^{-1}A_s ^j S {\bf f} = B_s^j {\bf f}.$$
\end{proof}
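The theorem is easy to check numerically in finite dimensions. In the sketch below, the operator $A$ and vector ${\bf v}$ are randomly generated (purely hypothetical data); it confirms both that the canonical dual elements $S^{-1}A^j{\bf v}$ coincide with $B^j{\bf f}$ and that the frame/dual pair reconstructs an arbitrary vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 3, 4
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2)               # normalize to tame the powers of A
v = rng.standard_normal(n)

frame = [np.linalg.matrix_power(A, j) @ v for j in range(L + 1)]
S = sum(np.outer(g, g) for g in frame)  # frame operator
Sinv = np.linalg.inv(S)

B = Sinv @ A @ S                        # B = S^{-1} A S
f = Sinv @ v                            # f = S^{-1} v
dual = [np.linalg.matrix_power(B, j) @ f for j in range(L + 1)]

# The dual frame gives perfect reconstruction: x = sum_j <x, dual_j> frame_j.
x = rng.standard_normal(n)
recon = sum((x @ d) * g for d, g in zip(dual, frame))
print(np.max(np.abs(recon - x)))
```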
\section*{Acknowledgement}
We express our gratitude to Professor S. Narayan for many helpful conversations on this work.
Kim was supported by the Central Michigan University FRCE Research Type A Grant \#C48143. Aceska was supported by the BSU Aspire Research Grant ``Frame Theory and Modern Sampling Strategies''.
\section{Programmable Rydberg Arrays in 2D}
Our experiments are carried out on the second generation of an experimental platform described previously \cite{AtomArrayNature2017}. The new apparatus uses a spatial light modulator (SLM) to form a large, two-dimensional (2D) array of optical tweezers in a vacuum cell (Fig.~\ref{fig_trapping}a, Methods). This static tweezer array is loaded with individual $^{87}$Rb atoms from a magneto-optical trap (MOT), with a uniform loading probability of 50--60\% across up to 1000 tweezers. We rearrange the initially loaded atoms into programmable, defect-free patterns using a second set of moving optical tweezers that are steered by a pair of crossed acousto-optical deflectors (AODs) to arbitrary positions in 2D (Fig.~\ref{fig_trapping}a) \cite{AntoineAssembly2016}. Our parallel rearrangement protocol (see Methods) enables rearrangement into a wide variety of geometries including square, honeycomb, and triangular lattices (left panels in Fig.~1b-d). The procedure takes a total time of $50$--$100$~ms for arrays of up to a few hundred atoms and results in filling fractions exceeding $99\%$.
Qubits are encoded in the electronic ground state
$|g\rangle$ and the highly-excited $n = 70$ Rydberg state $|r\rangle$ of each atom.
We illuminate the entire array from opposite sides with two counter-propagating laser beams at 420 and 1013 nm, shaped into light sheets (see Methods), to coherently couple $|g\rangle$ to $|r\rangle$ via a two-photon transition (Fig.~\ref{fig_trapping}a).
The resulting many-body dynamics $U(t)$ are governed by a combination of the laser excitation and long-range van der Waals interactions between Rydberg states ($V_{ij} = V_0 / |\mathbf{x_i} - \mathbf{x_j}|^6$), described by the Hamiltonian
\begin{equation}
\frac{H}{\hbar} = \frac{\Omega}{2}\sum_i (|g_i\rangle\langle r_i| + |r_i\rangle\langle g_i|) - \Delta\sum_i n_i + \sum_{i<j} V_{ij}n_i n_j
\label{ryd_ham}
\end{equation}
where $\hbar$ is the reduced Planck constant, $n_i = |r_i\rangle \langle r_i|$, and $\Omega$ and $\Delta$ are the two-photon Rabi frequency and detuning, respectively.
After evolution under the Hamiltonian (\ref{ryd_ham}), the state of each atomic qubit is read out by fluorescence imaging that detects only atoms in $|g\rangle$, while atoms in $|r\rangle$
are detected as loss. Detection fidelities exceed 99\% for both states (see Methods).
The Rydberg blockade mechanism \cite{jaksch2000fast,lukin2001} is central to understanding the programmable dynamics driven by the Hamiltonian (\ref{ryd_ham}). It originates from the long-range interactions between Rydberg states, providing an effective constraint that prevents simultaneous excitation of atoms within a blockade radius $R_b\equiv(V_0 / \Omega)^{1/6}$. We control the effective blockade range $R_b/a$ by programming the lattice spacing $a$ for the atom array. Using these control tools, we explore quantum evolution resulting in a wide variety of quantum phases.
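The interplay of the Rabi drive, detuning, and blockade in Hamiltonian (\ref{ryd_ham}) can be illustrated by exact diagonalization of a tiny system. The sketch below (a hypothetical $2\times2$ plaquette, not a configuration from the experiment, with $a=1$ and energies in units of $\Omega$) shows that at positive detuning the ground state is dominated by blockade-respecting configurations with two diagonal excitations.

```python
import numpy as np

Omega, Delta = 1.0, 3.0
Rb_over_a = 1.15
V0 = Rb_over_a**6 * Omega                # from R_b = (V0 / Omega)^(1/6), a = 1
coords = [(0, 0), (0, 1), (1, 0), (1, 1)]
N = len(coords)

dim = 2**N
H = np.zeros((dim, dim))
for s in range(dim):
    bits = [(s >> i) & 1 for i in range(N)]   # bit i = 1 means atom i in |r>
    diag = -Delta * sum(bits)                 # detuning term
    for i in range(N):
        for j in range(i + 1, N):
            if bits[i] and bits[j]:           # van der Waals interaction
                r = np.hypot(coords[i][0] - coords[j][0],
                             coords[i][1] - coords[j][1])
                diag += V0 / r**6
    H[s, s] = diag
    for i in range(N):                        # Rabi term flips one atom at a time
        H[s, s ^ (1 << i)] += Omega / 2

vals, vecs = np.linalg.eigh(H)
probs = vecs[:, 0]**2
# The ground state should be dominated by the two diagonal configurations
# (basis states 9 = |r,g,g,r> and 6 = |g,r,r,g>), which respect the blockade.
print(probs[6] + probs[9])
```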
\section{Checkerboard Phase}
\begin{figure}
\includegraphics[width=\columnwidth]{Figure2_v8_AI.pdf}
\caption{\textbf{Benchmarking of quantum simulator using checkerboard ordering.} \textbf{a.} A quasi-adiabatic detuning sweep $\Delta(t)$ at constant Rabi frequency $\Omega$ is used to prepare the checkerboard ground state with high fidelity. \textbf{b.} Two-site correlation function $G^{(2)}(k,l)$, averaged over all pairs of atoms on a 12$\times$12 array, showing near-perfect alternating correlations throughout the entire system. \textbf{c.} Exponential fits of rectified horizontal and vertical correlations are used to extract correlation lengths in the corresponding directions $\xi_H$ and $\xi_V$. \textbf{d.} Histogram of many-body state occurrence frequency after 6767 repetitions of the experiment on a 12$\times$12 array. The two most frequently occurring microstates correspond to the two perfect checkerboard orderings, and the next four most common ones are those with a single defect in one of the corners of the array. \textbf{e.} Probability of finding a perfect checkerboard ground state as a function of array size. The slightly higher probabilities in odd$\times$odd systems is due to commensurate edges on opposing sides of the array. All data in this figure are conditioned on defect-free rearrangement of the array.
}
\label{fig_checkerboard}
\end{figure}
The smallest value of $R_b/a$ that results in an ordered phase for the quantum many-body ground state of the system corresponds to $R_b/a\approx 1$, where only one out of every pair of nearest-neighbor atoms can be excited to $|r\rangle$. On a square array, this constraint leads to a $\mathbb{Z}_2$-symmetry-broken \textit{checkerboard} phase with an antiferromagnetic (AF) ground state. To realize such a state, we initialize the array at $R_b/a = 1.15$ ($a = 6.7~\mu$m, $\Omega = 2\pi \times 4.3$~MHz) with all atoms in $|g\rangle$. We then dynamically sweep the detuning $\Delta$ from negative to positive values while keeping the Rabi frequency $\Omega$ fixed to bring the system quasi-adiabatically into the checkerboard phase (Fig.~\ref{fig_trapping}b and Fig.~\ref{fig_checkerboard}a). A similar approach can be used to create analogous ordered phases on other lattice geometries (Fig.~\ref{fig_trapping}c, d).
We quantify the strength of antiferromagnetic correlations in the checkerboard phase over many experimental repetitions using the connected density-density correlator $G^{(2)}(k,l) = \frac{1}{N_{(k,l)}} \sum_{i,j} (\langle n_i n_j\rangle - \langle n_i\rangle \langle n_j\rangle)$, where the sum is over all pairs of atoms $(i,j)$ separated by the same relative lattice displacement $\mathbf{x}$\,$=$\,$(k,l)$ sites, normalized by the number of such pairs $N_{(k,l)}$. Our measurement of $G^{(2)}(k,l)$ on a 12$\times$12 system (Fig.~\ref{fig_checkerboard}b) yields horizontal and vertical correlation lengths of $\xi_H = $ 11.1(1) and $\xi_V = $ 11.3(1) respectively (Fig.~\ref{fig_checkerboard}c), showing long-range correlations across the entire 144 atom array. These exceed the values reported previously for two-dimensional systems \cite{AntoineAdiabatic2017, WaseemAdiabatic2017} by nearly an order of magnitude.
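The connected correlator $G^{(2)}(k,l)$ is computed directly from binary single-shot images. The following sketch (our own function names, with synthetic perfect-checkerboard snapshots as input) illustrates the definition for non-negative displacements.

```python
import numpy as np

def connected_correlator(shots, k, l):
    """G2(k,l) from `shots` of shape (num_shots, Lx, Ly) with 0/1 Rydberg
    occupations; averages over all site pairs separated by (k, l) sites."""
    mean = shots.mean(axis=0)                 # <n_i> per site
    Lx, Ly = mean.shape
    vals = []
    for x in range(Lx - k):
        for y in range(Ly - l):
            nij = (shots[:, x, y] * shots[:, x + k, y + l]).mean()
            vals.append(nij - mean[x, y] * mean[x + k, y + l])
    return np.mean(vals)

# Demo on synthetic snapshots that randomly realize either checkerboard ordering:
rng = np.random.default_rng(1)
base = np.indices((6, 6)).sum(axis=0) % 2
shots = np.array([base if rng.random() < 0.5 else 1 - base for _ in range(500)])
print(connected_correlator(shots, 1, 0))   # negative for nearest neighbors
print(connected_correlator(shots, 1, 1))   # positive along the diagonal
```

For these ideal snapshots the correlator approaches $-1/4$ for nearest neighbors and $+1/4$ for diagonal neighbors, the alternating-sign pattern visible in Fig.~\ref{fig_checkerboard}b.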
Single-site readout also allows us to study individual many-body states of our system (Fig.~\ref{fig_checkerboard}d). Out of 6767 repetitions on a 12$\times$12 array, the two perfectly ordered states $|\text{AF}_1\rangle$ and $|\text{AF}_2\rangle$ are by far the most frequently observed microstates, with near-equal probabilities between the two. We benchmark our state preparation by measuring the probability of observing perfect checkerboard ordering as a function of system size (Fig.~\ref{fig_checkerboard}e). We find empirically that the probability scales with the number of atoms according to an exponential $0.97^N$, offering a benchmark that includes all experimental imperfections such as finite detection fidelity, non-adiabatic state preparation, spontaneous emission, and residual quantum fluctuations in the ordered state (see Methods). Remarkably, even for a system size as large as 15$\times$15 (225 atoms), we still observe the perfect antiferromagnetic ground state with probability 0.10${^{+5}_{-4}}$\% within the exponentially large Hilbert space of dimension $2^{225} \approx 10^{68}$.
\section{(2+1)D Ising Quantum Phase Transition}
We now describe quantitative studies of the quantum phase transition into the checkerboard phase. Quantum phase transitions fall into universality classes characterized by critical exponents that determine \textit{universal} behavior near the quantum critical point, independent of the microscopic details of the Hamiltonian \cite{SachdevQPT}. The transition into the checkerboard phase is expected to be in the paradigmatic---but never previously observed---quantum Ising universality class in (2+1) dimensions \cite{Rhine2020} (with expected dynamical critical exponent $z=1$ and correlation length critical exponent $\nu = 0.629$).
To explore universal scaling across this phase transition for a large system, we study the dynamical build-up of correlations associated with the quantum Kibble-Zurek mechanism \cite{QKZM2005, Keesling2019} on a $16\times16$ (256 atoms) array, at fixed $R_b/a = 1.15$. We start at a large negative detuning with all atoms in $|g\rangle$ and linearly increase $\Delta/\Omega$, stopping at various points to measure the growth of correlations across the phase transition (Fig.~\ref{fig_kzm}a, b). Slower sweep rates $s=d\Delta/dt$ result in longer correlation lengths $\xi$, as expected (Fig.~\ref{fig_kzm}c).
The quantum Kibble-Zurek mechanism predicts a universal scaling relationship between the control parameter $\Delta$ and the correlation length $\xi$. Specifically, when both $\Delta$ and $\xi$ are rescaled with the sweep rate $s$ (relative to a reference rate $s_0$)
\begin{gather}
\tilde{\xi}=\xi(s/s_0)^\mu\\
\tilde{\Delta}=(\Delta-\Delta_c)(s/s_0)^\kappa
\label{kz_rescale}
\end{gather}
with exponents $\mu \equiv \nu/(1+z\nu)$ and $\kappa \equiv -1/(1+z\nu)$, then universality implies that the rescaled $\tilde{\xi}$ vs. $\tilde{\Delta}$ collapses onto a single curve \cite{Keesling2019} for any sweep rate $s$. Taking $z=1$ to be fixed (as expected for a Lorentz-invariant theory), we extract $\nu$ for our system by finding the value that optimizes this universal collapse.
In order to obtain $\nu$, we first independently determine the position of the critical point $\Delta_c$, which corresponds to the peak of the susceptibility $\chi = - \partial^2 \langle H \rangle /\partial\Delta^2$ and is associated with a vanishing gap \cite{SachdevQPT}.
For adiabatic evolution under the Hamiltonian (\ref{ryd_ham}), the susceptibility $\chi$ is related to the mean Rydberg excitation density $\langle n \rangle$ by $\chi = \partial \langle n \rangle / \partial \Delta$ according to the Hellmann--Feynman theorem. We measure $\langle n \rangle$ vs. $\Delta$ along a slow linear sweep to remain as adiabatic as possible. We take the numerical derivative of the fitted data to obtain $\chi$, finding its peak to be at $\Delta_c/\Omega = 1.12(4)$ (see Methods).
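Locating the critical point from a fitted density curve amounts to a numerical derivative. The sketch below uses a made-up logistic stand-in for the measured $\langle n\rangle(\Delta)$ (the functional form and amplitudes are hypothetical, chosen only to place the inflection at $\Delta_c/\Omega = 1.12$):

```python
import numpy as np

# Hypothetical smooth fit to <n>(Delta), inflection placed at Delta_c = 1.12.
delta = np.linspace(-1.0, 3.0, 401)
n_mean = 0.25 / (1 + np.exp(-4 * (delta - 1.12)))

# Susceptibility chi = d<n>/dDelta via a central-difference derivative;
# its peak marks the critical point.
chi = np.gradient(n_mean, delta)
print(delta[np.argmax(chi)])
```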
Having identified the position of the critical point, we now extract the value of $\nu$ that optimizes data collapse (inset of Fig.~\ref{fig_kzm}d and Methods). The resulting $\nu =0.62(4)$ rescales the experimental data to clearly fall on a single universal curve (Fig.~\ref{fig_kzm}d). This measurement is in good agreement with the predicted $\nu = 0.629$ for the quantum Ising universality class in (2+1) dimensions\cite{Rhine2020}, and distinct from both the mean-field value\cite{SachdevQPT} of $\nu = 1/2$ and the previously verified value in (1+1) dimensions \cite{Keesling2019} of $\nu = 1$. Despite imperfections associated with non-adiabatic state preparation and decoherence in our system, this demonstration of universal scaling highlights opportunities for quantitative studies of quantum critical phenomena on our platform.
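The collapse procedure can be made concrete with synthetic data. In the sketch below (our own function names; the universal function $F(x)=1+x$ is an arbitrary stand-in, and detunings are measured relative to $\Delta_c$), curves generated with $\nu = 0.629$ at four sweep rates are rescaled for trial values of $\nu$, and the value minimizing the inter-curve distance recovers the input exponent.

```python
import numpy as np

def rescale(delta, xi, rate, s0, nu, z=1.0):
    """Quantum Kibble-Zurek rescaling of a (Delta - Delta_c, xi) curve."""
    mu = nu / (1 + z * nu)
    kappa = -1 / (1 + z * nu)
    return delta * (rate / s0)**kappa, xi * (rate / s0)**mu

def collapse_distance(curves, s0, nu):
    """Mean squared distance between all pairs of rescaled curves; each curve
    is (rate, deltas, xis), compared on the overlapping rescaled range."""
    rs = [rescale(d, x, r, s0, nu) for r, d, x in curves]
    grid = np.linspace(max(c[0].min() for c in rs),
                       min(c[0].max() for c in rs), 50)
    interp = [np.interp(grid, c[0], c[1]) for c in rs]
    return np.mean([(a - b)**2 for i, a in enumerate(interp)
                    for b in interp[i + 1:]])

# Synthetic curves obeying the scaling form with nu_true = 0.629.
nu_true, s0 = 0.629, 15.0
mu, kappa = nu_true / (1 + nu_true), -1 / (1 + nu_true)
deltas = np.linspace(0.0, 3.0, 40)
curves = []
for s in [15.0, 30.0, 60.0, 120.0]:
    xt = deltas * (s / s0)**kappa
    curves.append((s, deltas, (1 + xt) * (s / s0)**(-mu)))

nus = np.linspace(0.3, 1.0, 141)
best = nus[np.argmin([collapse_distance(curves, s0, n) for n in nus])]
print(best)
```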
\begin{figure}
\includegraphics[width=\columnwidth]{fig_kzm_v7_16x16.pdf}
\caption{\textbf{Observation of the (2+1)D Ising quantum phase transition on a 16$\times$16 array.}
\textbf{a.} The transition into the checkerboard phase is explored using a linear detuning sweep $\Delta(t)$ at constant $\Omega$. The resulting checkerboard ordering is measured at various endpoints.
\textbf{b.} Example of growing correlations $G^{(2)}$ with increasing $\Delta/\Omega$ along a linear sweep with sweep rate $s = 15$~MHz/$\mu$s. \textbf{c.} Growth of correlation length $\xi$ for $s$ spanning an order of magnitude from 15~MHz/$\mu$s to 120~MHz/$\mu$s. $\xi$ used here measures correlations between the coarse-grained local staggered magnetization (see Methods). \textbf{d.} For an optimized value of the critical exponent $\nu$, all curves collapse onto a single universal curve when rescaled relative to the quantum critical point $\Delta_c$. Inset: distance $D$ between all pairs of rescaled curves as a function of $\nu$ (see Methods). The minimum at $\nu = 0.62(4)$ (red dashed line) yields the experimental value for the critical exponent (red and gray shaded regions indicate uncertainties).
}
\label{fig_kzm}
\end{figure}
\begin{figure*}[!t]
\includegraphics[width= \textwidth]{Figure3_v10.pdf}
\caption{\textbf{Phase diagram of the two-dimensional square lattice.} \textbf{a.} Example fluorescence image of atoms in the checkerboard phase and the corresponding Fourier transform averaged over many experimental repetitions $\langle\mathcal{F}(\mathbf{k})\rangle$, highlighting the peak at $(\pi,\pi)$ (circled). \textbf{b.} Image of atoms in the striated phase and the corresponding $\langle\mathcal{F}(\mathbf{k})\rangle$ highlighting peaks at $(0,\pi)$, $(\pi,0)$ and $(\pi,\pi)$ (circled). \textbf{c.} Image of atoms in the star phase with corresponding Fourier peaks at $(\pi/2,\pi)$ and $(\pi,0)$ (circled), as well as at symmetric partners $(\pi,\pi/2)$ and $(0,\pi)$. \textbf{d.} The experimental phase diagram is constructed by measuring order parameters for each of the three phases for different values of the tunable blockade range $R_b/a$ and detuning $\Delta/\Omega$. Red markers indicate the numerically calculated phase boundaries (see Methods). \textbf{e.} The order parameters evaluated numerically using DMRG for a 9$\times$9 array (see Methods).
}
\label{fig_phases}
\end{figure*}
\section{Phase Diagram of the Square Lattice}
A rich variety of new phases have been recently predicted for the square lattice when Rydberg blockade is extended beyond nearest neighbors \cite{Rhine2020}. To map this phase diagram experimentally, we use the Fourier transform of single-shot measurement outcomes $\mathcal{F}(\mathbf{k}) = \left|\sum_{i} \text{exp} (i \mathbf{k \cdot x_i}/a) n_i / \sqrt{N}\right|$, which characterizes long-range order in our system. For instance, the checkerboard phase shows a prominent peak at $\mathbf{k}=(\pi,\pi)$, corresponding to the canonical antiferromagnetic order parameter: the staggered magnetization (Fig.~\ref{fig_phases}a). We construct order parameters for all observed phases using the symmetrized Fourier transform $ \tilde{\mathcal{F}} (k_1,k_2) = \langle \mathcal{F}(k_1,k_2) + \mathcal{F}(k_2,k_1) \rangle/2$, averaged over experimental repetitions, which takes into account the reflection symmetry in our system (see Methods).
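The Fourier order parameter is evaluated directly on binary snapshots. The sketch below (our own function names; the perfect-checkerboard input is synthetic) computes $\mathcal{F}(\mathbf{k})$ and the symmetrized average $\tilde{\mathcal{F}}$ for sites on an integer grid with $a=1$.

```python
import numpy as np

def fourier_amplitude(shot, k1, k2):
    """F(k) = |sum_i exp(i k.x_i) n_i| / sqrt(N) for one 0/1 snapshot."""
    Lx, Ly = shot.shape
    xs, ys = np.indices((Lx, Ly))
    phase = np.exp(1j * (k1 * xs + k2 * ys))
    return abs((phase * shot).sum()) / np.sqrt(Lx * Ly)

def symmetrized_order(shots, k1, k2):
    """Reflection-symmetrized order parameter, averaged over snapshots."""
    return np.mean([(fourier_amplitude(s, k1, k2)
                     + fourier_amplitude(s, k2, k1)) / 2 for s in shots])

checkerboard = np.indices((12, 12)).sum(axis=0) % 2
print(fourier_amplitude(checkerboard, np.pi, np.pi))   # large peak at (pi, pi)
print(fourier_amplitude(checkerboard, 0.0, np.pi))     # vanishes off the peak
```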
When interaction strengths are increased such that next-nearest (diagonal) neighbor excitations are suppressed by Rydberg interactions ($R_b/a \gtrsim \sqrt{2}$), translational symmetry along the diagonal directions is also broken, leading to the appearance of a new \textit{striated} phase (Fig.~\ref{fig_phases}b). In this phase, Rydberg excitations are mostly located two sites apart and hence appear both on alternating rows and alternating columns. This ordering is immediately apparent through the observation of prominent peaks at $\mathbf{k}~=~(0,\pi)$, $(\pi, 0)$, and $(\pi, \pi)$ in the Fourier domain. As discussed and demonstrated below, quantum fluctuations, appearing as defects on single shot images (Fig.~\ref{fig_phases}b), play a key role in stabilizing this phase.
At even larger values of $R_b/a \gtrsim 1.7$, the \textit{star} phase emerges, with Rydberg excitations placed every four sites along one direction and every two sites in the perpendicular direction. There are two possible orientations for the ordering of this phase, so Fourier peaks are observed at $\mathbf{k}$ = $(\pi, 0)$ and $(\pi/2, \pi)$, as well as at their symmetric partners $(0, \pi)$ and $(\pi, \pi/2)$ (Fig.~\ref{fig_phases}c). In the thermodynamic limit, the star ordering corresponds to the lowest-energy classical configuration of Rydberg excitations on a square array with a density of 1/4.
We now systematically explore the phase diagram on 13$\times$13 (169 atoms) arrays, with dimensions chosen to be simultaneously commensurate with checkerboard, striated, and star orderings (see Methods). For each value of the blockade range $R_b/a$, we linearly sweep $\Delta$ (similar to Fig.~\ref{fig_kzm}a but with a ramp-down time of 200~ns), stopping at evenly spaced endpoints to raster the full phase diagram.
For every endpoint, we extract the order parameter corresponding to each many-body phase, and plot them separately to show their prominence in different regions of the phase diagram (Fig.~\ref{fig_phases}d).
We compare our observations with numerical simulations of the phase diagram using the density-matrix renormalization group (DMRG) on a smaller 9$\times$9 array with open boundary conditions (Fig.~\ref{fig_phases}e and red markers in Fig.~\ref{fig_phases}d). We find excellent agreement in the extent of the checkerboard phase. For the striated and star phases, we also find good similarity between experiment and theory, although due to their larger unit cells and the existence of many degenerate configurations, these two phases are more sensitive to both edge effects and experimental imperfections. We emphasize that the numerical simulations evaluate the order parameter for the exact ground state of the system at each point, while the experiment quasi-adiabatically prepares these states via a dynamical process. These results establish the potential of programmable quantum simulators with tunable, long-range interactions for studying large quantum many-body systems that are challenging to access with state-of-the-art computational tools \cite{Montangero2020}.
\section{Quantum Fluctuations in the Striated Phase}
\begin{figure}
\includegraphics[width=\columnwidth]{FIG4_V4_new.pdf}
\caption{\textbf{Probing correlations and coherence in the striated phase via quench dynamics.} \textbf{a.} Unit cell of striated ordering (dashed box) with (0,0) and (1,1) sublattices outlined in red and blue, respectively. The fill shade on each site reflects the mean Rydberg excitation. \textbf{b.} The variational states for the (0,0) and (1,1) sublattices are illustrated on the Bloch sphere (see Methods). The black arrow illustrates the phase $\phi_q$ of $\Omega$ during the quench. \textbf{c.} Probability $P^{(d)}$ of an excitation, conditioned on observing no nearest-neighbor excitations, and zero (red), three (light blue), or four (dark blue) diagonal next-nearest neighbor excitations. $P^{(0)}$ is plotted for $\phi_q = \pi/2$, showing resonant de-excitation of the (0,0) sublattice near the bare-atom resonance (leftmost vertical line). $P^{(3)}$ and $P^{(4)}$ are plotted for $\phi_q = -\pi/2$, showing excitation peaks for the (1,1) sublattice at interaction shifts corresponding to 3 or 4 diagonal neighbors (two rightmost vertical lines). \textbf{d, e.} $P^{(0)}$ and $P^{(4)}$ vary with quench phase $\phi_q$ at their corresponding resonances ($\Delta_q/2\pi$ = 1.4 and 20.4~MHz, respectively), demonstrating coherence on both the (0,0) and (1,1) sublattices. Solid line fits are used to extract Bloch vector components.
}
\label{fig_striated}
\end{figure}
We now explore the nature of the striated phase. In contrast to the checkerboard and star phases, which can be understood from a dense-packing argument \cite{Rhine2020}, this phase has no counterpart in the classical limit ($\Omega \to 0$) (see Methods). Striated ordering allows the atoms to lower their energy by partially aligning with the transverse field, favoring this phase at finite $\Omega$. This can be seen by considering the $2\times2$ unit cell, within which one site has a large Rydberg excitation probability (designated the (0,0) sublattice) (Fig.~\ref{fig_striated}a). Excitations on its nearest-neighbor (0,1) and (1,0) sublattices are suppressed due to strong Rydberg blockade. The remaining atoms on the (1,1) sublattice have no nearest neighbors in the Rydberg state and experience a much weaker interaction from four next-nearest (diagonal) neighbors on the (0,0) sublattice, thus allowing the (1,1) atoms to lower their energy by forming a coherent superposition between ground and Rydberg states (Fig.~\ref{fig_striated}b).
We experimentally study quantum fluctuations in this phase by observing the response of the system to short quenches (with quench times $t_q < 1/\Omega_q$). The dependence on the detuning $\Delta_q$ and laser phase $\phi_q$ of the quench contains information about local correlations and coherence, which allows us to characterize the quantum states on the different sublattices. The quench resonance for each site depends on the state of its nearest and next-nearest neighbors. Due to the large difference between the interaction energies on the (0,0) and (1,1) sublattices, when one sublattice is resonantly driven, the other is effectively frozen.
The nature of the striated phase is revealed using nine-particle operators to measure the state of an atom, conditioned on its local environment. Specifically, we evaluate the conditional Rydberg density $P^{(d)}$, defined as the excitation probability of an atom if all nearest neighbors are in $|g\rangle$, and exactly $d$ next-nearest (diagonal) neighbors are in $|r\rangle$ (see Methods). For $d=0$, we observe a dip in $P^{(0)}$ near the bare atom resonance (Fig.~\ref{fig_striated}c), corresponding to resonant de-excitation of the (0,0) sublattice. Meanwhile, $P^{(3)}$ and $P^{(4)}$ have two separate peaks that correspond to resonant excitation of the (1,1) sublattice with $d=3$ and $d=4$ next-nearest neighbor excitations, respectively (Fig.~\ref{fig_striated}c). Remarkably, we find that the quench response of both the (0,0) and (1,1) sublattices depends on the phase $\phi_q$ of the driving field during the quench (Fig.~\ref{fig_striated}d,e). The measured visibilities, together with a simple mean-field model (see Methods), enable the estimation of unknown Bloch vector components on the two sublattices, yielding $\langle \sigma_x\rangle = -0.82(6)$, $\langle \sigma_y\rangle = 0.25(2)$ for the (0,0) sublattice, and $\langle \sigma_x \rangle = -0.45(4)$, $\langle \sigma_y\rangle = 0.09(1)$ for the (1,1) sublattice. We emphasize that accurate characterization requires the use of more sophisticated variational wavefunctions (based on e.g. tensor networks) and warrants further investigation. This approach can also be extended through techniques such as shadow tomography \cite{PreskillShadowTomography2020}.
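The conditional density $P^{(d)}$ is a counting statistic over bulk sites of each snapshot. The sketch below (our own function names, applied to an idealized, noise-free striated pattern rather than experimental data) illustrates the definition: on a perfect pattern the $(1,1)$-sublattice holes have $d=4$ and are unexcited, while the $(0,0)$ sites have $d=0$ and are excited.

```python
import numpy as np

def conditional_density(shots, d):
    """P^(d): excitation probability of a bulk atom, conditioned on all four
    nearest neighbors in |g> and exactly d diagonal neighbors in |r>."""
    hits, trials = 0, 0
    for s in shots:
        Lx, Ly = s.shape
        for x in range(1, Lx - 1):
            for y in range(1, Ly - 1):
                nn = s[x-1, y] + s[x+1, y] + s[x, y-1] + s[x, y+1]
                nnn = s[x-1, y-1] + s[x-1, y+1] + s[x+1, y-1] + s[x+1, y+1]
                if nn == 0 and nnn == d:
                    trials += 1
                    hits += s[x, y]
    return hits / trials if trials else np.nan

# Ideal striated pattern: (0,0) sublattice occupied, all other sites empty.
striated = np.zeros((8, 8), dtype=int)
striated[::2, ::2] = 1
print(conditional_density([striated], 4))   # (1,1) holes: d = 4, unexcited
print(conditional_density([striated], 0))   # (0,0) sites: d = 0, excited
```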
\section{Outlook}
These experiments demonstrate that two-dimensional Rydberg atom arrays constitute a powerful platform for programmable quantum simulations with hundreds of qubits. We expect that system size, quantum control fidelity, and degree of programmability can all be increased considerably via technical improvements. In particular, array sizes and rearrangement fidelities, along with atomic state readout, are currently limited by collisions with background gas particles, and can be improved with an upgraded vacuum system \cite{EndresEntanglement2020} and increased photon collection efficiency. Quantum coherence can be enhanced using higher-power Rydberg lasers and by encoding qubits in hyperfine ground states \cite{AtomArrayPRL2019, WhitlockRydbergReview2020}. Tweezers with different atomic \cite{ThompsonYb2019,EndresEntanglement2020,KaufmanClock2020} and molecular \cite{DoyleMoleculeTweezers2019, NiMoleculeTweezers2019} species can provide additional features and lead to novel applications in both quantum simulations and metrology.
Finally, rapidly switchable local control beams can be used to perform universal qubit operations in parallel across the system.
Our experiments realize several new quantum phases and provide unprecedented insights into quantum phase transitions in two-dimensional systems. These studies can be extended along several directions, including the exploration of non-equilibrium entanglement dynamics via rapid quenches
across quantum phase transitions \cite{Turner2018, DalmonteLatticeGaugeTheories2020, Scars2020}, the investigation of topological quantum states of matter on frustrated lattices \cite{samajdar2020quantum, verresen2020prediction}, the simulation of lattice gauge theories \cite{LatticeGaugeTheoryReview2020, MontangeroLGT2020}, and the study of broader classes of spin models using hyperfine encoding \cite{ZollerRydbergSimulator2010}. Quantum information processing can also be explored with hardware-efficient methods for multi-qubit operations \cite{AtomArrayPRL2019} and protocols for quantum error correction and fault tolerant control \cite{RydbergFaultTolerance2017}.
Finally, our approach is well suited for efficient implementation of novel algorithms for quantum optimization \cite{FarhiQAOA2014,LeoQAOA} and sampling \cite{ WildSampling2020}, enabling experimental tests of their performance with system sizes exceeding several hundred qubits.
\vfill\null
\section{Methods}
\noindent\textbf{2D Optical Tweezer Array}\\
Our 2D tweezer array is generated by a free-running 810-nm Ti:Sapphire laser (M Squared, 18-W pump). The laser illuminates a phase-control spatial light modulator (Hamamatsu X13138-02), which imprints a computer generated hologram on the wavefront of the laser field. The phase hologram is calculated using the phase-fixed weighted Gerchberg-Saxton (WGS) algorithm \cite{Kim:19} to produce an arbitrary arrangement of tweezer spots after propagating to the focus of a microscope objective (Mitutoyo: 3.5~mm glass thickness corrected, 50$\times$, NA=0.5). Using this method, we can create tweezer arrays with roughly 1000 individual tweezers (Extended Data Fig.~\ref{fig_large_tweezer_arrays}). When calculating the phase hologram, we improve trap homogeneity by pre-compensating for the variation in diffraction efficiency across the tweezer array (roughly given by $\sinc^2 (\frac{\pi}{2}(\theta_\text{trap}/\theta_\text{max}))$ where $\theta$ denotes the deflection angle from zeroth order).
We also use the phase control of our SLM to correct for optical aberrations on tweezers within the experimentally-used field of view at the plane of atoms (Extended Data Fig.~\ref{fig_tweezer_aberrations}). Aberrations reduce the peak intensity of focal spots (characterized by the Strehl ratio), and correspondingly reduce the light shift of our tweezers on the atoms. By measuring these light shifts as we scan several low-order Zernike polynomials, we quantify and correct for various aberrations in our optical system. Using this method, we compensate for 70 milliwaves of aberrations, observe a total increase of $18\%$ in our trap intensity (Extended Data Fig.~\ref{fig_tweezer_aberrations}c), and measure a corresponding reduction in the range of trap frequencies (Extended Data Fig.~\ref{fig_tweezer_aberrations}d). Aberration correction additionally allows us to place tweezers closer together (minimum separation 3~$\mu$m) to reach larger blockade ranges $R_b/a$.
Tweezers in the array have waists of $\sim 900$~nm, trap depths of $\sim 2\pi \times 17$~MHz, and radial trap frequencies of $\sim 2\pi \times 80$~kHz. In each experimental cycle, the tweezers are loaded from a magneto-optical trap (MOT) with uniform loading probabilities of 50--60\% after 50--100~ms loading time.\\
\noindent\textbf{Atom Rearrangement}\\
Atoms are rearranged using an additional set of dynamically moving tweezers, which are overlaid on top of the SLM tweezer array. These movable tweezers are generated by a separate 809-nm laser source (DBR from Photodigm and tapered amplifier from MOGLabs), and are steered with a pair of independently-controlled crossed acousto-optic deflectors (AODs) (AA Opto Electronic DTSX-400). Both AODs are driven by an arbitrary waveform which is generated in real time using our home-built waveform generation software and an arbitrary waveform generator (AWG) (M4i.6631-x8 by Spectrum Instrumentation). Dynamically changing the RF frequency allows for continuous steering of beam positions, and multi-frequency waveforms allow for multiple moving tweezers to be created in parallel \cite{Endres2016}.
While many 2D sorting protocols have been described previously \cite{AntoineAssembly2016, AhnSorting2017, MingshengZhan2DSorting, Birkl2019, Antoine2DSorting}, we implement a novel protocol which is designed to leverage parallel movement of multiple atoms simultaneously. More specifically, we create a row of moving traps which scans upwards along the SLM tweezer array to move one atom within each column up in parallel. This is accomplished by scanning a single frequency component on the vertical AOD to move from the bottom to the top of the SLM array, during which individual frequency components are turned on and off within the horizontal AOD to create and remove tweezers at the corresponding columns. This protocol is designed for SLM tweezer arrays in which traps are grouped into columns and rows. While this does constrain the possible geometries, most lattice geometries of interest can still be defined on a subset of points along fixed columns and rows. \\
\noindent\textbf{Rearrangement Algorithm}\\
Here we detail the rearrangement algorithm, which is illustrated in Extended Data Fig.~3. It operates on an underlying rectangular grid of rows and columns, where the SLM traps correspond to vertices of the grid. We pre-program a set of `target traps' that we aim to fill.
\emph{Pre-sorting:} We begin by ensuring that each column contains a sufficient number of atoms to fill the target traps in that column. In each experimental cycle, due to the random loading throughout the array, some columns may contain excess atoms while other columns may lack a sufficient number of atoms. Accordingly, we apply a `pre-sorting' procedure in which we move atoms between columns. To fill a deficient column $j$, we take atoms from whichever side of $j$ has a larger surplus.
We identify which atoms to take by finding the nearest atoms from the surplus side which are in rows for which column $j$ has an empty trap.
We then perform parallel horizontal sorting to move these atoms into the empty traps of $j$ (not all surplus atoms need to be from the same source column).
If the one-side surplus is insufficient to fill column $j$, then we move as many surplus atoms as possible from this one side and leave $j$ deficient. We then proceed to the next deficient column, and cycle through until all columns have sufficient atoms.
In typical randomly loaded arrays, this process takes a small number of atom moves compared to the total number of moves needed for sorting. This specific algorithm can fail to properly distribute atoms between columns due to lack of available atoms, but these failures are rare and do not limit the experimental capabilities.
\emph{Ejection:} After pre-sorting, we eject excess atoms in parallel by scanning the vertical AOD frequency downward, beginning at a row in which we want to pick up an atom, and ending below the bottom row of the array.
In each downward scan, we eject a single atom from each column containing excess atoms; we repeat this process until all excess atoms are ejected.
\emph{Parallel sorting within columns:} After pre-sorting and ejection, each column has the correct number of atoms to fill all of its target traps by moving atoms up/down within the column. We now proceed to shuffle the $i^{\text{th}}$-highest loaded atoms to the $i^{\text{th}}$-highest target traps.
As the atoms cannot move through each other, in a single vertical scan atoms are moved as close as possible to their target locations, reaching their targets unless they are blocked by another atom.
We repeat upward/downward scans until all atoms reach their target locations. \\
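The within-column shuffling step admits a simple model: because atoms cannot pass through each other, the $i^{\text{th}}$-highest loaded atom must end at the $i^{\text{th}}$-highest target trap. A minimal sketch of this assignment (illustrative only; the function name and row convention are ours, not the authors' control code):

```python
def assign_targets(loaded_rows, target_rows):
    """Within one column, match the i-th-highest loaded atom to the
    i-th-highest target trap (atoms cannot pass through each other).
    Rows are indexed with 0 = top; assumes equal counts after
    pre-sorting and ejection. Returns (from_row, to_row) moves.
    Illustrative sketch, not the authors' code."""
    assert len(loaded_rows) == len(target_rows)
    return [(src, dst)
            for src, dst in zip(sorted(loaded_rows), sorted(target_rows))
            if src != dst]

# Example: atoms loaded at rows {0, 3, 4}, targets at rows {1, 2, 3}.
print(assign_targets([0, 3, 4], [1, 2, 3]))  # -> [(0, 1), (3, 2), (4, 3)]
```

The repeated upward/downward scans then realize these moves, stopping short of a target only when blocked by another atom.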
\noindent\textbf{Rearrangement Parameters and Results}\\
When using moving tweezers to pick up and drop off atoms in the SLM traps, the moving tweezers ramp on/off over $15~\mu$s while positioned to overlap with the corresponding SLM trap. The moving tweezers are approximately twice as deep as the static traps, and move atoms between SLM traps with a speed of 75~$\mu$m/ms. Typical rearrangement protocols take a total of 50--100~ms to implement in practice, depending on the size of the target array and the random initial loading.
Alignment of the AOD traps onto the SLM array is pre-calibrated by measuring both trap arrays on a monitor CMOS camera and tuning the AOD frequencies to match positions with traps from the SLM array.
A single round of rearrangement results in typical filling fractions of $\sim 98.5\%$ across all target traps in the system. This is limited primarily by the finite vacuum-limited lifetime ($\sim$~10~s) and the duration of the rearrangement procedure. To increase filling fractions, we perform a second round of rearrangement (having skipped ejection in the first round to keep excess atoms for the second round). Since the second round of rearrangement only needs to correct for a small number of defects, it requires far fewer moves and can be performed more quickly, resulting in less background loss. With this approach, we achieve filling fractions of $\sim 99.2\%$ over more than 200 sites, with a total experimental cycle time of $400$~ms.\\
\noindent\textbf{Rydberg Laser System}\\
Our Rydberg laser system is an upgraded version of a previous setup \cite{AtomArrayCats2019}. The 420-nm laser is a frequency-doubled Ti:Sapphire laser (M Squared, 15-W pump). We stabilize the laser frequency by locking the fundamental to an upgraded ultra-low expansion (ULE) reference cavity (notched cylinder design from Stable Laser Systems), with finesse $\mathcal{F}=30,000$ at 840~nm. The 1013-nm laser source is an external-cavity diode laser (Toptica DL Pro), which is locked to the same reference cavity ($\mathcal{F} = 50,000$ at 1013~nm). To suppress high-frequency phase noise from this diode laser, we use the transmitted light through the cavity, which is filtered by the narrow cavity transmission spectrum ($30$~kHz linewidth) \cite{AtomArrayPRL2018}. This filtered light is used to injection-lock another laser diode, whose output is subsequently amplified to 10 W by a fiber amplifier (Azur Light Systems).
Using beam shaping optics to homogeneously illuminate the atom array with both Rydberg lasers, we achieve single-photon Rabi frequencies of $(\Omega_\text{420}, \Omega_\text{1013}) = 2\pi \times (160, 50)~$MHz. We operate with an intermediate state detuning $\delta = 2\pi \times 1$~GHz, resulting in two-photon Rabi frequency $\Omega = \Omega_\text{420} \Omega_\text{1013} / 2\delta \sim 2\pi \times 4$~MHz. Small inhomogeneities in the Rydberg beams result in Rabi frequency variations of $\sim 2\%$ RMS and $\sim 6\%$ peak-to-peak across the array.
With these conditions, we estimate an off-resonant scattering rate of $1/(20~\mu$s) for atoms in $\ket{g}$ and $1/(150~\mu$s) for atoms in $\ket{r}$ at peak power.\\
\noindent\textbf{Rydberg Beam Shaping}\\
We illuminate our 2D atom array with counter-propagating Rydberg laser beams from each side. Instead of using elliptical Gaussian beams, we shape both Rydberg excitation beams into one-dimensional top-hats (light sheets) to homogeneously illuminate the plane of atoms (Extended Data Fig.~\ref{fig_tophat}). To ensure homogeneous illumination over the entire array, we define our target field profile in the plane of the atoms with both uniform amplitude cross section and flat phase profile. Using a single phase-only SLM in the Fourier plane to control both phase and amplitude in the image plane is inherently limited in efficiency; therefore, in practice, we compromise between optimizing hologram efficiency and beam homogeneity. We generate these holograms using the conjugate gradient minimization algorithm (Extended Data Fig.~\ref{fig_tophat}c)\cite{Bowman:17}. In all experiments in this work, we use 1D top-hat beams with a flat-width of $105~\mu$m and a perpendicular Gaussian width of $25~\mu$m. The conversion efficiencies into the top-hat modes are 30\% for 420~nm and 38\% for 1013~nm.
Since holographic beam shaping relies on the intricate interplay of different high spatial frequency components in the light field, it is extremely sensitive to optical aberrations.
We correct for all aberrations up to the window of our vacuum chamber by measuring the amplitude and phase of the electric field as it propagates through the optical beampath (Extended Data Fig.~\ref{fig_tophat}a,b) \cite{Zupancic:16}. We do so by picking off a small portion of the Rydberg beam and observing it on a camera with small pixel size and with sensor cover removed for high-fidelity beam characterization (Imaging Source DMM 27UP031-ML). In this way, we reduce the wavefront error in our beam down to $\lambda/100$, and also use the measured field profile as the starting guess in our hologram generation algorithm (Extended Data Fig.~\ref{fig_tophat}~a,b). Furthermore, by imaging the top-hat beams we also correct for remaining inhomogeneities by updating the input of our optimization algorithm (Extended Data Fig. \ref{fig_tophat}e,f). Due to aberrations and imperfections of the vacuum windows, we observe slightly larger intensity variations on the atoms than expected ($\sim 3\%$ RMS, $\sim 10\%$ peak-to-peak). \\
\noindent\textbf{Rydberg Pulses}\\
After initializing our atoms in the ground state $|g\rangle$, the tweezer traps are turned off for a short time ($<$5~$\mu$s) during which we apply a Rydberg pulse. The pulse consists of a time-dependent Rabi frequency $\Omega(t)$, time-dependent detuning $\Delta(t)$, and a relative instantaneous phase $\phi(t)$. This is implemented by controlling the amplitude, frequency, and phase of the 420-nm laser using a tandem AOM system, similar to what is described previously \cite{AtomArrayCats2019}.\\
\noindent\emph{Quasi-Adiabatic Sweeps:} To prepare many-body ground states with high fidelity, we use an optimized quasi-adiabatic pulse shape (Fig.~\ref{fig_checkerboard}a). The coupling $\Omega(t)$ is initially ramped on linearly at large fixed negative detuning, held constant during the detuning sweep $\Delta(t)$, and finally ramped down linearly at large fixed positive detuning. The detuning sweep $\Delta(t)$ consists of a cubic spline interpolation between five points: initial detuning, final detuning, an inflection point where the slope reaches a minimum, and two additional points that define the duration of the slow part of the sweep. The sweep used for finding perfect checkerboard ground state probabilities (Fig.~\ref{fig_checkerboard}e) was obtained by optimizing the parameters of the cubic spline sweep to maximize the correlation length on a 12$\times$12 (144 atoms) array. The sweep used in detection of the star and striated phases was optimized based on maximizing their respective order parameters. In particular, the inflection point was chosen to be near the position of the minimum gap in these sweeps in order to maximize adiabaticity. \\
\noindent\emph{Linear Sweeps:} To probe the phase transition into the checkerboard phase (Fig.~\ref{fig_kzm}), we use variable-endpoint linear detuning sweeps in which $\Omega$ is abruptly turned off after reaching the endpoint.
This ensures that projective readout happens immediately after the end of the linear sweep instead of allowing time for further dynamics,
and is essential for keeping the system within the quantum Kibble-Zurek regime. Linear sweeps are done from $\Delta = -16$~MHz to $14$~MHz ($\Delta/\Omega = -3.7$ to $3.3$) at sweep rates $s$ = 15, 21, 30, 42, 60, 85, and 120 MHz/$\mu$s. Data for locating the quantum critical point (Extended Data Fig.~\ref{fig_sus}a) is taken from the slowest of these sweeps ($s = 15$~MHz/$\mu$s) to remain as close as possible to the ground state. For mapping out the 2D phase diagram (Fig.~\ref{fig_phases}), we use the same variable-endpoint linear sweeps at fixed sweep rate $s = 12$~MHz/$\mu$s, except that $\Omega$ is ramped down over $200~$ns after reaching the endpoint.
\\
\noindent\textbf{State Detection}\\
At the end of the Rydberg pulse, we detect the state of atoms by whether or not they are recaptured in our optical tweezers. Atoms in $\ket{g}$ are recaptured and detected with fidelity $99\%$, limited by the finite temperature of the atoms and collisions with background gas particles in the vacuum chamber.
Atoms excited to the Rydberg state are detected as a loss signal due to the repulsive potential of the optical tweezers on $|r \rangle$. However, the finite Rydberg state lifetime\cite{RydbergProperties2009} ($\sim 80~\mu$s for 70S$_{1/2}$) leads to a probability of $\sim 15\%$ for $|r\rangle$ atoms to decay to $|g\rangle$ and be recaptured by the optical tweezers. In our previous work \cite{AtomArrayCats2019}, we increased tweezer trap depths immediately following the Rydberg pulse to enhance the loss signal for atoms in $\ket{r}$. In 2D, this approach is less effective because atoms which drift away from their initial traps can still be recaptured in a large 3D trapping structure created by out-of-plane interference of tweezers.
Following an approach similar to what has been previously demonstrated\cite{Saffman2019}, we increase the Rydberg detection fidelity using a strong microwave (MW) pulse to enhance the loss of atoms in $|r\rangle$ while leaving atoms in $|g\rangle$ unaffected. The MW source (Stanford Research Systems SG384) is frequency-tripled to $6.9$~GHz and amplified to 3 W (Minicircuits, ZVE-3W-183+). The MW pulse, containing both $6.9$~GHz and harmonics, is applied on the atoms using a microwave horn for $100$~ns. When applying a Rydberg $\pi$-pulse immediately followed by the MW pulse, we observe loss probabilities of $98.6(4)\%$. Since this measurement includes both error in the $\pi$-pulse as well as detection errors, we apply a second Rydberg $\pi$-pulse after the MW pulse, which transfers most of the remaining ground state population into the Rydberg state. In this configuration, we observe $99.1(4)\%$ loss probability, which is our best estimate for our Rydberg detection fidelity (Extended Data Fig.~\ref{fig_mw_ionization}). We find that the loss signal is enhanced by the presence of both MW fundamental and harmonic frequencies.\\
\noindent\textbf{Coarse-Grained Local Staggered Magnetization}\\
We define the coarse-grained local staggered magnetization for a site $i$ with column and row indices $a$ and $b$, respectively, as:
$$m_i = \frac{(-1)^{a+b}}{N_i} \sum_{\langle j, i \rangle} (n_i - n_j)$$ where $j$ is summed over nearest neighbors of site $i$ and $N_i$ is the number of such nearest neighbors (4 in the bulk, 3 along the edges, or 2 on the corners).
The value of $m_i$ ranges from $-1$ to 1, with the extremal values corresponding to the two possible perfect antiferromagnetic orderings locally on site $i$ and its nearest neighbors (Extended Data Fig.~\ref{fig_local_stagg}a,b).
The two-site correlation function for $m_i$ can then be defined as an average over experiment repetitions $G^{(2)}_m(k,l) = \frac{1}{N_{(k,l)}} \sum_{i,j} (\langle m_i m_j\rangle - \langle m_i\rangle \langle m_j\rangle)$, where the sum is over all pairs of sites $i, j$ separated by a relative lattice distance of $\mathbf{x} = (k, l)$ sites and normalized by the number of such pairs $N_{(k,l)}$ (Extended Data Fig.~\ref{fig_local_stagg}c). We obtain the correlation length $\xi$ by fitting an exponential decay to the radially averaged $G^{(2)}_m(k,l)$ (Extended Data Fig.~\ref{fig_local_stagg}d). The coarse-grained local staggered magnetization $m_i$ is defined such that the corresponding $G^{(2)}_m(k,l)$ is isotropic (Extended Data Fig.~\ref{fig_local_stagg}c), which makes for natural radial averaging. This radial average captures correlations across the entire array better than purely horizontal or vertical correlation lengths $\xi_H$ and $\xi_V$, which are more sensitive to edge effects.\\
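The definition of $m_i$ can be sketched directly (illustrative code, not the authors' analysis pipeline); on a perfect checkerboard every site attains an extremal value $|m_i| = 1$:

```python
import numpy as np

def staggered_m(n):
    """Coarse-grained local staggered magnetization m_i for a 2D array
    of Rydberg occupations n (0/1), following the definition in the
    text: m_i = (-1)^(a+b)/N_i * sum_j (n_i - n_j) over the N_i nearest
    neighbors j (4 in the bulk, 3 on edges, 2 on corners).
    Here a is the column index and b the row index; n is indexed n[b, a].
    Illustrative sketch, not the authors' analysis code."""
    rows, cols = n.shape
    m = np.zeros(n.shape, dtype=float)
    for a in range(cols):
        for b in range(rows):
            nbrs = [(b + db, a + da)
                    for db, da in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= b + db < rows and 0 <= a + da < cols]
            diff = sum(n[b, a] - n[bb, aa] for bb, aa in nbrs)
            m[b, a] = (-1) ** (a + b) * diff / len(nbrs)
    return m

# Perfect checkerboard: every site gives |m_i| = 1.
checker = np.indices((4, 4)).sum(axis=0) % 2
print(np.abs(staggered_m(checker)))
```

The two-site correlator $G^{(2)}_m$ then follows by averaging products $m_i m_j$ over snapshots at fixed relative displacement.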
\noindent\textbf{Determination of the Quantum Critical Point}\\
To accurately determine the location of the quantum critical point $\Delta_c$ for the transition into the checkerboard phase, we measure mean Rydberg excitation $\langle n \rangle$ vs. detuning $\Delta$ for a slow linear sweep with sweep rate $s = 15$~MHz/$\mu$s (Extended Data Fig.~\ref{fig_sus}a). To smooth the measured curve, we fit a polynomial for $\langle n \rangle$ vs. $\Delta$ and take its numerical derivative to identify the peak of the susceptibility $\chi$ as the critical point\cite{SachdevQPT} (Extended Data Fig.~\ref{fig_sus}b).
Small oscillations in $\langle n \rangle$ result from the linear sweep not being perfectly adiabatic. To minimize the effect of this on our fitting, we use the lowest-degree polynomial (cubic) whose derivative has a peak, and choose a fit window in which the reduced chi-squared metric indicates a good fit. Several fit windows around $\Delta/\Omega = 0$ to 2 give good cubic fits, and we average results from each of these windows to obtain $\Delta_c/\Omega$ = 1.12(4).
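The fitting procedure can be sketched on synthetic data (the toy crossover curve and noise level below are ours, not the measured data): fit a cubic to $\langle n \rangle(\Delta)$ and locate the susceptibility peak where the cubic's second derivative vanishes:

```python
import numpy as np

# Illustrative sketch of the critical-point extraction on synthetic data.
rng = np.random.default_rng(0)
delta = np.linspace(0.0, 2.0, 41)                  # Delta/Omega fit window
n_mean = 0.25 / (1 + np.exp(-4 * (delta - 1.1)))   # toy crossover curve
n_mean += rng.normal(scale=2e-3, size=delta.size)  # measurement noise

# Cubic fit; chi = d<n>/dDelta = 3 c3 x^2 + 2 c2 x + c1 peaks where
# chi' = 0, i.e. at x = -c2 / (3 c3) (requires c3 < 0 for a maximum).
c3, c2, c1, c0 = np.polyfit(delta, n_mean, 3)
delta_c = -c2 / (3 * c3)
print(f"estimated critical point Delta_c/Omega ~ {delta_c:.2f}")
```

In the experiment this is repeated over several fit windows and the results averaged.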
We also numerically extract the critical point for a system with numerically-tractable dimensions of 10$\times$10. Using the density-matrix renormalization group (DMRG) algorithm, we evaluate $\langle n \rangle$ as a function of detuning $\Delta$, and then take the derivative to obtain a peak of the susceptibility at $\Delta_c/\Omega = 1.18$ (Extended Data Fig.~\ref{fig_sus}c,d). To corroborate the validity of our experimental fitting procedure, we also fit cubic polynomials to the DMRG data and find that the extracted critical point is close to the exact numerical value (Extended Data Fig.~\ref{fig_sus}d). This numerical estimate of the critical point for a 10$\times$10 array is consistent with the experimental result on a larger $16\times 16$ array. Moreover, our experiments on arrays of different sizes show that $\Delta_c/\Omega$ does not vary significantly between $12\times 12$, $14\times 14$, and $16\times 16$ arrays (Extended Data Fig. \ref{fig_collapse_distance}b).\\
\noindent\textbf{Data Collapse for Universal Scaling}\\
Optimizing the universal collapse of rescaled correlation length $\tilde{\xi}$ vs. rescaled detuning $\tilde{\Delta}$ requires defining a measure of the distance between rescaled curves for different sweep rates $s_i$. Given $\tilde{\xi}^{(i)}_j$ and $\tilde{\Delta}^{(i)}_j$, where the index $i$ corresponds to sweep rate $s_i$ and $j$ labels sequential data points along a given curve, we define a distance \cite{Seno2001}
\begin{equation}
D = \sqrt{\frac{1}{N}\sum_i \sum_{i'\neq i} \sum_j \left \lvert\tilde{\xi}^{(i')}_j - f^{(i)}\left(\tilde{\Delta}^{(i')}_j\right)\right \rvert^2}.
\end{equation}
The function $f^{(i)}(\tilde{\Delta})$ is the linear interpolation of $\tilde{\xi}^{(i)}_j$ vs. $\tilde{\Delta}^{(i)}_j$, while $N$ is the total number of terms in the three nested sums. The sum over $j$ only includes points that fall within the domain of overlap of all data sets, avoiding the problem of linear interpolation beyond the domain of any single data set. Defined in this way, the collapse distance $D$ measures all possible permutations of how far each rescaled correlation growth curve is from curves corresponding to other sweep rates.
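A direct implementation of this distance (illustrative sketch; function and variable names are ours) confirms that identical rescaled curves give $D = 0$:

```python
import numpy as np

def collapse_distance(curves):
    """Collapse distance D between rescaled curves, as in the Methods:
    `curves` is a list of (x, y) arrays (rescaled detuning, rescaled
    correlation length), one per sweep rate. Each curve i is linearly
    interpolated and compared against the points of every other curve
    i', restricted to the common x-domain of all data sets.
    Illustrative sketch, not the authors' code."""
    lo = max(np.min(x) for x, _ in curves)
    hi = min(np.max(x) for x, _ in curves)
    total, count = 0.0, 0
    for i, (xi, yi) in enumerate(curves):
        for ip, (xp, yp) in enumerate(curves):
            if ip == i:
                continue
            mask = (xp >= lo) & (xp <= hi)
            f_i = np.interp(xp[mask], xi, yi)  # curve i evaluated at curve i' points
            total += np.sum((yp[mask] - f_i) ** 2)
            count += int(mask.sum())
    return np.sqrt(total / count)

# Two identical curves collapse perfectly.
x = np.linspace(0, 1, 11)
print(collapse_distance([(x, x**2), (x, x**2)]))  # -> 0.0
```

Minimizing $D$ over $(\Delta_c, \nu)$ then yields the optimal data collapse.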
Applied to our experimental data, $D$ is a function of both the location of the critical point $\Delta_c$ and the critical exponent $\nu$ (Extended Data Fig. \ref{fig_collapse_distance}a). Using the independently measured $\Delta_c/\Omega = 1.12(4)$, we obtain $\nu = 0.62(4)$ for optimal data collapse, and illustrate in particular the better collapse for this value than for other values of $\nu$ (Extended Data Fig.~\ref{fig_collapse_distance}c-e). The quoted uncertainty is dominated by the corresponding uncertainty of the extracted $\Delta_c/\Omega$, rather than by the precision of finding the minimum of $D$ for a given $\Delta_c/\Omega$. Our experiments give consistent values of $\Delta_c/\Omega$ and $\nu$ for systems of size 12$\times$12, 14$\times$14, and 16$\times$16 (Extended Data Fig.~\ref{fig_collapse_distance}b).\\
\noindent\textbf{Order Parameters for Many-Body Phases}\\
We construct order parameters to identify each phase using the Fourier transform to quantify the amplitude of the observed density-wave ordering. We define the symmetrized Fourier transform $ \tilde{\mathcal{F}} (k_1,k_2) = \langle \mathcal{F}(k_1,k_2) + \mathcal{F}(k_2,k_1) \rangle/2$ to take into account the $\textit{C}_4$ rotation symmetry between possible ground-state orderings for some phases. For the star phase, the Fourier amplitude $\tilde{\mathcal{F}} (\pi, \pi/2)$ is a good order parameter because ordering at $\mathbf{k} = (\pi, \pi/2)$ is unique to this phase. The striated phase, on the other hand, shares its Fourier peaks at $\mathbf{k}$ = $(\pi, 0)$ and $(0, \pi)$ with the star phase, and its peak at $\mathbf{k}$ = $(\pi, \pi)$ with the checkerboard phase; hence, none of these peaks alone can serve as an order parameter.
We therefore construct an order parameter for the striated phase to be $\tilde{\mathcal{F}} (0, \pi) - \tilde{\mathcal{F}} (\pi/2, \pi)$, which is nonzero in the striated phase and zero in both checkerboard and star. Similarly, the checkerboard shares its $\mathbf{k} = (\pi, \pi)$ peak with the striated phase, so we construct $\tilde{\mathcal{F}} (\pi,\pi) - \tilde{\mathcal{F}} (0, \pi)$ as an order parameter which is zero in the striated phase and nonzero only in checkerboard. \\
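These order parameters can be sketched as follows (illustrative code; the $\sqrt{N_{\text{Rydberg}}}$ normalization of $\mathcal{F}$ used here is an assumption for the sketch, since $\mathcal{F}$ is defined earlier in the paper). A perfect checkerboard gives a positive checkerboard order parameter and a vanishing striated one:

```python
import numpy as np

def sym_fourier(snapshots, k1, k2):
    """Symmetrized Fourier amplitude F~(k1,k2) = <F(k1,k2)+F(k2,k1)>/2,
    averaged over single-shot 0/1 occupation arrays n[b, a] (b = row,
    a = column). The sqrt(N_Rydberg) normalization is an assumption of
    this sketch, not taken from the Methods text."""
    vals = []
    for n in snapshots:
        rows, cols = n.shape
        a, b = np.meshgrid(np.arange(cols), np.arange(rows))
        norm = np.sqrt(max(n.sum(), 1))
        f12 = np.abs(np.sum(n * np.exp(1j * (k1 * a + k2 * b)))) / norm
        f21 = np.abs(np.sum(n * np.exp(1j * (k2 * a + k1 * b)))) / norm
        vals.append(0.5 * (f12 + f21))
    return float(np.mean(vals))

checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
op_striated = sym_fourier([checker], 0, np.pi) - sym_fourier([checker], np.pi / 2, np.pi)
op_checker = sym_fourier([checker], np.pi, np.pi) - sym_fourier([checker], 0, np.pi)
print(op_checker, op_striated)  # checkerboard OP ~ 5.66, striated OP ~ 0
```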
\noindent\textbf{Numerical Simulations of the 2D Phase Diagram}\\
We numerically compute the many-body ground states at different points in the $(\Delta/\Omega, R_b/a)$ phase diagram using the density-matrix renormalization group (DMRG) algorithm \cite{white1992density,white1993density}, which operates in the space of the so-called matrix product state (MPS) ans\"{a}tze.
While originally developed for one-dimensional systems, DMRG can also be extended to two dimensions by representing the 2D system as a winding 1D lattice \cite{stoudenmire2012studying}, albeit with long-range interactions. A major limitation to two-dimensional DMRG is that the number of states required to faithfully represent the ground-state wavefunction has to be increased exponentially with the width of the system in order to maintain a constant accuracy. For our calculations, we employ a maximum bond dimension of $1600$, which allows us to accurately simulate $10\times 10$ square arrays \cite{Rhine2020}. We also impose open boundary conditions in both directions and truncate the van der Waals interactions so as to retain up to third-nearest-neighbor couplings.
The numerical convergence criterion is set by the truncation error, and the system is regarded to be well-converged to its true ground state once this error drops below a threshold of $10^{-7}$. In practice, this was typically found to be achieved after $\mathcal{O}(10^2)$ successive sweeps.
Since the dimensions of the systems studied in Figure~\ref{fig_phases} (13\,$\times$\,13 experimentally and 9\,$\times$\,9 numerically) are both of the form $(4n+1)$\,$\times$\,$(4n+1)$, the two phase diagrams are expected to be similar. In particular, both these system sizes are compatible with the commensurate ordering patterns of the crystalline phases observed in this work, and can host all three phases (at the appropriate $R_b/a$) with the same boundary conditions. Likewise, for extraction of the QCP, we use a 10$\times$10 array as it is the largest numerically accessible square lattice comparable to the 16$\times$16 array used in our study of the quantum phase transition.\\
\noindent\textbf{Mean-Field Wavefunction for the Striated Phase}\\
To understand the origin of the striated phase, it is instructive to start from a simplified model in which we assume that nearest-neighbor sites are perfectly blockaded. Since we always work in a regime where $R_b/a > 1$, this model should also capture the essential physics of the full Rydberg Hamiltonian.
In the classical limit of $\Omega = 0$, the perfect checkerboard state has an energy per site of $-\Delta/2 + V (\sqrt{2}a) + V (2a)$, with $V(x)$ being the interaction between sites at a distance $x$, whereas the corresponding energy for the star-ordered state is $-\Delta/4$ (neglecting interactions for $x>2a$). Accordingly, there is a phase transition between the checkerboard and star phases when $\Delta = 4 [V (\sqrt{2}a) + V (2a)]$. On the other hand, for the same density of Rydberg excitations, the striated phase has a classical energy per site of $-\Delta/4 + V (2a)/2$, which is always greater than that of the star phase; hence, striated ordering never appears in the classical limit.
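The classical energy comparison above can be checked numerically. In the sketch below we take $V(x) = C_6/x^6$ truncated beyond $2a$, in units where $V(a) = 1$ (an illustrative choice, not the experimental parameters):

```python
# Classical (Omega = 0) energies per site from the text, with van der
# Waals interactions V(x) = C6/x^6 truncated beyond 2a, in units where
# V(a) = 1 and distances measured in units of a. Illustrative sketch.
def V(x_over_a):
    return x_over_a ** -6

def e_checkerboard(delta):   # per site
    return -delta / 2 + V(2 ** 0.5) + V(2.0)

def e_star(delta):           # per site
    return -delta / 4

def e_striated(delta):       # per site, classical limit
    return -delta / 4 + V(2.0) / 2

# Checkerboard and star are degenerate at delta_c = 4 (V(sqrt(2)a) + V(2a)),
# and the classical striated energy always exceeds the star energy.
delta_c = 4 * (V(2 ** 0.5) + V(2.0))
print(abs(e_checkerboard(delta_c) - e_star(delta_c)) < 1e-12)  # -> True
print(e_striated(1.0) > e_star(1.0))                           # -> True
```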
At finite $\Omega$, however, the striated phase emerges due to a competition between the third-nearest-neighbor interactions and the second-order energy shift upon dressing a ground state atom off-resonantly with the Rydberg state. We can thus model the ground state of the striated phase as a product state, where (approximately) $1/2$ of the atoms are in the ground state, $1/4$ of the atoms are in the Rydberg state, and the remaining $1/4$ are in the ground state with a weak coherent admixture of the Rydberg state. A general mean-field ansatz for a many-body wavefunction of this form is given by
\begin{alignat}{1}
\lvert \Psi^{}_\textsc{str} (a_1^{}, a^{}_2) \rangle = &\bigotimes_{\mathbf{i} \in A_1} \left ( \cos a_1 \lvert g \rangle_\mathbf{i} + \sin a_1 \lvert r \rangle_\mathbf{i} \right)\\
\nonumber&\bigotimes_{\mathbf{i} \in A_2} \left ( \cos a_2 \lvert g \rangle_\mathbf{i} + \sin a_2 \lvert r \rangle_\mathbf{i} \right) \bigotimes_{\mathbf{j} \in B} \lvert g \rangle_\mathbf{j},
\end{alignat}
where $A_1$ and $A_2$ represent the two sublattices of the (bipartite) $A$ sublattice, and $a_{1,2}$ are variational parameters. If $a_1=a_2$, then our trial wavefunction simply represents a checkerboard state, but if $a_1\ne a_2$, this state is \textit{not} of
the checkerboard type, and leads to the striated phase.
Based on this ansatz, we can now explicitly see how the striated phase may become energetically favorable in the presence of a nonzero $\Omega$. Consider the atoms on the partially excited sublattice to be in the superposition $\lvert g \rangle + [\Omega/(4V(\sqrt{2}a)-\Delta)]\lvert r \rangle$; this describes the state of the atoms on the $(1,1)$ sublattice in the notation of Fig.~\ref{fig_striated}. The net energy per site of the system is then
\begin{equation*}
-\frac{\Delta}{4} + \frac{V(2a)}{2} -\frac{\Omega^2}{ 4\,(4V(\sqrt{2}a) -\Delta)} +\frac{\Omega^2\, V(\sqrt{2}a)}{ 2\,(4V(\sqrt{2}a) -\Delta)^2}
\end{equation*}
where the third and fourth terms are the second-order energy shift and mean-field interaction shift, respectively.
From this expression, we observe that if the energy gained from the dressing (these last two terms) is larger than $V(2a)/2$, then the striated phase prevails over the star phase.\\
\noindent\textbf{Dynamical Probe of the Striated Phase}\\
We prepare striated ordering using an optimized cubic spline sweep along $R_b/a = 1.47$, ending at $\Delta/\Omega = 2.35$. Immediately after this sweep, the system is quenched to detuning $\Delta_q$ and relative laser phase $\phi_q$. We quench at a lower Rabi frequency $\Omega_q = \Omega/4 \approx 2\pi \times 1$~MHz to improve the resolution of this interaction spectroscopy. For the chosen lattice spacing, the interaction energy between diagonal excitations is $2\pi \times 5.3~$MHz. The reference phase for the atoms $\phi = 0$ is set by the instantaneous phase of the Rydberg coupling laser at the end of the sweep into striated ordering. In the Bloch sphere picture, $\phi = 0$ corresponds to the $+x$ axis, so the wavefunctions on (0,0) and (1,1) sublattices correspond to vectors pointing mostly up or mostly down with a small projection of each along the $+x$ axis. In the same Bloch sphere picture, quenching at $\phi_q = \pi/2$ or $-\pi/2$ corresponds to rotations around the $+y$ or $-y$ axes (Fig. \ref{fig_striated}a).\\
To resolve the local response of the system, we use high-order correlators which are extracted from single-shot site-resolved readout. In particular, we define an operator $\hat{\mathcal{O}}_i^{(d)}$ on the eight atoms surrounding site $i$. This operator projects the neighboring atoms into configurations in which all four nearest atoms are in $\ket{g}$ and exactly $d$ of the diagonal neighbors are in $\ket{r}$. Specifically, the operator $\hat{\mathcal{O}}_i^{(d)}$ decomposes into a projector $\hat{A}_i$ on the four nearest neighboring atoms and $\hat{B}_i^{(d)}$ on the four diagonal neighbors, according to $\hat{\mathcal{O}}_i^{(d)} = \hat{A}_i \hat{B}_i^{(d)}$. Defining $\bar{n}_i = |g\rangle_i\langle g|$ and $n_i = |r\rangle_i\langle r|$, the nearest neighbor projector is written as $\hat{A}_i = \prod_{\langle j, i \rangle } \bar{n}_j$, where $\langle . \rangle$ denotes nearest neighbors. The projector $\hat{B}_i^{(d)}$ sums over all configurations of the diagonal neighbors (indexed $k_1, k_2, k_3, k_4$) with $d$ excitations:
\begin{align}
\hat{B}_i^{(4)} &= n_{k_1}n_{k_2}n_{k_3}n_{k_4} \\
\hat{B}_i^{(3)} &= \bar{n}_{k_1}n_{k_2}n_{k_3}n_{k_4} + n_{k_1}\bar{n}_{k_2}n_{k_3}n_{k_4} + \ldots \\
\hat{B}_i^{(2)} &= \bar{n}_{k_1}\bar{n}_{k_2}n_{k_3}n_{k_4} + \bar{n}_{k_1}{n}_{k_2}\bar{n}_{k_3}n_{k_4} + \ldots
\end{align}
These operators are used to construct the conditional Rydberg density $$P^{(d)} = \frac{\sum_i \langle n_i \hat{\mathcal{O}}_i^{(d)}\rangle}{\sum_i \langle \hat{\mathcal{O}}_i^{(d)} \rangle}$$ which measures the probability of Rydberg excitation on site $i$ surrounded by neighboring-atom configurations for which $\hat{\mathcal{O}}_i^{(d)}=1$.
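A sketch of the $P^{(d)}$ estimator acting on 0/1 occupation snapshots (bulk sites only, for simplicity; illustrative code, not the authors' analysis pipeline). On a perfect classical striated pattern, the excited sublattice sites have $d = 0$ and give $P^{(0)} = 1$, while the $d = 4$ sites are in $|g\rangle$, giving $P^{(4)} = 0$:

```python
import numpy as np

def conditional_density(snapshots, d):
    """P^(d): Rydberg probability on site i conditioned on all four
    nearest neighbors being in |g> and exactly d of the four diagonal
    neighbors being in |r>, estimated from 0/1 occupation snapshots.
    Bulk sites only, for simplicity. Illustrative sketch."""
    num = den = 0
    for n in snapshots:
        rows, cols = n.shape
        for b in range(1, rows - 1):
            for a in range(1, cols - 1):
                nn = n[b-1, a] + n[b+1, a] + n[b, a-1] + n[b, a+1]
                diag = (n[b-1, a-1] + n[b-1, a+1]
                        + n[b+1, a-1] + n[b+1, a+1])
                if nn == 0 and diag == d:
                    den += 1
                    num += n[b, a]
    return num / den if den else float('nan')

# Classical striated pattern: excitations on the (0,0) sublattice.
striated = np.zeros((8, 8), dtype=int)
striated[::2, ::2] = 1
print(conditional_density([striated], 0),
      conditional_density([striated], 4))  # -> 1.0 0.0
```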
To quantify coherences, we measure these conditional probabilities on their corresponding resonances, after a fixed quench with variable quench phase $\phi_q$. For a single particle driven by the Hamiltonian $H=\Omega (\cos \phi_q \sigma_x + \sin \phi_q \sigma_y)/2 + \Delta \sigma_z/2$ for time $\tau$, the resulting Heisenberg evolution is given by $\sigma_z' = U^\dagger \sigma_z U$, where $U = e^{-i H \tau}$. The resulting operator can be expressed as
\begin{align}
\sigma_z' &= \tilde{\Omega} \sin2\alpha (-\sigma_x \sin \phi_q + \sigma_y \cos \phi_q) \\
&+ 2\tilde{\Delta}\tilde{\Omega} \sin^2\alpha (\sigma_x \cos\phi_q + \sigma_y \sin \phi_q) \\
&+ (\cos^2 \alpha - (1 - 2\tilde{\Delta}^2)\sin^2 \alpha) \sigma_z
\end{align}
where $\tilde{\Delta} = \Delta / \sqrt{\Delta^2 + \Omega^2}$, $\tilde{\Omega} = \Omega / \sqrt{\Delta^2 + \Omega^2}$, and $\alpha = \frac{1}{2}\tau \sqrt{\Delta^2+\Omega^2}$.
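The closed-form expression for $\sigma_z'$ can be verified against direct matrix evolution, using $U = \cos\alpha\,\mathbb{1} - i\sin\alpha\,(\hat{n}\cdot\vec{\sigma})$ with $\hat{n} = (\tilde{\Omega}\cos\phi_q, \tilde{\Omega}\sin\phi_q, \tilde{\Delta})$ (a sketch with arbitrary parameter values of our choosing):

```python
import numpy as np

# Check the closed-form Heisenberg evolution of sigma_z under
# H = Omega (cos(phi) sx + sin(phi) sy)/2 + Delta sz/2 for time tau.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

Omega, Delta, phi, tau = 1.3, 0.7, 0.4, 0.9   # arbitrary test values
w = np.sqrt(Delta**2 + Omega**2)
alpha = 0.5 * tau * w
Dt, Ot = Delta / w, Omega / w                 # Delta-tilde, Omega-tilde

# U = exp(-i H tau) = cos(alpha) I - i sin(alpha) (n . sigma)
n_sigma = Ot * (np.cos(phi) * sx + np.sin(phi) * sy) + Dt * sz
U = np.cos(alpha) * I2 - 1j * np.sin(alpha) * n_sigma
sz_heis = U.conj().T @ sz @ U

# Closed form from the text:
sz_closed = (Ot * np.sin(2 * alpha) * (-np.sin(phi) * sx + np.cos(phi) * sy)
             + 2 * Dt * Ot * np.sin(alpha)**2 * (np.cos(phi) * sx + np.sin(phi) * sy)
             + (np.cos(alpha)**2 - (1 - 2 * Dt**2) * np.sin(alpha)**2) * sz)
print(np.allclose(sz_heis, sz_closed))  # -> True
```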
We fit the conditional probabilities $P^{(0)}$ and $P^{(4)}$ as a function of $\phi_q$ (Fig.~\ref{fig_striated}d,e), taking $\Delta$ as the effective detuning from interaction-shifted resonance, and measuring $\langle \sigma_z' \rangle$ as a function of $\phi_q$ to extract the Bloch vector components $\langle \sigma_x \rangle, \langle \sigma_y \rangle, \langle \sigma_z \rangle$ on the two respective sublattices. For the (1,1) sublattice response, we model the evolution averaged over random detunings, due to $\sim 15\%$ fluctuations of the interaction shifts associated with thermal fluctuations in atomic positions, which broaden and weaken the spectroscopic response. For both sublattices we also include fluctuations in the calibrated pulse area ($\sim 10\%$ due to low power used). The extracted fit values are $\sigma_{x,y,z}^{(0,0)} = -0.82(6), 0.25(2), -0.32(4)$, and $\sigma_{x,y,z}^{(1,1)} = -0.46(4), 0.01(1), 0.91(5)$.\\
\noindent\textbf{Acknowledgements} We thank many members of the Harvard AMO community, particularly Elana Urbach, Samantha Dakoulas, and John Doyle for their efforts enabling safe and productive operation of our laboratories during 2020. We thank Hannes Bernien, Dirk Englund, Manuel Endres, Nate Gemelke, Donggyu Kim, Peter Stark, and Alexander Zibrov for discussions and experimental help. We acknowledge financial support from the Center for Ultracold Atoms, the National Science Foundation, the Vannevar Bush Faculty Fellowship, the U.S. Department of Energy, the Office of Naval Research, the Army Research Office MURI, and the DARPA ONISQ program. T.T.W. acknowledges support from Gordon College. H.L. acknowledges support from the National Defense Science and Engineering Graduate (NDSEG) fellowship. G.S. acknowledges support from a fellowship from the Max Planck/Harvard Research Center for Quantum Optics. D.B. acknowledges support from the NSF Graduate Research Fellowship Program (grant DGE1745303) and The Fannie and John Hertz Foundation. W.W.H. is supported by the Moore Foundation’s EPiQS Initiative Grant No. GBMF4306, the NUS Development Grant AY2019/2020, and the Stanford Institute of Theoretical Physics. S.C. acknowledges support from the Miller Institute for Basic Research in Science. R.S. and S.S. were supported by the U.S.~Department of Energy under Grant $\mbox{DE-SC0019030}$. The DMRG calculations were performed using the ITensor Library \cite{itensor}. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.\\
\noindent\textbf{Author contributions}
S.E., T.T.W., H.L., A.K., G.S., A.O., and D.B.
contributed to building the experimental setup, performed the measurements, and analyzed the data. Theoretical analysis was performed by R.S., H.P., W.W.H., and S.C. All work was supervised by S.S., M.G., V.V., and M.D.L. All authors discussed the results and contributed to the manuscript.\\
\noindent\textbf{Competing interests} M.G., V.V., and M.D.L. are co-founders and shareholders of QuEra Computing. A.O. is a shareholder of QuEra Computing.\\
\noindent\textbf{Correspondence and requests for materials} should be addressed to M.D.L.\\
\renewcommand{\figurename}{EXTENDED DATA FIG.}
\setcounter{figure}{0}
\begin{figure*}
\includegraphics{Figure_LargeArrays.png}
\caption{\textbf{Large arrays of optical tweezers.} The experimental platform produces optical tweezer arrays with up to $\sim 1000$ tweezers and $\sim 50\%$ loading probability per tweezer after $100~$ms of MOT loading time. \textbf{a.} Camera image of an array of 34$\times$30 tweezers (1020 traps), including aberration correction. \textbf{b.} Sample image of random loading into this tweezer array, with 543 loaded atoms. Atoms are detected on an EMCCD camera with fluorescence imaging.}
\label{fig_large_tweezer_arrays}
\end{figure*}
\begin{figure*}
\includegraphics{Figure_TweezerAberrations_v4.pdf}
\caption{\textbf{Correcting for aberrations in the SLM tweezer array.} The aberration correction procedure utilizes the orthogonality of Zernike polynomials and the fact that correcting aberrations increases tweezer light shifts on the atoms. To independently measure and correct each aberration type, Zernike polynomials are added with variable amplitude to the SLM phase hologram, with values optimized to maximize tweezer light shifts. \textbf{a.} Two common aberration types: horizontal coma (upper) and primary spherical (lower), for which $\sim 50$~milliwaves compensation on each reduces aberrations and results in higher-depth traps. \textbf{b.} Correcting for aberrations associated with the thirteen lowest order Zernike polynomials. The sum of all polynomials with their associated coefficients gives the total aberrated phase profile in the optical system, which is now corrected (total RMS aberration of $\sim 70$~milliwaves).
\textbf{c.} Trap depths across a $26\times13$ trap array before and after correction for aberrations. Aberration correction results in tighter focusing (higher trap light shift) and improved homogeneity. Trap depths are measured by probing the light shift of each trap on the $\ket{5S_{1/2}, F=2} \to \ket{5P_{3/2}, F'=2}$ transition. \textbf{d.} Aberration correction also results in higher and more homogeneous trap frequencies across the array. Trap frequencies are measured by modulating tweezer depths at variable frequencies, resulting in parametric heating and atom loss when the modulation frequency is twice the radial trap frequency. The measurement after correction for aberrations shows a narrower spectrum and higher trap frequencies (averaged over the whole array).}
\label{fig_tweezer_aberrations}
\end{figure*}
\begin{figure*}
\includegraphics{Figure_Rearrangement_v4.pdf}
\caption{\textbf{Rearrangement protocol.} \textbf{a.} Sample sequence of individual rearrangement steps. There are two pre-sorting moves (1, 2). Move (3) is the single ejection move. Moves (4-6) consist of parallel vertical sorting within each column, including both upward and downward moves. The upper panel illustrates the frequency spectrum of the waveform in the vertical and horizontal AODs during these moves, with the underlying grid corresponding to the calibrated frequencies which map to SLM array rows and columns. \textbf{b.} Spectrograms representing the horizontal and vertical AOD waveforms over the duration of a single vertical frequency scan during a realistic rearrangement procedure for a 26$\times$13 array. The heat-maps show frequency spectra of the AOD waveforms over small time intervals during the scan.}
\label{fig_rearrangement}
\end{figure*}
\begin{figure*}
\includegraphics[width=180mm]{figure_tophat_v1.pdf}
\caption{\textbf{Generating homogeneous Rydberg beams.} \textbf{a.} Measured Gaussian-beam illumination on the SLM for shaping the 420-nm Rydberg beam. A Gaussian fit to this data is used as an input for the hologram optimization algorithm. \textbf{b.} Corrected and measured wavefront error through our optical system, showing a reduction of aberrations to $\lambda/100$. \textbf{c.} Computer-generated hologram for creating the 420-nm top-hat beam. \textbf{d.} Measured light intensity of the 420-nm top-hat beam (top), and the cross section along the region where atoms will be positioned (bottom). Vertical lines denote the 105-$\mu$m region where the beam should be flat. \textbf{e.} Using the measured top-hat intensity, a phase correction is calculated for adding to the initial hologram. \textbf{f.} Resulting top-hat beam after feedback shows significantly improved homogeneity.}
\label{fig_tophat}
\end{figure*}
\begin{figure*}
\includegraphics{Figure_microwaves_v1.pdf}
\caption{\textbf{Characterizing microwave-enhanced Rydberg detection fidelity.} The effect of strong microwave (MW) pulses on Rydberg atoms is measured by preparing atoms in $\ket{g}$, exciting to $\ket{r}$ with a Rydberg $\pi$-pulse, and then applying the MW pulse before de-exciting residual Rydberg atoms with a final Rydberg $\pi$-pulse. (The entire sequence occurs while tweezers are briefly turned off.) \textbf{a.} Broad resonances are observed with varying microwave frequency, corresponding to transitions from $\ket{r} = \ket{70S}$ to other Rydberg states. Note that the transition to $\ket{69P}$ and $\ket{70P}$ are in the range of $10-12$~GHz, and over this entire range there is strong transfer out of $\ket{r}$. Other resonances might be due to multi-photon effects. \textbf{b.} With fixed $6.9$-GHz MW frequency and varying pulse time, there is a rapid transfer out of the Rydberg state on the timescale of several nanoseconds. Over short time-scales, there may be coherent oscillations which return population back to $\ket{r}$, so a $100$~ns pulse is used for enhancement of loss signal of $\ket{r}$ in the experiment.}
\label{fig_mw_ionization}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig_local_stagg.pdf}
\caption{\textbf{Coarse-grained local staggered magnetization.} \textbf{a.} Examples of Rydberg populations $n_i$ after a faster (top) and slower (bottom) linear sweep. \textbf{b.} Corresponding coarse-grained local staggered magnetizations $m_i$ clearly show larger extents of antiferromagnetically ordered domains (dark blue or dark red) for the slower sweep (bottom) compared to the faster sweep (top), as expected from the Kibble-Zurek mechanism. \textbf{c.} Isotropic correlation functions $G^{(2)}_m$ for the corresponding coarse-grained local staggered magnetizations after a faster (top) or a slower (bottom) sweep. \textbf{d.} As a function of radial distance, correlations $G^{(2)}_m$ decay exponentially with a length scale corresponding to the correlation length $\xi$. The two decay curves correspond to faster (orange) and slower (blue) sweeps.
}
\label{fig_local_stagg}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig_sus_v4.pdf}
\caption{\textbf{Extracting the quantum critical point.} \textbf{a.} The mean Rydberg excitation density $\langle n \rangle$ vs. detuning $\Delta/\Omega$ on a 16$\times$16 array. The data is fitted within a window (dashed lines) to a cubic polynomial (red curve) as a means of smoothing the data. \textbf{b.} The peak in the numerical derivative of the fitted data (red curve) corresponds to the critical point $\Delta_c/\Omega = 1.12(4)$ (red shaded regions show uncertainty ranges, obtained from varying fit windows). In contrast, the point-by-point slope of the data (gray) is too noisy to be useful. \textbf{c.} Order parameter $\tilde{\mathcal{F}}(\pi,\pi)$ for the checkerboard phase vs. $\Delta/\Omega$ measured on a 16$\times$16 array with the value of the critical point from \textbf{b.} superimposed (red line), showing the clear growth of the order parameter after the critical point. \textbf{d.} DMRG simulations of $\langle n \rangle$ vs. $\Delta/\Omega$ on a 10$\times$10 array. For comparison against the experimental fitting procedure, the data from numerics is also fitted to a cubic polynomial within the indicated window (dashed lines). \textbf{e.} The point-by-point slope of the numerical data (blue curve) has a peak at $\Delta_c/\Omega = 1.18$ (blue dashed line), in good agreement with the results (red dashed line) from both the numerical derivative of the cubic fit on the same data (red curve) and the result of the experiment. \textbf{f.} DMRG simulation of $\tilde{\mathcal{F}}(\pi,\pi)$ vs. $\Delta/\Omega$, with the exact quantum critical point from numerics shown (red line).}
\label{fig_sus}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig_collapse_distance_v1.pdf}
\caption{\textbf{Optimization of data collapse.} \textbf{a.} Distance $D$ between rescaled correlation length $\tilde{\xi}$ vs. $\tilde{\Delta}$ curves depends on both the location of the quantum critical point $\Delta_c/\Omega$ and on the correlation length critical exponent $\nu$. The independently determined $\Delta_c/\Omega$ (blue line, with uncertainty range in gray) and the experimentally extracted value of $\nu$ (dashed red line, with uncertainty range corresponding to the red shaded region) are marked on the plot. \textbf{b.} Our determination of $\nu$ (red) from data collapse around the independently determined $\Delta_c/\Omega$ (blue) is consistent across arrays of different sizes. \textbf{c-e.} Data collapse is clearly better at the experimentally determined value ($\nu=0.62$) as compared to the mean-field ($\nu=0.5$) or the (1+1)D ($\nu=1$) values. The horizontal extent of the data corresponds to the region of overlap of all rescaled data sets.}
\label{fig_collapse_distance}
\end{figure*}
\clearpage
\newpage
\bibliographystyle{apsrev4-1}
\section{Introduction}
The pair-density-wave (PDW) state is a superconducting (SC) state of matter in which the Cooper pairs have a finite momentum.
Due to the finite momentum carried by the pair, the SC order parameter is modulated periodically in space.
The PDW state has recently received attention because it can explain the layer decoupling observed in the cuprate La$_{2-x}$Ba$_x$CuO$_4$ (LBCO),
the original high $T_c$ superconductor, at the $x=1/8$ anomaly.\cite{Berg-2007}
At this doping the $T_c$ of the three-dimensional material is suppressed to temperatures as low as $4$~K, where the Meissner effect
first appears and the system is in a three-dimensional $d$-wave SC phase.
In contrast, away from $x=1/8$, the SC $T_c$ in LBCO is about 35~K. In spite of the low $T_c$ of the uniform $d$-wave SC state at $x=1/8$,
in this doping regime the CuO planes are already superconducting for a range of temperatures well above $T_c$.\cite{Li-2007,tranquada-2008}
With this in mind, Berg and collaborators\cite{Berg-2007,Berg-2009} showed that this phenomenon can be explained if the CuO planes are in an
inhomogeneous (`striped') SC state, the PDW state, in which charge, spin and SC orders are intertwined with each other.
In this state the SC order parameter oscillates along one direction in the CuO planes, and the average of the SC order parameter
is zero in the CuO planes.
The PDW state is similar to the traditional Larkin-Ovchinnikov (LO) state\cite{Larkin-1964} where the Cooper pairs have a
non-zero center of mass momentum, which in the LO proposal is due to the presence of an external Zeeman field, which breaks the time-reversal
symmetry explicitly (for a recent review of the LO state see Ref. [\onlinecite{Casalbuoni-2004}]).
However, the occurrence of the PDW SC state does not necessarily require a system in which time-reversal symmetry is explicitly broken,
nor does it require that time-reversal symmetry be spontaneously broken.
Since it was proposed as a candidate competing state to the uniform $d$-wave SC order,\cite{Berg-2007, Berg-2009} the PDW state has been
studied extensively. A Landau-Ginzburg (LG) theory of the PDW state provides a simple explanation of much of the observed phenomenology of
{La$_{2-x}$Ba$_x$CuO$_4$},\cite{berg-2008a,Berg-2009,agterberg-2008} and of La$_{2-x}$Sr$_x$CuO$_4$ in magnetic fields. An outgrowth of these phenomenological theories is a
statistical mechanical description of the thermal melting of the PDW phase by proliferation of topological defects which yielded a rich phase
diagram which includes, in addition to the PDW phase, a novel charge $4e$ SC state and a CDW phase.\cite{Berg-2010,barci-2011,fradkin-2014}
More recently, Agterberg and Garaud\cite{agterberg-2014} showed that it is possible to have a phase in which uniform SC and PDW order parameters
coexist in the presence of a magnetic field.
The microscopic underpinnings of the PDW state are presently not as well understood as its phenomenology. Nevertheless, it has been shown that this state can
appear in different regimes of several models. In the weak coupling limit, such a state appears naturally in two dimensions (2D)
inside an electronic spin-triplet nematic phase.\cite{Soto-Garrido-2014}
Also at the mean field level, Lee \cite{lee-2014} found that it is possible to have a PDW state in his model of `Amperian'
pairing\cite{sslee-2007} and that the PDW state can explain the pseudo-gap features found in the angle-resolved photoemission
experiment.\cite{ARPES}
In a series of papers, Loder and collaborators\cite{Loder-2010,Loder-2011} found that a PDW superconducting state is preferred in a tight-binding model with strong attractive
interactions (although the critical value of the coupling constant above which the PDW is
stable is presumably outside the range of validity of the weak coupling theory). Similarly, PDW states with broken time-reversal invariance and parity have been found recently\cite{Wang-2014,Wang-2015} in a `hot spot' model, which also requires a critical (and typically not small) value of a coupling constant.
On the other hand, in one-dimensional systems (1D) the
PDW state has been shown to describe the SC state of the Kondo-Heisenberg chain,\cite{zachar-2001a,zachar-2001b,Berg-2010} and a
phase of an extended Hubbard-Heisenberg model on a two-leg ladder.\cite{jaefari-2012}
We showed recently that the PDW state appearing in these two 1D models is actually a topological SC
which supports Majorana zero modes localized at its boundaries.\cite{Cho-2014}
There has also been considerable recent effort to determine if the PDW state occurs in simple models of strongly correlated systems.
Variational Monte Carlo simulations of the $t-J$ and $t-t'-J$ model on the square lattice at zero magnetic field near doping $x=1/8$
found that the uniform $d$-wave SC state is slightly favored over the PDW state.\cite{Himeda-2002,Raczkowski-2007,Capello-2008,Yang-2008b}
Corboz and coworkers,\cite{corboz-2014} using infinite projected entangled pair-states\cite{Verstraete-2008} (iPEPS), found
strong evidence in the 2D $t-J$ model that the ground state energies of the uniform $d$-wave state and the PDW state are numerically
indistinguishable (within the error bars) over a broad range of dopings and parameters.
This last result indicates that these strongly correlated systems do have a strong tendency to exhibit intertwined orders and that the PDW
state occurs more broadly than was anticipated.\cite{fradkin-2014}
In this work, instead of following a conventional weak coupling approach to the PDW states, we will take an alternative path which has
the physics of strong correlations as its starting point. Rather than starting from a true 2D system, we will consider a
quasi-one-dimensional model consisting of weakly coupled (but individually strongly interacting) 1D systems. In the decoupled limit we can solve each 1D
system non-perturbatively using bosonization methods.\cite{FradkinFieldTheory,Gogolin-book,Giamarchi-book} We will follow a dimensional
crossover approach that has been used with considerable success by several authors.
\cite{Carlson-2000,emery-2000,granath-2001,vishwanath-2001,Carr-2002,essler-2002,Arrigoni-2004,jaefari-2010}
We will consider a generalization of the model used by Granath and coworkers\cite{granath-2001} in which there are two types of 1D subsystems:
a set of doped two-leg ladders in the Luther-Emery (LE) liquid regime (which has a single gapless charge sector and a gapped spin sector)
and a set of 1D electronic Luttinger liquids (eLL) with both a gapless charge sector and a gapless spin sector.
Although the interactions between LE liquids and between LE and eLL liquids will be treated by the interchain mean field theory (MFT)
(see, e.g. Carlson {\it et al.}\cite{Carlson-2000}), the intra LE and intra eLL interactions are treated essentially exactly using bosonization.
We will make the reasonable assumption that the interaction between the electronic Luttinger liquids leads to a crossover to a full 2D
(anisotropic) Fermi liquid (see, e.g. Ref.[\onlinecite{granath-2001}]). In this fashion, this approach allows one to access the strong-coupling
regime of a strongly correlated system using controlled approximations.
In this approximation the resulting superconducting $T_c$ is a power law in the interchain coupling and not exponentially
small as in the usual weak-coupling limit (such as the BCS approach).
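The origin of this power law can be sketched with the standard interchain mean-field estimate (a schematic argument only; here $W$ denotes a generic microscopic UV energy scale, used only in this estimate). The pair susceptibility of a single chain diverges at low temperatures as $\chi_{SC}(T) \sim W^{-1} (W/T)^{2-2\Delta_{SC}}$, where $\Delta_{SC}$ is the scaling dimension of the SC order parameter of the chain, and the self-consistency condition ${\cal J}\,\chi_{SC}(T_c) \sim 1$ then gives
\begin{equation}
T_c \sim W \left( \frac{{\cal J}}{W} \right)^{1/(2-2\Delta_{SC})},
\end{equation}
a power law in the interchain Josephson coupling ${\cal J}$ whenever $\Delta_{SC}<1$. For a Luther-Emery liquid, $\Delta_{SC}=1/(2K_c)$, so $T_c \propto {\cal J}^{K_c/(2K_c-1)}$ for $K_c>1/2$.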
The main departure of the system that we consider here from previous studies of models of this type is that we will allow for the Josephson
couplings between the LE liquids to have either positive or negative signs. A negative sign induces a $\pi$ phase shift
between two neighboring LE liquids. It was shown by Berg {\it et al.}\cite{berg-2008a} that two superconductors that are proximity coupled to each other
through a 1D weakly doped Hubbard model have a broad regime of parameters (in particular, doping) in which the effective Josephson coupling is negative.
Here, in order to incorporate this physics, we will introduce a set of Ising degrees of freedom mediating the interactions between the LE liquids which emulate different doping profiles
of the electronic Luttinger liquids. This feature will allow us to consider the interplay between uniform (s-wave or d-wave) superconductivity
with PDW superconducting states and coexistence phases, resulting in complex phase diagrams.
We note that inhomogeneous SC states such as the PDW are generally accompanied by a subsidiary
charge-ordered state, a charge-density-wave (CDW). The period of the CDW is twice the period of the PDW or equal to the period of the PDW, depending on whether it is a pure PDW state or one in which the PDW coexists with a uniform SC state. The CDW order parameters which describe these states are composites of the PDW order parameters, with or without the uniform SC order parameter. The general occurrence of charge-ordered states as subsidiary orders of an inhomogeneous SC state has been emphasized by several authors.\cite{berg-2008a,Berg-2009,agterberg-2008,fradkin-2014,lee-2014,Wang-2014} The same should hold in the case of the SC states that we study here.
The experimental consequences of the PDW states have been discussed extensively in the recent literature\cite{Berg-2007,agterberg-2008,Berg-2009,Berg-2009b,zelli-2011,zelli-2012,lee-2014,fradkin-2014} (including papers by one of the authors) and we will not elaborate further on these questions here. Instead we will focus on the microscopic mechanisms behind these inherently strongly interacting states.
The paper is organized as follows.
In section \ref{sec:Model} we define our model and summarize our notation for bosonization in 1D.
In section \ref{sec:MFT} we develop the interchain MFT and discuss the results for the self consistent equations.
In section \ref{sec:LLsystems} we study and discuss the quasiparticle spectrum of the phases, emergent from this quasi-1D system,
for the various PDW and uniform SC states found in section \ref{sec:MFT}.
In section \ref{sec:phases} we summarize other possible phases that could arise in this model using a qualitative scaling-dimension
analysis. We finish with our conclusions in section \ref{sec:conclusions}.
\section{The Model}
\label{sec:Model}
The quasi-1D model, schematically presented in the Introduction, consists of two different types of 1D systems.
One of them is a conventional 1D electronic Luttinger liquid (eLL) in which both the spin and charge degrees of freedom are gapless.
The other type, however, is a 1D system with a spin gap, i.e., it is a 1D Luther-Emery liquid (LE). The presence of the
spin gap in the 1D system biases the full array of 1D systems toward a SC state.
\subsection{1D Systems and Bosonization}
\label{sec:1D}
Before we define the quasi-1D model in detail,
we start with a short summary of these 1D liquids and their description using bosonization. This material is standard and can be found in
several textbooks, e.g. Ref.[\onlinecite{FradkinFieldTheory}]. Here we will only give some salient results and set up our definitions
(and conventions) that we use in later sections.
We start with a 1D eLL which has a gapless charge sector and a gapless spin sector.
The low-energy Hamiltonian is written in terms of the set of the bosonic fields, $\{ \phi_a, \theta_a \}$ where $a = c, s$ labels
the charge and spin sectors respectively. These fields
satisfy canonical equal-time commutation relations
\begin{equation}
[\phi_a (x'), \partial_x \theta_b (x)] = i \delta_{a, b} \delta(x'-x)
\end{equation}
The effective Hamiltonian for the eLL is
\begin{equation}
H_{eLL}[\phi_{a}, \theta_{a}] = \sum_{a = c,s} \frac{v_a}{2} \left[ K_a (\partial_x \theta_a)^{2} + \frac{1}{K_a} (\partial_x \phi_a)^2 \right],
\label{LL}
\end{equation}
in which $K_a$ (again with $a=c, s$) are the Luttinger parameters for the charge and spin sectors,
and $v_a$ are the characteristic speeds for the charge and spin excitations of the
liquid. The parameters $K_c$, $K_s$, $v_c$ and $v_s$ are determined by the microscopic details of the model. However, for a system with spin-rotational invariance, the resulting SU(2) symmetry restricts the value of the spin Luttinger parameter to be $K_s=1$.
In this continuum and low-energy limit,
we can decompose the electronic field operator in terms of two slowly varying components, with wave vectors near the two Fermi points $\pm k_F$
\begin{equation}
\frac{1}{\sqrt{a}}\psi_{\sigma}(x) \rightarrow R_{\sigma}(x)e^{ ik_{F}x} + L_{\sigma}(x)e^{-ik_{F}x},
\end{equation}
where $a$ is the ultraviolet cut-off (typically the lattice spacing), and where $\sigma=\pm$ denotes the spin of the electron. Here the fermionic fields
$R_\sigma (x)$ and $L_\sigma (x)$ are the right- and left-moving components of the electron field $\psi_{\sigma}$,
which are slowly varying in space relative to the Fermi momentum $k_F$.
The right- and left-moving fields can be written in terms of the bosonic charge fields $\phi_c$ and $\theta_c$, and the spin fields $\phi_s$ and $\theta_s$, as follows
\begin{align}
R_{\sigma}&=\frac{F_{\sigma}}{\sqrt{2\pi a}}e^{i\sqrt{\pi/2}(\theta_{c}+\sigma\theta_{s}+\phi_{c}+\sigma\phi_{s})},\nonumber\\
L_{\sigma}&=\frac{F_{\sigma}}{\sqrt{2\pi a}}e^{i\sqrt{\pi/2}(\theta_{c}+\sigma\theta_{s}-\phi_{c}-\sigma\phi_{s})}.
\end{align}
The anticommuting Klein factors, $F_{\sigma}$, ensure the correct fermionic anticommutation relations between the right- and left-moving fermions $R_\sigma$ and $L_\sigma$.
Next we consider a spin-gapped Luttinger liquid, or LE liquid. At energy scales below the spin gap $\Delta_s$,
the spin sector can be ignored. Hence, the low-energy Hamiltonian contains only the charge fields $\phi_c$ and $\theta_c$, and it is given by
\begin{equation}
H_{LE} [\phi_c, \theta_c] = \frac{v_c}{2} \left[ K_c (\partial_x \theta_c)^{2} + \frac{1}{K_c} (\partial_x \phi_c)^2 \right].
\label{LE}
\end{equation}
Since the spin sector has been effectively projected-out, we will keep only the charge sector of the LE liquid and drop its $c$ label.
In the LE liquid, all interactions represented by operators that are not spin singlets are irrelevant
(in fact, their effective scaling dimension is infinite). This fact strongly restricts the types of interactions between LE systems and
eLL systems. In this case the only fermion bilinears that need to be considered in the LE liquid are the order parameter of the charge-density-wave with momentum $2k_F$ (CDW)
\begin{equation}
O_{\text{CDW}}(x) \sim e^{-2ik_F x} R^{\dagger}_{\sigma}(x) L_{\sigma}(x) + h.c.,
\end{equation}
and the (Cooper) pair field spin singlet superconducting order parameter
\begin{equation}
\Delta (x) \sim R_\alpha(x) (i\sigma^y)^{\alpha\beta} L_{\beta}(x) + (R \leftrightarrow L).
\end{equation}
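For later use we also record, up to constant prefactors and Klein-factor sign conventions, the bosonized forms of these two operators, which follow directly from the expressions for $R_\sigma$ and $L_\sigma$ given above:
\begin{align}
O_{\text{CDW}}(x) &\sim e^{-2ik_F x}\, e^{-i\sqrt{2\pi}\,\phi_c(x)} \cos\left[\sqrt{2\pi}\,\phi_s(x)\right] + h.c., \nonumber\\
\Delta(x) &\sim e^{i\sqrt{2\pi}\,\theta_c(x)} \cos\left[\sqrt{2\pi}\,\phi_s(x)\right].
\end{align}
In the LE regime the spin gap pins $\phi_s$, so that $\langle \cos[\sqrt{2\pi}\,\phi_s] \rangle \neq 0$ and both operators reduce to pure charge-sector vertex operators.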
Hence the coupling to the LE liquids in the quasi-1D model should involve only the two operators listed above.
We note that the suppression of the spin operator and the power-law correlation for the SC operator make the LE liquids the natural
building blocks for the quasi-1D SC state.
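This statement can be made quantitative. With the normalization used here, the vertex operators $e^{i\sqrt{2\pi}\theta}$ and $e^{i\sqrt{2\pi}\phi}$ have scaling dimensions $1/(2K)$ and $K/2$, respectively (standard results that we quote without derivation). In the LE liquid, where the spin field is pinned by the spin gap, the equal-time correlators therefore behave as
\begin{align}
\langle \Delta^{\dagger}(x)\, \Delta(0) \rangle &\sim |x|^{-1/K_c}, \nonumber\\
\langle O^{\dagger}_{\text{CDW}}(x)\, O_{\text{CDW}}(0) \rangle &\sim |x|^{-K_c} \cos(2k_F x),
\end{align}
while all spin correlations decay exponentially. In particular, the SC correlations dominate over the CDW correlations for $K_c > 1$.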
In contrast, the eLL has other observables that need to be considered, including a spin triplet pair field, the $2k_F$ spin-density-wave (SDW) `N\'eel' order parameter, the right and left moving spin current operators, and, in tunneling processes, the electron operators.
\subsection{Quasi-1D Model}
\label{sec:quasi-1D}
Given a set of independent 1D LE and eLL systems that were described above, we now define and discuss the full quasi-1D model.
The model consists of an array of 1D systems shown in
Fig. \ref{Fig_Chains}. Each {\it unit cell} of the array consists of one LE system, labeled by $A$, and one eLL system, labeled by $B$.
Hence we introduce the bosonic fields $\{\phi_{n, A}, \theta_{n,A} \}$ representing the charge fields in the LE chain of the $n$-th unit cell
and also $\{ \phi_{n,B,a}, \theta_{n,B,a} \}$ (with $a=c,s$) representing the charge and spin fields in the eLL chain of the $n$-th
unit cell. Furthermore we assume that the filling of the type B system (an eLL chain) is close to half filling, i.e., $k^{(B)}_F\approx\pi/2$
and $K^{(B)}_c\approx 1/2$. Also the spin rotational symmetry in the B systems is assumed to be unbroken and thus $K^{(B)}_s = 1$.
We further assume that the Fermi momenta of the systems $A$ and $B$ are incommensurate to each other.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{Fig1}
\caption{
(color online) The array of LE systems and eLL systems. The $A$-type LE systems are represented by the solid (red) line.
The $B$-type eLL systems are represented by the dashed (blue) line. Each unit cell consists of one $A$-type and one $B$-type system.
Here the black filled dots represent electrons. (a) The conventional Josephson coupling ${\cal J}_{AB}$ in Eq.\eqref{Hint} (b) The
conventional Josephson coupling ${\cal J}_{AA}$ in Eq.\eqref{Hint} (c) Splitting a Cooper pair in an $A$ system into the neighboring $B$
systems ${\cal J}_{AB}^{\prime}$ in Eq.\eqref{Hint}.
\label{Fig_Chains}
}
\end{center}
\end{figure}
In the limit in which the LE systems and the eLL systems are decoupled from each other, the effective Hamiltonian of the array is described by the
sum of Eq.\eqref{LL} and Eq.\eqref{LE} for each system, and has the form
\begin{equation}
H_{0} = \sum_{n \in {\mathbb Z}} \Big( H_{LE} [\phi_{n,A}, \theta_{n,A}] + H_{LL} [\phi_{n,B, a}, \theta_{n,B,a}] \Big).
\end{equation}
This decoupled limit is an unstable fixed point and the system will eventually flow to the quasi-1D or 2D fixed points under the introduction
of the coupling between the 1D systems. We will show that the PDW state, as a quasi-1D fixed point, will emerge from certain couplings.
Following the work of Granath {\it{et al.}}, \cite{granath-2001} we write down the possibly relevant local interaction terms between the 1D systems. They are given by
\begin{align}
H^{\prime} =&\sum_{j}\int dx \Big\{
-t_{BB}\sum_{\sigma}[\psi_{B,j,\sigma}^{\dagger}\psi_{B,j+1,\sigma}+ {\rm h. c. }]\nonumber\\
&+ J_{BB}{\bm S}_{B,j}\cdot {\bm S}_{B,j+1} \nonumber\\
&-{\cal J}_{AA,j} [\Delta^{\dagger}_{A,j+1}\Delta_{A,j} + {\rm h.c.}] \nonumber \\
&-{\cal J}_{AB}[\Delta^{\dagger}_{B,j}\Delta_{A,j}+\Delta^{\dagger}_{B,j}\Delta_{A,j+1}+{\rm h. c.}]\nonumber \\
&+{\cal J}_{AB}^{\prime}[\Delta_{A,j}^{\dagger}(\psi_{B,j,\uparrow}\psi_{B,j-1,\downarrow}+
\psi_{B,j-1,\uparrow}\psi_{B,j,\downarrow})+{\rm h. c.}]
\Big\}
\label{Hint}
\end{align}
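The relative importance of these couplings, discussed below, can be anticipated by simple power counting at the decoupled fixed point (a rough estimate using standard 1D scaling dimensions, not a substitute for the analysis of section \ref{sec:phases}). A single electron operator in an eLL has scaling dimension
\begin{equation}
\delta_e = \frac{1}{8}\left( K_c + \frac{1}{K_c} \right) + \frac{1}{4}
\end{equation}
(for $K_s = 1$), so the interchain tunneling term $t_{BB}\,\psi^{\dagger}_{B,j}\psi_{B,j+1}$ has dimension $2\delta_e \approx 1.13 < 2$ for $K^{(B)}_c \approx 1/2$, i.e., it is strongly relevant and drives the $B$ systems toward a 2D Fermi liquid. By comparison, the Josephson coupling between two LE liquids, which in bosonized form is proportional to $\cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A,i+1})]$, has scaling dimension $1/K^{(A)}_c$ and is relevant only for $K^{(A)}_c > 1/2$.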
To simplify the analysis, in this paper we will not consider the possible existence of spin-ordered phases (i.e. spin stripes or SDWs) although these are clearly seen in {La$_{2-x}$Ba$_x$CuO$_4$} which is the material where the PDW state is most clearly seen. We are mainly concerned about the SC states in which the spins
do not play much role, and thus we ignore for now the antiferromagnetic interactions in the discussion.
The antiferromagnetic coupling between the eLL chains can also be included in a relatively straightforward extension of the present work.
Following Ref. [\onlinecite{granath-2001}] we have ignored the possible CDW couplings between chains. In general the scaling
dimensions of the CDW operators become less relevant in the presence of forward scattering interactions between the chains, \cite{emery-2000} so
they can be neglected. If the interchain CDW couplings were to become relevant we would have bidirectional charge order. In this paper we are only exploring states with unidirectional charge and superconducting order.
In Eq.\eqref{Hint}, the operator $\Delta_{A, j}(x) \sim \psi_{A,j, \alpha}(x) (i\sigma^y)^{\alpha\beta}\psi_{A,j, \beta}(x)$ represents the density of the
spin-singlet Cooper pair of the system $A$, and $\Delta_{B,j}(x)$ is that of the system $B$.
The effective coupling constants ${\cal J}_{AB}$ and ${\cal J}_{AA,j}$ are the conventional Josephson coupling between the two neighboring $A$ systems,
representing the hopping process of the Cooper pairs (see the (a) and (b) in Fig.\ref{Fig_Chains}).
On the other hand, the local term ${\cal J}_{AB}'$ represents the breaking of a Cooper pair in an $A$ system, which places the two electrons
into the nearest-neighbor $B$ systems (see (c) in Fig.\ref{Fig_Chains} for a diagram of the process).
In the Hamiltonian $H'$, Eq.\eqref{Hint}, the most relevant term is the electron tunneling term, with coupling strength $t_{BB}$,
between two nearest-neighbor $B$ systems. Under this perturbation, the decoupled $B$ systems flow to the 2D Fermi liquid fixed point,
which, in turn, becomes coupled to the superconducting state emergent from $A$ systems.\cite{granath-2001} Due to this dimensional crossover of the $B$ systems
it is difficult to apply the conventional interchain MFT to analyze Eq.\eqref{Hint}. In order to make progress, we ignore at first the $B$
systems, as a first approximation to the problem, and perform the interchain MFT only with the $A$ systems, which embodies the strong-coupling
nature of the superconductivity emergent in the quasi-1D models. We should stress that in the $A$ systems there are no electron-like quasiparticles, due to the existence
of the spin gap, which leads to fully gapped 2D SC phases when the couplings between the $A$ systems are turned on.
Technically speaking, we solve first for an array of $B$ (eLL) systems coupled by $t_{BB}$ and for an array of $A$ systems coupled only
by ${\cal J}_{AA,j}$ in Eq.\eqref{Hint}, and take ${\cal J}_{AB}'$ and ${\cal J}_{AB}$ as
perturbations. At this level of the approximation, the emergent SC state is determined by the Josephson coupling
${\cal J}_{AA,j}$ and the subsequent SC state of the full system follows by proximity effect between the $A$ and the $B$ subsystems.
This was the strategy used by Granath {\it et al.}\cite{granath-2001} The main difference between this work and that of Granath and coworkers is the inclusion of an additional, Ising-like, degree of freedom in the coupling between the $A$ systems, as we already discussed in the Introduction.
\subsection{Coupled LE Systems}
\label{sec:LEsystems}
It is clear that ${\cal J}_{AA}$ will determine the nature of the emergent 2D SC state from the quasi-1D model.
More precisely, the spatial pattern of ${\cal J}_{AA,j}$ determines that of the Cooper pair. For example,
if the Josephson coupling ${\cal J}_{AA,j}$ in Eq.\eqref{Hint} is uniform and positive,
it is clear that a uniform spin-singlet SC state will emerge. However, in a strongly correlated quasi-1D system,
the Josephson coupling ${\cal J}_{AA,j}$ may not be always uniform and positive.
In Ref. [\onlinecite{Berg-2009}], the Josephson coupling between two systems with an intermediate chain
(which is close to the insulator phase) has been calculated by a numerical density matrix renormalization group (DMRG)
method and it was found that it can be negative, i.e., forming a $\pi$-Josephson junction between the two $A$ systems. From this,
it is not difficult to imagine that there might be more complicated patterns, depending on the microscopic details,
than the uniform $\pi$-Josephson junction.
To reflect this physics, and to consider a broader set of possible patterns of the Josephson coupling ${\cal J}_{AA,j}$, we introduce a {\it phenomenological} Ising degree of freedom
$\sigma_{j}$ which can change the magnitude and possibly the sign of effective Josephson coupling. This Ising degree of freedom can be regarded as a local change in the doping of the intervening $B$ system between two neighboring $A$ systems. In this sense the Ising degree of freedom should be regarded as reflecting the tendency to frustrated phase separation of a doped strongly correlated system.\cite{emery-1993,carlson-1998}
To this effect, we consider the following interaction between the Ising degrees of freedom and the LE liquid
\begin{align}
\mathcal H_{\text{int}}=&-{\cal J}_{AA} \sum_{i}\cos[\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})]\nonumber\\
&- {\cal J}^{\prime}_{AA} \sum_{i}\sigma_{i}\sigma_{i+1}\cos[\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})] \nonumber\\
&+ \mathcal H_{\text{Ising}}[\sigma_i],
\label{Int}
\end{align}
in which we write ${\cal J}_{AA,i}$ in Eq.\eqref{Hint} as ${\cal J}_{AA,i}= {\cal J}_{AA} + {\cal J}'_{AA}\sigma_i \sigma_{i+1}$. The factor $\cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A,i+1})]$ is the Josephson coupling between the LE systems because of
$\Delta(x) \sim e^{i\sqrt{2\pi} \theta_{A,i}}$.
In Eq.\eqref{Int}, the Ising interaction Hamiltonian $\mathcal H_{\text{Ising}}[\sigma_i]$ is assumed to have several phases depending on the
parameters in $\mathcal H_{\text{Ising}}$ and temperature, e.g. paramagnetic phase $\langle \sigma_i \rangle =0$, and various symmetry-broken phases.
In this paper, we further assume that the Ising variable $\sigma_i$ orders at a much higher temperature (or energy scale) than the spin gap
$\Delta_s$ in the LE liquid. Hence, we ignore any correction to the Ising variable due to the fluctuations of the SC states emergent from LE
liquids. To simplify the analysis we have assumed that the Ising variables are constant along the direction of the 1D systems and are classical (i.e. we did not include a transverse field term). The first assumption is not a problem since we will do mean field theory assuming that the resulting modulation (if any) is unidirectional. More microscopically we will need to assume that the Ising model has frustrated nearest and next nearest neighbor interactions along one direction only. This is the so-called anisotropic next-nearest-neighbor Ising (ANNNI) model which is well known to have a host of modulated phases.\cite{Fisher-1980} Similar physics, with a rich structure of periodic and quasi-periodic states, is obtained from the Coulomb-frustrated phase separation mechanism.\cite{carlson-1998,Low-1994}
In what follows we will not specify the form of $\mathcal H_{\text{Ising}}$ and assume that its ground state is encoded in a specific pattern of order for the Ising variables. In this picture an inhomogeneous charge-ordered state occurs first (and hence has a higher critical temperature) and this pattern causes the effective Josephson couplings to have an ``antiferromagnetic'' sign (i.e. $\pi$ junctions).\cite{berg-2008a,Berg-2009} Nevertheless, as noted in Ref. [\onlinecite{Berg-2009}], once the PDW state sets in there is always a (subdominant) CDW order state with twice the ordering wave-vector as that of the PDW.
The symmetry-breaking patterns that we study are: i) the uniform configuration $\langle \sigma_i \rangle = \pm 1$, $\forall i$,
ii) the staggered configuration $\langle \sigma_i \rangle = (-1)^i$, and
iii) period 4 configurations (which will become clear in Section \ref{sec:p4} below). Thus, when the Ising variables order and
spontaneously break the translational symmetry, the effective Josephson coupling between the different A systems will be modulated too.
For concreteness, throughout this work, we will take ${\cal J}_{AA}$ and ${\cal J}_{AA}^{\prime}$ to be positive. This condition is not necessary
and the following arguments can be easily extended to the other signs of ${\cal J}_{AA}$ and ${\cal J}_{AA}^{\prime}$.
We will start by analyzing the ground state (or the mean field (MF) state) of the LE systems coupled to Ising variables.
We will do this for different configurations of the Ising variables and see what are the possible phases that arise in the system of coupled
LE liquids.
\subsubsection{Ising Paramagnetic Configuration}
Before proceeding to the symmetry-broken phases of the Ising variable, we first briefly comment on the case with the paramagnetic phase
of the Ising variable $\sigma_i$. In the Ising paramagnetic phase,
we first note that ${\cal J}'_{AA}\sigma_i \sigma_{i+1} \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A,i+1})]$ is effectively zero at the level of mean field theory and can be ignored. Thus, at low energies, Eq.\eqref{Int} becomes
\begin{align}
\mathcal H_{\text{int}} \rightarrow -{\cal J}_{AA}\sum_{i}\cos[\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})]+ \cdots
\label{int_pm}
\end{align}
in which $\cdots$ are the terms generated by integrating the fluctuations of the Ising variables in the paramagnetic phase,
e.g., $\sim \cos[2\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})]$,
which is strictly less relevant than $-{\cal J}_{AA} \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A, i+1})]$ appearing in $\mathcal H_{\text{int}}$.
It is well-known that Eq.\eqref{int_pm} induces a uniform 2D superconducting state.\cite{Carlson-2000,Arrigoni-2004}
\subsubsection{Uniform Ising Configuration}
\label{sec:p0}
We now analyze the simplest case with $\langle \sigma_i \rangle\neq0$, where all the $\sigma_i$ have the same value, $\sigma_i=\sigma=\pm$.
In this case $ H_{\text{int}}$ is just given by
\begin{align}
\mathcal H_{\text{int}}=&-({\cal J}_{AA}+ {\cal J}^{\prime}_{AA})\sum_i \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A, i+1})]\nonumber\\
\equiv&-{\cal J}_T\sum_i \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A, i+1})]
\end{align}
The system of coupled LE systems can be treated in interchain MFT, where all the systems are in phase, since ${\cal J}_T>0$. In this case we just have a
uniform SC state in the direction perpendicular to the systems $\Delta_{j}=\Delta$, where $\Delta$ includes the
spin gap and the MFT value for $\langle\cos\sqrt{2\pi}\theta_{A,i} \rangle$. We will show in the following section how to compute the
value $\langle\cos\sqrt{2\pi}\theta_{A,i}\rangle$. Thus, this is the same phase as in the Ising paramagnetic case but with a larger value of the effective Josephson coupling.
\subsubsection{Staggered (Period 2) Ising Configuration}
\label{sec:p2}
Let us now consider $\sigma_i=(-1)^i$. In this case $\mathcal{H}_{\text{int}}$ is given by
\begin{align}
\mathcal H_{\text{int}}&=-({\cal J}_{AA}- {\cal J}^{\prime}_{AA})\sum_i \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A,i+1})]\nonumber\\
\equiv&-\delta {\cal J}\sum_i \cos[\sqrt{2\pi}(\theta_{A,i}-\theta_{A,i+1})]
\label{HintLE}
\end{align}
Again, the system of coupled LE systems can be treated in interchain MFT. However, we need to be careful about the sign of $\delta{\cal J}$. If
$\delta {\cal J}>0$, the SC order parameters in all the systems are in phase. It is important to emphasize that although all the systems are in phase
as in the uniform Ising configuration, the expectation value $\langle\cos\sqrt{2\pi}\theta_{A,i}\rangle$ is different in both cases,
since as we will see below, it explicitly depends on the coupling between the systems, in this case ${\cal J}_T$ or $\delta {\cal J}$.
On the other hand, if $\delta {\cal J}<0$ the phase of SC order parameter has a $\pi$ phase shift between nearest neighbors.
In the former case we just have a uniform superconducting state in the direction perpendicular to the systems, while in the second case we have a
PDW state $\Delta_{A, j}\sim (-1)^{j}$. There is a direct transition from the uniform SC state to the PDW SC state at
${\cal J}_{AA}/{\cal J}^{\prime}_{AA}=1$. In this simple period 2 Ising configuration there is no
room for coexistence between the uniform SC and the PDW state.
\subsubsection{Longer Period Ising Configurations}
\label{sec:p4}
We can generalize the phases obtained with period 2 Ising configurations to cases with longer periods of the Ising variables. For instance, for a period 4 configuration of the Ising variables,
\begin{equation}
\cdots, \uparrow, \uparrow, \downarrow, \downarrow, \cdots,
\end{equation}
the effective Josephson couplings will have a period 2 modulation. In this case we will find either a uniform SC state or a period 4 PDW SC state, but no coexistence phase.
However, we will see that for Ising configurations with period $n$, with $n>2$, we
can have a richer phase diagram, including a coexistence phase if $n\geq 3$. For example, for a period 3 structure of the Ising variables, the allowed SC state is a coexistence phase, whereas for period 8 with the following spatial pattern of the Ising degrees of freedom
\begin{equation}
\cdots \downarrow, \uparrow, \uparrow, \uparrow, \uparrow, \downarrow, \downarrow, \downarrow, \downarrow, \uparrow, \uparrow, \cdots,
\end{equation}
we will find either a coexistence phase with period 4 or a PDW SC with period 8. It is straightforward to generalize this to more intricate configurations of the Ising variables.
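The mapping from an Ising pattern to the modulation of the effective couplings can be made concrete. Reading off the coefficient of each cosine in Eq.\eqref{Int}, a bond with $\sigma_i\sigma_{i+1}=+1$ carries the stronger coupling ${\cal J}_{AA}+{\cal J}^{\prime}_{AA}$, while a bond across an Ising domain wall ($\sigma_i\sigma_{i+1}=-1$) carries ${\cal J}_{AA}-{\cal J}^{\prime}_{AA}$. The following is a minimal sketch of this bookkeeping (our own illustration; the coupling values are arbitrary):

```python
# Effective bond couplings J_i = J_AA + Jp_AA * s[i] * s[i+1], i.e. the
# coefficient of each cosine in Eq. (Int), for a periodic Ising pattern s.

def couplings(pattern, J_AA=1.0, Jp_AA=0.5):
    n = len(pattern)
    return [J_AA + Jp_AA * pattern[i] * pattern[(i + 1) % n]
            for i in range(n)]

def period(seq):
    """Smallest p (dividing len(seq)) under which seq is shift-invariant."""
    n = len(seq)
    return next(p for p in range(1, n + 1)
                if n % p == 0 and all(seq[i] == seq[(i + p) % n]
                                      for i in range(n)))

ising_p4 = [1, 1, -1, -1]                    # period-4 Ising pattern
ising_p8 = [1, 1, 1, 1, -1, -1, -1, -1]      # period-8 Ising pattern
```

With these conventions the period-4 Ising pattern yields a period-2 modulation of the couplings, and the period-8 pattern a period-4 modulation, as stated above.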
\section{Interchain MFT on the LE Systems}
\label{sec:MFT}
Keeping the quasi-1D model of the previous section in mind, we now solve the coupled LE system problem using the interchain MFT.
In this section, we generalize the works of Lukyanov and Zamolodchikov,\cite{Lukyanov-1997} and Carr and Tsvelik\cite{Carr-2002}
to the patterns of the Josephson coupling between the LE systems emergent from various symmetry-breaking phases of the Ising variables.
\subsection{Uniform SC and Period 2 PDW SC Phases}
\label{sec:unifstag}
We first review the uniform configuration of the Ising variable (and also the paramagnetic phase of the Ising variable),
in which the SC operator will develop the same expectation value for all the LE systems\cite{Lukyanov-1997,Carr-2002}.
For the staggered (period 2) Ising configuration, there are two phases, depending on the sign of $\delta {\cal J}$, a uniform SC state and a PDW state.
We will solve the self-consistency equations for both phases, the uniform SC state and the PDW state. Although
the equations have the same form, they correspond to different phases. The case of a period 4 Ising configuration of the form $\uparrow, \uparrow, \downarrow, \downarrow$ can be treated in the same manner; the only difference is that the two phases will be a uniform SC state or a period 4 PDW SC state. Here we will focus on the simpler period 2 case.
\subsubsection{Uniform SC Phase}
In the uniform configuration of the Ising variable, the effective Josephson interaction between neighboring $A$ subsystems (the LE liquids) is
\begin{align}
\mathcal H_{\text{int}}= -{\cal J}_T\sum_i \cos[\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})]
\label{uniform}
\end{align}
To perform the interchain MFT, we consider only the terms in $\mathcal H_{\text{int}}$ involving the $i$-th type-$A$ system.
Using standard interchain MFT,\cite{Carlson-2000,Carr-2002,Arrigoni-2004} we can approximate Eq.\eqref{uniform} by
\begin{equation}
\mathcal H_{\text{int}}=- 2\mu\int d^2x\cos(\sqrt{2\pi} \theta_{A,i}),
\label{HintSG1}
\end{equation}
with
$2\mu={\cal J}_T[\langle\cos(\sqrt{2\pi}\theta_{A,i+1})\rangle+\langle\cos(\sqrt{2\pi}\theta_{A,i-1})\rangle]$.
The self-consistency of the MFT then requires that
\begin{equation}
\langle \cos(\sqrt{2\pi} \theta_{A,i})\rangle =
\frac{\mu}{{\cal J}_T }.
\label{self}
\end{equation}
Following Refs. [\onlinecite{Lukyanov-1997},\onlinecite{Carr-2002}] the self-consistency equation can be solved from the following two expressions:
\begin{align}
&\langle \cos (\sqrt{2\pi}\theta_{A,i}) \rangle
= \frac{(1+\xi) \pi \Gamma(1-d/2)}{16 \sin \pi\xi\ \Gamma(d/2)}\times \nonumber\\
&\quad \left(\frac{\Gamma(\frac{1}{2}+\frac{\xi}{2})\Gamma(1-\frac{\xi}{2})}
{4\sqrt{\pi}}\right)^{(d-2)}\left( 2\sin \frac{\pi\xi}{2} \right)^d
M^d
\label{MFcos}
\end{align}
where
$M$, the soliton mass in the $1+1$-dimensional sine-Gordon model, is related to $\mu$ by
\begin{equation}
\mu = \frac{\Gamma(d/2)}{\pi\Gamma(1-d/2)} \left(
\frac{2\Gamma(\xi/2)}{\sqrt{\pi}\Gamma(\frac{1}{2}+\frac{\xi}{2})}
\right)^{d-2}
M^{2-d}
\label{mutoM}
\end{equation}
In these equations $d=1/(2K_c)$ is the scaling dimension of the vertex operator $e^{i\sqrt{2\pi} \theta_{A,i}}$ and $\xi=\tfrac{1}{2-d}$. Using equations Eq.\eqref{MFcos}
and Eq.\eqref{mutoM}, we can compute explicitly the value of $\langle\cos(\sqrt{2\pi}\theta_{A,i})\rangle$ for a given value of ${\cal J}_T$
and $K_c$. This completely determines, at least at the mean field level, the solution of the coupled LE systems \cite{Carlson-2000,Arrigoni-2004}.
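Since $\langle \cos(\sqrt{2\pi}\theta_{A,i})\rangle \propto M^{d}$ and $\mu \propto M^{2-d}$, eliminating $M$ between Eq.\eqref{MFcos} and Eq.\eqref{mutoM} gives $\langle \cos(\sqrt{2\pi}\theta_{A,i})\rangle = f(d)\, \mu^{d/(2-d)}$, with $f(d)$ the constant written out in Eq.\eqref{constant} below. Combined with $\mu = {\cal J}_T \langle \cos(\sqrt{2\pi}\theta_{A,i})\rangle$, the self-consistency condition is solved in closed form by $m = [f(d)\,{\cal J}_T^{\,p}]^{1/(1-p)}$, with $p=d/(2-d)$. A short numerical sketch of this solution (our own code; the function names are not from the text):

```python
from math import gamma, sin, pi, sqrt

def f_const(d):
    """The prefactor f(d) of Eq. (constant), with xi = 1/(2-d)."""
    xi = 1.0 / (2.0 - d)
    return ((1 + xi) * pi * gamma(1 - d / 2)
            / (16 * sin(pi * xi) * gamma(d / 2))
            * (gamma(0.5 + xi / 2) * gamma(1 - xi / 2)
               / (4 * sqrt(pi)))**(d - 2)
            * (2 * sin(pi * xi / 2))**d
            * (pi * gamma(1 - d / 2) / gamma(d / 2))**(d / (2 - d))
            * (2 * gamma(xi / 2)
               / (sqrt(pi) * gamma(0.5 + xi / 2)))**d)

def m_uniform(J_T, d):
    """Closed-form solution of m = f(d) * (J_T * m)**p, p = d/(2-d)."""
    p = d / (2 - d)
    return (f_const(d) * J_T**p)**(1 / (1 - p))
```

For $d=1/4$ and ${\cal J}_T=1$ this should reproduce the corresponding entry of Table \ref{table:numerics}, $m\simeq 0.89$.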
\subsubsection{Period 2 PDW SC Phase }
\label{sec:staggered}
In the staggered configuration of the Ising variable, the interaction term between the $A$ systems is
\begin{align}
\mathcal H_{\text{int}}= -\delta {\cal J} \sum_i \cos[\sqrt{2\pi}(\theta_{A, i}-\theta_{A, i+1})],
\end{align}
which is identical to that of the uniform configuration case, Eq.\eqref{uniform}, if $\delta {\cal J}>0$. Hence if $\delta {\cal J} >0$,
we can simply replace ${\cal J}_T$ by $\delta {\cal J}$ to find the MF solution. This will give a uniform SC state.
If $\delta {\cal J} <0$, then we can perform a transformation on the even sites, $\sqrt{2\pi}\theta_{A, 2i}\to\sqrt{2\pi}\theta_{A, 2i}+\pi$,
effectively changing the sign of $\delta {\cal J}$ and reducing the problem to the first case.
Though the form of the equation is identical to that of the uniform SC state, it is important to remember
that the MF solution doubles the unit cell, due to the transformation $\sqrt{2\pi}\theta_{A, 2i}\to\sqrt{2\pi}\theta_{A, 2i}+\pi$
acting only on the even sites. Thus, the SC order parameter oscillates in space,
\begin{equation}
\Delta_{j}(x) \sim (-1)^j \langle \cos(\sqrt{2\pi}\theta_{A}) \rangle,
\end{equation}
corresponding to a period-2 PDW SC state.
Before moving on to the coexistence phase in the next section, let us discuss the dependence of $T_c$ on $\delta\cJ$ (or $\cJ_T$,
depending on the Ising configuration). We can think of $2\mu$ in Eq.\eqref{HintSG1} effectively as an external field due to the mean field
value of $m_j=\langle\cos(\sqrt{2\pi}\theta_{j})\rangle$ in the nearest-neighbor systems. We can then write
\begin{equation}
H_j = H^{(0)}_j - h_j \int dx \cos(\sqrt{2\pi} \theta_j)
\end{equation}
in which $h_j=\cJ(m_{j+1}+m_{j-1})$ and $H^{(0)}_j$ is the conventional kinetic term for the Luther-Emery liquid. As we saw above, for
the uniform or staggered configuration the value of $m_j$ is the same in all the systems, or effectively the same for
$\delta {\cal J} <0$ since we can perform a transformation on the even sites $\sqrt{2\pi}\theta_{A, 2i}\to\sqrt{2\pi}\theta_{A, 2i}+\pi$.
In summary, we can write just $m=m_j=\langle\cos(\sqrt{2\pi}\theta_{j})\rangle$ and $h=h_j=2\cJ m$ (where $\cJ=\delta\cJ$ or $\cJ_T$
depending on the case).
For $h\to0$ we have that self-consistency implies
\begin{equation}
m = \chi_{SC} h = 2\cJ \chi_{SC} m,
\end{equation}
which has the trivial solution $m=0$ or a non-trivial solution $m\neq0$ if $2\cJ \chi_{SC} = 1$ (which determines the critical temperature).
Using that for a Luther-Emery liquid
\begin{equation}
\chi_{SC}(T) \sim \frac{\Delta_{s}}{T^{2- 1/K_c}},
\end{equation}
we have that:
\begin{equation}
T_c\sim \Delta_s\cJ^\alpha
\label{Tcpure}
\end{equation}
where the exponent is $\alpha=\displaystyle\frac{1}{2- 1/K_c}$. Although the resulting $T_c$ is small when $\cJ$ is small, what is important is that it is only power-law small, instead of exponentially small as in the BCS case.
\subsection{Uniform SC and Period 4 PDW SC state coexistence phase}
\label{sec:coex}
Now we consider the period 8 state of the Ising variables, $\sigma_i=(-1)^{\lfloor i/4\rfloor}$. The Josephson coupling is then modulated in
space with period 4, and thus we need to solve four coupled self-consistency equations in the MFT. The effective MF Hamiltonian for each $A$ system
is given by\cite{Carr-2002}
\begin{equation}
H^{(i)}_{\text{int}}=- 2\mu_i\int d^2x\cos(\sqrt{2\pi}\theta_{A,i})
\end{equation}
with
\begin{equation}
2\mu_i=[{\cal J}_{i}\langle\cos(\sqrt{2\pi}\theta_{A, i+1})\rangle+{\cal J}_{i-1}\langle\cos(\sqrt{2\pi}\theta_{A, i-1})\rangle]
\end{equation}
where ${\cal J}_i = {\cal J}_{AA} + {\cal J}^{\prime}_{AA}\sigma_i \sigma_{i+1}$, in which $\sigma_i$ is in the period 8 structure.
Using $\cJ_T=\cJ_{AA}+ \cJ^{\prime}_{AA}$ and $\delta \cJ=\cJ_{AA}- \cJ^{\prime}_{AA}$, and defining
$m_i= \langle \cos(\sqrt{2\pi} \theta_{A,i}) \rangle$, it is clear that we need to solve only for the four systems $i=0, 1, 2, 3$
in this MFT by assuming that the MF solution does not break the translational symmetry $i \sim i+4$ of the pattern of the Josephson coupling.
Upon implementing the MFT analysis from the previous section we have the following set of coupled equations:
\begin{align}
m_0&=f(d)\left(\frac{ m_3\delta {\cal J}+m_1{\cal J}_T }{2}\right)^{d/(2-d)},\nonumber\\
m_1&=f(d)\left(\frac{ m_0{\cal J}_T+m_2 {\cal J}_T }{2}\right)^{d/(2-d)},\nonumber\\
m_2&=f(d)\left(\frac{m_1 {\cal J}_T +m_3 {\cal J}_T }{2}\right)^{d/(2-d)},\nonumber\\
m_3&=f(d)\left(\frac{m_2 {\cal J}_T +m_0 \delta {\cal J}}{2}\right)^{d/(2-d)},
\label{coupledeqs}
\end{align}
where $f(d)$ is a constant that only depends on the scaling dimension $d = \frac{1}{2 K_c}$.
The explicit expression for $f(d)$ is:
\begin{align}
f(d)=&\frac{(1+\xi) \pi \Gamma(1-d/2)}{16 \sin \pi\xi\ \Gamma(d/2)}
\left(\frac{\Gamma(\frac{1}{2}+\frac{\xi}{2})\Gamma(1-\frac{\xi}{2})}
{4\sqrt{\pi}}\right)^{(d-2)}\nonumber\\
&\times\left( 2\sin \frac{\pi\xi}{2} \right)^d
\left(\frac{\pi\Gamma(1-d/2)}{\Gamma(d/2)}\right)^{d/(2-d)}\nonumber\\
&\times\left(
\frac{2\Gamma(\xi/2)}{\sqrt{\pi}\Gamma(\frac{1}{2}+\frac{\xi}{2})}\right)^d
\label{constant}
\end{align}
Notice that the system of Eqs. \eqref{coupledeqs} is non-linear. Nevertheless it is easy to see that $m_0$ and
$m_3$ ($m_1$ and $m_2$) will take the same value ($m_0=m_3$ and $m_1=m_2$). We can therefore reduce Eq. \eqref{coupledeqs} to a system of
only two coupled equations:
\begin{align}
m_0&=f(d)\left(\frac{ m_0\delta {\cal J}+ m_1{\cal J}_T}{2}\right)^{d/(2-d)}\label{coupledeqs1}\\
m_1&=f(d)\left(\frac{ m_0{\cal J}_T+m_1{\cal J}_T }{2}\right)^{d/(2-d)}
\label{coupledeqs2}
\end{align}
Taking the ratio of Eq. \eqref{coupledeqs1} and Eq. \eqref{coupledeqs2} we get:
\begin{equation}
x=\left(\frac{\lambda x+1}{x+1}\right)^{d/(2-d)}
\label{xeq}
\end{equation}
where $\lambda=\delta {\cal J}/{\cal J}_T$.
We can solve numerically the transcendental Eq.\eqref{xeq}, or directly solve the system of
Eqs. \eqref{coupledeqs1}-\eqref{coupledeqs2}.
Before solving the system of equations \eqref{coupledeqs1}-\eqref{coupledeqs2} numerically for some values of the parameters,
let us comment on Eq. \eqref{xeq}.
In the limiting case where ${\cal J}_T=\delta {\cal J}$ (i.e. ${\cal J}_{AA}^{\prime} =0 $) Eq. \eqref{xeq} has
the trivial solution $x=1$. In this case all the SC order parameters are in phase in the case $\delta {\cal J}>0$. On the other hand,
for $\delta {\cal J}<0$, there is a shift of $\pi$ every four lattice spacings. So, in this case, the periodicity of the PDW order parameter
will be eight (and not four), although the self-consistency equations actually will take the same form.
For now we will assume $\delta {\cal J}>0$ (see section \ref{sec:P8PDW} for the $\delta {\cal J}<0$ case).
Then in the pattern that we consider here, we find $x<1$ and so there is a coexistence between
the uniform SC and the period 4 PDW order parameters. Let us now solve the system of equations \eqref{coupledeqs1}-\eqref{coupledeqs2}
numerically for some values of the parameters. The results are summarized in Table \ref{table:numerics}.
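A simple way to obtain these numbers is a fixed-point iteration of Eqs. \eqref{coupledeqs1}-\eqref{coupledeqs2}; since the exponent $d/(2-d)<1$, the map is a contraction near the solution and converges rapidly. The sketch below (our own minimal implementation; `f_const` evaluates the constant $f(d)$ of Eq.\eqref{constant}) should reproduce the values of Table \ref{table:numerics}:

```python
from math import gamma, sin, pi, sqrt

def f_const(d):
    """The prefactor f(d) of Eq. (constant), with xi = 1/(2-d)."""
    xi = 1.0 / (2.0 - d)
    return ((1 + xi) * pi * gamma(1 - d / 2)
            / (16 * sin(pi * xi) * gamma(d / 2))
            * (gamma(0.5 + xi / 2) * gamma(1 - xi / 2)
               / (4 * sqrt(pi)))**(d - 2)
            * (2 * sin(pi * xi / 2))**d
            * (pi * gamma(1 - d / 2) / gamma(d / 2))**(d / (2 - d))
            * (2 * gamma(xi / 2)
               / (sqrt(pi) * gamma(0.5 + xi / 2)))**d)

def solve_pair(J_T, dJ, d, n_iter=200):
    """Fixed-point iteration for (m0, m1), Eqs. (coupledeqs1)-(coupledeqs2)."""
    p, fd = d / (2 - d), f_const(d)
    m0 = m1 = 0.5
    for _ in range(n_iter):
        m0, m1 = (fd * ((dJ * m0 + J_T * m1) / 2)**p,
                  fd * ((J_T * m0 + J_T * m1) / 2)**p)
    return m0, m1
```

In the limit $\delta\cJ\to\cJ_T$ the two amplitudes collapse onto each other and the PDW component $m_{\text{PDW}}=(m_1-m_0)/2$ vanishes.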
We now compute $T_c$ for this case. Following the same steps as in the previous section we have that:
\begin{align}
H_0 &= H^{(0)}_0 - h_0 \int dx \cos(\sqrt{2\pi} \theta_0) \nonumber\\
H_1 & =H^{(0)}_1 - h_1 \int dx \cos(\sqrt{2\pi} \theta_1)
\end{align}
and
\begin{align}
h_0 & = \delta\cJ m_0+\cJ_T m_1\nonumber\\
h_1 & = \cJ_T m_0 + \cJ_T m_1
\end{align}
where we have used that $m_0=m_3$ and $m_1=m_2$. Since all the $A$-systems are equivalent, they have the same SC susceptibility $\chi$.
Then, the self-consistency equations are
\begin{equation}
m_0 = \chi_{SC} h_0, \qquad
m_1 = \chi_{SC} h_1
\end{equation}
We can write this as a system of linear equations,
\begin{equation}
\left( \begin{array}{cc}
1-\chi_{SC}\delta\cJ & -\chi_{SC}\cJ_T \\
-\chi_{SC}\cJ_T & 1-\chi_{SC}\cJ_T\\
\end{array} \right)
\left( \begin{array}{c}
m_0\\
m_1\\
\end{array} \right)=
\left( \begin{array}{c}
0\\
0\\
\end{array} \right)
\end{equation}
which has a non-trivial solution if and only if the determinant of the $2\times2$ matrix vanishes. This gives a quadratic equation for $\chi_{SC}$.
Choosing the positive solution we find that the critical temperature for the coexisting state is
\begin{equation}
T_c=\Delta_s \left(\frac{2\cJ_T(\cJ_T-\delta\cJ)}{-\cJ_T-\delta\cJ+\sqrt{5\cJ_T^2-2\cJ_T\delta\cJ+\delta\cJ^2}} \right)^{\alpha}
\label{Tccoex}
\end{equation}
where we recall that the exponent is given by $\alpha=\displaystyle\frac{1}{2- 1/K_c}$. Notice that, in the limit $\delta\cJ\to\cJ_T$, we recover Eq. \eqref{Tcpure}.
Thus, as in the uniform or pure period 2 PDW state, $T_c$ has a power law behavior in
$\cJ_T$ and $\delta\cJ$, and it is not exponentially small as it would be in a weak coupling BCS type theory.
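This limit can also be checked numerically. Writing $T_c=\Delta_s\, B(\cJ_T,\delta\cJ)^{\alpha}$, with $B$ the bracket of Eq.\eqref{Tccoex}, one finds $B\to 2\cJ_T$ as $\delta\cJ\to\cJ_T$, which is the single-coupling condition $2\cJ\chi_{SC}=1$ behind Eq.\eqref{Tcpure}, while at $\delta\cJ=0$ it reduces to the golden-ratio value $(1+\sqrt{5})\,\cJ_T/2$. A short sketch (our own check; the function name is ours):

```python
from math import sqrt

def tc_bracket(J_T, dJ):
    """The bracket B in T_c = Delta_s * B**alpha, Eq. (Tccoex).
    Written for dJ < J_T; the dJ -> J_T limit is approached numerically."""
    num = 2 * J_T * (J_T - dJ)
    den = -J_T - dJ + sqrt(5 * J_T**2 - 2 * J_T * dJ + dJ**2)
    return num / den
```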
\begin{center}
\begin{table}[t]
\begin{tabular}{c c c c c c c}
\hline
\hline
{${\cal J}_T$} & {$\delta {\cal J}$} & {$d$} & {$m_0$} &{$m_1$} &{$\overline{m}$} & {$m_{\text{PDW}}$}\\
\hline
1 & 1 & 1/4 & 0.890893 & 0.890893 & 0.890893 & 0\\
1 & 0.8 & 1/4 & 0.876601 & 0.889789 & 0.883195 & 0.0065943\\
1 & 0.5 & 1/4 & 0.853007 & 0.887947 & 0.870477 & 0.0174703\\
1 & 0 & 1/4 & 0.806035 & 0.884205 & 0.845120 & 0.0390853\\
\hline
\hline
\end{tabular}
\caption{Numerical solutions of the system of equations \eqref{coupledeqs1}-\eqref{coupledeqs2} for different values of the parameters
${\cal J}_T$, $\delta {\cal J}$ and $d=1/2K_c$.
We also define $\overline{m}=(m_1+m_0)/2$ and $m_{\text{PDW}}=(m_1-m_0)/2$, which correspond to the uniform and PDW parts of the SC
order parameter.}
\label{table:numerics}
\end{table}
\end{center}
\section{Fermionic Quasiparticles of the Superconducting States}
\label{sec:LLsystems}
So far, we have solved the coupled LE systems in the limit $|{\cal J}_{AA,j}| \gg |{\cal J}_{AB}|$ and $|{\cal J}_{AA,j}| \gg |{\cal J}_{AB}'|$
in Eq.\eqref{Hint} so that the couplings of the LE systems to eLL systems can be taken as the perturbation.
In this limit, we have ignored the type-$B$ eLL systems and shown that various SC states
can emerge. Now we include the eLL systems again and investigate the nature of the full emergent SC state by looking at the SC proximity effect.
First of all, we note that the eLL systems themselves will flow to the 2D Fermi liquid fixed point (at low enough temperatures)
under the effect of the hopping amplitude $t_{BB}$. This is the most
relevant coupling in Eq.\eqref{Hint}. The result, for $t_{BB}$ small enough, is an anisotropic Fermi liquid with an open Fermi surface, shown as the dashed curves in Fig.\ref{FSpurePDW}.
Having solved the largest energy scales in Eq.\eqref{Hint}, set by $t_{BB}$ and ${\cal J}_{AA}$, we now include the effect of the pair tunneling processes mixing the systems $A$ with the systems $B$, presented
in Eq.\eqref{Hint}, and parametrized by the coupling constants ${\cal J}_{AB}$ and ${\cal J}_{AB}'$, respectively. We will study the effects of the SC states of the $A$ systems on the $B$ systems by treating the pair-tunneling terms to the lowest non-trivial order in perturbation theory in these coupling constants. Hence, we are assuming that the back reaction of the type-$B$ eLL systems does not considerably
change the MFT value of the SC gap in the LE systems.
As in Ref.[\onlinecite{granath-2001}], under the proximity effect mechanism the $B$ systems become superconducting and provide the quasiparticles for the combined $A$-$B$ system.
Since we are interested in the effect of the SC order parameters on the electronic spectrum, we replace the pair density $\Delta_{A, j} (x)$ of the type-$A$ LE systems in Eq.\eqref{Hint} by its MF value
$\langle \Delta_{A,j} \rangle$ determined by the interchain MFT discussed in the previous Section \ref{sec:MFT}.
In this approximation, we find that Eq.\eqref{Hint} reduces to
\begin{align}
&H^{\prime} \to \sum_{j}\int dx \Big\{
-t_{BB}\sum_{\sigma}[\psi_{B,j,\sigma}^{\dagger}\psi_{B,j+1,\sigma}+ {\rm h. c. }]\nonumber\\
&~-{\cal J}_{AB}[\Delta^{\dagger}_{B,j}\langle \Delta_{A,j}\rangle+\Delta^{\dagger}_{B,j}\langle \Delta_{A,j+1}\rangle +{\rm h. c.}]\nonumber \\
&~+{\cal J}_{AB}^{\prime}[\langle \Delta_{A,j}^{*}\rangle (\psi_{B,j,\uparrow}\psi_{B,j-1,\downarrow}+
\psi_{B,j-1,\uparrow}\psi_{B,j,\downarrow})+{\rm h. c.}]
\Big\},
\label{HPDWint2}
\end{align}
which is simply a theory of a Fermi surface coupled to the SC via a proximity coupling. Since Eq.\eqref{HPDWint2} is quadratic in the
electron fields, we can readily diagonalize the effective Hamiltonian, and obtain the quasiparticle spectrum for the different SC states
found in Section \ref{sec:MFT}.
\subsection{Uniform SC phase and pure PDW phase}
As we saw in Section \ref{sec:staggered}, for the staggered (period 2) configurations of the Ising variables it is
possible to have either a pure uniform SC state or a pure PDW state. The case of the uniform SC was studied by Granath {\it et al.},\cite{granath-2001}
who showed that, depending on the values of $\cJ_{AB}$ and $\cJ_{AB}^{\prime}$, it is possible to have either a d-wave SC state with a fully gapped spectrum of
quasiparticles or a conventional d-wave SC state with a nodal quasiparticle spectrum. We refer the reader to their paper for
further details.\cite{granath-2001}
On the other hand, for the pure PDW state, even though the MF equation for the SC gap has the same form as for
the uniform SC gap, the quasiparticle spectrum is quite different. We will study this spectrum in detail here.
Let us start by defining the period 2 PDW order parameter, i.e. with ordering wave vector ${\bm Q}=(0,\pi)$,
\begin{equation}
\Delta^{A}_{j}=\Delta_{\bm Q} e^{i\pi j}
\label{PDWOP}
\end{equation}
where $\Delta_{\bm Q}$ is given by the spin gap and the interchain MFT value for $\langle\cos\sqrt{2\pi}\theta\rangle$
given in Section \ref{sec:unifstag} for the period 2 configuration of the Ising variables. Notice that $\Delta_{\bm Q}=\Delta_{-\bm Q}$, since for a period 2 state $\bm Q$ and $-\bm Q$ differ by a reciprocal lattice vector.
To find the quasiparticle spectrum we first
write down the Hamiltonian of Eq. \eqref{HPDWint2} in momentum space.
Defining the Nambu basis (here we dropped the $B$ label in the electronic operators, since it is understood that we are referring to the eLL
systems) as:
\begin{equation}
\Psi_\mathbf{k}^\dagger=(\psi_{\mathbf{k}\uparrow}^\dagger, \psi_{\mathbf{k}+(0,\pi)\uparrow}^\dagger,
\psi_{-\mathbf{k}\downarrow},\psi_{-\mathbf{k}-(0,\pi)\downarrow})
\label{NBPDW}
\end{equation}
we can write the Bogoliubov de-Gennes (BdG) Hamiltonian as
\begin{equation}
H=\sum_{\mathbf{k}} \;\Psi_\mathbf{k}^\dagger \;\hat{H}_\mathbf{k} \;\Psi_\mathbf{k}
\end{equation}
where the one-particle Hamiltonian $\hat{H}_{\mathbf{k}}$ is given by
\begin{widetext}
\begin{equation}
\hat{H}_{{\bm k}}=\bmb
\varepsilon(k_x)-t_{BB}\cos(k_y ) & 0 & 0 & 2i{\cal J}_{AB}^{\prime}\Delta^*_{\bm Q}\sin(k_y) \\
0 & \varepsilon(k_x)+t_{BB}\cos(k_y) &-2i{\cal J}_{AB}^{\prime}\Delta^*_{\bm Q}\sin(k_y)& 0 \\
0 & 2i{\cal J}_{AB}^{\prime}\Delta_{\bm Q}\sin(k_y) & -\varepsilon(k_x)+t_{BB}\cos(k_y) & 0 \\
-2i{\cal J}_{AB}^{\prime}\Delta_{\bm Q}\sin(k_y) & 0 & 0 & -\varepsilon(k_x)-t_{BB}\cos(k_y)
\emb
\label{HBdGPDW}
\end{equation}
\end{widetext}
From this one-particle Hamiltonian we find the quasiparticle spectrum
\begin{equation}
E(\mathbf{k})=\pm t_{BB} \cos(k_y)\pm \sqrt{\varepsilon^2(k_x)+4{\cal J}_{AB}^{\prime 2}|\Delta_{\bm Q}|^2\sin^2(k_y)}
\end{equation}
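Because the $(1,4)$ and $(2,3)$ components of Eq.\eqref{HBdGPDW} decouple into $2\times2$ blocks, this closed-form spectrum is easy to verify numerically. The sketch below (our own check, with $\Delta_{\bm Q}$ taken real and illustrative parameter values) diagonalizes the $4\times4$ BdG matrix and compares it against $E(\mathbf{k})$:

```python
import numpy as np

def bdg_pdw(kx, ky, t=1.0, t_bb=0.7, g=0.12):
    """4x4 BdG matrix of Eq. (HBdGPDW); g = J'_AB * Delta_Q, taken real."""
    eps = -t * np.cos(kx)          # illustrative epsilon(kx) = -t cos(kx)
    c = t_bb * np.cos(ky)
    s = 2 * g * np.sin(ky)
    return np.array([[eps - c, 0, 0, 1j * s],
                     [0, eps + c, -1j * s, 0],
                     [0, 1j * s, -eps + c, 0],
                     [-1j * s, 0, 0, -eps - c]])

def spectrum(kx, ky, t=1.0, t_bb=0.7, g=0.12):
    """Closed-form E(k) = +/- t_bb cos(ky) +/- sqrt(eps^2 + 4 g^2 sin^2 ky)."""
    eps = -t * np.cos(kx)
    c = t_bb * np.cos(ky)
    r = np.sqrt(eps**2 + 4 * g**2 * np.sin(ky)**2)
    return [-c - r, -c + r, c - r, c + r]
```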
\begin{figure}[hbt]
\centering
\hbox{\includegraphics[width=0.46\columnwidth]{Fig2a}
\hskip -.1cm
\includegraphics[width=0.55\columnwidth]{Fig2b}}
\caption{(Color online) On the left, Fermi surface for the pure period 2 PDW state. The dashed (blue) line corresponds to the original
FS in the absence of superconductivity. The solid (red) line corresponds to the new FS after the superconducting proximity state is established.
On the right, the spectral function $A(\mathbf{k},0)$ corresponding to the pockets on the left.
We used ${\cal J}_{AB}^{\prime}\Delta_{\bm Q}=0.12t$, $t_{BB}=0.7t$, $\varepsilon(k_x)=-t\cos k_x$ and $\delta=10^{-4}t$.}
\label{FSpurePDW}
\end{figure}
In Fig. \ref{FSpurePDW} we plot the Fermi surface of the Bogoliubov quasiparticles of this period 2 PDW state for some values of the parameters.
In contrast to the pure uniform SC state, whose spectrum can be either nodal or fully gapped, we
find that this PDW state ($\Delta_{\bm Q}\neq0$ in Eq. \eqref{PDWOP}) has pockets of Bogoliubov quasiparticles,
as is also found in weak-coupling theories.\cite{baruch-2008,Berg-2009,Loder-2010,Radzihovsky-2011,zelli-2012,lee-2014} The size of the pockets depends
on the strength of the SC gap.
In addition, we compute the spectral function given by (see for instance Ref. [\onlinecite{Seo-2008}]):
\begin{equation}
A(\mathbf{k},\omega)=-\frac{1}{\pi}\text{Im}[\hat{G}_{11}(\mathbf{k},\omega)]
\label{spectralfn}
\end{equation}
where
\begin{equation}
\hat{G}(\mathbf{k},\omega)=\frac{1}{\omega+i\delta-\hat{H}_{{\bm k}}}
\end{equation}
is the retarded Green function and $\delta=0^+$.
The spectral function $A(\mathbf{k},\omega=0)$ for this pure period 2 PDW state is shown in Fig. \ref{FSpurePDW}.
In Fig. \ref{bands} we plot the dispersion relation of the Bogoliubov excitations for several values of $k_y$.
\begin{figure}[hbt]
\centering
\subfigure[]{\includegraphics[width=0.23\textwidth]{Fig3a} }
\subfigure[]{\includegraphics[width=0.23\textwidth]{Fig3b}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{Fig3c}}
\subfigure[]{\includegraphics[width=0.23\textwidth]{Fig3d}}
\caption{(Color online) a-d: Dispersion relation of the Bogoliubov quasiparticles for $k_y=0, \frac{\pi}{4}, \frac{4\pi}{9}, \frac{\pi}{2}$, respectively, for the period 2 PDW state.
Here we used ${\cal J}_{AB}^{\prime}\Delta_{\bm Q}=0.12t$, $t_{BB}=0.7t$ and $\varepsilon(k_x)=-t\cos k_x$.}
\label{bands}
\end{figure}
\subsection{Coexistence Phase of a Period 4 PDW and a uniform SC: the Striped Superconductor}
We start by writing the SC order parameter, which includes both the uniform SC and the PDW order parameters as an expansion of the form
\begin{equation}
\Delta^{A}_{j}=\Delta_0+\sqrt{2}\Delta_{\bm Q}\cos\left(\frac{\pi j}{2}+\frac{\pi}{4}\right)
\label{coexOP}
\end{equation}
where $\Delta_0$ and $\Delta_{\bm Q}$ are the uniform and PDW order parameters, ${\bm Q}=(0,\frac{\pi}{2})$ is the ordering wave vector, and their expectation values are set jointly by the spin gap of the LE systems and by the interchain MFT value for $\langle\cos\sqrt{2\pi}\theta\rangle$
found in the previous section for the period 4 state of the Ising degrees of freedom.
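A one-line numerical check (with arbitrary illustrative magnitudes for $\Delta_0$ and $\Delta_{\bm Q}$) confirms that Eq. \eqref{coexOP} produces the expected period 4 modulation $\Delta_0+\Delta_{\bm Q}(+1,-1,-1,+1,\ldots)$ on the lattice sites:

```python
import numpy as np

delta0, delta_q = 0.2, 0.12          # illustrative magnitudes
j = np.arange(8)
delta_j = delta0 + np.sqrt(2) * delta_q * np.cos(np.pi * j / 2 + np.pi / 4)

# Expected period 4 pattern on top of the uniform component.
expected = delta0 + delta_q * np.array([1, -1, -1, 1, 1, -1, -1, 1])
```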
\begin{figure}[t]
\centering
\includegraphics[width=0.78\columnwidth]{Fig4}
\caption{
(Color online) Quasiparticle spectra with nodal points in the coexistence phase.
The dashed (blue) line corresponds to the original FS in the absence of superconductivity.
The red points correspond to the position of the nodes in the absence of the PDW state ($\Delta_{\bm Q}=0$).
The green points correspond to the position of the nodes in the presence of the PDW state with $\Delta_{\bm Q}=0.2$.
We have chosen the parameters ${\cal J}_{AB}^{\prime}=0.5t$,
${\cal J}_{AB}=0.2t$, $t_{BB}=0.7t$, $\Delta_0=0.2$ and $\varepsilon(k_x)=-t\cos k_x$.}
\label{nodes}
\end{figure}
We now write down the Hamiltonian in momentum space following the notation of Ref. [\onlinecite{baruch-2008}].
We define the Nambu spinor as:
\begin{equation}
\Psi_\mathbf{k}^\dagger=(\psi_{\mathbf{k}\uparrow}^\dagger, \psi_{\mathbf{k}+\mathbf{q}\uparrow}^\dagger,
\ldots,\psi_{-\mathbf{k}\downarrow},\psi_{-(\mathbf{k}+\mathbf{q})\downarrow}, \ldots)
\label{NB}
\end{equation}
where $\mathbf{q}$ is the ordering wavevector. In our case $\mathbf{q}=(0,\pi/2)$ and $\mathbf{k}$ takes values over the reduced Brillouin
zone (RBZ) associated with the ordered state, which in this case is $k_x \in [-\pi, \pi)$ and $k_y\in [-\pi/4, \pi/4)$.
In this basis the Hamiltonian is given by:
\begin{equation}
H=\sum_{\mathbf{k}\in RBZ} \;\Psi_\mathbf{k}^\dagger \;\hat{H}_\mathbf{k} \;\Psi_\mathbf{k},
\label{Hrep}
\end{equation}
where the BdG Hamiltonian $\hat{H}_{\bm k}$ in the Nambu basis of Eq.\eqref{NB} is given by:
\begin{equation}
\hat{H}_\mathbf{k}=\left( \begin{array}{cc}
\mathcal{A}_\mathbf{k} & \mathcal{C}_\mathbf{k} \\
\mathcal{C}^\dagger_\mathbf{k} & -\mathcal{A}_\mathbf{k}
\end{array} \right)
\label{Hmatrix}
\end{equation}
where $\mathcal{A}_\mathbf{k}=\textrm{diag} (\varepsilon(\mathbf{k}),
\varepsilon(\mathbf{k}+\mathbf{q}),\ldots)$ is a diagonal matrix, and the square matrix $\mathcal{C}_\mathbf{k}$ contains the SC order parameters.
Since the ordering vector is $\pi/2$ along the $k_y$ direction, our matrix $\mathcal{C}_\mathbf{k}$ is given by a $4\times 4$
matrix with the form:
\begin{equation}
\!\!\mathcal{C}_\mathbf{k}=\left( \begin{array}{cccc}
f_0(\mathbf{k}) & f_1(\mathbf{k}) & f_2(\mathbf{k}) & f_3(\mathbf{k}) \\
f_1^*(\mathbf{k}) & f_0(\mathbf{k}+\mathbf{q}) & f_1(\mathbf{k}+\mathbf{q}) & f_2(\mathbf{k}+\mathbf{q}) \\
f_2^*(\mathbf{k}) & f_1^*(\mathbf{k}+\mathbf{q}) & f_0(\mathbf{k}+2\mathbf{q}) & f_1(\mathbf{k}+2\mathbf{q})\\
f_3^*(\mathbf{k}) & f_2^*(\mathbf{k}+\mathbf{q}) & f_1^*(\mathbf{k}+2\mathbf{q}) & f_0(\mathbf{k}+3\mathbf{q}) \\
\end{array} \right)
\label{Cmatrix}
\end{equation}
where $f_0$ corresponds to uniform pairing and $f_1,f_2,f_3$ to the finite momentum pairing. The explicit expressions are the following:
\begin{align}
f_0(\mathbf{k})&=2\Delta_0({\cal J}_{AB}-{\cal J}_{AB}^{\prime}\cos k_y)\nonumber\\
f_1(\mathbf{k})&=-i\Delta_{\bm Q}({\cal J}_{AB}-\sqrt{2}{\cal J}_{AB}^{\prime}\cos(k_y+q_y/2))\nonumber\\
f_2(\mathbf{k})&=0\nonumber\\
f_3(\mathbf{k})&=i\Delta_{\bm Q}({\cal J}_{AB}-\sqrt{2}{\cal J}_{AB}^{\prime}\cos(k_y-q_y/2))
\end{align}
where we recall that $\mathbf{q}=(0,\pi/2)$, so $q_y=\pi/2$.
\begin{figure}[t!]
\centering
\subfigure[]{\includegraphics[width=0.28\textwidth]{Fig5a} }
\subfigure[]{\includegraphics[width=0.28\textwidth]{Fig5b}}
\subfigure[]{\includegraphics[width=0.28\textwidth]{Fig5c}}
\caption{(Color online) Dispersion relations of the quasiparticles in the coexistence phase shown for several values of $k_y$. Notice that the dispersion relation is gapless only for $k_y\approx0.457$, which
corresponds to the position of the nodal point for the same set of parameters used in Fig. \ref{nodes}. Here ${\cal J}_{AB}^{\prime}=0.5t$,
${\cal J}_{AB}=0.2t$, $t_{BB}=0.7t$, $\Delta_0=\Delta_{\bm Q}=0.2$ and $\varepsilon(k_x)=-t\cos k_x$.}
\label{bandscoex}
\end{figure}
First of all,
due to the periodicity of the PDW SC state, it is necessary to fold the original FS.
Let us first analyze the case of the pure uniform SC state. In this case ($\Delta_{\bm Q}=0$) the spectrum can be easily calculated
from the Hamiltonian given in eq. \eqref{Hmatrix}:
\begin{align}
E_{1,\pm}^{2}&=(\varepsilon(k_x)\pm t_{BB}\cos(k_y))^2+\Delta_0^2(\cJ_{AB}\pm\cJ_{AB}^{\prime}\cos(k_y))^2\nonumber \\
E_{2,\pm}^{2}&=(\varepsilon(k_x)\pm t_{BB}\sin(k_y))^2+\Delta_0^2(\cJ_{AB}\pm\cJ_{AB}^{\prime}\sin(k_y))^2
\end{align}
We can see that this SC state will have a quasiparticle spectrum with nodes if $|\cJ_{AB}|<|\cJ_{AB}^{\prime}|$. Even in the coexistence phase, where both $\Delta_{\bm Q}\neq0$
and $\Delta_0\neq0$, the quasiparticle spectrum may still have nodes. For the pure uniform SC state, the position of the nodes depends on
the values of $\cJ_{AB}/\cJ_{AB}^{\prime}$ and $t_{BB}$. In the coexistence phase the position of the nodes depends on $\Delta_{\bm Q}$ as well
(see Fig. \ref{nodes}).
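The nodal condition for the branch $E_{1,-}$ can be made explicit: both squares must vanish simultaneously, i.e. $\cos k_y=\cJ_{AB}/\cJ_{AB}^{\prime}$ (possible only for $|\cJ_{AB}|\le|\cJ_{AB}^{\prime}|$) together with $\varepsilon(k_x)=t_{BB}\cos k_y$. The sketch below (an illustrative check, not the code used for the figures) verifies this with the parameters of the nodal regime, and verifies that the spectrum is fully gapped once $|\cJ_{AB}|>|\cJ_{AB}^{\prime}|$:

```python
import numpy as np

t, t_bb, delta0 = 1.0, 0.7, 0.2
j_ab, j_ab_p = 0.2, 0.5              # |J_AB| < |J'_AB|: nodal regime

def e1_minus(kx, ky):
    eps = -t * np.cos(kx)            # epsilon(k_x) = -t cos(k_x)
    return np.hypot(eps - t_bb * np.cos(ky),
                    delta0 * (j_ab - j_ab_p * np.cos(ky)))

# Analytic node: cos(ky) = J_AB/J'_AB and eps(kx) = t_BB cos(ky).
ky_node = np.arccos(j_ab / j_ab_p)
kx_node = np.arccos(-t_bb * j_ab / (t * j_ab_p))
gap_at_node = e1_minus(kx_node, ky_node)

# With J_AB = 0.6 > J'_AB = 0.5 the pairing term can no longer vanish: fully gapped.
ks = np.linspace(-np.pi, np.pi, 201)
kx_g, ky_g = np.meshgrid(ks, ks)
gap_min = np.min(np.hypot(-t * np.cos(kx_g) - t_bb * np.cos(ky_g),
                          delta0 * (0.6 - 0.5 * np.cos(ky_g))))
```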
As in the case of pure period 2 PDW state, we show in Fig. \ref{bandscoex} the dispersion relation of the quasiparticles for several values of $k_y$.
\subsection{Period 8 PDW state}
\label{sec:P8PDW}
\begin{figure}[hbt]
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig6a} }
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig6b}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{Fig6c}}
\caption{(Color online) FS for the period 8 PDW state.
The dashed (blue) line corresponds to the original FS in the absence of superconductivity.
The solid (red) line corresponds to the new FS (pockets) after the introduction of superconductivity ($\Delta_1\neq0$ and $\Delta_2\neq0$).
In (a) $\Delta_1=\Delta_2=0.05$, in (b) $\Delta_1=0.08$ and $\Delta_2=0.1$, and in (c) $\Delta_1=0.25$ and $\Delta_2=0.3$.
In all figures ${\cal J}_{AB}^{\prime}=0.6t$, ${\cal J}_{AB}=0.4t$, $t_{BB}=0.7t$, $\Delta_0=0$ and $\varepsilon(k_x)=-t\cos k_x$.}
\label{pocketsP8}
\end{figure}
Above we focused on the coexistence phase for the period 4 case, which occurs for $\delta\cJ>0$. However,
if $\delta {\cal J}<0$ (i.e. for $\mathcal{J}_{AA}<\mathcal{J}'_{AA}$) the situation is different and we find a pure PDW state. There is a shift of $\pi$ every four lattice sites,
so in this case the periodicity of the PDW order parameter is actually eight (not four!).
Nevertheless, the self-consistency equations will have the same form:
\begin{align}
m_0&=f(d)\left(\frac{ m_0|\delta {\cal J}|+ m_1{\cal J}_T}{2}\right)^{d/(2-d)}\nonumber\\
m_1&=f(d)\left(\frac{ m_0{\cal J}_T+m_1{\cal J}_T }{2}\right)^{d/(2-d)}
\label{coupledeqsP8}
\end{align}
The pattern of the SC order parameter is now that of a pure period 8 PDW SC state:
$$\Delta=(\Delta_1,\Delta_2,\Delta_2,\Delta_1,-\Delta_1,-\Delta_2,-\Delta_2,-\Delta_1,\Delta_1,\ldots)$$
We can write the previous pattern using the following SC order parameter:
\begin{equation}
\Delta^{A}_{j}=\Delta\sin\left(\frac{\pi j}{4}+\frac{\pi}{8}\right)+\tilde{\Delta}\sin\left(\frac{3\pi j}{4}+\frac{3\pi}{8}\right)
\label{Period8OP}
\end{equation}
where we have defined:
\begin{align}
\Delta&=\Delta_1\sin\left(\frac{\pi}{8}\right)+\Delta_2\cos\left(\frac{\pi}{8}\right)\nonumber\\
\tilde{\Delta}&=\Delta_1\cos\left(\frac{\pi}{8}\right)-\Delta_2\sin\left(\frac{\pi}{8}\right)
\end{align}
where $\Delta_1$ and $\Delta_2$ are given by the spin gap and the interchain MFT value for $\langle\cos\sqrt{2\pi}\theta\rangle$
in eq. \eqref{coupledeqsP8}.
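A short numerical check (with arbitrary illustrative values of $\Delta_1$ and $\Delta_2$) verifies that Eq. \eqref{Period8OP}, with $\Delta$ and $\tilde\Delta$ defined above, reproduces the period 8 pattern $(\Delta_1,\Delta_2,\Delta_2,\Delta_1,-\Delta_1,-\Delta_2,-\Delta_2,-\Delta_1,\ldots)$:

```python
import numpy as np

d1, d2 = 0.08, 0.1                   # illustrative values of Delta_1, Delta_2
s8, c8 = np.sin(np.pi / 8), np.cos(np.pi / 8)
delta = d1 * s8 + d2 * c8            # Delta
delta_t = d1 * c8 - d2 * s8          # Delta tilde

j = np.arange(16)
delta_j = (delta * np.sin(np.pi * j / 4 + np.pi / 8)
           + delta_t * np.sin(3 * np.pi * j / 4 + 3 * np.pi / 8))

# Two full periods of the expected site pattern.
pattern = np.array([d1, d2, d2, d1, -d1, -d2, -d2, -d1] * 2)
```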
Since we are dealing with a period 8 SC state, the reduced Brillouin zone is now
$k_x \in [-\pi, \pi)$ and $k_y\in [-\pi/8, \pi/8)$, with $\mathbf{q}=(0,\pi/4)$. The difference between the period 4 and the
period 8 cases is the definition of the $\mathcal{C}_{\mathbf{k}}$ matrix, which is now an $8\times8$ matrix.
\begin{equation}
\!\!\mathcal{C}_\mathbf{k}=\left( \begin{array}{cccc}
f_0(\mathbf{k}) & f_1(\mathbf{k}) & \cdots & f_7(\mathbf{k}) \\
f_1^*(\mathbf{k}) & f_0(\mathbf{k}+\mathbf{q}) & \cdots & f_6(\mathbf{k}+\mathbf{q}) \\
\vdots & & \ddots \\
f_7^*(\mathbf{k}) & & & f_0(\mathbf{k}+7\mathbf{q}) \\
\end{array} \right)
\label{CmatrixP8}
\end{equation}
where the $f_i(\mathbf{k})$'s are given by the following expressions:
\begin{align}
f_0(\mathbf{k})=&f_2(\mathbf{k})=f_4(\mathbf{k})=f_6(\mathbf{k})=0\nonumber\\%&2\Delta_0({\cal J}_{AB}-{\cal J}_{AB}^{\prime}\cos k_y)\nonumber\\
f_1(\mathbf{k})=&i\Delta\left(\frac{1}{2}{\cal J}_{AB}(e^{-i\pi/8}+e^{-i3\pi/8})\right.\nonumber\\
&\qquad\left.-{\cal J}_{AB}^{\prime}e^{-i\pi/4}\cos(k_y+q_y/2)\right)\nonumber\\
f_3(\mathbf{k})=&i\tilde{\Delta}\left(\frac{1}{2}{\cal J}_{AB}(e^{-3i\pi/8}+e^{-i9\pi/8})\right.\nonumber\\
&\qquad\left.-{\cal J}_{AB}^{\prime}e^{-i3\pi/4}\cos(k_y+3q_y/2)\right)\nonumber\\
f_5(\mathbf{k})=&-i\tilde{\Delta}\left(\frac{1}{2}{\cal J}_{AB}(e^{3i\pi/8}+e^{i9\pi/8})\right.\nonumber\\
&\qquad\left.-{\cal J}_{AB}^{\prime}e^{i3\pi/4}\cos(k_y-3q_y/2)\right)\nonumber\\
f_7(\mathbf{k})=&-i\Delta\left(\frac{1}{2}{\cal J}_{AB}(e^{i\pi/8}+e^{i3\pi/8})\right.\nonumber\\
&\qquad\left.-{\cal J}_{AB}^{\prime}e^{i\pi/4}\cos(k_y-q_y/2)\right)\nonumber
\end{align}
where we recall that $\mathbf{q}=(0,\pi/4)$, so $q_y=\pi/4$.
Having $\mathcal{C}_{\mathbf{k}}$ we can write down our BdG Hamiltonian as in eq. \eqref{Hmatrix}.
In Fig. \ref{pocketsP8} we show the FS for some values of $\Delta_1$ and $\Delta_2$. As in the pure period 2 PDW state, we see the formation
of pockets due to the folding of the FS.
\section{Other phases}
\label{sec:phases}
For completeness we summarize the other possible phases occurring in the system. Following closely Granath {\it et al.} \cite{granath-2001}
we treat the interactions appearing in eq. \eqref{Hint} perturbatively around the so called decoupled fixed point. At this
fixed point (FP) the systems are completely decoupled, and each one of the systems corresponds to a 1D system
that can be solved using bosonization.
Around the decoupled FP a perturbation with coupling constant $g$ is relevant (irrelevant) if its scaling dimension $d_g<2$ ($d_g>2$).
The scaling dimensions for the operators appearing in Eq. \eqref{Hint} are given in the work of Granath {\it et al.}.\cite{granath-2001}
The phases found by Granath {\it et al.} are:
\begin{enumerate}
\item
Typically, the couplings between the eLL and LE systems are irrelevant or less relevant than the coupling between $AA$ and $BB$
systems separately. In this case the RG flows to the point where all the $AB$ couplings go to zero. At this FP the system is made of
two (independent) interpenetrating systems, $A$ and $B$.
\item
The $\cJ_{AA}$ ($\cJ_{AA}^{\prime}$) term is relevant for $K_{c}^{(A)}>1/2$. In this case the $A$ systems develop long-range order and a
full spin gap.
Since the $BB$ electron tunneling operator has a lower scaling dimension than the $BB$ spin exchange interaction, in the absence of a charge gap in
the $B$ subsystem, the $B$ subsystem is most probably in an anisotropic Fermi liquid phase. However, this two-fluid FP is unstable due to
the proximity effect. Depending on the parameters in the Hamiltonian of Eq.\eqref{Hint} the quasiparticle spectrum can be gapless
(presenting nodes, or pockets in the pure PDW state) or fully gapped. This means that we can have several possible stable SC phases:
a SC state with Fermi pockets, a nodal SC state, or a fully gapped SC state.
These were the phases studied in the previous sections using interchain MFT and coupling the eLL
systems to the LE systems.
\item
If $\Delta_c^{(B)}>0$, the $B$ subsystem can develop an antiferromagnetic phase. At this FP there will be a coexistence between
superconductivity (in the $A$ subsystem) and antiferromagnetism (in the $B$ subsystem). This FP is stable, due to the spin gap in the
SC ($A$) and the charge gap in the antiferromagnet ($B$). The quasiparticle spectrum is therefore fully gapped as is also found in BCS-type theories.\cite{Loder-2011}
\end{enumerate}
\section{Concluding Remarks}
\label{sec:conclusions}
We have investigated a model of an array of two inequivalent systems in the quasi-one dimensional limit.
In this limit we have treated the interactions between the different systems in the array exactly using bosonization methods and interchain mean field
theory. The phases that we found are a uniform d-wave superconductor, a striped superconductor (in which the uniform SC and the PDW SC state coexist), or a pure PDW state. To simplify the analysis we only looked at the case in which the modulation of the SC state is commensurate.
The resulting critical temperatures are, as expected, upper bounds on the actual physical critical temperatures. As emphasized in Refs.[\onlinecite{Arrigoni-2004}] and [\onlinecite{kivelson-2007}], the analytic dependence of these mean field $T_c$'s on the coupling constants obeys the exact power-law scaling behavior predicted by a renormalization group analysis of the dimensional crossover from the 1D regime to the full (but anisotropic) 2D phases, albeit with an overestimate of the prefactor.
On the other hand, the actual critical temperatures are significantly suppressed from the values quoted here due to the two-dimensional nature of the array. Hence we expect the ground states that we found here to undergo a sequence of thermodynamic phase transitions leading to a complex phase diagram of the type discussed by Berg {\it et al.}\cite{Berg-2009b} (and by Agterberg and Tsunetsugu\cite{agterberg-2008}). It is well known from classical critical phenomena of 2D commensurate systems that states of the type we discuss here may become incommensurate at finite temperatures due to thermal fluctuations if the period of the ordered state is longer than a critical value (typically equal to four), see, e.g. Ref.[\onlinecite{Chaikin-1995}].
We have shown that at high energy scales (of the order of the spin gap), we can first determine the SC phases of one set of systems
(in our notation, the Luther-Emery liquid systems $A$). At these energy scales we
showed that it is possible to have, in addition to a uniform SC phase, a pure PDW state and a coexistence phase of a uniform and a PDW state.
Having determined the SC in the LE systems, we proceeded to incorporate the electronic Luttinger liquid systems perturbatively.
We found that the quasiparticle spectrum arising from the eLL systems can present Fermi pockets if the SC state is a pure PDW state.
In the case of a coexisting uniform SC and PDW state (i.e. a striped superconductor) or a pure uniform SC, the quasiparticle spectrum can have nodes or be fully gapped depending
on the values of the couplings in the model.
We should stress, as it was done recently in Ref. [\onlinecite{fradkin-2014}], that in this quasi-1D approach the superconducting state
evolves from a local high energy scale, the spin gap, which hence has a magnetic origin. For temperatures $T$ higher than the spin gap, the system
is a quasi-1D system without quasiparticles in its spectrum, up to a crossover scale, set by the interchain electron tunneling, above which it
becomes a Fermi-liquid-type system. Hence, at least qualitatively, systems of this type behave as `high $T_c$ superconductors.'
\begin{acknowledgments}
We thank Steven Kivelson for great discussions and V. Chua for his help generating the density plot for the spectral function.
This work was supported in part by the NSF grants DMR-1064319 (GYC,EF) and DMR 1408713 (EF) at the University of Illinois,
DOE Award No. DE-SC0012368 (RSG) and Program Becas Chile (CONICYT) (RSG).
\end{acknowledgments}
\section{Introduction}
In two dimensions,
$\mathbb{Z}$~topological insulators exhibit a charge Hall conductivity that
is quantized and
proportional to the Chern number of the
occupied bands \cite{TKNN,Hasan,Zhang}.
Such nontrivial topological phases are also characterized
by the presence of gapless edge modes \cite{Halperin,Hatsugai} that can be detected by transport measurements or tunneling.
In a topologically non-trivial superconductor, however,
one does not expect the charge Hall conductivity to be quantized,
since charge is not conserved due to the breaking of the $U(1)$ symmetry.
In a singlet superconductor spin is conserved and
there is still a possibility that the spin Hall
conductivity is quantized,
as previously shown for a $d$-wave superconductor in the vortex state \cite{ZlatkoHall}.
For a triplet superconductor even this quantization is absent.
The thermal Hall conductivity has recently been shown to be quantized for
topological superconductors with broken time reversal symmetry (TRS) \cite{Sumiyoshi}.
Generically speaking,
the charge Hall resistance may be written as the sum of two contributions, one proportional to the
magnetic field and an anomalous contribution as
$\rho_{xy}=R_0 H_z + \rho_{xy}^{AH}$ (considering
$z$ as the perpendicular direction to the plane where the charges move).
The term $\rho_{xy}^{AH}$
is the anomalous Hall effect \cite{Nagaosa}
and has different origins. One of these is intrinsic: it is the
result of an anomalous velocity \cite{Karplus} arising from a non-zero Berry curvature
\cite{Xiao}, $\boldsymbol{\Omega}_n(k)$.
The velocity of a charged particle in a given energy band $n$ in the presence of an electric field,
$\boldsymbol{E}$, can be written as
$\boldsymbol{v}_n(k)=\hbar^{-1} \partial \epsilon_n(k)/\partial k
-(e/\hbar) \boldsymbol{E} \times \boldsymbol{\Omega}_n(k)$.
The last term gives a contribution to the
velocity that is transverse to the direction of the electric field
and therefore
contributes to the Hall conductivity. Two other mechanisms that lead to an anomalous velocity
are due to scattering from impurities in a system where the spin-orbit has to be taken into account
such as the skew scattering mechanism \cite{Smit} and the side jump \cite{Berger}.
Historically, the anomalous Hall resistivity was studied in detail in systems with a
finite magnetization $M_z$, where $\rho_{xy}^{AH} = R_sM_z$.\cite{Nagaosa}
Anomalous properties in superconductors with magnetization have been studied
before, in particular
the presence of magnetoelectric effects \cite{edelstein}, the generation of a charge Hall effect due to a
spin current \cite{maekawa}
or a spin Hall effect due to a charge current \cite{bernevig}.
The anomalous Hall effect has been studied \cite{us}
in superconductors with spin-orbit coupling. The presence of a magnetic impurity is enough to induce a
non-vanishing Hall conductivity \cite{us}. The magnetic impurity interacts with the superconductor as
a local Zeeman term and orbital effects such as the presence of vortices are neglected. The local magnetization
may also be the result of some proximity effect for instance with a magnetic dot.
A dense magnetic impurity distribution leads to the destruction of superconductivity if the pairing
is a spin singlet but magnetization and superconductivity may coexist if the pairing is of triplet
origin such as in a p-wave superconductor. In this case the Hall conductivity
was shown to be non-vanishing if the spin-orbit coupling is present \cite{us}.
In this work we will
be concerned with the effect of the intrinsic contribution to the charge Hall conductivity
in $\mathbb{Z}$~topological superconductors. Superconductivity
with non-trivial topology may be obtained in different ways.\cite{Zhang} It can be due to
the pairing symmetry, as is the case of $p$-wave superconductors.\cite{ReadGreen}
In semiconductors with Rashba spin-orbit coupling it arises when $s$-wave superconductivity
is induced
and a Zeeman (time-reversal breaking) term is added.\cite{SatoPRL09, SauPRL10} An interesting proposal
is that of systems where the normal phase is already topologically non-trivial, in which case a
topological superconductor can
be obtained if $s$-wave superconductivity is induced by proximity effect.\cite{Fu,QHZ10}
Here we consider a Rashba-type non-centrosymmetric superconductor with admixture of $s$-wave
and $p$-wave pairing and TRS breaking Zeeman splitting, which has been
recently proposed in Ref.~\onlinecite{Sato}. We generalize to the cases where the pairing vector
is not coincident with that of the spin-orbit coupling, and also to the cases where the normal
phase is topologically non-trivial. We thus take on equal footing the three possible ways of
obtaining topological superconductivity mentioned above.
The main results of this paper may be summarized as follows. Whenever a $\mathbb{Z}$~topological
superconductor is realized and the first Chern number fully characterizes the topological phase,
we find that the behavior of the Hall conductivity and its derivatives with respect to the parameters
that drive the topological phase transition, specially the second derivative, provide an alternative
way to identify topological transitions in superconductors. This approach proves extremely
useful when the pairing vector is not aligned with the spin-orbit, in which case we have found
a less obvious relation between the Chern number and the number of crossings of edge state
bands with the Fermi level. A careful topological analysis of this case is given.
The paper is organized as follows.
In Section II we introduce the expression for the Chern number that will be used throughout
this work. The model Hamiltonian is presented in Section III and in Section IV the
results for the Hall conductivity and Chern number are presented. Section V is devoted
to the analysis of the edge states and their correspondence to the topological indices.
In Section VI we present results for a model where the non-superconducting band structure is already
nontrivial. Our conclusions are presented in Section VII.
\section{Characterization of topological phases}
The Berry curvature tensor
for a band with Bloch wavefunctions $u_n({\boldsymbol k})$
can be calculated as
\begin{equation}
\boldsymbol{\Omega}_n({\boldsymbol k}) = \langle \nabla_{\boldsymbol k} u_n({\boldsymbol k}) | \times | \nabla_{\boldsymbol k} u_n({\boldsymbol k}) \rangle\,,
\end{equation}
where
$\boldsymbol k=(k_x,k_y)$ denotes the momentum vector.
The contribution from the $n$-th band
to the Hall conductivity of a normal system
may be written in terms of the Berry curvature as\cite{TKNN}:
\begin{equation}
\sigma_{xy}^{(n)} = \frac{e^2}{\hbar} \int_{BZ} \frac{d^2 k}{(2\pi)^2} \Omega_n^{x,y}({\boldsymbol k}) n_F(\epsilon_n({\boldsymbol k}))\,,
\end{equation}
where $n_F$ is the Fermi function.
If the chemical potential lies within a gap the integral over the occupied states runs over
the entire Brillouin zone.
The charge Hall conductivity can then be written as
\begin{equation}
\sigma_{xy} = C \frac{e^2}{h}\,,
\end{equation}
where $C$ is the sum of the Chern numbers of the occupied bands.
The Berry curvature may also be obtained as a sum over states analogous to the Kubo
formula for the conductivity, and reads:
\begin{equation}
\Omega_n^{\mu,\nu}=i \sum_{n' \neq n} \frac{\langle n| \frac{\partial H}{\partial k_{\mu}}|n'\rangle \langle
n' | \frac{\partial H}{\partial k_{\nu}} |n\rangle - \mu \leftrightarrow \nu}{(E_n-E_{n'})^2}\,.
\end{equation}
In the case of a superconductor, the states $|n\rangle$ are the eigenstates of the Bogoliubov-de Gennes equations. At the gapless
points the denominator vanishes and the integral over the Brillouin zone may have large numerical errors.
It is then more convenient to calculate the Chern number by computing the
flux of the Berry curvature over plaquetes in the Brillouin zone \cite{Fukui}.
Discretizing the Brillouin zone as $k_\mu = 2\pi j/N$, with $j=1,...,N$, and $\mu=x,y$,
a new variable, $U_{\mu}({\boldsymbol k})$, for the link $\delta k_\mu$ (with $|\delta k_\mu|= 2\pi/N$)
oriented along the $\mu$ direction from the point ${\boldsymbol k}$ may be defined as
\begin{equation}
U_{\mu}({\boldsymbol k}) = \frac{\langle n({\boldsymbol k})|n({\boldsymbol k} +\delta k_\mu)\rangle}{|\langle n({\boldsymbol k})|n({\boldsymbol k} +\delta k_\mu)\rangle |}\,,
\end{equation}
and the lattice field strength may be defined as
\begin{equation}
F_{xy}({\boldsymbol k}) = \ln \left( U_x({\boldsymbol k}) U_y({\boldsymbol k} +\delta k_x) U_x({\boldsymbol k} +\delta k_y)^{-1} U_y({\boldsymbol k} )^{-1} \right)\,.
\end{equation}
$F_{xy}({\boldsymbol k})$ is restricted to the interval
$-\pi < -i F_{xy}({\boldsymbol k}) \leq \pi$ and the gauge invariant expression for the Chern number is
\begin{equation}
C_n=\frac{1}{2\pi i} \sum_{\boldsymbol k} F_{xy}({\boldsymbol k})\,.
\label{fukui}
\end{equation}
The calculations of the Chern number of each band $n$ are performed in this way in this work.
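The field-strength method of Eq. \eqref{fukui} takes only a few lines to implement. The sketch below (an illustrative reimplementation, not the code used in this work) applies it to a standard two-band test model, $\boldsymbol h=(\sin k_x,\sin k_y,m+\cos k_x+\cos k_y)$, which is topologically non-trivial for $0<|m|<2$; the grid size $N=24$ is an arbitrary choice, already well within the regime where the result is an exact integer.

```python
import numpy as np

def chern_fhs(h_of_k, n=24, band=0):
    """Lattice Chern number of one band via the field-strength method,
    C = (1/2*pi*i) * sum_k F_xy(k), with F_xy the principal-branch log
    of the plaquette product of link variables."""
    ks = 2 * np.pi * np.arange(n) / n
    vecs = np.empty((n, n, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_of_k(kx, ky))
            vecs[i, j] = v[:, band]          # eigh sorts bands in ascending order
    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            # U_x(k) U_y(k+dx) U_x(k+dy)^{-1} U_y(k)^{-1}; the link
            # normalization drops out when taking the phase.
            prod = (np.vdot(vecs[i, j], vecs[ip, j])
                    * np.vdot(vecs[ip, j], vecs[ip, jp])
                    * np.vdot(vecs[ip, jp], vecs[i, jp])
                    * np.vdot(vecs[i, jp], vecs[i, j]))
            c += np.angle(prod)
    return c / (2 * np.pi)

# Two-band test model: h = (sin kx, sin ky, m + cos kx + cos ky).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(m):
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (m + np.cos(kx) + np.cos(ky)) * sz)
```

The plaquette product is gauge invariant, so the arbitrary phases returned by the eigensolver are harmless.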
The calculation of the Chern number is simple for a 2$\times$2
Hamiltonian matrix $\hat H$ once the latter is written in the form:
\begin{equation}
\hat H(\boldsymbol h) = \boldsymbol h( \boldsymbol k) \cdot \boldsymbol \tau
+ h_0(\boldsymbol k) \tau_0 \,,
\label{H}
\end{equation}
where
$ \boldsymbol h=(h_x,h_y,h_z)$,
$\tau$ are Pauli matrices and $\tau_0$ is
the identity. The
Chern number for the bands in Hamiltonian Eq.~(\ref{H}) is independent of the choice for
$h_0(\boldsymbol k)$, as computed from the
usual expression
\begin{equation}
C=\frac{1}{4\pi}\int dk_x\ dk_y \ \frac{\partial \hat{\boldsymbol h}}{\partial k_x}
\times \frac{\partial \hat{\boldsymbol h}}{\partial k_y} \cdot \hat{\boldsymbol h}\,,
\label{chern}
\end{equation}
where $\hat{\boldsymbol h}=\boldsymbol h/|\boldsymbol h|$.
The topological nature of bands
may be understood as the result of the covering of the unit sphere defined by the vector
$\hat{\boldsymbol h}$.
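Eq. \eqref{chern} can also be evaluated by direct quadrature, using the identity $(\partial_{k_x}\hat{\boldsymbol h}\times\partial_{k_y}\hat{\boldsymbol h})\cdot\hat{\boldsymbol h}=(\partial_{k_x}\boldsymbol h\times\partial_{k_y}\boldsymbol h)\cdot\boldsymbol h/|\boldsymbol h|^3$. The sketch below (an illustrative check, not the paper's code) integrates it for the test vector $\boldsymbol h=(\sin k_x,\sin k_y,m+\cos k_x+\cos k_y)$ and recovers integer values when the gap $|\boldsymbol h|$ stays finite:

```python
import numpy as np

def chern_continuum(m, n=400):
    """Midpoint-rule quadrature of the continuum Chern formula for
    h = (sin kx, sin ky, m + cos kx + cos ky)."""
    k = -np.pi + 2 * np.pi * (np.arange(n) + 0.5) / n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    h = np.stack([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
    dhx = np.stack([np.cos(kx), np.zeros_like(kx), -np.sin(kx)])   # dh/dkx
    dhy = np.stack([np.zeros_like(ky), np.cos(ky), -np.sin(ky)])   # dh/dky
    integrand = np.sum(np.cross(dhx, dhy, axis=0) * h, axis=0)
    integrand = integrand / np.sum(h * h, axis=0) ** 1.5
    return integrand.sum() * (2 * np.pi / n) ** 2 / (4 * np.pi)
```

The result counts how many times $\hat{\boldsymbol h}$ wraps the unit sphere as $\boldsymbol k$ sweeps the Brillouin zone.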
The system's symmetry properties depend on whether
the Pauli matrices in equation (\ref{H}) represent a pseudospin
({\it e.g.}, a sublattice)
or the physical spin. In the first case
TRS requires $h_{x(z)}$ to be an even function
of $\boldsymbol k$ and $h_y$ to be odd.
Otherwise, all components have to be odd.
In order to have nonzero $C$, TRS must be broken. The operation
of spatial inversion does not change the Chern number $C$.
On general grounds, non-trivial topological order for non-interacting Hamiltonians can be
related with the presence or absence of three discrete symmetries: time-reversal, particle-hole,
and chiral symmetry.\cite{Ludwig,LudwigAIP,LudwigNJP} For Bogoliubov-de Gennes systems, where particle-hole symmetry
is always present, whether or not TRS is preserved determines the nature of the possible
topological phases in two dimensions. The non-centrosymmetric superconductor we consider here
is time-reversal invariant if the Zeeman term is absent and the pairing is unitary.
The system then belongs to the symmetry class DIII (for which the TRS operator $\mathcal{T}$ is such that $\mathcal{T}^2 = -1$), where the topological invariant is
a $\mathbb{Z}_2$ index, and it is said to realize a $\mathbb{Z}_2$~topological
superconductor. If the pairing is non-unitary or the Zeeman term is finite, TRS is broken and the system belongs
to the symmetry class D.
The topological invariant that characterizes this phase is the first Chern number $C$,
and the system is said to be a $\mathbb{Z}$~topological superconductor.
\section{Topological superconductor}
\label{sec:TS}
We consider a triplet superconductor with $p$-wave symmetry in the presence of Rashba spin-orbit coupling
and magnetization, {\it e.g.}, due to a time-reversal breaking Zeeman term.
Due to the non-centrosymmetric nature of the system, parity is broken and, in general,
the pairing symmetry is not fixed, and an admixture of singlet pairing is allowed \cite{Gorkov}. Therefore, we
also consider a contribution from $s$-wave pairing.
This model was studied in Refs. \cite{Sato,us}.
We write the Hamiltonian as
\begin{eqnarray}
\hat H = \frac 1 2\sum_{\boldsymbol k} \left( {\boldsymbol c}_{{\boldsymbol k}}^\dagger ,{\boldsymbol c}_{-{\boldsymbol k}} \right)
\left(\begin{array}{cc}
\hat H_0({\boldsymbol k}) & \hat \Delta({\boldsymbol k}) \\
\hat \Delta^{\dagger}({\boldsymbol k}) & -\hat H_0^T(-{\boldsymbol k}) \end{array}\right)
\left( \begin{array}{c}
{\boldsymbol c}_{{\boldsymbol k}} \\ {\boldsymbol c}_{-{\boldsymbol k}}^\dagger \end{array}
\right)
\label{bdg1}
\end{eqnarray}
where $\left( {\boldsymbol c}_{{\boldsymbol k}}^{\dagger}, {\boldsymbol c}_{-{\boldsymbol k}} \right) =
\left( c_{{\boldsymbol k}\uparrow}^{\dagger}, c_{{\boldsymbol k}\downarrow}^\dagger ,c_{-{\boldsymbol k}\uparrow}, c_{-{\boldsymbol k}\downarrow} \right)$
and
\begin{equation}
\hat H_0=\epsilon_{\boldsymbol k}\sigma_0 -M_z\sigma_z + \hat H_R\,.
\end{equation}
Here, $\epsilon_{\boldsymbol{k}}=-2 t (\cos k_x + \cos k_y )-\epsilon_F$
is the kinetic part, where $t$ denotes the hopping parameter set in
the following as the energy scale, $t=1$, $\epsilon_F$ is the
chemical potential,
$\boldsymbol{k}$ is a wave vector in the $xy$ plane, and we have taken
the lattice constant to be unity, $a=1$. Furthermore, $M_z$
is the Zeeman splitting term responsible for the magnetization,
in energy units, along the $z$ direction.
Finally, the Rashba spin-orbit
term is written as
\begin{equation}
\hat H_R = \boldsymbol{s} \cdot \boldsymbol{\sigma} = \alpha
\left( \sin k_y \sigma_x - \sin k_x \sigma_y \right)\,,
\end{equation}
where
$\alpha$ is measured in energy units
and $\boldsymbol{s} =\alpha(\sin k_y,-\sin k_x, 0)$.
The matrices $\sigma_x,\sigma_y,\sigma_z$ are
the Pauli matrices acting on the spin sector, and $\sigma_0$ is the
$2\times2$ identity.
The pairing matrix reads
\begin{equation}
\hat \Delta = i\left( {\boldsymbol d}\cdot {\boldsymbol\sigma} + \Delta_s \right) \sigma_y =
\left(\begin{array}{cc}
-d_x+i d_y & d_z + \Delta_s \\
d_z -\Delta_s & d_x +i d_y
\end{array}\right)\,.
\end{equation}
The vector $\boldsymbol{d}=(d_x,d_y,d_z)$ is the vector representation of the
$p$-wave
superconducting pairing and is an odd function of $\boldsymbol k$.
Because of Fermi statistics, the pairing matrix satisfies
$\hat \Delta({\boldsymbol k}) = -\hat \Delta^T (-{\boldsymbol k}) $.
The triplet pairing term is invariant under a spin rotation about the $\boldsymbol{\hat d}$ direction.
We note that both the superconducting order parameter and the
magnetization may be due to intrinsic order or to some proximity effect due to
neighboring superconductors or ferromagnets.
The pairing matrix for a p-wave superconductor generally satisfies
\begin{equation}
\hat \Delta \hat \Delta^{\dagger} = |\boldsymbol d|^2 \sigma_0 + \boldsymbol q \cdot \boldsymbol{\sigma}\,,
\end{equation}
where $\boldsymbol q=i \boldsymbol d \times \boldsymbol d^*$.
If the vector $\boldsymbol q$ vanishes the pairing is called unitary (s-wave
pairing is always unitary). Otherwise it is called non-unitary\cite{Sigrist}
and breaks TRS,
giving rise to a spontaneous magnetization in the system due to the symmetry of the pairing, as in $^3$He.
We will consider both unitary and non-unitary pairings. In the case of unitary
pairing we consider two examples. One of them corresponds to a situation where the
pairing vector is aligned \cite{Sigrist2} along the
spin-orbit vector $\boldsymbol{s}$. This situation is expected if the spin-orbit coupling
is strong, since the alignment is energetically favorable, and we will refer to it as the strong coupling case. In the other case we
relax this restriction and allow the two vectors to be misaligned; we will refer to this
as the weak spin-orbit coupling case.
The energy eigenvalues and eigenfunction may be obtained solving the Bogoliubov-de Gennes equations
\begin{equation}
\label{bdg2}
\left(\begin{array}{cc}
\hat H_0({\boldsymbol k}) & \hat \Delta({\boldsymbol k}) \\
\hat \Delta^{\dagger}({\boldsymbol k}) & -\hat H_0^T(-{\boldsymbol k}) \end{array}\right)
\left(\begin{array}{c}
u_n\\
v_n
\end{array}\right)
= \epsilon_{k,n}
\left(\begin{array}{c}
u_n\\
v_n
\end{array}\right).
\end{equation}
The 4-component spinor can be written as \cite{errata}
\begin{equation}
\left(\begin{array}{c}
u_n\\
v_n
\end{array}\right)=
\left(\begin{array}{c}
u_n(\boldsymbol{k},\uparrow) \\
u_n(\boldsymbol{k},\downarrow) \\
-v_n(-\boldsymbol{k},\uparrow) \\
v_n(-\boldsymbol{k},\downarrow) \\
\end{array}\right)
\end{equation}
The energy eigenvalues of the Hamiltonian \eqref{bdg1} can be written (for
$\Delta_s=0$ and $d_z=0$) as
\begin{equation}
\label{bands} \epsilon_{\boldsymbol{k},\alpha_1,\alpha_2}
= \alpha_1 \sqrt{z_1 +\alpha_2 2 \sqrt{z_2}},
\end{equation}
where
\begin{eqnarray}
z_1 &=& \boldsymbol{d}\cdot \boldsymbol{d}^* + \boldsymbol{s}\cdot \boldsymbol{s} + \epsilon_{\boldsymbol{k}}^2
+ M_z^2 \,,\nonumber \\
z_2 &=& \left| (\boldsymbol{d}\times \boldsymbol{s})_z \right|^2 -
i \epsilon_{\boldsymbol{k}} M_z (\boldsymbol{d}\times \boldsymbol{d}^*)_z + \nonumber\\
&&\frac{1}{4}\left[ (\boldsymbol{d}\cdot \boldsymbol{d}^*)^2 - \left| \boldsymbol{d}\cdot \boldsymbol{d} \right|^2 \right] +
\epsilon_{\boldsymbol{k}}^2(\boldsymbol{s}\cdot \boldsymbol{s} + M_z^2 ),
\end{eqnarray}
and $\alpha_1,\alpha_2=\pm$.
The gap between the lowest bands closes at the $\boldsymbol k$ points satisfying the condition
$z_1=2 \sqrt{z_2}$.
In the superconducting phase the system is generally gapped. A possible change of topology
occurs when the gap closes.
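The closed form in Eq.~\eqref{bands} can be cross-checked by diagonalizing the $4\times 4$ BdG matrix numerically. The sketch below assumes the conventions $\hat H_0(\boldsymbol k)=\epsilon_{\boldsymbol k}\sigma_0+\boldsymbol s\cdot\boldsymbol\sigma-M_z\sigma_z$, with $\boldsymbol s$ in-plane and odd in $\boldsymbol k$, and $\hat\Delta=(\boldsymbol d\cdot\boldsymbol\sigma)\,i\sigma_y$; with these choices the numerical spectrum reproduces the analytic bands:

```python
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v):
    return v[0] * sx + v[1] * sy + v[2] * sz

def bdg_spectrum(eps, s, d, Mz):
    """Eigenvalues of the 4x4 BdG matrix at one k point.

    eps: normal-state energy (even in k); s: in-plane spin-orbit vector
    (odd in k); d: in-plane triplet d-vector (odd in k); Mz: magnetization.
    """
    H0 = eps * s0 + dot_sigma(s) - Mz * sz
    Delta = dot_sigma(d) @ (1j * sy)
    # -H0^T(-k): eps is even while s is odd under k -> -k
    H0m = eps * s0 - dot_sigma(s) - Mz * sz
    H = np.block([[H0, Delta], [Delta.conj().T, -H0m.T]])
    return np.sort(np.linalg.eigvalsh(H))

def analytic_spectrum(eps, s, d, Mz):
    """Closed-form bands of Eq. (bands), valid for Delta_s = 0 and d_z = 0."""
    d = np.asarray(d, dtype=complex)
    s = np.asarray(s, dtype=float)
    z1 = np.vdot(d, d).real + s @ s + eps**2 + Mz**2
    z2 = (abs(np.cross(d, s + 0j)[2])**2
          + np.real(-1j * eps * Mz * np.cross(d, np.conj(d))[2])
          + 0.25 * (np.vdot(d, d).real**2 - abs(d @ d)**2)
          + eps**2 * (s @ s + Mz**2))
    e_plus = np.sqrt(z1 + 2 * np.sqrt(z2))
    e_minus = np.sqrt(max(z1 - 2 * np.sqrt(z2), 0.0))
    return np.sort([-e_plus, -e_minus, e_minus, e_plus])

# unitary d along x, spin-orbit along y, eps = 0, Mz = 1: bands are +-1, +-sqrt(5)
vals = bdg_spectrum(0.0, [0, 1, 0], [1, 0, 0], 1.0)
assert np.allclose(vals, analytic_spectrum(0.0, [0, 1, 0], [1, 0, 0], 1.0))
assert np.allclose(vals, [-np.sqrt(5), -1.0, 1.0, np.sqrt(5)])
```

In the non-unitary limit $\boldsymbol d=(a,ia,0)$, $\boldsymbol s=0$ the spin-down band decouples and one recovers $\pm|\epsilon_{\boldsymbol k}+M_z|$ and $\pm\sqrt{4a^2+(\epsilon_{\boldsymbol k}-M_z)^2}$, which the same code reproduces.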
Considering the strong spin-orbit case, for which
$\boldsymbol d$ and $\boldsymbol s$ are collinear\cite{Sato},
we may write
$\boldsymbol{d}=(d/\alpha) \boldsymbol{s}$.
Taking also the $s$-wave pairing into account,
the gapless points satisfy
\begin{eqnarray}
\epsilon_{\boldsymbol{k}}^2+\Delta_s^2 &=& M_z^2 + \left(1+ \frac{d^2}{\alpha^2} \right) \boldsymbol{s}^2, \nonumber \\
\epsilon_{\boldsymbol{k}} \frac{d}{\alpha} \boldsymbol{s} &=& \Delta_s \boldsymbol{s}\,.
\label{gapless}
\end{eqnarray}
\section{Chern numbers and Hall conductivity}
\label{sec:Csigma}
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig1a.png}
\includegraphics[width=0.8\columnwidth]{fig1b.png}
\par\end{centering}
\caption{\label{fig:hallsold}(color online). Hall conductivity in a unitary case with $d_x=d \sin k_y, d_y=d \sin k_x, d_z=0$
(top panel)
and a non-unitary case with $d_x=-d/2 \sin k_x=-i d_y, d_z=0$ (bottom panel) with $d=1, \epsilon_F=-1$.
This Figure corrects a previously obtained result \cite{us}.}
\end{figure}
\subsection{Weak spin-orbit coupling: Unitary and non-unitary pairings}
In Fig. \ref{fig:hallsold} we present the results for the Hall conductivity,
as calculated from the Kubo formula, given in Eq. (10) of Ref. \cite{us}.
We consider both the unitary and the
non-unitary cases, relaxing the restriction that
$\boldsymbol{d} \parallel \boldsymbol{s}$ which was assumed in Ref.~\onlinecite{Sato}.
The two cases are chosen as $d_x=d \sin k_y, d_y=d \sin k_x, d_z=0$ and
$d_x=-d/2 \sin k_x=-i d_y, d_z=0$, respectively.
The Hall conductivity in the superconducting unitary
phase is similar to that of the normal phase (not shown).
Both in the normal phase and in the unitary case a magnetization
is required in order to have a non-vanishing Hall conductivity.
However, in the non-unitary case, the magnetization
induced by the vector $\boldsymbol q$
produces a finite
Hall conductivity even if the explicit Zeeman term is absent.
In all cases the spin-orbit coupling is necessary for a non-vanishing Hall conductivity.
In all three cases (normal, unitary, and non-unitary) the Hall conductivity has a clear
minimum when the magnetization is of the order of the
chemical potential.
At this point the spectrum is gapless and,
as shown in Fig. \ref{fig:Chernold} for the unitary case, the Chern number of the occupied bands
changes.
The topological transition does not depend on $\alpha$ and the results in Fig. \ref{fig:Chernold} are shown for
the particular case of $\alpha=1$.
It turns out that for the non-unitary case depicted in Fig. \ref{fig:hallsold} the spectrum is always
gapless due to the lack of dependence of the pairing function on $k_y$.
Therefore, the expression for the Chern number suffers from numerical instability.
Considering a non-unitary pairing of the form $d_x=d \sin k_y, d_y=i d \sin k_x, d_z=0$
the Hall conductivity is similar to that obtained in Fig. \ref{fig:hallsold} and the change in the
Chern number is also similar to the one shown in Fig. \ref{fig:Chernold} for the unitary case.
In the normal phase the system is topologically
trivial and the Chern number is zero throughout the space of parameters of the chemical potential and the
magnetization.
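A minimal sketch of the link-variable evaluation of the Chern number, Eq.~\eqref{fukui}, is given below. It is written for a generic two-band Bloch Hamiltonian and tested on the standard model $\boldsymbol h=(\sin k_x,\sin k_y,\,u_0+\cos k_x+\cos k_y)$, which is a textbook benchmark rather than one of the BdG Hamiltonians of this paper; the same plaquette construction extends to the occupied bands of the $4\times 4$ problem:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def chern_number(hk, N=48):
    """Chern number of the lower band of a 2x2 Bloch Hamiltonian hk(kx, ky),
    evaluated from gauge-invariant link variables on a discretized BZ."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hk(kx, ky))
            u[i, j] = vecs[:, 0]          # lower-band eigenvector
    flux = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # product of link variables around one plaquette
            U = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            flux += np.angle(U)
    return round(flux / (2 * np.pi))

def two_band(u0):
    """Benchmark model h = (sin kx, sin ky, u0 + cos kx + cos ky) . sigma."""
    def hk(kx, ky):
        return (np.sin(kx) * sx + np.sin(ky) * sy
                + (u0 + np.cos(kx) + np.cos(ky)) * sz)
    return hk

assert chern_number(two_band(3.0)) == 0   # trivial phase, |u0| > 2
```

The plaquette fluxes sum to an exact integer for any gapped spectrum, which is why the Chern number becomes numerically unstable when the gap closes, as in the non-unitary example above.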
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig2.pdf}
\par\end{centering}
\caption{\label{fig:Chernold}
Chern number for the unitary case of Fig. \ref{fig:hallsold} as a function of magnetization for $d=1, \epsilon_F=-1$ and $\alpha=1$.
}
\end{figure}
\subsection{Strong spin-orbit coupling}
For strong spin-orbit coupling it has been shown that it is more favorable that the pairing
vector $\boldsymbol d$ aligns with the spin-orbit coupling:
\begin{equation}
\boldsymbol d(\boldsymbol k) = d (\sin k_y, -\sin k_x)\,.
\end{equation}
As a consequence, the critical temperature associated with this type of pairing is higher \cite{Sigrist2}.
There is a rich sequence of topological transitions
as a function of the chemical potential,
spin-orbit coupling and magnetization \cite{Sato}.
In general this problem involves solving for the eigenvalues
of the $4\times 4$ matrix in Eq.~(\ref{bdg1}). The calculation of the Chern number of each band is performed
using the eigenfunctions of this $4\times 4$ matrix in Eq.~\eqref{fukui}.
Since the gap must close at the
topological transitions,
the location of these transitions may be determined by looking at the gapless $\boldsymbol k$
points \cite{Sato} satisfying Eq.~(\ref{gapless}).
The location of the transitions
and the associated gapless points in the spectrum have been obtained before \cite{Sato}. It turns out that
in each topological phase, the Hamiltonian can be continuously deformed, taking
$\alpha, \Delta_s \rightarrow 0$, without closing the gap.
The problem then simplifies
since the original $4\times 4$ Hamiltonian has been deformed
to two $2\times 2$ matrices and the Chern number may be
calculated as in Eq.~\eqref{chern}.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig3a.png}
\includegraphics[width=0.8\columnwidth]{fig3b.png}
\par\end{centering}
\caption{\label{fig:hallstrong}(color online). Chern number as a function of chemical potential and magnetization and
Hall conductivity for the case of strong spin-orbit coupling. The parameters used are $d=0.6, \Delta_s=0.1, \alpha=0.6$.
}
\end{figure}
In Fig.~\ref{fig:hallstrong}
we show the results for the Chern number of the occupied bands as a function of the chemical potential and
magnetization. There are various transitions that correspond to the closing and opening of gaps in the
spectrum. In the same figure we show the results for the Hall conductivity for the same region of parameters.
Even though the Hall conductivity is a continuous, smooth function, there are clearly local maxima and minima
that can be associated with the points where a topological transition occurs.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{fig4a.pdf}
\includegraphics[width=0.9\columnwidth]{fig4b.pdf}
\includegraphics[width=0.9\columnwidth]{fig4c.pdf}
\par\end{centering}
\caption{\label{fig:hallsder}(color online). Chern number, Hall conductivity and its first and second derivatives
as a function
of magnetization and as a function of chemical potential for strong spin-orbit coupling.
The Hall conductivity and its first derivative are multiplied by a factor of $10$ for
better visualization.}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig5a.png}
\includegraphics[width=0.8\columnwidth]{fig5b.png}
\par\end{centering}
\caption{\label{fig:hallss}(color online). Chern number and Hall conductivity for $s$-wave pairing and zero $p$-wave
pairing ($d=0, \Delta_s=0.5, \alpha=0.6$).
}
\end{figure}
In the case of a $\mathbb{Z}$~topological insulator, a topological transition modifies the Chern number
and the value of the Hall conductivity
which, therefore, exhibits a clear signature of the transition.
In the case of a $\mathbb{Z}$~topological superconductor there is, however,
no discontinuity of $\sigma_{xy}$ but its second derivative signals the transitions sharply.
This is well illustrated in Fig. \ref{fig:hallsder} where we
show cuts at constant chemical potential
as a function of the magnetization or fixing the magnetization and changing the chemical potential.
The results for the Chern number
clearly indicate the topological transitions either as
a function of the magnetization or chemical potential.
The behavior of the Hall conductivity
correlates with these transitions.
As expected from general considerations, if a transition occurs between Chern
numbers of different signs, the Hall conductivity changes sign accordingly.
Here we are interested in finding a signature of the change in the Chern number. This can be achieved by
looking
at the derivatives of the Hall conductivity. At the transitions the derivative behaves in a way qualitatively
similar to the case of a $\mathbb{Z}$~topological insulator: if the Chern number increases across the transition the
first derivative of the Hall conductivity is positive and if the Chern number decreases the derivative is negative.
The change
is small and the features in the first derivative are also small; therefore, we have multiplied the
Hall conductivity and its first derivative by a factor of $10$. A much stronger
signal is provided by the second derivative.
At a transition
where the Chern number changes,
the second derivative exhibits two close peaks: a negative peak followed by a positive one
when the Chern number increases (and vice-versa if the Chern number decreases).
In the case of
a topological insulator, the derivative is a Dirac delta
function and the second derivative is the derivative of a Dirac delta function. In the superconductor these
delta functions are smeared but are still
clear evidence of the location of the transition and, moreover, of the
change in the topological number.
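The smearing of the delta functions can be illustrated on a toy profile: a unit step broadened over a width $w$ (a $\tanh$ profile is assumed here purely for illustration and is not the actual $\sigma_{xy}$ curve). Its first derivative is a smeared delta function and its second derivative shows the two close peaks of opposite sign on either side of the transition:

```python
import numpy as np

def second_derivative_peaks(width, x0=0.0, n=2001):
    """Second derivative of a unit step at x0 smoothed over `width`.

    Returns the positions of the maximum and minimum of f'' and f'' itself.
    """
    x = np.linspace(x0 - 10 * width, x0 + 10 * width, n)
    f = 0.5 * (1 + np.tanh((x - x0) / width))      # smoothed unit step
    d2 = np.gradient(np.gradient(f, x), x)
    return x[np.argmax(d2)], x[np.argmin(d2)], d2

x_max, x_min, d2 = second_derivative_peaks(0.05)
# two close peaks of opposite sign straddle the transition point
assert (x_max - 0.0) * (x_min - 0.0) < 0
assert d2.max() > 0 > d2.min()
```

As the width shrinks the two peaks approach the transition point and grow, recovering the derivative of a Dirac delta function in the sharp limit.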
We may also consider the case where
there is only $s$-wave pairing and no $p$-wave pairing. The results for the Chern number and the Hall
conductivity for the same region of parameters are shown in Fig. \ref{fig:hallss}.
As shown before, if the magnetization vanishes, and the $s$-wave component is larger than the $p$-wave component,
the phase is topologically trivial.\cite{Sato}
There are non-trivial
phases that arise due to the presence of the magnetization and the consequent breaking of TRS.\cite{SatoPRL09,SauPRL10}
Note that there is a finite region around zero magnetization in which the Chern number vanishes, as mentioned.
As above, the Hall conductivity clearly shows the location of the various transitions between the Chern
numbers, and an analysis of the Hall conductivity and its derivatives similar to the one performed for the
$p$-wave case signals the various transitions in the same way.
The same holds for weak spin-orbit coupling. The topological transitions are also clearly detected
by the derivatives of the Hall conductivity.
\section{Edge states}
Due to the bulk-edge correspondence, complementary information on the topological phases and
transitions may be obtained by analyzing the edge states. We consider a strip geometry
of transverse
width $N_y$ and apply periodic boundary conditions along the longitudinal direction, $x$.
We write
\begin{equation}
\psi_{k_x,k_y,\sigma} = \frac{1}{\sqrt{N_y}} \sum_{j_y} e^{-i k_y j_y} \psi_{k_x,j_y,\sigma}\,,
\label{operators}
\end{equation}
and rewrite the Hamiltonian matrix in terms of
the operators (\ref{operators}) as
\begin{eqnarray}
H = \sum_{k_x} \sum_{j_y}
& & \left(\begin{array}{cccc}
\psi_{k_x,j_y,\uparrow}^{\dagger} & \psi_{k_x,j_y,\downarrow}^{\dagger} &
\psi_{-k_x,j_y,\uparrow} & \psi_{-k_x,j_y,\downarrow}
\end{array}\right) \nonumber \\
& & \hat{H}_{k_x,j_y}
\left(\begin{array}{c}
\psi_{k_x,j_y,\uparrow} \\
\psi_{k_x,j_y,\downarrow} \\
\psi_{-k_x,j_y,\uparrow}^{\dagger} \\
\psi_{-k_x,j_y,\downarrow}^{\dagger} \\
\end{array}\right)
\end{eqnarray}
The operator $\hat{H}_{k_x,j_y}$ reads
\begin{widetext}
\begin{equation}
\left(\begin{array}{cccc}
-2 t \cos k_x -M_z-\epsilon_F-t \eta_+ & i\alpha \sin k_x +\frac{\alpha}{2i} \eta_- &
-i d \sin k_x -\frac{d}{2i} \eta_- & \Delta_s \\
-i \alpha \sin k_x +\frac{\alpha}{2i} \eta_- & -2 t \cos k_x +M_z -\epsilon_F -t \eta_+ &
-\Delta_s & -i d \sin k_x +\frac{d}{2i} \eta_- \\
i d \sin k_x -\frac{d}{2i} \eta_- & -\Delta_s &
2 t \cos k_x +M_z+\epsilon_F+t \eta_+ & -i\alpha \sin k_x +\frac{\alpha}{2i} \eta_- \\
\Delta_s & i d \sin k_x +\frac{d}{2i} \eta_- &
i \alpha \sin k_x +\frac{\alpha}{2i} \eta_- & 2 t \cos k_x -M_z +\epsilon_F +t \eta_+ \\
\end{array}\right)
\end{equation}
\end{widetext}
where $\psi_{j_y}^{\dagger} \eta_{\pm} \psi_{j_y} = \psi_{j_y}^{\dagger} \psi_{j_y+1} \pm \psi_{j_y+1}^{\dagger} \psi_{j_y}$.
The diagonalization of this Hamiltonian involves the solution of a $4 N_y \times 4 N_y$ eigenvalue problem.
The energy states include states in the bulk and states along the edges.
\subsection{Strong spin-orbit coupling}
In the case of strong spin-orbit coupling with $M_z=0$ there is no TRS breaking and the system belongs
to the symmetry class DIII.\cite{LudwigAIP,LudwigNJP}
In the $s$-wave case there is only the bulk
gap and no gapless (edge) states. The system is in a topologically trivial phase.
In the case of $p$-wave
pairing, even though the Chern number vanishes, there are gapless edge states \cite{Sato}.
The system is in a $\mathbb{Z}_2$ topological phase. The gapless edge states have a
twofold Kramers degeneracy and
two counterpropagating edge modes
give opposite contributions to the total Chern number, $C=0$.
This is a similar situation to that
in the spin Hall effect, where, even though the charge current vanishes,
there is a spin current along the edges.
In the case where
there is a mixture of $s$- and $p$-wave components and
the amplitude of the $p$-wave pairing is larger than that
of the $s$-wave component, there are edge states and a topologically nontrivial phase.
Because of spin-momentum locking
there is no backscattering and these states are topologically protected from non-magnetic
impurities.
As the magnetization is turned on TRS is broken
and the system's symmetry class changes to D.\cite{LudwigAIP,LudwigNJP}
For small magnetization the $\mathbb{Z}$~topological superconductor is in a trivial phase
with Chern number $C=0$. A finite magnetization is then necessary to cause a topological
phase transition to a phase with non-zero Chern number.\cite{Sato}
This happens both for the
$p$-wave and the $s$-wave case.
The sequence of Chern numbers is clearly correlated with the number of pairs
of edge states as shown in Ref. \onlinecite{Sato}.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig6.png}
\par\end{centering}
\caption{\label{fig:edges6}Gapless edge modes for the unitary case of strong spin-orbit coupling
in the zero Chern number phase, for different values of the spin-orbit coupling $\alpha$.
Here $\epsilon_F=-3,~d=0.6,~\Delta=0.1,~M_z=0.5$.
}
\end{figure}
In Fig. \ref{fig:edges6} we show the edge modes as a function of spin-orbit coupling for the case of
strong spin-orbit coupling ($\boldsymbol{d} \parallel \boldsymbol{s}$)
and small magnetization, when the system is in the $C=0$ phase. The spin-orbit coupling does
not change the Chern number since it does not close the bulk gap.
It is interesting to note that even though the system is in a $C=0$ phase the
number of edge states is two; the same as that in the parent $\mathbb{Z}_2$ phase,
when $M_z = 0$.\cite{Sato}
For $M_z \neq 0$, however, TRS is broken and these edge states are not topologically protected against
(any type of) disorder. In this sense the system is in a trivial phase, in accordance with
the Chern number $C=0$. Nevertheless, in the clean limit these edge modes could be detected.
The presence of edge modes induced by bulk topology can also be shown
using dimensional reduction and thereby calculating the winding number.\cite{Wen}
For $k_y=0$ or $\pi$, the Hamiltonian $H({\boldsymbol k})$ has the chiral symmetry:
\begin{equation}
\Gamma H({\boldsymbol k}) \Gamma^\dagger = - H({\boldsymbol k})
\end{equation}
with $\Gamma = \tau_x\otimes\sigma_0$, where $\sigma_0$ is the identity in spin space
and $\tau_x$ acts on the particle-hole space.
The operator that diagonalizes $\Gamma$ is \cite{Schnyder,Tewari}
\begin{equation}
T= \sigma_0 \otimes e^{-i\frac{\pi}{4} \tau_y}\,,
\label{eq:T}
\end{equation}
and the Hamiltonian can then be brought to the off-diagonal form:
\begin{equation}
T H({\boldsymbol k}) T^{\dagger} =
\left(\begin{array}{cc}
0 & q({\boldsymbol k}) \\
q^{\dagger}({\boldsymbol k}) & 0
\end{array}\right)\,,
\end{equation}
if $k_y=0,\pi$ and $d_z=0$ where
\begin{eqnarray}
q({\boldsymbol k}) = \nonumber \\
\left(\begin{array}{cc}
\epsilon_{{\boldsymbol k}}-\epsilon_F -M_z+id \sin k_x & i\alpha \sin k_x-\Delta_s \\
-i\alpha \sin k_x+\Delta_s & \epsilon_{{\boldsymbol k}}-\epsilon_F +M_z+id \sin k_x
\end{array}\right)\,. \nonumber \\
\label{eq:qk}
\end{eqnarray}
The winding number is then defined as
\begin{eqnarray}
I(k_y) = \nonumber \\
\frac{1}{4\pi i} \int_{-\pi}^{\pi} dk_x Tr [ q^{-1}({\boldsymbol k}) \partial_{k_x} q({\boldsymbol k}) -(q^{\dagger})^{-1}({\boldsymbol k}) \partial_{k_x}
q^{\dagger}({\boldsymbol k}) ]\,. \nonumber \\
(k_y=0,\pi) \nonumber\\
\end{eqnarray}
Physically, a nonzero $I(k_y)$ means that if the system is infinite along the $y$ direction and finite along $x$,
there will be edge states with $k_y=0$ or $\pi$.\cite{yakovenko}
The calculation of the winding number gives the number of gapless edge modes both when the Chern number vanishes
and when the Chern number is finite.\cite{Sato}
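By Jacobi's formula, $\mathrm{Tr}[q^{-1}\partial_{k_x}q]=\partial_{k_x}\ln\det q$, so $I(k_y)$ reduces to the winding of $\det q(\boldsymbol k)$ around the origin as $k_x$ sweeps the Brillouin zone. A sketch using the matrix $q$ of Eq.~\eqref{eq:qk} (the dispersion $\epsilon_{\boldsymbol k}=-2t(\cos k_x+\cos k_y)$ with $t=1$ is assumed):

```python
import numpy as np

def winding_det_q(eps_F, d, alpha, Delta_s, Mz, ky, t=1.0, N=4000):
    """Winding number I(ky), evaluated as the winding of det q(kx) around 0.

    The diagonal entries of the 2x2 block q of Eq. (qk) are A -+ Mz with
    A = eps_k - eps_F + i d sin(kx); the off-diagonal entries are +-B with
    B = i alpha sin(kx) - Delta_s, so det q = A^2 - Mz^2 + B^2.
    """
    kx = np.linspace(-np.pi, np.pi, N, endpoint=False)
    eps_k = -2 * t * (np.cos(kx) + np.cos(ky))
    A = eps_k - eps_F + 1j * d * np.sin(kx)
    B = 1j * alpha * np.sin(kx) - Delta_s
    det_q = (A - Mz) * (A + Mz) + B * B
    # accumulate the wrapped phase differences around the closed kx loop
    det_closed = np.append(det_q, det_q[0])
    dphi = np.angle(det_closed[1:] / det_closed[:-1])
    return int(round(dphi.sum() / (2 * np.pi)))

# a band far below the Fermi level is topologically trivial: I(0) = 0
assert winding_det_q(-10.0, 0.6, 0.6, 0.1, 0.5, 0.0) == 0
```

The result is a well-defined integer as long as $\det q$ has no zeros on the $k_x$ loop, i.e. as long as the chiral-symmetric lines $k_y=0,\pi$ are gapped.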
\subsection{Weak spin-orbit coupling}
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig7a.png}
\includegraphics[width=0.8\columnwidth]{fig7b.png}
\par\end{centering}
\caption{\label{fig:edges7}Gapless edge modes in the unitary case for
zero Chern number (top) and Chern number equal to $2$ (bottom).
Here $\epsilon_F=-1,d=1$ and $M_z=0.5$ (top), $M_z=1.2$ (bottom).}
\end{figure}
If the spin-orbit coupling is not strong, so that the pairing vector $\boldsymbol d$ is not aligned with the spin-orbit
vector, as in the unitary and non-unitary cases considered in Sec.~\ref{sec:Csigma},
the connection between the Chern number and the number of gapless edge states is less
transparent. Fig.~\ref{fig:edges7} shows the low-lying energy modes for the unitary case previously
considered for different values of the spin orbit coupling, $\alpha$. The top panel corresponds to a
case where the Chern number vanishes while the bottom panel
to a non-vanishing Chern number. There is a variety of gapless edge states
for both the $C=0$ and $C=2$ cases.
Even though changing $\alpha$ should not change the topology, various
gapless states appear that seem not to follow the bulk-edge correspondence. However,
in the unitary case Eq.~\eqref{eq:T} still transforms
the Hamiltonian into an off-diagonal form similar to that in Eq.~\eqref{eq:qk},
so that the winding number is still well defined.
A calculation of the winding
number shows that the number of gapless edge modes is actually independent of $\alpha$.
In the top panel
($C=0$) we get that $I(0)=2$ and $I(\pi)=0$ and in the bottom panel we obtain that $I(0)=1$ and $I(\pi)=1$
in agreement with the value $C=2$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{fig8a.png}
\includegraphics[width=0.9\columnwidth]{fig8b.png}
\par\end{centering}
\caption{\label{fig:edges8}(color online). Gapless edge modes for unitary case for zero Chern number (top) and
Chern number equal to $2$ (bottom).
Here $\epsilon_F=-1,d=1$ and $M_z=0.5,\alpha=2$ (top), and $M_z=1.2,\alpha=3$ (bottom).}
\end{figure}
The bulk-edge correspondence is further elucidated in Fig.~\ref{fig:edges8}.
Careful analysis shows that some of the
gapless edge states do not originate
from bulk topology and that the number of topologically
induced edge states (given either by
the winding number or the Chern number in the case of $C\neq 0$) is consistent. Only the bands of edge states that connect
the upper and lower bulk bands, i.e. connecting open and filled circles in Fig.~\ref{fig:edges8},
can be traced back to the nontrivial bulk topology.\cite{Hatsugai2} Denoting the two edges of the system as $R$ and $L$,
we see that
for $C=0$ the number of propagating states at each edge is always the same as the number of counterpropagating ones.
For $C=2$, on the other hand, the difference between propagating and counterpropagating states is always~2 at each edge.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig9a.pdf}
\includegraphics[width=0.8\columnwidth]{fig9b.png}
\par\end{centering}
\caption{\label{fig:nont}(color online). Chern number and Hall conductivity
in the case where the normal system is topologically nontrivial.
The parameters are: $t_1=1,t_2=1.1,\Delta_s=0.1,d=0.6,\alpha=0.6$.
In the lower panel $\epsilon_F=1.92$. }
\end{figure}
\section{Nontrivial topology in normal phase}
The nontrivial topology of the bands in the superconducting phases above
stems from the mixture of the particle and hole
excitations since the normal phase is non-topological.
We may as well
consider a system that is nontrivial in the normal phase and add superconductivity either self-consistently
or via a proximity effect. This has been proposed before in various contexts \cite{Fu}.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{fig10a.png}
\includegraphics[width=0.8\columnwidth]{fig10b.png}
\par\end{centering}
\caption{\label{fig:nont2}Gapless edge modes for
the same system as in Figure \ref{fig:nont} with $\epsilon_F=1.92$
and: $M_z=1$, $C=-3$ (top);
$M_z=4$, $C=-1$ (bottom).
}
\end{figure}
Considering a Hamiltonian of the form of Eq. \eqref{H} and selecting
\begin{eqnarray}
h_x &=& \alpha\sin k_y\,, \qquad h_y = -\alpha\sin k_x \,, \nonumber\\
h_z &=& 2t_1 \left( \cos k_x + \cos k_y \right) + 4 t_2 \cos k_x \cos k_y\,,
\end{eqnarray}
leads to nontrivial phases as the hoppings $t_1$ and $t_2$ are varied \cite{beijing}.
Results for the edge states, the Chern number and the Hall conductivity are shown in Figs. \ref{fig:nont} and
\ref{fig:nont2}.
In the normal phase TRS
is broken since $h_z$ is even in the momentum. The
Chern number
is
$C=2$ if $|t_1|<|t_2|$,
and $C=1$ if $|t_1|>|t_2|$.
As the system becomes superconducting the
nontrivial topology remains even though the Chern number changes.
The nontrivial topology of the normal
state bands lends some robustness to the topological superconducting phase.
Indeed, the Chern number remains invariant in large portions of the parameter space.
In the first panel of Fig. \ref{fig:nont} we show cuts of the Chern number at constant chemical potential,
$\epsilon_F=-1.92,0,1.92$, as a function of $M_z$. For negative chemical potential the Chern number is $C=-3$
except for some narrow regions where $C=5$. For zero and positive chemical potential there is a single
topological transition from $C=-3$ to $C=-1$.
The results for the edge states in Fig.~\ref{fig:nont2}
show a clear correspondence between the Chern number and the number of gapless modes.
Note that in this case there is strong spin-orbit coupling.
The difference to the previous Sections is the
non-trivial topology of the normal phase.
The superconducting pairing then changes the topology,
as shown by the change of Chern number entering the superconducting phase.
In this case the Hall conductivity
also signals the transition at positive values of the chemical potential
and varies smoothly in
the narrow region where $C=5$.
\section{Conclusions}
We have shown that the Hall conductivity and its derivatives may be used to detect
the topological transitions that occur in $\mathbb{Z}$~topological superconductors. In a topological insulator the Hall conductivity
is quantized and proportional to the Chern number and, therefore, its discontinuous changes across a transition
can be used to detect and characterize the transition. Even though the Hall conductivity is not quantized in
a superconductor \cite{pdss1999} it may also be used to study these transitions. This provides a bulk detection method of these
transitions that is complementary to the detection of the gapless edge states associated with these nontrivial
topological phases.
In the case of strong spin-orbit coupling there is a simple correspondence between the number of gapless
edge states and the Chern number, both for trivial and nontrivial normal state bands.
However, in the case of weak spin orbit coupling where the pairing vector, $\boldsymbol{d}$, is not parallel
to the spin-orbit vector, $\boldsymbol{s}$,
extra unprotected gapless modes appear. The bulk-edge correspondence is preserved as evidenced by
the calculation of the winding number.
\section{Introduction}
The notion of quantum isomorphism of graphs was first introduced by Atserias et al in \cite{AMRSSV}. Two graphs are quantum isomorphic if there exists a perfect quantum strategy for the so-called isomorphism game, a nonlocal game in which Alice and Bob want to convince a referee that they know an isomorphism between two graphs $G_1$ and $G_2$. Interestingly, Atserias et al constructed pairs of graphs for which there are perfect quantum strategies for the isomorphism game, but no perfect classical strategies. This shows that there are graphs that are quantum isomorphic, but not isomorphic.
It is well-known that two graphs are isomorphic if there exists a permutation matrix interchanging the adjacency matrices of the graphs. It was shown in \cite{AMRSSV} that two graphs are quantum isomorphic if and only if there exists a quantum permutation matrix $u$ interchanging the adjacency matrices of the two graphs. Here a quantum permutation matrix is a matrix $u\in M_n(\mathcal{A})$ with entries in some unital $C^*$-algebra $\mathcal{A}$ fulfilling $u_{ij}=u_{ij}^*=u_{ij}^2$ and $\sum_k u_{ik}=1_{\mathcal{A}}=\sum_k u_{ki}$, which was first studied in terms of quantum permutation groups by Wang \cite{WanSn}.
A completely combinatorial description of quantum isomorphism was given by Man\v{c}inska and Roberson in \cite{MRplanar}. They showed that quantum isomorphism is equivalent to having the same homomorphism counts from all planar graphs.
Despite having many equivalent formulations, only a few constructions of quantum isomorphic, non-isomorphic graphs are known. In \cite{AMRSSV}, the graphs were constructed from quantum solutions of binary constraint systems. Roberson and the author (\cite{RS}) used colored versions of the graphs constructed in \cite{AMRSSV} and a decoloring procedure to obtain new quantum isomorphic, non-isomorphic graphs, but those still come from quantum solutions of binary constraint systems.
A more general approach was presented by Musto, Reutter and Verdon in \cite{MRV}. In the article, they obtain quantum isomorphic graphs from a central type subgroup of the automorphism group of one of the graphs, having coisotropic stabilizers. They could find the graphs of \cite{AMRSSV} in this way, but they did not construct new examples. Recently, Chan and Martin \cite{CM} and Gromada \cite{G} have shown that Hadamard graphs of the same order are quantum isomorphic.
In this article, we will construct a new pair of quantum isomorphic, non-isomorphic graphs from a subgroup of the automorphism group of a graph. This subgroup is associated to $3$-tensor Pauli matrices and is a central type subgroup with coisotropic stabilizers. We will explicitly construct the quantum permutation matrix interchanging the adjacency matrices from the automorphisms in the subgroup. The quantum isomorphic graphs are both strongly regular with parameters $(120, 63, 30, 36)$ and known from the graph theory literature. The first graph is the orthogonality graph of the lines in the $E_8$ root system, see also \cite[Section 10.39]{BVM}. The second graph can be obtained from independent sets of the folded halved $8$-cube graph. Its complement was first discovered by Brouwer, Ivanov and Klin in \cite{BIK} as a graph from a quadric with a hole.
One can obtain one graph from the other by switching the underlying partial geometry, as shown by Mathon and Street \cite{MS}. Note that our first graph is isomorphic to $G_1$ and the second graph is isomorphic to $G_2$ -- $G_5$ in \cite[Table 2.2]{MS}.
We will also show that using Godsil-McKay switching on both graphs with the same, appropriate vertex partition preserves quantum isomorphism. We get more examples of quantum isomorphic, but non-isomorphic strongly regular graphs in this way.
The article is structured as follows. In Section \ref{secprelim}, we recall basic notions of graph theory and give the definition of quantum isomorphism of graphs. In Section \ref{secpairqiso}, we will first define the orthogonality graph of the lines in the $E_8$ root system and study a specific subgroup of the automorphism group. Then, we look at a graph coming from independent sets of the folded halved $8$-cube graph and give an alternative description of this graph. Using this description, we show that the two mentioned graphs are quantum isomorphic. Finally, we obtain more quantum isomorphic, non-isomorphic strongly regular graphs using Godsil-McKay switching in Section \ref{secswitching}.
\section{Preliminaries}\label{secprelim}
We start by recalling basic notions of graph theory. In this article, a graph $G$ is always finite and simple, that is, it has a finite vertex set $V(G)$ and has no multiple edges or loops. Thus, the edge set $E(G)$ is a subset of $V(G)\times V(G)$, where $(i,j)\in E(G)$ implies $(j,i)\in E(G)$. The \emph{adjacency matrix} $A_G$ of a graph $G$ is the matrix with entries $(A_G)_{ij}=1$ if $(i,j)\in E(G)$ and $(A_G)_{ij}=0$ otherwise. The complement of a graph $G$, which we denote by $\overline{G}$, is the graph with the same vertex set as $G$ and $(i,j)\in E(\overline{G})$ if and only if $(i,j)\notin E(G)$.
A \emph{clique} is a subset $C\subseteq V(G)$ of vertices such that all vertices in $C$ are adjacent. The \emph{clique number} of $G$ is the size of a largest clique in $G$. An \emph{independent set} is a subset $I\subseteq V(G)$ of vertices such that all vertices in $I$ are non-adjacent. The \emph{independence number} of $G$ is the size of a largest independent set in $G$.
The vertex $j$ is called \emph{neighbor} of $i$ if $(i,j)\in E(G)$. A graph is \emph{$k$-regular} if every vertex has $k$ neighbors. A \emph{path} of length $m$ joining two vertices $i,k \in V(G)$ is a sequence of vertices $a_0, a_1, \dots, a_m$ with $i=a_0$, $k=a_m$ such that $(a_n,a_{n+1})\in E(G)$ for $n\in \{0, \dots, m-1\}$. The \emph{distance} $d(i,k)$ between vertices $i,k\in V(G)$ denotes the length of a shortest path joining $i$ and $k$.
\begin{definition}
Let $G$ be a $k$-regular graph with $n$ vertices. We say that the graph $G$ is \emph{strongly regular} if there exist $\lambda, \mu\in \mathbb{N}_0$ such that
\begin{itemize}
\item[(i)] adjacent vertices have $\lambda$ common neighbors,
\item[(ii)] non-adjacent vertices have $\mu$ common neighbors.
\end{itemize}
We then say that $G$ is strongly regular with parameters $(n, k, \lambda, \mu)$.
\end{definition}
A \emph{graph automorphism} is a bijection $\sigma:V(G)\to V(G)$ such that $(i,j)\in E(G)$ if and only if $(\sigma(i), \sigma(j))\in E(G)$. The set of graph automorphisms forms a group, the \emph{automorphism group} $\aut(G)$. For a subgroup $K\subseteq \aut(G)$ and a vertex $v\in V(G)$, we call $\mathrm{Stab}_K(v)=\{\sigma \in K \,|\, \sigma(v)=v\}$ the \emph{stabilizer subgroup} of $K$ with respect to $v$.
An \emph{isomorphism} between graphs $G_1$ and $G_2$ is a bijection $\varphi:V(G_1) \to V(G_2)$ such that $(i,j)\in E(G_1)$ if and only if $(\varphi(i), \varphi(j))\in E(G_2)$. It is easy to see that there exists an isomorphism between the graphs $G_1$ and $G_2$ if and only if there exists a permutation matrix $P_{\varphi}$ such that $A_{G_1}P_{\varphi}=P_{\varphi}A_{G_2}$.
\begin{definition}[\cite{WanSn}]
Let $\mathcal{A}$ be a unital $C^*$-algebra. A matrix $u\in M_n(\mathcal{A})$ is called a \emph{quantum permutation matrix} or \emph{magic unitary} if the entries $u_{ij}\in \mathcal{A}$ are projections, i.e. $u_{ij}=u_{ij}^*=u_{ij}^2$ for all $i,j$, and
\begin{align*}
\sum_k u_{ik}=1_{\mathcal{A}}=\sum_k u_{ki}
\end{align*}
for all $i$.
\end{definition}
If $\mathcal{A}=\mathbb{C}$, then $u$ is a quantum permutation matrix if and only if it is a permutation matrix. For the quantum permutation matrices appearing in this article, we will have $\mathcal{A}=M_8(\mathbb{C})$.
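For a small concrete example with noncommutative entries, note that for any projection $P\in M_2(\mathbb{C})$, the matrix $\begin{pmatrix}P & 1-P\\ 1-P & P\end{pmatrix}$ is a quantum permutation matrix over $\mathcal{A}=M_2(\mathbb{C})$. The sketch below (Python, with an arbitrary illustrative choice of $P$, not tied to the later constructions) verifies the defining relations numerically:

```python
import numpy as np

# An arbitrary nontrivial projection P in M_2(C); this choice is illustrative only.
v = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(v, v)
I = np.eye(2)
u = [[P, I - P], [I - P, P]]  # a 2x2 magic unitary over A = M_2(C)

def is_projection(p):
    return np.allclose(p, p.conj().T) and np.allclose(p @ p, p)

assert all(is_projection(u[i][j]) for i in range(2) for j in range(2))
for i in range(2):
    assert np.allclose(u[i][0] + u[i][1], I)  # rows sum to 1_A
    assert np.allclose(u[0][i] + u[1][i], I)  # columns sum to 1_A
print("u is a quantum permutation matrix over M_2(C)")
```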
The concept of quantum isomorphism was first introduced in \cite{AMRSSV} as perfect quantum strategies of the isomorphism game. We will use an equivalent definition, established in \cite{LMR}.
\begin{definition}
Let $G_1$ and $G_2$ be graphs. We say that $G_1$ and $G_2$ are \emph{quantum isomorphic} if there exists a unital $C^*$-algebra $\mathcal{A}$ and a quantum permutation matrix $u\in M_n(\mathcal{A})$ such that $A_{G_1}u=uA_{G_2}$, which means $\sum_k(A_{G_1})_{ik}u_{kj}=\sum_k u_{ik}(A_{G_2})_{kj}$ for all $i\in V(G_1), j \in V(G_2)$.
\end{definition}
The following lemma gives an equivalent relation to $A_{G_1}u=uA_{G_2}$, see also \cite[Theorem 2.2 and Theorem 2.5]{LMR}. We give a proof similar to \cite[Proposition 2.1.3]{Sthesis} which gives the same equivalence for quantum automorphism groups.
\begin{lemma}\label{lemprod0}
Let $G_1$ and $G_2$ be graphs, $\mathcal{A}$ be a unital $C^*$-algebra and $u\in M_n(\mathcal{A})$ be a quantum permutation matrix. Then $A_{G_1}u=uA_{G_2}$ is equivalent to $u_{ij}u_{kl}=0$ if $(i,k)\notin E(G_1)$ and $(j,l)\in E(G_2)$ or vice versa.
\end{lemma}
\begin{proof}
Assume $A_{G_1}u=uA_{G_2}$ and let $(i,k)\notin E(G_1)$, $(j,l)\in E(G_2)$. From $A_{G_1}u=uA_{G_2}$, we get $\sum_{s:(i,s)\in E(G_1)}u_{sl}=\sum_{t:(t,l)\in E(G_2)} u_{it}$. Using this, we calculate
\begin{align*}
u_{ij}u_{kl}= u_{ij}\left(\sum_{t:(t,l)\in E(G_2)} u_{it}\right)u_{kl}=u_{ij}\left(\sum_{s:(i,s)\in E(G_1)}u_{sl}\right)u_{kl}=0.
\end{align*}
The first equality holds because we have $u_{ij}u_{it}=\delta_{jt}u_{ij}$ and $j$ is part of the sum since $(j,l)\in E(G_2)$. The last equality holds because $u_{sl}u_{kl}=\delta_{ks}u_{kl}$ and $k$ is not part of the sum since $(i,k)\notin E(G_1)$. One similarly gets $u_{ij}u_{kl}=0$ for $(i,k)\in E(G_1), (j,l)\notin E(G_2)$.
Now assume that we have $u_{ij}u_{kl}=0$ if $(i,k)\notin E(G_1)$ and $(j,l)\in E(G_2)$ or vice versa. Using this and $\sum_s u_{sl}=1_{\mathcal{A}}=\sum_t u_{it}$, we get
\begin{align*}
\sum_{t:(t,l)\in E(G_2)}u_{it}&=\left(\sum_{t:(t,l)\in E(G_2)}u_{it}\right)\left(\sum_{s:(i,s)\in E(G_1)}u_{sl}\right)\\
&=\sum_{s:(i,s)\in E(G_1)}\left(\sum_{t=1}^nu_{it}\right)u_{sl}\\
&=\sum_{s:(i,s)\in E(G_1)}u_{sl}
\end{align*}
for all $i,l$.
\end{proof}
Interestingly, quantum isomorphism of graphs is connected to homomorphism counts of planar graphs in the following way.
\begin{theorem}[\cite{MRplanar}]
Let $G_1$ and $G_2$ be graphs. Then $G_1$ and $G_2$ are quantum isomorphic if and only if they admit the same number of homomorphisms from any planar graph.
\end{theorem}
As mentioned in the introduction, only few constructions of quantum isomorphic, non-isomorphic graphs are known. In the next section, we obtain a new pair of quantum isomorphic, non-isomorphic graphs.
\section{A pair of quantum isomorphic strongly regular graphs}\label{secpairqiso}
In this section, we construct a new pair of quantum isomorphic, non-isomorphic graphs. These graphs are additionally strongly regular with parameters $(120, 63, 30, 36)$, giving us the first pair of quantum isomorphic strongly regular graphs. Both graphs are known from the graph theory literature. We first define the graphs, give some properties and alternative descriptions, and then prove that they are quantum isomorphic.
\subsection[Orthogonality graph]{The orthogonality graph of the lines in the $E_8$ root system}
\begin{definition}\label{defGE8}
The $E_8$ root system $\Psi_{E_8}$ consists of the following $240$ vectors in $\mathbb{R}^8$:
\begin{align}\label{rootsystemE8}
&\pm e_i \pm e_j \text{ for } 1 \leq i<j \leq 8,\qquad x=(x_1,\dots,x_8) \text{ for } x_i \in \{\pm 1\} \text{ and } \prod_{i=1}^8 x_i=1.
\end{align}
Let $G_{E_8}=(V(G_{E_8}), E(G_{E_8}))$ be the orthogonality graph of the vectors in \eqref{rootsystemE8}, where we identify vectors $x$ and $-x$. This means that $G_{E_8}$ is the graph with $V(G_{E_8})=\{v_x \,|\, x \in \Psi_{E_8}, x \equiv-x\}$ and $(v_x,v_y)\in E(G_{E_8})$ if and only if $\langle x, y\rangle=0$. This graph is strongly regular with parameters $(120, 63, 30, 36)$, see for example \cite[Section 10.39]{BVM}.
\end{definition}
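The graph $G_{E_8}$ is small enough to generate by brute force. The following sketch (Python; the representation of the vertices as sign-normalized integer tuples is ours) builds the $120$ vertices and confirms the parameters $(120, 63, 30, 36)$:

```python
import numpy as np
from itertools import combinations, product
from math import prod

# The 240 roots of E8: +-e_i +- e_j and the (+-1)-vectors with entry product +1.
roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))
roots += [s for s in product((1, -1), repeat=8) if prod(s) == 1]
assert len(roots) == 240

# Identify x and -x: normalize so that the first nonzero entry is positive.
def canon(t):
    return t if next(c for c in t if c) > 0 else tuple(-c for c in t)

verts = sorted({canon(t) for t in roots})
assert len(verts) == 120

# Adjacency = orthogonality (independent of the chosen sign representatives).
dot = lambda s, t: sum(a * b for a, b in zip(s, t))
A = np.array([[int(dot(s, t) == 0) for t in verts] for s in verts])

# Strong regularity: (A^2)_{ij} counts common neighbors of i and j.
A2 = A @ A
assert set(A.sum(axis=1)) == {63}
lam = {int(A2[i, j]) for i in range(120) for j in range(120) if i < j and A[i, j]}
mu = {int(A2[i, j]) for i in range(120) for j in range(120) if i < j and not A[i, j]}
print(lam, mu)  # {30} {36}
```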
Note that the graph $G_{E_8}$ is isomorphic to the graph $G_1$ appearing in \cite[Table 2.2]{MS}. We will now look at some automorphisms of $G_{E_8}$.
We denote by $I$, $X$, $Y$ and $Z$ the (real Pauli) matrices
\begin{align}\label{Pauli}
I=\begin{pmatrix}1&0\\0&1\end{pmatrix},\quad
X=\begin{pmatrix}0&1\\1&0\end{pmatrix},\quad Z=\begin{pmatrix}1&0\\0&-1\end{pmatrix} \quad\text{and } Y=XZ.
\end{align}
\begin{lemma}\label{Lsubgroup}
Let $I$, $X$, $Y$ and $Z$ be as above. For each $M:=M_1\otimes M_2 \otimes M_3$ with $M_i \in \{I,X,Y,Z\}$, the map $\sigma_M:V(G_{E_8})\to V(G_{E_8})$, $v_x \mapsto v_{Mx}$ is an automorphism of $G_{E_8}$. These automorphisms give rise to a subgroup $L \cong \mathbb{Z}_2^6$ of $\aut(G_{E_8})$.
\end{lemma}
\begin{proof}
Let $M:=M_1\otimes M_2 \otimes M_3$ for some $M_i \in \{I,X,Y,Z\}$. We first have to check that $\sigma_M$ is well-defined. For this, we have to show that $Mx\in \Psi_{E_8}$ for every $x \in \Psi_{E_8}$. Note that any matrix $M$ of this form is a product of the following six matrices:
\begin{align}
&X\otimes I\otimes I=\begin{pmatrix}&&I&0\\&&0&I\\I&0&&\\0&I&&\end{pmatrix},
&&Z\otimes I\otimes I=\begin{pmatrix}I&&&\\&I&&\\&&-I&\\&&&-I\end{pmatrix},\nonumber\\
&I\otimes X\otimes I=\begin{pmatrix}0&I&&\\I&0&&\\&&0&I\\&&I&0\end{pmatrix},
&&I\otimes Z\otimes I=\begin{pmatrix}I&&&\\&-I&&\\&&I&\\&&&-I\end{pmatrix}
,\nonumber\\
&I\otimes I\otimes X=\begin{pmatrix}X&&&\\&X&&\\&&X&\\&&&X\end{pmatrix},
&&I\otimes I\otimes Z=\begin{pmatrix}Z&&&\\&Z&&\\&&Z&\\&&&Z\end{pmatrix}.\label{Paulitensor}
\end{align}
Thus, the matrix $M$ is a signed permutation matrix flipping an even number of signs. Looking at \eqref{rootsystemE8}, we see that such signed permutations map every $x\in\Psi_{E_8}$ to an element of $\Psi_{E_8}$. Therefore, $\sigma_M$ is well-defined.
Now, we show that every $\sigma_M$ is a graph automorphism. We have the following relations on $X,Y,Z$:
\begin{align*}
X^2=I, X^*=X, Y^*=-Y, Y^2=-I, Z^2=I, Z^*=Z.
\end{align*}
Knowing that $M$ is a product of the matrices in \eqref{Paulitensor}, this yields $MM^*=M^*M=(I\otimes I\otimes I)$. From this, one deduces that $\sigma_M$ is a bijection. Since unitaries preserve inner products, we get $\langle x,y\rangle=0$ if and only if $\langle Mx,My \rangle=0$ and thus $(v_x,v_y)\in E(G_{E_8})$ if and only if $(v_{Mx}, v_{My})\in E(G_{E_8})$. This shows $\sigma_M\in \aut(G_{E_8})$.
It remains to show that we get a subgroup $L\cong \mathbb{Z}_2^6$ of the automorphism group. It is easy to see that $\sigma_M \sigma_N=\sigma_{MN}$. Also note that $\sigma_M=\sigma_{-M}$, since we identified $x$ and $-x$ in the vertex set of $G_{E_8}$. Thus, products and inverses are still of the form $\sigma_{N_1\otimes N_2\otimes N_3}$ for some $N_i \in \{I,X,Y,Z\}$. The group $L$ is generated by $\sigma_{X\otimes I\otimes I}, \sigma_{I\otimes X\otimes I},\sigma_{I\otimes I\otimes X},\sigma_{Z\otimes I\otimes I},\sigma_{I\otimes Z\otimes I},$ $\sigma_{I\otimes I\otimes Z}$. Since we have $\sigma_M=\sigma_{-M}$ and all matrices in \eqref{Paulitensor} either commute or anticommute, we get a commutative subgroup of $\aut(G_{E_8})$. Since we also have $\sigma_M^2=\mathrm{id}$ for all generators and they are independent, we obtain $L\cong \mathbb{Z}_2^6$.
\end{proof}
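The well-definedness step of the proof — each generator in \eqref{Paulitensor} is a signed permutation matrix with an even number of sign flips and hence preserves $\Psi_{E_8}$ — can also be confirmed by direct computation, as in the following sketch (Python; helper names are ours):

```python
import numpy as np
from itertools import combinations, product
from math import prod

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)
gens = [kron3(X, I2, I2), kron3(Z, I2, I2), kron3(I2, X, I2),
        kron3(I2, Z, I2), kron3(I2, I2, X), kron3(I2, I2, Z)]

# The E8 root system as a set of integer tuples.
roots = set()
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.add(tuple(v))
roots |= {s for s in product((1, -1), repeat=8) if prod(s) == 1}
assert len(roots) == 240

for M in gens:
    # Each generator is a signed permutation matrix with an even number of -1 entries ...
    assert all(sorted(abs(c) for c in col) == [0] * 7 + [1] for col in M.T)
    assert int((M == -1).sum()) % 2 == 0
    # ... and therefore maps every root to a root.
    assert all(tuple(int(c) for c in M @ np.array(r, dtype=float)) in roots for r in roots)
print("all six generators preserve the E8 root system")
```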
The next lemma shows that vertices in the same orbit of $L$ have the same stabilizer.
\begin{lemma}\label{stabeq}
If there exists $\sigma_M\in L$ with $\sigma_M(v_x)=v_y$, then $\mathrm{Stab}_L(v_x)=\mathrm{Stab}_L(v_y)$.
\end{lemma}
\begin{proof}
Let $\sigma_N \in \mathrm{Stab}_L(v_x)$. Then, we know $v_x=v_{Nx}$ and thus $x =\pm Nx$. Multiplying by $M$ from the left yields $Mx=\pm MNx$. Since $M$ and $N$ either commute or anti-commute, we get $Mx=\pm NMx$. This shows $y=\pm Ny$ and therefore $\sigma_N \in \mathrm{Stab}_L(v_y)$. Thus $\mathrm{Stab}_L(v_x)\subseteq \mathrm{Stab}_L(v_y)$. One similarly shows the other inclusion and we get $\mathrm{Stab}_L(v_x)= \mathrm{Stab}_L(v_y)$.
\end{proof}
Using the action of the subgroup $L\subseteq \aut(G_{E_8})$, we get a partition of the vertex set of $G_{E_8}$ into $15$ orbits.
\begin{lemma}
The action of $L$ on $V(G_{E_8})$ yields $15$ orbits of size $8$. Those partition the vertex set into $15$ cliques of size $8$.
\end{lemma}
\begin{proof}
Let $S\subseteq \{1,\dots, 8\}$. We denote by $x_S\in \mathbb{R}^8$ the vector with $(x_S)_i=-1$ if $i\in S$ and $(x_S)_i=1$ otherwise. By straightforward computation, we have the following orbits under $L$, where we just list the vectors $x$ associated to the vertices $v_x$ in the graph $G_{E_8}$:
\begin{align}
V_1:\quad &e_1 \pm e_2, e_3 \pm e_4, e_5 \pm e_6, e_7 \pm e_8,\nonumber\\
V_2:\quad &e_1 \pm e_3, e_2 \pm e_4, e_5 \pm e_7, e_6 \pm e_8,\nonumber\\
V_3:\quad &e_1 \pm e_4, e_2 \pm e_3, e_5 \pm e_8, e_6 \pm e_7,\nonumber\\
V_4:\quad &e_1 \pm e_5, e_2 \pm e_6, e_3 \pm e_7, e_4 \pm e_8,\nonumber\\
V_5:\quad &e_1 \pm e_6, e_2 \pm e_5, e_3 \pm e_8, e_4 \pm e_7,\nonumber\\
V_6:\quad &e_1 \pm e_7, e_2 \pm e_8, e_3 \pm e_5, e_4 \pm e_6,\nonumber\\
V_7:\quad &e_1 \pm e_8, e_2 \pm e_7, e_3 \pm e_6, e_4 \pm e_5,\nonumber\\
V_8:\quad &x_{\{1,2\}}, x_{\{3,4\}},x_{\{5,6\}},x_{\{7,8\}},x_{\{1,4,6,8\}}, x_{\{2,3,6,8\}},x_{\{2,4,5,8\}},x_{\{2,4,6,7\}},\nonumber\\
V_9:\quad &x_{\{1,3\}}, x_{\{2,4\}},x_{\{5,7\}},x_{\{6,8\}},x_{\{1,4,7,8\}}, x_{\{1,4,5,6\}},x_{\{1,2,6,7\}},x_{\{1,2,5,8\}},\nonumber\\
V_{10}:\quad &x_{\{1,4\}}, x_{\{2,3\}},x_{\{5,8\}},x_{\{6,7\}},x_{\{1,3,7,8\}}, x_{\{1,3,5,6\}},x_{\{1,2,5,7\}},x_{\{1,2,6,8\}},\nonumber\\
V_{11}:\quad &x_{\{1,5\}}, x_{\{2,6\}},x_{\{3,7\}},x_{\{4,8\}},x_{\{1,6,7,8\}}, x_{\{2,5,7,8\}},x_{\{4,5,6,7\}},x_{\{1,2,4,7\}},\nonumber\\
V_{12}:\quad &x_{\{1,6\}}, x_{\{2,5\}},x_{\{3,8\}},x_{\{4,7\}},x_{\{1,5,7,8\}}, x_{\{2,6,7,8\}},x_{\{3,5,6,7\}},x_{\{4,5,6,8\}},\nonumber\\
V_{13}:\quad &x_{\{1,7\}}, x_{\{2,8\}},x_{\{3,5\}},x_{\{4,6\}},x_{\{1,5,6,8\}}, x_{\{3,6,7,8\}},x_{\{2,5,6,7\}},x_{\{4,5,7,8\}},\nonumber\\
V_{14}:\quad &x_{\{1,8\}}, x_{\{2,7\}},x_{\{3,6\}},x_{\{4,5\}},x_{\{1,5,6,7\}}, x_{\{4,6,7,8\}},x_{\{2,5,6,8\}},x_{\{3,5,7,8\}},\nonumber\\
V_{15}:\quad &x_\emptyset, x_{\{5,6,7,8\}},x_{\{3,4,7,8\}},x_{\{2,4,6,8\}},x_{\{3,4,5,6\}}, x_{\{2,4,5,7\}},x_{\{2,3,6,7\}},x_{\{2,3,5,8\}}.\label{vertexpartition}
\end{align}
We see that the eight vectors in each orbit are pairwise orthogonal and thus form an orthogonal basis of $\mathbb{R}^8$. Hence the orbits partition $V(G_{E_8})$ into $15$ cliques of size $8$.
\end{proof}
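The orbit computation can be reproduced by machine. The following sketch (Python, with our sign normalization for the identification $x\equiv -x$) confirms that $L$ has $15$ orbits of size $8$, each consisting of pairwise orthogonal vectors:

```python
import numpy as np
from itertools import combinations, product
from math import prod

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)
gens = [kron3(X, I2, I2), kron3(Z, I2, I2), kron3(I2, X, I2),
        kron3(I2, Z, I2), kron3(I2, I2, X), kron3(I2, I2, Z)]

def canon(v):
    # identify x and -x: make the first nonzero entry positive
    t = tuple(int(round(c)) for c in v)
    return t if next(c for c in t if c) > 0 else tuple(-c for c in t)

roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))
roots += [s for s in product((1, -1), repeat=8) if prod(s) == 1]
verts = sorted({canon(t) for t in roots})

seen, orbits = set(), []
for v in verts:
    if v in seen:
        continue
    orb, stack = {v}, [v]
    while stack:  # close the orbit under the six generators of L
        t = stack.pop()
        for M in gens:
            u = canon(M @ np.array(t, dtype=float))
            if u not in orb:
                orb.add(u)
                stack.append(u)
    seen |= orb
    orbits.append(sorted(orb))

dot = lambda s, t: sum(a * b for a, b in zip(s, t))
assert len(orbits) == 15 and all(len(o) == 8 for o in orbits)
assert all(dot(o[i], o[j]) == 0 for o in orbits for i in range(8) for j in range(i + 1, 8))
print("15 orbits of size 8, each a clique of pairwise orthogonal vectors")
```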
We have the following stabilizer subgroups for all vertices in the corresponding orbits:
\begin{align}
&V_1:\,\,\, \langle \sigma_{IIX}, \sigma_{IZI}, \sigma_{ZII}\rangle, &&V_2:\,\,\,\langle \sigma_{ZII}, \sigma_{IXI}, \sigma_{IIZ}\rangle, &&V_3:\,\,\,\langle \sigma_{ZII}, \sigma_{IXX}, \sigma_{IZZ}\rangle,\nonumber\\
&V_4:\,\,\,\langle \sigma_{XII}, \sigma_{IIZ}, \sigma_{IZI}\rangle, &&V_5:\,\,\,\langle \sigma_{XIX}, \sigma_{IZI}, \sigma_{ZIZ}\rangle, &&V_6:\,\,\,\langle \sigma_{XXI}, \sigma_{IIZ}, \sigma_{ZZI}\rangle,\nonumber\\
&V_7:\,\,\,\langle \sigma_{XXX}, \sigma_{ZZI}, \sigma_{IZZ}\rangle, &&V_8:\,\,\,\langle \sigma_{IIX}, \sigma_{ZXI}, \sigma_{XZI}\rangle, &&V_9:\,\,\,\langle \sigma_{IXI}, \sigma_{ZIX}, \sigma_{XIZ}\rangle,\nonumber\\
&V_{10}:\langle \sigma_{IXX}, \sigma_{ZXI}, \sigma_{XZZ}\rangle, &&V_{11}:\langle \sigma_{XII}, \sigma_{IXZ}, \sigma_{IZX}\rangle, &&V_{12}:\langle \sigma_{XIX}, \sigma_{IZX}, \sigma_{ZXZ}\rangle,\nonumber\\
&V_{13}:\langle \sigma_{XXI}, \sigma_{IXZ}, \sigma_{ZZX}\rangle, &&V_{14}:\langle \sigma_{XXX}, \sigma_{ZZX}, \sigma_{XZZ}\rangle, &&V_{15}:\langle \sigma_{XII}, \sigma_{IXI}, \sigma_{IIX}\rangle.\label{Stabs}
\end{align}
Here, we use the notation $M_1M_2M_3:=M_1 \otimes M_2 \otimes M_3$, where $M_i\in \{I,X,Y,Z\}$.
Note that the rank-$1$ projections associated to the vectors can be written in the form
\begin{align*}
\frac{1}{\|x\|^2}xx^*=\frac{1}{8}(1\pm N_1)(1\pm N_2)(1\pm N_3),
\end{align*}
where $N_i, i=1,2,3$ are the matrices associated to the generators of the stabilizer subgroup. For example, we have
\begin{align*}
\frac{1}{2}(e_1+e_2)(e_1+e_2)^*=\frac{1}{8}(1+IIX)(1+IZI)(1+ZII).
\end{align*}
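This factorization can be checked numerically, for instance for the example above (Python sketch; the Kronecker factors match the notation $M_1M_2M_3$):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

e = np.eye(8)
x = e[0] + e[1]                      # the vector e_1 + e_2
lhs = np.outer(x, x) / (x @ x)       # the rank-1 projection P_x
one = np.eye(8)
# (1/8)(1 + IIX)(1 + IZI)(1 + ZII), the product of the three commuting projections
rhs = (one + kron3(I2, I2, X)) @ (one + kron3(I2, Z, I2)) @ (one + kron3(Z, I2, I2)) / 8
print(np.allclose(lhs, rhs))  # True
```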
Using those rank-$1$ projections, we will now define a quantum permutation matrix on the vertex set of $G_{E_8}$.
\begin{lemma}\label{qpermmatrix}
Let $P_x=\frac{1}{\|x\|^2}xx^*$ be the rank-$1$ projection associated to every vector $x \in \Psi_{E_8}$ in the $E_8$ root system. We partition the vertex set of $G_{E_8}$ as in \eqref{vertexpartition}. For each $i \in \{1,\dots, 15\}$, we choose a vector $w_i$ associated to one of the vertices in $V_i$. For $v_y, v_z \in V_i$, we define $u_{v_yv_z}^{(i)}:=M_{yz}P_{w_i}M_{yz}^*=P_{M_{yz}w_i}$, where $M_{yz}=M_1\otimes M_2 \otimes M_3$, $M_i\in \{I,X,Y,Z\}$ fulfills $M_{yz}y=\pm z$ (such an $M_{yz}$ exists since $v_y$ and $v_z$ are in the same orbit under the action of $L$). Let $u^{(i)}=(u_{v_yv_z}^{(i)})_{v_y,v_z \in V_i}$ and let $u$ be the matrix
\begin{align}
\begin{blockarray}{cccccc}
&V_1&V_2&\dots&\dots&V_{15}\\
\begin{block}{c(ccccc)}
V_1&u^{(1)}&0&0&\dots&0\\ V_2&0&u^{(2)}&0&\dots&0\\ \vdots&0&0&u^{(3)}&\dots&0\\\vdots&\vdots &\vdots&\vdots&\ddots&0\\V_{15}&0&0&0&0&u^{(15)} \\
\end{block}
\end{blockarray}\label{blockmatrix}
\end{align}
Then $u$ is a quantum permutation matrix.
\end{lemma}
\begin{proof}
The entries of $u$ are projections by definition. It remains to show that $\sum_{v_y\in V_i}u_{v_yv_z}^{(i)}=1_{M_8(\mathbb{C})}$ and $\sum_{v_z\in V_i}u_{v_yv_z}^{(i)}=1_{M_8(\mathbb{C})}$ for all $i$. Since $M_{yz}^2=\pm 1_{M_8(\mathbb{C})}$, we see that we also have $M_{yz}z=\pm y$. For $v_y, v_{z_1}, v_{z_2}\in V_i$ with $z_1\neq z_2$, we thus have $M_{yz_2}M_{yz_1}z_1=\pm z_2$. It follows $\sigma_{M_{yz_2}M_{yz_1}}\notin \mathrm{Stab}_L(v_{z_1})=\mathrm{Stab}_L(v_{w_i})$, where the equality of the stabilizers comes from Lemma \ref{stabeq}. We get $M_{yz_1}w_i\neq \pm M_{yz_2}w_i$ for all $v_{z_1}\neq v_{z_2} \in V_i$. Recall that we have $u_{v_yv_z}^{(i)}=P_{M_{yz}w_i}$. We deduce
\begin{align*}
\sum_{v_z\in V_i}u_{v_yv_z}^{(i)}=\sum_{v_z\in V_i}P_{M_{yz}w_i}=1_{M_8(\mathbb{C})},
\end{align*}
since we sum over all rank-$1$ projections of the vectors associated to $V_i$, which form an orthogonal basis of $\mathbb{R}^8$ (see \eqref{vertexpartition}). One similarly shows $\sum_{v_y\in V_i}u_{v_yv_z}^{(i)}=1_{M_8(\mathbb{C})}$.
\end{proof}
For the next lemma, recall that $d(i,j)$ denotes the distance between vertices $i,j$ as defined in Section \ref{secprelim}.
\begin{lemma}\label{edgepermutation}
Partition the vertex set of $G_{E_8}$ as in \eqref{vertexpartition}. Let $k,s \in V_i$ and $l,t \in V_j$ for $i\neq j$. If $d(k,l)=d(s,t)$, then there exists $\sigma \in L$ such that $\sigma(k)=s$, $\sigma(l)=t$.
\end{lemma}
\begin{proof}
Since $k,s \in V_i$, they are in the same orbit under the action of $L$ by definition. Thus, there exists $\sigma_1\in L$ such that $\sigma_1(k)=s$, $\sigma_1(l)=t'$, where $d(k,l)=d(s,t')$.
Let $\mathrm{Stab}_L(s)\cap \mathrm{Stab}_L(t')=\{\mathrm{id}, \tau\}$ (we know $|\mathrm{Stab}_L(s)\cap \mathrm{Stab}_L(t')|=2$, see \eqref{Stabs}). Assume $\sigma_2, \sigma_3 \in \mathrm{Stab}_L(s)$ are such that $\sigma_2(t')=\sigma_3(t')$. Then $\sigma_2\circ \sigma_3\in \mathrm{Stab}_L(s)\cap \mathrm{Stab}_L(t')$, since $\sigma_2\circ \sigma_3(t')=\sigma_2\circ \sigma_2(t')=t'$ as $\sigma_2$ has order $2$. Therefore, we either have $\sigma_2\circ \sigma_3=\mathrm{id}$ or $\sigma_2\circ \sigma_3=\tau$. This shows $\sigma_3=\sigma_2$ or $\sigma_3=\sigma_2\circ \tau$. Since $|\mathrm{Stab}_L(s)/(\mathrm{Stab}_L(s)\cap \mathrm{Stab}_L(t'))|=4$, we see that the elements of $\mathrm{Stab}_L(s)$ map $t'$ to exactly four vertices $q\in V_j$, all fulfilling $d(s,t')=d(s,q)$. Note that $s$ has four neighbors and four non-neighbors in $V_j$. Thus, there exists $\sigma_4\in \mathrm{Stab}_L(s)$ such that $\sigma_4(t')=t$. Therefore, $\sigma=\sigma_4 \circ \sigma_1$ fulfills $\sigma(k)=s$ and $\sigma(l)=t$.
\end{proof}
We will now look at products of entries of the previously defined quantum permutation matrix. The following lemma inspired the alternative description (Definition \ref{defqisograph}) of the graph we discuss in the next subsection.
\begin{lemma}\label{prod0}
Let $u$ be as in Lemma \ref{qpermmatrix} and let $k,s \in V_i$ and $l,t\in V_j$, $i \neq j$.
\begin{itemize}
\item[(i)] For $d(k,l)=d(s,t)$, we have $u_{ks}^{(i)}u_{lt}^{(j)}=0$ if and only if $\langle w_i,w_j\rangle=0$.
\item[(ii)] For $d(k,l)\neq d(s,t)$, we have $u_{ks}^{(i)}u_{lt}^{(j)}=0$ if and only if $\langle w_i,w_j\rangle\neq0$.
\end{itemize}
\end{lemma}
\begin{proof}
We start with the proof of $(i)$. Since $d(k,l)=d(s,t)$, there exists $\sigma_M\in L$ such that $\sigma_M(k)=s$ and $\sigma_M(l)=t$ by Lemma \ref{edgepermutation}. Thus, we can choose $M_{ks}=M_{lt}:=M$ (note that $M_{ks}$ and $M_{lt}$ are unique up to the stabilizers of $w_i$ and $w_j$, respectively) and get
\begin{align}\label{eq1}
u_{ks}^{(i)}u_{lt}^{(j)}=MP_{w_i}M^*MP_{w_j}M^*=MP_{w_i}P_{w_j}M^*,
\end{align}
since we have $M^*M=1$ for all $M$ associated to the permutations in $L$. Since $P_{w_i}$ and $P_{w_j}$ are the rank-$1$ projections associated to $w_i$ and $w_j$, respectively, we have $\langle w_i,w_j \rangle=0$ if and only if $P_{w_i}P_{w_j}=0$. We see that the latter is equivalent to $u_{ks}^{(i)}u_{lt}^{(j)}=0$ by multiplying $M^*$ from the left and $M$ from the right in \eqref{eq1}.
For $(ii)$, recall from Lemma \ref{qpermmatrix} that $u_{ks}^{(i)}$ and $u_{lt}^{(j)}$ are the rank-$1$ projections of the vectors associated to the vertices in $V_i$ and $V_j$, respectively. Fix $k,s\in V_i$ and $l\in V_j$. As every vertex in $V_i$ is connected to four vertices in $V_j$, every rank-$1$ projection associated to $V_i$ is orthogonal to four rank-$1$ projections associated to $V_j$. Thus, we have $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for exactly four $t\in V_j$. Note that by \eqref{eq1}, we either have $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for all $t$ with $d(k,l)=d(s,t)$ or it holds $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for all $t$ with $d(k,l)\neq d(s,t)$. The assertion then follows from $(i)$.
\end{proof}
\subsection[Rank 4 graph]{A rank $4$ graph from independent sets of the folded halved $8$-cube graph}
The folded halved $8$-cube graph can be described as follows. Denote by $\mathbf{1}$ the all ones vector in $\mathbb{F}_2^8$. The vertex set of the folded halved $8$-cube graph is the set of all pairs $\{x,\mathbf{1}+x\}$ for all $x\in \mathbb{F}_2^8$ with $\sum_i x_i=0$. Two vertices $\{x,\mathbf{1}+x\}$ and $\{y,\mathbf{1}+y\}$ are adjacent if and only if $x+y$ or $x+y+\mathbf{1}$ is a vector $v$ having $v_i=1$ in exactly two positions.
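This description is easy to realize on bit strings. The following sketch (Python; the integer encoding of $\mathbb{F}_2^8$ is ours) builds the folded halved $8$-cube graph and computes its strong regularity parameters, which come out as $(64, 28, 12, 12)$:

```python
# Vertices: even-weight vectors of F_2^8 as 8-bit integers, identified with their
# complement x + 1; we keep the smaller representative of each pair.
MASK = 0xFF
evens = [x for x in range(256) if bin(x).count("1") % 2 == 0]
verts = sorted({min(x, x ^ MASK) for x in evens})
assert len(verts) == 64

# {x, x+1} ~ {y, y+1} adjacent iff x+y or x+y+1 has weight 2, i.e. wt(x^y) in {2, 6}.
def adjacent(x, y):
    return x != y and bin(x ^ y).count("1") in (2, 6)

n = len(verts)
A = [[int(adjacent(x, y)) for y in verts] for x in verts]
common = lambda i, j: sum(A[i][t] and A[j][t] for t in range(n))
k = {sum(row) for row in A}
lam = {common(i, j) for i in range(n) for j in range(i + 1, n) if A[i][j]}
mu = {common(i, j) for i in range(n) for j in range(i + 1, n) if not A[i][j]}
print(n, k, lam, mu)  # 64 {28} {12} {12}
```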
We need the folded halved $8$-cube graph in the next definition.
\begin{definition}[\cite{BIK}]\label{rank4graph}
Let $\Gamma_1$ be the following graph. Take as vertices one orbit of the independent sets of size $8$ under $\mathbb{Z}_2^6\times A_8$ in the folded halved $8$-cube graph, where two vertices are adjacent if the associated independent sets have two points in common.
\end{definition}
The graph is strongly regular with parameters $(120, 56, 28, 24)$ and has rank $4$, meaning that the stabilizer $\mathrm{Stab}_{\aut(\Gamma_1)}(v)$ of a vertex $v$ has $4$ orbits (see \cite{BIK}). Note that the complement of this graph also appears in \cite[Table 2.2]{MS}, being isomorphic to the graphs $G_2$--$G_5$ there. To see how this graph is related to $G_{E_8}$, we will now introduce graphs $G^{\boldsymbol{w}}$ and show that they are isomorphic to the complement of $\Gamma_1$. The definition of $G^{\boldsymbol{w}}$ comes from the relations between the entries of the quantum permutation matrix in Lemma \ref{prod0}.
\begin{definition}\label{defqisograph}
For each $i \in \{1,\dots, 15\}$, choose a vector $w_i$ associated to one of the vertices in $V_i$, where $V_i$ are the sets of vertices as in $\eqref{vertexpartition}$. Let $\boldsymbol{w}=\{w_1, \dots, w_{15}\}$ and define the graph $G^{\boldsymbol{w}}$ as follows. We let $V(G^{\boldsymbol{w}})=V(G_{E_8})$. If $s\in V_i$, $t\in V_j$ with $\langle w_i, w_j\rangle \neq 0$, we let $(s,t)\in E(G^{\boldsymbol{w}})$ if and only if $(s,t)\in E(G_{E_8})$. If $s\in V_i$, $t\in V_j$ with $\langle w_i, w_j\rangle = 0$, we let $(s,t)\in E(G^{\boldsymbol{w}})$ if and only if $(s,t)\notin E(G_{E_8})$.
\end{definition}
A convenient choice for the vectors $w_i$ is $e_1-e_j$, $x_{\{1,j\}}$, $x_{\emptyset}$ for $j \in \{2,\dots, 8\}$. Then the only orthogonal pairs among them are $e_1-e_j, x_{\{1,j\}}$ and $e_1-e_j, x_{\emptyset}$ for $j \in \{2,\dots, 8\}$. We will see that we get isomorphic graphs for any choice of the $w_i$. First, we look at automorphisms of $G^{\boldsymbol{w}}$.
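The stated orthogonality pattern can be verified directly (Python sketch; the indexing of the $15$ vectors is ours):

```python
import numpy as np

e = np.eye(8)

def x_S(S):  # the vector with entries -1 on the positions in S and +1 elsewhere
    v = np.ones(8)
    v[list(S)] = -1
    return v

# the convenient choice: e_1 - e_j (indices 0..6), x_{1j} (indices 7..13), x_empty (index 14)
w = [e[0] - e[j] for j in range(1, 8)] + [x_S({0, j}) for j in range(1, 8)] + [x_S(set())]
assert len(w) == 15

orth = {(i, j) for i in range(15) for j in range(i + 1, 15) if abs(w[i] @ w[j]) < 1e-9}
expected = ({(j, 7 + j) for j in range(7)}   # e_1 - e_j orthogonal to x_{1j}
            | {(j, 14) for j in range(7)})   # e_1 - e_j orthogonal to x_empty
print(orth == expected)  # True
```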
\begin{lemma}\label{autsG'}
Let $G^{\boldsymbol{w}}$ be as in Definition \ref{defqisograph} and let $L$ be as in Lemma \ref{Lsubgroup}. We have $L \subseteq \aut(G^{\boldsymbol{w}})$. Furthermore, for $v_x \in V_i$ and $v_y \in V_j$, we have $(v_x,v_y) \notin E(G^{\boldsymbol{w}})$ if and only if there exists $\sigma\in L$ such that $\sigma(v_{w_i})=v_x$ and $\sigma(v_{w_j})=v_y$.
\end{lemma}
\begin{proof}
Let $\sigma_M \in L$. Since we have $V(G^{\boldsymbol{w}})=V(G_{E_8})$, we know that $\sigma_M $ is well-defined on $V(G^{\boldsymbol{w}})$ by Lemma \ref{Lsubgroup}. Let $V_i$, $i \in \{1, \dots, 15\}$ as in \eqref{vertexpartition}. By definition, we have $\sigma_M(v_x)\in V_i$ for all $v_x \in V_i$. Using this, the definition of $G^{\boldsymbol{w}}$ and $\sigma_M \in \aut(G_{E_8})$, we see that $\sigma_M \in \aut(G^{\boldsymbol{w}})$. Thus $L \subseteq \aut(G^{\boldsymbol{w}})$.
For the second assertion, first note that we have $(v_{w_i}, v_{w_j})\notin E(G^{\boldsymbol{w}})$ for every $i, j$. We will denote by $\mathrm{Stab}_L(V_i)$ the stabilizer subgroup of all vertices in $V_i$. Let $v_x \in V_i$ and define $B:=M_{w_ix}$. Since $\sigma_{B}\in \aut(G^{\boldsymbol{w}})$, we have $(v_x, v_{Bw_j})\notin E(G^{\boldsymbol{w}})$ for all $j \neq i$. For every $\sigma_N\in \mathrm{Stab}_L(V_i)\backslash \mathrm{Stab}_L(V_j)$, we thus get a vertex $v_{NBw_j}$ with $(v_x, v_{NBw_j})\notin E(G^{\boldsymbol{w}})$. Since we have $|\mathrm{Stab}_L(V_i)/ (\mathrm{Stab}_L(V_i)\cap \mathrm{Stab}_L(V_j))|=4$, we thus get four non-neighbors of $v_x$ for every $j \neq i$. We get $56$ non-neighbors in this way and since the graph $G^{\boldsymbol{w}}$ is $63$-regular (the construction from $G_{E_8}$ does not change the number of neighbors of a vertex), those are all non-neighbors in $G^{\boldsymbol{w}}$.
\end{proof}
\begin{cor}
The graphs $G^{\boldsymbol{w}^{(1)}}$ and $G^{\boldsymbol{w}^{(2)}}$ are isomorphic for any choice of $w_i^{(1)}\in V_i$ and $w_i^{(2)}\in V_i$, $i\in \{1,\dots, 15\}$.
\end{cor}
\begin{proof}
Since we have $V(G^{\boldsymbol{w}^{(1)}})=V(G^{\boldsymbol{w}^{(2)}})=V(G_{E_8})$, we can partition the vertex sets into $V_i$ as in \eqref{vertexpartition}. Let $N_i:=M_{w_i^{(1)}w_i^{(2)}}$, where $M_{w_i^{(1)}w_i^{(2)}}$ is as in Lemma \ref{qpermmatrix}. We claim that the map $\varphi:V(G^{\boldsymbol{w}^{(1)}})\to V(G^{\boldsymbol{w}^{(2)}})$, $v_x \in V_i\mapsto v_{N_ix}\in V_i$, is an isomorphism between $G^{\boldsymbol{w}^{(1)}}$ and $G^{\boldsymbol{w}^{(2)}}$. Let $v_x\in V_i$, $v_y\in V_j$. Assume $(v_{N_ix}, v_{N_jy})\notin E(G^{\boldsymbol{w}^{(2)}})$. By Lemma \ref{autsG'} this is equivalent to the existence of $\tau \in L$ such that $\tau(v_{w_i^{(2)}})=v_{N_ix}$ and $\tau(v_{w_j^{(2)}})=v_{N_jy}$. Using $v_{N_ix}=\sigma_{N_i}(v_x)$, $\sigma_{N_i}^2=\mathrm{id}$, $\sigma_{N_i}\circ \tau=\tau \circ \sigma_{N_i}$ and $\sigma_{N_i}(v_{w_i^{(2)}})=v_{w_i^{(1)}}$, and similarly for $v_y$ and $j$, this is equivalent to $\tau(v_{w_i^{(1)}})=v_x$ and $\tau(v_{w_j^{(1)}})=v_y$. Again using Lemma \ref{autsG'}, we get those equations if and only if $(v_x,v_y)\notin E(G^{\boldsymbol{w}^{(1)}})$. Thus, $\varphi$ is an isomorphism between $G^{\boldsymbol{w}^{(1)}}$ and $G^{\boldsymbol{w}^{(2)}}$.
\end{proof}
We will now show that $\overline{\Gamma}_1$ is isomorphic to the graph $G^{\boldsymbol{w}}$.
\begin{theorem}\label{isographs}
The complement of the graph $G^{\boldsymbol{w}}$ is isomorphic to the graph $\Gamma_1$ described in Definition \ref{rank4graph}.
\end{theorem}
\begin{proof}
First note that the folded halved (or halved folded) $8$-cube graph is the complement of $VO_6^+(2)$, see \cite[Section 10.26]{BVM}. The latter graph has vertex set $\mathbb{F}_2^6$ and two vertices $x,y$ are adjacent if $Q(x+y)=0$, where $Q(z)=z_1z_2+z_3z_4+z_5z_6$. We will use an equivalent description. Take as vertex set $X^{x_1}Z^{x_2}\otimes X^{x_3}Z^{x_4}\otimes X^{x_5}Z^{x_6}$ for the Pauli matrices $X, Z$ as in \eqref{Pauli}, $x\in \{0,1\}^6$. Then two vertices are adjacent if in the product of the $3$-tensor Pauli matrices, the matrix $Y=XZ$ appears either zero or two times in the three tensor legs (we will consider the $3$-tensor Pauli matrices up to a sign). We construct the graph $\Gamma_1$ from the following $120$ cliques of size $8$ in $VO_6^+(2)$.
Recall from \eqref{Stabs} the stabilizer subgroups of the vertices. We will denote by $\mathrm{Stab}_L(V_i)$ the stabilizer subgroup of all vertices in $V_i$ (they are the same by Lemma \ref{stabeq}). For $i \in \{1,\dots, 15\}$, let
\begin{align}\label{clique1}
C(i)=\{M=M_1\otimes M_2 \otimes M_3,\, M_i\in \{I,X,Y,Z\}\,|\, \sigma_M \in \mathrm{Stab}_L(V_i)\}.
\end{align}
They form cliques in $VO_6^+(2)$, as all elements in $C(i)$, and thus their products, contain the matrix $Y$ either zero or two times in the tensor legs. The remaining $105$ cliques of size $8$ that we choose are of the form
\begin{align}\label{clique2}
NC(i)=\{NM \,|\, M \in C(i)\},
\end{align}
where $N=N_1\otimes N_2 \otimes N_3$ with $N_i\in \{I,X,Y,Z\}$ and $\sigma_N \notin \mathrm{Stab}_L(V_i)$. In \eqref{clique2}, we leave out signs in front of the tensor and consider the corresponding vertex in $V(VO_6^+(2))$ to $NM$. We get seven additional cliques for every $C(i)$ in this way. Choosing those $120$ cliques as vertices, which are adjacent if the corresponding cliques share two vertices of $VO_6^+(2)$ gives us the graph $\Gamma_1$ (Definition \ref{rank4graph}) because of the following. The subgroup $\mathbb{Z}_2^6$ is generated by left multiplication with elements of the form $N=N_1\otimes N_2 \otimes N_3$ with $N_i\in \{I,X,Y,Z\}$. Thus, we get from $C(i)$ to $NC(i)$ using those automorphisms. Let $q(x,v)=Q(x+v)+Q(v)+Q(x)$ for $x,v \in \mathbb{F}_2^6$. The subgroup $A_8$ of the automorphism group of $VO_6^+(2)$ is generated by even products of maps $t_v:\mathbb{F}_2^6\to \mathbb{F}_2^6$, $t_v(x)=x+q(x,v)v$, where $Q(v)=1$ (inducing automorphisms on $X^{x_1}Z^{x_2}\otimes X^{x_3}Z^{x_4}\otimes X^{x_5}Z^{x_6}$). One checks that one gets from $C(i)$ to $C(j)$ using those automorphisms (for example, we use $t_v\circ t_w$ with $v=(001100), w=(000011)$ to obtain $C(2)$ from $C(1)$). Since we know from \cite{BIK} that there are $120$ cliques in the orbit, the cliques in \eqref{clique1} and \eqref{clique2} are all of them.
We will now use the same labelling for the complement of $G^{\boldsymbol{w}}$ to see that the graphs are isomorphic. Let $G^{\boldsymbol{w}}$ be as in Definition \ref{defqisograph}. Relabel the vertices $v_{w_i}$ by $C(i)$ as in \eqref{clique1} and $v_x\in V_i$ by $M_{w_ix}C(i)$ as in \eqref{clique2}, where $M_{w_ix}$ is as in Lemma \ref{qpermmatrix}. In this labelling, the automorphisms $\sigma_M\in L$ send $NC(i)$ to $(MN)C(i)$. By Lemma \ref{autsG'}, we know that vertices $v_x$ and $v_y$ in the complement of $G^{\boldsymbol{w}}$ are adjacent if and only if there exists $\sigma_N\in L$ such that $\sigma_N(v_{w_i})=v_x$ and $\sigma_N(v_{w_j})=v_y$. With the new labelling, this means that vertices $AC(i)$ and $BC(j)$ are adjacent if and only if there exists $\sigma_N\in L$ such that $\sigma_N(C(i))=AC(i)$ and $\sigma_N(C(j))=BC(j)$. Looking at \eqref{Stabs}, we see that every $C(i)$ and $C(j)$ have two points in common. Since for adjacent $AC(i)$ and $BC(j)$ there exists $\sigma_N \in L$ such that $\sigma_N(C(i))=AC(i)$ and $\sigma_N(C(j))=BC(j)$, the sets $AC(i)$ and $BC(j)$ also have two points in common. We know that $\Gamma_1$ is $56$-regular, thus every clique shares two points with exactly $56$ other cliques. Since the complement of $G^{\boldsymbol{w}}$ is also $56$-regular and all neighbors share two points, we see that the existence of an automorphism $\sigma_N$ with $\sigma_N(C(i))=AC(i)$ and $\sigma_N(C(j))=BC(j)$ is equivalent to $AC(i)$ and $BC(j)$ sharing two points. Therefore, the complement of the graph $G^{\boldsymbol{w}}$ is isomorphic to the graph $\Gamma_1$.
\end{proof}
\subsection[The quantum isomorphism]{The quantum isomorphism between $G_{E_8}$ and $\overline{\Gamma}_1$}
We will now show that $G_{E_8}$ and $\overline{\Gamma}_1$ are quantum isomorphic. This will follow from Lemma \ref{prod0} and Theorem \ref{isographs}.
\begin{theorem}\label{qisostronglyregular}
Let $G_{E_8}$ be the graph as in Definition \ref{defGE8} and $\Gamma_1$ as in Definition \ref{rank4graph}. Then $G_{E_8}$ and $\overline{\Gamma}_1$ are quantum isomorphic, non-isomorphic strongly regular graphs.
\end{theorem}
\begin{proof}
By Theorem \ref{isographs}, we know that $\overline{\Gamma}_1$ is isomorphic to $G^{\boldsymbol{w}}$ for any choice of $\boldsymbol{w}$. To see that $G_{E_8}$ and $G^{\boldsymbol{w}}$ are quantum isomorphic, we have to show that there exists a quantum permutation matrix $u$ fulfilling $A_{G_{E_8}}u=uA_{G^{\boldsymbol{w}}}$. By Lemma \ref{lemprod0}, this is equivalent to showing $u_{ks}u_{lt}=0$ for $d(k,l)\neq d(s,t)$, $k,l \in V(G_{E_8})$, $s,t \in V(G^{\boldsymbol{w}})$. Let $u$ be the quantum permutation matrix as in Lemma \ref{qpermmatrix}, where we choose the same $w_i$ as in the construction of $G^{\boldsymbol{w}}$. Since $u$ has block form (see \eqref{blockmatrix}) and we know that all vertices in $V_i$ are pairwise adjacent in $G_{E_8}$ and $G^{\boldsymbol{w}}$, it remains to show $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for $d(k,l)\neq d(s,t)$, $k,l \in V(G_{E_8})$, $s,t \in V(G^{\boldsymbol{w}})$ for $i \neq j$. By definition of $G^{\boldsymbol{w}}$, we thus have to show $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for $d(k,l)\neq d(s,t)$, $k,l,s,t \in V(G_{E_8})$ if $\langle w_i,w_j \rangle \neq0$ and $u_{ks}^{(i)}u_{lt}^{(j)}=0$ for $d(k,l)= d(s,t)$, $k,l,s,t \in V(G_{E_8})$ if $\langle w_i,w_j \rangle =0$. But this follows from Lemma \ref{prod0}.
The graphs $G_{E_8}$ and $G^{\boldsymbol{w}}$ are not isomorphic because of the following. It is well known that $G_{E_8}$ has independence number $8$ (see for example \cite[Section 10.39]{BVM}). By construction, the set of vertices $\{v_{w_1}, \dots, v_{w_{15}}\}$ is an independent set of size $15$ in $G^{\boldsymbol{w}}$, which already shows $G_{E_8}\ncong G^{\boldsymbol{w}}$.
Thus, $G_{E_8}$ and $\overline{\Gamma}_1$ are quantum isomorphic, non-isomorphic strongly regular graphs.
\end{proof}
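The whole construction can be replayed numerically. The sketch below (Python; it picks the lexicographically smallest representative of each orbit as $w_i$, which is harmless since any choice yields an isomorphic graph by the corollary above) builds $G^{\boldsymbol{w}}$ from $G_{E_8}$, confirms that it is again strongly regular with parameters $(120, 63, 30, 36)$, and exhibits the independent set $\{v_{w_1},\dots,v_{w_{15}}\}$:

```python
import numpy as np
from itertools import combinations, product
from math import prod

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)
gens = [kron3(X, I2, I2), kron3(Z, I2, I2), kron3(I2, X, I2),
        kron3(I2, Z, I2), kron3(I2, I2, X), kron3(I2, I2, Z)]

def canon(v):  # identify x and -x
    t = tuple(int(round(c)) for c in v)
    return t if next(c for c in t if c) > 0 else tuple(-c for c in t)

roots = []
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))
roots += [s for s in product((1, -1), repeat=8) if prod(s) == 1]
verts = sorted({canon(t) for t in roots})

# orbits of L, with one representative w_i per orbit
orbit_of, reps = {}, []
for v in verts:
    if v in orbit_of:
        continue
    orb, stack = {v}, [v]
    while stack:
        t = stack.pop()
        for M in gens:
            u = canon(M @ np.array(t, dtype=float))
            if u not in orb:
                orb.add(u)
                stack.append(u)
    for t in orb:
        orbit_of[t] = len(reps)
    reps.append(min(orb))
assert len(reps) == 15

dot = lambda s, t: sum(a * b for a, b in zip(s, t))
flip = [[dot(reps[i], reps[j]) == 0 for j in range(15)] for i in range(15)]

def adj_Gw(s, t):  # adjacency in G^w: flip orthogonality between orbits with orthogonal reps
    e8 = dot(s, t) == 0
    return (not e8) if flip[orbit_of[s]][orbit_of[t]] else e8

A = np.array([[int(adj_Gw(s, t)) for t in verts] for s in verts])
A2 = A @ A
assert set(A.sum(axis=1)) == {63}
lam = {int(A2[i, j]) for i in range(120) for j in range(120) if i < j and A[i, j]}
mu = {int(A2[i, j]) for i in range(120) for j in range(120) if i < j and not A[i, j]}
assert lam == {30} and mu == {36}
# the 15 orbit representatives are pairwise non-adjacent in G^w by construction
assert all(not adj_Gw(reps[i], reps[j]) for i in range(15) for j in range(15) if i != j)
print("G^w is SRG(120, 63, 30, 36) with an independent set of size 15")
```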
\begin{remark}\label{remqiso}
\begin{itemize}
\item[(i)] Note that deleting a clique in a graph decreases the independence number by at most one. Let $T\subseteq \{1,\dots, 15\}$ and $V_T=\bigcup_{i\in T}V_i$, where $V_i$ is the partition of $V(G_{E_8})$ as before (see \eqref{vertexpartition}). For $|T|\geq 9$, the induced subgraphs of $G_{E_8}$ and $G^{\boldsymbol{w}}$ on $V_T$ are still quantum isomorphic and non-isomorphic. They are quantum isomorphic, since the submatrix on $V_T$ of $u$ (see \eqref{blockmatrix}) is still a quantum permutation matrix fulfilling the required relations. They are non-isomorphic, since the independence number of the subgraph of $G_{E_8}$ is less than or equal to $8$, whereas the independence number of the subgraph of $G^{\boldsymbol{w}}$ is greater than or equal to $9$.
\item[(ii)] It is noted in \cite[Section 10.39]{BVM} that $G_{E_8}$ is distance-transitive. Thus, we know that both the quantum orbital algebra and the orbital algebra are $3$-dimensional. By \cite{BIK}, we know that $\overline{\Gamma}_1$ is a rank $4$ graph, which means that the orbital algebra is $4$-dimensional. Since $G_{E_8}$ and $\overline{\Gamma}_1$ are quantum isomorphic, we know that the quantum orbital algebra of $\overline{\Gamma}_1$ is $3$-dimensional (see \cite[Theorem 4.6]{LMR}). Thus, the graph $\overline{\Gamma}_1$ is the first example of a graph that has a $3$-dimensional quantum orbital algebra and a $4$-dimensional orbital algebra. For more on quantum orbital algebras, see \cite{LMR}.
\item[(iii)] Looking at \cite[Table 2.2]{MS}, we see that $|\aut(G_{E_8})|=348364800$ and $|\aut(\overline{\Gamma}_1)|=1290240$. Thus, we have an example of quantum isomorphic graphs, where the sizes of the automorphism groups differ. This also implies that $\overline{\Gamma}_1$ has quantum symmetry, since the quantum automorphism groups of quantum isomorphic graphs are monoidally equivalent \cite[Theorem 4.7]{BCEHPSW}.
\end{itemize}
\end{remark}
\section{Switching quantum isomorphic graphs}\label{secswitching}
In this section, we will use Godsil-McKay switching to construct more quantum isomorphic, non-isomorphic graphs from the pair we obtained in the last section. We start with the definition of Godsil-McKay switching.
\begin{definition}[Godsil-McKay switching, \cite{GM}]\label{GMswitching}
Let $G$ be a graph and $\pi=\{C_1, \dots, C_k, D\}$ be a partition of the vertex set $V(G)$. Suppose that for all $1 \leq i,j \leq k$ and all $v \in D$ we have
\begin{itemize}
\item[(i)] any two vertices in $C_i$ have the same number of neighbors in $C_j$,
\item[(ii)] the vertex $v$ has either $0$, $\frac{n_i}{2}$ or $n_i$ neighbors in $C_i$, where $n_i:=|C_i|$.
\end{itemize}
The graph $G^{\pi, D}$ is obtained as follows. For each $v\in D$ and $1 \leq i \leq k$ such that $v$ has $\frac{n_i}{2}$ neighbors in $C_i$, delete these $\frac{n_i}{2}$ edges and instead join $v$ to the other $\frac{n_i}{2}$ vertices in $C_i$.
Let $Q_m=\frac{2}{m}J_m-1_{M_m(\mathbb{C})}$, where $J_m$ is the all-ones matrix. In terms of the adjacency matrix, we have $A_{G^{\pi, D}}=QA_GQ$, where $Q$ is the following matrix
\begin{align}
\begin{blockarray}{ccccccc}
&C_1&C_2&\dots&\dots&C_k&D\\
\begin{block}{c(cccccc)}
C_1&Q_{n_1}&0&0&\dots&\dots&0\\ C_2&0&Q_{n_2}&0&\dots&\dots&0\\ \vdots&0&0&Q_{n_3}&\dots&\dots&0\\\vdots&\vdots &\vdots&\vdots&\ddots&\dots&0\\
C_k&\vdots &\vdots&\vdots&\vdots&Q_{n_k}&0\\
D&0&0&0&0&0&1_{M_{|D|}(\mathbb{C})} \\
\end{block}
\end{blockarray}\label{switchingmatrix}
\end{align}
\end{definition}
\begin{theorem}[\cite{GM}]
The graphs $G$ and $G^{\pi, D}$ are cospectral.
\end{theorem}
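As a quick numerical illustration of Definition \ref{GMswitching} (not taken from any of the cited works; the toy graph, cell sizes and all identifiers below are our own choices), one can check on a small example that the combinatorial switching operation agrees with the conjugation $A_{G^{\pi, D}}=QA_GQ$ from \eqref{switchingmatrix}, and that the spectra coincide:

```python
import numpy as np

def gm_switch(A, cells, D):
    # For each v in D with exactly |C|/2 neighbours in a cell C,
    # swap its neighbours in C for its non-neighbours in C.
    B = A.copy()
    for C in cells:
        for v in D:
            deg = sum(A[v, c] for c in C)
            if 0 < deg < len(C) and 2 * deg == len(C):
                for c in C:
                    B[v, c] = B[c, v] = 1 - B[v, c]
    return B

def switching_matrix(n, cells):
    # Q from the definition: Q_{n_i} = (2/n_i) J - I on each cell, identity on D
    Q = np.eye(n)
    for C in cells:
        m = len(C)
        Q[np.ix_(C, C)] = (2.0 / m) * np.ones((m, m)) - np.eye(m)
    return Q

# toy graph: a 4-cycle 0-1-2-3-0 (the single cell C_1) plus a vertex 4 (D)
# adjacent to exactly half of C_1, so both conditions (i) and (ii) hold
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1)]:
    A[i, j] = A[j, i] = 1

cells, D = [[0, 1, 2, 3]], [4]
B = gm_switch(A, cells, D)
Q = switching_matrix(5, cells)

assert np.allclose(Q @ A @ Q, B)        # A_{G^{pi,D}} = Q A_G Q
spec = lambda M: np.sort(np.linalg.eigvalsh(M))
assert np.allclose(spec(A), spec(B))    # G and G^{pi,D} are cospectral
```

Here condition $(i)$ holds because every vertex of the $4$-cycle has two neighbors in the cell, and the extra vertex has exactly $n_1/2=2$ neighbors in it. For such tiny examples the switched graph is typically still isomorphic to the original; what the sketch verifies is the mechanics of switching and cospectrality, not non-isomorphy.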
The next theorem shows that we can find new pairs of quantum isomorphic, non-isomorphic graphs from certain vertex partitions compatible with the block structure of a quantum permutation matrix associated to a quantum isomorphism between the graphs.
\begin{theorem}\label{thm:qisoGM}
Let $G_1$ and $G_2$ be quantum isomorphic graphs, where there exists a quantum permutation matrix $u$ with $uA_{G_1}=A_{G_2}u$ of the form
\begin{align}
\begin{blockarray}{cccccc}
&V_1&V_2&\dots&\dots&V_m\\
\begin{block}{c(ccccc)}
V_1&u^{(1)}&0&0&\dots&0\\ V_2&0&u^{(2)}&0&\dots&0\\ \vdots&0&0&u^{(3)}&\dots&0\\\vdots&\vdots &\vdots&\vdots&\ddots&0\\
V_m&0&0&0&0&u^{(m)}\\
\end{block}
\end{blockarray}\label{switchqpermmatrix}
\end{align}
for some partition $\{V_1, \dots, V_m\}$ of the vertex sets (we can label both vertex sets by $V$ as quantum isomorphic graphs have the same number of vertices). Let $\{S_1, \dots, S_{k+1}\}$ be a partition of $[m]$ and define a partition $\pi=\{C_1, \dots, C_k, D\}$ of the vertex set by setting $C_i:=\bigcup_{s\in S_i}V_s$ and $D:=\bigcup_{s\in S_{k+1}}V_s$.
If $G_1$ and $G_2$ fulfill the properties $(i)$ and $(ii)$ of Definition \ref{GMswitching} with respect to $\pi$, we can use Godsil-McKay switching and the graphs $G_1^{\pi, D}$ and $G_2^{\pi, D}$ are quantum isomorphic.
\end{theorem}
\begin{proof}
We first show $uQ=Qu$ for $Q$ as in \eqref{switchingmatrix} and $u$ as in \eqref{switchqpermmatrix}. By the block form of $Q$ and $u$ as well as reordering the $V_i$'s, we have $uQ=Qu$ if and only if $(\bigoplus_{s\in S_i}u^{(s)})Q_{n_i}=Q_{n_i}(\bigoplus_{s\in S_i}u^{(s)})$ for all $1 \leq i \leq k$. Since the matrices $u_{S_i}:=\bigoplus_{s\in S_i}u^{(s)}$ are also quantum permutation matrices, we compute
\begin{align*}
(u_{S_i}Q_{n_i})_{kl}=\frac{2}{n_i}\sum_j (u_{S_i})_{kj}- (u_{S_i})_{kl}=\frac{2}{n_i}-(u_{S_i})_{kl}=\frac{2}{n_i}\sum_j (u_{S_i})_{jl}-(u_{S_i})_{kl}=(Q_{n_i}u_{S_i})_{kl},
\end{align*}
where we used $\sum_j (u_{S_i})_{kj}=1=\sum_j (u_{S_i})_{jl}$.
We have $uA_{G_1}=A_{G_2}u$ by assumption and know $A_{G_a^{\pi, D}}=QA_{G_a}Q$ for $a=1,2$ by Definition \ref{GMswitching}. Using this, we deduce
\begin{align*}
uA_{G_1^{\pi, D}}=uQA_{G_1}Q=QA_{G_2}Qu=A_{G_2^{\pi, D}}u.
\end{align*}
Thus $G_1^{\pi, D}$ and $G_2^{\pi, D}$ are quantum isomorphic.
\end{proof}
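The commutation $uQ=Qu$ at the heart of this proof can be sanity-checked in the classical special case where each block $u^{(s)}$ of \eqref{switchqpermmatrix} is an ordinary permutation matrix. The sketch below (all block sizes and names are our own illustrative choices; for simplicity each cell consists of a single $V_s$) also confirms that $uA_1=A_2u$ then implies $u(QA_1Q)=(QA_2Q)u$:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_diag(blocks):
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        M[i:i + m, i:i + m] = b
        i += m
    return M

def Qm(m):                 # the switching block (2/m) J_m - I_m
    return (2.0 / m) * np.ones((m, m)) - np.eye(m)

def rand_perm(m):          # a classical quantum permutation block
    P = np.zeros((m, m))
    P[np.arange(m), rng.permutation(m)] = 1
    return P

sizes = [3, 4, 5]          # cells C_1, C_2 and the set D, say
u = block_diag([rand_perm(m) for m in sizes])
Q = block_diag([Qm(m) for m in sizes[:-1]] + [np.eye(sizes[-1])])

assert np.allclose(u @ Q, Q @ u)      # the key commutation u Q = Q u

# hence an intertwining u A1 = A2 u is preserved under switching:
n = sum(sizes)
A1 = rng.integers(0, 2, (n, n))
A1 = np.triu(A1, 1); A1 = A1 + A1.T   # random symmetric 0/1 matrix
A2 = u @ A1 @ u.T                     # "graphs" intertwined by u
assert np.allclose(u @ (Q @ A1 @ Q), (Q @ A2 @ Q) @ u)
```

The first assertion works because a permutation matrix fixes both the all-ones matrix $J_m$ and $I_m$ under conjugation; the genuinely quantum case requires the computation in the proof instead.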
Finally, we give the following example.
\begin{example}
Let $G_{E_8}$ be the graph as in Definition \ref{defGE8} and $G^{\boldsymbol{w}}$ as in Definition \ref{defqisograph}, where we choose the vectors $w_i$ as $e_1-e_j$, $x_{\{1,j\}}$, $x_{\emptyset}$ for $j \in \{2,\dots, 8\}$. Choose the vertex set partition $\pi=\{V_1, \dots, V_{15}\}$ as in \eqref{vertexpartition} and let $D=V_{15}$. Then, for $G_{E_8}$ and $G^{\boldsymbol{w}}$, we know that
\begin{itemize}
\item[(i)] any two vertices in $V_i$ have the same number of neighbors in $V_j$, namely $4$ for $i \neq j$ (and $7$ for $i=j$, since each $V_i$ is a clique),
\item[(ii)] every $v\in D=V_{15}$ has $4=\frac{|V_i|}{2}$ neighbors in $V_i$ for $i \neq 15$.
\end{itemize}
It is easy to see that $\pi$ satisfies the conditions in Theorem \ref{thm:qisoGM}. Therefore, we get that $(G_{E_8})^{\pi, V_{15}}$ and $(G^{\boldsymbol{w}})^{\pi, V_{15}}$ are quantum isomorphic. The graphs are non-isomorphic because of the following. Recall that we have $\alpha(G_{E_8})=8$ and $\alpha(G^{\boldsymbol{w}})=15$. Since $D$ is a clique, switching can at most increase or decrease the independence number by one, as at most one vertex in $D$ can be part of an independent set. Thus, we get $\alpha((G_{E_8})^{\pi, V_{15}})\leq 9$ and $\alpha((G^{\boldsymbol{w}})^{\pi, V_{15}})\geq 14$, which yields that the graphs are non-isomorphic.
Using Sage (\cite{sagemath}), we see that the graphs $(G_{E_8})^{\pi, V_{15}}$ and $(G^{\boldsymbol{w}})^{\pi, V_{15}}$ are non-isomorphic to $G_{E_8}$ and $G^{\boldsymbol{w}}$. Since $(G_{E_8})^{\pi, V_{15}}$ and $(G^{\boldsymbol{w}})^{\pi, V_{15}}$ are cospectral to $G_{E_8}$ and $G^{\boldsymbol{w}}$, respectively, the graphs $(G_{E_8})^{\pi, V_{15}}$ and $(G^{\boldsymbol{w}})^{\pi, V_{15}}$ are strongly regular with parameters $(120, 63, 30, 36)$.
\end{example}
\begin{remark}
\begin{itemize}
\item[(i)] We do not know how many different pairs of quantum isomorphic, non-isomorphic strongly regular graphs we get by using Godsil-McKay switching several times.
\item[(ii)] Similar to Remark \ref{remqiso} $(i)$, we can delete cliques and obtain more quantum isomorphic, non-isomorphic graphs in that way.
\end{itemize}
\end{remark}
\paragraph{Acknowledgments.}\phantom{a}\newline
The author has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101030346. He thanks David Roberson for helpful discussions on quantum isomorphisms and graph switching.
\bibliographystyle{plainurl}
\section{Introduction}
The advent of smartphones and tablets, data-hungry applications, and the ever-growing amount of digital content have increased mobile data traffic at an unprecedented rate \cite{Cisco2016}. It is anticipated that the mobile data traffic will grow at a compound annual growth rate of $53 \%$ from $2015$ to $2020$ and reach $30.6$ exabytes per month \cite{Cisco2016}. A considerable portion of this traffic belongs to contents that are of interest for groups of users in the network, for example, live broadcasts of sporting events, mobile TV, and regular system updates \cite{lecompte2012evolved,DVB-GFaria,DVB-Elhajjar}. Although these types of traffic can be delivered by unicast\footnote{We present a formal definition of unicast and multicast transmissions in Section II. B.} transmission, theoretically it is more efficient to employ multicast transmission\footnote{For the sake of brevity, in this paper we refer to physical layer multicasting as multicasting.} \cite{sidiropoulos2006transmit} and therefore it has been considered in different releases of the 3rd Generation Partnership Project \cite{lecompte2012evolved}.
Multicasting can be performed in two different ways, either with the blind isotropic transmission as in digital video broadcasting \cite{DVB-GFaria,DVB-Elhajjar} or by downlink precoding based on channel state information (CSI) \cite{sidiropoulos2006transmit,karipidis2008quality}. As detailed in \cite{karipidis2008quality}, the latter approach is more desirable for wireless systems. In this paper by multicasting we refer to this second approach, where the multi-antenna transmitter employs its CSI to perform precoding such that a desired metric of interest is optimized \cite{sidiropoulos2006transmit,karipidis2008quality}. A seminal study of multicasting is presented in \cite{sidiropoulos2006transmit}, where the precoder design for the so-called max-min fairness (MMF) and quality of service (QoS) problems is investigated. Considering a single-group single-cell system, it is shown that both MMF and QoS problems are NP-hard and a suboptimal solution is presented. This work is then extended to a multi-group single-cell scenario and it is shown that there exists a duality between the MMF and QoS problems \cite{karipidis2008quality}. The MMF problem is then revisited under per-antenna power constraint for multi-group single-cell systems in \cite{christopoulos2014weighted}. Also, the coordinated multicasting transmission for a single-group multi-cell scenario is investigated in \cite{xiang2013coordinated}. Note that \cite{sidiropoulos2006transmit,karipidis2008quality,christopoulos2014weighted,xiang2013coordinated} assume perfect CSI is available at the base station (BS) and also at the user terminals (UTs).
The aforementioned works (among many others) are based on the semidefinite relaxation (SDR) technique and suffer from high computational complexity. Considering a multicasting system with an $N$-antenna BS and $G$ different multicasting groups, the complexity of SDR based techniques is of order $\mathcal{O}(G^{3.5}N^{6.5})$ \cite{karipidis2008quality}. This high complexity makes the SDR based multicasting algorithms impractical for large-dimensional systems, e.g., massive MIMO systems, which deploy hundreds of antennas \cite{marzetta2010noncooperative}.
Due to the significant performance of massive MIMO in terms of energy and spectral efficiency \cite{hoydis2013massive,ngo2013energy,LSAPowerNorm}, it is a promising candidate for the fifth generation of cellular networks \cite{andrews2014will,boccardi2014five}. Therefore recent works on multicasting have tried to address the high computational complexity of massive MIMO multicasting \cite{tran2014conic,christopoulos2015multicast,MeysamMultiComplexity}. Particularly, \cite{tran2014conic} presents a successive convex approximation technique for single-group single-cell multicasting of large-scale antenna arrays which reduces the computational complexity to $\mathcal{O}(N^{3.5})$. The system set-up of \cite{tran2014conic} is extended to a multi-group single-cell multicasting in \cite{christopoulos2015multicast}. Therein a feasible point pursuit based algorithm with a complexity of $\mathcal{O}((GN)^{3.5})$ is presented. However, the complexity is still high for large-scale antenna systems with hundreds of antennas. Recently a low-complexity algorithm, $\mathcal{O}(N)$ for single-group and $\mathcal{O}(GN^{2})$ for multi-group multicasting, for massive MIMO systems is presented in \cite{MeysamMultiComplexity}. This algorithm not only reduces the complexity but also significantly outperforms the SDR based methods.
The common denominator of the aforementioned algorithms is the perfect CSI assumption, both at the BS and at the UTs. However, in practice the CSI is not available neither at the BS nor at the UTs, and should be obtained. This introduces new challenges to the multicasting problem, which is already NP-hard. To address the CSI acquisition problem, two approaches have been presented in the literature. The first approach leverages the asymptotic orthogonality of the channels in massive MIMO, which simplifies the precoding design \cite{Zhengzheng2014,zhou2015joint,sadeghi2015multi}. The main problem with the asymptotic approach is that a very large number of antennas, e.g., $N>4000$, is required to get close to the asymptotic performance, while the performance is poor for realistic antenna numbers \cite{zhou2015joint,sadeghi2015multi}.
The second approach relies on employing predefined multicasting precoders \cite{YangMulticat}. More precisely, considering a single-cell multi-group multicasting system, \cite{YangMulticat} presents a maximum ratio transmission (MRT) based multicast precoder with a novel pilot allocation strategy. Contrary to the common approach where a dedicated pilot is used per UT, it uses a shared pilot for all the UTs within each multicasting group, hereafter called co-pilot assignment. They show numerically that MRT multicasting with co-pilot assignment substantially outperforms the MRT unicasting with dedicated pilot assignment in terms of minimum spectral efficiency (SE).
The improvement in the SE of multicast transmission shown by \cite{YangMulticat} has motivated the application of co-pilot assignment in subsequent works \cite{Zhengzheng2014,zhou2015joint,sadeghi2015multi}. However, since this improved SE was observed only through a numerical comparison of MRT multicast transmission with co-pilot assignment against MRT unicast transmission with dedicated pilot assignment, a series of questions remains to be answered:
\begin{itemize}
\item Does the same observation hold for zero forcing (ZF)?
\item When is it beneficial to employ co-pilot assignment instead of dedicated pilot assignment?
\item Given a set of system parameters, which precoder and pilot assignment shall be used?
\end{itemize}
To answer these questions, we study six different possible scenarios as shown in Fig. \ref{figint}. The first layer of Fig. \ref{figint} considers the two possible transmission technologies, unicast (un) and multicast (mu). The second layer considers the employed pilot assignment strategy\footnote{Note that for unicast we just consider dedicated pilot assignment as the co-pilot assignment results in very weak performance due to high inter-group interference and extreme pilot contamination.}, i.e., dedicated pilot (dp) or co-pilot (cp). The third layer determines the precoding scheme, which is either MRT or ZF. Then the six considered scenarios are: MRT-undp, ZF-undp, MRT-mudp, ZF-mudp, MRT-mucp, and ZF-mucp.\footnote{As an example note that MRT-mucp means MRT multicasting with co-pilot assignment.}
\begin{figure}[]
\centering
\includegraphics[width=1\columnwidth, trim={0.25cm 2.9cm 0.2cm 2.8cm},clip]{FigIntroductionRev.pdf}
\caption{The six considered scenarios in this paper.}
\label{figint}
\end{figure}
In this paper, we answer the aforementioned questions while considering a multi-group massive MIMO multicasting system with realistic CSI acquisition. Our main contributions are as follows:
\begin{itemize}
\item We derive achievable SEs for each UT in the system considering the set-ups depicted in Fig. \ref{figint}.
\item We formulate the MMF problem for each of the six scenarios in Fig. \ref{figint}. For an arbitrary pilot length, we find 1) the optimal uplink pilots powers; 2) the optimal downlink data transmission powers; and 3) the optimal SE for each UT in the system, all in closed-forms.
\item Based on our analytical and numerical results, we draw a guideline for massive MIMO multicasting design. More precisely, given the number of BS antennas, the number of UTs, and the length of coherence interval, we determine the multicasting scheme that shall be used.
\end{itemize}
The remainder of this paper is organized as follows. Section II introduces the system model, the channel estimation, and elaborates the unicast and multicast transmissions. Section III presents the precoding schemes and their associated achievable SEs. Section IV studies the MMF problem for all set-ups of Fig. \ref{figint}. Section V presents the numerical analysis and further detailed discussions. Section VI summarizes the paper and presents the main conclusions.
\textit{Notations:} The following notation is used throughout the paper. Scalars are denoted by lower
case letters whereas boldface lower (upper) case letters are used for vectors (matrices). We
denote by $\mathbf{I}_{G}$ the identity matrix of size $G$ and represent the $j$th column of $\mathbf{I}_{G}$ as $\mathbf{e}_{j,G}$. The symbol $\mathcal{CN} (.,.)$ denotes the circularly symmetric complex Gaussian distribution. The trace, transpose, conjugate transpose, and expectation operators are denoted by $\mathrm{tr}(.)$, $(.)^{T}$ , $(.)^{H}$, and $\mathbb{E}[.]$, respectively. We denote the cardinality of a set $\mathcal{G}$ by $\vert \mathcal{G} \vert$.
\section{System and Signal Model}
We consider multi-group multicasting in a single-cell massive MIMO system. We assume the system has one BS with $N$ antennas and it transmits $G$ data streams toward $G$ multicasting groups. We denote the set of indices of these $G$ multicasting groups as $\mathcal{G}$, i.e., $\mathcal{G} = \{ 1, \ldots , G \}$. We assume the $j$th data stream, $j \in \{1,\ldots,G\}$, is of interest for $K_{j}$ single antenna UTs, and we say these $K_{j}$ UTs belong to the $j$th multicasting group. We denote the set of indices of all the UTs in $j$th multicasting group as $\mathcal{K}_{j}$, i.e. $\mathcal{K}_{j} = \{ 1, \ldots , K_{j} \}$. Therefore $\vert \mathcal{G} \vert = G$ and $\vert \mathcal{K}_{j} \vert = K_{j}$. We assume each UT is assigned to just one multicasting group, i.e. $\mathcal{K}_{i} \cap \mathcal{K}_{j} =\emptyset \; \forall i,j \in \mathcal{G}, i \neq j$. We denote the total number of UTs in the system as $K_{tot} = \sum_{j=1}^{G} K_{j}$.
We consider a block flat-fading channel model where $C_{B}$ (in Hz) is the coherence bandwidth and $C_{T}$ (in seconds) is the coherence time. Hence the channels are static within a coherence interval of $T=C_{B} C_{T}$ symbols. We assume the BS does not have a priori CSI but estimates the channels by uplink pilot transmission using a TDD protocol, exploiting channel reciprocity. The procedure is detailed next. Under these assumptions, we represent the channel between the BS and UT $k$ in multicasting group $j$ as $\mathbf{g}_{jk}$. We assume all the UTs have independent Rayleigh fading channels, as it matches non-line-of-sight measurements well \cite{XGaoMeasurment}. This implies that $\mathbf{g}_{jk} \sim \mathcal{CN}(\mathbf{0}, \beta_{jk} \mathbf{I}_{N}) \; \forall k, j$, where $\beta_{jk}$ represents the large-scale fading.
\subsection{Channel Estimation}
The BS uses uplink pilot transmission to estimate the channels to the UTs in the system. As detailed in Section I, it can be performed either by dedicated pilot assignment \cite{ngo2013energy,hoydis2013massive}, or by co-pilot assignment \cite{YangMulticat}. The dedicated pilot approach sacrifices more resources, e.g., time-frequency slots in each coherence interval, to achieve a better estimation of the channel of each UT in the system. On the other hand, the co-pilot approach enforces deliberate pilot contamination among UTs of each multicasting group in order to reduce the consumed time-frequency resources. In the sequel we elaborate the channel estimation under each of these scenarios.
\subsubsection{Channel Estimation with Dedicated Pilot Assignment}
The dedicated pilot assignment uses one pilot per UT, so it requires $K_{tot}$ pilots per coherence interval. Denoting the pilot length as $\tau_{p}^{dp}$, to have orthogonal pilots we need $\tau_{p}^{dp} \geq K_{tot}$. Under dedicated pilot assignment, the minimum mean-square error (MMSE) estimate of the channel of UT $k$ in group $j$ is
\begin{align}
\label{est_gjk_dp}
\hat{\mathbf{g}}_{jk}^{dp} = \dfrac{\sqrt{\tau_{p}^{dp} p_{jk}^{u}} \beta_{jk} }{1 + \tau_{p}^{dp} p_{jk}^{u} \beta_{jk}} \left( \sqrt{\tau_{p}^{dp} p_{jk}^{u}} \mathbf{g}_{jk} + \mathbf{n} \right)
\end{align}
where $ \mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_{N})$ is the normalized additive noise and $p_{jk}^{u}$ is the uplink pilot power of UT $k$ in group $j$. Therefore we have $\hat{\mathbf{g}}_{jk}^{dp} \sim \mathcal{CN} ( \mathbf{0}, \gamma_{jk}^{dp} \mathbf{I}_{N})$ with $\gamma_{jk}^{dp} = \dfrac{\tau_{p}^{dp} p_{jk}^{u} \beta_{jk}^{2} }{1 + \tau_{p}^{dp} p_{jk}^{u} \beta_{jk}} $. Also the estimation error is $\tilde{\mathbf{g}}_{jk}^{dp} = \hat{\mathbf{g}}_{jk}^{dp} - \mathbf{g}_{jk} \sim \mathcal{CN}(\mathbf{0}, (\beta_{jk} - \gamma_{jk}^{dp}) \mathbf{I}_{N}) $. Moreover, we denote the $N \times K_{tot}$ matrix obtained by stacking the estimated channel of all the UTs in the system as $\hat{\mathbf{G}}_{dp} = [\hat{\mathbf{G}}_{1},\ldots,\hat{\mathbf{G}}_{G}]$, where $ \hat{\mathbf{G}}_{j} = [\hat{\mathbf{g}}_{j1}^{dp},\ldots,\hat{\mathbf{g}}_{jK_{j}}^{dp}] \; \forall j \in \mathcal{G}$.
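A short Monte Carlo sketch (with illustrative parameter values of our own, not from the paper) confirms the stated statistics of the estimate in \eqref{est_gjk_dp}: a per-antenna variance of $\gamma_{jk}^{dp}$, an error variance of $\beta_{jk}-\gamma_{jk}^{dp}$, and the MMSE orthogonality between estimate and error:

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta, tau_p, p_u, trials = 8, 2.0, 10, 0.5, 50_000

def cn(shape, var):  # circularly symmetric complex Gaussian samples
    return np.sqrt(var / 2) * (rng.standard_normal(shape)
                               + 1j * rng.standard_normal(shape))

g = cn((trials, N), beta)                               # true channels
y = np.sqrt(tau_p * p_u) * g + cn((trials, N), 1.0)     # de-spread pilot signal
c = np.sqrt(tau_p * p_u) * beta / (1 + tau_p * p_u * beta)
g_hat = c * y                                           # MMSE estimate
err = g_hat - g                                         # estimation error

gamma = tau_p * p_u * beta**2 / (1 + tau_p * p_u * beta)
assert abs(np.mean(np.abs(g_hat)**2) - gamma) < 0.05 * gamma
assert abs(np.mean(np.abs(err)**2) - (beta - gamma)) < 0.05 * (beta - gamma)
assert abs(np.mean(g_hat * np.conj(err))) < 0.01        # MMSE orthogonality
```

The orthogonality of $\hat{\mathbf{g}}_{jk}^{dp}$ and $\tilde{\mathbf{g}}_{jk}^{dp}$ is what makes the two jointly Gaussian vectors independent, which is used implicitly in the SE bounds later on.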
\subsubsection{Channel Estimation with Co-pilot Assignment}
The co-pilot assignment uses one pilot per multicast group, so it requires $G$ pilots per coherence interval. Denoting the pilot length as $\tau_{p}^{cp}$, to have orthogonal pilots we need $\tau_{p}^{cp} \geq G$. Under co-pilot assignment the MMSE estimate of the channel of UT $k$ in multicasting group $j$ is
\begin{align}
\label{est_gjk_cp}
\hat{\mathbf{g}}_{jk}^{cp} = \dfrac{\sqrt{\tau_{p}^{cp} p_{jk}^{u}} \beta_{jk}}{ 1 + \tau_{p}^{cp} \sum_{m=1}^{K_{j}} p_{jm}^{u} \beta_{jm} } \left( \sum_{m=1}^{K_j} \sqrt{\tau_{p}^{cp} p_{jm}^{u}} \mathbf{g}_{jm} + \mathbf{n} \right)
\end{align}
where $\hat{\mathbf{g}}_{jk}^{cp} \sim \mathcal{CN}(\mathbf{0}, \gamma_{jk}^{cp} \mathbf{I}_{N})$ with $\gamma_{jk}^{cp} = \dfrac{\tau_{p}^{cp} p_{jk}^{u} \beta_{jk}^{2}}{ 1 + \tau_{p}^{cp} \sum_{m=1}^{K_{j}} p_{jm}^{u} \beta_{jm}}$. From \eqref{est_gjk_cp} it is easy to observe that the channel estimate of each UT is contaminated by other UTs in its multicasting group. The estimation error of $\mathbf{g}_{jk}$ is $\tilde{\mathbf{g}}_{jk}^{cp} = \hat{\mathbf{g}}_{jk}^{cp} - \mathbf{g}_{jk} \sim \mathcal{CN}(\mathbf{0}, (\beta_{jk} - \gamma_{jk}^{cp}) \mathbf{I}_{N}) $. Moreover, we need the estimation of a linear combination of the channels of all the UTs within this multicasting group, which we denote as $\mathbf{g}_{j} = \sum_{k=1}^{K_j} \sqrt{\tau_{p}^{cp} p_{jk}^{u}} \mathbf{g}_{jk}$.\footnote{Note that for $K_{j}=1$, $\mathbf{g}_{j} = \sqrt{\tau_{p}^{cp} p_{jk}^{u}} \mathbf{g}_{jk}$.} Its MMSE estimate is
\begin{align}
\label{estgj}
\hat{\mathbf{g}}_{j} = \dfrac{ \tau_{p}^{cp} \sum_{k=1}^{K_{j}} p_{jk}^{u} \beta_{jk}}{1 + \tau_{p}^{cp} \sum_{k=1}^{K_{j}} p_{jk}^{u} \beta_{jk}} \left( \sum_{k=1}^{K_j} \sqrt{\tau_{p}^{cp} p_{jk}^{u}} \mathbf{g}_{jk} + \mathbf{n} \right)
\end{align}
and we have $\hat{\mathbf{g}}_{j} \sim \mathcal{CN}(\mathbf{0}, \gamma_{j} \mathbf{I}_{N})$ with $\gamma_{j} = \dfrac{ (\tau_{p}^{cp} \sum_{k=1}^{K_{j}} p_{jk}^{u} \beta_{jk})^2}{1 + \tau_{p}^{cp} \sum_{k=1}^{K_{j}} p_{jk}^{u} \beta_{jk}}$. Also we denote the $N \times G$ matrix obtained by stacking the vectors $\hat{\mathbf{g}}_{j}$ $\forall j \in \mathcal{G}$ as $\hat{\mathbf{G}}_{cp} = [\hat{\mathbf{g}}_{1},\ldots,\hat{\mathbf{g}}_{{G}}]$.
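The deliberate contamination in \eqref{est_gjk_cp} can be made visible numerically: all channel estimates within a group are scalar multiples of the same de-spread pilot observation and hence parallel. The minimal sketch below uses two UTs in one group with illustrative parameters of our own:

```python
import numpy as np

rng = np.random.default_rng(2)
N, tau_p = 16, 4
betas = np.array([1.0, 3.0])            # two UTs sharing one group pilot
p_u = np.array([0.5, 0.2])              # their uplink pilot powers

def cn(shape, var=1.0):
    return np.sqrt(var / 2) * (rng.standard_normal(shape)
                               + 1j * rng.standard_normal(shape))

G = np.stack([cn(N, b) for b in betas])              # true channels (2 x N)
y = np.sqrt(tau_p * p_u) @ G + cn(N)                 # shared de-spread pilot
coeff = np.sqrt(tau_p * p_u) * betas / (1 + tau_p * np.sum(p_u * betas))
g_hat = np.outer(coeff, y)                           # MMSE estimates per UT

# pilot contamination: both estimates point in the same direction
ratio = g_hat[0] / g_hat[1]
assert np.allclose(ratio, ratio[0])
```

This parallelism is exactly why co-pilot assignment can only support multicast-style precoding within a group: the BS cannot spatially separate UTs whose estimates span a single direction.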
\subsection{Transmission Mode: Unicast versus Multicast}
As motivated in Section I, we want to understand when it is beneficial to employ multicast transmission instead of unicast transmission. Therefore we consider both unicast and multicast transmissions in the sequel. Let us denote by $s_{i} \sim \mathcal{CN}(0,1) \; \forall i \in \mathcal{G}$ the signal requested by the UTs in the $i$th multicasting group, i.e., $\mathcal{K}_{i}$. We assume $s_{i}$ is independent across $i$. We stack them in a vector $\mathbf{s} = [s_{1}, \ldots, s_{G}]^{T}$.
\subsubsection{Unicast Transmission}
In unicast transmission we consider a $K_{tot} \times 1$ data vector $\mathbf{x}$ where
\begin{align}
\mathbf{x} = [\underbrace{s_{1},\ldots,s_{1}}_{K_{1}}, \underbrace{s_{2},\ldots,s_{2}}_{K_{2}}, \ldots, \underbrace{s_{G},\ldots,s_{G}}_{K_{G}}]^{T}.
\end{align}
Also the precoding matrix is an $N \times K_{tot}$ matrix $\mathbf{W}_{un} = [\mathbf{w}_{11},\ldots,\mathbf{w}_{jk}, \ldots, \mathbf{w}_{GK_{G}}]$, where $\mathbf{w}_{jk}$ is the precoding vector of UT $k$ in multicasting group $j$. We will provide more details on the exact structure of the precoding vectors in Section III. The received signal of UT $k$ in multicasting group $j$ during downlink transmission is
\begin{align}
\label{unicasttransmission}
y_{jk} = \mathbf{g}_{jk}^{H} \mathbf{W}_{un} \mathbf{x} + n = \mathbf{g}_{jk}^{H} \sum_{i=1}^{G} \sum_{t=1}^{K_{i}} \mathbf{w}_{it} s_{i} + n
\end{align}
where $n \sim \mathcal{CN}(0, 1)$ is the normalized noise.
\subsubsection{Multicast Transmission}
In the multicast case we use $\mathbf{s}$ as the data vector. Also the precoding matrix becomes an $N \times G$ matrix $\mathbf{W}_{mu} = [\mathbf{w}_{1},\ldots, \mathbf{w}_{G}]$ where $\mathbf{w}_{j}$ is the joint precoding vector of all the UTs in $j$th multicasting group. In this case, the received signal of UT $k$ in multicasting group $j$ is
\begin{align}
\label{multicasttransmission}
y_{jk} = \mathbf{g}_{jk}^{H} \mathbf{W}_{mu} \mathbf{s} + n = \mathbf{g}_{jk}^{H} \sum_{i=1}^{G} \mathbf{w}_{i} s_{i} + n.
\end{align}
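The two signal models \eqref{unicasttransmission} and \eqref{multicasttransmission} are closely linked: unicasting a common symbol with per-UT precoders $\mathbf{w}_{it}$ produces the same transmit vector as multicasting with the summed precoder $\mathbf{w}_i=\sum_{t=1}^{K_i}\mathbf{w}_{it}$. A minimal sketch with random illustrative precoders (our own construction, not a precoder from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 6, [2, 3]                       # G = 2 groups with K_1, K_2 UTs
s = (rng.standard_normal(len(K)) + 1j * rng.standard_normal(len(K))) / np.sqrt(2)

W_un = (rng.standard_normal((N, sum(K)))
        + 1j * rng.standard_normal((N, sum(K))))   # one column per UT
x = np.repeat(s, K)                    # each group's symbol repeated K_i times
y_un = W_un @ x                        # BS output in the unicast form

# multicast with w_i equal to the sum of group i's unicast columns
offsets = np.cumsum([0] + K)
W_mu = np.stack([W_un[:, offsets[i]:offsets[i + 1]].sum(axis=1)
                 for i in range(len(K))], axis=1)
y_mu = W_mu @ s                        # identical transmit vector
assert np.allclose(y_un, y_mu)
```

The difference between the two modes therefore lies not in the signal model but in how the precoders are designed and, crucially, in how the CSI for them is acquired.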
\section{Precoder Structures and Achievable SEs}
It is well known that in massive MIMO systems linear precoding schemes provide close-to-optimal performance \cite{Emil10Myth}. Also it has been shown that the asymptotically optimal precoders in massive MIMO multicasting are linear combinations of the channels \cite{sadeghi2015multi,Zhengzheng2014}. Therefore, in the sequel we consider two common linear precoding schemes in the context of massive MIMO systems, namely MRT and ZF \cite{yang2013performance}, and derive the achievable SE for them.
\subsection{Precoder Structure and Achievable SE for Unicast Transmission}
Consider the $N \times K_{tot}$ precoding matrix $\mathbf{W}_{un} = [\mathbf{w}_{11},\ldots,\mathbf{w}_{jk}, \ldots, \mathbf{w}_{GK_{G}}]$ for unicast transmission with dedicated pilots. Then the MRT and ZF precoding vectors of UT $k$ in group $j$ are
\begin{align}
\label{MRTUNDP}
& \mathbf{w}_{jk}^{\rm{MRT-undp}} = \sqrt{\dfrac{p_{jk}^{dl}}{N \gamma_{jk}^{dp}}} \; \hat{\mathbf{g}}_{jk}^{dp}
\\
\label{ZFUNDP}
& \mathbf{w}_{jk}^{\rm{ZF-undp}} = \sqrt{p_{jk}^{dl} \gamma_{jk}^{dp} ( N-K_{tot})} \;\; \hat{\mathbf{G}}_{dp} (\hat{\mathbf{G}}^{H}_{dp} \hat{\mathbf{G}}_{dp})^{-1} \mathbf{e}_{\nu_{jk},K_{tot}}
\end{align}
where $\nu_{jk}=\sum_{t=1}^{j-1} K_{t} + k$, $\mathbf{e}_{\nu_{jk},K_{tot}}$ is the $\nu_{jk}$th column of $\mathbf{I}_{K_{tot}}$, and $p_{jk}^{dl}$ is the downlink power allocated to this user. Note that for $\mathbf{w}_{jk}^{\rm{MRT-undp}}$ and $\mathbf{w}_{jk}^{\rm{ZF-undp}}$, we have $\mathbb{E}[\Vert \mathbf{w}_{jk}^{\rm{MRT-undp}} \Vert^{2}] = p_{jk}^{dl}$ and $\mathbb{E}[\Vert \mathbf{w}_{jk}^{\rm{ZF-undp}} \Vert^{2}] = p_{jk}^{dl}$. We denote the total utilized downlink power as $P_{dp}=\sum_{j=1}^{G} \sum_{k=1}^{K_{j}} p_{jk}^{dl}$. Given \eqref{MRTUNDP} and \eqref{ZFUNDP}, we can achieve the following SEs for the UTs in the system.
\begin{proposition} \label{prop1}
With MRT unicast transmission and dedicated pilot assignment, an achievable SE for user $k$ of group $i$ is
\begin{align}
\label{se_mrt_undp}
\mathrm{SE}_{ik}^{\mathrm{MRT-undp}} = \left( 1 - \dfrac{\tau_{p}^{dp}}{T}\right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}})
\end{align}
where $\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}} = \dfrac{N \gamma_{ik}^{dp} p_{ik}^{dl} }{ 1 + \beta_{ik} P_{dp}}$ is the effective SINR of this user.
\end{proposition}
\begin{proof}
The proof follows the conventional bounding\footnote{In Propositions 1 and 2, the achievable SE is obtained by employing the use and then forget (UatF) bounding technique\cite{marzetta2016fundamentals,jose2011pilot}. Compared to the classic application of UatF in massive MIMO, here we have a subtle technicality as follows. The interference caused by the transmission to the other UTs in group $i$ is uncorrelated with the effective transmission to user $k$ in group $i$, however the message is the same. Therefore the transmission to the other UTs within a multicast group does not contribute to the desired signal power and act as interference.} technique in \cite{marzetta2016fundamentals} and is omitted for brevity.
\end{proof}
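For concreteness, the closed-form expressions of Proposition \ref{prop1} can be evaluated for a set of illustrative parameter values (our own numbers, not taken from the paper):

```python
import numpy as np

# illustrative single-UT numbers: N antennas, coherence block of T symbols
N, T, tau_p = 100, 200, 10
beta, p_u, p_dl, P_dp = 1.0, 0.1, 0.1, 1.0

gamma = tau_p * p_u * beta**2 / (1 + tau_p * p_u * beta)  # estimation quality
sinr = N * gamma * p_dl / (1 + beta * P_dp)               # Proposition 1 SINR
se = (1 - tau_p / T) * np.log2(1 + sinr)                  # achievable SE

assert abs(gamma - 0.5) < 1e-12   # half the channel power is estimated
assert abs(sinr - 2.5) < 1e-12    # array gain N scales the useful power
assert 1.71 < se < 1.72           # about 1.7 bit/s/Hz after pilot overhead
```

Note how the pre-log factor $1-\tau_p^{dp}/T$ charges the pilot overhead, while the array gain enters linearly through $N\gamma_{ik}^{dp}$.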
\begin{proposition}
With ZF unicast transmission and dedicated pilot assignment, an achievable SE for user $k$ of group $i$ is
\begin{align}
\label{se_zf_undp}
\mathrm{SE}_{ik}^{\mathrm{ZF-undp}} = \left( 1 - \dfrac{\tau_{p}^{dp}}{T} \right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{ZF-undp}})
\end{align}
where $\mathrm{SINR}_{ik}^{\mathrm{ZF-undp}} = \dfrac{(N-K_{tot}) \gamma_{ik}^{dp} p_{ik}^{dl} }{1 + (\beta_{ik} - \gamma_{ik}^{dp}) P_{dp}}$ is the effective SINR of this user.
\end{proposition}
\begin{proof}
The proof follows the conventional bounding technique in \cite{marzetta2016fundamentals} and is omitted for brevity.
\end{proof}
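The defining property behind \eqref{ZFUNDP}, namely that each UT's estimated channel is orthogonal to the precoding directions of all other UTs, can be verified directly on the un-normalized directions $\hat{\mathbf{G}}_{dp}(\hat{\mathbf{G}}^{H}_{dp} \hat{\mathbf{G}}_{dp})^{-1}$ (dimensions below are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
N, Ktot = 12, 5
G_hat = (rng.standard_normal((N, Ktot))
         + 1j * rng.standard_normal((N, Ktot))) / np.sqrt(2)  # estimated channels

# un-normalized ZF directions: columns of G_hat (G_hat^H G_hat)^{-1}
W = G_hat @ np.linalg.inv(G_hat.conj().T @ G_hat)
cross = G_hat.conj().T @ W
assert np.allclose(cross, np.eye(Ktot))  # each estimate sees only its own stream
```

Residual interference in \eqref{se_zf_undp} therefore stems only from the estimation errors $\tilde{\mathbf{g}}_{ik}^{dp}$, which is why the denominator contains $\beta_{ik}-\gamma_{ik}^{dp}$ rather than $\beta_{ik}$.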
\subsection{Precoder Structure and Achievable SEs for Multicast Transmission}
As detailed in Section II.A, the required CSI for multicast transmission can be achieved either by dedicated pilot assignment or by co-pilot assignment. In the sequel we present the precoder structure and achievable SEs for both cases.
\subsubsection{Precoder Structure and Achievable SE for Multicast Transmission with Dedicated Pilot Assignment}
If dedicated pilot assignment is employed then the MRT and ZF precoding vectors of $j$th multicast group are
\begin{align}
\label{MRTMUDP}
& \mathbf{w}_{j}^{\rm{MRT-mudp}} = \sum_{k=1}^{K_{j}} \sqrt{\dfrac{p_{jk}^{dl}}{N \gamma_{jk}^{dp}}} \; \hat{\mathbf{g}}_{jk}^{dp}
\\
\label{ZFMUDP}
& \mathbf{w}_{j}^{\rm{ZF-mudp}} = (\mathbf{I}_{N} - \hat{\mathbf{G}}_{-j} (\hat{\mathbf{G}}_{-j}^{H} \hat{\mathbf{G}}_{-j})^{-1} \hat{\mathbf{G}}_{-j}^{H} ) \sum_{k=1}^{K_{j}} \sqrt{\mu_{jk}} \hat{\mathbf{g}}_{jk}^{dp}
\end{align}
where $p_{jk}^{dl}$ is the downlink power of UT $k$ in group $j$, $\hat{\mathbf{G}}_{-j} = [\hat{\mathbf{G}}_{1}, \ldots,\hat{\mathbf{G}}_{j-1},\hat{\mathbf{G}}_{j+1},\ldots,\hat{\mathbf{G}}_{G}]$, and $\mu_{jk} = \dfrac{p_{jk}^{dl}}{(N - \nu_{j}) \gamma_{jk}^{dp}}$ with $\nu_{j} = K_{tot} - K_{j}$. For $\mathbf{w}_{j}^{\rm{MRT-mudp}}$ and $\mathbf{w}_{j}^{\rm{ZF-mudp}}$ we have $\mathbb{E}[\Vert \mathbf{w}_{j}^{\rm{MRT-mudp}} \Vert^{2}] = \sum_{k=1}^{K_{j}} p_{jk}^{dl}$ and $\mathbb{E}[\Vert \mathbf{w}_{j}^{\rm{ZF-mudp}} \Vert^{2}] = \sum_{k=1}^{K_{j}} p_{jk}^{dl}$.
Note that there is a subtle difference between ZF-undp and ZF-mudp. The ZF-undp scheme ensures that (within the limitations of channel estimation errors) any UT is immune to the transmissions intended for all other UTs, both in its own multicasting group and in the other multicasting groups; therefore it requires $N \geq K_{tot}$. ZF-mudp, in contrast, only renders the UTs within each multicasting group immune (within the limitations of channel estimation errors) to the transmissions intended for the other multicasting groups, so every UT still experiences intra-group interference from the transmissions intended for the other UTs in its own group. Hence it requires $N \geq (K_{tot} - \min_{j \in \mathcal{G}} K_{j})$.
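The projection structure of the ZF-mudp precoder in \eqref{ZFMUDP} can be illustrated numerically. The following Python sketch is not part of the paper's derivations: the dimensions, powers, and the unit-variance random stand-ins for the channel estimates are all assumptions. It builds the projection onto the orthogonal complement of the other groups' estimated channels and checks that the resulting precoder leaks (numerically) no power toward them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, G = 64, 4, 3                 # antennas, UTs per group, groups (assumed)
# i.i.d. CN(0,1) stand-ins for the estimated channels of each group
G_hat = [(rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
         / np.sqrt(2) for _ in range(G)]

def zf_mudp_precoder(j, p_dl):
    """ZF-mudp precoder of group j for per-UT downlink powers p_dl."""
    G_minus_j = np.hstack([G_hat[g] for g in range(G) if g != j])
    # projection onto the orthogonal complement of span(G_minus_j)
    P_perp = np.eye(N) - G_minus_j @ np.linalg.solve(
        G_minus_j.conj().T @ G_minus_j, G_minus_j.conj().T)
    nu_j = G_minus_j.shape[1]      # nu_j = K_tot - K_j nulled directions
    mu = p_dl / (N - nu_j)         # gamma taken as 1 for unit-variance stand-ins
    return P_perp @ (G_hat[j] @ np.sqrt(mu))

w0 = zf_mudp_precoder(0, np.ones(K))
# the precoder is (numerically) orthogonal to the other groups' estimates
leakage = np.abs(np.hstack([G_hat[1], G_hat[2]]).conj().T @ w0).max()
print(leakage)
```

The intra-group interference noted above is visible here as well: within group $j$, the components of the sum are not orthogonal to each other, only the inter-group directions are nulled.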
\begin{remark}
\label{remZFmudp}
Notice that \eqref{ZFMUDP} is a generalized version of the precoder proposed in \cite{MeysamMultiComplexity}, since it accounts for imperfect CSI. As the precoder presented in \cite{MeysamMultiComplexity} outperforms the SDR-based multicasting schemes, this generalization serves as a benchmark and enables us to indirectly compare our proposed methods with the SDR-based algorithms. This is of particular interest, as the SDR-based algorithms, which assume that perfect CSI is available at both the BS and the UTs, are the baseline schemes used in the literature \cite{sidiropoulos2006transmit,karipidis2008quality,christopoulos2014weighted,xiang2013coordinated}.
\end{remark}
Given \eqref{MRTMUDP} and \eqref{ZFMUDP}, we can achieve the following SEs.
\begin{theorem}
\label{T-MRT-mudp}
With MRT multicast transmission and dedicated pilot assignment, an achievable SE for user $k$ of group $i$ is
\begin{align}
\label{se_mrt_mudp}
\mathrm{SE}_{ik}^{\mathrm{MRT-mudp}} = \left( 1 - \dfrac{\tau_{p}^{dp}}{T}\right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{MRT-mudp}})
\end{align}
where $\mathrm{SINR}_{ik}^{\mathrm{MRT-mudp}} = \dfrac{N \gamma_{ik}^{dp} p_{ik}^{dl}}{1 + \beta_{ik} P_{dp}}$ is the effective SINR of this user.
\end{theorem}
\begin{proof}
The proof follows by showing that when there is a common message for all the UTs in each multicasting group, MRT-mudp is equivalent to MRT-undp:
\begin{align*}
\mathbf{W}_{un} \mathbf{x} = \sum_{j=1}^{G} \sum_{k=1}^{K_{j}} \mathbf{w}_{jk}^{\mathrm{MRT-undp}} s_{j} = \sum_{j=1}^{G} \mathbf{w}_{j}^{\mathrm{MRT-mudp}} s_{j} = \mathbf{W}_{mu} \mathbf{s}.
\end{align*}
Hence the SINR and SE are the same as in Proposition \ref{prop1}.
\end{proof}
\begin{theorem}
\label{TZFmudp}
With ZF multicast transmission and dedicated pilot assignment, an achievable SE for user $k$ of group $i$ is
\begin{align}
\label{se_zf_mudp}
\mathrm{SE}_{ik}^{\mathrm{ZF-mudp}} = \left( 1 - \dfrac{\tau_{p}^{dp}}{T}\right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}})
\end{align}
where $\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}} = \dfrac{(N - \nu_{i}) \gamma_{ik}^{dp} p_{ik}^{dl}}{1+ \gamma_{ik}^{dp} \sum_{m=1}^{K_{i}} p_{im}^{dl} + (\beta_{ik} - \gamma_{ik}^{dp}) P_{dp}}$ is the effective SINR of this user.
\end{theorem}
\begin{proof}
The proof is given in Appendix A.
\end{proof}
\subsubsection{Precoder Structure for Multicast Transmission with Co-pilot Assignment}
If co-pilot assignment is utilized, then the MRT and ZF precoding vectors of the $j$th multicast group are
\begin{align}
\label{MRTMUCP}
& \mathbf{w}_{j}^{\rm{MRT-mucp}} = \sqrt{\dfrac{p_{j}^{dl}}{N \gamma_{j}}} \; \hat{\mathbf{g}}_{j}
\\
\label{ZFMUCP}
& \mathbf{w}_{j}^{\rm{ZF-mucp}} = \sqrt{p_{j}^{dl} \gamma_{j} ( N-G)} \;\; \hat{\mathbf{G}}_{cp} (\hat{\mathbf{G}}_{cp}^{H} \hat{\mathbf{G}}_{cp})^{-1} \mathbf{e}_{j,G}
\end{align}
where $p_{j}^{dl}$ is the downlink power of the precoding vector of group $j$. Note that for $\mathbf{w}_{j}^{\rm{MRT-mucp}} $ and $\mathbf{w}_{j}^{\rm{ZF-mucp}}$ we have $\mathbb{E}[\Vert \mathbf{w}_{j}^{\rm{MRT-mucp}} \Vert^{2}] = p_{j}^{dl}$ and $\mathbb{E}[\Vert \mathbf{w}_{j}^{\rm{ZF-mucp}} \Vert^{2}] = p_{j}^{dl}$. We denote the utilized downlink power as $P_{cp} = \sum_{j=1}^{G} p_{j}^{dl}$. By using MRT as in \eqref{MRTMUCP}, it has been shown that the following achievable SE for user $k$ of group $i$ can be obtained \cite{YangMulticat}
\begin{align}
\label{se_mrt_mucp}
\mathrm{SE}_{ik}^{\mathrm{MRT-mucp}} = \left( 1 - \dfrac{\tau_{p}^{cp}}{T} \right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}})
\end{align}
where $\label{sinr_mrt_mucp} \mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} = \dfrac{ N \gamma_{ik}^{cp} p_{i}^{dl}}{1+\beta_{ik}P_{cp}}$ is the effective SINR of this user. By using ZF as in \eqref{ZFMUCP}, we can achieve the following SE.
\begin{theorem}
\label{Theorem3}
With ZF multicast transmission and co-pilot assignment, an achievable SE for user $k$ of group $i$ is
\begin{align}
\label{se_zf_mucp}
\mathrm{SE}_{ik}^{\mathrm{ZF-mucp}} = \left( 1 - \dfrac{\tau_{p}^{cp}}{T}\right) \log_{2} (1+\mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}})
\end{align}
where $ \label{sinr_zf_mucp} \mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} = \dfrac{(N-G) \gamma_{ik}^{cp} p_{i}^{dl} }{1+(\beta_{ik} - \gamma_{ik}^{cp})P_{cp}}$ is the effective SINR of this user.
\end{theorem}
\begin{proof}
The proof is given in Appendix B.
\end{proof}
In Theorem \ref{Theorem3} we obtained a simple closed form for the SINR of ZF-mucp, even though the precoder is entirely based on the composite channels, i.e., $\hat{\mathbf{g}}_{j} \; \forall j \in \mathcal{G}$. This is because we took advantage of the fact that $\forall j \in \mathcal{G}, \forall k \in \mathcal{K}_{j}$, $ \hat{\mathbf{g}}_{jk}^{cp}$ and $ \hat{\mathbf{g}}_{j}$ are equal up to a scalar coefficient. Hence ZF-mucp can cancel the inter-group interference, within the limitations of the channel estimation errors, which leads to the obtained simple closed form for the SINR of ZF-mucp. The proof details are given in Appendix B.
\begin{remark}
\label{Rem-MRTtoZF}
Note that when we switch from MRT to ZF in the above scenarios, e.g., from Proposition 1 to Proposition 2, the SINR terms always change in a particular way. The signal power in the numerator reduces by a factor of $\frac{N - \kappa}{N}$, where $\kappa$ depends on the considered scenario. Also the interference in the denominator reduces from $\beta_{ik} P_{dp}$ to $(\beta_{ik} - \gamma_{ik}^{dp}) P_{dp}$ or from $\beta_{ik} P_{cp}$ to $(\beta_{ik} - \gamma_{ik}^{cp}) P_{cp}$. This is due to the fact that ZF uses these $\kappa$ degrees of freedom to cancel the interference toward other UTs at the cost of reducing the received power of each UT.
\end{remark}
\section{Max-Min Fairness Problem}
The MMF problem is the common problem of interest in multicasting systems, where we maximize the minimum of a metric of interest given some constraints on the resources. For the sake of simplicity, the existing works in the literature \cite{sidiropoulos2006transmit,karipidis2008quality,YangMulticat,MeysamMultiComplexity,Zhengzheng2014,christopoulos2015multicast,christopoulos2014weighted,xiang2013coordinated,zhou2015joint,sadeghi2015multi} consider the SINR as the metric of interest and the available power at the BS as the resource constraint, while ignoring CSI acquisition. Here we consider a more general problem formulation for MMF that accounts for the CSI acquisition. We choose the SE as our metric of interest and also we set our resource constraints as 1) the available power at the BS; 2) the uplink training power limit of the UTs; and 3) the length of the pilots. Therefore the MMF problem for dedicated pilot assignment is
\begin{align}
\label{MMF_dp_SE}
\mathcal{P}1: \max_{\tau_{p}^{dp}, \{p_{jk}^{dl}\}, \{p_{jk}^{u}\}} \min_{\forall j \in \mathcal{G}} & \min_{\forall k \in \mathcal{K}_{j}} \quad \; (1 - \frac{\tau_{p}^{dp}}{T}) \log_{2}(1 + \mathrm{SINR}_{jk}^{\mathrm{dp}})
\\
& s.t. \quad \quad p_{jk}^{u} \leq p^{utot}_{jk} \quad \; \forall \; k \in \; \mathcal{K}_{j}, \forall \; j \in \; \mathcal{G} \tag{\ref{MMF_dp_SE}-C1}
\\
\label{poweryek1_SE}
& \quad \quad \quad \; P_{dp} = \sum_{j=1}^{G} \sum_{k=1}^{K_{j}} p_{jk}^{dl} \leq P \tag{\ref{MMF_dp_SE}-C2}
\\
& \; \quad \quad \quad \tau_{p}^{dp} \in \{K_{tot}, \ldots,T \} \tag{\ref{MMF_dp_SE}-C3}
\end{align}
where $p^{utot}_{jk}$ is the maximum pilot power of user $k$ in group $j$, and $P$ is the total available power at the BS. Similarly, the MMF problem for co-pilot assignment is
\begin{align}
\label{MMF_cp_SE}
\mathcal{P}2: \max_{\tau_{p}^{cp},\{p_{j}^{dl}\}, \{p_{jk}^{u}\}} \min_{\forall j \in \mathcal{G}} & \min_{\forall k \in \mathcal{K}_{j}} \quad (1 - \frac{\tau_{p}^{cp}}{T}) \log_{2}(1 + \mathrm{SINR}_{jk}^{\mathrm{cp}})
\\
& s.t. \quad \quad p_{jk}^{u} \leq p^{utot}_{jk} \quad \; \forall \; k \in \; \mathcal{K}_{j}, \forall \; j \in \; \mathcal{G} \tag{\ref{MMF_cp_SE}-C1}
\\
\label{powerdo2_SE}
& \quad \quad \quad \; P_{cp} = \sum_{j=1}^{G} p_{j}^{dl} \leq P \tag{\ref{MMF_cp_SE}-C2}
\\
& \; \quad \quad \quad \tau_{p}^{cp} \in \{G, \ldots ,T \} \tag{\ref{MMF_cp_SE}-C3}.
\end{align}
Note that the constraints \eqref{poweryek1_SE} and \eqref{powerdo2_SE} both stem from the total available power at the BS, but are slightly different. When we use a dedicated pilot per UT, we obtain a dedicated estimate of the channel of each UT. Hence in the downlink we can decide on the amount of power we allocate to the UTs on a per-UT basis, e.g., $p_{jk}^{dl}$. On the other hand, for co-pilot transmission, the channel estimates of all UTs within a multicasting group differ only by a scalar coefficient. Hence we can only allocate the power on a per-group basis, e.g., $p_{j}^{dl}$. It is straightforward to show that for both $\mathcal{P}1$ and $\mathcal{P}2$, the constraints \eqref{poweryek1_SE} and \eqref{powerdo2_SE} should be met with equality. To see this, assume the contrary, e.g., that at the optimal solution of $\mathcal{P}2$ we have $P > P_{cp} = \sum_{j=1}^{G} p_{j}^{dl} $. Then one can increase all the $p_{j}^{dl}$ by a factor of $\frac{P}{P_{cp}}$. This increases each UT's SINR, hence improves the minimum SE of the system, which contradicts the assumed optimality. Consequently, at the optimal solution of $\mathcal{P}2$, $P = P_{cp}$. In the remainder of this section, we find the optimal solutions to $\mathcal{P}1$ and $\mathcal{P}2$ for the six considered scenarios of Fig. \ref{figint}.
To solve $\mathcal{P}1$ and $\mathcal{P}2$, we use a two-step approach. First, we solve them for any arbitrary value of $\tau_{p}^{dp}$ or $\tau_{p}^{cp}$ and determine their optimal solution in closed-form. Second, we find the optimal value of $\tau_{p}^{dp}$ or $\tau_{p}^{cp}$ by searching over the finite discrete set of all the possible values, thanks to the closed-form obtained in the first step. Given an arbitrary $\tau_{p}^{dp}$, as logarithm is a strictly increasing function, $\mathcal{P}1$ can be replaced with a problem $\mathcal{P}^{\prime}1$ as follows
\begin{align}
\label{MMF_dp_SINR}
\mathcal{P}^{\prime}1: \max_{\{p_{jk}^{dl}\}, \{p_{jk}^{u}\}} \min_{\forall j \in \mathcal{G}} & \min_{\forall k \in \mathcal{K}_{j}} \quad \mathrm{SINR}_{jk}^{\mathrm{dp}}
\\
& s.t. \quad \quad \text{\ref{MMF_dp_SE}-C1 and } P_{dp} = P. \notag
\end{align}
Similarly, $\mathcal{P}2$ can be replaced with a problem $\mathcal{P}^{\prime}2$ as follows
\begin{align}
\label{MMF_cp_SINR}
\mathcal{P}^{\prime}2: \max_{\{p_{j}^{dl}\}, \{p_{jk}^{u}\}} \min_{\forall j \in \mathcal{G}} & \min_{\forall k \in \mathcal{K}_{j}} \quad \mathrm{SINR}_{jk}^{\mathrm{cp}}
\\
& s.t. \quad \quad \text{\ref{MMF_cp_SE}-C1 and } P_{cp} = P. \notag
\end{align}
\subsection{MMF solution for MRT-undp}
\begin{theorem}
\label{T-MMF-MRT-undp}
Consider $\mathcal{P}^{\prime}1$ with MRT-undp. Then at the optimal solution all the UTs receive the same SINR, which is equal to
\begin{align}
\label{SINR-MMF-MRT-undp}
\Gamma = NP \left(\sum_{i=1}^{G} \sum_{k=1}^{K_{i}} \frac{1+\beta_{ik}P}{\gamma_{ik}^{dp*}} \right)^{-1}
\end{align}
with $\gamma_{ik}^{dp*} = \dfrac{\tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}^{2} }{1 + \tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}}$. The optimal uplink training and downlink transmission powers of UT $k$ in group $i$ are
\begin{align}
p_{ik}^{u*} =& \; p_{ik}^{utot}
\\
p_{ik}^{dl*} =& \; \dfrac{1 + \beta_{ik} P}{ \gamma_{ik}^{dp*} N} \; \Gamma .
\end{align}
\end{theorem}
\begin{proof}
The proof is given in Appendix C.
\end{proof}
\subsection{MMF solution for ZF-undp}
\begin{theorem}
\label{T-MMF-ZF-undp}
Consider $\mathcal{P}^{\prime}1$ with ZF-undp. Then at the optimal solution all the UTs receive the same SINR, which is equal to
\begin{align}
\label{SINRZFundp}
\Gamma = \dfrac{(N-K_{tot}) P}{\sum_{i=1}^{G} \sum_{k=1}^{K_{i}} \dfrac{1 + (\beta_{ik} - \gamma_{ik}^{dp*}) P}{\gamma_{ik}^{dp*}}}
\end{align}
with $\gamma_{ik}^{dp*} = \dfrac{\tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}^{2} }{1 + \tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}}$. The optimal uplink training and downlink transmission powers of UT $k$ in group $i$ are
\begin{align}
\label{upTrZFundp}
p_{ik}^{u*} =& \; p_{ik}^{utot}
\\
\label{dlTrZFundp}
p_{ik}^{dl*} =& \dfrac{1 + (\beta_{ik} - \gamma_{ik}^{dp*}) P}{ \gamma_{ik}^{dp*} ( N-K_{tot})} \; \Gamma .
\end{align}
\end{theorem}
\textit{Proof Sketch.} The proof is similar to the proof of Theorem \ref{T-MMF-MRT-undp}, so only a sketch is presented for brevity. First, one shows that the SINR of every UT $k$ in group $i$ is monotonically increasing in $p_{ik}^{u}$, which yields \eqref{upTrZFundp}. Then one shows that at the optimal solution all UTs have the same SINR, which determines \eqref{dlTrZFundp}. Using this common SINR value together with the downlink transmission power constraint, we obtain \eqref{SINRZFundp}. \qed
Remark \ref{Rem-MRTtoZF} described the similarities between the SE expressions with MRT and ZF, and the same pattern appears in the optimal solutions to the MMF problem. As we switch from MRT to ZF in Theorems \ref{T-MMF-MRT-undp} and \ref{T-MMF-ZF-undp}, the coherent beamforming gain reduces from $N$ to $N-K_{tot}$. Also the interference term in the denominator reduces from $\dfrac{ \beta_{ik} P}{ \gamma_{ik}^{dp*}}$ to $\dfrac{ (\beta_{ik} - \gamma_{ik}^{dp*}) P}{ \gamma_{ik}^{dp*}}$. This is because ZF uses the degrees of freedom provided by the large-scale antenna array to cancel the interference toward other UTs, at the cost of reducing the desired signal power at each UT.
\subsection{MMF solution for MRT-mudp}
\begin{corollary}
\label{C-MMF-MRT-mudp}
Consider $\mathcal{P}^{\prime}1$ with MRT-mudp. Then at the optimal solution all the UTs receive the same SINR, which is equal to \eqref{SINR-MMF-MRT-undp}.
\end{corollary}
\begin{proof}
From Theorem \ref{T-MRT-mudp}, we know MRT-mudp is equivalent to MRT-undp. Hence it provides the same SINR for each UT. Therefore its optimal solution is the same as Theorem~\ref{T-MMF-MRT-undp}.
\end{proof}
\subsection{MMF solution for ZF-mudp}
\begin{theorem}
\label{T-MMF-ZF-mudp}
Consider $\mathcal{P}^{\prime}1$ with ZF-mudp. Then at the optimal solution all the UTs receive the same SINR, i.e., $ \Gamma = \mathrm{SINR}_{ik}^{\mathrm{ZF-mudp*}} \; \forall i,k$, which is the solution of the equation
\begin{align}
\label{sinr_equation}
P = \sum_{i=1}^{G} \dfrac{\Gamma \Delta_{i}}{N-\nu_{i} - \Gamma K_{i}}
\end{align}
where $\Delta_{i} = \sum_{k=1}^{K_{i}} \left( \dfrac{1}{\gamma_{ik}^{dp*}} + P \dfrac{\beta_{ik}}{\gamma_{ik}^{dp*}} - P \right) $ with $\gamma_{ik}^{dp*} = \dfrac{\tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}^{2} }{1 + \tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}}$ and $\Gamma < \min_{i \in \mathcal{G}} \{ \frac{N-\nu_{i}}{K_{i}} \}$. Also the optimal uplink training and downlink transmission powers of UT $k$ in group $i$ are
\begin{align}
p_{ik}^{u*} =& \; p_{ik}^{utot}
\\
p_{ik}^{dl*} =& \dfrac{\Gamma}{N-\nu_{i}} \left( \frac{1}{\gamma_{ik}^{dp*}} + P_{i}^{dl} + P \dfrac{\beta_{ik}}{\gamma_{ik}^{dp*}} - P \right)
\end{align}
where $P_{i}^{dl} = \dfrac{\Gamma \Delta_{i}}{N-\nu_{i} - \Gamma K_{i}} $.
\end{theorem}
\begin{proof}
The proof is given in Appendix D.
\end{proof}
Note that since the right-hand side of \eqref{sinr_equation} is an increasing function of $\Gamma$, its solution can simply be obtained by a line search.
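Because the right-hand side of \eqref{sinr_equation} increases monotonically in $\Gamma$ on the feasible interval, a bisection search converges to the unique root. A minimal Python sketch follows, with illustrative values for $N$, $P$, $K_{i}$, and $\Delta_{i}$ (these would be computed from the channel statistics in practice; they are assumptions here).

```python
import numpy as np

N, P = 128, 10.0                           # antennas, power budget (assumed)
K = np.array([4, 6, 5])                    # group sizes K_i (assumed)
nu = K.sum() - K                           # nu_i = K_tot - K_i
Delta = np.array([12.0, 20.0, 15.0])       # Delta_i from the theorem (assumed)

def rhs(Gamma):
    """Right-hand side of the fixed-point equation for the common SINR."""
    return np.sum(Gamma * Delta / (N - nu - Gamma * K))

# feasible interval: 0 <= Gamma < min_i (N - nu_i)/K_i; rhs is increasing on it
lo, hi = 0.0, float(np.min((N - nu) / K)) - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rhs(mid) < P:
        lo = mid
    else:
        hi = mid
Gamma = 0.5 * (lo + hi)
print(Gamma, rhs(Gamma))                   # rhs(Gamma) matches the budget P
```

The upper endpoint enforces the feasibility condition $\Gamma < \min_{i \in \mathcal{G}} (N-\nu_{i})/K_{i}$, at which the right-hand side diverges, so the root always lies inside the bracket.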
\subsection{MMF solution for MRT-mucp}
\begin{theorem}
\label{T-MMF-MRT-mucp}
Consider $\mathcal{P}^{\prime}2$ with MRT-mucp. Then at the optimal solution all the UTs receive the same SINR, which is equal to
\begin{align}
\label{MRTmucpSINR}
\Gamma = \dfrac{NP}{\sum_{i=1}^{G} \dfrac{1 + \tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u*} \beta_{im}}{\tau_{p}^{cp} \Upsilon_{i}}}
\end{align}
with $\Upsilon_{i} = \min_{t \in \mathcal{K}_{i}} \dfrac{\beta_{it}^{2} p_{it}^{utot}}{1+P \beta_{it}} \; \forall i \in \mathcal{G}$. The optimal uplink training and downlink transmission powers of UT $k$ in group $i$ are
\begin{align}
p_{ik}^{u*} =& \dfrac{1+P \beta_{ik}}{\beta_{ik}^{2}} \; \Upsilon_{i} \quad \quad \forall i \in \mathcal{G}, \forall k \in \mathcal{K}_{i}
\\
p_{i}^{dl*} =& \dfrac{\Gamma (1 + \tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u*} \beta_{im})}{\tau_{p}^{cp} N \Upsilon_{i}} \quad \quad \forall i \in \mathcal{G}.
\end{align}
\end{theorem}
\begin{proof}
The proof is given in Appendix E.
\end{proof}
\subsection{MMF solution for ZF-mucp}
\begin{theorem}
\label{T-MMF-ZF-mucp}
Consider $\mathcal{P}^{\prime}2$ with ZF-mucp. Then at the optimal solution all the UTs receive the same SINR, which is equal to
\begin{align}
\label{ZFmucpSINR}
\Gamma &= \dfrac{P (N-G)}{\sum_{j=1}^{G} \frac{1}{\Delta_{j}}}
\end{align}
with $\Delta_{j} = \dfrac{\tau_{p}^{cp} \Upsilon_{j}}{1 + \tau_{p}^{cp}(E_{j} - P \Upsilon_{j})}$, $E_{j} = K_{j} \Upsilon_{j} P + \Upsilon_{j} \sum_{m=1}^{K_{j}} \dfrac{1}{\beta_{jm}}$, and $\Upsilon_{j} = \min_{k \in \mathcal{K}_{j}} \dfrac{p_{jk}^{utot} \beta_{jk}^{2}}{1+\beta_{jk}P} \; \forall j \in \mathcal{G}$. The optimal uplink training and downlink transmission powers of UT $k$ in group $i$ are
\begin{align}
p_{ik}^{u*} =& \dfrac{1+\beta_{ik}P}{\beta_{ik}^{2}} \Upsilon_{i} \quad \quad \forall k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\\
\label{ZFmucpDLpower}
p_{i}^{dl*} =& \left( \sum_{j=1}^{G} \frac{\Delta_{i}}{ \Delta_{j}} \right)^{\!\!\!-1} \!\! P \quad \forall i \in \mathcal{G} .
\end{align}
\end{theorem}
\begin{proof}
The proof is given in Appendix F.
\end{proof}
The achieved results (Theorems \ref{T-MMF-MRT-undp} to \ref{T-MMF-ZF-mucp} and Corollary \ref{C-MMF-MRT-mudp}) determine the optimal value of the SINR, the uplink training powers, and the downlink transmission powers in closed form, for any given pilot length. These closed-form results enable us to find the optimal SE by simply searching over $\tau_{p}^{dp} \in \{K_{tot},\ldots,T\}$ or $\tau_{p}^{cp} \in \{G, \ldots, T\}$ and selecting the pilot length that provides the highest SE.
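The second step of the two-step procedure can be sketched as follows: given any of the closed-form expressions for the common SINR as a function of the pilot length, the optimal SE follows from an exhaustive search over the admissible discrete set. In the Python sketch below, `Gamma_of_tau` uses the MRT-undp expression \eqref{SINR-MMF-MRT-undp} as an example; the parameter values are assumptions for illustration.

```python
import numpy as np

T, K_tot, N, P = 750, 30, 128, 10.0            # assumed system parameters
beta = np.linspace(0.05, 1.0, K_tot)           # large-scale fading coefficients
p_utot = np.ones(K_tot)                        # pilot power caps

def Gamma_of_tau(tau):
    """Closed-form common SINR; MRT-undp expression used as the example."""
    gamma = tau * p_utot * beta**2 / (1 + tau * p_utot * beta)
    return N * P / np.sum((1 + beta * P) / gamma)

taus = np.arange(K_tot, T + 1)                 # admissible pilot lengths
se = (1 - taus / T) * np.log2(1 + np.array([Gamma_of_tau(t) for t in taus]))
tau_opt = int(taus[np.argmax(se)])
print(tau_opt, se.max())
```

The search captures the pilot-length trade-off: longer pilots improve $\gamma_{ik}^{dp}$ (and hence $\Gamma$) but shrink the pre-log factor $1 - \tau_{p}/T$.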
\section{Numerical Analysis and Further Discussions}
In this section, we use the results of Section IV to perform a numerical analysis and propose a guideline for multicasting design in massive MIMO systems. In our simulations we consider a system with $G$ multicasting groups where each group has $K$ UTs, i.e., $K_{i} = K \; \forall i \in \mathcal{G}$. The cell radius is considered to be $500$ meters and the UTs are randomly and uniformly distributed in the cell excluding an inner circle of radius $35$ meters. The large-scale fading parameters are modeled as $\beta_{ik} = \bar{d}/ x_{ik}^{\nu}$ where $\nu=3.76$ is the path-loss exponent and the constant $\bar{d} = 10^{-3.53}$ regulates the channel attenuation at $35$ meters \cite{3GPPmodel}. Also $ x_{ik}$ is the distance between UT $k$ in group $i$ and the BS in meters. At a carrier frequency of $2$ GHz, the transmission bandwidth (BW) is assumed to be $20$ MHz, the coherence bandwidth and coherence time are considered to be $300$ kHz and $2.5$ ms, which results in a coherence interval of length $750$ symbols for a vehicular system with speed of $108$ kilometers per hour \cite{marzetta2016fundamentals}. The noise power spectral density is considered to be $-174$ dBm/Hz.
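The large-scale fading model above can be written as a one-line helper. The constants below follow the stated setup ($\nu = 3.76$, $\bar{d} = 10^{-3.53}$); the example distances are arbitrary choices matching the cell geometry.

```python
def beta_large_scale(x_m, nu=3.76, d_bar=10**(-3.53)):
    """Large-scale fading coefficient beta = d_bar / x^nu at x_m meters."""
    return d_bar / x_m**nu

# attenuation at the 35 m exclusion radius and at the 500 m cell edge
print(beta_large_scale(35.0), beta_large_scale(500.0))
```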
Fig. \ref{Fig2} studies the effect of the system parameters, i.e., $G$, $K$, $N$, $p_{jk}^{utot}$, and $P$, on the optimal SEs that can be obtained for the six scenarios depicted in Fig. \ref{figint}. Figs. \ref{a}, \ref{c}, and \ref{e} represent the high SNR regime, where for the cell edge, the training SNR is $-5.8$ dB (equivalent to $p_{jk}^{utot}=1$ Watt over the BW) and the downlink SNR is $10$ dB (equivalent to $P=40$ Watt over the BW). Also Figs. \ref{b}, \ref{d}, and \ref{f} represent the low SNR regime, where for the cell edge, the training SNR is $-15.8$ dB (equivalent to $p_{jk}^{utot}=0.1$ Watt over the BW) and the downlink SNR is $-5.8$ dB (equivalent to $P=1$ Watt over the BW).
\begin{figure}[]
\centering
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={3.7cm 8.25cm 4.2cm 9cm},clip]{FigA1.pdf}
\caption{$G\!=\!3$, $K\!=\!10$, $P\!=\!40$, and $p^{utot}\!=\!1$ Watt.}
\label{a}
\end{subfigure}%
~
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={4cm 8.25cm 4cm 9cm},clip]{FigB1.pdf}
\caption{$G\!=\!3$, $K\!=\!10$, $P\!=\!1$, and $p^{utot}\!=\!0.1$ Watt.}
\label{b}
\end{subfigure}
~
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={3.7cm 8.25cm 4.2cm 9cm},clip]{FigC1.pdf}
\caption{$G\!=\!3$, $K\!=\!50$, $P\!=\!40$, and $p^{utot}\!=\!1$ Watt.}
\label{c}
\end{subfigure}
~
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth,trim={4cm 8.25cm 4cm 9cm},clip]{FigD1.pdf}
\caption{$G\!=\!3$, $K\!=\!50$, $P\!=\!1$, and $p^{utot}\!=\!0.1$ Watt.}
\label{d}
\end{subfigure}
~
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={3.7cm 8.25cm 4.2cm 9cm},clip]{FigE1.pdf}
\caption{$G\!=\!10$, $K\!=\!50$, $P\!=\!40$, and $p^{utot}\!=\!1$ Watt.}
\label{e}
\end{subfigure}
~
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth,trim={4cm 8.25cm 4cm 8.9cm},clip]{FigF1.pdf}
\caption{$G\!=\!10$, $K\!=\!50$, $P\!=\!1$, and $p^{utot}\!=\!0.1$ Watt.}
\label{f}
\end{subfigure}
\caption{SE versus $N$ for different system setups.}
\label{Fig2}
\end{figure}
From Fig. \ref{Fig2} we make the following observations:
\begin{itemize}
\item Comparing the two SNR regimes, the dedicated pilot assignment is more vulnerable to SNR reduction than the co-pilot assignment. For example, consider $N=600$: the average reduction in SE of ZF-undp when comparing Figs. \ref{a}, \ref{c}, \ref{e} respectively with Figs. \ref{b}, \ref{d}, \ref{f} is a factor of $6.85$, while for MRT-mucp and ZF-mucp it is a factor of $1.69$. This is because the emphasis in dedicated pilot assignment is on achieving good channel estimates, while the co-pilot assignment focuses on saving time-frequency resources. Hence in the low SNR regime, as long as $K_{tot}$ is large enough, e.g., $K_{tot} \gtrapprox 0.2 N$, MRT-mucp and ZF-mucp provide better performance than the other schemes.
\item In the high SNR regime, ZF-undp significantly outperforms the co-pilot approaches as soon as $N$ becomes slightly bigger than $K_{tot}$ ($N \gtrapprox 1.15 K_{tot}$), as can be verified from Figs. \ref{a}, \ref{c}, and \ref{e}. The reason is twofold. First, with dedicated pilot assignment a pilot-contamination-free channel estimate is achieved, while for co-pilot assignment the channel estimates are highly contaminated due to the shared pilots. Second, in the high SNR regime $\tau_{p}^{dp}$ is close to $K_{tot}$, and since $N \gtrapprox 1.15 K_{tot}$ there are enough time-frequency resources left for downlink transmission.
\item It is plausible that MRT-mucp could provide a better SE than ZF-undp if there were downlink pilot transmission, as downlink training can be done efficiently by employing just $G$ symbols of the coherence interval \cite{NoDownlinkPilot}. Therefore in Fig. \ref{Fig2} we have also presented the minimum SE of MRT-mucp with genie UTs, i.e., MRT-mucp-Genie, where we assume the UTs \textit{perfectly} estimate their channels from $G$ downlink training symbols. Even in this case, in the high SNR regime, ZF-undp significantly outperforms MRT-mucp with genie UTs as soon as $N$ becomes slightly bigger than $K_{tot}$, e.g., $N \gtrapprox 1.2 K_{tot}$; see Figs. \ref{a}, \ref{c}, and \ref{e}.
\item The SE of the co-pilot assignment approaches is more robust to adding more UTs to the system than the SE of the dedicated pilot assignment approaches. For example, consider $N=700$ and compare the SE of ZF-undp and MRT-mucp in Figs. \ref{c} and \ref{d} (where $K_{tot} = 150$) respectively with Figs. \ref{e} and \ref{f} (where $K_{tot} = 500$). For ZF-undp the SE reduces by a factor of $2.95$ (comparing Fig. \ref{c} with Fig. \ref{e}) and $7.77$ (comparing Fig. \ref{d} with Fig. \ref{f}), while for MRT-mucp it reduces by a factor of $2$ (comparing Fig. \ref{c} with Fig. \ref{e}) and $2.36$ (comparing Fig. \ref{d} with Fig. \ref{f}). This is because adding more UTs increases the pilot overhead of the dedicated pilot assignment approaches, while it has only a slight effect on the co-pilot approaches. Hence co-pilot approaches are more suitable for applications like DVB-H or mobile TV over wide areas with many users \cite{DVB-GFaria,DVB-Elhajjar}.
\item As we increase $K_{tot}$ by adding more multicasting groups, e.g., in applications with a large number of multicasting UTs such as DVB-H \cite{DVB-GFaria}, the downlink training becomes less important and can be neglected; e.g., compare Figs. \ref{a}, \ref{c}, and \ref{e} or Figs. \ref{b}, \ref{d}, and \ref{f}. This is because adding more groups requires more time-frequency resources for downlink training.
\item MRT-mucp provides nearly the same SE as ZF-mucp, e.g., see Figs. \ref{c}, \ref{d}, \ref{e}. This is because the deliberate pilot contamination that was enforced on the precoder structure, \eqref{ZFMUCP}, prevents the ZF-based precoder from suppressing the interference efficiently. Therefore, due to the higher complexity of ZF, if the co-pilot strategy is employed, it is beneficial to use MRT-mucp rather than ZF-mucp.
\item MRT-mucp always outperforms MRT-undp and MRT-mudp, e.g., see Figs. \ref{e} and \ref{f}. Hence if MRT is employed for multicasting, it is better to use the MRT-mucp scheme.
\item In all of the considered setups in Fig. \ref{Fig2}, the maximum performance is achieved by either ZF-undp or MRT-mucp. Hence a multicasting system needs to support these two transmission modes and switch between them depending on the system parameters.
\item As detailed in Remark \ref{remZFmudp}, ZF-mudp is the generalized version of the precoder proposed in \cite{MeysamMultiComplexity}, which outperforms the SDR-based precoding schemes \cite{karipidis2008quality}. Also, ZF-mudp is itself always outperformed by either MRT-mucp or ZF-undp. Therefore, in a massive MIMO system that accounts for CSI acquisition, a hybrid transmission scheme that switches between MRT-mucp and ZF-undp outperforms the SDR-based approaches \cite{karipidis2008quality,MeysamMultiComplexity}.
\end{itemize}
The aforementioned observations were made either in the high or in the low SNR regime. Fig. \ref{SNR} verifies them for a wide range of SNRs. Considering $N=300$, $G=4$, $K=50$, Fig. \ref{SNRa} presents the SE of the proposed schemes for a fixed cell-edge training SNR of $-5.8$ dB, while the cell-edge downlink SNR changes from $-20$ dB to $20$ dB. Fig. \ref{SNRb} presents the SE for a fixed cell-edge downlink SNR of $10$ dB while the cell-edge training SNR changes from $-30$ dB to $5$ dB. Note that the same observations hold true, e.g., 1) MRT-mucp and ZF-mucp have the same performance; 2) the optimal performance is achieved by switching between MRT-mucp and ZF-undp; 3) at low SNR the co-pilot approaches perform better than the dedicated pilot approaches, and the opposite holds at high SNR; and 4) MRT-mucp always outperforms MRT-undp and MRT-mudp.
\begin{figure}[]
\centering
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={4.1cm 8.4cm 4.4cm 9cm},clip]{SNR_DL_dB1.pdf}
\caption{SE vs Downlink SNR.}
\label{SNRa}
\end{subfigure}%
~
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={4cm 8.4cm 4.4cm 9.1cm},clip]{SNR_Training_dB1.pdf}
\caption{SE vs Training SNR.}
\label{SNRb}
\end{subfigure}
\caption{SE versus SNR.}
\label{SNR}
\end{figure}
As some of the state-of-the-art multicast standards and applications, e.g., DVB-H and mobile TV, employ omnicast transmission \cite{DVB-GFaria,DVB-Elhajjar}, it is interesting to compare the performance of the proposed multicast schemes with omnicast transmission. Therefore in Fig. \ref{figOmni} we consider a system with $P=40$ Watt, $p^{utot}_{jk}=1$ Watt, and $G$ multicasting groups, where $G$ varies from $1$ to $30$ with $K$ UTs per group. It presents the minimum SE versus the number of multicasting groups for the proposed multicasting schemes and for omnicast transmission. For omnicast transmission we assume the channels are perfectly known at the UTs, and the minimum SE is computed as follows
\begin{align}
\label{OmniEq}
\mathrm{SE}_{\mathrm{Omnicast}} = \mathbb{E} \left[ \min_{\{ j \},\{ k \}} \mathbb{E} \left[\dfrac{1}{G} \log_{2} \left(1 + \frac{P \Vert \mathbf{h}_{jk} \Vert^{2} }{\sigma^{2}}\right) \rvert \beta_{jk} \right] \right]
\end{align}
where the outer expectation is with respect to the large-scale fading and the inner expectation is with respect to the small-scale fading. Note that \eqref{OmniEq} provides an upper bound on the performance of an omnicast transmission, as we assumed perfect channel knowledge at the UTs. In practice, terminals have to rely on channel estimates obtained from downlink pilots. This pilot transmission is complicated by the fact that optimal training entails the transmission of mutually orthogonal pilots from each antenna; with a large number of antennas, this pilot overhead can be significant. A reduction of the pilot overhead, at the cost of some spatial diversity order loss, can be achieved by transmission into a pre-determined subspace \cite{meng2016omnidirectional,karlsson2014operation}. Note that in independent Rayleigh fading, a conventional omnicast system that uses a single antenna is equivalent to the considered array when maximal dimensionality reduction is applied \cite{karlsson2014operation}. A corresponding achievable SE can be obtained from \cite{larsson2016joint}, by setting $\rho_b'=0$ in equation (49) therein\footnote{There is an $M^{\prime}$ parameter in equation (49) of \cite{larsson2016joint}, whose optimal value we found by exhaustive search for Fig. \ref{figOmni}; this gives the best lower bound that can be obtained for omnicast transmission based on \cite{larsson2016joint}.}, which we refer to as omnicast with imperfect downlink CSI. From Fig. \ref{figOmni} one can see that for any $K_{tot}= GK$, at least one of ZF-undp and MRT-mucp provides significantly better performance than omnicast transmission. Note that even when there are $K_{tot}=1500$ UTs in the system, MRT-undp provides more than $3$ times higher SE than omnicast transmission. This strongly motivates the application of massive MIMO in new multicasting standards \cite{DVB-GFaria,DVB-Elhajjar}.
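The upper bound \eqref{OmniEq} can be estimated by Monte Carlo simulation: average over small-scale fading for the inner expectation and over UT drops for the outer one. The Python sketch below is a simplified illustration with assumed parameter values; in particular, the large-scale fading coefficients are drawn uniformly instead of from the cell geometry described above.

```python
import numpy as np

rng = np.random.default_rng(2)
N, G, K = 64, 4, 20                    # antennas, groups, UTs per group (assumed)
P, sigma2 = 10.0, 1.0                  # downlink power, noise power (assumed)
n_drops, n_fading = 50, 200            # Monte Carlo sample sizes

ses = []
for _ in range(n_drops):
    # large-scale fading draw (uniform stand-in for the cell geometry)
    beta = rng.uniform(0.001, 0.1, size=(G, K))
    # ||h_jk||^2 for n_fading small-scale Rayleigh fading realizations
    h2 = 0.5 * (rng.standard_normal((n_fading, G, K, N))**2
                + rng.standard_normal((n_fading, G, K, N))**2).sum(axis=-1)
    # inner expectation over small-scale fading, then min over all UTs
    se_jk = np.mean(np.log2(1 + P * beta * h2 / sigma2) / G, axis=0)
    ses.append(se_jk.min())

se_omni = float(np.mean(ses))          # outer expectation over UT drops
print(se_omni)
```

The $1/G$ pre-log factor reflects that the $G$ messages must be time-shared in omnicast transmission, which is what the multicast schemes avoid.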
\begin{figure}[]
\centering
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={4.1cm 8.3cm 4.4cm 8.85cm},clip]{Omnicast_K20_G1to30_N3001.pdf}
\caption{$K=20$ and $N=300$.}
\label{Omnia}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=1\columnwidth, trim={4cm 8.3cm 4.4cm 8.9cm},clip]{Omnicast_K50_G1to30_N5001.pdf}
\caption{$K=50$ and $N=500$.}
\label{Omnib}
\end{subfigure}
\caption{Comparison between Multicast and Omnicast transmissions.}
\label{figOmni}
\end{figure}
Based on the numerical analysis provided in this section, Fig. \ref{MuRegimes} presents a guideline for multicasting in massive MIMO systems. Given the system parameters, it determines which scheme should be applied in each scenario. Also, based on the results derived in Section IV, we can explicitly specify the SE that can be obtained with the selected scheme.
\begin{figure}[]
\centering
\includegraphics[width=1\columnwidth, trim={0.1cm 5.5cm 3cm 6cm},clip]{Regimes.pdf}
\caption{The massive MIMO multicasting regimes.}
\label{MuRegimes}
\end{figure}
\section{Summary and Conclusion}
In this paper, we studied multi-group multicasting in the context of massive MIMO. First, we introduced different transmission technologies (multicast and unicast), different pilot assignment strategies (co-pilot or dedicated pilot assignment), and the two common precoding schemes in massive MIMO (MRT and ZF). The six possible combinations were outlined in Fig. \ref{figint}. Second, for each of these schemes we derived an achievable SE while accounting for the uplink pilot-based CSI acquisition. Third, for any given training length, we solved the max-min fairness problem for the proposed schemes and found the optimal uplink pilot powers, downlink precoding powers, and the optimal SEs, all in closed form. Fourth, based on the achieved results we evaluated the proposed schemes numerically and drew a guideline for practical multi-group massive MIMO multicasting design. We showed that a massive MIMO multicasting system needs to support two transmission modes, i.e., MRT-mucp and ZF-undp, and switch between them depending on the system parameters.
\section*{Appendices}
The appendices provide the proofs of the proposed theorems and propositions. We will frequently use the following lemma, which can be proved by standard techniques (see, e.g., Section II of \cite{marzetta2016fundamentals}).
\begin{lemma}
\label{MainLemma}
Consider a discrete memoryless channel with input $x \! \in \! \mathbb{C}$ and output $y\!=\!\!~h x +~ \! v +~\! n$, where $h$ is a deterministic channel coefficient, $v$ is a random interference with zero mean and power $\mathbb{E}[\vert v \vert^{2}] = p_{v}$ that is uncorrelated with $x$, and $n \sim \mathcal{CN}(0,\sigma^{2})$ is independent circularly symmetric complex Gaussian noise. Then if the input power is limited as $\mathbb{E}[\vert x \vert^{2}] = P$ and the channel response $h \in \mathbb{C}$ and interference power $p_{v} \in \mathbb{R}_{+}$ are known at the output, then $\mathrm{SINR} = \dfrac{P \vert h \vert^{2}}{p_{v} + \sigma^{2}}$ and $r = \log_{2} (1 + \mathrm{SINR})$ are the achievable SINR and SE for this channel.
\end{lemma}
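For concreteness, Lemma \ref{MainLemma} can be evaluated numerically as follows; all values below are illustrative assumptions for this sketch, not parameters from the paper.

```python
import numpy as np

# Illustrative values (assumptions for this sketch only)
P = 1.0            # input power constraint E[|x|^2]
h = 0.8 + 0.6j     # deterministic channel coefficient, |h| = 1
p_v = 0.5          # interference power, uncorrelated with x
sigma2 = 1.0       # noise variance

# Effective SINR and achievable SE from Lemma 1
sinr = P * abs(h)**2 / (p_v + sigma2)
se = np.log2(1.0 + sinr)
print(sinr, se)    # SINR = 2/3
```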
\section*{Appendix A - Achievable SE with ZF-mudp}
Starting from \eqref{multicasttransmission} and applying \eqref{ZFMUDP} we have
\begin{align}
y_{ik} \!\! = \! \underbrace{\mathbb{E}[\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF\!-\!mudp}}]}_{h} \underbrace{s_{i}}_{x} \! + \! \underbrace{( \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF\!-\!mudp}} \!-\! \mathbb{E}[\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF\!-\!mudp}}]) s_{i} \!-\! \tilde{\mathbf{g}}_{ik}^{dpH} \! \sum_{j=1}^{G} \! \mathbf{w}_{j}^{\mathrm{ZF\!-\!mudp}} s_{j}}_{v} \! + n. \label{hANDv}
\end{align}
Now using Lemma \ref{MainLemma} while considering $h$, $x$ and $v$ as shown in \eqref{hANDv}, we obtain the following effective SINR for UT $k$ in group $i$:
\begin{align}
\label{SINRZFmudpaid}
\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}} = \dfrac{\vert \mathbb{E}[\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}] \vert^{2} }{1 + \mathrm{var}(\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}) + \sum_{j=1}^{G} \mathbb{E}[ |\tilde{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{j}^{\mathrm{ZF-mudp}} |^{2} ] }.
\end{align}
Next we find the exact value of each term in \eqref{SINRZFmudpaid}. For the term $\mathbb{E}[\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}]$ we have
\begin{align}
\label{powerterm}
&\mathbb{E}[\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}]
=\sum_{m=1}^{K_{i}} \mathbb{E} \left[ \mathrm{tr} \left( \sqrt{\mu_{im}} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} (\mathbf{I}_{N} - \hat{\mathbf{G}}_{-i} (\hat{\mathbf{G}}_{-i}^{H} \hat{\mathbf{G}}_{-i})^{-1} \hat{\mathbf{G}}_{-i}^{H} ) \right) \right]
\\
&=\sum_{m=1}^{K_{i}} \mathrm{tr} \left( \mathbb{E} [ \sqrt{\mu_{im}} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} ] \mathbb{E} [(\mathbf{I}_{N} - \hat{\mathbf{G}}_{-i} (\hat{\mathbf{G}}_{-i}^{H} \hat{\mathbf{G}}_{-i})^{-1} \hat{\mathbf{G}}_{-i}^{H} ) ] \right) \notag
\\
&= \sqrt{\mu_{ik}} \gamma_{ik}^{dp} \left( N - \mathbb{E} \left[ \mathrm{tr} \left( \hat{\mathbf{G}}_{-i} (\hat{\mathbf{G}}_{-i}^{H} \hat{\mathbf{G}}_{-i})^{-1} \hat{\mathbf{G}}_{-i}^{H} \right) \right] \right)
= \sqrt{\mu_{ik}} \gamma_{ik}^{dp} (N - \nu_{i}). \notag
\end{align}
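The key step in \eqref{powerterm} is that, for $\hat{\mathbf{g}}_{ik}^{dp} \sim \mathcal{CN}(\mathbf{0}, \gamma_{ik}^{dp} \mathbf{I}_{N})$ independent of $\hat{\mathbf{G}}_{-i}$, the quadratic form with the projector $\mathbf{I}_{N} - \hat{\mathbf{G}}_{-i} (\hat{\mathbf{G}}_{-i}^{H} \hat{\mathbf{G}}_{-i})^{-1} \hat{\mathbf{G}}_{-i}^{H}$ has expectation $\gamma_{ik}^{dp} (N - \nu_{i})$. A Monte Carlo sanity check of this identity (a sketch with assumed illustrative sizes; the $\sqrt{\mu_{ik}}$ factor is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu, gamma = 30, 8, 0.7   # illustrative: nu_i = number of columns of G_hat_{-i}
trials = 2000

acc = 0.0
for _ in range(trials):
    # i.i.d. complex Gaussian matrix and vector, drawn independently
    G = (rng.standard_normal((N, nu)) + 1j * rng.standard_normal((N, nu))) / np.sqrt(2)
    g = np.sqrt(gamma) * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    # orthogonal projector onto the complement of span(G_hat_{-i})
    C = np.eye(N) - G @ np.linalg.inv(G.conj().T @ G) @ G.conj().T
    acc += (g.conj() @ C @ g).real
estimate = acc / trials

print(estimate)           # Monte Carlo estimate
print(gamma * (N - nu))   # analytic value: gamma * (N - nu)
```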
Now let us consider the interference term due to imperfect CSI. We have
\begin{align}
\label{interferenceZFmudp}
\mathbb{E}[ |\tilde{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{j}^{\mathrm{ZF-mudp}} |^{2} ] =& \mathbb{E}[ \tilde{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{j}^{\mathrm{ZF-mudp}} \mathbf{w}_{j}^{\mathrm{ZF-mudp}H} \tilde{\mathbf{g}}_{ik}^{dp} ]
\\
=& (\beta_{ik} - \gamma_{ik}^{dp}) \mathrm{tr} ( \mathbb{E}[\mathbf{w}_{j}^{\mathrm{ZF-mudp}} \mathbf{w}_{j}^{\mathrm{ZF-mudp}H}] ) = (\beta_{ik} - \gamma_{ik}^{dp}) \sum_{t=1}^{K_j} p_{jt}^{dl} . \notag
\end{align}
Now we need to calculate the variance term,
\begin{align}
\label{varZFmudp}
\mathrm{var}(\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}) = \mathbb{E}[\vert \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}} \vert^{2}] - \vert \mathbb{E}[ \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}} ] \vert^{2} .
\end{align}
Denote $ \mathbf{C}_{i} = \mathbf{I}_{N} - \hat{\mathbf{G}}_{-i} (\hat{\mathbf{G}}_{-i}^{H} \hat{\mathbf{G}}_{-i})^{-1} \hat{\mathbf{G}}_{-i}^{H} $. For the term $\mathbb{E}[\vert \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}} \vert^{2}]$ we have
\begin{align*}\allowdisplaybreaks
&\mathbb{E}[\vert \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}} \vert^{2}] = \mathbb{E}[ \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \sum_{m=1}^{K_{i}} \sqrt{\mu_{im}} \hat{\mathbf{g}}_{im}^{dp} \sum_{t=1}^{K_{i}} \sqrt{\mu_{it}} \hat{\mathbf{g}}_{it}^{dpH} \mathbf{C}_{i} \hat{\mathbf{g}}_{ik}^{dp} ]
\\
&= \underbrace{\mathrm{tr} \left( \mathbb{E} \left[ \hat{\mathbf{g}}_{ik}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \! \left( \! \sum_{m=1, m\neq k}^{K_{i}} \sum_{t=1, t\neq k}^{K_{i}} \! \! \! \sqrt{\mu_{im} \mu_{it}} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{it}^{dpH} \!\! \right) \!\! \mathbf{C}_{i} \right] \right)}_{(i)}
+
\underbrace{\mu_{ik} \mathbb{E} \left[ \left( \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \hat{\mathbf{g}}_{ik}^{dp} \right)^{2} \right]}_{(ii)}
\\
&+ \underbrace{\mathrm{tr} \left( \mathbb{E} \left[ \hat{\mathbf{g}}_{ik}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \left( \sum_{t=1, t\neq k}^{K_{i}} \sqrt{\mu_{ik} \mu_{it}} \hat{\mathbf{g}}_{ik}^{dp} \hat{\mathbf{g}}_{it}^{dpH} + \sum_{m=1, m\neq k}^{K_{i}} \sqrt{\mu_{im} \mu_{ik}} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} \right) \mathbf{C}_{i} \right] \right)}_{(iii)}.
\end{align*}
Notice that $(iii)$ is equal to zero due to the independence of $ \hat{\mathbf{g}}_{ik}^{dp} $ and $ \hat{\mathbf{g}}_{it}^{dp} \; \forall t \neq k , t \in \mathcal{K}_{i}$. The term $(i)$ reduces to
\begin{align*}
&\sum_{m=1, m\neq k}^{K_{i}} \mu_{im} \mathrm{tr} \left( \mathbb{E} \left[ \hat{\mathbf{g}}_{ik}^{dp} \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{im}^{dpH} \mathbf{C}_{i} \right] \right)
=\gamma_{ik}^{dp} \sum_{m=1, m\neq k}^{K_{i}} \mu_{im} \mathrm{tr} \left( \mathbb{E} \left[ \mathbf{C}_{i} \hat{\mathbf{g}}_{im}^{dp} \hat{\mathbf{g}}_{im}^{dpH} \mathbf{C}_{i} \right] \right)
\\
&= \gamma_{ik}^{dp} \sum_{m=1, m\neq k}^{K_{i}} \mu_{im} \gamma_{im}^{dp} \mathbb{E} \left[ \mathrm{tr} \left( \mathbf{C}_{i} \right) \right] \stackrel{(a)}{=} \gamma_{ik}^{dp} \sum_{m=1, m\neq k}^{K_{i}} p_{im}^{dl} \notag
\end{align*}
where in $(a)$ we used the fact that $N - \nu_{i} = \mathrm{tr} \left( \mathbf{C}_{i} \right) $. For the term $(ii)$, denote $\hat{\mathbf{g}}_{ik}^{dp} = \sqrt{\gamma_{ik}^{dp}} \; \hat{\mathbf{h}}_{ik}$ with $\hat{\mathbf{h}}_{ik} \sim \mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$, then we have
\begin{align}
\mu_{ik} \mathbb{E} \left[ \left( \hat{\mathbf{g}}_{ik}^{dpH} \mathbf{C}_{i} \hat{\mathbf{g}}_{ik}^{dp} \right)^{2} \right] &= \mu_{ik} (\gamma_{ik}^{dp})^{2} \mathbb{E} \left[ \left( \hat{\mathbf{h}}_{ik}^{H} \mathbf{C}_{i} \hat{\mathbf{h}}_{ik} \right)^{2} \right] = \dfrac{p_{ik}^{dl} \gamma_{ik}^{dp}}{N - \nu_{i}} \left( \mathrm{tr}(\mathbf{C}_{i})^{2} + \mathrm{tr}(\mathbf{C}_{i}^{2}) \right)
\\
&= p_{ik}^{dl} \gamma_{ik}^{dp} (N -\nu_{i}) + p_{ik}^{dl} \gamma_{ik}^{dp}. \notag
\end{align}
Therefore $\mathrm{var}(\hat{\mathbf{g}}_{ik}^{dpH} \mathbf{w}_{i}^{\mathrm{ZF-mudp}}) = \gamma_{ik}^{dp} \sum_{m=1}^{K_{i}} p_{im}^{dl}$. Now, inserting \eqref{powerterm}, \eqref{interferenceZFmudp}, and \eqref{varZFmudp} into \eqref{SINRZFmudpaid} and utilizing that the pilot length is $\tau_{p}^{dp}$, the SE is obtained as given in \eqref{se_zf_mudp}.
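The only nontrivial moment used for term $(ii)$ is the complex Gaussian quartic identity $\mathbb{E} [ ( \hat{\mathbf{h}}_{ik}^{H} \mathbf{C}_{i} \hat{\mathbf{h}}_{ik} )^{2} ] = \mathrm{tr}(\mathbf{C}_{i})^{2} + \mathrm{tr}(\mathbf{C}_{i}^{2})$, which can be checked numerically; the sizes below are illustrative assumptions, and the projector is drawn once and then held fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
N, nu = 20, 6
trials = 4000

# fixed orthogonal projector C_i of rank N - nu
G = (rng.standard_normal((N, nu)) + 1j * rng.standard_normal((N, nu))) / np.sqrt(2)
C = np.eye(N) - G @ np.linalg.inv(G.conj().T @ G) @ G.conj().T

acc = 0.0
for _ in range(trials):
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # CN(0, I)
    acc += (h.conj() @ C @ h).real ** 2
estimate = acc / trials

# for a projector: tr(C)^2 + tr(C^2) = (N - nu)^2 + (N - nu)
analytic = np.trace(C).real**2 + np.trace(C @ C).real
print(estimate, analytic)
```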
\section*{Appendix B - Achievable SE with ZF-mucp}
Starting from \eqref{multicasttransmission} and applying \eqref{ZFMUCP} we have
\begin{align}
y_{ik} \!\! =& (\hat{\mathbf{g}}_{ik}^{cp} - \tilde{\mathbf{g}}_{ik}^{cp})^{\!H} \! \sum_{j=1}^{G} \! \mathbf{w}_{j}^{\mathrm{ZF\!-\!mucp}} \! s_{j} \! + \! n \! \stackrel{(a)}{=} \!\! \frac{\sqrt{\tau_{p}^{cp} p_{ik}^{u}} \beta_{ik}}{\tau_{p}^{cp} \! \sum_{m=1}^{K_{i}} \! p_{im}^{u} \beta_{im}} \!\! \sum_{j=1}^{G} \! \hat{\mathbf{g}}_{i}^{H} \! \mathbf{w}_{j}^{\mathrm{ZF\!-\!mucp}} \! s_{j} \! - \! \tilde{\mathbf{g}}_{ik}^{cpH} \! \sum_{j=1}^{G} \! \mathbf{w}_{j}^{\mathrm{ZF\!-\!mucp}} \! s_{j} \! + \! n \notag
\\
=& \underbrace{\dfrac{\sqrt{\tau_{p}^{cp} p_{ik}^{u}} \beta_{ik}}{\tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u} \beta_{im}} \sqrt{p_{i}^{dl} \gamma_{i} ( N-G)} }_{h} \underbrace{s_{i}}_{x} - \underbrace{\tilde{\mathbf{g}}_{ik}^{cpH} \sum_{j=1}^{G} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} s_{j}}_{v} + n \label{aidproofZFmudp}
\end{align}
where in $(a)$ we used $\hat{\mathbf{g}}_{jk}^{cp} = \dfrac{\sqrt{\tau_{p}^{cp} p_{jk}^{u}} \beta_{jk}}{ \tau_{p}^{cp} \sum_{k=1}^{K_{j}} p_{jk}^{u} \beta_{jk}} \hat{\mathbf{g}}_{j}$.
Now applying Lemma \ref{MainLemma} considering $h$, $x$ and $v$ as shown in \eqref{aidproofZFmudp}, we obtain the effective SINR
\begin{align}
\label{help_sinr_mucp}
\mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} = \dfrac{\dfrac{\tau_{p}^{cp} p_{ik}^{u} \beta_{ik}^{2} p_{i}^{dl} \gamma_{i} ( N-G)}{(\tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u} \beta_{im})^{2}}}{1 + \sum_{j=1}^{G} \mathbb{E}[ |\tilde{\mathbf{g}}_{ik}^{cpH} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} |^{2} ] }.
\end{align}
In the above equation for the terms $\mathbb{E}[ |\tilde{\mathbf{g}}_{ik}^{cpH} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} |^{2} ]$ we have
\begin{align}
&\mathbb{E}[ |\tilde{\mathbf{g}}_{ik}^{cpH} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} |^{2} ]
= \mathbb{E}[ \tilde{\mathbf{g}}_{ik}^{cpH} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} \mathbf{w}_{j}^{\mathrm{ZF-mucp}H} \tilde{\mathbf{g}}_{ik}^{cp}]
= \mathrm{tr} ( \mathbb{E}[ \tilde{\mathbf{g}}_{ik}^{cp} \tilde{\mathbf{g}}_{ik}^{cpH} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} \mathbf{w}_{j}^{\mathrm{ZF-mucp}H} ] ) \notag
\\
&\stackrel{(a)}{=} \mathrm{tr} ( \mathbb{E}[ \tilde{\mathbf{g}}_{ik}^{cp} \tilde{\mathbf{g}}_{ik}^{cpH} ] \mathbb{E}[ \mathbf{w}_{j}^{\mathrm{ZF-mucp}} \mathbf{w}_{j}^{\mathrm{ZF-mucp}H} ] ) = (\beta_{ik} - \gamma_{ik}) \mathrm{tr} (\mathbb{E}[ \mathbf{w}_{j}^{\mathrm{ZF-mucp}} \mathbf{w}_{j}^{\mathrm{ZF-mucp}H} ] ) \notag
\\
&= (\beta_{ik} - \gamma_{ik}) \mathbb{E}[ \mathbf{w}_{j}^{\mathrm{ZF-mucp}H} \mathbf{w}_{j}^{\mathrm{ZF-mucp}} ] = (\beta_{ik} - \gamma_{ik}) p_{j}^{dl} \label{interferenceZFmucp}
\end{align}
where (a) is due to the fact that $\tilde{\mathbf{g}}_{ik}^{cp}$ and $\hat{\mathbf{g}}_{i}$ are independent. Inserting \eqref{interferenceZFmucp} into \eqref{help_sinr_mucp} and noting that the pilot length is $\tau_{p}^{cp}$, we obtain \eqref{se_zf_mucp} for the SE of this UT.
\section*{Appendix C - MMF problem for MRT-undp}
First note that $\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}}\!$, given in Proposition \ref{prop1}, is monotonically increasing with respect to $\gamma_{ik}^{dp}$, and also $\gamma_{ik}^{dp}$ is monotonically increasing with respect to $p_{ik}^{u}$. Therefore, the optimal value for $p_{ik}^{u}$ is $p_{ik}^{u*} = p_{ik}^{utot}$ and $\gamma_{ik}^{dp*} = \dfrac{\tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}^{2} }{1 + \tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}}$. Now we prove that at the optimal solution $\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}} = \mathrm{SINR}_{jt}^{\mathrm{MRT-undp}} =\Gamma \quad \forall k,t,i,j$. Assume the contrary, i.e., that UT $t$ in group $j$ has the minimum SINR and there exists a UT $k$ in a group $i$ with $(i,k) \neq (j,t)$ such that $\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}} > \mathrm{SINR}_{jt}^{\mathrm{MRT-undp}} $. Then one can improve $\mathrm{SINR}_{jt}^{\mathrm{MRT-undp}}$ by changing $p_{ik}^{dl}$ and $p_{jt}^{dl}$ respectively to $p_{ik}^{dl} - \delta$ and $p_{jt}^{dl} + \delta$, where $0<\delta<(\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}} - \mathrm{SINR}_{jt}^{\mathrm{MRT-undp}} )\dfrac{1+\beta_{ik}P}{N \gamma_{ik}^{dp*}}$. Note that this just changes the $\mathrm{SINR}_{ik}^{\mathrm{MRT-undp}}$ and $\mathrm{SINR}_{jt}^{\mathrm{MRT-undp}}$, and the other SINRs remain intact. By performing this process once (or repeating it multiple times, if we have multiple UTs with same minimum SINR), we can increase the minimum SINR of the system, which contradicts our optimality assumption. Hence at the optimal solution all the SINRs are equal. Therefore, $p_{ik}^{dl*} = \dfrac{\Gamma (1+\beta_{ik} P)}{N \gamma_{ik}^{dp*}}$. Now by summing over all UTs in all groups and performing some straightforward operations we can find $\Gamma = NP \big(\sum_{j=1}^{G} \sum_{t=1}^{K_{j}} \frac{1+\beta_{jt}P}{\gamma_{jt}^{dp*}} \big)^{-1}$.
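As a numerical sanity check of these closed forms, the sketch below (all values are illustrative assumptions; the groups are flattened into a single user index, since the MRT-undp quantities are per-user) computes $\Gamma$ and the downlink powers, writing the per-user SINR as $N \gamma_{ik}^{dp*} p_{ik}^{dl} / (1+\beta_{ik} P)$ consistently with the expressions above, and verifies that the total power constraint is met with all SINRs equalized.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative system parameters (assumed, not from the paper)
N, P, tau_dp = 100, 10.0, 4
K_tot = 8
beta = rng.uniform(0.1, 1.0, K_tot)   # large-scale fading coefficients
p_utot = np.full(K_tot, 0.2)          # uplink pilot power budgets

# optimal estimate quality: gamma* = tau p beta^2 / (1 + tau p beta)
gamma = tau_dp * p_utot * beta**2 / (1.0 + tau_dp * p_utot * beta)

# closed-form max-min SINR and downlink powers (Appendix C)
Gamma = N * P / np.sum((1.0 + beta * P) / gamma)
p_dl = Gamma * (1.0 + beta * P) / (N * gamma)

# verify: total downlink power equals P and all SINRs are equalized
sinr = N * gamma * p_dl / (1.0 + beta * P)
print(np.sum(p_dl))   # = P
print(sinr)           # every entry equals Gamma
```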
\section*{Appendix D - MMF problem for ZF-mudp}
Starting from $\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}}$, given in Theorem \ref{TZFmudp}, and similar to Appendix C we can show that the optimal value for $p_{ik}^{u}$ is $p_{ik}^{u*} = p_{ik}^{utot}$ and $\gamma_{ik}^{dp*} = \dfrac{\tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}^{2} }{1 + \tau_{p}^{dp} p_{ik}^{utot} \beta_{ik}}$. Now we prove that at the optimal solution $\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}} = \mathrm{SINR}_{jt}^{\mathrm{ZF-mudp}} = \Gamma \; \forall k,t,i,j$. Assume the contrary, i.e., that UT $t$ in group $j$ has the minimum SINR, and there exists a UT $k$ in a group $i$ with $(i,k) \neq (j,t)$ such that $\mathrm{SINR}_{ik}^{\mathrm{ZF-mudp}} > \mathrm{SINR}_{jt}^{\mathrm{ZF-mudp}}$. Denote $a_{ik} = (N - \nu_{i}) \gamma_{ik}^{dp*} p_{ik}^{dl}$ and $b_{ik} = 1+ \gamma_{ik}^{dp*} \sum_{m=1}^{K_{i}} p_{im}^{dl} + P (\beta_{ik} - \gamma_{ik}^{dp*})$. Then one can increase the minimum SINR of the system by reducing $p_{ik}^{dl}$ to $p_{ik}^{dl} - \delta$, where $0 < \delta < \dfrac{a_{ik}b_{jt}-a_{jt}b_{ik}}{(N-\nu_{i}) \gamma_{ik}^{dp*} b_{jt} - a_{jt} \gamma_{ik}^{dp*}}$, which contradicts the assumption. Therefore at the optimal solution all UTs have the same SINR. Now consider UTs $k$ and $t$ in $i$th multicasting group. Let us denote $P_{i}^{dl} = \sum_{m=1}^{K_{i}} p_{im}^{dl}$, then we have
\begin{align}
\Gamma_{i} = \dfrac{ \gamma_{ik}^{dp*} p_{ik}^{dl}}{1+ \gamma_{ik}^{dp*} P_{i}^{dl} + P (\beta_{ik} - \gamma_{ik}^{dp*})}
=
\dfrac{ \gamma_{it}^{dp*} p_{it}^{dl}}{1+ \gamma_{it}^{dp*} P_{i}^{dl} + P (\beta_{it} - \gamma_{it}^{dp*})}
\end{align}
with $\Gamma = (N-\nu_{i}) \Gamma_{i}$. Hence we can write
\begin{align}
\label{pik_dl_zf_mudp_proof}
p_{ik}^{dl*} = \dfrac{\Gamma}{ (N-\nu_{i})} (\frac{1}{\gamma_{ik}^{dp*}} + P_{i}^{dl} + P \dfrac{\beta_{ik}}{\gamma_{ik}^{dp*}} - P ).
\end{align}
Summing over the downlink power of all UTs in group $i$ and after some straightforward operations we obtain
$P_{i}^{dl} = \Gamma \Delta_{i}(N-\nu_{i} - \Gamma K_{i})^{-1}$, where $\Delta_{i} = \sum_{k=1}^{K_{i}} (\frac{1}{\gamma_{ik}^{dp*}} + P \dfrac{\beta_{ik}}{\gamma_{ik}^{dp*}} - P ) $. Note that since $P_{i}^{dl} \geq 0 \; \forall i \in \mathcal{G}$ and $\sum_{i=1}^{G} P_{i}^{dl} = P_{dp} = P$, we have $\Gamma < \min_{ i \in \mathcal{G}} \{ \frac{N-\nu_i}{K_i} \}$. Summing the downlink powers over all groups yields \eqref{sinr_equation}, and $\Gamma$ can be found by solving it.
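Since each $P_{i}^{dl} = \Gamma \Delta_{i}(N-\nu_{i}-\Gamma K_{i})^{-1}$ is increasing in $\Gamma$ on $(0, \min_i \frac{N-\nu_i}{K_i})$, the implicit equation can be solved for $\Gamma$ by simple bisection. A sketch with assumed illustrative parameters (group sizes, $\nu_i$, and the $\gamma^{*}$, $\beta$ values are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative parameters (assumed values): G groups of equal size
G_groups, K = 4, 5
N, P = 100, 10.0
K_i = np.full(G_groups, K)
nu = K_i.sum() - K_i        # nu_i = columns of G_hat_{-i} with dedicated pilots
gamma = rng.uniform(0.2, 0.8, (G_groups, K))
beta = gamma / rng.uniform(0.5, 0.9, (G_groups, K))   # beta_ik >= gamma_ik
Delta = np.sum(1.0 / gamma + P * beta / gamma - P, axis=1)

def total_power(Gam):
    # sum of per-group downlink powers P_i^dl = Gam*Delta_i / (N - nu_i - Gam*K_i)
    return np.sum(Gam * Delta / (N - nu - Gam * K_i))

# bisection on (0, min_i (N - nu_i)/K_i), where total_power is increasing
lo, hi = 0.0, np.min((N - nu) / K_i) * (1 - 1e-12)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if total_power(mid) < P:
        lo = mid
    else:
        hi = mid
Gamma = 0.5 * (lo + hi)
print(Gamma, total_power(Gamma))   # total_power(Gamma) = P
```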
\section*{Appendix E - MMF problem for MRT-mucp}
First we prove that at the optimal solution $ \mathrm{SINR}_{jk}^{\mathrm{MRT-mucp}} = \mathrm{SINR}_{it}^{\mathrm{MRT-mucp}} \; \forall t,k,i,j$. Let us denote the user with the minimum SINR in the $i$th group as $kmin_{i}$, i.e., $kmin_{i} = \argmin_{k \in \mathcal{K}_{i}} \mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}}$. Now we prove that at the optimal solution of $\mathcal{P}^{\prime}2$ we have $ \mathrm{SINR}_{jkmin_{j}}^{\mathrm{MRT-mucp}} = \mathrm{SINR}_{ikmin_{i}}^{\mathrm{MRT-mucp}} \; \forall i,j$. Assume the contrary, then $\exists j,i \in \mathcal{G}$ such that $ \mathrm{SINR}_{jkmin_{j}}^{\mathrm{MRT-mucp}} > \mathrm{SINR}_{ikmin_{i}}^{\mathrm{MRT-mucp}}$. Now one can change $p_{j}^{dl}$ and $p_{i}^{dl}$ respectively to $p_{j}^{dl} - \delta$ and $p_{i}^{dl} + \delta$ with $0 < \delta < ( \mathrm{SINR}_{jkmin_{j}}^{\mathrm{MRT-mucp}} - \mathrm{SINR}_{ikmin_{i}}^{\mathrm{MRT-mucp}} ) \dfrac{1+\beta_{jkmin_{j}}P}{N \gamma_{jkmin_{j}}^{cp}}$ and improve the minimum SINR of the system\footnote{If we have multiple groups with equal value of minimum SINR, we can improve the minimum SINR of the system by repeating the same procedure multiple times.}, which contradicts our optimality assumption. Now we prove that at the optimal solution the SINRs of all the users within each group are the same, i.e., $\mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} = \mathrm{SINR}_{it}^{\mathrm{MRT-mucp}} \; \forall k,t \in \mathcal{K}_{i}, \forall i \in \mathcal{G}$. Assume the contrary, i.e., $\exists k,t \in \mathcal{K}_{i}$ such that $ \mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} > \mathrm{SINR}_{it}^{\mathrm{MRT-mucp}} $. Then one can improve the minimum SINR of this group by reducing $p_{ik}^{u}$ to $p_{ik}^{u} - \delta$, where $0 < \delta < \dfrac{(1+\tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u} \beta_{im})(1+\beta_{ik}P)}{\tau_{p}^{cp} \beta_{ik}^{2} N p_{i}^{dl}} (\mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} - \mathrm{SINR}_{it}^{\mathrm{MRT-mucp}})$. Hence at the optimal solution for group $i$ we have
\begin{align}
\Phi_{i} = \dfrac{\gamma_{ik}^{cp}}{1+\beta_{ik}P} = \dfrac{\gamma_{it}^{cp}}{1+\beta_{it}P} \; \forall t,k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\end{align}
where $\Phi_{i}$ is a fixed number. Equivalently we have
\begin{align}
\label{Upsiloneq}
\Upsilon_{i} = \dfrac{p_{ik}^{u} \beta_{ik}^{2}}{1+\beta_{ik}P} = \dfrac{p_{it}^{u} \beta_{it}^{2}}{1+\beta_{it}P} \; \forall k,t \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\end{align}
where $\Upsilon_{i}$ is a fixed constant. Considering the fact that $\mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}}$ is strictly increasing with respect to $p_{ik}^{u}$ and noting that $p_{ik}^{u} \leq p_{ik}^{utot}$, the optimal uplink power will be equal to
\begin{align}
p_{ik}^{u*} = \dfrac{1+\beta_{ik}P}{\beta_{ik}^{2}} \Upsilon_{i} \; \forall k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\end{align}
where $\Upsilon_{i} = \min_{k \in \mathcal{K}_{i}} \dfrac{p_{ik}^{utot} \beta_{ik}^{2}}{1+\beta_{ik}P} $. Therefore $\mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} = \Upsilon_{i} \dfrac{N p_{i}^{dl} \tau_{p}^{cp}}{1 + \tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u} \beta_{im} }$. As we already showed, the SINR at the optimal point is equal among all UTs, so $\Gamma = \mathrm{SINR}_{ik}^{\mathrm{MRT-mucp}} \; \forall i,k$. Hence we have $p_{i}^{dl*} = \Gamma (1 + \tau_{p}^{cp} \sum_{m=1}^{K_{i}} p_{im}^{u*} \beta_{im}) / \tau_{p}^{cp} N \Upsilon_{i}$. Now, summing $ p_{i}^{dl*}$ over all groups and employing the total available power constraint, we obtain \eqref{MRTmucpSINR}.
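The chain of closed forms above can be checked numerically; the sketch below (all system values are assumed for illustration, including $\tau_{p}^{cp} = G$) computes $\Upsilon_i$, the optimal uplink powers, $\Gamma$, and the group downlink powers, then verifies that the uplink budgets are respected, the downlink powers sum to $P$, and each group attains the same SINR.

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative parameters (assumed): G groups of K UTs, co-pilot training
G_groups, K = 3, 4
N, P, tau_cp = 100, 10.0, G_groups
beta = rng.uniform(0.1, 1.0, (G_groups, K))
p_utot = np.full((G_groups, K), 0.2)

# Upsilon_i and the optimal uplink powers (Appendix E)
Upsilon = np.min(p_utot * beta**2 / (1.0 + beta * P), axis=1)
p_u = (1.0 + beta * P) / beta**2 * Upsilon[:, None]

# max-min SINR Gamma and per-group downlink powers
denom = (1.0 + tau_cp * np.sum(p_u * beta, axis=1)) / Upsilon
Gamma = P * tau_cp * N / np.sum(denom)
p_dl = Gamma * denom / (tau_cp * N)

# verify: budgets respected, powers sum to P, SINRs equalized
sinr = Upsilon * N * p_dl * tau_cp / (1.0 + tau_cp * np.sum(p_u * beta, axis=1))
print(np.sum(p_dl))   # = P
print(sinr)           # each group attains Gamma
```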
\section*{Appendix F - MMF problem for ZF-mucp}
First we prove that at the optimal solution $ \mathrm{SINR}_{jk}^{\mathrm{ZF-mucp}} = \mathrm{SINR}_{it}^{\mathrm{ZF-mucp}} \; \forall t,k,i,j$. Let us denote the user with the minimum SINR in the $i$th group as $kmin_{i}$, i.e., $kmin_{i} = \argmin_{k \in \mathcal{K}_{i}} \mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}}$. Now we prove that at the optimal solution $ \mathrm{SINR}_{jkmin_{j}}^{\mathrm{ZF-mucp}} = \mathrm{SINR}_{ikmin_{i}}^{\mathrm{ZF-mucp}}$. Assume the contrary, then $\exists j,i \in \mathcal{G}$ such that $ \mathrm{SINR}_{jkmin_{j}}^{\mathrm{ZF-mucp}} > \mathrm{SINR}_{ikmin_{i}}^{\mathrm{ZF-mucp}}$. Now one can change $p_{j}^{dl}$ and $p_{i}^{dl}$ respectively to $p_{j}^{dl} - \delta$ and $p_{i}^{dl} + \delta$ with $0 < \delta < \big( \mathrm{SINR}_{jkmin_{j}}^{\mathrm{ZF-mucp}} - \mathrm{SINR}_{ikmin_{i}}^{\mathrm{ZF-mucp}} \big) \dfrac{1+(\beta_{jkmin_{j}}-\gamma_{jkmin_{j}}^{cp})P}{(N-G) \gamma_{jkmin_{j}}^{cp}}$ and improve the minimum SINR of the system, which contradicts our optimality assumption. Now we prove that at the optimal solution the SINRs of all the UTs within each group are the same, i.e., $\mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} = \mathrm{SINR}_{it}^{\mathrm{ZF-mucp}} \; \forall k,t \in \mathcal{K}_{i}, \forall i \in \mathcal{G}$. Assume the contrary, i.e., $\exists k,t \in \mathcal{K}_{i}$ such that $ \mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} > \mathrm{SINR}_{it}^{\mathrm{ZF-mucp}} $. Then one can improve the minimum SINR of this group by reducing $p_{ik}^{u}$ to $p_{ik}^{u} - \delta$, where $0 < \delta < \dfrac{1+(\beta_{ik}-\gamma_{ik}^{cp})P}{p_{i}^{dl} (N-G)}(\mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} - \mathrm{SINR}_{it}^{\mathrm{ZF-mucp}})$. Hence at the optimal solution the SINRs of all users within group $i$ are equal and we have
\begin{align}
\Delta_{i} = \dfrac{\gamma_{ik}^{cp}}{1+(\beta_{ik}-\gamma_{ik}^{cp})P} = \dfrac{\gamma_{it}^{cp}}{1+(\beta_{it} - \gamma_{it}^{cp})P} \; \forall t,k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}.
\end{align}
Equivalently we have $\gamma_{ik}^{cp}(1+P\beta_{it}) = \gamma_{it}^{cp}(1+P\beta_{ik}) \; \forall t,k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}$. Therefore
\begin{align}
\Upsilon_{i} = \dfrac{p_{ik}^{u} \beta_{ik}^{2}}{1+\beta_{ik}P} = \dfrac{p_{it}^{u} \beta_{it}^{2}}{1+\beta_{it}P} \; \forall t,k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\end{align}
where $\Upsilon_{i}$ is a fixed constant. Now note that it is exactly the same as \eqref{Upsiloneq} and hence the optimal uplink powers are given as
\begin{align}
p_{ik}^{u*} = \dfrac{1+\beta_{ik}P}{\beta_{ik}^{2}} \Upsilon_{i} \; \forall k \in \mathcal{K}_{i}, \forall i \in \mathcal{G}
\end{align}
where $\Upsilon_{i} = \min_{k \in \mathcal{K}_{i}} \dfrac{p_{ik}^{utot} \beta_{ik}^{2}}{1+\beta_{ik}P} $. Using the above result and after a straightforward calculation we obtain $\Delta_{i} = \dfrac{\tau_{p}^{cp} \Upsilon_{i}}{1 + \tau_{p}^{cp}(E_{i} - P \Upsilon_{i})} \; \forall i \in \mathcal{G}$, where $E_{i} = K_{i} \Upsilon_{i} P + \Upsilon_{i} \sum_{m=1}^{K_{i}} \dfrac{1}{\beta_{im}}$. Since we proved that the SINR is equal for all UTs, we have $\Gamma = \mathrm{SINR}_{ik}^{\mathrm{ZF-mucp}} = (N-G) \Delta_{i} p_{i}^{dl}$, where $\Gamma$ is a fixed constant. Now, $p_{i}^{dl} = \dfrac{\Gamma}{(N-G) \Delta_{i}}$, and summing over all downlink powers and using the total available power constraint we obtain \eqref{ZFmucpSINR} and \eqref{ZFmucpDLpower} for $\Gamma$ and $p_{i}^{dl*}$, respectively.
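The resulting ZF-mucp power allocation can be checked in the same way as the MRT-mucp one; in the sketch below (assumed illustrative values, $\tau_{p}^{cp}=G$) the total power constraint then implies $\Gamma = (N-G) P \big( \sum_i 1/\Delta_i \big)^{-1}$, and the per-group SINRs $(N-G)\Delta_i p_i^{dl}$ all equal $\Gamma$.

```python
import numpy as np

rng = np.random.default_rng(5)
# Illustrative parameters (assumed): same setup as the MRT-mucp check
G_groups, K = 3, 4
N, P, tau_cp = 100, 10.0, G_groups
beta = rng.uniform(0.1, 1.0, (G_groups, K))
p_utot = np.full((G_groups, K), 0.2)

# optimal uplink powers share the MRT-mucp structure (Appendix F)
Upsilon = np.min(p_utot * beta**2 / (1.0 + beta * P), axis=1)

# Delta_i and the resulting max-min SINR / downlink powers
E = K * Upsilon * P + Upsilon * np.sum(1.0 / beta, axis=1)
Delta = tau_cp * Upsilon / (1.0 + tau_cp * (E - P * Upsilon))
Gamma = (N - G_groups) * P / np.sum(1.0 / Delta)
p_dl = Gamma / ((N - G_groups) * Delta)

print(np.sum(p_dl))                    # = P
print((N - G_groups) * Delta * p_dl)   # each group attains Gamma
```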
\bibliographystyle{IEEEtran}
\label{sec:intro}
\par In the present work, we revisit the complementarity between strangeness oscillations and lifetime information in the neutral kaon system previously studied by A. Bramon, G. Garbarino and B. Hiesmayr (BGH) \cite{BGH_PRL,BGH_TwoPath,BGH_EPJC}. However, instead of using the Wigner--Weisskopf approach to the isolated, free-kaon propagation, we consider an open systems model in which the neutral kaon's weak decay states are included as a second party. The interaction between the two subsystems is given by a completely positive probability-preserving quantum dynamical map. Our model coincides with that proposed by Caban \emph{et al}. \cite{Caban_Phys_Review_A} (and also discussed in Bertlmann \emph{et al}. \cite{BGH_OpenSys}) upon partial trace of the decay products, but it has the new feature of allowing bipartite entanglement to be studied. We examine quantitatively the effects of these correlations on complementarity in the context of neutral kaon interferometry.
\par In this case, the quantitative duality relation of the Greenberger-Yasin type \cite{Greenberger_Yasin} considered in Refs. \cite{BGH_PRL,BGH_TwoPath,BGH_EPJC} must extend to a ``triality'' relation incorporating a quantitative entanglement measure. We show here that a new such quantitative complementarity relation holds:
\vspace{-0.2cm}
\begin{equation}\label{eq:fidelity_compl}
\mathcal{V}(\tau ) \leq \sqrt{1-\mathcal{D}^2 (\tau ) - \mathcal{S}^2 (\tau )}\quad \forall \tau\in I ,
\end{equation}
\noindent where $\tau $ denotes the proper time and $I$ is the time interval relevant for the analysis (see Sec. \ref{sec:model}). This inequality is similar to that proposed by M. Jakob and J. Bergou for bipartite systems \cite{Jakob_Bergou}. Here, $\mathcal{D}$ denotes the distinguishability between the decay products states corresponding to the distinct kaon propagation modes $K_S $, $K_L $. As we will see, $\mathcal{D}$ quantifies the increasing amount of lifetime information which becomes available (due to entanglement correlations) in the decay states subsystem. The associated visibility $\mathcal{V}$ quantifies the amount of wave-like path interference between these states, while $\mathcal{S}$ denotes the von Neumann entropy of the kaon state and measures bipartite entanglement.~We will demonstrate that the new quantitative complementarity relation (\ref{eq:fidelity_compl}) also accounts for the complementarity between strangeness oscillations and lifetime information considered by BGH. The results allow us to visualize and discuss in a clear way through the $K^{0}$--$\overline{K}\,^{0}$ oscillations the essential role played by entanglement in wave-particle duality.
\vspace{-0.3cm}
\section{The Model}
\label{sec:model}
\par While there are several open quantum system models available in the literature offering completely-positive, probability preserving descriptions of the composite neutral kaon plus weak decay products system \cite{Caban_Phys_Review_A, BGH_OpenSys, Caban_Phys_Lett_A, Caban_Phys_Lett_A_2, Smolinski_1, Smolinski_2}, here we consider a model in which these two subsystems are treated as different parties. Therefore, we take the composite system state space as the tensor product $\mathcal{H}=\mathcal{H}_Q \otimes \mathcal{H}_P $ between the kaon (quanton) Hilbert space $\mathcal{H}_Q $ and the space of decay products states $\mathcal{H}_P $.
\par A short-lived kaon $K^{0}_S $ always decays into two pions, either $\pi ^{+} + \pi ^{-}$ or $\pi ^{0} + \pi ^{0}$. On the other hand, a long-lived kaon $K^{0}_L $ has several decay modes: it can decay into three neutral pions or $\pi ^{+} + \pi^{-} + \pi ^{0}$, but there are also the semileptonic decays into $\pi ^{\pm } + \mu ^{\mp }+ \nu _{\mu }$, $\pi ^{\pm } + e^{\mp } + \nu _{e }$, and the considerably rare $K^{0}_L $ decays into two pions. However, we will not consider here this last decay mode associated with charge-parity violation. In this case, the state of the decay products subsystem can be labeled by its pion content. Thus, we take $\mathcal{H}_P $ as the Hilbert space spanned by the (orthonormalized) vectors $\arrowvert 0_\pi \rangle $, $\arrowvert\pi\pi\rangle$, and $\arrowvert\widetilde{\pi\pi}\rangle $, which represent respectively states with no pions, two pions, and one or three pions.
\par The kaon state space is taken as the direct sum $\mathcal{H}_Q = H_{0 }\oplus H_{K^0 }$, where $H_{0 }$ is the Hilbert space spanned by the vector $\arrowvert 0_{K}\rangle $ representing the vacuum (absence of kaon) and $H_{K^0 }$ is the usual kaon Hilbert space spanned by the strangeness eigenstates $\arrowvert K^{0} \rangle $, $\arrowvert \overline{K}^{0}\rangle $. Under our assumption of charge-parity symmetry, the neutral kaon mass eigenstates $\arrowvert K_{S}^{0}\rangle $, $\arrowvert K_{L}^{0}\rangle $ corresponding to the short-lived and long-lived propagation modes are
\vspace{-0.3cm}
\begin{align}\label{eq:kaon_basis}
\arrowvert K_{S}^0 \rangle=\frac{1}{\sqrt{2}}(\arrowvert K^0 \rangle +\arrowvert \overline{K}^0 \rangle),\quad
\arrowvert K_{L}^0 \rangle=\frac{1}{\sqrt{2}}(\arrowvert K^0 \rangle -\arrowvert \overline{K}^0 \rangle)\, ,
\end{align}
\noindent and we have $\langle K_{S}^0 \arrowvert K_{L}^0 \rangle = 0$. We assume that $\arrowvert 0_{K}\rangle $ is normalized and orthogonal to $\arrowvert K^{0}_S \rangle $, $\arrowvert K^{0}_L \rangle $.
\vspace{0.2cm}
\par So much for the kinematic aspects. Let us now turn to dynamics. The only physically meaningful initial configurations are those with a kaon and no pion -- that is, factorized initial conditions of the form $\arrowvert \Psi (0) \rangle = \left(\, \alpha \arrowvert K_{S}^{0}\rangle + \beta \arrowvert K_{L}^{0}\rangle\, \right)\arrowvert 0_\pi \rangle$. We assume that evolution takes place entirely in the subspace $\mathcal{W}\subset\mathcal{H} $ spanned by $\{\arrowvert K_S^0 \rangle\arrowvert 0_\pi \rangle , \arrowvert K_L^0 \rangle\arrowvert 0_\pi \rangle , \arrowvert 0_{K}\rangle\arrowvert\pi\pi\rangle,
\arrowvert 0_{K}\rangle\arrowvert\widetilde{\pi\pi}\rangle \}$ and according to the quantum map
\begin{widetext}
\begin{align}\label{eq:dyn_map}
\arrowvert 0_{K}\rangle \arrowvert 0_\pi \rangle &\longrightarrow \arrowvert 0_{K}\rangle \arrowvert 0_\pi \rangle \\ \nonumber
\arrowvert K_S^0 \rangle\arrowvert 0_\pi \rangle &\longrightarrow e^{-\tfrac{1}{2}\Gamma _S \tau }e^{-im_S \tau }\arrowvert K_S^0 \rangle\arrowvert 0_\pi \rangle + \sqrt{1-e^{-\Gamma _S \tau }}\arrowvert 0_{K}\rangle\arrowvert\pi\pi\rangle \\ \nonumber
\arrowvert K_L^0 \rangle\arrowvert 0_\pi \rangle &\longrightarrow e^{-\tfrac{1}{2}\Gamma _L \tau }e^{-im_L \tau }\arrowvert K_L^0 \rangle\arrowvert 0_\pi \rangle + \sqrt{1-e^{-\Gamma _L \tau }}\arrowvert 0_{K}\rangle\arrowvert\widetilde{\pi\pi}\rangle\; ,
\end{align}
\end{widetext}
\noindent where $m_S $ and $\Gamma _S = \frac{1}{\tau _S }$ (resp. $m_L $ and $\Gamma _L = \frac{1}{\tau _L }$) are the $K^0_S $ (resp. $K^0_L $) mass and decay width \cite{footnote_1}, and where $p_S (\tau )\equiv 1-e^{-\Gamma _S \tau } $ (resp. $p_L (\tau )\equiv 1-e^{-\Gamma _L \tau } $) denotes the probability for the state $\arrowvert K_S^0 \rangle\arrowvert 0_\pi \rangle $ (resp. $\arrowvert K_L^0 \rangle\arrowvert 0_\pi \rangle $) to have mapped at time $\tau $ into $ \arrowvert 0_{K}\rangle\arrowvert\pi\pi\rangle $ (resp. $ \arrowvert 0_{K}\rangle\arrowvert\widetilde{\pi\pi}\rangle $), the corresponding amplitudes being $\sqrt{p_S (\tau )}$ and $\sqrt{p_L (\tau )}$. Moreover, we assume that all the interactions experienced by the kaon with other degrees of freedom have been included in the description above. In this case the composite system density operator $\rho(\tau )=\arrowvert \Psi (\tau ) \rangle \langle \Psi (\tau ) \arrowvert $ for the initial $\arrowvert \Psi (0) \rangle $ can be assumed to remain pure in the course of dynamics. From Eq. (\ref{eq:dyn_map}),
\vspace{-0.2cm}
\begin{widetext}
\begin{align}\label{eq:density}
\arrowvert \Psi (\tau ) \rangle &= \alpha e^{-\tfrac{1}{2}\Gamma _S \tau }e^{-im_S \tau }\arrowvert K_S^0 \rangle\otimes\arrowvert 0_\pi \rangle + \beta e^{-\tfrac{1}{2}\Gamma _L \tau }e^{-im_L \tau }\arrowvert K_L^0 \rangle\otimes\arrowvert 0_\pi \rangle + \alpha \sqrt{1-e^{-\Gamma _S \tau }}\arrowvert 0_{K}\rangle\otimes\arrowvert\pi\pi\rangle \\ \nonumber
&+ \beta \sqrt{1-e^{-\Gamma _L \tau }}\arrowvert 0_{K}\rangle\otimes\arrowvert\widetilde{\pi\pi}\rangle \; .
\end{align}
\end{widetext}
\par In the sequel, we will focus our attention on kaons produced in strangeness eigenstates -- say, $\arrowvert K^{0} \rangle $ states generated by strong reactions such as $\pi^{-} p \rightarrow K^0 \Lambda $. So we take $\alpha = \frac{1}{\sqrt{2}} = \beta $. Substituting this into $\rho (\tau )$, we see that the reduced kaon state is
\begin{align}\label{eq:reduced_kaon}
&\rho _Q (\tau ) = \mathrm{Tr}_{P}\left[ \rho (\tau ) \right] = \frac{1}{2}e^{-\Gamma _S \tau } \arrowvert K_S^0 \rangle \langle K_S^0 \arrowvert \\ \nonumber
&+ \frac{1}{2}e^{-\Gamma _L \tau } \arrowvert K_L^0 \rangle \langle K_L^0 \arrowvert + \left( 1 - \frac{e^{-\Gamma _S \tau }}{2}-\frac{e^{-\Gamma _L \tau }}{2}\right)\arrowvert 0_K \rangle \langle 0_K \arrowvert \\ \nonumber
&+ \left\{ \frac{1}{2}e^{-\frac{1}{2}\Gamma _S \tau }e^{-\frac{1}{2}\Gamma _L \tau }e^{i\Delta m \tau }\arrowvert K_S^0 \rangle \langle K_L^0 \arrowvert + \mathrm{H.C.}\right\}\;
\end{align}
\noindent where $\Delta m \equiv m_L - m_S $. This coincides with the kaon state evolution considered in Ref. \cite{Caban_Phys_Review_A} by Caban \emph{et al}.~\cite{footnote_2}. In this work, the authors deduced the general form in (\ref{eq:reduced_kaon}) for the kaon's dynamics under the assumptions that the kaon state evolution must be (i) completely positive and probability preserving, and (ii) compatible with the Wigner--Weisskopf phenomenological prescription \cite{footnote_3}.~Therefore, these properties also hold for the reduced kaon state $\rho _Q (\tau ) $ in the present model (\ref{eq:density}).~It is straightforward to check that the composite system evolution given by $\rho (\tau )$ is also completely positive and probability preserving.
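As a numerical sanity check (ours, not part of the original analysis), the reduced kaon state above can be built as a $3\times 3$ matrix in the basis $\{ \arrowvert K_S^0 \rangle , \arrowvert K_L^0 \rangle , \arrowvert 0_K \rangle \}$, treated as orthonormal (CP violation neglected), and verified to stay positive and of unit trace. The widths are measured in units of $1/\tau _S $ with $\Gamma _S = 579\, \Gamma _L $ as in the Appendix, and $\Delta m \approx 0.47\, \Gamma _S $ is a representative value assumed here for illustration.

```python
import numpy as np

# widths in units of 1/tau_S; dm ~ 0.47 Gamma_S is an assumed representative value
g_s, g_l, dm = 1.0, 1.0 / 579.0, 0.47

def rho_q(tau):
    # reduced kaon state in the (assumed orthonormal) basis {|K_S>, |K_L>, |0_K>}
    c = 0.5 * np.exp(-0.5 * (g_s + g_l) * tau) * np.exp(1j * dm * tau)
    return np.array(
        [[0.5 * np.exp(-g_s * tau), c, 0.0],
         [np.conj(c), 0.5 * np.exp(-g_l * tau), 0.0],
         [0.0, 0.0, 1.0 - 0.5 * np.exp(-g_s * tau) - 0.5 * np.exp(-g_l * tau)]]
    )

for tau in np.linspace(0.0, 4.79, 50):
    r = rho_q(tau)
    assert np.isclose(np.trace(r).real, 1.0)      # probability preserving
    assert np.linalg.eigvalsh(r).min() > -1e-12   # positive semidefinite
```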
\vspace{-0.2cm}
\section{Results}
\label{sec:model}
\par We apply our model now to a quantitative analysis of complementarity in the $K^{0}$--$\overline{K}\,^{0}$ system, including the duality between strangeness oscillations and lifetime information. But here we investigate the phenomenon in light of the new feature presented by the model's bipartite character: entanglement. Our goal is to examine its role on complementarity in the context of neutral kaon interferometry.
\par We can restrict our analysis to the proper time interval $0\leq \tau \leq \tau _0 $, where $\tau _0 = 4.79 \tau _S $. The reason is that it has been verified experimentally \cite{BGH_PRL} that neutral kaons decaying after $\tau _0 $ can be regarded as $K_L^0 $ kaons with negligible error probability. In other words: at $\tau = \tau _0 $ one can already consider to have complete width information on the kaon. Therefore, we assume in what follows that $\tau $ ranges from $0 $ to $\tau _0 $.
\par For pure composite system states, the degree of mixedness of a reduced party state both qualifies and quantifies entanglement. Here we will use the von Neumann entropy $\mathcal{S}$ of the reduced pionic subsystem state. It can be readily evaluated from
\begin{align}\label{eq:reduced_pion}
&\rho _P (\tau ) = \mathrm{Tr}_{Q}\left[ \rho (\tau ) \right] = \frac{1-e^{-\Gamma _S \tau } }{2} \arrowvert\pi\pi\rangle\langle\pi\pi\arrowvert \\ \nonumber
&+\frac{1-e^{-\Gamma _L \tau } }{2} \arrowvert\widetilde{\pi\pi}\rangle\langle\widetilde{\pi\pi}\arrowvert +\left( \frac{e^{-\Gamma _S \tau }}{2}+\frac{e^{-\Gamma _L \tau }}{2} \right) \arrowvert 0_{\pi }\rangle\langle 0_{\pi }\arrowvert \\ \nonumber
&+\left\{ \frac{1}{2}\sqrt{1-e^{-\Gamma _S \tau }}\sqrt{1-e^{-\Gamma _L \tau }}\arrowvert \pi\pi \rangle\langle \widetilde{\pi\pi }\arrowvert + \mathrm{H.C.} \right\}\; ,
\end{align}
\vspace{0.2cm}
\noindent whose eigenvalues are $\{ 0, x, 1-x \}$ for $$x(\tau )\equiv \dfrac{e^{-\Gamma _S \tau }+e^{-\Gamma _L \tau }}{2}\; ,$$ \noindent with $0\leq x \leq 1 $ $\forall \tau $. Direct numerical analysis reveals that $$\mathcal{S}=-x\ln x - (1-x)\ln (1-x)$$ \noindent is monotone increasing in $[0, \tau _0 ]$ (see Fig.~1).
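The stated monotonicity of $\mathcal{S}$ is easy to confirm with a few lines of NumPy (an illustrative sketch, not part of the original analysis); proper time is measured in units of $\tau _S $, so $\Gamma _S = 1$ and $\Gamma _L = \Gamma _S /579$ as in the Appendix.

```python
import numpy as np

# decay widths in units of 1/tau_S (Gamma_S = 579 Gamma_L, as in the Appendix)
g_s, g_l = 1.0, 1.0 / 579.0
tau = np.linspace(1e-6, 4.79, 2000)   # proper time in (0, tau_0], tau_0 = 4.79 tau_S

# eigenvalues of the reduced pion state are {0, x, 1 - x}
x = 0.5 * (np.exp(-g_s * tau) + np.exp(-g_l * tau))
# von Neumann entropy of the reduced state (in nats)
S = -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

assert np.all(np.diff(S) > 0)   # monotone increasing on [0, tau_0]
```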
\par As entanglement correlations are dynamically generated, information about the kaon's $\{ \arrowvert K_{S}^{0}\rangle , \arrowvert K_{L}^{0}\rangle \}$ component leaks to the pionic subsystem. The natural quantifier of the amount of lifetime information which thus becomes available to be retrieved (through the pionic state) is the distinguishability
\begin{equation}\label{eq:disting}
\mathcal{D}(\tau ) = \frac{1}{2}\| \rho _{P}^{(S)}(\tau ) - \rho _{P}^{(L)}(\tau ) \| \, .
\end{equation}
\vspace{0.1cm}
\noindent It is given by the trace distance between the pionic subsystem states $\rho _{P}^{(S)}(\tau ) = 2 \langle K_{S}^0 \arrowvert \rho (\tau ) \arrowvert K_{S}^0 \rangle = e^{-\tau \Gamma _S }\arrowvert 0_{\pi }\rangle\langle 0_{\pi }\arrowvert $ and $\rho _{P}^{(L)}(\tau ) = 2\langle K_{L}^0 \arrowvert \rho (\tau ) \arrowvert K_{L}^0 \rangle = e^{-\tau \Gamma _L }\arrowvert 0_{\pi }\rangle\langle 0_{\pi }\arrowvert $ corresponding to the distinct kaon propagation modes $K_S $, $K_L $. Due to the generation of entanglement, we expect $\mathcal{D}(\tau )$ to also increase monotonically in $[0,\tau _0 ]$. In fact, we found that $\mathcal{D}(\tau )=\frac{1}{2}\left|e^{-\tau \Gamma _S }-e^{-\tau \Gamma _L }\right|$ is an increasing function of $\tau $ in this interval. The numerical results are summarized in Fig.~1.
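The closed form $\mathcal{D}(\tau )=\frac{1}{2}\left|e^{-\tau \Gamma _S }-e^{-\tau \Gamma _L }\right|$ makes the claimed monotonic growth of lifetime information straightforward to verify numerically (again a sketch of ours, with widths in units of $1/\tau _S $):

```python
import numpy as np

g_s, g_l = 1.0, 1.0 / 579.0          # widths in units of 1/tau_S
tau = np.linspace(0.0, 4.79, 2000)

# trace distance between the conditional pion states rho_P^(S) and rho_P^(L)
D = 0.5 * np.abs(np.exp(-g_s * tau) - np.exp(-g_l * tau))

assert D[0] == 0.0                   # no which-width information at tau = 0
assert np.all(np.diff(D) > 0)        # lifetime information grows on [0, tau_0]
```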
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.38]{fig1new.png}
\caption{(color online) The solid black lines show $\mathcal {D}^2 + \mathcal {S}^2 $ increasing towards $1$ while $\mathcal{V}^2 $ decreases correspondingly, such that $\mathcal{V}^2 \leq 1 - \mathcal {D}^2 - \mathcal {S}^2 $. Proper time ranges from $\tau =0$ to $\tau = 4.79 \tau _S $. The green dot-dashed line shows $\mathcal{D}$. The purple dashed line shows $\mathcal{S}$.}\label{fig1}
\end{center}
\end{figure}
\par The quantity naturally complementary to $\mathcal{D}$ and playing the role of interferometric visibility here is the Uhlmann fidelity
\begin{equation}\label{eq:vis_fidelity}
\mathcal{V}(\tau ) = \mathcal{F}\left( \rho _{P}^{(S)}(\tau ) , \rho _{P}^{(L)}(\tau ) \right)\, ,
\end{equation}
\noindent where $\mathcal{F}\left( \hat{\rho }_1 , \hat{\rho }_2 \right)=\mathrm{Tr}\left[ \displaystyle\sqrt{\sqrt{\hat{\rho }_1 }\, \hat{\rho }_2 \sqrt{\hat{\rho }_1 }}\right]$. The fidelity is an ``overlap'' measure generalized to arbitrary mixed states $\hat{\rho }_1 , \hat{\rho }_2 $, therefore quantifying the visibility of quantum interferences between $\rho _{P}^{(S)}(\tau )$ and $\rho _{P}^{(L)}(\tau ) $. Moreover, it is well known to be related to the trace distance by the information-theoretic inequality
\begin{equation}\label{eq:inf_inequality}
\mathcal{F}(\hat{\rho }_1 , \hat{\rho }_2 )\leq \displaystyle\sqrt{1-\mathcal{D}^2 (\hat{\rho }_1 , \hat{\rho }_2 )}\; .
\end{equation}
\noindent We have $\mathcal{V}(\tau )=e^{-\Gamma \tau }$, where $\Gamma \equiv \frac{1}{2}(\Gamma _S + \Gamma _L )$.
\par As was pointed out by Jakob and Bergou in Ref.~\cite{Jakob_Bergou}, complementarity in bipartite systems must relate the single-partite properties distinguishability and visibility to the amount of entanglement. Here we have
\vspace{-0.2cm}
\begin{equation*}
\mathcal{V}^2 +\mathcal{D}^2 = e^{-\tau (\Gamma _S + \Gamma _L )}+\frac{\left( e^{-\Gamma _S \tau }-e^{-\Gamma _L \tau } \right) ^2 }{4}=x (\tau ) ^2 \; ,
\end{equation*}
\noindent in such a way that
\vspace{-0.2cm}
\begin{align*}
\mathcal{V}^2 +\mathcal{D}^2 +\mathcal{S}^2 = x^2 + [x\ln (x) + (1-x)\ln (1-x)]^2 \leq 1\, .
\end{align*}
\noindent Indeed, numerical analysis of the quantities $\mathcal {S}$, $\mathcal {D}$ and $\mathcal {V}$ shows that the inequality
\begin{equation}\label{eq:bergou}
\mathcal {V}^2 + \mathcal {D}^2 + \mathcal {S}^2 \leq 1
\end{equation}
\noindent holds within the relevant proper time interval $[0,\tau _0 ]$ (see Fig. 2). As $\mathcal {D}^2 + \mathcal {S}^2 $ increases towards (nearly) $1$ in $[0, \tau _0 ]$, the quantity $\mathcal{V}$ must correspondingly decrease towards $0$, thereby forcing the visibility of the wave-like interference phenomena to diminish.
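Both the identity $\mathcal{V}^2 +\mathcal{D}^2 = x^2 $ and the bound $\mathcal{V}^2 +\mathcal{D}^2 +\mathcal{S}^2 \leq 1$ can be checked over the whole interval with a short NumPy script (an illustrative check of ours, widths in units of $1/\tau _S $):

```python
import numpy as np

g_s, g_l = 1.0, 1.0 / 579.0
g = 0.5 * (g_s + g_l)
tau = np.linspace(1e-6, 4.79, 2000)

x = 0.5 * (np.exp(-g_s * tau) + np.exp(-g_l * tau))
S = -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))   # entanglement entropy
D = 0.5 * np.abs(np.exp(-g_s * tau) - np.exp(-g_l * tau))
V = np.exp(-g * tau)                                 # fidelity visibility

assert np.allclose(V**2 + D**2, x**2)                # V^2 + D^2 = x^2
assert np.all(V**2 + D**2 + S**2 <= 1.0 + 1e-12)     # triality bound
```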
\vspace{0.5cm}
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.38]{fig2new.png}
\caption{(color online) The quantitative complementarity relation (\ref{eq:bergou}) for $\tau \in [0,\tau _0 ]$. The solid violet line shows $\mathcal {V}^2 + \mathcal {D}^2 + \mathcal {S}^2 $. The upper bound $1$ is shown dashed, in black.}\label{fig2}
\end{center}
\end{figure}
\vspace{-1.0cm}
\subsection*{Strangeness Oscillations}
\label{subsec:strangeness}
\par To see that Eq.~(\ref{eq:bergou}) also accounts for the complementarity between strangeness oscillations and lifetime information in $[0,\tau _0 ]$, notice first that the visibility of $K^{0}\overline{K}^{0}$ oscillations must be defined here as the quantity $\mathcal{V}_0 (\tau )$ such that
\begin{equation}\label{eq:def_vis_strange}
2 \langle \overline{K}^{0} \arrowvert \mathrm{Tr}_{P} [\rho (\tau )] \arrowvert \overline{K}^{0} \rangle = F(\tau )\{ 1-\mathcal{V}_0 (\tau )\cos (\Delta m \tau ) \}\, .
\end{equation}
\noindent That is, by the oscillatory term in the probability that the initial $\arrowvert K^0 \rangle $ is detected in the strangeness eigenstate $\arrowvert \overline{K}^{0} \rangle $ at the later time $\tau $. Direct calculation gives
\begin{equation}\label{eq:vis_strange}
\mathcal{V}_0 (\tau ) = \dfrac{2e^{-\frac{1}{2}(\Gamma _S + \Gamma _L )\tau}}{e^{-\Gamma _S \tau } + e^{-\Gamma _L \tau } } \;\; .
\end{equation}
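As a side observation (ours), this expression is a hyperbolic secant, $\mathcal{V}_0 (\tau ) = \mathrm{sech}(\delta \tau )$ with $\delta \equiv \frac{1}{2}(\Gamma _S - \Gamma _L )$, which makes its monotonic decrease on $[0,\tau _0 ]$ evident; a quick numerical check:

```python
import numpy as np

g_s, g_l = 1.0, 1.0 / 579.0          # widths in units of 1/tau_S
tau = np.linspace(0.0, 4.79, 2000)

V0 = 2.0 * np.exp(-0.5 * (g_s + g_l) * tau) \
    / (np.exp(-g_s * tau) + np.exp(-g_l * tau))

# equivalent closed form: V0 = sech(delta * tau), delta = (Gamma_S - Gamma_L)/2
delta = 0.5 * (g_s - g_l)
assert np.allclose(V0, 1.0 / np.cosh(delta * tau))

assert np.isclose(V0[0], 1.0)        # full oscillation visibility at tau = 0
assert np.all(np.diff(V0) < 0)       # decreases as lifetime information grows
```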
\vspace{0.1cm}
\par Next, a straightforward argument (see the Appendix) shows that the ratio $\frac{d\mathcal{V}_0}{d\tau }\Big/\frac{d\mathcal{V}}{d\tau }$ between its derivative and that of the fidelity visibility (\ref{eq:vis_fidelity}) is positive in the time interval $0\leq \tau \leq \tau _0 \,$. The quantities $\mathcal{V},\mathcal{V}_0 $ are then either both increasing or both decreasing in $[0,\tau _0 ]$. Therefore, we see from Eq.~(\ref{eq:inf_inequality}) that the increase of lifetime information as measured by $\mathcal{D}(\tau ) $ in fact enforces (not only $\mathcal{V} $, but also) the visibility $\mathcal{V}_0 $ of the strangeness oscillations to decrease in this interval.
\section{Conclusions}
\label{sec:conclusions}
\par Entanglement plays a crucial role in quantum mechanical complementarity for bipartite systems.~We have shown in the present work how it can be clearly illustrated and discussed in the kaon-antikaon oscillating system.~We considered a bipartite model where a single neutral kaon interacts with the environment consisting of its weak interaction decay products.~From an interferometric point of view, the kaon is treated as the interfering object (quanton) and lifetime/width information plays the role of which-way information.~This is similar to the \emph{neutral kaon interferometry} of Bramon, Garbarino and Hiesmayr \cite{BGH_EPJC}. We verified that, as entanglement correlations are established between these two parties, lifetime information leaks and becomes available in the environmental state.~Corresponding to the entanglement generation and acquisition of lifetime information, we saw how the visibility of which-way interference is reduced. The interplay between the single-particle properties visibility/distinguishability and entanglement was proved to be governed by a quantitative complementarity relation:
\vspace{-0.3cm}
\begin{equation*}
\mathcal {V}^2 + \mathcal {D}^2 + \mathcal {S}^2 \leq 1
\end{equation*}
\noindent This inequality is similar to the one proposed by Jakob and Bergou in their analysis of wave-particle duality in bipartite systems \cite{Jakob_Bergou}.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.38]{fig3new.png}
\caption{(color online) Upper bounds for $\mathcal{V}^2 $ (the dot-dashed green line) given by $1-\mathcal{D}^2 $ (the dashed purple line) and by $1-\mathcal {D}^2 -\mathcal {S}^2 $ (the solid black line).}\label{fig3}
\end{center}
\end{figure}
\vspace{-0.1cm}
\par In this direction, it is interesting to notice that the inclusion of the quantitative entanglement measure in this ``triality'' relation is very important if we want to see the reduction of the interference visibility $\mathcal{V}$ as enforced by quantum complementarity.~In Fig.~3, we compare the upper bounds for $\mathcal{V}^2 $ given by $1-\mathcal{D}^2 $ alone and by $1-\mathcal {D}^2 -\mathcal {S}^2 $. The upper bound including entanglement is much sharper and consistent with the actual reduction in $\mathcal{V}$.
\par We have also shown how our inequality accounts for the complementarity between strangeness oscillations and lifetime information in the time interval relevant for the analysis. This demonstrates consistency with the previous analysis of complementarity in the neutral kaon system, and with the general principle that the visibility of any quantum interference phenomenon whatsoever must reduce when which-way information becomes available \cite{Scully_Englert_Walther, Rempe,Englert_Scully_and_others}.
\textbf{Acknowledgments.} G.S. and M.S. would like to thank the Departamento de Ci\^{e}ncias Exatas e Tecnol\'{o}gicas/UESC -- Ilh\'{e}us for the hospitality and financial support during the development of this work. M.S. also acknowledges financial support from the Brazilian institutions CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and FAPEMIG (Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de Minas Gerais). J.G.O. acknowledges financial support from FAPESB (Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado da Bahia), AUXPE-FAPESB-3336/2014 number 23038.007210/2014-19.
\section*{Appendix}
\par Let us show that the ratio $\frac{d\mathcal{V}_0}{d\tau }\Big/\frac{d\mathcal{V}}{d\tau }$ between the derivatives of the visibility of strangeness oscillations (Eq. (\ref{eq:vis_strange})) and the fidelity visibility (Eq. (\ref{eq:vis_fidelity})) is positive in the time interval $0\leq \tau \leq \tau _0 \,$.~Observe that this dimensionless ratio is given by
\begin{widetext}
\begin{equation*}
\frac{d\mathcal{V}_0}{d\tau }\Big/\frac{d\mathcal{V}}{d\tau } = \frac{2}{e^{-\Gamma _S \tau }+e^{-\Gamma _L \tau }}\left( 1 - \frac{\Gamma _S e^{-\Gamma _S \tau } + \Gamma _L e^{-\Gamma _L \tau }}{\Gamma (e^{-\Gamma _S \tau }+e^{-\Gamma _L \tau })} \right) > 0 \quad \forall 0\leq \tau \leq \tau _0 \, .
\end{equation*}
\end{widetext}
\noindent Therefore, it is enough to show that
\vspace{-0.2cm}
\begin{equation*}\label{eq:ratio}
\frac{\Gamma _S e^{-\Gamma _S \tau }+\Gamma _L e^{-\Gamma _L \tau }}{\Gamma \left( e^{-\Gamma _S \tau }+e^{-\Gamma _L \tau } \right) } \leq 1\, , \quad \forall \, 0\leq \tau \leq \tau _0 \, .
\end{equation*}
\par In order to do this, notice that since $\Gamma _S = 579\Gamma _L $, we have $e^{-\Gamma _S \tau }\leq e^{-\Gamma _L \tau }$ for every $0\leq \tau \leq \tau _0 $. Thus, in the interval $[0,\tau _0 ]$ we have $$(\Gamma _S -\Gamma _L )e^{-\Gamma _S \tau }\leq (\Gamma _S -\Gamma _L )e^{-\Gamma _L \tau },$$ \noindent or
\begin{equation*}
\frac{1}{2}\left( \Gamma _S e^{-\Gamma _S \tau }+\Gamma _L e^{-\Gamma _L \tau } \right) \leq \frac{1}{2}\left( \Gamma _L e^{-\Gamma _S \tau }+\Gamma _S e^{-\Gamma _L \tau } \right)\; .
\end{equation*}
\noindent Adding $\frac{1}{2}\left( \Gamma _S e^{-\Gamma _S \tau }+\Gamma _L e^{-\Gamma _L \tau } \right)$ to both sides of the previous inequality gives
\begin{equation*}
\Gamma _S e^{-\Gamma _S \tau }+\Gamma _L e^{-\Gamma _L \tau }\leq \Gamma \left( e^{-\Gamma _S \tau }+ e^{-\Gamma _L \tau } \right)\; ,
\end{equation*}
\vspace{0.2cm}
\noindent as desired.
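The inequality just proved is also easy to confirm numerically over the whole interval (an illustrative check of ours, widths in units of $1/\tau _S $); note that equality holds exactly at $\tau = 0$.

```python
import numpy as np

g_s, g_l = 1.0, 1.0 / 579.0          # Gamma_S = 579 Gamma_L, in units of 1/tau_S
g = 0.5 * (g_s + g_l)
tau = np.linspace(0.0, 4.79, 2000)

ratio = (g_s * np.exp(-g_s * tau) + g_l * np.exp(-g_l * tau)) \
    / (g * (np.exp(-g_s * tau) + np.exp(-g_l * tau)))

assert np.isclose(ratio[0], 1.0)     # equality at tau = 0
assert np.all(ratio <= 1.0 + 1e-12)  # the bound proved above
```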
\newpage
\section{Introduction}
Quantum entanglement is perhaps the most striking phenomenon associated with quantum systems. Once seen as ``evidence'' for the alleged incompleteness of quantum mechanics \cite{einstein35}, entanglement has now found numerous applications as a resource for quantum communication \cite{bennett92,bennett93}, computation \cite{shor97}, and cryptography \cite{ekert91}.
The characterization of entanglement has become a key area of quantum information theory. Various schemes to quantify entanglement have been proposed (for a review, see \cite{horodecki09}). A particularly interesting approach is offered by the Lewenstein--Sanpera decomposition (LSD) \cite{lewenstein98} of a composite quantum state, which comprises a convex sum of a separable state and an entangled state.
Now for any 2-qubit system, there is a unique \emph{optimal} LSD. This optimal decomposition has a separable part with maximal weight, and the entangled part is a pure state. The weight of the pure state in this decomposition multiplied by its concurrence \cite{wootters97,wootters98} provides a measure of entanglement for the 2-qubit state \cite{lewenstein98, wellens01}.
Analytical expressions for the optimal LSD of some special cases were found in \cite{englert00}; these include the rank-2 states, the self-transposed states, and the generalized Werner states. Recently, a pair of coupled nonlinear equations for finding the optimal LSD of full-rank states was obtained by Wellens and Ku\'s \cite{wellens01}. However, an analytic solution to these equations is only available in the case where the separable part in the optimal LSD has full rank.
As noticed in \cite{Rezaee06}, the problem of finding the optimal LSD can in some cases be formulated as a semidefinite program (SDP). In the present paper, we systematically exploit this connection for 2-qubit states. We first rederive the Wellens--Ku\'s equations for full-rank states in a particularly transparent manner. The SDP formulation also enables us to efficiently compute the optimal decomposition by numerical means. We then extend our analysis to rank-3 states, and obtain necessary and sufficient optimality conditions. With the optimal LSDs of rank-2 states already known \cite{englert02}, this completes the characterization of optimal LSDs for 2-qubit states. We also obtain analytically the optimal LSD for the class of rank-3 states that are orthogonal to a product state and have a separable part of rank 3. For such states, the pure state in the optimal LSD is maximally entangled. This is similar to the full-rank case where the separable part is full rank. There, the nonseparable pure state is maximally entangled too \cite{karnas01}.
\section{Lewenstein--Sanpera Decompositions}
The construction of LSDs hinges on the fact that the set of separable states is convex. Any state $\rho$ of a composite system can be written as a convex sum of a separable state $\rho_\text{sep}$ and an entangled state $\rho_\text{ent}$. Information about nonseparability is then contained in $\rho_\text{ent}$: in the decomposition with maximal separable weight, the state $\rho$ is nonseparable if $\rho_\text{ent}$ does not vanish, and only then.
A simple dimensional argument \cite{lewenstein98} leads to the important consequence that for 2-qubit states, $\rho_\text{ent}$ is just a pure state. In general, there is a continuum of LSDs, $\rho = \lambda\rho_\text{sep}+(1-\lambda)\rho_\text{pure}$, for a given state. Among these is the \emph{optimal} LSD,
\begin{equation}
\rho = \mathcal{S}\varrho_\text{sep}+(1-\mathcal{S})\varrho_\text{pure},\quad \mathcal{S}=\text{max}\{\lambda\}\label{optlsd},
\end{equation}
where $\mathcal{S}$ is the \emph{degree of separability} of $\rho$. Throughout this paper, we will use calligraphic font to refer to quantities that are optimal.
When $\rho$ has full rank, $\varrho_\text{sep}$ is either full-rank or \mbox{rank-3}. In the latter situation, we denote its null eigenstate by $\rho_1$. Let us also introduce $\varrho^{\text{T}_1}_\text{sep}$, the partial transpose with respect to the first qubit of $\varrho_\text{sep}$. Then the \emph{barely-separable} property of $\varrho_\text{sep}$ \cite{karnas01} says that $\varrho^{\text{T}_1}_\text{sep}$ has a zero eigenvalue, whose corresponding null eigenstate shall be denoted by $\rho_2$. We quote the following results from the Wellens--Ku\'s paper \cite{wellens01}, with slight modifications to their notation.
In the optimal LSD of a full-rank state, $\varrho_\text{pure}$ is an eigenstate of $\mu\rho_1+\rho_2^{\text{T}_1}$, $\mu\geq0$ , with a nonpositive eigenvalue,
\begin{equation}
\exists\alpha,\mu\geq 0\qquad(\mu\rho_1+\rho_2^{\text{T}_1})\varrho_\text{pure}=-\alpha\varrho_\text{pure},\label{sepred:eig1}
\end{equation}
with $\mu\rho_1 \equiv 0$ if $\varrho_\text{sep}$ has full rank.
This is accompanied by the eigenstate equation for $\rho_2$,
\begin{equation}
\bigl(\rho-(1-\mathcal{S})\varrho_\text{pure}\bigr)^{\text{T}_1}\rho_2=0.\label{sepred:eig4}
\end{equation}
Equations \eqref{sepred:eig1} and \eqref{sepred:eig4} are the Wellens--Ku\'s equations. In general, there may be several solutions to these coupled eigenvalue equations. However, consistent with the uniqueness of the optimal LSD, there is only one with $\mu,\alpha\geq 0$ that gives a positive and separable, and thus permissible, $\rho_\text{sep}$.
The original proofs of these assertions, as well as the sufficiency of these equations, involve considerable technical detail. The aim of the present paper is to present an alternative derivation, and to generalize these equations to the reduced-rank case.
\section{Semidefinite Programming}
In semidefinite programming \cite{vandenberghe96}, a linear objective function is minimized subject to the constraint that an affine combination of hermitian matrices is positive semidefinite. We now briefly review some important features of SDP.
\subsection{The primal semidefinite program}
In its canonical form, the primal semidefinite program is formally stated as:
\begin{equation}
\begin{array}{ll}
\text{minimize} & \vec{c}^{\,\text{T}}\vec{x}\\
\text{subject to} & F(\vec{x})\geq 0,\label{primalprogram}
\end{array}
\end{equation}
where $F(\vec{x})=F_0+\sum_{i=1}^mx_iF_i$ and $\vec{x}\in\mathbb{R}^m$. The inputs for the primal problem are (i) the vector $\vec{c}\in\mathbb{R}^m$ characterizing the objective function, and (ii) the $m+1$ hermitian matrices $F_0,F_1,\ldots,F_m\in\mathcal{H}^n$ defining the linear matrix inequality, where $\mathcal{H}^n$ is the space of $n\times n$ Hermitian matrices. The primal problem is strictly feasible if there exists $\vec{x}$ such that $F(\vec{x})>0$. The primal optimal value is $p^*=\inf\{\vec{c}^{\,\text{T}}\vec{x}\;|\;F(\vec{x})\geq 0\}$, and we denote the primal optimal set by
\begin{equation}
\mathbf{X}_\text{opt}=\{\vec{x}\:|\:F(\vec{x})\geq 0\:\, \text{and}\:\, \vec{c}^{\,\text{T}}\vec{x}=p^*\}.\label{primaloptset}
\end{equation}
\subsection{The dual semidefinite program}
The dual problem associated with \eqref{primalprogram} is
\begin{equation}
\begin{array}{ll}
\text{maximize} &-\text{tr}\{F_0Z\}\\
\text{subject to} &\text{tr}\{F_iZ\}=c_i, i=1,\ldots,m,\\
{} & Z\geq 0.
\end{array}\label{dualprogram}
\end{equation}
The dual variable $Z=Z^{\dag}\in\mathcal{H}_+^n$ is subject to $m$ equality constraints, defined by the $F_i$s and $c_i$s specified in the primal program, in addition to a condition of nonnegativity. The dual problem is strictly feasible if there exists $Z>0$ satisfying the dual constraints. The dual optimal value is $d^*=\sup\big\{-\text{tr}\{F_0Z\}\;|\;Z\geq 0,\:\text{tr}\{F_iZ\}=c_i\: \forall i\big\}$, while the dual optimal set is
\begin{equation}
\mathbf{Z}_\text{opt}=\big\{Z\geq0\:|\:\text{tr}\{F_iZ\}=c_i\: \forall i, -\text{tr}\{F_0Z\}=d^*\big\}\label{dualoptset}.
\end{equation}
One also has the hierarchy $-\text{tr}\{F_0Z\}\leq d^*\leq p^*\leq \vec{c}^{\,\text{T}}\vec{x}$, meaning that the dual objective yields lower bounds on the optimal primal value, while the primal objective yields upper bounds on the optimal dual value.
\subsection{Complementary slackness condition}
An important quantity to consider is the duality gap $\vec{c}^{\,\text{T}}\vec{x}+\text{tr}\{F_0Z\}=\text{tr}\{F(\vec{x})Z\}$, which is nonnegative for feasible $\vec{x}$ and $Z$ and linear in each of them. The equality $d^*=p^*$ holds (no duality gap) if either the primal or the dual problem is strictly feasible. If \emph{both} are strictly feasible, the optimal sets $\mathbf{X}_\text{opt}$ and $\mathbf{Z}_\text{opt}$ are nonempty, and there exist optimal pairs $(\vec{x}, Z)$ with $p^*=\vec{c}^{\,\text{T}}\vec{x}=-\text{tr}\{F_0Z\}=d^*$, so that $F(\vec{x})Z=0$. This is the \emph{complementary slackness condition}, stating that the ranges of the nonnegative matrices $F(\vec{x})$ and $Z$ are orthogonal. Under strict primal and dual feasibility, one then has necessary and sufficient optimality conditions for the semidefinite program: a feasible $\vec{x}$ is optimal if and only if there exists a $Z$ such that
\begin{equation}
\begin{array}{l}
F(\vec{x})\geq 0,\: Z\geq 0,\\
\text{tr}\{F_iZ\}=c_i,\: i=1,\ldots,m,\\
F(\vec{x})Z=0.
\end{array}\label{optcondition}
\end{equation}
The above equations provide algebraic expressions that the optimal $\vec{x}$ and $Z$ must satisfy. We will see in the next section that these conditions lead to the Wellens--Ku\'s equations.
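The duality-gap identity underlying these conditions, $\vec{c}^{\,\text{T}}\vec{x}+\text{tr}\{F_0Z\}=\text{tr}\{F(\vec{x})Z\}$ whenever $Z$ satisfies the dual constraints, can be illustrated on random problem data (a sketch of ours, with arbitrary hypothetical dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                                     # arbitrary problem dimensions

def herm(a):
    # hermitian part of a square matrix
    return 0.5 * (a + a.conj().T)

# random hermitian data F_0, ..., F_m and a PSD dual variable Z
F = [herm(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
     for _ in range(m + 1)]
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Z = A @ A.conj().T                              # Z >= 0 by construction

# choose c so that Z satisfies the dual constraints tr{F_i Z} = c_i
c = np.array([np.trace(F[i] @ Z).real for i in range(1, m + 1)])
x = rng.normal(size=m)                          # any primal point

Fx = F[0] + sum(x[i] * F[i + 1] for i in range(m))
# duality gap identity: c^T x + tr{F_0 Z} = tr{F(x) Z}
assert np.isclose(c @ x + np.trace(F[0] @ Z).real, np.trace(Fx @ Z).real)
```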
\section{Derivation of the Wellens--Ku\'s equations}
Let us notice that we have an optimization problem, in which we wish to minimize a scalar function $-\lambda=-\text{tr}\{\lambda\rho_\text{sep}\}$ of some variables subject to a set of constraints. Firstly, we require $\rho_\text{sep}$ and $\rho_\text{pure}$ in a LSD to be positive semidefinite. Next, the Peres--Horodecki criterion \cite{peres96,horodecki96} tells us that a 2-qubit state is separable if and only if its partial transpose is positive. The crucial point here is that the \emph{separability} constraint has become a \emph{positivity} constraint, ensuring that the optimal LSD problem for 2-qubit states can be formulated as a SDP. We will proceed to show this explicitly. For simplicity, we only consider full-rank states in this section. The case of reduced-rank states will be considered in the following section.
\subsection{Optimal LSD as a semidefinite program}
\subsubsection{The primal problem}
We use $\vec{\sigma}$ and $\vec{\tau}$ to denote the Pauli operators in the first and second qubit space, respectively. It will be convenient to use the \emph{magic basis}, introduced by Hill and Wootters \cite{wootters97, wootters98}, in which the Pauli operators are represented by imaginary antisymmetric $4 \times 4$ matrices while their products are represented by real, symmetric matrices. Partial transposition in the first qubit is effected by $\vec{\sigma}\to-\vec{\sigma}, \vec{\tau}\to\vec{\tau}$.
Our basis $\{E_i : i=1,\ldots,16\}$ for $4\times 4$ hermitian operators comprises the sixteen combinations of the Pauli operators and the identity, $\sigma_i\tau_j$, where $i,j=0,1,2,3$ and $\sigma_0=\tau_0=\openone_4$. These are traceless (except $E_1=\openone_4$) and mutually orthogonal, i.e., $\text{tr}\{E_iE_j\}=4\delta_{ij}$. A LSD of a state $\rho$ can be written as
\begin{equation}
\rho=\lambda\rho_\text{sep}+(1-\lambda)\rho_\text{pure}\equiv\tilde{\rho}_\text{sep}+\tilde{\rho}_\text{pure},
\end{equation}
where the weights $\lambda$ and $1-\lambda$ have been absorbed into $\tilde{\rho}_\text{sep}\equiv\lambda\rho_\text{sep}$ and ${\tilde{\rho}_\text{pure}\equiv(1-\lambda)\rho_\text{pure}}$. In this notation, we have the parameterization $\tilde{\rho}_\text{sep}=\frac{1}{4}\vec{x}\cdot\vec{E}$, where ${\vec{x}^{\,\text{T}}=(\lambda,x_2,\ldots,x_{16})\in\mathbb{R}^{16}}$.
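The orthogonality and trace properties of the basis $\{E_i\}$ can be verified directly (an illustrative NumPy check of ours, carried out in the computational product basis rather than the magic basis; the trace relations are basis independent):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# the 16 operators sigma_i tau_j
E = [np.kron(a, b) for a in (s0, s1, s2, s3) for b in (s0, s1, s2, s3)]

# mutual orthogonality: tr{E_i E_j} = 4 delta_ij
G = np.array([[np.trace(ei @ ej) for ej in E] for ei in E])
assert np.allclose(G, 4.0 * np.eye(16))

# all traceless except E_1 = identity
assert np.isclose(np.trace(E[0]), 4.0)
assert all(np.isclose(np.trace(e), 0.0) for e in E[1:])
```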
In the search for the optimal LSD, we comb through the possible $\tilde{\rho}_\text{sep}$s via choices of $\vec{x}$, but these choices are not arbitrary. To ensure a valid decomposition in the first place, we must enforce three constraints,
\begin{equation}
\begin{array}{ll}
\begin{tabular}[c]{ll}
(i) & positivity of $\tilde{\rho}_\text{sep}$ \\
(ii) & separability of $\tilde{\rho}_\text{sep}$ \\
(iii) & positivity of $\tilde{\rho}_\text{pure}$ \\
\end{tabular}
\end{array}
\end{equation}
which we merge into a single inequality of a $12 \times 12$ matrix:
\begin{equation}
\begin{array}{ll}
\begin{bmatrix}
\tilde{\rho}_\text{sep} & 0 & 0\\
0 & \tilde{\rho}_\text{sep}^{\text{T}_1} & 0\\
0 & 0 & \rho-\tilde{\rho}_\text{sep}
\end{bmatrix}\geq 0.\label{blockconstraint}
\end{array}
\end{equation}
Next, we introduce 16 block-diagonal $12\times12$ hermitian matrices $F_i$ associated with the $E_i$s, defined by $F_i=\frac{1}{4}\text{diag}(E_i, E_i^{\text{T}_1}, -E_i), i=1,\ldots,16$, as well as ${F_0=\text{diag}(0, 0, \rho)}$. In terms of the $F_i$s, the inequality constraint in Eq.~\eqref{blockconstraint} can be expressed as ${F(\vec{x})=F_0+\sum_{i=1}^{16}{x_iF_i}\geq 0}$. Finally, let ${\vec{c}^{\,\text{T}}=(-1, 0,\ldots,0)\in \mathbb{R}^{16}}$, so that ${\vec{c}^{\,\text{T}}\vec{x}=-\lambda}$. Maximizing $\lambda$ to obtain the optimal LSD is then equivalent to minimizing $\vec{c}^{\,\text{T}}\vec{x}$.
With these specifications, we have rephrased the optimal LSD problem as a SDP in the form of \eqref{primalprogram}. One can then efficiently compute the optimal LSD of a given 2-qubit state using well-established algorithms for solving SDPs. For instance, we have written a working routine using \textsf{cvx} version 1.2 \cite{grant08a}, which is a modeling system for disciplined convex programming, utilizing the open-source solver SDPT3 \cite{TTT06}.
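Independently of any SDP solver, the three primal constraints can be checked with plain NumPy for a concrete example. The sketch below (ours) uses a Werner state, whose optimal LSD is known analytically (cf. \cite{englert00}): degree of separability $\lambda = \frac{3}{2}(1-p)$ with the singlet as the pure entangled part; the partial transpose is taken in the computational basis.

```python
import numpy as np

def ptranspose(m):
    # partial transpose on the first qubit of a 4x4 matrix (computational basis)
    return m.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

# Werner state rho = p |psi-><psi-| + (1 - p) I/4
p = 0.5
psi_m = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
P_minus = np.outer(psi_m, psi_m)
rho = p * P_minus + (1.0 - p) * np.eye(4) / 4.0

# known analytic optimum: degree of separability lam = 3(1 - p)/2,
# with the singlet as the pure entangled part
lam = 1.5 * (1.0 - p)
sigma = rho - (1.0 - lam) * P_minus             # = lam * rho_sep

eigs = np.linalg.eigvalsh
assert eigs(sigma).min() > -1e-12               # sigma >= 0
assert eigs(ptranspose(sigma)).min() > -1e-12   # sigma^{T_1} >= 0 (separable)
assert eigs(rho - sigma).min() > -1e-12         # positive pure part
assert abs(eigs(ptranspose(sigma)).min()) < 1e-12  # barely separable
assert np.isclose(np.trace(sigma), lam)         # weight of the separable part
```

The last assertion illustrates the "barely separable" property discussed above: at the optimum, $\tilde{\varrho}_\text{sep}^{\text{T}_1}$ touches the boundary of the positive cone.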
Next, we establish strict primal feasibility. For this, we choose $\tilde{\rho}_\text{sep}=\frac{\alpha}{4}\openone_4$, i.e., $\vec{x}^{\,\text{T}}=(\alpha, 0,\ldots,0)$, where $\frac{\alpha}{4}$ is a positive number smaller than the smallest eigenvalue of $\rho$. Clearly, $\tilde{\rho}_\text{sep}>0$ and $\tilde{\rho}_\text{sep}^{\text{T}_1}>0$. Furthermore, since $\rho>0$, all four of its eigenvalues are positive, and with $\alpha$ chosen as described above the difference ${\rho-\tilde{\rho}_\text{sep}=\rho-\frac{\alpha}{4}\openone_4}$ is still positive definite. Thus, Eq.~\eqref{blockconstraint} holds with strict inequality as required. Since the primal problem is strictly feasible, we conclude that there is no duality gap.
\subsubsection{The dual problem}
We now focus our attention on the dual problem associated with \eqref{primalprogram}. Following \eqref{dualprogram}, the dual variable $Z$ is a $12\times 12$ positive semidefinite matrix subject to the 16 dual constraints $\text{tr}\{F_iZ\}=c_i$. Since $F_0$ and $F_i$ are block-diagonal, the dual objective depends only on the block-diagonal entries of $Z$. Without loss of generality, we can choose $Z$ to be block-diagonal. For convenience, we write
\begin{equation}
Z=
\begin{bmatrix}
Z_1 & 0 & 0\\
0 & Z_2 & 0\\
0 & 0 & Z_3
\end{bmatrix},
\end{equation}
where $Z_1$, $Z_2$ and $Z_3$ are nonnegative $4\times4$ matrices. With this notation, the dual objective becomes $-\text{tr}\{\rho Z_3\}$. Since there is no duality gap, we have ${d^*=-\text{tr}\{\rho\mathcal{Z}_3\}=p^*=-\mathcal{S}}$.
The dual problem is strictly feasible too: choose ${Z=\text{diag}(\openone_4,\openone_4,3\openone_4)>0}$, and check that all the constraints are indeed fulfilled. The first dual constraint ${\text{tr}\{F_1Z\}=-1}$ is satisfied and the $2\text{nd}$ to $16\text{th}$ dual constraints $\text{tr}\{F_iZ\}=0$ hold, since $E_i$ and $E_i^{\text{T}_1}$ are traceless by construction.
\subsection{Equivalence of complementary slackness condition and Wellens--Ku\'s equations}
With strict primal \emph{and} dual feasibility, we now have necessary and sufficient optimality conditions as a consequence of the complementary slackness condition \eqref{optcondition}. In the present context, conditions \eqref{optcondition} translate into the following statement. The primal variable $\tilde{\varrho}_\text{sep}$ is optimal if and only if there exists a $\mathcal{Z}$ such that
\begin{eqnarray}
\begin{array}{ll}
\text{(I)} & \begin{bmatrix}
\tilde{\varrho}_\text{sep} & 0 & 0\\
0 & \tilde{\varrho}_\text{sep}^{\,\text{T}_1} & 0\\
0 & 0 & \tilde{\varrho}_\text{pure}
\end{bmatrix}
\begin{bmatrix}
\mathcal{Z}_1 & 0 & 0\\
0 & \mathcal{Z}_2 & 0\\
0 & 0 & \mathcal{Z}_3
\end{bmatrix}=0,
\end{array} \nonumber
\end{eqnarray}
\begin{eqnarray}
\begin{array}{ll}
\text{(II)} & \begin{bmatrix}
\tilde{\varrho}_\text{sep} & 0 & 0\\
0 & \tilde{\varrho}_\text{sep}^{\,\text{T}_1} & 0\\
0 & 0 & \tilde{\varrho}_\text{pure}
\end{bmatrix}\geq 0,
\end{array} \nonumber
\end{eqnarray}
\begin{eqnarray}
\begin{array}{ll}
\text{(III)} & \begin{bmatrix}
\mathcal{Z}_1 & 0 & 0\\
0 & \mathcal{Z}_2 & 0\\
0 & 0 & \mathcal{Z}_3
\end{bmatrix}\geq 0, \quad
\begin{array}{ll}
\text{tr}\{F_i\mathcal{Z}\}=c_i,\\
i=1,\ldots,16.
\end{array}
\end{array}\label{compslack}
\end{eqnarray}
Here, $\tilde{\varrho}_\text{sep}$, $\tilde{\varrho}_\text{pure}$, and $\mathcal{Z}$ refer to the \emph{optimal} variables. Let us digest this information. (I) is a set of three eigenstate equations from the slackness condition that determines the matrices $\mathcal{Z}_1$, $\mathcal{Z}_2$ and $\mathcal{Z}_3$. (II) is the primal constraint and simply reiterates that we have a valid decomposition in the first place. (III) is the set of dual constraints, which we will utilize to express $\mathcal{Z}_3$ in terms of $\mathcal{Z}_1$ and $\mathcal{Z}_2$. Notice that the $F_i$s are composed of blocks of $E_i$s, the 16 orthogonal basis matrices for the space of $4\times 4$ hermitian matrices. In fact, the 16 dual constraints $\text{tr}\{F_i\mathcal{Z}\}=c_i$ are really statements about the 16 components of the operator $\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1}-\mathcal{Z}_3$ in the ``directions'' of $E_i$. Specifically, the $i\text{th}$ dual constraint reads
\begin{align}
\text{tr}\{F_i\mathcal{Z}\}
&= \text{tr}\left\{\frac{1}{4}
\begin{bmatrix}
E_i & 0 & 0\\
0 & E_i^{\text{T}_1} & 0\\
0 & 0 & -E_i\end{bmatrix}
\begin{bmatrix}
\mathcal{Z}_1 & 0 & 0\\
0 & \mathcal{Z}_2 & 0\\
0 & 0 & \mathcal{Z}_3
\end{bmatrix}\right\}\nonumber\\
&= \frac{1}{4}\text{tr}\{E_i\mathcal{Z}_1\}
+\frac{1}{4}\text{tr}\{E_i^{\text{T}_1}\mathcal{Z}_2\}
-\frac{1}{4}\text{tr}\{E_i\mathcal{Z}_3\}\nonumber\\
&= \frac{1}{4}\text{tr}\{E_i(\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1}-\mathcal{Z}_3)\}=c_i,
\end{align}
where we used the identity ${\text{tr}\{E_i^{\text{T}_1}\mathcal{Z}_2\}=\text{tr}\{E_i\mathcal{Z}_2^{\text{T}_1}\}}$. Since any hermitian operator can be written as ${H=\frac{1}{4}\sum_{i=1}^{16}{E_i\,\text{tr}\{E_iH\}}}$, we arrive at
\begin{equation}
\mathcal{Z}_3=\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1}+\openone_4.\label{Z3formula}
\end{equation}
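The expansion identity used in this step is easy to confirm numerically; a small sketch, again assuming the $E_i$ are the 16 two-qubit Pauli products:

```python
import numpy as np

rng = np.random.default_rng(0)
paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
E = [np.kron(s, t) for s in paulis for t in paulis]  # tr{E_i E_j} = 4 delta_ij

# Random 4x4 hermitian matrix.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

# Expansion H = (1/4) sum_i E_i tr{E_i H}.
H_rec = sum(Ei * np.trace(Ei @ H) for Ei in E) / 4
print(np.allclose(H, H_rec))   # True
```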
We are now ready to state the Wellens--Ku\'s equations. The third block equation in (II) states, using Eq.~\eqref{Z3formula},
\begin{equation}
(\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1})\tilde{\varrho}_\text{pure}=-\tilde{\varrho}_\text{pure}.\label{SDPWK1}
\end{equation}
This is supplemented by the second block equation in (I), in which we carry out the replacement $\tilde{\varrho}_\text{sep}\to\rho-\tilde{\varrho}_\text{pure}$ to obtain
\begin{equation}
(\rho-\tilde{\varrho}_\text{pure})^{\text{T}_1}\mathcal{Z}_2=0.\label{SDPWK2}
\end{equation}
Equations \eqref{SDPWK1} and \eqref{SDPWK2} are the Wellens--Ku\'s equations, which we restate here for easy reference:
\begin{eqnarray}
\exists\alpha,\mu\geq 0 &&( \mu\rho_1+\rho_2^{\text{T}_1})\varrho_\text{pure}=-\alpha\varrho_\text{pure}, \label{WK1} \\
&&\left( \rho-(1-\mathcal{S})\varrho_\text{pure}\right)^{\text{T}_1}\rho_2=0.\label{WK2}
\end{eqnarray}
The first block-equation in (I) states that $\tilde{\varrho}_\text{sep}\mathcal{Z}_1=0$, so $\mathcal{Z}_1$ is proportional to $\rho_1$. Therefore, Eqs.~\eqref{WK1} and \eqref{SDPWK1} are really the same equations, with the multiplicative factors $\alpha$ and $\mu$ absorbed in the normalization of $\mathcal{Z}_1$ and $\mathcal{Z}_2$. It is also clear that Eqs.~\eqref{WK2} and \eqref{SDPWK2} are the same equations, with $\rho_2$ and $\mathcal{Z}_2$ differing only by a multiplicative factor.
We remark that the barely-separable property of $\tilde{\varrho}_\text{sep}$ in the optimal LSD of $\rho$ can be derived as a consequence of this formulation. Suppose otherwise, that $\tilde{\varrho}_\text{sep}^{\text{T}_1}$ has full rank. Then we must have $\mathcal{Z}_2=0$ and Eq.~\eqref{SDPWK1} becomes $\mathcal{Z}_1\tilde{\varrho}_\text{pure}=-\tilde{\varrho}_\text{pure}$. But $\mathcal{Z}_1$ is assuredly nonnegative by (III), so $\tilde{\varrho}_\text{pure}$ must vanish, which is to say, $\rho$ was separable to begin with.
Now for a nonseparable $\rho$, $\tilde{\varrho}_\text{sep}^{\text{T}_1}$ has rank 3 so $\mathcal{Z}_2$ must be a pure state. If in addition, $\tilde{\varrho}_\text{sep}$ has full rank, $\mathcal{Z}_1$ must vanish. In this case, $\tilde{\varrho}_\text{pure}$ is the pure state associated with the negative eigenvalue of $\mathcal{Z}_2^{\text{T}_1}$, which is a Bell state \cite{sanpera98}. This is consistent with the observation made by Karnas and Lewenstein in Ref.~\cite{karnas01}.
In passing we note that $\mathcal{Z}_1,\mathcal{Z}_2$ and $\mathcal{Z}_3$ have an interesting interpretation in the language of entanglement witnesses. An entanglement witness $W$ is a hermitian operator such that $\text{tr}\{W\rho_\text{sep}\}\geq0$ for all separable states $\rho_\text{sep}$, but for some entangled state $\rho_\text{ent}$, $\text{tr}\{W\rho_\text{ent}\}<0$. The dual of the optimal LSD problem for 2-qubit systems can be written as an optimization over a constrained set of entanglement witnesses \cite{brandao05}, so that
\begin{equation}
1-\mathcal{S}=\text{max}\big\{0,\,-\min_{W+\openone_4\geq0}\text{tr}\{W\rho\}\big\}.
\end{equation}
The quantity $\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1}$ can be interpreted as the optimal entanglement witness $\mathcal{W}$ for the state $\rho$, since
\begin{equation}
\begin{array}{l}
\text{tr}\{\mathcal{W}\rho_\text{sep}\}\geq0 \;\;\forall \:\text{separable states}\: \rho_\text{sep}, \\
\text{tr}\{\mathcal{W}\rho\}
=\mathcal{S}-1<0.
\end{array}
\end{equation}
It is optimal because ${\text{tr}\{\tilde{\varrho}_\text{sep}(\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1})\}=0}$, so ${\mathcal{Z}_1+\mathcal{Z}_2^{\text{T}_1}}$ ``ignores'' the separable content of $\rho$, while maximally detecting the entangled part $\tilde{\varrho}_\text{pure}$ in accordance with Eq.~\eqref{SDPWK1}.
\section{Generalized Wellens--Ku\'s equations for reduced-rank states}
Since the optimal LSDs for rank-2 states are already known, it remains to characterize the rank-3 states to complete the description of the LSD of any 2-qubit state. As a side result, Wellens and Ku\'s \cite{wellens01} generalized their equations to the reduced-rank states by treating them as the limit $x\rightarrow 0$ of the full-rank state $x\frac{1}{4}\openone_4+(1-x)\rho$. However, their approach makes the implicit assumption that $\mathcal{Z}_2$ is a pure state, whereas it could also be of rank 2: the component $\mathcal{Z}_2^{\text{T}_1}$ in the optimal entanglement witness need not be the partial transpose of a pure state. As we will show, the SDP approach naturally takes care of this subtlety.
Clearly, the primal problem in the previous form is never strictly feasible if $\rho$ has rank 3. In order to utilize the complementary slackness condition, we need to modify the primal problem such that strict feasibility is restored. We denote the pure state orthogonal to $\rho$ by $\gamma$ and its concurrence by $q$. There will be two separate cases to consider: (i) $\gamma$ is entangled, and (ii) $\gamma$ is a product state.
\subsection{$\gamma$ is an entangled state}
\subsubsection{The primal problem}
We consider a parameterization in the three-dimensional subspace spanned by $\rho$, which requires $3 \times 3=9$ parameters. The rank-3 projector onto the orthogonal complement of $\gamma$ is given by $P_3=\openone_4-\gamma$. We denote by $\openone_3$ its restriction to its own support. In its generic form, $\gamma$ can be written as
\begin{equation}\label{formstate}
\gamma=\frac{1}{4}(\openone_4+p\sigma_1-p\tau_1-\sigma_1\tau_1-q\sigma_2\tau_2-q\sigma_3\tau_3),
\end{equation}
where $p=\sqrt{1-q^2}$ and $0<q\leq 1$. One can then construct an orthogonal basis $\{\Gamma_i : i=1,\ldots,9\}$ for the support of $\openone_3$, in which $\Gamma_1=\openone_3$ and the remaining $\Gamma_i$ are traceless. An explicit construction for $\{\Gamma_i\}$ can be found in \cite{englert02}. In this basis, the parameterization for the (unnormalized) rank-3 state $\tilde{\rho}_\text{sep}$ becomes $\tilde{\rho}_\text{sep}=\frac{1}{3}\vec{x}\cdot\vec{\Gamma}$, where the primal variable $\vec{x}=(\lambda,x_2,\ldots,x_9)$ is in $\mathbb{R}^9$.
One can represent the $\Gamma_i$s by $3\times 3$ matrices, but their partial transposes $\Gamma_i^{\text{T}_1}$ can have full rank, so $4\times 4$ matrices are needed to write them. Following the same prescription as in the full-rank case, we express the three primal constraints in block diagonal form,
\begin{equation}
\begin{bmatrix}
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & \rho
\end{bmatrix}+\frac{1}{3}\sum_{i=1}^9{x_i
\begin{bmatrix}
\Gamma_i & 0 & 0\\
0 & \Gamma_i^{\text{T}_1} & 0\\
0 & 0 & -\Gamma_i
\end{bmatrix}}
\geq 0,\label{blockconstraintrank3}
\end{equation}
where the first and third blocks are $3\times 3$ and the second block is $4\times 4$. Analogously to the full-rank case, we define $F_i=\frac{1}{3}\text{diag}(\Gamma_i, \Gamma_i^{\text{T}_1}, -\Gamma_i), i=1,\ldots,9$, and $F_0=\text{diag}(0, 0, \rho)$, so that Eq.~\eqref{blockconstraintrank3} turns into $F(\vec{x})=F_0+\sum_{i=1}^9{x_iF_i}\geq 0$. Finally, we also define $\vec{c}^{\,\text{T}}=(-1,0,\ldots,0)\in \mathbb{R}^{9}$, such that $\vec{c}^{\,\text{T}}\vec{x}=-\lambda$. With these specifications, the optimal LSD problem for rank-3 states has been cast as a SDP.
We proceed to show that this is a strictly feasible problem. The state $\rho$ has three positive eigenvalues and can be regarded as positive definite when considering only the subspace orthogonal to $\gamma$. We choose $\vec{x}^{\,\text{T}}=(\alpha,0,\ldots,0)$ where $0<\alpha /3<\text{smallest positive eigenvalue of $\rho$}$, so that
\begin{eqnarray}
\tilde{\rho}_\text{sep}&=&\alpha\frac{1}{3}\openone_3>0, \nonumber \\
\rho-\tilde{\rho}_\text{sep}&=&\rho-\alpha\frac{1}{3}\openone_3>0.
\end{eqnarray}
The first and third blocks of $F(\vec{x})$ are thus positive definite. For the second block, we need the fact that the eigenvalues of $\gamma^{\text{T}_1}$ are given by $\frac{1}{2}(1\pm p)$ and $\pm\frac{1}{2}q$ \cite{sanpera98}. Since $0<q\leq1$ and $0\leq p < 1$, this means that $\tilde{\rho}_\text{sep}^{\text{T}_1}=\alpha\frac{1}{3}(\openone_4-\gamma^{\text{T}_1})$ is positive definite. Let us note that $\tilde{\rho}_\text{sep}^{\text{T}_1}$ has zero eigenvalues only if $q=0$, i.e., when $\gamma$ is a product state. Thus, if we assume that $\gamma$ is \emph{not} a product state, $\tilde{\rho}_\text{sep}^{\text{T}_1}>0$ and we have strict primal feasibility. The case where $\gamma$ is a product state is treated in Sec.~\ref{section}.
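Both spectral facts invoked here, the purity of $\gamma$ and the spectrum of $\gamma^{\text{T}_1}$, can be checked numerically; a small sketch for a sample concurrence $q$:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

q = 0.6                          # sample concurrence, 0 < q <= 1
p = np.sqrt(1 - q**2)

# gamma in its generic form; first Kronecker factor = sigma, second = tau
gamma = 0.25 * (np.kron(I2, I2) + p * np.kron(s1, I2) - p * np.kron(I2, s1)
                - np.kron(s1, s1) - q * np.kron(s2, s2) - q * np.kron(s3, s3))

def pt1(M):
    """Partial transpose on the first qubit."""
    return M.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

ev = np.linalg.eigvalsh(gamma)               # ascending
print(np.allclose(ev, [0, 0, 0, 1]))         # True: gamma is pure

# eigenvalues of gamma^{T1}: (1 +- p)/2 and +- q/2
ev_t1 = np.sort(np.linalg.eigvalsh(pt1(gamma)))
expected = np.sort([(1 + p) / 2, (1 - p) / 2, q / 2, -q / 2])
print(np.allclose(ev_t1, expected))          # True
```

For $q=1$ one recovers the singlet, with $\gamma^{\text{T}_1}$ eigenvalues $\{\frac{1}{2},\frac{1}{2},\frac{1}{2},-\frac{1}{2}\}$.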
\subsubsection{The dual problem}
The dual variable $Z$ is now a $10\times10$ positive semidefinite matrix, subject to nine dual constraints. Strict dual feasibility is immediate as we can choose ${Z=\text{diag}(\openone_3, \openone_4, 3\openone_3)}$, which can be easily checked to satisfy the nine dual constraints.
\subsubsection{Generalized Wellens--Ku\'s equations}
Now, having established strict primal and dual feasibility, we can invoke the complementary slackness condition \eqref{compslack}, or rather its rank-3 analog. The $i\text{th}$ dual constraint now reads
\begin{eqnarray}
\text{tr}\{F_i\mathcal{Z}\}&=&\frac{1}{3}\text{tr}\{\Gamma_i(\mathcal{Z}_1+\openone_3\mathcal{Z}_2^{\text{T}_1}\openone_3-\mathcal{Z}_3)\}\nonumber \\
&=&c_i,\;i=1,\ldots,9.
\end{eqnarray}
Any hermitian operator orthogonal to $\gamma$ can be written as $H_\text{rank3}=\sum_{i=1}^{9}{\Gamma_i\,\text{tr}\{\Gamma_iH_\text{rank3}\}}/\text{tr}\{\Gamma_i^2\}$. Let us repeat here that both $\mathcal{Z}_2$ and $\mathcal{Z}_2^{\text{T}_1}$ have support in the total Hilbert space. To avoid inconsistency in the notation, let us define $\mathcal{Z}_{2||}^{\text{T}_1}$, the restriction of the projection $P_3\mathcal{Z}_2^{\text{T}_1}P_3$ to its own support. We then arrive at $\mathcal{Z}_3=\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1}+\openone_3$. The third block equation in (I) of Eq.~\eqref{compslack} then states that $(\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1}+\openone_3)\tilde{\varrho}_\text{pure}=0,$ and since $\tilde{\varrho}_\text{pure}$ resides in the subspace that $\openone_3$ projects onto,
\begin{equation}
(\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1})\tilde{\varrho}_\text{pure}=-\tilde{\varrho}_\text{pure},\label{SDPWK1rank3}
\end{equation}
and as before, this is supplemented by the eigenstate equation for $\mathcal{Z}_2$,
\begin{equation}
(\rho-\tilde{\varrho}_\text{pure})^{\text{T}_1}\mathcal{Z}_2=0.\label{SDPWK2rank3}
\end{equation}
Equations \eqref{SDPWK1rank3} and \eqref{SDPWK2rank3} are the generalization of the Wellens--Ku\'s equations to the rank-3 case where the orthogonal state is entangled. These are almost identical to the original equations, the subtle difference being that not only $\mathcal{Z}_2$, but also $\mathcal{Z}_{2||}^{\text{T}_1}$, the projection of its partial transpose onto the support of $\rho$, is now relevant. Similarly to the full-rank case, one can identify $\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1}$ as the optimal entanglement witness for the state $\rho$.
\subsection{$\gamma$ is a product state}\label{section}
\subsubsection{The primal problem}
A little more care is needed if $\rho$ is orthogonal to a pure product state $\gamma=\frac{1}{2}(\openone_4+\sigma_1)\frac{1}{2}(\openone_4-\tau_1)$, the $q=0$ version of Eq.~\eqref{formstate}. In this case, since $\tilde{\rho}_\text{sep}$ is separable and orthogonal to $\gamma$, $\tilde{\rho}_\text{sep}^{\text{T}_1}$ and $\gamma^{\text{T}_1}$ must be orthogonal too. The separability of $\tilde{\rho}_\text{sep}$ then requires: (i) the positivity of $\tilde{\rho}_\text{sep}^{\text{T}_1}$, and (ii) the orthogonality of $\tilde{\rho}_\text{sep}^{\text{T}_1}$ and $\gamma^{\text{T}_1}$. Only two of the nine $\Gamma_i$s do \emph{not} obey $\Gamma_i^{\text{T}_1}\gamma^{\text{T}_1}=0$. These are ${\Gamma_8=\frac{1}{2}(\sigma_2\tau_2-\sigma_3\tau_3)}$ and $\Gamma_9=\frac{1}{2}(\sigma_2\tau_3+\sigma_3\tau_2)$. Furthermore, there exists a proportionality relation between the products $\Gamma_8^{\text{T}_1}\gamma^{\text{T}_1}$ and $\Gamma_9^{\text{T}_1}\gamma^{\text{T}_1}$, inasmuch as $\Gamma_8^{\text{T}_1}\gamma^{\text{T}_1}=\frac{1}{2}(\Gamma_8^{\text{T}_1}+\text{i}\Gamma_9^{\text{T}_1})=\text{i}\Gamma_9^{\text{T}_1}\gamma^{\text{T}_1}$. Constraint (ii) then reads
\begin{equation}
\tilde{\rho}_\text{sep}^{\text{T}_1}\gamma^{\text{T}_1}=\frac{1}{6}(x_8-\text{i}x_9)(\Gamma_8^{\text{T}_1}+\text{i}\Gamma_9^{\text{T}_1})=0.
\end{equation}
Since $x_8$ and $x_9$ are real, they must vanish and we have the parameterization $\tilde{\rho}_\text{sep}=\frac{1}{3}\sum_{i=1}^7{x_i\Gamma_i}$. Consequently, the primal objective is now $\vec{c}^\text{T}\vec{x}$ with $\vec{x}\in\mathbb{R}^7$. The same choice of $\tilde{\rho}_\text{sep}=\alpha\frac{1}{3}\openone_3$ shows that this modified primal problem is strictly feasible.
\subsubsection{The dual problem}
One can also verify, in the now familiar manner, that $Z=\text{diag}(\openone_3, \openone_3^{\text{T}_1}, 3\openone_3)$ is a strictly feasible point for the modified dual problem.
\subsubsection{Generalized Wellens--Ku\'s equations}
The seven dual constraints lead to ${\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1}-\mathcal{Z}_3+a\Gamma_8+b\Gamma_9=-\openone_3}$, where $a$ and $b$ are some real coefficients. We then arrive at another pair of generalized Wellens--Ku\'s equations,
\begin{align} (\mathcal{Z}_1+\mathcal{Z}_{2||}^{\text{T}_1}+a\Gamma_8+b\Gamma_9)\tilde{\varrho}_\text{pure}&=-\tilde{\varrho}_\text{pure}, \label{SDPWKspecial1}\\
(\rho-\tilde{\varrho}_\text{pure})^{\text{T}_1}\mathcal{Z}_2&=0,\label{SDPWKspecial2}
\end{align}
which are necessary and sufficient for optimality. Note that $\mathcal{Z}_2$ lies in the support of $\openone_3^{\text{T}_1}$ while $\mathcal{Z}_2^{\text{T}_1}$ can have support in the total Hilbert space since $\mathcal{Z}_2$ is not separable. The term in parentheses in Eq.~\eqref{SDPWKspecial1} is the optimal entanglement witness for $\rho$. In contrast with the earlier cases, nonpositivity is provided by the combination $\mathcal{Z}_{2||}^{\text{T}_1}+a\Gamma_8+b\Gamma_9$.
\subsubsection{$\tilde{\varrho}_{\rm sep}$ has rank 3}
In the full-rank case, when the separable part is full-rank, the nonseparable part is maximally entangled. A similar property exists for rank-3 states. In three dimensions, the analog of the full-rank case is a rank-3 state orthogonal to a pure product state. Note that the pure state has to be a product state to ensure that all the relevant positive operators remain of rank 3 under partial transposition. If $\tilde{\varrho}_\text{sep}$ has rank 3, the optimal decomposition can be obtained analytically. In this case, ${\mathcal{Z}_1=\mathcal{Z}_2=0}$ and Eq.~\eqref{SDPWKspecial1} reduces to ${(a\Gamma_8+b\Gamma_9) \tilde{\varrho}_\text{pure}=-\tilde{\varrho}_\text{pure}}$. In the magic basis, which is a basis of Bell states, the nonzero matrix elements of $a\Gamma_8+b\Gamma_9$ appear as
\begin{equation}
a\Gamma_8+b\Gamma_9\widehat{=}
\begin{bmatrix}
a & b\\
b & -a
\end{bmatrix},
\end{equation}
where the two basis states are $\ket{\phi^+}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ and $\ket{\psi^+}=\frac{\text{i}}{\sqrt{2}}(\ket{01}+\ket{10})$. Equation~\eqref{SDPWKspecial1} imposes that this matrix has an eigenvalue $-1$. Putting this requirement into its characteristic equation leads to the relation $a^2+b^2=1$, and an angular parameterization, $a=\cos{\theta}, b=\sin{\theta}$, can be used. The corresponding eigenstate is nondegenerate, and can hence be identified with $\tilde{\varrho}_\text{pure}$. Explicitly, we have
\begin{equation}
\ket{\tilde{\varrho}_\text{pure}}=\cos{\frac{\theta}{2}}\ket{\phi^+} - \sin{\frac{\theta}{2}}\ket{\psi^+},
\end{equation}
which is maximally entangled.
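These statements can be verified numerically. The sketch below, using the phase convention for $\ket{\psi^+}$ stated above and a sample angle $\theta$, confirms the eigenstate equation and the maximal entanglement of the resulting state:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
G8 = 0.5 * (np.kron(s2, s2) - np.kron(s3, s3))   # Gamma_8
G9 = 0.5 * (np.kron(s2, s3) + np.kron(s3, s2))   # Gamma_9

phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |phi+>
psi_p = 1j * np.array([0, 1, 1, 0]) / np.sqrt(2)     # |psi+> (with the i phase)

theta = 0.7                                          # sample angle
a, b = np.cos(theta), np.sin(theta)
v = np.cos(theta / 2) * phi_p - np.sin(theta / 2) * psi_p

# v is the eigenvalue -1 eigenstate of a*Gamma_8 + b*Gamma_9
print(np.allclose((a * G8 + b * G9) @ v, -v))        # True

# maximal entanglement: concurrence |v^T (s2 x s2) v| = 1
C = abs(v @ np.kron(s2, s2) @ v)
print(np.isclose(C, 1.0))                            # True
```

The concurrence here is evaluated with the spin-flip formula $C=|\langle\psi^*|\sigma_2\tau_2|\psi\rangle|$.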
Now, $\tilde{\varrho}_\text{sep}$ has no components along $\Gamma_8$ and $\Gamma_9$, so one must have $\text{tr}\{\Gamma_i (\rho-\tilde{\varrho}_\text{pure})\}=0 \;\; \textrm{for} \; i=8,9$. These turn out to provide a simple set of equations for the unknowns $\theta$ and $\mathcal{S}$:
\begin{equation}
\begin{array}{rl}
\text{tr}\{\Gamma_8\rho\}&=(\mathcal{S}-1)\cos{\theta},\\
\text{tr}\{\Gamma_9\rho\}&=(\mathcal{S}-1)\sin{\theta}.
\end{array}\label{specialsimultaneous}
\end{equation}
Therefore, we obtain $\mathcal{S}=1-\sqrt{(\text{tr}\{\Gamma_8\rho\})^2+(\text{tr}\{\Gamma_9\rho\})^2}$. The solution to Eq.~\eqref{specialsimultaneous} then gives us $\tilde{\varrho}_\text{pure}$ and ${\tilde{\varrho}_\text{sep}=\rho-\tilde{\varrho}_\text{pure}}$ in the optimal LSD of $\rho$.
In general, one can assume that $\tilde{\varrho}_\text{sep}$ has rank 3 and use the above result to determine the optimal $\tilde{\varrho}_\text{sep}$ and $\tilde{\varrho}_\text{pure}$. It is however necessary to check if the deduced $\tilde{\varrho}_\text{sep}$ is indeed of rank 3 and separable. If the verification fails, $\tilde{\varrho}_\text{sep}$ has rank 2 and one has to solve the generalized Wellens--Ku\'s equations given in Eqs.~\eqref{SDPWKspecial1} -- \eqref{SDPWKspecial2}.
\section{Conclusion}
We have demonstrated that the problem of finding the optimal LSD of a 2-qubit state is a SDP. Indeed, the Peres--Horodecki criterion has permitted us to advantageously rephrase a separability constraint as a positivity constraint. We have shown that both the primal and the associated dual programs are strictly feasible, leading us to necessary and sufficient optimality conditions for LSD. In particular we have derived the original Wellens--Ku\'s equations for full-rank states in a simple and natural way. Moreover we have generalized them to rank-3 states. We have also described the link between the dual SDP variables and entanglement witnesses. Finally, many efficient algorithms for solving SDPs are available, allowing one to handle this problem numerically. Because the Peres--Horodecki criterion is also necessary and sufficient for composite systems of dimensions $2\times 3$, it might be possible to extend the SDP formulation to this case.
\begin{acknowledgments}
TGC wishes to thank the National University of Singapore for granting him an undergraduate scholarship under which this study was carried out. Centre for Quantum Technologies is a Research Centre of Excellence funded by Ministry of Education and National Research Foundation of Singapore.
\end{acknowledgments}
\section{Conclusions}
\label{sec:Conclusion}
In conclusion, we introduced an effective adaptive-frequency MPC and optimization framework for bipedal locomotion over terrains with discontinuities, such as stepping stones, with varied gait periods and step lengths. We also introduced an adaptive-frequency trajectory optimization framework that generates optimal gait periods for each step, the CoM trajectory, and foot positions based on the terrain. We paired the MPC with WBC for more accurate tracking control performance. Through numerical validation in simulation, the robot successfully walks over a series of uneven stepping stones with perturbations while maintaining an average linear velocity of 1.5 $\unit{m/s}$.
\section{Adaptive-frequency Control with Varied Gait Periods}
\label{sec:trackingControl}
\begin{figure*}[!t]
\hspace{0.2cm}
\center
\begin{subfigure}[b]{0.78\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/optimization1.png}
\caption{Snapshot of Optimization Results}
\label{fig:optresults}
\end{subfigure}
\\
\begin{subfigure}[b]{0.85\textwidth}
\centering
\includegraphics[clip, trim=0.2cm 0cm 0.3cm 0cm, width=\columnwidth]{Figures/snapshots1.png}
\caption{Snapshot of Controller Tracking Results in Simulation with Terrain Perturbations (all using results from (a))}
\label{fig:trackingresults}
\end{subfigure}
\caption{{\bfseries Motion Snapshots of Uneven Stepping Stone Locomotion} a). Optimization results. b). Simulation results of various cases with terrain perturbations.}
\label{fig:snapshots}
\vspace{-1.5em}
\end{figure*}
In this section, we present a force-and-moment-based MPC with adaptive frequency for bipedal walking with varied step lengths, allowing the robot to overcome discontinuous terrain without slowing down or coming to a complete stop.
The optimization introduced in Section~\ref{subsec:Optimization} outputs optimized sampling times for the MPC, which can also be interpreted as the gait period of each step. Hence it is important to modify these controllers to accept walking gaits with different gait periods.
\subsection{Adaptive-frequency MPC for Bipedal Locomotion}
\label{subsec:VGP-MPC}
First, we present the adaptive-frequency MPC. The MPC framework works with the varied gait periods from the optimization results. Both the MPC and the optimization use the same simplified dynamics model shown in Figure~\ref{fig:design}.
To form a linear state-space dynamics equation for the MPC, we include gravity $\bm g$ in the state as a dummy variable, $\bm x = [{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c; \bm g] \in \mathbb{R}^{15}$, so that equation (\ref{eq:simpDyn}) becomes
\begin{align}
\label{eq:linearSS}
\dot{{\bm { x}}}(t) = {\hat{\bm A_c}} {{\bm {x}}} + {\hat{\bm B_c}} \bm u,
\end{align}
where continuous-time matrices ${\hat{\bm A_c} \in \mathbb{R}^{15\times15}}$ and ${\hat{\bm B_c} \in \mathbb{R}^{15\times10}}$ are modified from $\bm A$ and $\bm B$.
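The discretization that produces the step-dependent matrices $\bm{\hat A}[i]$, $\bm{\hat B}[i]$ is not spelled out here; a minimal sketch, assuming a simple forward-Euler discretization with the per-step sampling times delivered by the optimization (a matrix exponential could be used instead):

```python
import numpy as np

def discretize_per_step(A_c, B_c, gait_periods, samples_per_gait=1):
    """Per-step discrete matrices for adaptive-frequency MPC.

    Forward-Euler discretization (an assumption of this sketch):
    A_hat[i] = I + A_c*dt_i, B_hat[i] = B_c*dt_i, where dt_i is the
    sampling time of step i, taken as the optimized gait period of
    that step divided by the number of MPC samples per gait.
    """
    n = A_c.shape[0]
    A_hat, B_hat = [], []
    for T in gait_periods:
        dt = T / samples_per_gait
        A_hat.append(np.eye(n) + A_c * dt)
        B_hat.append(B_c * dt)
    return A_hat, B_hat

# toy example: double-integrator-like dynamics, two steps with
# different (optimized) gait periods, hence different sampling times
A_c = np.array([[0.0, 1.0], [0.0, 0.0]])
B_c = np.array([[0.0], [1.0]])
A_hat, B_hat = discretize_per_step(A_c, B_c, gait_periods=[0.3, 0.5])

x = np.array([0.0, 1.0])
for Ai, Bi, u in zip(A_hat, B_hat, [np.array([0.0])] * 2):
    x = Ai @ x + Bi @ u
print(x)   # position advanced by 0.3 + 0.5 under unit velocity
```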
A formulation of the MPC problem with finite horizon $k$ can be written in the following form,
\begin{align}
\label{eq:MPCform}
\underset{\bm{x,u}}{\operatorname{min}} \:\: & \sum_{i = 0}^{k-1}(\bm x_{i+1}- \bm x_{i+1}^{ref})^T\bm Q_i(\bm x_{i+1}- \bm x_{i+1}^{ref}) + \bm{u}_i^T\bm{R}_i \bm{u}_i
\end{align}
\begin{subequations}
\begin{align}
\label{eq:dynamicCons}
\:\:\operatorname{s.t.} \quad {\bm {x}}[i+1] = \bm {\hat{A}}[i]\bm x[i] + \bm {\hat{B}}[i]\bm u[i], \\
\label{eq:frictionCons}
\nonumber
-\mu {F}_{iz} \leq F_{ix} \leq \mu {F}_{iz} \quad \quad\\
-\mu {F}_{iz} \leq F_{iy} \leq \mu {F}_{iz} \quad \quad\\
\label{eq:forceCons}
0< {F}_{min} \leq F_{iz} \leq {F}_{max} \quad \quad\\
\label{eq:MPCeqCons}
\bm D_i \bm u_i = 0 \quad \quad \quad \quad
\end{align}
\end{subequations}
The objective of the problem is to drive state $\bm x$ close to command and minimize $\bm u$. These objectives are weighted by diagonal matrices $\bm Q_i\in \mathbb{R}^{15\times15}$ and $\bm R_i\in \mathbb{R}^{10\times10}$.
Equations (\ref{eq:dynamicCons})--(\ref{eq:forceCons}) are the constraints of the MPC problem. Equation (\ref{eq:dynamicCons}) is an equality constraint enforcing the linearized discrete-time dynamics at the $i$th time step, derived from equation (\ref{eq:linearSS}). Equation (\ref{eq:frictionCons}) describes the inequality constraints of the contact friction pyramid. Equation (\ref{eq:forceCons}) bounds the reaction forces. Equation (\ref{eq:MPCeqCons}) enforces the gait constraint that the swing leg exerts zero control input.
The translation of the proposed MPC problem into a Quadratic Programming (QP) form that can be solved efficiently can be found in related and previous works (e.g., \cite{di2018dynamic}, \cite{li2021force}).
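As a concrete illustration, the following sketch condenses the horizon into dense QP data for step-varying $\bm{\hat A}[i]$, $\bm{\hat B}[i]$; the friction, force, and gait constraints are omitted for brevity, and the toy dynamics and weights are placeholders rather than the robot model:

```python
import numpy as np

def condense_mpc(A_list, B_list, Q, R, x0, x_ref):
    """Condense the MPC horizon into dense QP data (standard recipe).

    X = Aqp x0 + Bqp U stacks the predicted states; the per-step
    discrete matrices A_list[i], B_list[i] may differ step to step
    under adaptive frequency.  Returns (Hqp, gqp, Aqp, Bqp) with cost
    0.5 U^T Hqp U + gqp^T U; inequality and gait constraints would be
    stacked on top in a real QP solver.
    """
    k = len(A_list)
    n, m = B_list[0].shape
    Aqp = np.zeros((k * n, n))
    Bqp = np.zeros((k * n, k * m))
    for i in range(k):
        M = np.eye(n)
        for l in range(i + 1):                 # A_i A_{i-1} ... A_0
            M = A_list[l] @ M
        Aqp[i * n:(i + 1) * n] = M
        for j in range(i + 1):                 # effect of u_j on x_{i+1}
            Mb = B_list[j]
            for l in range(j + 1, i + 1):
                Mb = A_list[l] @ Mb
            Bqp[i * n:(i + 1) * n, j * m:(j + 1) * m] = Mb
    Qbar = np.kron(np.eye(k), Q)
    Rbar = np.kron(np.eye(k), R)
    Hqp = 2 * (Bqp.T @ Qbar @ Bqp + Rbar)
    gqp = 2 * Bqp.T @ Qbar @ (Aqp @ x0 - x_ref)
    return Hqp, gqp, Aqp, Bqp

# step-varying sampling times (adaptive frequency), double-integrator toy
dts = [0.05, 0.1, 0.05]
A_list = [np.array([[1.0, dt], [0.0, 1.0]]) for dt in dts]
B_list = [np.array([[0.0], [dt]]) for dt in dts]
x0 = np.array([0.2, -0.1])
x_ref = np.tile([0.0, 1.0], 3)
Hqp, gqp, Aqp, Bqp = condense_mpc(A_list, B_list,
                                  np.eye(2), 0.01 * np.eye(1), x0, x_ref)
U = np.linalg.solve(Hqp, -gqp)             # unconstrained minimizer
print(U.shape)                              # (3,)
```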
\subsection{Whole-body Control}
\label{subsec:WBC}
With adaptive-frequency MPC, in a step with a long gait period, the sampling frequency can be as low as 20 $\unit{Hz}$. Such low-frequency MPC cannot guarantee optimal tracking performance. Hence we combine MPC with WBC to ensure more accurate tracking control. WBC is an established low-level control method that maps reaction forces to joint torques on legged robots \cite{kim2019highly,chignoli2021humanoid}.
We adapt the WBC to work with the force-and-moment-based MPC control input and to allow bipedal walking gaits with varied gait periods. The WBCs used in \cite{kim2019highly} and \cite{kim2020dynamic} are paired with a high-frequency joint PD controller to track desired joint positions and velocities, in addition to computing joint torques based on prioritized tasks; both CoM and swing foot position control are part of the WBC tasks. Our WBC framework only uses the torque output from the QP optimization and does not require joint tracking. Instead, we continue to use Cartesian-space PD swing foot control \cite{li2021force} to track the optimal foot placement from the optimization. With this approach, the WBC tasks reduce to driving the CoM position and rotation $\bm x_c = [ p_{c,x},\: p_{c,y},\: p_{c,z},\:\phi,\:\theta,\:\psi ]^\intercal$ to the desired input (i.e., trajectory tracking). It thereby avoids the costly computation of the contact-Jacobian derivative $\dot{\bm J_c}$ for the 5-DoF bipedal robot leg.
The full joint space equation of motion for the bipedal robot has the form,
\begin{align}
\label{eq:EOM}
\mathbf M \ddot{\mathbf q} + \mathbf C + \mathbf g = \left[\begin{array}{c} \mathbf 0 \\ \bm \tau \end{array} \right]
+ \bm \tau_b
\end{align}
$\ddot{\mathbf q}$ is a vector stacking the body-state accelerations (i.e., CoM position and Euler angles) and the joint accelerations, $\ddot{\mathbf q} = [\ddot{\mathbf q}_b;\: \ddot{\mathbf q}_j]$, where $\ddot{\mathbf q}_b \in \mathbb R^6$, $\ddot{\mathbf q}_j \in \mathbb R^{10}$, and $\bm \tau_b = \bm {J}_c^\intercal \bm u$.
The desired acceleration of the CoM tracking task uses the optimal CoM trajectory from the trajectory optimization as reference $\bm x_c^{des}$, and is computed based on a PD control law,
\begin{align}
\label{eq:desAcc}
\ddot{\bm x}_c^{des} = \bm K_P^{WBC}(\bm x_c^{des} - \bm x_c) + \bm K_D^{WBC}(\dot{\bm x}_c^{des} - \dot{\bm x}_c)
\end{align}
The acceleration command $\ddot {\mathbf {q}}_{cmd}$ is then calculated by a task-space projection algorithm similar to that in \cite{kim2019highly}.
The WBC-QP problem, which computes the minimized relaxations of the MPC ground reaction forces $\Delta \bm u$ and of the joint acceleration command $\Delta \ddot{\mathbf q}$, is as follows,
\begin{align}
\label{eq:WBC-QP}
\underset{{\Delta \ddot{\mathbf q},\Delta \bm u}}{\operatorname{min}} \:\: &
\Delta \ddot{\mathbf q}^\intercal {\mathbf H} \Delta \ddot{\mathbf q} + \Delta \bm u^\intercal {\mathbf K} \Delta \bm u
\vspace{0.5cm}
\end{align}
\begin{subequations}
\begin{align}
\label{eq:WBC_cons1}
\nonumber
\operatorname{s.t.} \quad
\mathbf S_{b}\{\mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g \\
- \bm J_c^\intercal (\Delta \bm u + \bm u)\} = \mathbf 0 \\
\label{eq:WBC_cons3}
\quad \quad \quad \bm u_{min} \leq \Delta \bm u + \bm u \leq \bm u_{max} \quad\\
\label{eq:WBC_cons4}
\quad \quad \quad \bm \tau_{min} \leq \bm \tau \leq \bm \tau_{max} \quad
\end{align}
\end{subequations}
In equation (\ref{eq:WBC-QP}), $\mathbf H \in \mathbb{R}^{16\times16}$ and $\mathbf K \in \mathbb{R}^{10\times10}$ are diagonal weighting matrices for each objective. Equation (\ref{eq:WBC_cons1}) is a dynamics constraint on the floating-base dynamics. The selection matrix $\mathbf S_{b}\in \mathbb{R}^{6\times16}$ consists of 1s and 0s that identify the floating-base coordinates.
The final joint torques can be calculated as
\begin{align}
\label{eq:torque}
\left[\begin{array}{c} \bm 0 \\ \bm \tau \end{array} \right]
= \mathbf M (\Delta \ddot{\mathbf q} + \ddot{\mathbf q}_{cmd}) + \mathbf C + \mathbf g - \bm J_c^\intercal (\Delta \bm u + \bm u)
\end{align}
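To make the structure of the WBC-QP concrete, here is a reduced sketch that keeps only the floating-base dynamics equality constraint and solves the resulting equality-constrained QP through its KKT system; the force and torque bounds of the full problem would require a QP solver, and all model quantities below are random stand-ins with the dimensions stated in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
nq, nu, nb = 16, 10, 6             # gen. coords, contact inputs, base rows

# sample dynamics quantities (random stand-ins for the robot model)
G = rng.standard_normal((nq, nq))
M = G @ G.T + nq * np.eye(nq)      # SPD mass matrix
C = rng.standard_normal(nq)
g = rng.standard_normal(nq)
Jc = rng.standard_normal((nu, nq))               # contact Jacobian, tau_b = Jc^T u
Sb = np.hstack([np.eye(nb), np.zeros((nb, nq - nb))])   # floating-base selector
u_mpc = rng.standard_normal(nu)                  # MPC reaction forces/moments
qdd_cmd = rng.standard_normal(nq)                # task-space accel. command

H = np.eye(nq)                                   # relaxation weights
K = 1e2 * np.eye(nu)

# equality-constrained QP: min dqdd^T H dqdd + du^T K du
#   s.t. Sb [M (dqdd + qdd_cmd) + C + g - Jc^T (du + u_mpc)] = 0
W = np.block([[H, np.zeros((nq, nu))], [np.zeros((nu, nq)), K]])
A = np.hstack([Sb @ M, -Sb @ Jc.T])              # 6 x 26
b = -Sb @ (M @ qdd_cmd + C + g - Jc.T @ u_mpc)

KKT = np.block([[2 * W, A.T], [A, np.zeros((nb, nb))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(nq + nu), b]))
dqdd, du = sol[:nq], sol[nq:nq + nu]

# torque map: [0; tau] = M (dqdd + qdd_cmd) + C + g - Jc^T (du + u_mpc)
lhs = M @ (dqdd + qdd_cmd) + C + g - Jc.T @ (du + u_mpc)
tau = lhs[nb:]
print(np.allclose(lhs[:nb], 0, atol=1e-6))   # True: floating-base rows vanish
```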
As for the swing leg, the joint torques $\bm {\tau}_{swing,n} \in \mathbb{R}^{5}$ are computed separately through the Jacobian transpose $\bm J_{v,n}^\intercal$ of leg $n$,
\begin{align}
\label{eq:forceTorqueMapSwing}
\bm {\tau}_{swing,n} = \bm J_{v,n}^\intercal \bm F_{swing,n}.
\end{align}
where the swing foot force $\bm F_{swing,n}$ is determined by a simple PD control law,
\begin{align}
\label{eq:pdlaw}
\bm F_{swing,n}=\bm K_P(\bm p_{n,des}-\bm p_{n})+\bm K_D(\dot{\bm p}_{n,des}-\dot{\bm p}_n)
\end{align}
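The swing-leg control above amounts to a Cartesian PD law mapped through the Jacobian transpose; a minimal sketch with placeholder gains and a random stand-in Jacobian:

```python
import numpy as np

def swing_leg_torques(Jv, p, p_des, pdot, pdot_des, Kp, Kd):
    """Cartesian-space PD swing-foot force mapped to the 5 joint
    torques of one leg through the Jacobian transpose."""
    F = Kp @ (p_des - p) + Kd @ (pdot_des - pdot)   # desired foot force
    return Jv.T @ F                                 # tau = Jv^T F

# toy 5-DoF leg: random Jacobian, foot 2 cm behind its target
rng = np.random.default_rng(0)
Jv = rng.standard_normal((3, 5))        # foot velocity = Jv qdot
tau = swing_leg_torques(Jv,
                        p=np.array([0.10, 0.0, -0.45]),
                        p_des=np.array([0.12, 0.0, -0.45]),
                        pdot=np.zeros(3), pdot_des=np.zeros(3),
                        Kp=300 * np.eye(3), Kd=10 * np.eye(3))
print(tau.shape)   # (5,)
```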
\section{Introduction}
\label{sec:Introduction}
Uneven-terrain locomotion has long been one of the most important problems researchers aim to solve on bipedal robots via motion planning and control. Such a capability would allow bipedal robots to perform robust locomotion in many real-world tasks, such as rescue and exploration missions over unknown terrains. Recent advancements in control strategies have enabled many successful integrations of control frameworks with bipedal robots.
For instance, on one hand, the Hybrid Zero Dynamics (HZD) model \cite{westervelt2003hybrid} is an effective control scheme employed on bipedal robots such as MABEL \cite{sreenath2011compliant}.
HZD on the ATRIAS robot \cite{rezazadeh2015spring} has allowed more intricate motion planning strategies to be integrated, such as gait libraries for stepping stones \cite{nguyen2018dynamic}. The gait library, collected from offline optimization, has allowed ATRIAS (in 2-D) to precisely place its foot on stepping stones via online motion planning and position control. This position-control-based approach requires accurate terrain information, including the distance and height of the next stone, and is not robust to uneven terrain perturbations.
On the other hand, force-based control schemes on quadruped robots have become more popular. Such control frameworks can be used with linearized dynamics models and constraints. The Quadratic Programming (QP)-based force control and Model Predictive Control (MPC) on quadruped robots (\cite{di2018dynamic,nguyen2019optimized}) both employ simplified rigid-body dynamics and have demonstrated effectiveness in stable locomotion over uneven terrain.
We believe bipedal robots can also benefit from such robustness on uneven terrain through force-based locomotion control.
\begin{figure}[t]
\center
\includegraphics[width=1 \columnwidth]{Figures/title4.png}
\caption{{\bfseries Bipedal Robot Traversing Terrain with Uneven Stepping Stones} Simulation video: \protect\url{https://youtu.be/8hLihy96lCg}. }
\label{fig:title}
\vspace{-1.5em}
\end{figure}
Our recent work on force-and-moment-based MPC schemes on a 16-Degree-of-Freedom (DOF) bipedal robot \cite{li2021force} has allowed stable 3-D locomotion with fixed gait periods (i.e., fixed-frequency MPC). However, without terrain awareness, the robot cannot adapt its footsteps to the terrain. The next-step foot placement \cite{raibert1986legged} of bipedal locomotion depends on both the linear velocity and the gait period. Hence, to vary the step length while maintaining a constant walking velocity, we can adjust the gait period of each step. We introduce adaptive frequency to the MPC to allow the robot to walk with a varied gait period for each step and achieve varied step lengths at a constant walking speed.
Kino-dynamics-based trajectory optimization has been introduced and used in many works on legged mobile robots (e.g. \cite{dai2014whole, herzog2016structured}). The framework has the advantage of simplified system dynamics while still allowing robot joint constraints to be applied. To synchronize the motion control and optimization, we use the same simplified dynamics model in both the MPC and the optimization, the same foot placement policy in the swing foot control and the optimized foot placement, and the same discrete time steps in the MPC and the optimization.
Many related works (e.g.,\cite{kryczka2015online,khadiv2016step,guo2021fast,daneshmand2021variable}) that use trajectory optimization/planning for bipedal gait and trajectory generation share a similarity in that the foot placement adaptation is included in the frameworks to optimize best capture point locations. In our work, to allow bipedal robots to overcome very narrow stepping stones, exact foot placement on the stone is required. We pre-define the desired step locations in optimization to optimize the gait periods and CoM trajectory based on each stride length.
Tracking the optimal trajectory with MPC alone is not accurate due to MPC's inherently low sampling frequency, which becomes even lower with a long gait period. We therefore pair the MPC with a higher-frequency Whole-body Control (WBC) for more accurate trajectory tracking.
MIT Mini Cheetah \cite{katz2019mini,kim2019highly} quadruped robot and the MIT Humanoid robot \cite{chignoli2021humanoid} have both demonstrated outstanding balancing performance during dynamic motion with the force-based MPC and WBC combination. We develop the WBC strategy to work with our bipedal force-and-moment-based MPC. The WBC in \cite{kim2020dynamic}, employed on the bipedal robots of \cite{kim2016stabilizing, ahn2019control}, validated the feasibility of a WBC-type control strategy in dynamic locomotion with periodic gaits. In our approach, we combine kino-dynamics trajectory optimization with the adaptive-frequency MPC framework for a bipedal robot traversing stepping stones, and use WBC as low-level force-to-torque mapping and trajectory tracking control.
The main contributions of the paper are as follows:
\begin{itemize}
\item \update{We allow the bipedal robot to have adaptive foot placement and gait periods for each step, and realize this in control with adaptive-frequency MPC as our main locomotion controller.}
\item \update{We enhance the adaptive-frequency MPC with kino-dynamics trajectory optimization for optimal trajectory generation and with WBC for tracking control.}
\item \update{We use the proposed framework in bipedal locomotion over uneven stepping stones. The proposed method allows the bipedal robot to maintain high speed at around 1.5 $\unit{m/s}$ when traversing uneven stepping stone terrains with height, width, and stone surface shape perturbations while only requiring minimal terrain knowledge.}
\end{itemize}
The rest of the paper is organized as follows. Section. \ref{sec:robotModel} introduces the physical design parameters of the bipedal robot and an overview of the system architecture, including optimization and control. Section. \ref{subsec:Optimization} presents the adaptive-frequency trajectory optimization framework with the bipedal kino-dynamics model. Section. \ref{sec:trackingControl} presents the adaptive-frequency MPC framework. Simulation result highlights and comparisons are presented in Section. \ref{sec:Results}.
\section{Kino-dynamics-based Adaptive-frequency Trajectory Optimization}
\label{subsec:Optimization}
Humans can walk with a different step length at every step to adapt to the terrain, and can let the swing foot remain in the air for different periods of time. We intend to use this adaptive-frequency trajectory optimization framework to allow bipedal robots to walk with such characteristics.
We choose the kino-dynamics model in our optimization framework in order to reduce the computation cost compared to using a full-dynamics model.
The average solving time of offline trajectory optimization in our approach is shown in Table. \ref{tab:solvingTime}.
\subsection{Simplified Dynamics Model}
We first present the force-and-moment-based simplified dynamics model we use in both the kino-dynamics trajectory optimization and adaptive-frequency MPC framework, introduced in the author's previous work \cite{li2021force}. The simplified force-based dynamics model with ground reaction force and moment control inputs is shown in Figure. \ref{fig:design}. The control input consists of
$ \bm u=[\bm F_1;\:\bm F_2;\:\bm M_1;\:\bm M_2]^\intercal \in \mathbb{R}^{10}$, where $ \bm F_n = [ F_{nx},\: F_{ny},\: F_{nz}]^\intercal, \bm M_n = [ M_{ny},\: M_{nz}]^\intercal,$ leg $n = 1, 2 $.
We choose the state variables as $[{\bm \Theta};{\bm p}_c;{\bm \omega};\dot {{\bm p}}_c]$ and control inputs as $\bm u$, then the simplified dynamics equation can be represented as
\begin{align}
\label{eq:simpDyn}
\frac{d}{dt}\left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right]
= \bm A \left[\begin{array}{c} {\bm \Theta}\\{\bm p}_c\\{\bm \omega}\\\dot {{\bm p}}_c \end{array} \right] + \bm B \bm u + \left[\begin{array}{c} \mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\mathbf 0_{3\times1}\\\bm g \end{array} \right]
\end{align}
\begin{align}
\label{eq:A}
\bm A = \left[\begin{array}{cccc}
\mathbf 0_3 & \mathbf 0_3 & \mathbf R_z & \mathbf 0_3 \\
\mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf I_3\\
\mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3\\
\mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 & \mathbf 0_3 \end{array} \right],
\mathbf R_z = \left[\begin{array}{ccc}
{c_\psi} & -{s_\psi} & 0 \\
{s_\psi} & c_\psi & 0 \\
0 & 0 & 1 \end{array} \right]
\end{align}
\begin{align}
\label{eq:B}
\bm B = \left[\begin{array}{ccccc}
\mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} \\
\mathbf 0_3 & \mathbf 0_3 & \mathbf 0_{3\times2} & \mathbf 0_{3\times2} \\
\frac{ (\bm p_1 - \bm p_c)\times}{\bm I_G} & \frac{ (\bm p_2 - \bm p_c)\times}{\bm I_G} &
\frac{\mathbf L}{\bm I_G} & \frac{ \mathbf L}{\bm I_G} \\
\frac{\mathbf {I}_{3}}{m_{trunk}} & \frac{ \mathbf {I}_{3}}{m_{trunk}} & \mathbf {0}_{3\times2} & \mathbf {0}_{3\times2} \end{array} \right]
\end{align}
where $s$ and $c$ denote the sine and cosine operators. Note that $\mathbf R_z$ is simplified under the assumption of small roll and pitch angles, $\phi \approx 0, \: \theta \approx 0$ \cite{li2021force}.
In equation (\ref{eq:B}), $\bm {I}_G \in \mathbb{R}^{3\times3}$ represents the rotational inertia of the rigid body in the world frame. $\bm p_n$ represents the Cartesian coordinate of the contact point on the $n$th foot. $\mathbf L$ is the selection matrix enforcing the 5-D control input per leg, $\mathbf L = [0, 0; 1, 0; 0, 1]$.
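As a hedged illustration, the matrices $\bm A$ and $\bm B$ of equations (\ref{eq:A}) and (\ref{eq:B}) can be assembled as in the following sketch. The numeric parameters at the bottom are placeholders, not the robot's identified values; the gravity term enters the dynamics as a separate affine vector and is omitted here.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix such that skew(p) @ v equals the cross product p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def dynamics_matrices(psi, p_c, p_feet, I_G, m_trunk):
    """Continuous-time A(psi) and B of the simplified dynamics above.
    Roll and pitch are assumed small, so only the yaw psi enters R_z."""
    c, s = np.cos(psi), np.sin(psi)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    A = np.zeros((12, 12))
    A[0:3, 6:9] = Rz           # d(Theta)/dt = R_z * omega
    A[3:6, 9:12] = np.eye(3)   # d(p_c)/dt = CoM velocity
    L = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # selects M_y, M_z
    I_inv = np.linalg.inv(I_G)
    B = np.zeros((12, 10))
    for n, p_n in enumerate(p_feet):
        B[6:9, 3 * n:3 * n + 3] = I_inv @ skew(p_n - p_c)   # foot force F_n
        B[9:12, 3 * n:3 * n + 3] = np.eye(3) / m_trunk
        B[6:9, 6 + 2 * n:8 + 2 * n] = I_inv @ L             # foot moment M_n
    return A, B

# placeholder parameters for illustration only
A, B = dynamics_matrices(
    psi=0.0,
    p_c=np.array([0.0, 0.0, 0.5]),
    p_feet=[np.array([0.05, 0.1, 0.0]), np.array([0.05, -0.1, 0.0])],
    I_G=np.diag([0.1, 0.1, 0.1]),
    m_trunk=5.8,
)
```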
\subsection{Optimization Problem Formulation}
The adaptive-frequency trajectory optimization is an offline multiple-shooting discretization method \cite{bulirsch2002introduction} that optimizes the robot's CoM trajectory, foot placements, and gait period of each step based on the terrain map. It also keeps the linear velocity close to the reference input to generate a smoother walking trajectory.
The optimization variable $\mathbf X \in \mathbb{R}^{39(N+1)}$ includes
\begin{align}
\label{eq:X}
\mathbf X = [\bm x_N ;\:\: \bm p_{N,1} ;\:\: \bm p_{N,2} ; \:\: \mathbf q_N ;\:\: \bm u_N ;\:\: dt_0\dots dt_{N}]
\end{align}
where $dt_0\dots dt_{N}$ are the discrete sampling times between consecutive time steps, with time steps $i = 0,\dots,N$. Subscript $N$ indicates the variable is a column vector of length $N+1$. For the bipedal walking gait, we define both the number of time steps the stance leg spends on the ground and the number of time steps the swing leg spends in the air to be 5; hence one complete two-step gait period consists of 10 time steps. The MPC prediction horizon is also 10 time steps, which means it predicts a full cycle of the periodic gait. It is important to ensure that each group of 5 consecutive $dt_i$ has the same length, so that the gait period of each step $l$ is the sum of 5 sampling times.
The formulation of the nonlinear programming (NLP) problem is as follows. The optimization objective is to drive the linear velocity close to the command and minimize the ground reaction force to maximize efficiency.
\begin{align}
\label{eq:cost}
\underset{\mathbf{X}}{\operatorname{minimize}} \:\: \sum_{i = 0}^{N} \bm \alpha_i(\bm{ \dot p}_{c,x}[i] -\bm {\dot p}_{c,x}^{ref})^2 + \bm u[i]^\intercal \bm \beta _i \bm u[i]
\end{align}
\begin{subequations}
\begin{align}
\label{eq:cons1}
\operatorname{s.t.} \:\: \text{Initial Condition}:\:\bm x_0 = \bm x[0] \\
\label{eq:cons2}
\:\: \text{End Condition}:\:\bm x_N = \bm x[N] \\
\label{eq:cons3}
\text{Simplified Dynamics: equation (\ref{eq:simpDyn})} \\
\label{eq:cons4}
\:\: \text{Periodic Gait Constraint} \\
\label{eq:cons5}
\mathbf q_{min} \leq \mathbf q_n[i] = \texttt{IK}(\bm x[i],\: \bm p_n[i]) \leq \mathbf q_{max} \\
\label{eq:cons6}
\bm \tau_{min} \leq \bm J_n^\intercal(\mathbf q_n[i])\bm u_n[i] \leq \bm \tau_{max} \\
\label{eq:cons7}
\bm p_l(\texttt{terrain}) = \bm p_{n,l} = \bm p_{hip,l}+\frac{t_{stance}}{2}\bm { \dot p}_{c,l} \\
\label{eq:cons8}
0.02 \leq dt_i \leq 0.05
\end{align}
\end{subequations}
Equation (\ref{eq:cons4}) enforces the periodic walking gait of the bipedal robot, with a 5-time-step stance phase and a 5-time-step swing phase. Equation (\ref{eq:cons5}) enforces joint angle limits. Equation (\ref{eq:cons6}) constrains joint torques through the contact Jacobians. Lastly, the swing foot placement is enforced by the inverted-pendulum-based foot placement policy (\cite{raibert1986legged,di2018dynamic,li2021force}). With this policy, the optimization framework can select the most suitable gait period based on how far each step must be placed to overcome the terrain while keeping the robot's linear velocity constant. $t_{stance}$ represents the total time the stance foot spends on the ground, which is the sum of the 5 sampling times at step $l$. The touch-down placement for each step $l$ is adapted to the terrain (i.e. each step lands on a stepping stone).
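The timing structure and the inverted-pendulum foot placement policy above can be illustrated with a minimal numeric sketch. The 1-D simplification along the walking direction and the function names are our own assumptions, not the paper's implementation:

```python
# Each walking step spans 5 discrete time steps with equal sampling times,
# so the gait period of step l is 5 * dt(l), with each dt bounded as in the
# last constraint above.
DT_MIN, DT_MAX = 0.02, 0.05   # per-sample bounds [s]
SAMPLES_PER_STEP = 5

def gait_period(dt_step):
    """Gait period of one walking step from its (equal) sampling time."""
    assert DT_MIN <= dt_step <= DT_MAX, "sampling time out of bounds"
    return SAMPLES_PER_STEP * dt_step

def foot_placement(p_hip, v_com, t_stance):
    """Inverted-pendulum foot placement policy: hip position plus half the
    stance duration times the CoM velocity (1-D sketch along x)."""
    return p_hip + 0.5 * t_stance * v_com

# a long stride at a constant 1.5 m/s calls for a longer gait period
period = gait_period(0.05)
p_des = foot_placement(p_hip=0.6, v_com=1.5, t_stance=period)
```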
\section{Results}
\label{sec:Results}
In this section, we will present highlighted results for validation of our proposed adaptive-frequency control and optimization framework. Associated simulation videos can be found via the link under Figure. \ref{fig:title}.
We validate our proposed approach in a high-fidelity, physically realistic simulation in MATLAB Simulink with the Simscape Multibody library. We also use the Spatial v2 software package \cite{featherstone2014rigid} to obtain the coefficients of the dynamics equations in WBC, and CasADi \cite{Andersson2019} for offline optimization.
Firstly, we present the comparison between MPC-only control and MPC+WBC in tracking a sinusoidal height command with a double-leg stance. Due to MPC's low sampling frequency, previous works usually use MPC only as locomotion control and use higher-frequency QP-based force control as balance/stance control (e.g. \cite{nguyen2019optimized,chignoli2021humanoid,li2021force}). Figure. \ref{fig:heightcomparison} compares simulation snapshots of the two approaches: the proposed MPC+WBC approach tracks the height command closely, while the MPC-only approach fails over time.
\begin{figure}[h]
\vspace{0.2cm}
\center
\includegraphics[width=1 \columnwidth]{Figures/comparison2.png}
\caption{{\bfseries Height Command Tracking Results} Simulation snapshots at several time steps.}
\label{fig:heightcomparison}
\vspace{-0.2cm}
\end{figure}
Secondly, we compare the locomotion performance over stepping stones in simulation with the following approaches.
\begin{enumerate}
\item With fixed-frequency MPC + WBC (gait period fixed at 0.3 $\unit{s}$)
\item With adaptive-frequency MPC + WBC
\item With adaptive-frequency MPC + WBC + optimization
\end{enumerate}
As can be seen in Figure. \ref{fig:fixed_gait_mpc}, the approach with fixed-frequency control cannot adapt the foot placement to the stepping stone gap distance. In Figure. \ref{fig:no_opt}, the adaptive-frequency MPC+WBC framework with manually input gait periods based on the terrain shows an improvement over the fixed gait period case. However, it cannot achieve precise foot placement on the stepping stones nor maintain a preferable trajectory, and therefore fails after only a few stones. Our proposed approach, shown in Figure. \ref{fig:with_opt}, with both adaptive-frequency control and optimization, allows the bipedal robot to traverse the stepping stone terrain. Figure. \ref{fig:velocity_tracking} shows the velocity tracking performance of proposed approach 3). The simulated velocity stays smoothly close to the desired trajectory throughout the stepping stone terrain.
\begin{figure}[t]
\vspace{0.2cm}
\center
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[clip, trim=0cm 8.2cm 0cm 0cm, width=\columnwidth]{Figures/comparison1.png}
\caption{Simulation results: fixed-frequency control}
\label{fig:fixed_gait_mpc}
\end{subfigure}
\\
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[clip, trim=0cm 4.1cm 0cm 4.1cm, width=\columnwidth]{Figures/comparison1.png}
\caption{Simulation results: adaptive-frequency control only}
\label{fig:no_opt}
\end{subfigure}
\\
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[clip, trim=0cm 0cm 0cm 8.1cm, width=\columnwidth]{Figures/comparison1.png}
\caption{Simulation results: adaptive-frequency control + optimization (proposed approach)}
\label{fig:with_opt}
\end{subfigure}
\caption{{\bfseries Motion Snapshots of Uneven Stepping Stone Locomotion} Comparison of fixed-frequency control vs. adaptive-frequency control vs. adaptive-frequency control + optimization }
\label{fig:snapshots}
\vspace{-0.0cm}
\end{figure}
\begin{figure}[!h]
\vspace{0.2cm}
\center
\includegraphics[width=1 \columnwidth]{Figures/velocity_tracking.pdf}
\caption{{\bfseries Velocity Tracking Results} Simulation with perturbed stone shapes }
\label{fig:velocity_tracking}
\end{figure}
We also present the solver computation times for several tasks in CasADi with the IPOPT solver in MATLAB R2021b. As a benchmark, the PC platform we use for offline optimization has an AMD Ryzen 5-5600X CPU clocked at 4.65$\unit {GHz}$. In Table.\ref{tab:solvingTime}, we report the solving time of the proposed adaptive-frequency trajectory optimization. The cases are categorized by the number of stepping stones in the terrain. We run the optimization with 30 randomized terrain setups for each case and compute the average time.
Lastly, we present the uneven stepping stone terrain locomotion results with our proposed approach.
In realistic scenarios, the stepping stone surface shapes, heights, and widths may vary, and errors and disturbances in a vision-based terrain map acquisition system may hinder the accuracy of terrain information. In our approach, the terrain map in the optimization framework can be simplified to uniformly sized stepping stones with varied center-to-center distances, shown in Figure. \ref{fig:optresults}. We then use this optimization result to control the robot to traverse terrains with various perturbations, shown in Figure. \ref{fig:trackingresults}. These terrain perturbations include varied stepping stone widths, heights, and surface shapes.
In the above simulation results, the robot maintains a linear velocity of 1.5 $\unit{m/s}$ during the task. The stone center-to-center gap distance is between 15 $\unit{cm}$ and 30 $\unit{cm}$. The maximum stone height perturbation is 5 $\unit{cm}$. The stone width perturbation varies between 4 $\unit{cm}$ and 10 $\unit{cm}$.
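A randomized terrain within the ranges quoted above could be sampled as in the sketch below. The uniform distributions are our assumption, since the text does not specify how the 30 randomized terrain setups were drawn:

```python
import random

def random_stepping_stone_terrain(n_stones, seed=None):
    """Sample a stepping-stone terrain within the perturbation ranges quoted
    above: gap 15-30 cm, height perturbation up to 5 cm, width 4-10 cm."""
    rng = random.Random(seed)
    stones, x = [], 0.0
    for _ in range(n_stones):
        x += rng.uniform(0.15, 0.30)  # center-to-center gap [m]
        stones.append({"x": x,
                       "height": rng.uniform(-0.05, 0.05),  # [m]
                       "width": rng.uniform(0.04, 0.10)})   # [m]
    return stones

terrain = random_stepping_stone_terrain(6, seed=0)
```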
\begin{table}[!t]
\vspace{0.2cm}
\centering
\caption{Offline Optimization Solving Time}
\label{tab:solvingTime}
\begin{tabular}{ccccc}
\hline
Cases: & 4 stones & 5 stones & 6 stones & 7 stones\\
\hline
Solving time: & 6.72$\unit{s}$ & 7.93$\unit{s}$ & 10.15$\unit{s}$ & 12.23$\unit{s}$ \\
\hline
\end{tabular}
\vspace{-0.3cm}
\end{table}
\section{Bipedal Robot Model and System Overview}
\label{sec:robotModel}
\subsection{Bipedal Robot Model}
In this section, we present the bipedal robot model used in this work. Our bipedal robot model is enhanced from our previous design in \cite{li2021force}, a small-scale bipedal robot with 5-DoF legs. As presented in Figure. \ref{fig:design}, each robot leg consists of ab/ad, hip, thigh, calf, and ankle joints, all actuated by Unitree A1 torque-controlled motors. The A1 motor is a powerful joint motor with a 33.5 $\unit{Nm}$ maximum torque output and a 21.0 $\unit{rad/s}$ maximum joint speed output while weighing only 0.6 $\unit{kg}$.
\begin{figure}[!h]
\vspace{0.1cm}
\center
\includegraphics[width=1 \columnwidth]{Figures/robotAndLeg2.png}
\caption{{\bfseries Bipedal Robot Configuration and Simplified Dynamics Model}}
\label{fig:design}
\vspace{-0.5em}
\end{figure}
In this bipedal leg design, we strategically place all joint actuators on the upper part of the thigh links, close to the hips, to concentrate the mass and minimize the leg dynamics during locomotion. Negligible leg mass is an important assumption in our force-and-moment-based simplified dynamics model in MPC \cite{li2021force}. The trunk mass of the bipedal robot is 5.8 $\unit{kg}$ and the overall mass is around 11 $\unit{kg}$.
More details about the physical design parameters can also be found in \cite{li2021force}.
\subsection{System Overview}
\label{sec:sysoverview}
The optimization and control system block diagram is shown in Figure. \ref{fig:controlArchi}.
We aim to achieve varied step lengths for each step in bipedal locomotion by varying gait frequencies in adaptive-frequency MPC. The proposed framework is built around this controller. To allow more stable and efficient locomotion, we pair the MPC control framework with offline trajectory optimization to generate desired trajectories. Since the MPC sampling frequency is low and is determined by the varied gait periods, we use a higher-frequency, task-oriented WBC scheme for more accurate trajectory tracking control.
\begin{figure}[!h]
\vspace{-0.3cm}
\center
\includegraphics[width=1 \columnwidth]{Figures/system2.pdf}
\caption{{\bfseries System Block Diagram} Optimization and control architecture.}
\label{fig:controlArchi}
\vspace{-.2cm}
\end{figure}
The optimization framework uses the terrain map to generate discrete optimization data, including the desired body CoM trajectory $\bm x_{des} \in \mathbb{R}^3$, the desired foot position $\bm p_{n,des} \in \mathbb{R}^3$ for the $n$th foot, and the discrete sampling time $dt_i$ at time step $i$ for the MPC. The CoM trajectory and foot positions are linearly interpolated to a sampling frequency of 1 $\unit{kHz}$ to match the frequency of the swing leg control and WBC. The MPC accepts the optimization data at its native frequency due to the synchronization of sampling times.
Reaction forces from MPC and swing leg control are input into WBC to be mapped to joint torques $\bm \tau \in \mathbb{R}^{10}$.
The robot state feedback $\bm x \in \mathbb{R}^{12}$ includes the body Euler angles (roll, pitch, and yaw) ${\Theta = [\phi,\:\theta,\:\psi]}^\intercal$, position $\bm p_c$, velocity of the body CoM $\dot{\bm p}_c$, and angular velocity $\bm \omega$. Joint feedback $\mathbf q \in \mathbb{R}^{10}$ includes the joint positions of the bipedal robot.
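The linear interpolation of the variably sampled optimization output onto the 1 $\unit{kHz}$ control grid can be sketched as follows. This is a minimal illustration with hypothetical knot values, not the actual trajectory data:

```python
import numpy as np

def upsample_to_1khz(t_knots, x_knots):
    """Linearly interpolate a discretely sampled optimization output (given
    at the variable sampling times t_knots) onto a fixed 1 kHz grid, so the
    swing leg control and WBC can consume it at their native rate."""
    t_fine = np.arange(t_knots[0], t_knots[-1], 1e-3)
    return t_fine, np.interp(t_fine, t_knots, x_knots)

# hypothetical CoM height knots at non-uniform optimization times [s]
t = np.array([0.0, 0.020, 0.055, 0.105])
z = np.array([0.30, 0.31, 0.305, 0.30])
t_fine, z_fine = upsample_to_1khz(t, z)
```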
\section{Introduction}
The measurement of the nuclear structure function in deep inelastic electron-nucleus scattering (DIS) is the best way to improve our knowledge of the nuclear parton distributions and QCD dynamics in the high energy regime (See, e.g.\cite{armesto_review,frank_review}). However, after more than 30 years of experimental and theoretical studies, a standard picture of nuclear modifications of structure functions and parton densities has not yet emerged. Fixed target DIS measurement on nuclei revealed that the ratio of nuclear to nucleon structure functions (normalized by the atomic mass number) is significantly different from unity.
In particular, these data demonstrate an intricate behavior, with the ratio being less than one at large $x$ (the EMC effect) and at small $x$ (shadowing) and larger than one for $x \approx 10^{-1}$ (antishadowing).
The existing data were taken at lower energies \cite{e665} and therefore the perturbative QCD regime ($Q^2 \ge 1$ GeV$^2$) was explored only for relatively large values of the (Bjorken) $x$ variable ($x > 10^{-2} $). Experimentally, this situation will hopefully change with a future high energy electron-ion collider (EIC) (For recent reviews see, e.g. \cite{erhic,lhec}), which is supposed to take data at higher energies and explore the region of small $x$ ($ x < 10^{-2} $) in the perturbative QCD regime.
The theory of nuclear effects in DIS is still far from being concluded. The straightforward use of nucleon parton distributions evolved with
DGLAP equations and corrected with a nuclear modification factor determined by fitting the existing data as in
Refs. \cite{eks1,eks2,hkn,ds,eps1,eps2}
is well justified only in the large $Q^2$ region and not too small $x$. Moreover, these approaches do not address the fundamental problem of
the origin of the nuclear shadowing and cannot be extended to small $x$, where we expect to see new interesting physics related to the
non-linear aspects of QCD and gluon saturation (For reviews see Ref. \cite{hdqcd}). Currently, there are several phenomenological models which
predict different magnitudes for the shadowing in the nuclear structure function based on distinct treatments for the multiple scatterings of the partonic
component of the virtual photon, assumed in general to be a quark-antiquark ($q \bar{q}$) color dipole.
Some works \cite{armesto_glauber,erike_inclusive,simone_hq,erike_exclusive} address the origin of the nuclear shadowing through the Glauber-Gribov formalism \cite{glauber,gribov} in the totally coherent limit ($l_c \approx 1/(2 m_N x) \gg R_A$, where $l_c$ is the coherence length), which considers the multiple scattering of the color dipole with a nucleus made of nucleons whose binding energy is neglected. In the high energy limit, the eikonal approximation is assumed, with the dipole keeping a fixed size during the scattering process. In this approach the total photon-nucleus cross section is given by
\begin{equation}
\sigma_{\gamma^*A} = \int d^2r \, \int dz |\psi(r,z)|^2 \sigma_{dA}(x,r)
\label{sigga}
\end{equation}
where $|\psi(r,z)|^2$ is the probability of the photon to split into a $q\bar{q}$ pair of size $r$ and
$\sigma_{d A}(x,r)$ is the dipole-nucleus cross section, which is expressed as \cite{armesto_glauber}
\begin{equation}
\sigma_{dA}(x,r) = \int d^2b \,2 \, \left[ 1-\exp\left(-\frac{1}{2}A \, T_A(b) \sigma_{dp}(x,r)\right) \right]
\label{sigda}
\end{equation}
with $T_A(b)$ being the nuclear thickness function and $\sigma_{dp}(x,r)$ is the dipole-proton cross section.
It must be stressed that once $\sigma_{dp}(x,r)$ is fixed, the extension to the nuclear case is essentially parameter free in this approach.
In the Glauber formula (\ref{sigda}) it is assumed that the dipole undergoes several elastic scatterings on the target. Although reasonable
and phenomenologically successful this assumption deserves further investigation.
This model can be derived in the classical approach of the Color Glass Condensate formalism \cite{raju_acta}.
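As an illustration of equation (\ref{sigda}), the following sketch evaluates the Glauber dipole-nucleus cross section numerically for a Gaussian thickness function normalized to unity. The Gaussian profile and all parameter values are our own simplifying assumptions (a realistic calculation would use, e.g., a Woods-Saxon profile); units are fm$^2$ throughout:

```python
import numpy as np

def sigma_dA(sigma_dp, A, R_A, n_b=4000):
    """Dipole-nucleus cross section from the Glauber formula above, for a
    Gaussian thickness function T_A(b) whose d^2b integral equals one."""
    b = np.linspace(0.0, 6.0 * R_A, n_b)              # impact parameter grid
    T_A = np.exp(-(b / R_A) ** 2) / (np.pi * R_A**2)  # normalized thickness
    integrand = 2.0 * (1.0 - np.exp(-0.5 * A * T_A * sigma_dp))
    db = b[1] - b[0]
    return float(np.sum(2.0 * np.pi * b * integrand) * db)  # d^2b = 2 pi b db

# weak-interaction limit: sigma_dA approaches A * sigma_dp (no shadowing);
# for a large dipole cross section the result saturates well below A * sigma_dp
weak = sigma_dA(sigma_dp=0.01, A=208, R_A=5.0)
strong = sigma_dA(sigma_dp=10.0, A=208, R_A=5.0)
```

The two limits reproduce the expected behavior: additivity for weakly interacting small dipoles and strong shadowing for large ones.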
Another approach largely used in the literature is
based on the connection between nuclear shadowing and the cross section for the diffractive dissociation
of the projectile \cite{capella,frank_nuc,armesto_alba,armesto_dif}, which was established a long time ago
by Gribov \cite{gribov}. Its result can be derived using reggeon calculus \cite{reggeon} and the Abramovsky-Gribov-Kancheli (AGK) cutting rules \cite{agk} and is a manifestation of unitarity. This formalism can be used to directly calculate the cross sections of photon-nucleus scattering for the interaction with two nucleons in terms of the diffractive photon-nucleon cross section.
In this formalism, the total photon-nucleus cross section is expressed as a series
containing the contribution from multiple scatterings (1, 2, $\dots$):
\begin{equation}
\sigma_{\gamma^*A} = \sigma_{\gamma^*A}^{(1)} + \sigma_{\gamma^*A}^{(2)} + \sigma_{\gamma^*A}^{(3)} + \cdots\,
\label{eq1}
\end{equation}
with the first term being the one that arises from independent scattering of the photon off $A$ nucleons:
\begin{equation}
\sigma_{\gamma^* A}^{(1)}=A\,\sigma_{\gamma^*p}
\end{equation}
and the first correction to the non-additivity of cross sections being
\begin{equation}
\sigma_{\gamma^* A}^{(2)}=-4\pi A(A-1)\int d^2b\ T_A^2(b)
\int _{M^2_\mathrm{min}}^{M^2_\mathrm{max}}dM^2 \left.
\frac{d\sigma^{\mathcal{D}}_{\gamma^*{\rm p}}}{dM^2dt}\right\vert_{t=0} F_A^2(t_\mathrm{min})
\label{sigalb}
\end{equation}
where $M^2$ is the mass of the diffractively produced system, $F_A$ is the nucleus form factor which takes into account the coherence effects and the differential $\gamma^* p$ cross section for diffractive dissociation of the virtual photon appearing in
(\ref{sigalb}) is given by:
\begin{equation}
\left.\frac{d\sigma^\mathcal{D}_{\gamma^*{\rm p}} (Q^2,x_{{I\!\!P}},\beta)}{dM^2dt}
\right\vert_{t=0}=
\frac{4\pi^2\alpha_{em}B_D}{Q^2(Q^2+M^2)}x_{{I\!\!P}}F^{(3)}_{2\mathcal{D}}(Q^2,x_{{I\!\!P}},\beta)
\label{eq3}
\end{equation}
where $B_D$ is the diffractive slope parameter and $x_{{I\!\!P}}F^{(3)}_{2\mathcal{D}}(Q^2,x_{{I\!\!P}},\beta)$ is the diffractive proton structure function. Moreover,
$t_\mathrm{min}=-m_N^2 x_{{I\!\!P}}^2$, $x_{{I\!\!P}}=x/\beta$ and $\beta=Q^2/(Q^2+M^2)$.
The integration limits in $M^2$ are $M^2_\mathrm{min}= 4 m_\pi^2 =0.08$ GeV$^2$, $M^2_\mathrm{max}= Q^2\left(x_{{I\!\!P} \mathrm{max}}/x-1\right)$ and $x_{{{I\!\!P}}\mathrm{max}}=0.1$.
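The kinematic relations above can be collected in a short sketch (GeV units; the function name and the example values are ours):

```python
def diffractive_kinematics(x, Q2, M2, m_N=0.938, x_pom_max=0.1):
    """Kinematic variables entering the diffractive cross section above:
    beta, x_pomeron, t_min, and the upper limit of the M^2 integration."""
    beta = Q2 / (Q2 + M2)              # beta = Q^2 / (Q^2 + M^2)
    x_pom = x / beta                   # x_pomeron = x / beta
    t_min = -(m_N * x_pom) ** 2        # t_min = -m_N^2 x_pomeron^2
    M2_max = Q2 * (x_pom_max / x - 1.0)
    return beta, x_pom, t_min, M2_max

beta, x_pom, t_min, M2_max = diffractive_kinematics(x=1e-4, Q2=4.0, M2=12.0)
```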
A shortcoming of this approach is that the inclusion of the higher order rescatterings is model dependent. This resummation is especially important at small $x$, where multiple scatterings are more likely to happen.
In general it is assumed that the intermediate states in the rescatterings have the same structure and two resummation schemes are considered: (a) {\it the Schwimmer equation} \cite{schwimmer}, which sums all fan diagrams with triple pomeron interactions and which is valid for the scattering of a small projectile on a large target. It implies that the photon--nucleus cross section is given by:
\begin{equation}
\sigma_{\gamma^* A}^{S}(x,r)=\sigma_{\gamma^*p}(x,r) \, A \, \int d^2b \frac{ T_A(b)}{1+(A-1) \, T_A(b) \, f(x,Q^2)}
\label{schwimmer}
\end{equation}
and (b) {\it the eikonal unitarized cross section}, given by
\begin{equation}
\sigma^{E}_{\gamma^*A}(x,r)=\sigma_{\gamma^*p} (x,r) \, A \, \int d^2b
\frac{\left\{1-\exp{\left[-2(A-1)T_A(b)f(x,Q^2)\right]}\right\}}{2(A-1)f(x,Q^2)},
\label{eikonal}
\end{equation}
where
\begin{equation}
f(x,Q^2)=\frac{4\pi}{\sigma_{\gamma^*p}(x,r)} \times \int_{M^2_{min}}^{M^2_\mathrm{max}} dM^2 \left. \frac{d\sigma^\mathcal{D}}{dM^2 dt} \right|_{t=0} \times F^2_A(t_\mathrm{min}).
\label{ff}
\end{equation}
As shown in \cite{armesto_alba,armesto_dif}, the eikonal unitarization predicts a larger magnitude for the nuclear shadowing than the Schwimmer equation.
For models which take into account the possibility of different intermediate states see, e.g., Ref. \cite{kope}.
Except for the choice of the resummation scheme, the predictions for $\sigma_{\gamma^* A}$ obtained using (\ref{schwimmer}) or (\ref{eikonal}) are parameter free once the diffractive cross section is provided. Models based on this non-perturbative Regge-Gribov framework are quite successful in describing existing data on inclusive and diffractive $ep$ and $eA$ scattering \cite{armesto_dif,armesto_ep}. However, they lack solid theoretical foundations within QCD.
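The different amounts of shadowing in the two schemes can be seen by comparing, at fixed impact parameter, the factors by which equations (\ref{schwimmer}) and (\ref{eikonal}) suppress the single-scattering term. This minimal sketch, with our own shorthand $\kappa \equiv (A-1)\,T_A(b)\,f(x,Q^2)$, illustrates that the eikonal form always suppresses more:

```python
import math

def schwimmer_suppression(kappa):
    """Suppression of the single-scattering term at fixed impact parameter
    in the Schwimmer scheme: 1 / (1 + kappa)."""
    return 1.0 / (1.0 + kappa)

def eikonal_suppression(kappa):
    """Same suppression in the eikonal scheme:
    (1 - exp(-2 kappa)) / (2 kappa), written with expm1 for stability."""
    return -math.expm1(-2.0 * kappa) / (2.0 * kappa)

kappas = [0.1, 0.5, 1.0, 2.0]
pairs = [(schwimmer_suppression(k), eikonal_suppression(k)) for k in kappas]
```

Both factors approach one as $\kappa \to 0$ (no shadowing), and the eikonal factor lies below the Schwimmer one for every $\kappa > 0$, consistent with the larger shadowing quoted above for the eikonal unitarization.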
It is important to emphasize that some authors \cite{frank_nuc} use these models as initial conditions for DGLAP evolution.
The comparison among the predictions of the different models for nuclear shadowing presented in Ref. \cite{armesto_review}, including the models discussed above, shows that they coincide within $\approx 15 \%$ in the region where experimental data exist ($x \ge 10^{-2}$) but differ strongly for smaller values of $x$, with the difference reaching almost a factor of 2 at $x = 10^{-5}$. Our goal in this paper is to try to reduce the theoretical uncertainty present in these predictions. In particular,
differently from previous studies, which consider different inputs in the calculations using the Glauber, Schwimmer and eikonal approaches, we will consider a single model for the projectile-nucleon interaction. We will calculate the dipole-nucleon cross section and the diffractive structure function using the dipole picture and the solution of the running coupling Balitsky-Kovchegov equation \cite{bk}, which is the basic equation of the Color Glass Condensate formalism. Recently, this approach
was shown to describe quite well the $ep$ HERA data for inclusive and diffractive observables (See, e.g. Refs. \cite{rcbk,vic_joao,alba_marquet,vic_anelise}).
Following this procedure we are able to estimate the magnitude of the theoretical uncertainty associated with the way the multiple scatterings are treated, reducing the contribution associated with the choice of initial conditions used in the calculations. Moreover, we discuss the possibility of discriminating
between these unitarization procedures in a future electron-ion collider.
This paper is organized as follows. In Sec. \ref{dipole} we present a brief description of inclusive and diffrative $\gamma$ - nucleon processes in the color dipole picture with particular emphasis in the dipole - proton cross section given by the Color Glass Condensate formalism. In Section \ref{results} we present the predictions of the three unitarization schemes discussed above using as input the CGC results for the dipole - proton interaction and compare them
with the existing experimental data. Moreover, we present a comparison between the predictions for the kinematical region which will be probed in a future electron - ion collider. Finally, in Section \ref{conc} we summarize our results and present our conclusions.
\section{Inclusive and diffractive $\gamma p$ processes in the color dipole picture}
\label{dipole}
The photon-hadron interaction at high energy (small $x$) is usually described in the infinite momentum frame of the hadron in terms of the scattering of the photon off a sea quark, which is typically emitted by the small-$x$ gluons in the proton. However, as already mentioned in the introduction, in order to
describe inclusive and diffractive interactions and disentangle the small-$x$ dynamics of the hadron wavefunction, it is more appropriate to consider the photon-hadron scattering in the dipole frame, in which most of the energy is
carried by the hadron, while the photon has
just enough energy to dissociate into a quark-antiquark pair
before the scattering. In this representation the probing
projectile fluctuates into a
quark-antiquark pair (a dipole) with transverse separation
$\mbox{\boldmath $r$}$ long before the interaction, which then
scatters off the target \cite{dipole}. The main motivation to use this color dipole approach is that it gives a simple unified picture of inclusive and diffractive processes. In particular,
in this approach the proton structure function is given in terms of the dipole - proton cross section, $\sigma_{d p}(x,r)$, as follows:
\begin{equation}
F_2^p(x,Q^2) = \frac{Q^2}{4\pi^2\alpha_{em}} \int d^2r \, \int dz |\psi(r,z)|^2 \sigma_{dp}(x,r)
\label{sigga}
\end{equation}
where $|\psi(r,z)|^2$ is the probability of the photon to split into a $q\bar{q}$ pair of size $r$. Moreover, the total diffractive cross sections take the following form (See e.g. Ref. \cite{GBW}),
\begin{equation}\label{eq:sigdiff}
\sigma^\mathcal{D}_{T,L} = \int_{-\infty}^0 dt\,e^{B_D t} \left. \frac{d \sigma ^\mathcal{D} _{T,L}}{d t} \right|_{t = 0} = \frac{1}{B_D} \left. \frac{d \sigma ^\mathcal{D} _{T,L}}{d t} \right|_{t = 0}
\end{equation}
where
\begin{equation}\label{eq:dsig-dt}
\left. \frac{d \sigma ^\mathcal{D} _{T,L}}{d t} \right|_{t = 0} = \frac{1}{16 \pi} \int d^2 {\bf r}
\int ^1 _0 d \alpha |\Psi _{T,L} (\alpha, {\bf r})|^2 \sigma _{dp} ^2 (x, \mbox{\boldmath $r$})
\end{equation}
It is assumed that the dependence on the momentum transfer, $t$, factorizes and is given by an exponential with diffractive slope $B_D$.
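As a quick numerical sanity check (our own illustration, not part of the original analysis), one can verify that the exponential $t$-dependence assumed in Eq. (\ref{eq:sigdiff}) indeed integrates to the factor $1/B_D$, using the slope value $B_D = 6.7$ GeV$^{-2}$ adopted later in the text:

```python
import math

# Numerical check that int_{-inf}^{0} exp(B_D * t) dt = 1 / B_D,
# i.e. the exponential t-dependence yields the 1/B_D factor in Eq. (sigdiff).
B_D = 6.7  # GeV^{-2}; slope value used later in the text
n = 200001
t0 = -10.0  # e^{B_D t} is already negligible below t = -10 GeV^2
dt = -t0 / (n - 1)
ts = [t0 + k * dt for k in range(n)]
vals = [math.exp(B_D * t) for t in ts]
integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
print(integral, 1.0 / B_D)
```

The truncation at $t=-10$ GeV$^2$ is harmless, since the integrand has already decayed by a factor $e^{-67}$ there.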
The diffractive processes can be analysed in more detail by studying the behaviour of the diffractive structure function $F_2^{\mathcal{D} (3)}(Q^{2}, \beta, x_{I\!\!P})$. Following Ref. \cite{GBW} we assume that the diffractive structure function is given by
\begin{equation}
F_2^{\mathcal{D}(3)} (Q^{2}, \beta, x_{I\!\!P}) = F^{\mathcal{D}}_{q\bar{q},L} + F^{\mathcal{D}}_{q\bar{q},T} + F^{\mathcal{D}}_{q\bar{q}g,T},
\label{soma}
\end{equation}
where the $q\bar q g$ contribution with longitudinal polarization is not
present because it has no leading logarithm in $Q^2$. The different contributions can be calculated and for the $q\bar q$ contributions
they read \cite{wusthoff,nikqqg}
\begin{equation}
x_{I\!\!P}F^{\mathcal{D}}_{q\bar{q},L}(Q^{2}, \beta, x_{I\!\!P})=
\frac{3 Q^{6}}{32 \pi^{4} \beta B_D} \sum_{f} e_{f}^{2}
2\int_{\alpha_{0}}^{1/2} d\alpha \alpha^{3}(1-\alpha)^{3} \Phi_{0},
\label{qqbl}
\end{equation}
\begin{equation}
x_{I\!\!P}F^{\mathcal{D}}_{q\bar{q},T}(Q^{2}, \beta, x_{I\!\!P}) =
\frac{3 Q^{4}}{128\pi^{4} \beta B_D} \sum_{f} e_{f}^{2}
2\int_{\alpha_{0}}^{1/2} d\alpha \alpha(1-\alpha)
\left\{ \epsilon^{2}[\alpha^{2} + (1-\alpha)^{2}] \Phi_{1} + m_f^{2} \Phi_{0} \right\}
\label{qqbt}
\end{equation}
where the lower limit of the integral over $\alpha$ is given by $\alpha_{0} = \frac{1}{2} \, \left(1 - \sqrt{1 - \frac{4m_{f}^{2}}{M^{2}}}\right)
$, the sum is performed over the quark flavors and \cite{fss}
\begin{equation}
\Phi_{0,1} \equiv \left(\int_{0}^{\infty}r dr K_{0 ,1}(\epsilon r)\sigma_{dp}(x_{I\!\!P},\mbox{\boldmath $r$}) J_{0 ,1}(kr) \right)^2.
\label{fi}
\end{equation}
The $q\bar{q}g$ contribution, within the dipole picture at leading $\ln Q^2$ accuracy, is given by \cite{wusthoff,GBW,nikqqg}
\begin{eqnarray}
\lefteqn{x_{I\!\!P}F^{\mathcal{D}}_{q\bar{q}g,T}(Q^{2}, \beta, x_{I\!\!P})
= \frac{81 \beta \alpha_{S} }{512 \pi^{5} B_D} \sum_{f} e_{f}^{2}
\int_{\beta}^{1}\frac{\mbox{d}z}{(1 - z)^{3}}
\left[ \left(1- \frac{\beta}{z}\right)^{2} + \left(\frac{\beta}{z}\right)^{2} \right] } \label{qqg} \\
& \times & \int_{0}^{(1-z)Q^{2}}\mbox{d} k_{t}^{2} \ln \left(\frac{(1-z)Q^{2}}{k_{t}^{2}}\right)
\left[ \int_{0}^{\infty} u \mbox{d}u \; \sigma_{dp}(u / k_{t}, x_{I\!\!P})
K_{2}\left( \sqrt{\frac{z}{1-z} u^{2}}\right) J_{2}(u) \right]^{2}.\nonumber
\end{eqnarray}
As pointed out in Ref. \cite{marquet}, at small $\beta$ and low $Q^2$, the leading $\ln (1/\beta)$ terms should be resummed and the above expression should be modified. However, since a description of the same quality is possible using Eq. (\ref{qqg}) by adjusting the coupling \cite{marquet}, in what follows we will use this expression for our phenomenological studies.
We use the standard notation for the variables $
x_{I\!\!P} = (M^2 + Q^2)/(W^2 + Q^2)$ and $x = Q^2/(W^2 + Q^2) = \beta x_{{I\!\!P}}$,
where $W$ is the total energy of the
$\gamma ^* p$ system.
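For concreteness, the kinematic relations above can be checked with arbitrary example values (our own numbers, chosen only for illustration); with $\beta = Q^2/(M^2+Q^2)$, one has $x = \beta \, x_{I\!\!P}$ identically:

```python
# Illustrative consistency check of x_P = (M^2+Q^2)/(W^2+Q^2),
# x = Q^2/(W^2+Q^2) and beta = Q^2/(M^2+Q^2), so that x = beta * x_P.
Q2, M2, W2 = 10.0, 4.0, 1.0e4  # GeV^2, arbitrary example values
x_pom = (M2 + Q2) / (W2 + Q2)   # x_Pomeron
x = Q2 / (W2 + Q2)              # Bjorken x
beta = Q2 / (M2 + Q2)
print(x, beta * x_pom)
```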
The main input for the calculations of inclusive and diffractive observables in the dipole picture is $\sigma_{dp}(x,\mbox{\boldmath $r$})$ which is determined by the QCD dynamics at small $x$. In the eikonal approximation, it is given by:
\begin{equation}
\sigma_{dp}(x, \mbox{\boldmath $r$}) = 2 \int d^2 \mbox{\boldmath $b$} \, {\cal N}(x, \mbox{\boldmath $r$}, \mbox{\boldmath $b$})
\label{sdip}
\end{equation}
where $ {\cal N}(x, \mbox{\boldmath $r$}, \mbox{\boldmath $b$})$ is the forward scattering amplitude for a dipole with size
$r=|\mbox{\boldmath $r$}|$ and impact parameter $\mbox{\boldmath $b$}$ which can be related to the expectation value of a Wilson loop \cite{hdqcd}. It
encodes all the
information about the hadronic scattering, and thus about the
non-linear and quantum effects in the hadron wave function. In general, it is assumed that the impact parameter dependence of $\cal{N}$ can be factorized as ${\cal{N}}(x,\mbox{\boldmath $r$},\mbox{\boldmath $b$}) = {\cal{N}}(x,\mbox{\boldmath $r$}) S(\mbox{\boldmath $b$})$, where
$S(\mbox{\boldmath $b$})$ is the profile function in impact parameter space, which implies $\sigma_{dp}(x,\mbox{\boldmath $r$})=\sigma_0 \mathcal{N}(x,\mbox{\boldmath $r$})$. The forward scattering amplitude ${\cal{N}}(x,\mbox{\boldmath $r$})$
can be obtained by solving the BK evolution equation \cite{rcbk} or considering phenomenological QCD-inspired models to describe the interaction of the dipole with the target. The BK equation is the simplest nonlinear evolution equation
for the dipole-hadron scattering amplitude, being actually a mean field version
of the first equation of the B-JIMWLK hierarchy \cite{CGC}. In its linear
version, it corresponds to the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation
\cite{bfkl}.
The solution of the LO BK equation implies that the saturation scale grows much faster with increasing energy
($Q_s^2\sim x^{-\lambda}$, with $\lambda \approx 0.5$) than that
extracted from phenomenology ($\lambda \sim 0.2-0.3$).
In recent years the next-to-leading order corrections to the BK equation were
calculated
\cite{kovwei1,javier_kov,balnlo} through the resummation of $\alpha_s N_f$ contributions to
all orders, where $N_f$ is the number of flavors. Thanks to these works it is now possible to estimate
the soft gluon emission and running coupling corrections to the evolution kernel.
The authors found that the dominant contributions come from the running
coupling corrections, which allow us to determine the scale of the running coupling in the
kernel. The solution of the improved BK equation was studied in detail in Ref.
\cite{javier_kov}. The running of the coupling reduces
the speed of the evolution to values compatible with experimental data, with the geometric
scaling regime being reached only at ultra-high energies. In \cite{rcbk} a global
analysis of the small $x$ data for the proton structure function using the improved BK
equation was performed (See also Ref. \cite{weigert}). In contrast to the BK equation
at leading logarithmic $\alpha_s \ln (1/x)$ approximation, which fails to describe the
HERA data, the inclusion of running coupling effects in the evolution renders the BK equation
compatible with them (See also \cite{vic_joao,alba_marquet,vic_anelise}). In what follows we
consider the BK predictions for ${\cal{N}}(x,\mbox{\boldmath $r$})$ (from now on called rcBK) obtained using the GBW \cite{rcbk} initial
condition.
\begin{figure}
\includegraphics[scale=0.20]{PbD-rcBK2.eps}
\caption{Comparison between the predictions of the distinct models and the E665 experimental data at small $x$. }
\label{fig1}
\end{figure}
\section{Numerical results and discussion}
\label{results}
In what follows we shall consider two different nuclei, Ca and Pb, and use the deuteron (D) as a reference to calculate the experimentally measured ratios $R_{Ca/D} \equiv (2/40) F_2^{Ca}/F^D_2$ and $R_{Pb/D} \equiv (2/208)F^{Pb}_2 / F^D_2$.
We assume that the diffractive slope parameter is $B_D = 6.7$ GeV$^{-2}$ and that the nucleus form factor is given by:
\begin{equation}
F_A(t_\mathrm{min})=\int d^2b\ J_0(b\sqrt{-t_\mathrm{min}})T_A(b),
\label{eq2-1}
\end{equation}
where the thickness function is given in terms of the nuclear density $\rho_A$ as:
$$
T_A(b)=\int_{-\infty}^{+\infty} dz\rho_A(\vec{b},z),
$$
with the normalization fixed by $ \int d^2b\ T_A(\vec{b})=1$.
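As an illustration of the normalization condition $\int d^2b\ T_A(\vec{b})=1$, the following sketch (our own, assuming a uniform hard-sphere density rather than the realistic Woods-Saxon profile used in such analyses) builds the thickness function and checks the normalization numerically:

```python
import math

# T_A(b) = int dz rho_A(b, z) for an assumed uniform sphere of radius R_A;
# the chord length through the sphere at impact parameter b is 2*sqrt(R^2-b^2).
R_A = 6.6  # fm, roughly the Pb radius (assumed value)
rho0 = 1.0 / (4.0 / 3.0 * math.pi * R_A**3)  # normalizes int d^3r rho_A = 1
n = 20001
db = R_A / (n - 1)
norm = 0.0
for k in range(n):
    b = k * db
    T_A = 2.0 * rho0 * math.sqrt(max(R_A**2 - b**2, 0.0))
    w = 0.5 if k in (0, n - 1) else 1.0  # trapezoid weights
    norm += w * 2.0 * math.pi * b * T_A * db  # int d^2b T_A(b)
print(norm)
```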
In Fig. \ref{fig1} we compare the predictions of the Glauber (solid line), Schwimmer (dot-dashed line), Eikonal (dashed line) and double scattering (dot-dot-dashed line) models for the ratios with the E665 experimental data at small $x$ \cite{e665}.
Although joined by lines, our results are computed at the same $\langle x \rangle$ and $\langle Q^2 \rangle$ values as the experimental data. Our results demonstrate that if we compute the nuclear structure function up to two scatterings, which implies that
$\sigma_{\gamma^*A} = \sigma_{\gamma^*A}^{(1)} + \sigma_{\gamma^*A}^{(2)}$, we are not able to describe the experimental data. Furthermore, since the
magnitude of the first correction, $\sigma_{\gamma^*A}^{(2)}$, is very large,
there is no hope of estimating the nuclear structure function by summing just a few terms of the multiple scattering series. Therefore, a full resummation of the multiple scatterings is necessary, which makes the predictions model dependent.
The agreement of the Glauber, Schwimmer and Eikonal models with the current experimental data at small $x$ is quite reasonable, taking into account that no parameters have been fitted to reproduce the data. This implies that the current data are not able to discriminate between the unitarization schemes.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.15]{PbD-q21-rcBK2.eps} & \includegraphics[scale=0.15]{CaD-q21-rcBK2.eps}
\end{tabular}
\caption{Nuclear ratios $R_{Pb/D}$ (left panel) and $R_{Ca/D}$ (right panel) as a function of $x$ at $Q^2 = 1$ GeV$^2$.}
\label{fig2}
\end{figure}
Bearing in mind that a future electron - ion collider is expected to be able to probe the kinematical region of
small $x$ ($x \simeq 10^{-5}$) and $Q^2 \ge 1$ GeV$^2$, we now compute the ratios $R_{Ca/D}$ and $R_{Pb/D}$ as a function of $x$ for two different values of $Q^2$ (= 1 and 10 GeV$^2$). In Fig. \ref{fig2} we present our predictions for $Q^2 = 1$ GeV$^2$.
It is important to emphasize that in electron scattering the range of attainable $x$-values is kinematically restricted to $x > Q^2/s$, where $s$ is the squared center-of-mass energy, which implies that at $Q^2 = 1$ GeV$^2$ the smallest values of $x$ in the perturbative region will be probed. At large $x$ ($\approx 10^{-2}$) the predictions almost coincide. However, at small $x$, the predictions based on the Schwimmer equation or on the eikonal unitarized cross section
give a stronger shadowing than those based on Glauber-like rescatterings. In particular, at $x \approx 10^{-4}$, the difference between Glauber and Schwimmer is almost 10 \% in the ratio $R_{Ca/D}$, increasing to $\approx$ 20 \% in $R_{Pb/D}$. At this $x$ value, the difference between Schwimmer and Eikonal is $\approx$ 5 \% and 12 \% for the ratios $R_{Ca/D}$ and $R_{Pb/D}$, respectively.
At smaller values of $x$, the difference between the three predictions increases, being larger than 20\%.
Consequently, a measurement of $F_2^A$ at $A = Pb$ at small $x$ with $\approx 10 \%$ precision would be a sensitive test to discriminate between the
different models.
In Fig. \ref{fig3} we present our predictions for the ratios $R_{Ca/D}$ and $R_{Pb/D}$ as a function of $x$ at $Q^2 = $ 10 GeV$^2$. The behavior is similar to the one observed in Fig. \ref{fig2}. The main point is that the differences between the predictions are not reduced significantly, which makes discrimination between them possible at this value of $Q^2$ as well.
A final comment is in order. The results shown in Figs. \ref{fig2} and \ref{fig3} demonstrate that there is a large
uncertainty associated with the choice of unitarization scheme used to treat the multiple scatterings and that, in
principle, an experimental analysis of the nuclear ratios can be useful to discriminate between these approaches.
Another uncertainty present in the study of the nuclear effects is related to the transition between the
linear and nonlinear regimes of the QCD dynamics. We do not know precisely in which kinematical region
the predictions obtained using the linear DGLAP evolution cease to be valid. In Fig. \ref{fig4} we present a
comparison of our predictions with those obtained using the EPS09 \cite{eps1} parametrization of the nuclear
parton distribution functions, which is based on a global fit of the current nuclear data using the DGLAP dynamics.
As can be seen, due to the large theoretical uncertainty in the DGLAP prediction in the small-$x$ region,
represented by the shaded band in the figure, it is not possible to draw any firm conclusion about which is the
correct framework to describe this observable in future $eA$ colliders. This same conclusion was already obtained
in \cite{erike_inclusive} in a somewhat different approach. Consequently, the study of other observables,
such as the nuclear diffractive structure function \cite{simone_eA,raju_eA} and nuclear vector meson production
\cite{erike_exclusive,vector_eA}, should also be considered in order to discriminate between
the linear and nonlinear regimes. To summarize: in order to learn more about the
unitarization schemes using the nuclear ratios we must disentangle
the nonlinear and linear regimes of the QCD dynamics. Our estimates show that, due to the large freedom present in the DGLAP analysis, its predictions for the nuclear ratios are similar in magnitude to ours, which implies that a combined analysis of several observables is necessary.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.15]{PbD-q210-rcBK2.eps} & \includegraphics[scale=0.15]{CaD-q210-rcBK2.eps}
\end{tabular}
\caption{Nuclear ratios $R_{Pb/D}$ (left panel) and $R_{Ca/D}$ (right panel) as a function of $x$ at $Q^2 = 10$ GeV$^2$. }
\label{fig3}
\end{figure}
\begin{figure}
\vskip0.5cm
\includegraphics[scale=0.20]{ratioPbDwitheps09.eps}
\caption{Predictions of the different models discussed in the text. The dash-dash-dot line represents the
central value of the prediction obtained with the EPS09 parametrization of the nuclear parton distribution functions.
The shaded band represents the theoretical error coming from the uncertainties in the EPS09 parametrization.}
\label{fig4}
\end{figure}
\section{Conclusion}
\label{conc}
The behaviour of the nuclear wave function at high energies provides fundamental information for the determination of the initial conditions in heavy ion collisions and particle production in collisions involving nuclei.
One of the main uncertainties is associated with the magnitude of the nuclear shadowing, which comes mainly from the
way in which the multiple scattering problem is treated and from the modelling of the projectile - nucleon interaction.
Since a future EIC will probe the shadowing region while keeping sufficiently large $Q^2$, new studies which determine the main sources of uncertainties in the predictions are necessary. In this work we compare three frequently used approaches to estimate the nuclear shadowing in nuclear DIS.
Since in these approaches the nuclear cross section is completely determined once the interaction of the projectile with the nucleon is specified, we considered
a single model (rcBK) as the input to our calculations in order to quantify the theoretical uncertainty that comes from the choice of the unitarization model. In particular, we calculate the nuclear ratio between structure functions considering the Glauber, Schwimmer and Eikonal approaches down to very low $x$, utilizing the rcBK results for both inclusive and diffractive cross sections in $\gamma^* p$ scattering.
Our results demonstrate that the current experimental data at small $x$ are described successfully by the three approaches. However, the difference between their predictions becomes large in the kinematical region which will be probed in the future electron - ion colliders.
\section{Acknowledgments}
This work was partially financed by the Brazilian funding agencies CAPES, CNPq and FAPESP.
\section*{Introduction: unitarity of partial amplitude}
Unitarity written for the partial wave amplitude $f_l(s)$ of elastic scattering (spin degrees of freedom are neglected) has familiar form at high energies, i.e.
\begin{equation}
\mbox{Im}f_l(s)=|f_l(s)|^2+\eta_l(s),
\end{equation}
where the function $\eta_l(s)$ represents the contribution of the intermediate inelastic states to the product $SS^+$ ($SS^+=1$). The respective elastic $S$--matrix element is $S_l(s)=1+2if_l(s)$.
Unitarity in the impact parameter representation ($b=2l/\sqrt{s}$) connects the elastic and inelastic overlap functions introduced by Van Hove \cite{vh} with $h_{tot}(s,b)\equiv \mbox{Im} f(s,b)$ by the following relation
\begin{equation}
h_{tot}(s,b)=h_{el}(s,b)+h_{inel}(s,b)
\end{equation}
The respective cross--sections are determined by the integrals of the functions $h_i$ over $b$:
\begin{equation}
\sigma_i(s)=8\pi\int_0^\infty bdb h_i(s,b)
\end{equation}
where $i=tot,el,inel$.
This note is devoted to the inelastic overlap function, $h_{inel}(s,b)$, namely, its energy behavior
at $b=0$. Based on unitarity, we consider this quantity as a function of the elastic scattering amplitude.
Unitarity allows variation of the scattering amplitude in the interval $0\leq |f|\leq 1$, which covers both the absorptive and reflective scattering modes, corresponding to $|f|\leq 1/2$ and $|f|>1/2$, respectively. Consideration of the inelastic overlap function provides valuable insight into the nature of the scattering. Transition to the reflective scattering mode results in a peripheral behavior of the inelastic overlap function and changes the structure of the hadron interaction region \cite{07}.
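The unitarity relations (2) and (3) can be illustrated numerically. The sketch below (our own toy example, with an assumed Gaussian amplitude profile rather than a fit to data) verifies that $\sigma_{tot}=\sigma_{el}+\sigma_{inel}$ follows from $h_{tot}=h_{el}+h_{inel}$ for a pure imaginary amplitude, where $h_{tot}=f$, $h_{el}=f^2$ and $h_{inel}=f(1-f)$:

```python
import math

# sigma_i = 8 pi int_0^inf b db h_i(b), evaluated by the trapezoid rule.
n, b_max = 100001, 10.0  # Gaussian tail at b = 10 is negligible
db = b_max / (n - 1)

def sigma(h):
    total = 0.0
    for k in range(n):
        b = k * db
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * b * h(b) * db
    return 8.0 * math.pi * total

f = lambda b: 0.7 * math.exp(-b * b / 2.0)  # illustrative profile, f <= 1
sig_tot = sigma(f)                                   # h_tot = f
sig_el = sigma(lambda b: f(b) ** 2)                  # h_el = f^2
sig_inel = sigma(lambda b: f(b) * (1.0 - f(b)))      # h_inel = f(1 - f)
print(sig_tot, sig_el + sig_inel)
```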
\section{Symmetry of inelastic overlap function}
We consider the case of a pure imaginary elastic scattering amplitude, with the replacement $f\to if$. The inelastic overlap function can then be expressed through the scattering amplitude $f(s,b)$ in the following form, due to unitarity:
\begin{equation}\label{inel}
h_{inel}(s,b)= f(s,b)[1-f(s,b)].
\end{equation}
To clarify the symmetry property of inelastic overlap function consider energy variation of the scattering amplitude $f$ under fixed value of the impact parameter. It is enough to consider the case of $b=0$.
The region of variation of the scattering amplitude covers the range $0\leq f \leq 1$, and $h_{inel}\equiv h_{inel}(s,0)$ is invariant under the replacement
\begin{equation}\label{sym}
f\leftrightarrow1-f.
\end{equation}
Thus, Eq. (\ref{inel}) is invariant under the interchange of the two amplitude variation intervals $(0, 1/2]$ and $[1/2, 1)$, i.e.
\begin{equation}\label{symin}
(0, 1/2]\leftrightarrow [1/2, 1)
\end{equation}
and both ranges of amplitude variation correspond to a single range of variation of the inelastic overlap function, $(0,1/4]$. These two intervals are equivalent in the sense that $h_{inel}$ repeats its values.
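This symmetry is elementary to verify numerically (our own illustration): $h_{inel}=f(1-f)$ takes the same value at $f$ and at $1-f$, with the maximum $1/4$ reached at $f=1/2$:

```python
# h_inel = f(1 - f) is symmetric under f <-> 1 - f, so the absorptive
# branch f in (0, 1/2] and the reflective branch f in [1/2, 1) share
# the single range (0, 1/4].
h_inel = lambda f: f * (1.0 - f)
for f_abs in (0.1, 0.25, 0.4, 0.5):
    f_refl = 1.0 - f_abs  # mirrored amplitude on the reflective branch
    print(f_abs, h_inel(f_abs), f_refl, h_inel(f_refl))
```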
Note that the use of $U$--matrix unitarization \cite{jpg22} provides a continuous transition from the absorptive to the reflective scattering mode, covering the whole range of amplitude variation allowed by unitarity.
The energy evolution of the elastic $S$--matrix scattering element $S$ ($S\equiv S(s,0)$), the elastic scattering amplitude $f$ ($f\equiv f(s,0)$) and the inelastic overlap function from some initial energy $s_i$ to a final value $s_f$
across the energy $s_m$ where $h_{inel}$ has its maximal value ($h_{inel}^m= 1/4$):
\begin{equation}\label{path}
s_i\to s_{m}\to s_f
\end{equation}
has been discussed in \cite{jpg22}. It is illustrated by the following relations:
\begin{equation}\label{sii}
S^i>0 \to S^f<0, \,\mbox{i.e.}\,
f^i<1/2\to f^f>1/2,
\end{equation} and
\begin{equation}\label{si}
h_{inel}^i<1/4
\to h_{inel}^f<1/4.
\end{equation}
Thus, the inelastic overlap function $h_{inel}$ can perform a loop variation with increasing energy in accordance with Eq. (\ref{path}). It varies as
\begin{equation}\label{hhh}
h_{inel}^i\to 1/4\to h_{inel}^f
\end{equation}
and $h_{inel}^f=h_{inel}^i$
provided that
$ f^i+f^f=1.$
Eq. (\ref{si}) signals the appearance of the reflective scattering mode, and this takes place regardless of the form of the scattering amplitude.
Indeed, such behavior of $h_{inel}$
implies an onset of decrease (i.e. $\partial h_{inel}(s,b)/\partial s$ at $b=0$ becomes negative) of the inelastic overlap function and is
associated with the appearance of reflection in the LHC energy range\footnote{The $U$-matrix form of unitarization and its relation to the symmetry property of $h_{inel}$ was used for a continuous extrapolation to higher energies in \cite{jpg22}. It incorporates both scattering modes, providing a ground for their simultaneous presence.}. The impact parameter profile of $h_{inel}(s,b)$ becomes peripheral when $s>s_m$, i.e. $\partial h_{inel}(s,b)/\partial b$ at $b=0$ is strictly positive. Thus, this invariance of the inelastic overlap function
$h_{inel}$
is in favor of coexistence of the absorptive and reflective scattering modes in elastic scattering (see for discussion of the latter in \cite{07,jpg22}) and corresponds to the transition of the elastic scattering matrix element:
\begin{equation} \label{refl}
S\leftrightarrow -S.
\end{equation}
It is not clear whether this property has a meaning of a separate physical concept.
However, it is consistent with saturation of the unitarity limit for the amplitude $f$, implied by the principle of maximal strength of strong interactions proposed long ago by Chew and Frautschi \cite{chew, chew1}. They noted that a characteristic of strong interactions is a capacity to ``saturate the unitarity condition at high energies''.
The factor $-1$ in Eq. (\ref{refl}) is interpreted as the result of a reflection, by analogy with optics. It can also be considered as the result of an analogue of the Berry phase \cite{07} or of color--conducting matter formation in the interaction region \cite{jpg}. Color--conducting matter formation can, in turn, be used to explain the correspondence of the above symmetry property to quark--hadron duality (confinement) \cite{brd}.
For a discussion of the correlation between $S$--matrix unitarity and confinement see \cite{np}. It has also been suggested to associate this mode with effective resonance formation resulting from ``an exceptional intermediate state that unites correlated partons'' \cite{anis,ani,nek}.
\section{Real part of the scattering amplitude}
It should be recollected that the above symmetry property for the inelastic overlap function takes place for the pure imaginary scattering amplitude.
This section discusses the changes in the symmetry properties due to the real part of the scattering amplitude. Relaxing the requirement of a pure imaginary elastic scattering amplitude and taking $b=0$, the unitarity condition in the impact parameter representation can be rewritten in the form:
\begin{equation}\label{inelt}
\mbox{Im}f[1-\mbox{Im}f]=h_{inel}+[\mbox{Re}f]^2.
\end{equation}
It is evident from Eq. (\ref{inelt}) that
\begin{equation}\label{ineq}
[\mbox{Re}f]^2 \leq 1/4-h_{inel}.
\end{equation}
Thus, $|\mbox{Re}f| \to 0$ in both cases: when $h_{inel}\to 1/4$ (full absorption, $S=0$) and/or $\mbox{Im}f\to 1$ (full reflection, $S=-1$). Without neglecting the small real part of the scattering amplitude, one should consider the invariance of the function $h_{inel}+[\mbox{Re}f]^2$ under the replacement
\begin{equation}\label{symr}
\mbox{Im}f\leftrightarrow1-\mbox{Im}f.
\end{equation}
Taking into account a small real part of the scattering amplitude makes the picture less transparent but does not change it qualitatively\footnote{Numerical calculations based on the model--independent analysis of available experimental data \cite{tamas} give $h_{inel}=1/4-\alpha$ with a small positive function $\alpha$, $\alpha\ll 1/4$, at the LHC energies.}.
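A simple numerical scan (ours) makes the bound of Eq. (\ref{ineq}) explicit: since $\mbox{Im}f\,(1-\mbox{Im}f)\leq 1/4$ on $[0,1]$, Eq. (\ref{inelt}) with $h_{inel}\geq 0$ forces $[\mbox{Re}f]^2\leq 1/4-h_{inel}$, so $\mbox{Re}f$ must vanish at full absorption and at full reflection:

```python
# Scan Im f over [0, 1] and find the maximum of Im f (1 - Im f),
# i.e. the largest value h_inel + [Re f]^2 can take; it is 1/4 at Im f = 1/2.
n = 1001
best = 0.0
for k in range(n):
    im_f = k / (n - 1)
    best = max(best, im_f * (1.0 - im_f))
print(best)
```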
\section*{Conclusion}
The symmetry property of the inelastic overlap function and the simultaneous presence of the two scattering modes are consequences of unitarity. This favors coexistence of the absorptive and reflective scattering modes at small impact parameter values\footnote{When addressing the asymptotics with the use of the respective relations, for example, the Gribov--Froissart projection formula \cite{grb,frs}, one should not expect the symmetry to be realized, since the scattering is reduced to the purely absorptive mode at large impact parameter values.}. In contrast, the considered symmetry property disfavors an {\it ad hoc} exclusion of one of the scattering modes (i.e. the reflective mode) when approaching the asymptotic limit $s\to\infty$. Predominance of a particular mode is to be correlated with the energy and impact parameter ranges under consideration.
\section{Introduction}
\label{sec:intro}
Determination of convergence, divergence or oscillation of infinite series has a very rich
tradition in mathematics, and a large number of tests exist for the purpose. Unfortunately,
there does not seem to exist any universal test that provides conclusive answers to all infinite series; see,
for example, \ctn{Ilyin82}, \ctn{Knopp90}, \ctn{Bou12}. Attempts to resolve the issue as much as possible
using hierarchies of tests, with the successive tests in the hierarchy providing conclusive answers
to successively larger ranges of infinite series, are provided by \ctn{Knopp90}, \ctn{Bromwich05},
\ctn{Bou11} and \ctn{Lif11}. These tests are based on the Kummer approach for positive series
and the chain of the Ermakov tests for positive monotone series.
The hierarchy of tests provided in \ctn{Bou12} are based on \ctn{Bromwich05} and are related to the well-known Cauchy's test
(see, for example, \ctn{Fich70}, \ctn{Rudin76}, \ctn{Spivak94}).
Below we briefly discuss the approach of \ctn{Bou12}, who consider positive series. It is important to
remark at the outset that positivity of the series is not a requirement for the approaches that we propose and develop
in this article.
\subsection{Hierarchical tests of convergence}
\label{subsec:hierarchy}
The tests of \ctn{Bou12} are based on the following theorem,
which is a refinement of a result of \ctn{Bromwich05}.
\begin{theorem}[\ctn{Bou12}]
Let $\sum_{i=1}^{\infty}F'(i)$ be a divergent series where $F(x)>0$, $F'(x)>0$ and $F'(x)$ is decreasing.
If $\sum_{i=1}^{\infty}X_i$ is a positive series, then denoting
$\frac{\log\left\{\frac{F'(i)}{X_i}\right\}}{\log F(i)}=W_i$, the following hold:
\begin{align}
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\inf}~W_i>1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{converges};\notag\\
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\sup}~W_i<1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{diverges}.\notag
\end{align}
\end{theorem}
Letting $F(z)=z$ in the above theorem, \ctn{Bou12} obtain their first test, which we provide below.
\begin{theorem}[Test $T_1$ of \ctn{Bou12}]
Consider a positive series $\sum_{i=1}^{\infty}X_i$ and let $T_{1,i}=\frac{i}{\log i}\left(1-X^{\frac{1}{i}}_i\right)$.
Then
\begin{align}
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\inf}~T_{1,i}>1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{converges};\notag\\
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\sup}~T_{1,i}<1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{diverges}.\notag
\end{align}
\end{theorem}
This result is the same as that of \ctn{Bromwich05}, but a proof was not supplied in that work.
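As a worked example of Test $T_1$ (our own, not from \ctn{Bou12}): for $X_i=i^{-p}$ one has $X_i^{1/i}=\exp(-p\log i/i)$, so $T_{1,i}\to p$ as $i\to\infty$, and the test correctly classifies $p=2$ (liminf $>1$) as convergent and $p=1/2$ (limsup $<1$) as divergent. A short numerical check:

```python
import math

# T_{1,i} = (i / log i) * (1 - X_i^{1/i}) for X_i = i^{-p}; the limit is p.
def T1(p, i):
    X_root = math.exp(-p * math.log(i) / i)  # X_i^{1/i}
    return (i / math.log(i)) * (1.0 - X_root)

t_conv = T1(2.0, 10**6)  # sum 1/i^2 converges, limit of T1 is 2
t_div = T1(0.5, 10**6)   # sum 1/sqrt(i) diverges, limit of T1 is 1/2
print(t_conv, t_div)
```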
Now choosing $F(z)=\log z$, \ctn{Bou12} form their second test of the hierarchy; we provide the result below.
Again, the result has been formulated by \ctn{Bromwich05}, but a proof was not given.
\begin{theorem}[Test $T_2$ of \ctn{Bou12}]
Consider a positive series $\sum_{i=1}^{\infty}X_i$ and let $T_{2,i}=\frac{\log i}{\log\log i}\left(T_{1,i}-1\right)$.
Then
\begin{align}
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\inf}~T_{2,i}>1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{converges};\notag\\
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\sup}~T_{2,i}<1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{diverges}.\notag
\end{align}
\end{theorem}
Setting $F(z)=\log\log z$, the following result has been proved by \ctn{Bou12}:
\begin{theorem}[Test $T_3$ of \ctn{Bou12}]
Consider a positive series $\sum_{i=1}^{\infty}X_i$ and let $T_{3,i}=\frac{\log i}{\log\log i}\left(T_{2,i}-1\right)$.
Then
\begin{align}
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\inf}~T_{3,i}>1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{converges};\notag\\
&\mbox{If}~\underset{i\rightarrow\infty}{\lim\sup}~T_{3,i}<1,~\mbox{then}~\sum_{i=1}^{\infty}X_i~\mbox{diverges}.\notag
\end{align}
\end{theorem}
Successively selecting $F(z)=\log\log\log z$, $F(z)=\log\log\log\log z$, etc. successively more refined tests
$T_4$, $T_5$, etc. can be constructed, with each test having wider scope compared to the preceding test with regard
to obtaining conclusive decision on convergence or divergence of the underlying series.
However, if, say, at stage $k$, $\underset{i\rightarrow\infty}{\lim\inf}~T_{k,i}<1<
\underset{i\rightarrow\infty}{\lim\sup}~T_{k,i}$ so that $T_k$ is inconclusive, then all the subsequent tests
will also fail to provide any conclusion.
Thus, in spite of the above developments, a conclusion regarding the series can still be elusive. For instance, an example
considered in \ctn{Bou12} is the following series:
\begin{equation}
S_1=\sum_{i=3}^{\infty}\left(1-\frac{\log i}{i}-\frac{\log\log i}{i}
\left\{\cos^2\left(\frac{1}{i}\right)\right\}\left(a+(-1)^ib\right)\right)^i,
\label{eq:inconclusive1}
\end{equation}
where $a\geq 0$ and $b\geq 0$. For $a=b=1$, $\underset{i\rightarrow\infty}{\lim\inf}~T_{2,i}=0<1<2=
\underset{i\rightarrow\infty}{\lim\sup}~T_{2,i}$. Hence, the hierarchy of tests
$\left\{T_k;k\geq 1\right\}$ fails to provide definitive answer to the question of convergence of the above series.
In fact, we can generalize the series (\ref{eq:inconclusive1}) such that the hierarchy of tests fails for the general class
of series. Indeed, consider
\begin{equation}
S_2=\sum_{i=3}^{\infty}\left(1-\frac{\log i}{i}-\frac{\log\log i}{i}
f(i)\left(a+(-1)^ib\right)\right)^i,
\label{eq:inconclusive2}
\end{equation}
where $0\leq f(i)\leq 1$ for all $i=1,2,3,\ldots$, and $f(i)\rightarrow 1$ as $i\rightarrow\infty$.
Such a function can be easily constructed as follows. Let $g(i)$ be positive and monotonically increase
to $c$, where $c>0$. Then let $f(i)=g(i)/c$, for $i=1,2,3,\ldots$. A simple example of such a function
$g$ is $g(i)=c-\frac{1}{i}$; $g(i)=\cos^2\left(\frac{1}{i}\right)$ is another example, showing the generality
of (\ref{eq:inconclusive2}) compared to (\ref{eq:inconclusive1}).
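The stalling of the hierarchy on such series can be made concrete numerically (our own computation, taking $a=b=1$ and $f(i)=\cos^2(1/i)$, i.e. the series $S_1$): the factor $(1+(-1)^i)$ switches between $0$ and $2$, so $T_{2,i}$ oscillates between values near $0$ (odd $i$) and near $2$ (even $i$), leaving $\liminf T_{2,i}<1<\limsup T_{2,i}$:

```python
import math

# X_i = (1 - log i/i - (log log i/i) cos^2(1/i) (1 + (-1)^i))^i, so that
# X_i^{1/i} is the base; build T_1 and then T_2 = (log i/log log i)(T_1 - 1).
def T2(i):
    base = (1.0 - math.log(i) / i
            - (math.log(math.log(i)) / i)
            * math.cos(1.0 / i) ** 2 * (1 + (-1) ** i))
    T1 = (i / math.log(i)) * (1.0 - base)
    return (math.log(i) / math.log(math.log(i))) * (T1 - 1.0)

t_even, t_odd = T2(10**6), T2(10**6 + 1)
print(t_even, t_odd)  # even i near 2, odd i near 0
```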
\subsection{Riemann Hypothesis and series convergence}
\label{subsec:RH_series}
It is well-known that the famous Riemann Hypothesis is equivalent to convergence of an infinite
series on a certain interval. A brief introduction to the problem, along with
the necessary background, is provided in Section \ref{sec:RH}. Studying the relevant infinite
series, if at all possible, is then one of the most challenging problems in mathematics. The existing
mathematical literature, however, does not seem to provide any direction in this regard.
Hence, innovative theories and methods for analyzing infinite series should be particularly welcome.
In this paper, we attempt to provide an alternative method of characterization of series convergence and divergence
using Bayesian theory, which we also subsequently extend to infinite series with multiple or even infinitely
many limit points. For the Bayesian purpose we must formulate our theory stochastically, that is, in terms
of random infinite series, noting that the theory regarding deterministic infinite series is a special case
of our Bayesian formulation.
\section{The key concept}
\label{sec:key_concept}
Let us consider the random infinite series
\begin{equation}
S_{1,\infty}=\sum_{i=1}^{\infty}X_i.
\label{eq:S}
\end{equation}
It is required to determine whether the series of the above form converges, diverges or oscillates.
Observe that convergence or divergence of the sum $S_{1,\infty}$ may be thought of as a mapping
$f(S_{1,\infty})=p$, where $f$ is some appropriate transformation and $p$ is either $0$ or $1$, where $0$
stands for divergence and $1$ is associated with convergence. Since we assume that it is not known if
the underlying series $S_{1,\infty}$ converges or diverges, the value of $p$ is unknown, signifying that we must
acknowledge uncertainty about $p$. Conceptually, given the value of a partial sum of the form
$\sum_{i=m}^nX_i$, for large $m$ and $n$ ($m\leq n$), one may have a subjective expectation whether or not the series
$S_{1,\infty}$ converges, which may be quantified, under the notion of randomness of $X_i$, as
$$E\left(\mathbb I_{\left\{\left|\sum_{i=m}^nX_i\right|\leq c_{m,n}\right\}}\right)
=P\left(\left|\sum_{i=m}^nX_i\right|\leq c_{m,n}\right)=p_{m,n},$$
where, for any set $A$, $\mathbb I_A$ denotes indicator of $A$, and
$c_{m,n}$ are non-negative quantities satisfying $c_{m,n}\downarrow 0$ as $m,n\rightarrow\infty$.
Thus, the expectation depends on how large $m$ and $n$ are.
Note that, as $m,n\rightarrow\infty$,
$$\mathbb I_{\left\{\left|\sum_{i=m}^nX_i\right|\leq c_{m,n}\right\}}
\rightarrow \mathbb I_{\left\{\underset{m,n\rightarrow\infty}{\lim}~\left|\sum_{i=m}^nX_i\right|=0\right\}}$$
almost surely, so that uniform integrability leads one to expect
$$f(S_{1,\infty})=\underset{m,n\rightarrow\infty}{\lim}~p_{m,n}
=\underset{m,n\rightarrow\infty}{\lim}~P\left(\left|\sum_{i=m}^nX_i\right|\leq c_{m,n}\right)
=P\left(\underset{m,n\rightarrow\infty}{\lim}~\left|\sum_{i=m}^nX_i\right|=0\right)
=p,$$
where $p$ is the probability of convergence of the series $S_{1,\infty}$.
To convert this key concept to a practically useful theory, one requires the Bayesian paradigm, where,
for each pair $(m,n)$, belief regarding $p_{m,n}$ needs to be quantified using prior distributions.
The terms $X_i$ need to be viewed as realizations of some
random process so that the partial sums $\sum_{i=m}^nX_i$ provide coherent probabilistic information
on $p$ when quantified by the posterior distribution of $p_{m,n}$. As $m$ and $n$ are (deterministically) updated,
the posterior of $p_{m,n}$ must also be coherently updated, utilizing the new partial sum information.
In particular, as $m,n\rightarrow\infty$, it is desirable that the posterior of $p_{m,n}$ converges to either
$\delta_{\left\{1\right\}}$ or $\delta_{\left\{0\right\}}$ in some appropriate sense,
accordingly as $S_{1,\infty}$ converges or diverges.
Here, for any $x$, $\delta_{\left\{x\right\}}$ denotes point mass at $x$.
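The quantity $p_{m,n}$ can be approximated by simple Monte Carlo for any concrete random series. The following sketch (our illustration, taking $X_i=Z_i/i^2$ with $Z_i$ independent uniform on $(0,1)$, an almost surely convergent choice, and an arbitrary fixed bound in place of $c_{m,n}$) shows the estimate approaching one as $m$ grows:

```python
import random

def estimate_p(m, n, c, reps=2000, seed=0):
    """Monte Carlo estimate of p_{m,n} = P(|sum_{i=m}^n X_i| <= c) for the
    a.s. convergent random series with X_i = Z_i / i^2, Z_i ~ Uniform(0,1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        s = sum(rng.random() / i ** 2 for i in range(m, n + 1))
        if abs(s) <= c:
            hits += 1
    return hits / reps

# early partial sums are typically large, late ones are uniformly small
p_small_m = estimate_p(m=2, n=50, c=0.05)
p_large_m = estimate_p(m=100, n=500, c=0.05)
```

For $m=100$ the partial sum is bounded above by $\sum_{i=100}^{500}i^{-2}<0.05$ deterministically, so the estimate equals one; for $m=2$ it is essentially zero.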
In Section \ref{sec:recursive1} we devise a recursive Bayesian methodology that achieves the goal discussed above.
It is important to remark that no restrictive assumption is necessary for the development of our ideas,
not even independence of $X_i$.
With this methodology, we then characterize convergence and divergence of infinite series
in Section \ref{sec:characterization}, illustrating in Section \ref{sec:illustrations} our theory and methods with seven
examples. In Section \ref{sec:RH} we apply our ideas to Riemann Hypothesis, obtaining results that
are not in complete favour of the conjecture.
We also extend our theory and methods to infinite series with multiple or infinite number
of limit points; details are provided in Section S-3 of the supplement.
Illustrations of our Bayesian multiple limit point theory are provided in Sections S-4 and S-5
of the supplement,
the latter section detailing the application to Riemann
Hypothesis in order to vindicate our results obtained in Section \ref{sec:RH}. Finally, we make
concluding remarks in Section \ref{sec:conclusion}.
\section{A recursive Bayesian procedure for studying infinite series}
\label{sec:recursive1}
Since we view $X_i$ as realizations from some random process, we first formalize the notion
in terms of the relevant probability space.
Let $(\Omega,\mathcal A,\mu)$ be a probability space, where $\Omega$ is the sample space,
$\mathcal A$ is the Borel $\sigma$-field on $\Omega$, and $\mu$ is some probability measure.
Let, for $i=1,2,3,\ldots$, $X_i:\Omega\mapsto\mathbb R$ be real valued random variables
measurable with respect to the Borel $\sigma$-field $\mathcal B$ on $\mathbb R$.
As in \ctn{Schervish95}, we can then define a $\sigma$-field of subsets of $\mathbb R^{\infty}$ with
respect to which $X=(X_1,X_2,\ldots)$ is measurable. Indeed, let us define $\mathbb B^{\infty}$ to be
the smallest $\sigma$-field containing sets of the form
\begin{align}
B&=\left\{X:X_{i_1}\leq r_1,X_{i_2}\leq r_2,\ldots,X_{i_p}\leq r_p,~\mbox{for some}~p\geq 1,\right.\notag\\
&\quad\quad\left.~\mbox{some integers}~
i_1,i_2,\ldots,i_p,~\mbox{and some real numbers}~r_1,r_2,\ldots,r_p\right\}.\notag
\end{align}
Since $B$ is an intersection of a finite number of sets
of the form $\left\{X:X_{i_j}\leq r_j\right\}$; $j=1,\ldots,p$, each of which has its preimage in $\mathcal A$ (since
the $X_{i_j}$ are measurable),
it follows that $X^{-1}(B)\in\mathcal A$, so that $X$ is measurable with respect to
$(\mathbb R^{\infty},\mathbb B^{\infty},P)$, where $P$ is the probability measure induced by $\mu$.
Alternatively, note that it is possible to represent any stochastic process $\{X_i:i\in \mathfrak I\}$, for fixed
$i$ as a random variable $\omega\mapsto X_i(\omega)$, where $\omega\in\Omega$;
$\Omega$ being the set of all functions from $\mathfrak I$ into $\mathbb R$.
Also, fixing $\omega\in\Omega$, the function $i\mapsto X_i(\omega);~i\in \mathfrak I$,
represents a path of $X_i;~i\in\mathfrak I$. Indeed, we can identify $\omega$ with the function
$i\mapsto X_i(\omega)$ from $\mathfrak I$ to $\mathbb R$; see, for example, \ctn{Oksendal00}, for
a lucid discussion.
This latter identification will be convenient for our purpose, and we adopt this in this article.
Note that the $\sigma$-algebra $\mathcal F$ induced by $X$
is generated by sets of the form
\[
\left\{\omega:\omega(i_1)\in B_1,\omega(i_2)\in B_2,\ldots,\omega(i_k)\in B_k\right\},
\]
where $B_j\subset\mathbb R;~j=1,\ldots,k$, are Borel sets in $\mathbb R$.
\subsection{Development of the stage-wise likelihoods}
\label{subsec:Bayesian_method}
For $j=1,2,3,\ldots$, let
\begin{equation}
S_{j,n_j}=\sum_{i=\sum_{k=0}^{j-1}n_k+1}^{\sum_{k=0}^jn_k}X_i,
\label{eq:S_j_n}
\end{equation}
where $n_0=0$ and $n_j\geq 1$ for all $j\geq 1$.
Also let $\{c_j\}_{j=1}^{\infty}$ be a non-negative decreasing sequence and
\begin{equation}
Y_{j,n_j}=\mathbb I_{\left\{\left|S_{j,n_j}\right|\leq c_j\right\}}.
\label{eq:Y_j_n}
\end{equation}
Let, for $j\geq 1$,
\begin{equation}
P\left(Y_{j,n_j}=1\right)=p_{j,n_j}.
\label{eq:p_j_n}
\end{equation}
Hence, the likelihood of $p_{j,n_j}$, given $y_{j,n_j}$, is given by
\begin{equation}
L\left(p_{j,n_j}\right)=p^{y_{j,n_j}}_{j,n_j}\left(1-p_{j,n_j}\right)^{1-y_{j,n_j}}.
\label{eq:likelihood}
\end{equation}
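The bookkeeping behind (\ref{eq:S_j_n})--(\ref{eq:p_j_n}) is elementary. A sketch with $n_j=n$ for all $j$, an illustrative convergent series and an illustrative bound $c_j=1/j$ (both our choices, purely for demonstration) is as follows:

```python
def block_data(x_term, n_blocks, n, c):
    """Compute S_{j,n} of (eq:S_j_n) and Y_{j,n} of (eq:Y_j_n), with n_j = n
    for all j.  x_term(i) returns the i-th term X_i (1-indexed) and c(j)
    returns the non-negative decreasing bound c_j."""
    S, Y = [], []
    for j in range(1, n_blocks + 1):
        lo, hi = (j - 1) * n + 1, j * n            # block j covers indices lo..hi
        s_j = sum(x_term(i) for i in range(lo, hi + 1))
        S.append(s_j)
        Y.append(1 if abs(s_j) <= c(j) else 0)
    return S, Y

# illustration: the convergent series sum 1/i^2, with bound c_j = 1/j
S, Y = block_data(lambda i: 1.0 / i ** 2, n_blocks=20, n=100, c=lambda j: 1.0 / j)
```

Here $Y_{1,n}=0$ but $Y_{j,n}=1$ for all $j\geq 2$: for a convergent series the indicators equal one from some stage onwards.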
It is important to relate $p_{j,n_j}$ to convergence or divergence of the underlying series.
Note that $p_{j,n_j}$ is the probability that $|S_{j,n_j}|$ falls below $c_j$. Thus,
$p_{j,n_j}$ can be interpreted as the probability that the
series $S_{1,\infty}$ is convergent when the data observed is $S_{j,n_j}$.
If $S_{1,\infty}$ is convergent, then it is to be expected {\it a posteriori}, that
\begin{equation}
p_{j,n_j}\rightarrow 1\quad\mbox{as}~j\rightarrow\infty.
\label{eq:convergent_p}
\end{equation}
Note that the above is expected to hold even for $n_j=n$ for all $j\geq 1$, and for all $n\geq 1$. This is related
to Cauchy's criterion of convergence of partial sums: for every $\epsilon>0$ there exists a positive
integer $N$ such that for all $n\geq m\geq N$, $|\sum_{i=m}^nX_i|<\epsilon$.
Indeed, as we will formally show, condition (\ref{eq:convergent_p})
is both necessary and sufficient for convergence of the series.
On the other hand, if the series is divergent, then there exists $j_0\geq 1$
such that for every $j>j_0$ there exists $n_j\geq 1$ satisfying $|S_{j,n_j}|>c_j$. Here we expect,
{\it a posteriori}, that
\begin{equation}
p_{j,n_j}\rightarrow 0\quad\mbox{as}~j\rightarrow\infty.
\label{eq:divergent_p}
\end{equation}
Again, we will prove formally that the above condition is both necessary and sufficient for divergence.
In this work we call the series $S_{1,\infty}$ oscillating if the sequence
$\left\{S_{1,n};~n=1,2,\ldots\right\}$ has more than one limit point.
Thus, these are non-convergent series, and so, the probability of convergence of these series
must tend to zero in our Bayesian framework, which is in fact ensured by our theoretical developments.
But it is also important to be able to categorize and learn about the limit points.
A general theory, which encompasses finitely as well as infinitely many limit points,
with perhaps unequal frequencies of occurrence, is developed in Section S-3 of the supplement.
\begin{comment}
\subsection{A brief discussion on the prior structure for the series convergence problem}
\label{subsec:prior_structure}
It is important to note that not all priors on $\left\{p_{j,n_j}\right\}_{j=1}^{\infty}$ supported on $[0,1]$ are feasible.
This is because (\ref{eq:p_j_n}) implies that the marginal probability of the event $Y_{j,n_j}=1$, is
$P\left(Y_{j,n_j}=1\right)=E(p_{j,n_j})$, and since $\mathbb I_{\left\{|S_{j,n_j}|\leq c_{j,n_j}\right\}}
\rightarrow \mathbb I_{\left\{\underset{j\rightarrow\infty}{\lim}~|S_{j,n_j}|=0\right\}}$ almost surely, it follows
from uniform integrability that $\underset{j\rightarrow\infty}{\lim}~P\left(Y_{j,n_j}=1\right)
=\underset{j\rightarrow\infty}{\lim}~E(p_{j,n_j})=P\left(\underset{j\rightarrow\infty}{\lim}~|S_{j,n_j}|=0\right)=1$
for all sequences $\{n_j\}_{j=1}^{\infty}$ if and only if $S_{1,\infty}<\infty$ almost surely.
On the other hand, there exists a sequence $\{n_j\}_{j=1}^{\infty}$ such that $E(p_{j,n_j})\rightarrow 0$
as $j\rightarrow\infty$ if and only if $S_{1,\infty}=\infty$ almost surely.
Hence, it is not possible, for example, to choose a sequence of priors $\left\{Beta(\alpha_j,\beta_j)\right\}_{j=1}^{\infty}$
with $\alpha_j=\beta_j$ for $j\geq j_0$, for some $j_0\geq 1$.
Indeed, since in practice it is not known if $S_{1,\infty}$ converges or diverges, one must consider a single
sequence of prior structure for $\left\{p_{j,n_j}\right\}_{j=1}^{\infty}$
(possibly depending on the sequence $\{n_j\}_{j=1}^{\infty}$). Since the prior sequence must satisfy either
$E(p_{j,n_j})\rightarrow 1$ for all $\{n_j\}_{j=1}^{\infty}$ or $E(p_{j,n_j})\rightarrow 0$ for some $\{n_j\}_{j=1}^{\infty}$,
accordingly as $S_{1,\infty}$ converges or diverges, the prior sequence must adaptively adjust itself,
depending upon the series at hand. In other words, the prior of $p_{j,n_j}$ must depend upon the
series information available till stage $j-1$. In this article we propose such a dynamic sequence of priors that
adaptively updates itself.
\end{comment}
In what follows we shall first construct a recursive Bayesian methodology that formally characterizes
convergence and divergence in terms of formal posterior convergence related to (\ref{eq:convergent_p})
and (\ref{eq:divergent_p}).
\subsection{Development of recursive Bayesian posteriors}
\label{subsec:recursive_posteriors}
We assume that $\left\{y_{j,n_j};j=1,2,\ldots\right\}$ is observed successively at stages indexed by $j$.
That is, we first observe $y_{1,n_1}$, and based on our prior belief regarding the first stage probability,
$p_{1,n_1}$, compute the posterior distribution of $p_{1,n_1}$ given $y_{1,n_1}$, which we denote by
$\pi(p_{1,n_1}|y_{1,n_1})$.
Based on this posterior we construct a prior for the second stage, and compute the posterior
$\pi(p_{2,n_2}|y_{1,n_1},y_{2,n_2})$. We continue this procedure for as many stages as we desire.
Details follow.
Consider the sequences $\left\{\alpha_j\right\}_{j=1}^{\infty}$ and $\left\{\beta_j\right\}_{j=1}^{\infty}$,
where $\alpha_j=\beta_j=1/j^2$ for $j=1,2,\ldots$.
At the first stage of our recursive Bayesian algorithm, that is, when $j=1$,
let us assume that the prior is given by
\begin{equation}
\pi(p_{1,n_1})\equiv Beta(\alpha_1,\beta_1),
\label{eq:prior_stage_1}
\end{equation}
where, for $a>0$ and $b>0$, $Beta(a,b)$ denotes the Beta distribution with mean $a/(a+b)$
and variance $(ab)/\left\{(a+b)^2(a+b+1)\right\}$.
Combining this prior with the
likelihood (\ref{eq:likelihood}) (with $j=1$), we obtain the following posterior of $p_{1,n_1}$ given $y_{1,n_1}$:
\begin{equation}
\pi(p_{1,n_1}|y_{1,n_1})\equiv Beta\left(\alpha_1+y_{1,n_1},\beta_1+1-y_{1,n_1}\right).
\label{eq:posterior_stage_1}
\end{equation}
At the second stage (that is, for $j=2$), for the prior of $p_{2,n_2}$ we consider the posterior
of $p_{1,n_1}$ given $y_{1,n_1}$ associated with the $Beta(\alpha_1+\alpha_2,\beta_1+\beta_2)$ prior.
That is, our prior on $p_{2,n_2}$ is given by:
\begin{equation}
\pi(p_{2,n_2})\equiv Beta\left(\alpha_1+\alpha_2+y_{1,n_1},\beta_1+\beta_2+1-y_{1,n_1}\right).
\label{eq:prior_stage_2}
\end{equation}
The reason for such a prior choice is that the uncertainty regarding convergence of the series
is reduced once we obtain the posterior at the first stage, so that the uncertainty associated with
the second-stage prior is expected to be smaller than that of the first-stage posterior. With our choice, it
is easy to see that the prior variance at the second stage, given by
$$\left\{(\alpha_1+\alpha_2+y_{1,n_1})(\beta_1+\beta_2+1-y_{1,n_1})\right\}/\left\{(\alpha_1+\alpha_2+\beta_1+\beta_2+1)^2
(\alpha_1+\alpha_2+\beta_1+\beta_2+2)\right\},$$
is smaller than the first stage posterior variance, given by
$$\left\{(\alpha_1+y_{1,n_1})(\beta_1+1-y_{1,n_1})\right\}/\left\{(\alpha_1+\beta_1+1)^2
(\alpha_1+\beta_1+2)\right\}.$$
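This variance comparison is easily verified numerically; a sketch with $\alpha_j=\beta_j=1/j^2$, checking both possible values of $y_{1,n_1}$:

```python
def beta_var(a, b):
    """Variance of the Beta(a, b) distribution."""
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

a1 = b1 = 1.0     # alpha_1 = beta_1 = 1/1^2
a2 = b2 = 0.25    # alpha_2 = beta_2 = 1/2^2

comparisons = {}
for y1 in (0, 1):
    post1 = beta_var(a1 + y1, b1 + 1 - y1)               # first-stage posterior variance
    prior2 = beta_var(a1 + a2 + y1, b1 + b2 + 1 - y1)    # second-stage prior variance
    comparisons[y1] = (post1, prior2)
```

For either value of $y_{1,n_1}$ the second-stage prior variance ($\approx 0.0510$) is indeed smaller than the first-stage posterior variance ($\approx 0.0556$).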
The posterior of $p_{2,n_2}$ given $y_{2,n_2}$ is then obtained by combining the second stage prior
(\ref{eq:prior_stage_2}) with (\ref{eq:likelihood}) (with $j=2$). The form of the posterior
at the second stage is thus given by
\begin{equation}
\pi(p_{2,n_2}|y_{2,n_2})\equiv Beta\left(\alpha_1+\alpha_2+y_{1,n_1}+y_{2,n_2},\beta_1+\beta_2+2-y_{1,n_1}-y_{2,n_2}\right).
\label{eq:posterior_stage_2}
\end{equation}
Continuing this way, at the $k$-th stage, where $k>1$, we obtain the following posterior of $p_{k,n_k}$:
\begin{equation}
\pi(p_{k,n_k}|y_{k,n_k})\equiv Beta\left(\sum_{j=1}^k\alpha_j+\sum_{j=1}^ky_{j,n_j},
k+\sum_{j=1}^k\beta_j-\sum_{j=1}^ky_{j,n_j}\right).
\label{eq:posterior_stage_k}
\end{equation}
It follows from (\ref{eq:posterior_stage_k}) that
\begin{align}
E\left(p_{k,n_k}|y_{k,n_k}\right)&=\frac{\sum_{j=1}^k\alpha_j
+\sum_{j=1}^ky_{j,n_j}}{k+\sum_{j=1}^k\alpha_j+\sum_{j=1}^k\beta_j};
\label{eq:postmean_p_k}\\
Var\left(p_{k,n_k}|y_{k,n_k}\right)&=
\frac{(\sum_{j=1}^k\alpha_j+\sum_{j=1}^ky_{j,n_j})(k+\sum_{j=1}^k\beta_j-\sum_{j=1}^ky_{j,n_j})}
{(k+\sum_{j=1}^k\alpha_j+\sum_{j=1}^k\beta_j)^2(1+k+\sum_{j=1}^k\alpha_j+\sum_{j=1}^k\beta_j)}.
\label{eq:postvar_p_k}
\end{align}
Since $\sum_{j=1}^k\alpha_j=\sum_{j=1}^k\beta_j=\sum_{j=1}^k\frac{1}{j^2}$, (\ref{eq:postmean_p_k})
and (\ref{eq:postvar_p_k}) admit the following simplifications:
\begin{align}
E\left(p_{k,n_k}|y_{k,n_k}\right)&=\frac{\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^ky_{j,n_j}}
{k+2\sum_{j=1}^k\frac{1}{j^2}};
\label{eq:postmean_p_k_2}\\
Var\left(p_{k,n_k}|y_{k,n_k}\right)&=
\frac{(\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^ky_{j,n_j})(k+\sum_{j=1}^k\frac{1}{j^2}-\sum_{j=1}^ky_{j,n_j})}
{(k+2\sum_{j=1}^k\frac{1}{j^2})^2(1+k+2\sum_{j=1}^k\frac{1}{j^2})}.
\label{eq:postvar_p_k_2}
\end{align}
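The closed forms (\ref{eq:postmean_p_k_2}) and (\ref{eq:postvar_p_k_2}) make the recursion trivial to implement. The following sketch computes the stage-wise posterior mean and variance from a stream of indicators, and exhibits the two limiting regimes that the characterization in the next section formalizes:

```python
def recursive_posterior(y):
    """Posterior mean and variance of p_{k,n_k} after each stage k, following
    (eq:postmean_p_k_2) and (eq:postvar_p_k_2), with alpha_j = beta_j = 1/j^2."""
    means, variances = [], []
    A = 0.0    # running value of sum_{j<=k} 1/j^2
    Ysum = 0   # running value of sum_{j<=k} y_j
    for k, y_k in enumerate(y, start=1):
        A += 1.0 / k ** 2
        Ysum += y_k
        mean = (A + Ysum) / (k + 2 * A)
        var = ((A + Ysum) * (k + A - Ysum)) / ((k + 2 * A) ** 2 * (1 + k + 2 * A))
        means.append(mean)
        variances.append(var)
    return means, variances

# all-ones indicators (convergent-series regime): mean -> 1, variance -> 0
m1, v1 = recursive_posterior([1] * 500)
# all-zeros indicators (divergent-series regime): mean -> 0, variance -> 0
m0, v0 = recursive_posterior([0] * 500)
```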
\section{Characterization of convergence properties of the underlying infinite series}
\label{sec:characterization}
Based on our recursive Bayesian theory we have the following theorem that characterizes
convergence of $S_{1,\infty}$ in terms of the limit of the posterior
probability of $p_{k,n_k}$, as $k\rightarrow\infty$.
Note that the sample space of $S_{1,\infty}$ is also given by $\mathfrak S$.
We also assume, for the sake of generality, that for any $\omega\in\mathfrak S\cap\mathfrak N^c$, where
$\mathfrak N~(\subset\mathfrak S)$ has zero probability measure, the non-negative monotonically
decreasing sequence $\{c_j\}_{j=1}^{\infty}$
depends upon $\omega$, so that we shall denote the sequence by $\{c_j(\omega)\}_{j=1}^{\infty}$.
In other words, we allow $\left\{c_j(\omega)\right\}_{j=1}^{\infty}$ to depend upon the corresponding series
$S_{1,\infty}(\omega)$.
Note that if $S_{1,\infty}(\omega)<\infty$, then
the sequence $\left\{|S_{j,n_j}(\omega)|\right\}_{j=1}^{\infty}$ is uniformly bounded,
for all sequences $\{n_j\}_{j=1}^{\infty}$,
and converges to zero for all sequences $\{n_j\}_{j=1}^{\infty}$, which implies that there exists a
monotonically decreasing sequence $\left\{c_j(\omega)\right\}_{j=1}^{\infty}$ independent of the choice of
$\{n_j\}_{j=1}^{\infty}$ such that for some $j_0(\omega)\geq 1$,
\begin{equation}
|S_{j,n_j}(\omega)|\leq c_j(\omega),~\mbox{for}~j\geq j_0(\omega).
\label{eq:bound_S}
\end{equation}
Indeed, in most of our illustrations presented in this paper, including the Riemann Hypothesis, we
choose $\left\{c_j(\omega)\right\}_{j=1}^{\infty}$ in a way that depends upon the infinite series at hand.
\begin{theorem}
\label{theorem:convergence}
For any $\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ is some null set having probability measure zero,
$S_{1,\infty}(\omega)<\infty$ if and only if
there exists a non-negative monotonically decreasing sequence
$\left\{c_j(\omega)\right\}_{j=1}^{\infty}$ such that
for any choice of the sequence $\{n_j\}_{j=1}^{\infty}$,
\begin{equation}
\pi\left(\mathcal N_1|y_{k,n_k}(\omega)\right)\rightarrow 1,
\label{eq:consistency_at_1}
\end{equation}
as $k\rightarrow\infty$,
where $\mathcal N_1$ is any neighborhood of 1 (one).
\end{theorem}
\begin{proof}
Let, for $\omega\in\mathfrak S\cap\mathfrak N^c$,
$S_{1,\infty}(\omega)$ be convergent.
Then, by (\ref{eq:bound_S}), $|S_{j,n_j}(\omega)|\leq c_j(\omega)$ for all $n_j$, so that $y_{j,n_j}(\omega)=1$
for all $j>j_0(\omega)$, for all $n_j$.
Hence, in this case, $\sum_{j=1}^ky_{j,n_j}(\omega)=k-k_0(\omega)$,
where $k_0(\omega)\geq 0$. Also, $\sum_{j=1}^k\frac{1}{j^2}\rightarrow\frac{\pi^2}{6}$, as $k\rightarrow\infty$.
Consequently, it is easy to see that
\begin{align}
\mu_k=E\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&\sim\frac{\frac{\pi^2}{6}+k-k_0(\omega)}{k+\frac{\pi^2}{3}}
\rightarrow 1,~\mbox{as}~k\rightarrow\infty,~\mbox{and},
\label{eq:postmean_p_k_limit}\\
\sigma^2_k=Var\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&\sim
\frac{(\frac{\pi^2}{6}+k)(\frac{\pi^2}{6})}{(k+\frac{\pi^2}{3})^2(1+k+\frac{\pi^2}{3})}
\rightarrow 0~\mbox{as}~k\rightarrow\infty.
\label{eq:postvar_p_k_limit}
\end{align}
In the above, for any two sequences $\left\{a_k\right\}_{k=1}^{\infty}$ and $\left\{b_k\right\}_{k=1}^{\infty}$,
$a_k\sim b_k$ indicates $\frac{a_k}{b_k}\rightarrow 1$, as $k\rightarrow\infty$.
Now let $\mathcal N_1$ denote any neighborhood of 1, and let $\epsilon>0$ be sufficiently small such that
$\mathcal N_1\supseteq\left\{1-p_{k,n_k}<\epsilon\right\}$. Combining (\ref{eq:postmean_p_k_limit})
and (\ref{eq:postvar_p_k_limit}) with Chebychev's inequality ensures
that (\ref{eq:consistency_at_1}) holds.
Now assume that (\ref{eq:consistency_at_1}) holds.
Then for any given $\epsilon>0$,
\begin{equation}
\pi\left(p_{k,n_k}>1-\epsilon|y_{k,n_k}(\omega)\right)\rightarrow 1,~\mbox{as}~k\rightarrow\infty.
\label{eq:post1}
\end{equation}
Hence, it can be seen, using Markov's inequality, that
\begin{align}
E\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&\rightarrow 1;
\label{eq:postmean1}\\
Var\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&\rightarrow 0,
\label{eq:postvar1}
\end{align}
as $k\rightarrow\infty$.
If $S_{1,\infty}(\omega)$ does not converge then there exists $j_0(\omega)$ such that for each
$j\geq j_0(\omega)$, there exists $n_j(\omega)$ satisfying
$\left|S_{j,n_j(\omega)}(\omega)\right|>c_j(\omega)$, for any choice of non-negative sequence
$\{c_j(\omega)\}_{j=1}^{\infty}$ monotonically converging to zero.
Hence, in this situation,
$0\leq \sum_{j=1}^ky_{j,n_j(\omega)}(\omega)\leq j_0(\omega)$.
Substituting this in (\ref{eq:postmean_p_k_2}) and (\ref{eq:postvar_p_k_2}), it is easy to see that,
as $k\rightarrow\infty$,
\begin{align}
E\left(p_{k,n_k(\omega)}|y_{k,n_k(\omega)}(\omega)\right)&\rightarrow 0;
\label{eq:postmean_div}\\
Var\left(p_{k,n_k(\omega)}|y_{k,n_k(\omega)}(\omega)\right)&\rightarrow 0,
\label{eq:postvar_div}
\end{align}
so that (\ref{eq:postmean1}) is contradicted.
\end{proof}
We now prove the following theorem that provides necessary and sufficient conditions for
divergence of $S_{1,\infty}(\omega)$ in terms of the limit of the posterior
probability of $p_{k,n_k(\omega)}$, as $k\rightarrow\infty$.
\begin{theorem}
\label{theorem:divergence}
$S_{1,\infty}$ is almost surely divergent if and only if
for any $\omega\in\mathfrak S\cap\mathfrak N^c$ where $\mathfrak N$ is some null set having probability measure zero,
there exists a sequence
$\{n_j(\omega)\}_{j=1}^{\infty}$ such that
\begin{equation}
\pi\left(\mathcal N_0|y_{k,n_k(\omega)}(\omega)\right)\rightarrow 1,
\label{eq:consistency_at_0}
\end{equation}
as $k\rightarrow\infty$,
where $\mathcal N_0$ is any neighborhood of 0 (zero).
\end{theorem}
\begin{proof}
Assume that $S_{1,\infty}(\omega)$ is divergent. Then
there exists $j_0(\omega)\geq 1$ such that for every $j\geq j_0(\omega)$, one can find
$n_j(\omega)$ satisfying
$\left|S_{j,n_j(\omega)}(\omega)\right|>c_j(\omega)$, for any choice of non-negative sequence $\{c_j(\omega)\}_{j=1}^{\infty}$
monotonically converging to zero.
From the proof of the sufficient condition of Theorem \ref{theorem:convergence} it follows that
(\ref{eq:postmean_div}) and (\ref{eq:postvar_div}) hold.
Let $\epsilon>0$ be small enough so that $\mathcal N_0\supseteq\left\{p_{k,n_k(\omega)}<\epsilon\right\}$. Then
combining Chebychev's inequality with (\ref{eq:postmean_div}) and (\ref{eq:postvar_div})
it is easy to see that (\ref{eq:consistency_at_0}) holds.
Now assume that (\ref{eq:consistency_at_0}) holds.
Then for any given $\epsilon>0$,
\begin{equation}
\pi\left(p_{k,n_k(\omega)}<\epsilon|y_{k,n_k(\omega)}(\omega)\right)\rightarrow 1,~\mbox{as}~k\rightarrow\infty.
\label{eq:post2}
\end{equation}
It can be seen, now using Markov's inequality with respect to $1-p_{k,n_k(\omega)}$, that
\begin{align}
E\left(p_{k,n_k(\omega)}|y_{k,n_k(\omega)}(\omega)\right)&\rightarrow 0;
\label{eq:postmean2}\\
Var\left(p_{k,n_k(\omega)}|y_{k,n_k(\omega)}(\omega)\right)&\rightarrow 0,
\label{eq:postvar2}
\end{align}
as $k\rightarrow\infty$.
If $S_{1,\infty}(\omega)$ is convergent, then by Theorem \ref{theorem:convergence},
$\pi\left(\mathcal N_1|y_{k,n_k}(\omega)\right)\rightarrow 1$ as $k\rightarrow\infty$, for
all sequences $\{n_j\}_{j=1}^{\infty}$, so that
$E\left(p_{k,n_k(\omega)}|y_{k,n_k(\omega)}(\omega)\right)\rightarrow 1$, which is a contradiction to (\ref{eq:postmean2}).
\end{proof}
Note that Theorem \ref{theorem:divergence} encompasses even oscillatory series. For instance, if for some
$\omega\in\mathfrak S\cap\mathfrak N^c$, $S_{1,\infty}(\omega)=\sum_{i=1}^{\infty}\left(-1\right)^{i-1}$, then
the sequence $n_j(\omega)=1+2(j-1)$ ensures that $|S_{j,n_j}(\omega)|>c_j(\omega)$ for all $j\geq j_0(\omega)$,
for some $j_0(\omega)\geq 1$, for any monotonically decreasing non-negative sequence $\{c_j(\omega)\}_{j=1}^{\infty}$.
This of course forces declaration of divergence of this particular series,
as per Theorem \ref{theorem:divergence}.
We show in Section S-4.1 of the supplement,
with the help of our Bayesian idea of studying
oscillatory series, how to identify the number and proportions of the limit points of this oscillatory series.
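The mechanism is easily verified numerically; a small sketch computing the block sums of $\sum_{i=1}^{\infty}(-1)^{i-1}$ under the block sizes $n_j=1+2(j-1)$:

```python
def oscillating_block_sums(n_stages):
    """Block sums S_{j,n_j} of sum_{i>=1} (-1)^(i-1), with n_j = 1 + 2(j-1)."""
    sums, i = [], 1
    for j in range(1, n_stages + 1):
        n_j = 1 + 2 * (j - 1)                       # odd block length
        s = sum((-1) ** (k - 1) for k in range(i, i + n_j))
        sums.append(s)
        i += n_j
    return sums

S_osc = oscillating_block_sums(50)
```

Every block sum has absolute value one, so $y_{j,n_j}=0$ as soon as $c_j<1$, and the posterior is driven toward zero, in accordance with Theorem \ref{theorem:divergence}.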
\subsection{Characterization of infinite series using non-recursive Bayesian posteriors}
\label{subsec:non_recursive}
Observe that it is not strictly necessary for the prior at any stage to depend upon the previous stage.
Indeed, we may simply assume that $\pi\left(p_{j,n_j}\right)\equiv Beta\left(\alpha_j,\beta_j\right)$,
for $j=1,2,\ldots$. In this case, the posterior of $p_{k,n_k}$ given $y_{k,n_k}$ is simply
$Beta\left(\alpha_k+y_{k,n_k},1+\beta_k-y_{k,n_k}\right)$. The posterior mean and variance are then given by
\begin{align}
E\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&=\frac{\alpha_k+y_{k,n_k}(\omega)}
{1+\alpha_k+\beta_k};
\label{eq:postmean_p_k_3}\\
Var\left(p_{k,n_k}|y_{k,n_k}(\omega)\right)&=
\frac{(\alpha_k+y_{k,n_k}(\omega))(1+\beta_k-y_{k,n_k}(\omega))}
{(1+\alpha_k+\beta_k)^2(2+\alpha_k+\beta_k)}.
\label{eq:postvar_p_k_3}
\end{align}
Since $y_{k,n_k}(\omega)$ converges to $1$ or $0$ as $k\rightarrow\infty$, accordingly as
$S_{1,\infty}(\omega)$ is convergent or divergent, it is easily seen, provided that $\alpha_k\rightarrow 0$
and $\beta_k\rightarrow 0$ as $k\rightarrow\infty$, that (\ref{eq:postmean_p_k_3}) converges to $1$ (respectively, $0$)
if and only if $S_{1,\infty}(\omega)$ is convergent (respectively, divergent).
Thus, characterization of convergence or divergence of infinite series is possible even with the non-recursive approach.
Indeed, note that the prior parameters $\alpha_k$ and $\beta_k$ are more flexible compared to those
associated with the recursive approach. This is because, in the non-recursive approach
we only require $\alpha_k\rightarrow 0$ and $\beta_k\rightarrow 0$ as
$k\rightarrow\infty$, so that convergence of the series $\sum_{j=1}^{\infty}\alpha_j$ and $\sum_{j=1}^{\infty}\beta_j$
are not necessary, unlike the recursive approach. However, choosing $\alpha_k$ and $\beta_k$ to be of sufficiently
small order ensures much faster convergence of the posterior mean and variance as compared to the recursive approach.
Observe that even though the posterior mean in this case converges pointwise to one, $E(p_{j,n_j})\rightarrow 1/2$
as $j\rightarrow\infty$ if $\alpha_j=\beta_j$ for $j\geq j_0$, for some $j_0\geq 1$. That is, not all $\alpha_j$ and
$\beta_j$ such that $\alpha_j\rightarrow 0$ and $\beta_j\rightarrow 0$ as $j\rightarrow\infty$ are suitable.
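The faster convergence of the non-recursive posterior mean is readily seen in a small numerical sketch (our illustration; the asymmetric choice $\alpha_k=1/k^2$, $\beta_k=1/k^3$ is one hypothetical admissible choice with $\alpha_k\rightarrow 0$ and $\beta_k\rightarrow 0$, made asymmetric in view of the caveat above):

```python
def recursive_mean(k, ys):
    """Recursive posterior mean (eq:postmean_p_k_2), with alpha_j = beta_j = 1/j^2."""
    A = sum(1.0 / j ** 2 for j in range(1, k + 1))
    return (A + sum(ys[:k])) / (k + 2 * A)

def nonrecursive_mean(k, ys):
    """Non-recursive posterior mean (eq:postmean_p_k_3); alpha_k and beta_k both
    tend to zero, chosen asymmetric here in view of the caveat above."""
    a_k, b_k = 1.0 / k ** 2, 1.0 / k ** 3
    return (a_k + ys[k - 1]) / (1 + a_k + b_k)

ys = [1] * 100   # convergent-series regime: all indicators equal one
rec = recursive_mean(100, ys)
nonrec = nonrecursive_mean(100, ys)
```

At $k=100$ the non-recursive mean is already within $10^{-3}$ of one, while the recursive mean is around $0.98$.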
Unfortunately, an important drawback of the non-recursive approach is that it does not admit extension
to the case of general oscillatory series with multiple limit points, where blocks of partial sums can not be used; see Section
S-3 of the supplement. On the other hand, as we show in Section S-3 of the supplement, the principles of our recursive theory
can be easily adopted to develop a Bayesian characterization of oscillating series, which also includes
the characterization of non-oscillating series as a special case. In other words, the recursive
approach seems to be more powerful from the perspective of development of a general characterization theory.
Moreover, as our examples on convergent and divergent series demonstrate, the recursive posteriors converge
sufficiently fast to the correct degenerate distributions, obviating the need to consider the non-recursive approach.
Consequently, we do not further pursue the non-recursive approach in this article, but reserve the topic for
investigation in the future.
\section{Illustrations}
\label{sec:illustrations}
We now illustrate our ideas with seven examples. These seven examples fall into three
categories in terms of construction of the upper bound $c_{j}$. With the first example
we demonstrate that it may sometimes be easy to devise an appropriate upper bound. In Examples
2 -- 5, we show that simple bounds such as that in Example 1 are usually not adequate in practice,
but appropriate bounds may be constructed if convergence and divergence of the series in question
is known for some values of the parameters; the resultant bounds can be utilized to learn about
convergence or divergence of the series for the remaining values of the parameters. In Examples
6 and 7, the series in question are stand-alone in the sense that they are not defined
by parameters with known convergence or divergence for some of their values, which might have aided our
construction of $c_{j}$. However, we show that these series can be embedded into appropriately
parameterized series, facilitating analyses similar to those of Examples 2 -- 5.
For these examples, we consider $n_j=n$ for $j=1,\ldots,K$, with $n=10^6$ and $K=10^5$.
Since $n$ seems to be sufficiently large, in the case of divergence we expect $|S_{j,n}|$ to exceed
the monotonically decreasing $c_j$ for all $j\geq j_0$, for sufficiently large $j_0$. Our experiments
demonstrate that this is indeed the case. For further justification we conducted some experiments with larger values
of $n$, but the results remained unchanged. Hence, for relative computational ease we set $n=10^6$ for
the illustrations in this work.
Since we needed to sum $10^6$ terms at each step of $10^5$ stages,
the associated computation is extremely demanding. For the purpose of efficiency, we parallelized
the computation of the sums of $10^6$ terms, splitting the job on many processors, using the
Message Passing Interface (MPI) protocol. In more detail,
we implemented our parallelized codes, written in C, in VMware consisting of 60 double-threaded, 64-bit physical cores,
each running at 2793.269 MHz. Parallel computation of our methods
associated with Examples 1 to 5 take, respectively, 1 minute, 4 minutes, 7 minutes, 6 minutes, and 9 minutes.
Examples 6 and 7 require about 6 minutes and 4 minutes of computational time.
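The authors' implementation is in C with MPI and is not reproduced here; purely to illustrate the divide-and-sum pattern, the following Python sketch (our construction) splits a block sum across a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def partial_block_sum(lo, hi):
    """Sum of the terms 1/log(i) for i in [lo, hi); cf. Example 1 below."""
    return sum(1.0 / math.log(i) for i in range(lo, hi))

def parallel_block_sum(lo, hi, workers=4):
    """Split [lo, hi) into `workers` chunks and sum them concurrently,
    mirroring in spirit the MPI split across cores described above."""
    step = (hi - lo) // workers
    bounds = [(lo + w * step, lo + (w + 1) * step if w < workers - 1 else hi)
              for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda ab: partial_block_sum(*ab), bounds))

serial = partial_block_sum(2, 100002)
parallel = parallel_block_sum(2, 100002)
```

The two results agree up to floating-point rounding; in the actual experiments the same split is performed across MPI ranks.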
Owing to space constraints we present our applications to the first four examples here; the applications of the
remaining examples are provided in Section S-2 of the supplement.
\subsection{Example 1}
\label{subsec:example1}
In their first example \ctn{Bou12} study the following divergent series with their methods:
\begin{equation}
S=\sum_{i=2}^{\infty}\frac{1}{\log(i)}.
\label{eq:example1}
\end{equation}
We test our Bayesian idea on this series choosing the monotonically decreasing sequence as $c_{j,n}=1/\sqrt{nj}$,
where we represent $c_j$ as $c_{j,n}$ to reflect dependence on $n$.
Figure \ref{fig:example1}, a plot of the posterior means of $\left\{p_{k,n};k=1,\ldots,10^5\right\}$,
clearly and correctly indicates that the series is divergent.
We also constructed approximate 95\% highest posterior density credible intervals at each recursive step; however,
owing to the very small variances at each stage, the intervals turned out to be too narrow to be clearly distinguishable
from the plot of the stage-wise posterior means.
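A heavily scaled-down version of this experiment (our sketch, with $n=10^3$ and $K=200$ instead of $10^6$ and $10^5$, purely to keep the computation light) already reproduces the qualitative behaviour of Figure \ref{fig:example1}:

```python
import math

def example1_posterior_means(n=1000, K=200):
    """Scaled-down run of Example 1: the series sum_{i>=2} 1/log(i), blocks of
    size n, bound c_{j,n} = 1/sqrt(n j), and alpha_j = beta_j = 1/j^2."""
    means = []
    A, Ysum, i = 0.0, 0, 2          # the series starts at i = 2
    for j in range(1, K + 1):
        s_j = sum(1.0 / math.log(t) for t in range(i, i + n))
        i += n
        y_j = 1 if abs(s_j) <= 1.0 / math.sqrt(n * j) else 0
        A += 1.0 / j ** 2
        Ysum += y_j
        means.append((A + Ysum) / (j + 2 * A))
    return means

means = example1_posterior_means()
```

Every block sum far exceeds $c_{j,n}$, so each $y_{j,n}=0$ and the posterior mean of $p_{k,n}$ decays toward zero, correctly indicating divergence.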
\begin{figure}
\centering
\includegraphics[width=6cm,height=5cm]{figures/example1_divergence-crop.pdf}
\caption{Example 1: The series (\ref{eq:example1}) is divergent.}
\label{fig:example1}
\end{figure}
\subsection{Example 2}
\label{subsec:example2}
Example 2 of \ctn{Bou12} deals with the following series:
\begin{equation}
S^a=\sum_{i=2}^{\infty}\left(1-\left\{\frac{\log(i)}{i}\right\}-a\frac{\log\log(i)}{i}\right)^i,
\label{eq:example2}
\end{equation}
where $a\in\mathbb R$. \ctn{Bou12} prove that the series converges for $a>1$ and diverges for $a\leq 1$.
\subsubsection{Choice of $c_{j,n}$}
\label{subsubsec:example2_c}
Now, however, selecting the monotone sequence as $c_{j,n}=1/\sqrt{nj}$ turns out to be inappropriate
for this series, whose behaviour is quite sensitive to the parameter $a$, particularly around $a=1$.
Hence, any appropriate sequence $\left\{c_{j,n}\right\}_{j=1}^{\infty}$ must depend on the parameter $a$
of the series (\ref{eq:example2}).
Denoting $c_{j,n}$ by $c^a_{j,n}$ to reflect the dependence on $a$
as well, we first set
\begin{equation}
u^a_{j,n}=S^{a_0}_{j,n}+\frac{(a-1-9\times 10^{-11})}{\log(j+1)},
\label{eq:u_example2}
\end{equation}
and then let
\begin{equation}
c^a_{j,n}=\left\{\begin{array}{ccc} u^a_{j,n}, &\mbox{if}~u^a_{j,n}>0;\\
S^{a_0}_{j,n}, & \mbox{otherwise.}\end{array}\right.
\label{eq:example2_c}
\end{equation}
where $a_0=1+10^{-10}$. The reason behind such a choice of $c^a_{j,n}$ is provided below.
Let, for $\epsilon>0$,
\begin{equation}
\tilde S=\sup\left\{S^a:a\geq 1+\epsilon\right\}.
\label{eq:tilde_S_example2}
\end{equation}
Thus, $\tilde S$ may be interpreted as the convergent series which is closest to divergence
given the convergence criterion $a\geq 1+\epsilon$.
Since $S^a$ is decreasing in $a$, it easily follows that the supremum in (\ref{eq:tilde_S_example2})
is attained at $a_0=1+\epsilon$.
Since the terms of the series $S^a$ are decreasing in $i$, it follows that $S^{a_0}_{j,n}$ in (\ref{eq:example2_c})
is decreasing in $j$.
We assume that $\epsilon$ is chosen so small that only the convergence properties
of the series for $\left\{a\leq 1\right\}\cup\left\{a\geq 1+\epsilon\right\}$ are of interest.
Indeed, since $\left(1-\left\{\frac{\log(i)}{i}\right\}-a\frac{\log\log(i)}{i}\right)^i$ is decreasing in $a$
for any given $i\geq 3$, our method of constructing $c^a_{j,n}$ need not
be able to correctly identify the convergence properties of the series for $1<a<1+\epsilon$.
For the purpose of illustrations we choose $\epsilon=10^{-10}$.
Note that for $a>1$ the term $\frac{(a-1-9\times 10^{-11})}{\log(j+1)}$ inflates $c^a_{j,n}$,
making $S^a_{j,n}$ more likely to fall below $c^a_{j,n}$ as $a$ increases, thus paving the way for
diagnosing convergence. The same term also
ensures that for $a\leq 1$, $c^a_{j,n}<S^{a_0}_{j,n}$ whenever $u^a_{j,n}>0$, so that
$S^a_{j,n}$ is likely to exceed $c^a_{j,n}$, thus providing an inclination towards divergence. The term
$-9\times 10^{-11}$ is an adjustment for the case $a=1+10^{-10}$, ensuring that $c^a_{j,n}$ marginally exceeds $S^a_{j,n}$,
so that convergence is correctly detected.
The scaling factor $\log(j+1)$ ensures that the part $\frac{(a-1-9\times 10^{-11})}{\log(j+1)}$ of
(\ref{eq:example2_c}) tends to zero at a slow rate so that $c^a_{j,n}$ is decreasing with $j$ and $n$ even if
$a-1-9\times 10^{-11}$ is negative.
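As a numerical illustration (a Python sketch of ours, with arbitrarily chosen $n$ and number of blocks, in contrast to our full C implementation), the construction (\ref{eq:u_example2})--(\ref{eq:example2_c}) may be coded as follows:

```python
import math

def term(i, a):
    # i-th term of the series S^a in (eq:example2).
    return (1.0 - math.log(i) / i - a * math.log(math.log(i)) / i) ** i

def block_sum(j, n, a):
    # Partial sum S^a_{j,n} over the j-th block of n consecutive terms.
    start = 2 + n * (j - 1)
    return sum(term(i, a) for i in range(start, start + n))

def c(j, n, a, a0=1 + 1e-10):
    # Upper bound c^a_{j,n} of (eq:example2_c), anchored at the
    # "just convergent" reference value a0 = 1 + 10^{-10}.
    s0 = block_sum(j, n, a0)
    u = s0 + (a - 1 - 9e-11) / math.log(j + 1)
    return u if u > 0 else s0

n, J = 1000, 20
below = {a: [block_sum(j, n, a) <= c(j, n, a) for j in range(1, J + 1)]
         for a in (2.0, 0.5)}
```

In this small experiment every block sum falls below the bound for the convergent case $a=2$ and exceeds it for the divergent case $a=0.5$, in line with the diagnoses of Figure \ref{fig:example2}.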
Figure \ref{fig:example2}, depicting our Bayesian results for this series, is in agreement
with the results of \ctn{Bou12}. In fact, we have applied our methods to many more values of
$a\in\left\{a:a\leq 1\right\}\cup\left\{a:a\geq 1+\epsilon\right\}$
with $\epsilon=10^{-10}$, and in every case the correct conclusion was reached.
\begin{figure}
\centering
\subfigure [Divergence: $a = 1-10^{-10}$.]{ \label{fig:example1_a_less_1}
\includegraphics[width=6cm,height=5cm]{figures/example2_a_less_1-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1$.]{ \label{fig:example2_a_1}
\includegraphics[width=6cm,height=5cm]{figures/example2_a_1-crop.pdf}}\\
\subfigure [Convergence: $a=1+10^{-10}$.]{ \label{fig:example2_a_greater_1}
\includegraphics[width=6cm,height=5cm]{figures/example2_a_greater_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+20^{-10}$.]{ \label{fig:example2_a_greater_1_2}
\includegraphics[width=6cm,height=5cm]{figures/example2_a12-crop.pdf}}\\
\subfigure [Divergence: $a=-1$.]{ \label{fig:example2_a_minus_1}
\includegraphics[width=6cm,height=5cm]{figures/example2_a_minus_1-crop.pdf}}\\
\caption{Example 2: The series (\ref{eq:example2}) converges for $a>1$ and diverges
for $a\leq 1$.}
\label{fig:example2}
\end{figure}
\subsection{Example 3}
\label{subsec:example3}
Let us now consider the following series analysed by \ctn{Bou12}:
\begin{equation}
S=\sum_{i=3}^{\infty}\left(1-\left(\frac{\log(i)}{i}\right)a^{\frac{\log\log(i)}{\log(i)}}\right)^i,
\label{eq:example3}
\end{equation}
where $a>0$. As is shown by \ctn{Bou12}, the series converges for $a>e$
and diverges for $a\leq e$.
\subsubsection{Choice of $c_{j,n}$}
\label{subsubsec:example3_c}
Here we first set
\begin{equation}
u^a_{j,n}=S^{a_0}_{j,n}+\frac{(a-e-9\times 10^{-11})}{\log(j+1)},
\label{eq:u_example3}
\end{equation}
and then let $c^a_{j,n}$ be defined by (\ref{eq:example2_c}).
Again, it is easily seen that $S^{a_0}_{j,n}$ is decreasing in $j$.
In this example we set $a_0=e+10^{-10}$. The rationale behind the choice remains the same as detailed in
Section \ref{subsubsec:example2_c}.
As before, the results obtained by our Bayesian theory, as displayed in Figure \ref{fig:example3}, are
in complete agreement with the
results obtained by \ctn{Bou12}.
\begin{figure}
\centering
\subfigure [Divergence: $a=e-10^{-10}$.]{ \label{fig:example3_e_less_1}
\includegraphics[width=6cm,height=5cm]{figures/example3_a_less_e-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=e$.]{ \label{fig:example3_a_e}
\includegraphics[width=6cm,height=5cm]{figures/example3_a_e-crop.pdf}}\\
\subfigure [Convergence: $a=e+10^{-10}$.]{ \label{fig:example3_a_greater_e}
\includegraphics[width=6cm,height=5cm]{figures/example3_a_greater_e-crop.pdf}}
\subfigure [Convergence: $a=e+20^{-10}$.]{ \label{fig:example3_a_e2}
\includegraphics[width=6cm,height=5cm]{figures/example3_a_e2-crop.pdf}}
\caption{Example 3: The series (\ref{eq:example3}) converges for $a>e$ and diverges
for $a\leq e$.}
\label{fig:example3}
\end{figure}
\subsection{Example 4}
\label{subsec:example4}
We now consider series (\ref{eq:inconclusive1}). It has been proved by
\ctn{Bou12} that the series is convergent for $a-b>1$ and divergent for $a+b<1$.
As mentioned before, the hierarchy of tests of \ctn{Bou12} are
inconclusive for $a=b=1$.
In this example we denote the partial sums by $S^{a,b}_{j,n}$ and the actual series $S$ by
$S^{a,b}$ to reflect the dependence on both
the parameters $a$ and $b$, where
\begin{equation}
S^{a,b}_{j,n}=\sum_{i=3+n(j-1)}^{3+nj-1}\left(1-\frac{\log i}{i}-\frac{\log\log i}{i}
\left\{\cos^2\left(\frac{1}{i}\right)\right\}\left(a+(-1)^ib\right)\right)^i.
\label{eq:S_example4}
\end{equation}
We then have the following lemma, the proof of which is
presented in Section S-1 of the supplement.
\begin{lemma}
\label{lemma:example4}
For series (\ref{eq:inconclusive1}), for $j\geq 1$ and $n$ even,
$S^{a,b}_{j,n}$ given by (\ref{eq:S_example4}) is decreasing in $a$ but increasing in $b$.
\end{lemma}
Since $S^{a,b}$ is simply the sum of the partial sums, it follows that
\begin{corollary}
\label{corollary:example4}
$S^{a,b}$ is decreasing in $a$ and increasing in $b$.
\end{corollary}
We let
\begin{equation}
A_{\epsilon}=\left\{a:0\leq a\leq 1\right\}\cup\left\{a:a\geq 1+\epsilon\right\},
\label{eq:A_example4}
\end{equation}
and
\begin{equation}
\tilde S=\underset{a\in A_{\epsilon}}{\inf}\underset{b\geq 0}{\sup}~\left\{S^{a,b}:a-b>1\right\}.
\label{eq:tilde_S}
\end{equation}
It is easy to see in this case, due to Corollary \ref{corollary:example4} and the convergence
criterion $a-b>1$, that $\tilde S$ is attained at $a_0=1+\epsilon$ and $b_0=0$.
As before, we set $\epsilon=10^{-10}$.
Hence, arguments similar to those in Section \ref{subsubsec:example2_c} lead to the following choice of the
upper bound for $S^{a,b}_{j,n}$, which we denote in this example by $c^{a,b}_{j,n}$:
\begin{equation}
c^{a,b}_{j,n}=\left\{\begin{array}{ccc} u^{a,b}_{j,n}, &\mbox{if}~u^{a,b}_{j,n}>0;\\
S^{a_0,b_0}_{j,n}, & \mbox{otherwise},\end{array}\right.
\label{eq:example4_c}
\end{equation}
where $a_0=1+10^{-10}$, $b_0=0$, and
\begin{equation}
u^{a,b}_{j,n}=S^{a_0,b_0}_{j,n}+\frac{(a-1-b-9\times 10^{-11})}{\log(j+1)}.
\label{eq:u_example4}
\end{equation}
As before, it is easily seen that $S^{a_0,b_0}_{j,n}$ is decreasing in $j$. Also note that $-b$
in (\ref{eq:u_example4}) takes account of the fact that the partial sums are increasing in $b$,
thus favouring divergence for increasing $b$.
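A quick numerical sanity check (a Python sketch of ours, with an arbitrary even block size, since Lemma \ref{lemma:example4} requires $n$ even) illustrates the monotonicity in $a$ and the construction of $c^{a,b}_{j,n}$:

```python
import math

def term(i, a, b):
    # i-th term of series (eq:inconclusive1), as used in (eq:S_example4).
    w = (math.log(math.log(i)) / i) * math.cos(1.0 / i) ** 2
    return (1.0 - math.log(i) / i - w * (a + (-1) ** i * b)) ** i

def block_sum(j, n, a, b):
    # Partial sum S^{a,b}_{j,n}, with the first block starting at i = 3.
    start = 3 + n * (j - 1)
    return sum(term(i, a, b) for i in range(start, start + n))

def c(j, n, a, b, a0=1 + 1e-10, b0=0.0):
    # Upper bound c^{a,b}_{j,n} of (eq:example4_c).
    s0 = block_sum(j, n, a0, b0)
    u = s0 + (a - 1 - b - 9e-11) / math.log(j + 1)
    return u if u > 0 else s0

# Monotonicity in a holds term by term here, since log(log(i)) > 0
# for i >= 3 and each base lies in (0, 1) for these parameter values:
n = 100  # even, as required by the lemma
s_low, s_high = block_sum(1, n, 1.0, 0.5), block_sum(1, n, 1.5, 0.5)
```

For the convergent configuration $(a,b)=(3,1)$ of panel (a) of Figure \ref{fig:example4a}, the block sums indeed stay below the bound in this toy check.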
Setting aside panel (c) of Figure \ref{fig:example4b}, observe that the remaining panels of
Figures \ref{fig:example4a} and \ref{fig:example4b} are in agreement with the results of \ctn{Bou12};
recall that in the case $a=b=1$ the tests of \ctn{Bou12} turned out to be
inconclusive. Panel (c) of Figure \ref{fig:example4b} demonstrates that the series is divergent for $a=b=1$.
\begin{figure}
\centering
\subfigure [Convergence: $a=3,b=1$.]{ \label{fig:example4_a_3_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example4_a_3_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+10^{-10},b=0$.]{ \label{fig:example4_a11_b00}
\includegraphics[width=6cm,height=5cm]{figures/example4_a11_b00-crop.pdf}}\\
\subfigure [Convergence: $a=1+20^{-10},b=10^{-10}$.]{ \label{fig:example4_a_12_b_01}
\includegraphics[width=6cm,height=5cm]{figures/example4_a12_b01-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1/2,b=1/3$.]{ \label{fig:example4_a_1_2_b_1_3}
\includegraphics[width=6cm,height=5cm]{figures/example4_a_1_2_b_1_3-crop.pdf}}
\caption{Example 4: The series (\ref{eq:inconclusive1}) converges for $(a=3,b=1)$, $\left(a=1+10^{-10},b=0\right)$,
$\left(a=1+20^{-10},b=10^{-10}\right)$ and diverges for $\left(a=1/2,b=1/3\right)$.}
\label{fig:example4a}
\end{figure}
\begin{figure}
\centering
\subfigure [Divergence: $a=\frac{1}{2}\left(1-10^{-11}\right),
b=\frac{1}{2}\left(1-10^{-11}\right)$.]{ \label{fig:example4_a+b_less_1}
\includegraphics[width=6cm,height=5cm]{figures/example4_a+b_less_1-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1,b=0$.]{ \label{fig:example4_a_1_b_0}
\includegraphics[width=6cm,height=5cm]{figures/example4_a_1_b_0-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1,b=1$.]{ \label{fig:example4_a_1_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example4_a_1_b_1-crop.pdf}}
\caption{Example 4: The series (\ref{eq:inconclusive1}) diverges for
$\left(a=\frac{1}{2}\left(1-10^{-11}\right),b=\frac{1}{2}\left(1-10^{-11}\right)\right)$,
$(a=1,b=0)$ and $(a=1,b=1)$.}
\label{fig:example4b}
\end{figure}
\begin{comment}
\subsection{Example 5}
\label{subsec:example5}
Now consider the following series presented and analysed in \ctn{Bou12}:
\begin{equation}
S=\sum_{i=3}^{\infty}\left(1-\left(\frac{\log(i)}{i}\right)
\left(a\left(1+\sin^2\left(\sqrt{\left(\frac{\log\left(\log(i)\right)}{\log(i)}\right)}\right)\right)
+b\sin\left(\frac{i\pi}{4}\right)\right)\right)^i;~a>0,b>0.
\label{eq:example5}
\end{equation}
\ctn{Bou12} show that the series converges when $a-b>1$ and diverges when $a+b<1$.
Again, as in the case of Example 4, the following lemma holds in Example 5, the proof of which
is provided in Appendix \ref{appendix:example5}.
Note that for mathematical convenience we consider partial sums from the $5$-th term onwards.
We also assume $n$ to be a multiple of $4$.
\begin{lemma}
\label{lemma:example5}
For the series (\ref{eq:example5}),
let
\begin{equation}
S^{a,b}_{j,n}=\sum_{i=5+n(j-1)}^{5+nj-1}\left(1-\left(\frac{\log(i)}{i}\right)
\left(a\left(1+\sin^2\left(\sqrt{\left(\frac{\log\left(\log(i)\right)}{\log(i)}\right)}\right)\right)
+b\sin\left(\frac{i\pi}{4}\right)\right)\right)^i,
\label{eq:S_example5}
\end{equation}
for $j\geq 1$ and $n$, a multiple of $4$. Then $S^{a,b}_{j,n}$ is decreasing in $a$ and increasing in $b$.
\end{lemma}
The following corollary with respect to $S^{a,b}$ again holds:
\begin{corollary}
\label{corollary:example5}
$S^{a,b}$ is decreasing in $a$ and increasing in $b$.
\end{corollary}
Thus, we follow the same method as in Example 4 to determine $c^{a,b}_{j,n}$, but we need to note
that in this example $a>0$ and $b>0$ instead of $a\geq 0$ and $b\geq 0$ of Example 4. Consequently, here we define
$b\geq\epsilon$, for $\epsilon>0$, the set $A_{\epsilon}$ given by (\ref{eq:A_example4})
and
\begin{equation}
\tilde S=\underset{a\in A_{\epsilon}}{\inf}\underset{b\geq \epsilon}{\sup}~\left\{S^{a,b}:a-b>1\right\}.
\label{eq:tilde_S2}
\end{equation}
In this case, Corollary \ref{corollary:example5} and the convergence
criterion $a-b>1$ ensure that $\tilde S$ is attained at $a_0=1+\epsilon$ and $b_0=\epsilon$.
As before, we set $\epsilon=10^{-10}$.
The rest of the arguments leading to the choice of $c^{a,b}_{j,n}$ remains the same as in Example 4,
and hence in this example $c^{a,b}_{j,n}$ has the same form as
(\ref{eq:example4_c}), with $a_0=1+10^{-10}$, $b_0=10^{-10}$, where $S^{a_0,b_0}_{j,n}$ is decreasing in $j$ as before.
Figure \ref{fig:example5} depicts the results of our Bayesian analysis of the series (\ref{eq:example5}) for
various values of $a$ and $b$. All the results are in accordance with those of \ctn{Bou12}.
\begin{figure}
\centering
\subfigure [Convergence: $a=2,b=1$.]{ \label{fig:example5_a_2_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example5_a_2_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+20^{-10},b=10^{-10}$.]{ \label{fig:example5_a12_b01}
\includegraphics[width=6cm,height=5cm]{figures/example5_a12_b01-crop.pdf}}\\
\hspace{2mm}
\subfigure [Convergence: $a=1+30^{-10},b=20^{-10}$.]{ \label{fig:example5_a13_b02}
\includegraphics[width=6cm,height=5cm]{figures/example5_a13_b02-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1/2,b=1/2$.]{ \label{fig:example5_a_1_2_b_1_2}
\includegraphics[width=6cm,height=5cm]{figures/example5_a_1_2_b_1_2-crop.pdf}}\\
\subfigure [Divergence: $a=\frac{1}{2}\left(1-10^{-11}\right),
b=\frac{1}{2}\left(1-10^{-11}\right)$.]{ \label{fig:example5_a+b_less_1}
\includegraphics[width=6cm,height=5cm]{figures/example5_a+b_less_1-crop.pdf}}
\caption{Example 5: The series (\ref{eq:example5}) converges for $(a=2,b=1)$, $(a=1+20^{-10},b=10^{-10})$,
$(a=1+30^{-10},b=20^{-10})$ and diverges for $(a=1/2,b=1/2)$ and
$\left(a=\frac{1}{2}\left(1-10^{-11}\right),b=\frac{1}{2}\left(1-10^{-11}\right)\right)$.}
\label{fig:example5}
\end{figure}
\subsection{Example 6}
\label{subsec:example6}
We now investigate whether or not the following series converges:
\begin{equation}
S=\sum_{i=1}^{\infty}\frac{1}{i^3|\sin i|}.
\label{eq:example6_S}
\end{equation}
This series is a special case of the generalized form of the Flint Hills series (see
\ctn{Pickover02} and \ctn{Alek11}).
For our purpose, we first embed the above series into
\begin{equation}
S^{a,b}=\sum_{i=1}^{\infty}\frac{i^{b-3}}{a+|\sin i|},
\label{eq:example6}
\end{equation}
where $b\in\mathbb R$ and $|a|\leq\eta$, for some $\eta>0$, specified according to our
purpose. Note that, $S=S^{0,0}$, and we set $\eta=10^{-10}$ for our investigation of (\ref{eq:example6_S}).
Note that for any fixed $a\neq 0$, $S^{a,b}$ converges if $b<2$ and diverges if $b\geq 2$.
Since $S^{a,b}$ increases in $b$
it follows that the equality in
\begin{equation}
\tilde S=\sup\left\{S^{a,b}:a=\epsilon,~b\leq 2-\epsilon \right\}
\label{eq:sup_A_example6}
\end{equation}
is attained at $(a_0,b_0)=(\epsilon,2-\epsilon)$.
Arguments in keeping with those in the previous examples
lead to the following choice of the
upper bound for $S^{a,b}_{j,n}$, which we again denote by $c^{a,b}_{j,n}$:
\begin{equation}
c^{a,b}_{j,n}=\left\{\begin{array}{ccc} u^{a,b}_{j,n}, &\mbox{if}~b<2;\\
v^{a,b}_{j,n}, & \mbox{otherwise},\end{array}\right.
\label{eq:example6_c}
\end{equation}
where
\begin{align}
u^{a,b}_{j,n}&=S^{a_0,b_0}_{j,n}+\frac{(|a|-b+2-2\epsilon+10^{-5})}{\log(j+1)};
\label{eq:u_example6}\\
v^{a,b}_{j,n}&=S^{a_0,b_0}_{j,n}+\frac{(|a|-b+2-2\epsilon-10^{-5})}{\log(j+1)}.
\label{eq:v_example6}
\end{align}
It can be easily verified that the upper bound is decreasing in $j$.
Notice that we add the term $10^{-5}$ when $b<2$ so that our Bayesian method favours convergence and subtract
the same when $b\geq 2$ to facilitate detection of divergence. Since convergence or divergence of $S^{a,b}$
does not depend upon $a\in [-\eta,\eta]\setminus\left\{0\right\}$,
we use $|a|$ in (\ref{eq:u_example6}) and (\ref{eq:v_example6}).
Setting $\epsilon=10^{-10}$, Figures \ref{fig:example6a} and \ref{fig:example6b}
depict convergence and divergence of $S^{a,b}$ for various
values of $a$ and $b$. In particular, panel (e) of Figure \ref{fig:example6b}
shows that our main interest, the series $S$, given by (\ref{eq:example6_S}), converges.
\begin{figure}
\centering
\subfigure [Convergence: $a=-10^{-10},b=2-10^{-10}$.]{ \label{fig:example6_a_minus_e_b_2_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_2_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=-10^{-10},b=2+10^{-10}$.]{ \label{fig:example6_a_minus_e_b_2_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_2_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=10^{-10},b=2-10^{-10}$.]{ \label{fig:example6_a_plus_e_b_2_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_2_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=10^{-10},b=2+10^{-10}$.]{ \label{fig:example6_a_plus_e_b_2_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_2_plus_e-crop.pdf}}
\caption{Example 6: The series (\ref{eq:example6}) converges for $(a=-10^{-10},b=2-10^{-10})$,
$(a=10^{-10},b=2-10^{-10})$, and diverges for $(a=-10^{-10},b=2+10^{-10})$,
$(a=10^{-10},b=2+10^{-10})$.}
\label{fig:example6a}
\end{figure}
\begin{figure}
\centering
\subfigure [Convergence: $a=-10^{-10},b=-10^{-10}$.]{ \label{fig:example6_a_minus_e_b_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=-10^{-10},b=10^{-10}$.]{ \label{fig:example6_a_minus_e_b_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=10^{-10},b=-10^{-10}$.]{ \label{fig:example6_a_plus_e_b_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=10^{-10},b=10^{-10}$.]{ \label{fig:example6_a_plus_e_b_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=0,b=0$.]{ \label{fig:example6_a_0_b_0}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_0_b_0-crop.pdf}}
\caption{Example 6: The series (\ref{eq:example6}) converges for $(a=-10^{-10},b=-10^{-10})$,
$(a=-10^{-10},b=10^{-10})$, $(a=10^{-10},b=-10^{-10})$, $(a=10^{-10},b=10^{-10})$, and
$(a=0,b=0)$.}
\label{fig:example6b}
\end{figure}
\subsection{Example 7}
\label{subsec:example7}
We now consider
\begin{equation}
S=\sum_{i=1}^{\infty}\frac{|\sin~i|^i}{i}.
\label{eq:example7}
\end{equation}
We embed this series into
\begin{equation}
S^{a,b}=\sum_{i=1}^{\infty}\frac{|\sin~a\pi i|^i}{i^b},
\label{eq:example7_embedding}
\end{equation}
where $a\in\mathbb R$ and $b\geq 1$.
The above series converges if $b>1$, for all $a\in\mathbb R$.
But for $b=1$, it is easy to see that
the series diverges if $a=\ell/2m$, where
$\ell$ and $m$ are odd integers.
Letting $a_0=\pi^{-1}$ and $b_0=1+\epsilon$, with $\epsilon=10^{-10}$,
we set the following upper bound that is decreasing in $j$:
\begin{equation}
c^{a,b}_{j,n}=S^{a_0,b_0}_{j,n}+\frac{\epsilon}{j}.
\label{eq:example7_upper_bound}
\end{equation}
Thus, $c^{a,b}_{j,n}$ corresponds to a convergent series which is also sufficiently close
to divergence. Addition of the term $\frac{\epsilon}{j}$ provides further protection from
erroneous conclusions regarding divergence.
Panel(a) of Figure \ref{fig:example7} demonstrates that the series of our interest, given by
(\ref{eq:example7}), diverges. Panel (b) confirms that for $a=5/(2\times 7)$ and $b=1$, the series
indeed diverges, as it should.
\begin{figure}
\centering
\subfigure [Divergence: $a=\pi^{-1},b=1$.]{ \label{fig:example7_a_pi_inv_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example7_a_pi_inv_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=5/(2\times 7),b=1$.]{ \label{fig:example7_a_odd1_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example7_a_odd1_b_1-crop.pdf}}
\caption{Example 7: The series (\ref{eq:example7_embedding}) diverges for $(a=\pi^{-1},b=1)$,
$(a=5/7,b=1)$.}
\label{fig:example7}
\end{figure}
\end{comment}
\section{Application to Riemann Hypothesis}
\label{sec:RH}
\subsection{Brief background}
\label{subsec:brief_background}
Consider the Riemann zeta function given by
\begin{equation}
\zeta(a)=\frac{1}{1-2^{1-a}}\sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\sum_{k=0}^n\left(-1\right)^k
\frac{n!}{k!(n-k)!}(k+1)^{-a},
\label{eq:Riemann_zeta}
\end{equation}
where $a$ is complex.
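This globally convergent double sum is straightforward to evaluate numerically. The following Python sketch (ours, with an ad hoc truncation level $N=60$) recovers, for instance, $\zeta(2)=\pi^2/6$:

```python
import math

def zeta(a, N=60):
    # Truncated evaluation of the double sum (eq:Riemann_zeta);
    # valid for a != 1, where the prefactor 1/(1 - 2^{1-a}) is finite.
    total = 0.0
    for n in range(N):
        inner = sum((-1) ** k * math.comb(n, k) * (k + 1) ** (-a)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1.0 - 2 ** (1 - a))

print(zeta(2.0))  # ~ 1.644934... = pi^2/6
```

The factor $2^{-(n+1)}$ makes the truncation error decay geometrically, so a modest $N$ already yields high accuracy, even on the critical strip (e.g., $\zeta(1/2)\approx -1.46035$).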
The above function is formed by first considering Euler's function
\begin{equation}
Z(a)=\sum_{n=1}^{\infty}\frac{1}{n^a},
\label{eq:Euler}
\end{equation}
then by multiplying both sides of (\ref{eq:Euler}) by $\left(1-\frac{2}{2^a}\right)$ to obtain
\begin{equation}
\left(1-\frac{2}{2^a}\right)Z(a)=\sum_{n=1}^{\infty}\frac{\left(-1\right)^{n+1}}{n^a},
\label{eq:Euler2}
\end{equation}
and then dividing the right hand side of (\ref{eq:Euler2}) by $\left(1-\frac{2}{2^a}\right)$.
The advantage of the function $\zeta(a)$ in comparison with the parent function $Z(a)$ is that
$Z(a)$ is divergent if the real part of $a$, which we denote by $Re(a)$, is less than or equal to $1$,
while $\zeta(a)$ is convergent for all $a$ with $Re(a)>0$. Importantly, $\zeta(a)=Z(a)$ whenever $Z(a)$
is convergent.
Whenever $0<Re(a)<1$, $\zeta(a)$ satisfies the following identity:
\begin{equation}
\zeta(a)=2^a\pi^{a-1}\sin\left(\frac{\pi a}{2}\right)\Gamma(1-a)\zeta(1-a),
\label{eq:zeta_identity}
\end{equation}
where $\Gamma(\cdot)$ is the gamma function.
The definition can be extended to arguments with non-positive real part
by means of the right hand side of (\ref{eq:zeta_identity}); abusing notation, we denote the extended function by
$\zeta(a)$. Because of the sine function, it follows that
the trivial zeros of this function occur when $a$ is a negative even integer.
Hence, the non-trivial zeros must satisfy $0<Re(a)<1$.
\ctn{Riemann1859} conjectured that all the non-trivial zeros have real part $1/2$; this is
the famous Riemann Hypothesis.
For accessible accounts of the Riemann Hypothesis, see \ctn{Borwein06} and \ctn{Derbyshire04}.
One equivalent condition for the Riemann Hypothesis is related to partial sums
of the M\"{o}bius function, given by
\begin{equation}
\mu(n)=\left\{
\begin{array}{ccc}-1 & \mbox{if} & n~\mbox{is a square-free positive integer with an odd number of prime factors};\\
0 & \mbox{if} & n~\mbox{has a squared prime factor};\\
1 & \mbox{if} & n~\mbox{is a square-free positive integer with an even number of prime factors},
\end{array}
\right.
\label{eq:mobius2}
\end{equation}
where, by square-free integer we mean that the integer is not divisible by any perfect square other than $1$.
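For illustration, the first values of $\mu(n)$ can be generated with a simple linear sieve, sketched below in Python (our actual computations use a more efficient sieve-based C implementation):

```python
def mobius_sieve(N):
    # mu[n] for n = 1, ..., N via a linear sieve: every composite is
    # visited exactly once, through its smallest prime factor.
    mu = [0] * (N + 1)
    mu[1] = 1
    primes = []
    is_comp = [False] * (N + 1)
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1           # a prime has a single prime factor
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0    # i*p acquires a squared prime factor
                break
            mu[i * p] = -mu[i]   # one extra prime factor flips the sign
    return mu

print(mobius_sieve(10)[1:])  # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```

A convenient correctness check is the classical identity $\sum_{d\mid n}\mu(d)=0$ for every $n>1$.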
Specifically, the condition
\begin{equation}
\sum_{n=1}^x\mu(n)=O\left(x^{\frac{1}{2}+\epsilon}\right)
\label{eq:Merten}
\end{equation}
for any $\epsilon>0$,
is equivalent to the Riemann Hypothesis. This condition implies that the Dirichlet series for the M\"{o}bius function,
given by
\begin{equation}
M(a)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n^a}=\frac{1}{\zeta(a)},
\label{eq:mobius}
\end{equation}
is analytic in $Re(a)>1/2$. This again ensures that $\zeta(a)$ is meromorphic in $Re(a)>1/2$
and that it has no zeros in this region. Using the functional equation (\ref{eq:zeta_identity}) it follows
that there are no zeros of $\zeta(a)$ in $0<Re(a)<1/2$ either. Hence, (\ref{eq:Merten}) implies the
Riemann Hypothesis. The converse also holds.
The above arguments also imply that convergence of $M(a)$ in (\ref{eq:mobius})
for $Re(a)>1/2$ is equivalent to the Riemann Hypothesis, and it is this criterion that is of interest
in this paper.
Now, $M(a)$ converges absolutely for $Re(a)>1$;
moreover, $M(1)=0$.
The latter is equivalent to the prime number
theorem stating that the number of primes below $x$ is asymptotically $x/\log(x)$, as $x\rightarrow\infty$
(\ctn{Landau06}).
Thus, $M(a)$ converges for $Re(a)\geq 1$. That $M(a)$ diverges for $Re(a)\leq 1/2$
can be seen as follows. If $M(a)$ converged for some $a^*$ with $Re(a^*)\leq 1/2$, then analytic
continuation for Dirichlet series of the form $M(a)$ would guarantee convergence of $M(a)$ for all $a$ with $Re(a)>Re(a^*)$.
But $1/\zeta(a)$ is not analytic on $0<Re(a)<1$, because of the non-trivial zeros of $\zeta(a)$ on the strip. This would
contradict the analytic continuation leading to the identity $M(a)=1/\zeta(a)$ on this region.
Hence, $M(a)$ must be divergent for $Re(a)\leq 1/2$.
In this paper, we apply our ideas to investigate, in particular, convergence of $M(a)$ when $1/2<a<1$.
\subsection{Choice of the upper bound and implementation details}
\label{subsec:upper_bound_RH}
To form an idea of the upper bound we first plot the partial sums $S^a_{j,n}$, for
$j=1000$ and $n=10^6$, with respect to $a$. In this regard, panel (a) of Figure \ref{fig:plot_partial_sums}
shows the decreasing nature of the partial sums with respect to $a$, and panel (b) magnifies the plot
in the domain $1/2<a<1$ that we are particularly interested in. The latter shows that the partial sums
decrease sharply till about $0.7$, getting appreciably close to zero around that point,
after which the rate of decrease diminishes. Thus, one may expect a change point around $0.7$ regarding convergence.
Specifically, divergence may be expected below a point slightly larger than $0.7$ and convergence above it.
\begin{figure}
\centering
\subfigure [Plot of partial sums in the domain $(0,5)$.]{ \label{fig:plot_partial_sums_RH}
\includegraphics[width=6cm,height=5cm]{figures/RH/plot_partial_sums_RH-crop.pdf}}
\hspace{2mm}
\subfigure [Plot of partial sums in the domain $(0.5,1)$.]{ \label{fig:plot_partial_sums_RH_magnify}
\includegraphics[width=6cm,height=5cm]{figures/RH/plot_partial_sums_RH_magnify-crop.pdf}}
\caption{Plot of the partial sums $S^a_{1000,1000000}$ versus $a$. Panel (a) shows the plot in the domain
$[0,5]$ while panel (b) magnifies the same in the domain $(0.5,1)$.}
\label{fig:plot_partial_sums}
\end{figure}
Since $M(1)<\infty$, we consider this series as the basis for our upper bound,
with the value of $a$ also taken into account.
Specifically, we choose the upper bound as
\begin{equation}
c_{j,n}=\left|S^1_{j,n}+\frac{a}{j+1}\right|.
\label{eq:upper_bound_RH}
\end{equation}
Since Figure \ref{fig:plot_partial_sums} shows that the partial sums are monotonically decreasing in $a$,
the above choice of upper bound facilitates detection of convergence for relatively large values of $a$.
The part $\frac{a}{j+1}$, which tends to zero as $j\rightarrow\infty$,
accounts for the fact that the series may be convergent even for $a<1$, by slightly inflating $S^1_{j,n}$.
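The comparison underlying our analysis can be sketched in Python as follows (a toy serial version of ours: a naive $\mu(n)$ and far smaller $n$ and number of blocks than in our actual C implementation; comparing absolute block sums with the bound is a simplifying assumption of this sketch):

```python
import math

def mu(n):
    # Naive Moebius function via trial division; adequate for this
    # small illustration only.
    if n == 1:
        return 1
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0     # squared prime factor
            res = -res
        d += 1
    return -res if n > 1 else res

def block(j, n, a):
    # Partial sum S^a_{j,n} of M(a) over the j-th block of n terms.
    start = 1 + n * (j - 1)
    return sum(mu(i) / i ** a for i in range(start, start + n))

def c(j, n, a):
    # Upper bound (eq:upper_bound_RH), built around the convergent M(1).
    return abs(block(j, n, 1.0) + a / (j + 1))

n, J, a = 1000, 20, 2.0
ok = [abs(block(j, n, a)) <= c(j, n, a) for j in range(1, J + 1)]
```

For the clearly convergent case $a=2$ every absolute block sum stays below the bound in this toy run; the first block sum is close to $1/\zeta(2)\approx 0.6079$, as it should be.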
For our purpose, we compute the first $10^9$ values of the M\"{o}bius function using an efficient algorithm
proposed in \ctn{Lioen94}, which is based on the Sieve of Eratosthenes (\ctn{Eratosthenes}). We set $K=1000$ and $n=10^6$.
A complete analysis with our parallel implementation on the VMware machine takes about $2$ minutes.
\subsection{Results of our Bayesian analysis}
\label{subsec:results_RH}
Panels (a)--(e) of Figure \ref{fig:RH_1} and panels (d)--(f) of Figure \ref{fig:RH_2} show that $M(a)$ diverges for
$a=0.1$, $0.2$, $0.3$, $0.4$, $0.5$, but converges for
$a=1+10^{-10}$, $2$ and $3$.
In fact, for many other values
that we experimented with, $M(a)$ converged for $a>1$ and diverged for $a<1/2$,
demonstrating remarkable consistency with the known, existing results.
Certainly far more important are the results for $1/2<a<1$. Indeed,
panel (f) of Figure \ref{fig:RH_1} and panels (a)--(c) of Figure \ref{fig:RH_2} show that $M(a)$ diverged for
$a=0.6$ and $0.7$ and converged for $a=0.8$ and $0.9$. It thus appears that $M(a)$ diverges
for $a<a^*$ and converges for $a\geq a^*$, for some $a^*\in(0.7,0.8)$. Figure \ref{fig:RH_3} displays
results of our further experiments in this regard. Panels (a) and (b) of Figure \ref{fig:RH_3} show
the posterior means for the full set of iterations and the last $500$ iterations, respectively, for $a=0.71$.
Note that from panel (a), convergence seems to be attained, although towards the end, the plot seems to
be slightly tilted downwards. Panel (b) magnifies this, clearly showing divergence. Panels (c) and (d) of
Figure \ref{fig:RH_3} depict a similar phenomenon for $a=0.715$, although, as per panel (d), divergence seems to
set in suddenly, even after signs of convergence over the majority of the iterative stages.
Convergence of $M(a)$ begins at $a=0.72$ (approximately); panels (e) and (f)
of Figure \ref{fig:RH_3} take clear note of this.
Thus, as per our methods, $M(a)$ diverges for $a<0.72$ and converges for $a\geq 0.72$.
This is remarkably in keeping with the wisdom gained from panel (b) of Figure \ref{fig:plot_partial_sums}
that convergence is expected to occur for values of $a$ exceeding $0.7$.
Note that neither the upper bound (\ref{eq:upper_bound_RH}), nor our methodology, is in any way biased towards
$a\approx 0.7$; hence, our result is perhaps not implausible.
\subsection{Implications of our result}
\label{subsec:implications}
As per our results, $M(a)$ does not converge for all $a>1/2$, and hence our findings do not completely
support the Riemann Hypothesis. However, convergence of $M(a)$ fails only in the relatively small region
$0.5<a<0.72$, which is perhaps why there exists much evidence in favour of the Riemann Hypothesis.
\begin{figure}
\centering
\subfigure [Divergence: $a=0.1$.]{ \label{fig:RH_a_01}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_01-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.2$.]{ \label{fig:RH_a_02}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_02-crop.pdf}}\\
\subfigure [Divergence: $a=0.3$.]{ \label{fig:RH_a_03}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_03-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.4$.]{ \label{fig:RH_a_04}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_04-crop.pdf}}\\
\subfigure [Divergence: $a=0.5$.]{ \label{fig:RH_a_05}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_05-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.6$.]{ \label{fig:RH_a_06}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_06-crop.pdf}}
\caption{Riemann Hypothesis: The M\"{o}bius function based series diverges for
$a=0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$.}
\label{fig:RH_1}
\end{figure}
\begin{figure}
\centering
\subfigure [Divergence: $a=0.7$.]{ \label{fig:RH_a_07}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_07-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=0.8$.]{ \label{fig:RH_a_08}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_08-crop.pdf}}\\
\subfigure [Convergence: $a=0.9$.]{ \label{fig:RH_a_09}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_09-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+10^{-10}$.]{ \label{fig:RH_a_1_e}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_1+e-crop.pdf}}\\
\subfigure [Convergence: $a=2$.]{ \label{fig:RH_a_2}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_2-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=3$.]{ \label{fig:RH_a_3}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_3-crop.pdf}}
\caption{Riemann Hypothesis: The M\"{o}bius function based series diverges for
$a=0.7$ but converges for $a=0.8$, $0.9$, $1+10^{-10}$, $2$, $3$.}
\label{fig:RH_2}
\end{figure}
\begin{figure}
\centering
\subfigure [Divergence: $a=0.71$.]{ \label{fig:RH_a_071}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_071-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.71$.]{ \label{fig:RH_a_071_partial}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_071_partial-crop.pdf}}\\
\subfigure [Divergence: $a=0.715$.]{ \label{fig:RH_a_0715}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_0715-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.715$.]{ \label{fig:RH_a_0715_partial}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_0715_partial-crop.pdf}}\\
\subfigure [Convergence: $a=0.72$.]{ \label{fig:RH_a_072}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_072-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=0.72$.]{ \label{fig:RH_a_072_partial}
\includegraphics[width=6cm,height=5cm]{figures/RH/RH_a_072_partial-crop.pdf}}
\caption{Riemann Hypothesis: The left panels show the posterior means for the full set of iterations,
while the right panels depict the posterior means for the last $500$ iterations, for $a=0.71$, $0.715$
and $0.72$. It is evident that the M\"{o}bius function based series diverges for $a=0.71$ and $0.715$
but converges for $a=0.72$.}
\label{fig:RH_3}
\end{figure}
\section{Summary and conclusion}
\label{sec:conclusion}
In this paper, we proposed and developed a novel Bayesian methodology for assessing the convergence
of infinite series; we further extended the theory to enable detection
of multiple, or even infinitely many, limit points of the underlying infinite series.
Our developments do not require any restrictive assumption, not even independence of the elements
$X_i$ of the infinite series.
We demonstrated the reliability and efficiency of our
methods with a variety of examples, the most important one being associated with the
Riemann Hypothesis.
Both methods proposed in this paper, namely, the convergence assessment method and the
multiple limit points method, are in close agreement that the Riemann Hypothesis
cannot be fully supported. Indeed, both methods agree that there exists some
$a^*$ in the neighborhood of $0.7$ such that the infinite series based on the M\"{o}bius function
diverges for $a<a^*$ and converges for $a\geq a^*$. The results that we obtained by our Bayesian
analyses are also supported by informal plots of the partial sums depicted in Figure \ref{fig:plot_partial_sums}.
Further support for our Riemann Hypothesis results can be obtained by exploiting the
characterization of the Riemann Hypothesis via convergence of certain infinite series based on Bernoulli numbers;
the details are presented in Section S-6 of the supplement.
In closing, it is worth reminding the reader that although our work attempts to provide
insights into the Riemann Hypothesis,
we did not develop our Bayesian approach with the Riemann Hypothesis in mind. Indeed,
our primary objective is to develop Bayesian approaches to studying the convergence properties of infinite series in general.
From this perspective, the Riemann Hypothesis is just one example where it makes sense to learn about the convergence properties of
a certain class of infinite series. Further development of our approach is of course in the cards.
Note that the theory that we developed readily applies to random series; we shall carry out a detailed investigation including
comparisons with existing theories on random infinite series. We then intend to extend these works
to complex infinite series, both deterministic and random.
\section*{Acknowledgment}
We thank Arun Kumar Kuchibhotla and Debapratim Banerjee for their very useful feedback
on the first draft of this paper.
\newpage
\input{supp}
\newpage
\normalsize
\bibliographystyle{natbib}
\section{Proof of Lemma 5.1}
\label{sec:proof_lemma5.1}
Since each term of the series (1)
is decreasing in $a$, it is clear that $S^{a,b}_{j,n}$
is decreasing in $a$. We need to show that $S^{a,b}_{j,n}$ is increasing in $b$.
Let, for $i\geq 3$,
\begin{equation}
g(i)=\left(1-\frac{\log i}{i}-\frac{\log\log i}{i}
\left\{\cos^2\left(\frac{1}{i}\right)\right\}\left(a+(-1)^ib\right)\right)^i.
\label{eq:g_example4}
\end{equation}
Observe that all our partial sums of the form
$S^{a,b}_{j,n}$ for $j\geq 3$ admit the form
\begin{equation}
S^{a,b}_{j,n}=\sum_{i=r}^{r+n-1}g(i),
\label{eq:S_partial_example4}
\end{equation}
where $r=3+n(j-1)$, which is clearly odd because $n$ is even.
Now,
\begin{equation}
\sum_{i=r}^{r+n-1}g(i)=\left\{g(r)+g(r+1)\right\}+\left\{g(r+2)+g(r+3)\right\}+\cdots +\left\{g(r+n-2)+g(r+n-1)\right\},
\label{eq:sum_example4}
\end{equation}
where the sums of the consecutive terms within the parentheses have the form
\begin{align}
&g(r+\ell)+g(r+\ell+1)\notag\\
&=\left(1-\frac{\log (r+\ell)}{r+\ell}-\frac{\log\log (r+\ell)}{r+\ell}
\left\{\cos^2\left(\frac{1}{r+\ell}\right)\right\}\left(a+(-1)^{(r+\ell)}b\right)\right)^{(r+\ell)}\notag\\
&\quad\quad+\left(1-\frac{\log (r+\ell+1)}{r+\ell+1}-\frac{\log\log (r+\ell+1)}{r+\ell+1}
\left\{\cos^2\left(\frac{1}{r+\ell+1}\right)\right\}\left(a+(-1)^{(r+\ell+1)}b\right)\right)^{(r+\ell+1)}.
\label{eq:pairwise_sum_example4}
\end{align}
Since $r$ is odd, and since the terms are represented pairwise in (\ref{eq:sum_example4}), it follows that
in (\ref{eq:pairwise_sum_example4}), $r+\ell$ is odd and $r+\ell+1$ is even. That is, in (\ref{eq:pairwise_sum_example4}),
$a+(-1)^{(r+\ell)}b=a-b$ and $a+(-1)^{(r+\ell+1)}b=a+b$.
Since $\cos^2\left(\theta\right)$ is decreasing on $\left[0,\frac{\pi}{2}\right]$, and since
$\frac{1}{i}\leq \frac{\pi}{2}$ for $i\geq 3$, it follows that $\cos^2\left(\frac{1}{i}\right)$ is increasing in $i$.
Moreover, $\frac{\log\log i}{i}$ decreases in $i$ at a rate faster
than $\cos^2\left(\frac{1}{i}\right)$ increases, so that
$\frac{\log\log i}{i}\times\cos^2\left(\frac{1}{i}\right)$ decreases in $i$.
It follows that
\begin{equation}
\frac{\log\log (r+\ell)}{r+\ell}\cos^2\left(\frac{1}{r+\ell}\right)
>\frac{\log\log (r+\ell+1)}{r+\ell+1}\cos^2\left(\frac{1}{r+\ell+1}\right).
\label{eq:compare}
\end{equation}
Note that in $g(r+\ell)+g(r+\ell+1)$, $\frac{\log\log (r+\ell)}{r+\ell}\cos^2\left(\frac{1}{r+\ell}\right)$
is associated with $-b$ while $\frac{\log\log (r+\ell+1)}{r+\ell+1}\cos^2\left(\frac{1}{r+\ell+1}\right)$
involves $b$. Hence, increasing $b$ increases $g(r+\ell)$ but decreases $g(r+\ell+1)$,
and because of (\ref{eq:compare}), $g(r+\ell)+g(r+\ell+1)$ increases in $b$.
This ensures that $\sum_{i=r}^{r+n-1}g(i)$,
given by (\ref{eq:sum_example4}), is increasing in $b$. In other words, partial sums of the form
(\ref{eq:S_partial_example4}) are
increasing in $b$, proving Lemma 5.1
when $n$ is even.
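The monotonicity claims of Lemma 5.1 are also easy to check numerically. The sketch below is a hypothetical sanity check only; the block size $n=10$ and the values of $a$, $b$ and $j$ are arbitrary choices, and it computes the block sums $S^{a,b}_{j,n}$ directly from their definition.

```python
import math

def g(i, a, b):
    # A single term of the series in Lemma 5.1, valid for i >= 3.
    c = (math.log(math.log(i)) / i) * math.cos(1.0 / i) ** 2
    return (1.0 - math.log(i) / i - c * (a + (-1) ** i * b)) ** i

def block_sum(j, n, a, b):
    # S^{a,b}_{j,n}: sum of n consecutive terms starting at r = 3 + n(j - 1).
    r = 3 + n * (j - 1)
    return sum(g(i, a, b) for i in range(r, r + n))

# Spot-check the lemma with an even block size (a, b, j, n chosen arbitrarily).
n, j = 10, 2
s = block_sum(j, n, 1.0, 0.5)
assert block_sum(j, n, 1.2, 0.5) < s   # decreasing in a
assert block_sum(j, n, 1.0, 0.7) > s   # increasing in b
print("block sums behave as Lemma 5.1 predicts")
```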
\section{Further examples on detection of series convergence and divergence using our Bayesian method}
\subsection{Example 5}
\label{subsec:example5}
Now consider the following series presented and analysed in \ctn{Bou12}:
\begin{equation}
S=\sum_{i=3}^{\infty}\left(1-\left(\frac{\log(i)}{i}\right)
\left(a\left(1+\sin^2\left(\sqrt{\left(\frac{\log\left(\log(i)\right)}{\log(i)}\right)}\right)\right)
+b\sin\left(\frac{i\pi}{4}\right)\right)\right)^i;~a>0,b>0.
\label{eq:example5}
\end{equation}
\ctn{Bou12} show that the series converges when $a-b>1$ and diverges when $a+b<1$.
Again, as in Example 4, the following lemma holds for Example 5.
Note that for mathematical convenience we consider partial sums from the fifth term onwards.
We also assume $n$ to be a multiple of $4$.
\begin{lemma}
\label{lemma:example5}
For the series (\ref{eq:example5}),
let
\begin{equation}
S^{a,b}_{j,n}=\sum_{i=5+n(j-1)}^{5+nj-1}\left(1-\left(\frac{\log(i)}{i}\right)
\left(a\left(1+\sin^2\left(\sqrt{\left(\frac{\log\left(\log(i)\right)}{\log(i)}\right)}\right)\right)
+b\sin\left(\frac{i\pi}{4}\right)\right)\right)^i,
\label{eq:S_example5}
\end{equation}
for $j\geq 1$ and $n$, a multiple of $4$. Then $S^{a,b}_{j,n}$ is decreasing in $a$ and increasing in $b$.
\end{lemma}
\begin{proof}
That $S^{a,b}_{j,n}$ is decreasing in $a$ follows trivially since
each term of (\ref{eq:example5}) is decreasing in $a$. We need to show that $S^{a,b}_{j,n}$ is increasing in $b$.
Let, for $i\geq 5$,
\begin{equation}
g(i)=\left(1-\left(\frac{\log(i)}{i}\right)
\left(a\left(1+\sin^2\left(\sqrt{\left(\frac{\log\left(\log(i)\right)}{\log(i)}\right)}\right)\right)
+b\sin\left(\frac{i\pi}{4}\right)\right)\right)^i.
\label{eq:g_example5}
\end{equation}
Now note that, with $r=5+n(j-1)$,
\begin{align}
\sum_{i=r}^{r+n-1}g(i)&=\sum_{m=1}^{\frac{n}{4}}Z_{r,m}\notag\\
&=\left\{Z_{r,1}+Z_{r,2}\right\}+\left\{Z_{r,3}+Z_{r,4}\right\}+
\cdots+\left\{Z_{r,\frac{n}{4}-1}+Z_{r,\frac{n}{4}}\right\},
\label{eq:sum_example5}
\end{align}
where
\begin{equation}
Z_{r,m}=\sum_{\ell=5+4(m-1)}^{5+4(m-1)+3}g(r+\ell).
\label{eq:Z_example5}
\end{equation}
Now, for any $\ell\geq 1$, observe that in $\left\{Z_{r,\ell}+Z_{r,\ell+1}\right\}$, the term
$Z_{r,\ell}$ consists of only negative signs of the sine-values, while in $Z_{r,\ell+1}$ the
corresponding signs are positive, although the magnitudes are the same. Since $\log(i)/i$ is decreasing in $i$,
it follows that $\left\{Z_{r,\ell}+Z_{r,\ell+1}\right\}$ is increasing in $b$ for $\ell\geq 1$. Hence,
it follows that (\ref{eq:sum_example5}), and $S^{a,b}_{j,n}$, defined by (\ref{eq:S_example5}),
are increasing in $b$ for $j\geq 1$ and $n$, a multiple of $4$, proving Lemma \ref{lemma:example5}.
\end{proof}
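As with Lemma 5.1, the assertions of Lemma \ref{lemma:example5} admit a quick numerical spot-check. The Python sketch below is illustrative only; the block size $n=8$ (a multiple of $4$, as the lemma requires) and the values of $a$ and $b$ are arbitrary choices.

```python
import math

def g5(i, a, b):
    # A single term of the Example 5 series, valid for i >= 5.
    li = math.log(i)
    inner = a * (1.0 + math.sin(math.sqrt(math.log(li) / li)) ** 2) \
            + b * math.sin(i * math.pi / 4.0)
    return (1.0 - (li / i) * inner) ** i

def block_sum5(j, n, a, b):
    # S^{a,b}_{j,n}: n consecutive terms starting at r = 5 + n(j - 1).
    r = 5 + n * (j - 1)
    return sum(g5(i, a, b) for i in range(r, r + n))

# n must be a multiple of 4 for the lemma; a and b are chosen arbitrarily.
s = block_sum5(1, 8, 1.5, 0.2)
assert block_sum5(1, 8, 1.7, 0.2) < s   # decreasing in a
assert block_sum5(1, 8, 1.5, 0.4) > s   # increasing in b
print("block sums behave as the lemma predicts")
```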
The following corollary with respect to $S^{a,b}$ again holds:
\begin{corollary}
\label{corollary:example5}
$S^{a,b}$ is decreasing in $a$ and increasing in $b$.
\end{corollary}
Thus, we follow the same method as in Example 4 to determine $c^{a,b}_{j,n}$, noting
that in this example $a>0$ and $b>0$, instead of $a\geq 0$ and $b\geq 0$ as in Example 4. Consequently, here we require
$b\geq\epsilon$, for $\epsilon>0$, define the set $A_{\epsilon}$ by
$ A_{\epsilon}=\left\{a:0\leq a\leq 1\right\}\cup\left\{a:a\geq 1+\epsilon\right\}$,
and set
\begin{equation}
\tilde S=\underset{a\in A_{\epsilon}}{\inf}\underset{b\geq \epsilon}{\sup}~\left\{S^{a,b}:a-b>1\right\}.
\label{eq:tilde_S2}
\end{equation}
In this case, Corollary \ref{corollary:example5} and the convergence
criterion $a-b>1$ ensure that $\tilde S$ is attained at $a_0=1+\epsilon$ and $b_0=\epsilon$.
As before, we set $\epsilon=10^{-10}$.
The rest of the arguments leading to the choice of $c^{a,b}_{j,n}$ remains the same as in Example 4,
and hence in this example $c^{a,b}_{j,n}$ has the form
\begin{equation}
c^{a,b}_{j,n}=\left\{\begin{array}{ccc} u^{a,b}_{j,n}, &\mbox{if}~u^{a,b}_{j,n}>0;\\
S^{a_0,b_0}_{j,n}, & \mbox{otherwise},\end{array}\right.
\label{eq:example4_c_supp}
\end{equation}
with $a_0=1+10^{-10}$, $b_0=10^{-10}$, where $S^{a_0,b_0}_{j,n}$ is decreasing in $j$ as before.
Figure \ref{fig:example5} depicts the results of our Bayesian analysis of the series (\ref{eq:example5}) for
various values of $a$ and $b$. All the results are in accordance with those of \ctn{Bou12}.
\begin{figure}
\centering
\subfigure [Convergence: $a=2,b=1$.]{ \label{fig:example5_a_2_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example5_a_2_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+20^{-10},b=10^{-10}$.]{ \label{fig:example5_a12_b01}
\includegraphics[width=6cm,height=5cm]{figures/example5_a12_b01-crop.pdf}}\\
\hspace{2mm}
\subfigure [Convergence: $a=1+30^{-10},b=20^{-10}$.]{ \label{fig:example5_a13_b02}
\includegraphics[width=6cm,height=5cm]{figures/example5_a13_b02-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1/2,b=1/2$.]{ \label{fig:example5_a_1_2_b_1_2}
\includegraphics[width=6cm,height=5cm]{figures/example5_a_1_2_b_1_2-crop.pdf}}\\
\subfigure [Divergence: $a=\frac{1}{2}\left(1-10^{-11}\right),
b=\frac{1}{2}\left(1-10^{-11}\right)$.]{ \label{fig:example5_a+b_less_1}
\includegraphics[width=6cm,height=5cm]{figures/example5_a+b_less_1-crop.pdf}}
\caption{Example 5: The series (\ref{eq:example5}) converges for $(a=2,b=1)$, $(a=1+20^{-10},b=10^{-10})$,
$(a=1+30^{-10},b=20^{-10})$ and diverges for $(a=1/2,b=1/2)$ and
$\left(a=\frac{1}{2}\left(1-10^{-11}\right),b=\frac{1}{2}\left(1-10^{-11}\right)\right)$.}
\label{fig:example5}
\end{figure}
\subsection{Example 6}
\label{subsec:example6}
We now investigate whether or not the following series converges:
\begin{equation}
S=\sum_{i=1}^{\infty}\frac{1}{i^3|\sin i|}.
\label{eq:example6_S}
\end{equation}
This series is a special case of the generalized form of the Flint Hills series (see
\ctn{Pickover02} and \ctn{Alek11}).
For our purpose, we first embed the above series into
\begin{equation}
S^{a,b}=\sum_{i=1}^{\infty}\frac{i^{b-3}}{a+|\sin i|},
\label{eq:example6}
\end{equation}
where $b\in\mathbb R$ and $|a|\leq\eta$, for some $\eta>0$, specified according to our
purpose. Note that $S=S^{0,0}$, and we set $\eta=10^{-10}$ for our investigation of (\ref{eq:example6_S}).
Note that for any fixed $a\neq 0$, $S^{a,b}$ converges if $b<2$ and diverges if $b\geq 2$.
Since $S^{a,b}$ is increasing in $b$,
it follows that the supremum in
\begin{equation}
\tilde S=\sup\left\{S^{a,b}:a=\epsilon,~b\leq 2-\epsilon \right\}
\label{eq:sup_A_example6}
\end{equation}
is attained at $(a_0,b_0)=(\epsilon,2-\epsilon)$.
Arguments in keeping with those in the previous examples
lead to the following choice of the
upper bound for $S^{a,b}_{j,n}$, which we again denote by $c^{a,b}_{j,n}$:
\begin{equation}
c^{a,b}_{j,n}=\left\{\begin{array}{ccc} u^{a,b}_{j,n}, &\mbox{if}~b<2;\\
v^{a,b}_{j,n}, & \mbox{otherwise},\end{array}\right.
\label{eq:example6_c}
\end{equation}
where
\begin{align}
u^{a,b}_{j,n}&=S^{a_0,b_0}_{j,n}+\frac{(|a|-b+2-2\epsilon+10^{-5})}{\log(j+1)};
\label{eq:u_example6}\\
v^{a,b}_{j,n}&=S^{a_0,b_0}_{j,n}+\frac{(|a|-b+2-2\epsilon-10^{-5})}{\log(j+1)}.
\label{eq:v_example6}
\end{align}
It can be easily verified that the upper bound is decreasing in $j$.
Notice that we add the term $10^{-5}$ when $b<2$ so that our Bayesian method favours convergence and subtract
the same when $b\geq 2$ to facilitate detection of divergence. Since convergence or divergence of $S^{a,b}$
does not depend upon $a\in [-\eta,\eta]\setminus\left\{0\right\}$,
we use $|a|$ in (\ref{eq:u_example6}) and (\ref{eq:v_example6}).
Setting $\epsilon=10^{-10}$, Figures \ref{fig:example6a} and \ref{fig:example6b}
depict convergence and divergence of $S^{a,b}$ for various
values of $a$ and $b$. In particular, panel (e) of Figure \ref{fig:example6b}
shows that our main interest, the series $S$, given by (\ref{eq:example6_S}), converges.
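For readers who wish to inspect the raw behaviour of the embedded series, the block sums $S^{a,b}_{j,n}$ of (\ref{eq:example6}) are straightforward to compute. The sketch below is an informal illustration only (the block size and number of blocks are arbitrary choices); it evaluates successive blocks of the target series $S=S^{0,0}$, whose rapid shrinkage is consistent with, though of course no proof of, the convergence indicated by our Bayesian analysis.

```python
import math

def flint_block(j, n, a=0.0, b=0.0):
    # S^{a,b}_{j,n}: block of n consecutive terms of sum_i i^(b-3)/(a + |sin i|).
    start = n * (j - 1) + 1
    return sum(i ** (b - 3.0) / (a + abs(math.sin(i)))
               for i in range(start, start + n))

# Successive blocks of the target Flint Hills-type series S = S^{0,0}.
blocks = [flint_block(j, 1000) for j in range(1, 6)]
print(blocks)  # the first block dominates; later blocks are tiny
```

Note that integers exceptionally close to multiples of $\pi$ (e.g. $i=355$) inflate individual terms, which is precisely why the convergence question is delicate.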
\begin{figure}
\centering
\subfigure [Convergence: $a=-10^{-10},b=2-10^{-10}$.]{ \label{fig:example6_a_minus_e_b_2_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_2_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=-10^{-10},b=2+10^{-10}$.]{ \label{fig:example6_a_minus_e_b_2_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_2_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=10^{-10},b=2-10^{-10}$.]{ \label{fig:example6_a_plus_e_b_2_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_2_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=10^{-10},b=2+10^{-10}$.]{ \label{fig:example6_a_plus_e_b_2_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_2_plus_e-crop.pdf}}
\caption{Example 6: The series (\ref{eq:example6}) converges for $(a=-10^{-10},b=2-10^{-10})$,
$(a=10^{-10},b=2-10^{-10})$, and diverges for $(a=-10^{-10},b=2+10^{-10})$,
$(a=10^{-10},b=2+10^{-10})$.}
\label{fig:example6a}
\end{figure}
\begin{figure}
\centering
\subfigure [Convergence: $a=-10^{-10},b=-10^{-10}$.]{ \label{fig:example6_a_minus_e_b_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=-10^{-10},b=10^{-10}$.]{ \label{fig:example6_a_minus_e_b_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_minus_e_b_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=10^{-10},b=-10^{-10}$.]{ \label{fig:example6_a_plus_e_b_minus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_minus_e-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=10^{-10},b=10^{-10}$.]{ \label{fig:example6_a_plus_e_b_plus_e}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_plus_e_b_plus_e-crop.pdf}}\\
\subfigure [Convergence: $a=0,b=0$.]{ \label{fig:example6_a_0_b_0}
\includegraphics[width=6cm,height=5cm]{figures/plots_flint/example6_a_0_b_0-crop.pdf}}
\caption{Example 6: The series (\ref{eq:example6}) converges for $(a=-10^{-10},b=-10^{-10})$,
$(a=-10^{-10},b=10^{-10})$, $(a=10^{-10},b=-10^{-10})$, $(a=10^{-10},b=10^{-10})$, and
$(a=0,b=0)$.}
\label{fig:example6b}
\end{figure}
\subsection{Example 7}
\label{subsec:example7}
We now consider
\begin{equation}
S=\sum_{i=1}^{\infty}\frac{|\sin~i|^i}{i}.
\label{eq:example7}
\end{equation}
We embed this series into
\begin{equation}
S^{a,b}=\sum_{i=1}^{\infty}\frac{|\sin~a\pi i|^i}{i^b},
\label{eq:example7_embedding}
\end{equation}
where $a\in\mathbb R$ and $b\geq 1$.
The above series converges if $b>1$, for all $a\in\mathbb R$.
But for $b=1$, it is easy to see that
the series diverges if $a=\ell/(2m)$, where
$\ell$ and $m$ are odd integers.
Letting $a_0=\pi^{-1}$ and $b_0=1+\epsilon$, with $\epsilon=10^{-10}$,
we set the following upper bound that is decreasing in $j$:
\begin{equation}
c^{a,b}_{j,n}=S^{a_0,b_0}_{j,n}+\frac{\epsilon}{j}.
\label{eq:example7_upper_bound}
\end{equation}
Thus, $c^{a,b}_{j,n}$ corresponds to a convergent series which is also sufficiently close
to divergence. Addition of the term $\frac{\epsilon}{j}$ provides further protection from
erroneous conclusions regarding divergence.
Panel (a) of Figure \ref{fig:example7} demonstrates that the series of our interest, given by
(\ref{eq:example7}), diverges. Panel (b) confirms that for $a=5/(2\times 7)$ and $b=1$, the series
indeed diverges, as it should.
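The divergence mechanism for $a=\ell/(2m)$ with $\ell$ and $m$ odd is easy to exhibit numerically: at odd multiples of $m$ we have $|\sin(a\pi i)|=1$, so the corresponding terms of (\ref{eq:example7_embedding}) with $b=1$ reduce exactly to $1/i$, a harmonic-type subsequence. A minimal Python sketch of this check (illustrative only, with $\ell=5$, $m=7$):

```python
import math

def term(i, a, b):
    # A single term |sin(a*pi*i)|^i / i^b of the embedded Example 7 series.
    return abs(math.sin(a * math.pi * i)) ** i / i ** b

# a = l/(2m) with l = 5, m = 7: odd multiples of m give |sin(a*pi*i)| = 1,
# so those terms equal 1/i, forcing divergence when b = 1.
a = 5.0 / (2.0 * 7.0)
for k in range(5):
    i = 7 * (2 * k + 1)       # i = 7, 21, 35, 49, 63
    assert abs(term(i, a, 1.0) - 1.0 / i) < 1e-12
print("terms along i = m(2k+1) match 1/i")
```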
\begin{figure}
\centering
\subfigure [Divergence: $a=\pi^{-1},b=1$.]{ \label{fig:example7_a_pi_inv_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example7_a_pi_inv_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=5/(2\times 7),b=1$.]{ \label{fig:example7_a_odd1_b_1}
\includegraphics[width=6cm,height=5cm]{figures/example7_a_odd1_b_1-crop.pdf}}
\caption{Example 7: The series (\ref{eq:example7_embedding}) diverges for $(a=\pi^{-1},b=1)$ and
$(a=5/(2\times 7),b=1)$.}
\label{fig:example7}
\end{figure}
\section{Oscillatory series with multiple limit points}
\label{sec:osc_mult}
In this section we assume that the sequence $\left\{S_{1,n}\right\}_{n=1}^{\infty}$ has multiple
limit points, including the possibility that the number of limit points is countably infinite.
\subsection{Finite number of limit points}
\label{eq:finite_limit_points}
Let us assume that there are $M~(>1)$ limit points of the sequence $\left\{S_{1,n}\right\}_{n=1}^{\infty}$.
Then there exist sequences
$\{c_{m,j}\}_{j=1}^{\infty}$; $m=0,\ldots,M$, such that
$\left\{(c_{m-1,j},c_{m,j}];~m=1,\ldots,M\right\}$
partition the real line $\mathbb R$
for every $j\geq 1$ and that there exists $j_0\geq 1$ such that for all $j\geq j_0$,
the interval $(c_{m-1,j},c_{m,j}]$ contains at most one limit point of the sequence $\left\{S_{1,n}\right\}_{n=1}^{\infty}$,
for every $m=1,\ldots,M$. With these sequences we define
\begin{equation}
Y_{j}=m~~\mbox{if}~~c_{m-1,j}<S_{1,j}\leq c_{m,j};~m=1,2,\ldots,M.
\label{eq:y_finite}
\end{equation}
Recall that in Section 4 of our main manuscript
Recall that in Section 4 of our main manuscript
we allowed the sequence $\{c_j\}_{j=1}^{\infty}$
to depend upon the underlying series $S_{1,\infty}$.
Likewise, here also we allow the quantities $c_{0,j},c_{1,j},\ldots,c_{M,j}$ to depend upon
$S_{1,\infty}$. In other words,
for $\omega\in\mathfrak S$, for $m=0,1,2,\ldots,M$, and $j=1,2,3,\ldots$,
$c_{m,j}=c_{m,j}(\omega)$ corresponds to $S_{1,\infty}(\omega)$.
Note that unlike our approach for non-oscillating series, here we do not consider blocks
of partial sums,
$S_{j,n_j}=\sum_{i=\sum_{k=0}^{j-1}n_k+1}^{\sum_{k=0}^jn_k}X_i$,
but the cumulative sums $S_{1,j}=\sum_{i=1}^jX_i$. In other words,
for Bayesian analysis of non-oscillating series we compute sums of $n_j$ terms at each iteration,
whereas for oscillating series
we keep adding a single term at every iteration. Thus, computationally, the latter is much simpler.
We assume that
\begin{equation}
\left(\mathbb I(Y_{j}=1),\ldots,\mathbb I(Y_{j}=M)\right)
\sim Multinomial\left(1,p_{1,j},\ldots,p_{M,j}\right),
\label{eq:multinomial}
\end{equation}
where $p_{m,j}$ can be interpreted as the probability that $S_{1,j}\in (c_{m-1,j},c_{m,j}]$.
As $j\rightarrow\infty$ it is expected that $c_{m-1,j}$ and $c_{m,j}$ will converge to appropriate constants
depending upon $m$, and that $p_{m,j}$ will tend to the correct proportion of the limit point
indexed by $m$. Indeed, let $\left\{p_{m,0};~m=1,\ldots,M\right\}$ denote the actual proportions
of the limit points indexed by $\left\{1,\ldots,M\right\}$, as $j\rightarrow\infty$.
Following the same principle discussed in Section 3 of our main manuscript,
and extending the Beta prior to the Dirichlet prior, at the $k$-th stage we arrive at the
following posterior of $\left\{p_{m,k}:m=1,\ldots,M\right\}$:
\begin{equation}
\pi\left(p_{1,k},\ldots,p_{M,k}|y_{k}\right)
\equiv Dirichlet\left(\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=1\right),\ldots,
\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=M\right)\right).
\label{eq:posterior_dirichlet}
\end{equation}
The posterior mean and posterior variance of $p_{m,k}$, for $m=1,\ldots,M$, are given by:
\begin{align}
E\left(p_{m,k}|y_{k}\right)&=
\frac{\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)}
{M\sum_{j=1}^k\frac{1}{j^2}+k}
\label{eq:mean_dirichlet}\\
Var\left(p_{m,k}|y_{k}\right)&=
\frac{\left(\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)\right)
\left((M-1)\sum_{j=1}^k\frac{1}{j^2}+k-\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)\right)}
{\left(M\sum_{j=1}^k\frac{1}{j^2}+k\right)^2\left(M\sum_{j=1}^k\frac{1}{j^2}+k+1\right)}.
\label{eq:var_dirichlet}
\end{align}
Let $k=M\tilde k$, where $\tilde k\rightarrow\infty$. Then,
from (\ref{eq:mean_dirichlet}) and (\ref{eq:var_dirichlet}) it is easily seen, using
$\frac{\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)}{k}\rightarrow p_{m,0}$ almost surely as $k\rightarrow\infty$,
that almost surely,
\begin{align}
E\left(p_{m,k}|y_{k}\right)&\rightarrow p_{m,0},~~\mbox{and}
\label{eq:mean_dirichlet_convergence}\\
Var\left(p_{m,k}|y_{k}\right)&= O\left(\frac{1}{k}\right)\rightarrow 0,
\label{eq:var_dirichlet_convergence}
\end{align}
as $k\rightarrow\infty$.
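The posterior updating in (\ref{eq:posterior_dirichlet})--(\ref{eq:var_dirichlet}) is straightforward to implement. The following Python sketch is a toy illustration with a hypothetical oscillating sequence, not one of the series analysed in this paper: it labels the partial sums of $X_i=(-1)^i$, which oscillate between the two limit points $-1$ and $0$ with equal proportions, and confirms that the posterior means approach $p_{1,0}=p_{2,0}=1/2$ while the posterior variances shrink as $O(1/k)$.

```python
def dirichlet_posterior(y, M):
    """Posterior means and variances of p_{m,k} after observing labels y_1..y_k."""
    k = len(y)
    prior = sum(1.0 / j ** 2 for j in range(1, k + 1))   # prior mass sum_j 1/j^2
    total = M * prior + k                                # Dirichlet total alpha_0
    means, variances = [], []
    for m in range(1, M + 1):
        alpha_m = prior + sum(1 for yj in y if yj == m)
        means.append(alpha_m / total)
        variances.append(alpha_m * (total - alpha_m) / (total ** 2 * (total + 1)))
    return means, variances

# Toy oscillating series: X_i = (-1)^i, so S_{1,j} alternates between -1 and 0.
# Label Y_j = 1 if S_{1,j} <= -1/2 and Y_j = 2 otherwise.
s, labels = 0.0, []
for i in range(1, 2001):
    s += (-1) ** i
    labels.append(1 if s <= -0.5 else 2)

means, variances = dirichlet_posterior(labels, 2)
print(means)       # both equal to 0.5
print(variances)   # both O(1/k), i.e. small
```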
We can now characterize the $M$ limit points of
$S_{1,\infty}$ in terms of the limits of the marginal posterior
probabilities of $p_{m,k}$, denoted by $\pi_m\left(\cdot|y_k\right)$,
as $k\rightarrow\infty$.
\begin{theorem}
\label{theorem:finite_limit_points}
$\left\{S_{1,n}\right\}_{n=1}^{\infty}$ has $M~(>1)$ limit points almost surely if and only if
for every $\omega\in\mathfrak S\cap \mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
\begin{itemize}
\item[(1)] There exist sequences $\{c_{m,j}(\omega)\}_{j=1}^{\infty}$; $m=0,\ldots,M$, such that
$(c_{m-1,j}(\omega),c_{m,j}(\omega)]$ partition the real line $\mathbb R$ for every $j\geq 1$ and $m=1,\ldots,M$.
\item[(2)] There exists $j_0(\omega)\geq 1$ such that for all $j\geq j_0(\omega)$, for $m=1,\ldots,M$,
$(c_{m-1,j}(\omega),c_{m,j}(\omega)]$ contains at most one
limit point of $\{S_{1,n}(\omega)\}_{n=1}^{\infty}$.
\item[(3)] With $Y_j$ defined as in (\ref{eq:y_finite}),
\begin{equation}
\pi_m\left(\mathcal N_{p_{m,0}}|y_{k}(\omega)\right)\rightarrow 1,
\label{eq:consistency_at_p_m_0}
\end{equation}
as $k\rightarrow\infty$.
In the above, $\mathcal N_{p_{m,0}}$ is any neighborhood of $p_{m,0}$, with $p_{m,0}$ satisfying
$0<p_{m,0}<1$ for $m=1,\ldots,M$ such that $\sum_{m=1}^Mp_{m,0}=1$.
\end{itemize}
\end{theorem}
\begin{proof}
For $\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
let $S_{1,\infty}(\omega)$ be oscillatory with $M$ limit points having proportions
$\left\{p_{m,0};~m=1,\ldots,M\right\}$. Conditions (1) and (2) then clearly hold.
Then with our definition of $Y_j$ provided in (\ref{eq:y_finite}),
the results (\ref{eq:mean_dirichlet_convergence}) and (\ref{eq:var_dirichlet_convergence}) hold
with $k=M\tilde k$, where $\tilde k\rightarrow\infty$.
Now let $\mathcal N_{p_{m,0}}$ be any neighborhood of $p_{m,0}$.
Let $\epsilon>0$ be sufficiently small so that
$\mathcal N_{p_{m,0}}\supseteq\left\{|p_{m,k}-p_{m,0}|<\epsilon\right\}$. Then by Chebyshev's inequality,
using (\ref{eq:mean_dirichlet_convergence}) and (\ref{eq:var_dirichlet_convergence}), it is seen that
$\pi_m\left(\mathcal N_{p_{m,0}}|y_k(\omega)\right)\rightarrow 1$, as $k\rightarrow\infty$.
Thus, (\ref{eq:consistency_at_p_m_0}) holds. In fact, more generally, condition (3) holds.
Now assume that conditions (1), (2), (3) hold.
Then $\pi_m\left(|p_{m,k}-p_{m,0}|<\epsilon|y_k(\omega)\right)\rightarrow 1$, as $k\rightarrow\infty$.
Combining this with Chebyshev's inequality it follows that
(\ref{eq:mean_dirichlet_convergence}) and (\ref{eq:var_dirichlet_convergence}) hold with
$0<p_{m,0}<1$ for $m=1,\ldots,M$ such that $\sum_{m=1}^Mp_{m,0}=1$. If $\left\{S_{1,n}(\omega)\right\}_{n=1}^{\infty}$ has
fewer than $M$ limit points, then at least one $p_{m,0}=0$, providing a contradiction.
Hence $\left\{S_{1,n}(\omega)\right\}_{n=1}^{\infty}$ must have $M$ limit points.
\end{proof}
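The Chebyshev step used in the proof above can be made explicit. Once $k$ is large enough that $\left|E\left(p_{m,k}|y_k\right)-p_{m,0}\right|<\epsilon/2$, which holds by (\ref{eq:mean_dirichlet_convergence}),

```latex
\begin{align*}
\pi_m\left(\left|p_{m,k}-p_{m,0}\right|\geq\epsilon\,\middle|\,y_k\right)
&\leq \pi_m\left(\left|p_{m,k}-E\left(p_{m,k}|y_k\right)\right|\geq\epsilon/2\,\middle|\,y_k\right)\\
&\leq \frac{4\,Var\left(p_{m,k}|y_k\right)}{\epsilon^2}=O\left(\frac{1}{k}\right)\rightarrow 0,
\end{align*}
```

so that $\pi_m\left(\mathcal N_{p_{m,0}}|y_k\right)\rightarrow 1$ as $k\rightarrow\infty$ whenever $\mathcal N_{p_{m,0}}\supseteq\left\{|p_{m,k}-p_{m,0}|<\epsilon\right\}$.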
\subsection{Choice of $c_{0,j},\ldots,c_{M,j}$ for a given series}
\label{subsec:choice_c}
Let us define, for $j=1,2,\ldots,k$,
\begin{align}
\tilde p_{\ell,j}&=\left\{\begin{array}{ccc}
0 & \mbox{if} & \ell=0;\\
E\left(p_{\ell,j}|y_{j}\right) & \mbox{if} & \ell=1,2,\ldots,M.
\end{array}\right.
\label{eq:osc_recursive_postmean1}
\end{align}
We also define, for $\ell=1,2,\ldots,M$,
\begin{equation}
\tilde p_{\ell,0}=E\left(p_{\ell,1}\right),
\label{eq:osc_priormean}
\end{equation}
the prior mean at the first stage, before observing any data.
We then set $c_{0,j}\equiv -\infty$ for all $j=1,2,\ldots,k$, so that the intervals $(c_{m-1,j},c_{m,j}]$ can partition $\mathbb R$, and, for $m\geq 1$, define
\begin{equation}
c_{m,j}=\log\left[\frac{\left(\sum_{\ell=1}^m\tilde p_{\ell,{j-1}}\right)^{1/\rho(\theta)}}
{1-\left(\sum_{\ell=1}^m\tilde p_{\ell,{j-1}}\right)^{1/\rho(\theta)}}\right],
\label{eq:c_finite}
\end{equation}
for $j=1,2,\ldots,k$.
Thus, the inequality $c_{m-1,j}<S_{1,j}\leq c_{m,j}$ in (\ref{eq:y_finite}) is equivalent to
\begin{equation}
\sum_{\ell=1}^{m-1}\tilde p_{\ell,j-1}<\left(\frac{\exp\left(S_{1,j}\right)}{1+\exp\left(S_{1,j}\right)}\right)^{\rho(\theta)}
\leq \sum_{\ell=1}^m\tilde p_{\ell,j-1},
\label{eq:c_finite2}
\end{equation}
where $\rho(\theta)$ is some relevant power depending upon the set of parameters $\theta$ of the given series,
responsible for appropriately
inflating or contracting the quantity $\frac{\exp\left(S_{1,j}\right)}{1+\exp\left(S_{1,j}\right)}$
for properly diagnosing the limit points. Thus, given the series $S_{1,\infty}(\omega)$, $\theta=\theta(\omega)$
is allowed to depend upon the underlying series.
If $\left(\frac{\exp\left(S_{1,j}\right)}{1+\exp\left(S_{1,j}\right)}\right)^{\rho(\theta)}\geq 1$, we set $Y_j=M$.
By (\ref{eq:consistency_at_p_m_0}), for large $k$, $\tilde p_{\ell,k}$ and $S_{1,j}$ adaptively adjust
themselves so that the correct proportions of the limit points are achieved in the long run.
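Operationally, (\ref{eq:c_finite2}) says that $Y_j$ is obtained by locating the transformed partial sum inside the cumulative bins built from the previous stage's posterior means. A minimal sketch (the function name and the fixed cumulative-bin inputs are ours, for illustration only):

```python
import math

def classify_stage(S_j, cum_probs, rho):
    """Return Y_j = m iff u = (e^S/(1+e^S))^rho lies in
    (cum_probs[m-1], cum_probs[m]], where cum_probs[0] = 0 and
    cum_probs[M] = 1 hold the cumulative posterior means from the
    previous stage."""
    u = (1.0 / (1.0 + math.exp(-S_j))) ** rho   # logistic transform, then power
    M = len(cum_probs) - 1
    if u >= 1.0:                                # convention: assign the top label
        return M
    for m in range(1, M + 1):
        if cum_probs[m - 1] < u <= cum_probs[m]:
            return m
    return 1                                    # u <= 0 cannot occur for finite S_j
```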
\subsection{Infinite number of limit points}
\label{subsec:infinite_limit_points}
We now assume that the number of limit points of $\left\{S_{1,n}\right\}_{n=1}^{\infty}$ is countably infinite,
and that $\left\{p_{m,0};m=1,2,3,\ldots\right\}$, where $0\leq p_{m,0}\leq 1$ and $\sum_{m=1}^{\infty}p_{m,0}=1$,
are the true proportions of the limit points.
Now we define
\begin{equation}
Y_{j}=m~~\mbox{if}~~c_{m-1,j}<S_{1,j}\leq c_{m,j};~m=1,2,\ldots,\infty,
\label{eq:y_infinite}
\end{equation}
where the sequences $\left\{c_{m,j}\right\}_{j=1}^{\infty}$; $m\geq 1$, are such that
$(c_{m-1,j},c_{m,j}]$; $m\geq 1$, partition $\mathbb R$ for every $j\geq 1$, and that there
exists $j_0\geq 1$ such that for all $j\geq j_0$, these intervals contain at most one limit point of
$\left\{S_{1,n}\right\}_{n=1}^{\infty}$.
Let $\mathcal X=\left\{1,2,\ldots\right\}$ and let $\mathcal B\left(\mathcal X\right)$ denote the Borel
$\sigma$-field on $\mathcal X$ (assuming every singleton of $\mathcal X$ is an open set). Let $\mathcal P$
denote the set of probability measures on $\mathcal X$. Then, at the $j$-th stage,
\begin{equation}
[Y_j|P_j]\sim P_j,
\label{eq:Y_DP}
\end{equation}
where $P_j\in\mathcal P$. We assume that
$P_j$ is the following Dirichlet process (see \ctn{Ferguson73}):
\begin{equation}
P_j\sim DP\left(\frac{1}{j^2}G\right),
\label{eq:DP}
\end{equation}
where the probability measure $G$ is such that, for every $j\geq 1$,
\begin{equation}
G\left(Y_j=m\right)=\frac{1}{2^m}.
\label{eq:G}
\end{equation}
It then follows using the same previous principles that, at the $k$-th stage,
the posterior of $P_k$ is again a Dirichlet process, given by
\begin{equation}
[P_k|y_k]\sim DP\left(\sum_{j=1}^k\frac{1}{j^2}G+\sum_{j=1}^k\delta_{y_j}\right),
\label{eq:posterior_DP}
\end{equation}
where $\delta_{y_j}$ denotes point mass at $y_j$.
It follows from (\ref{eq:posterior_DP}) that
\begin{align}
E\left(p_{m,k}|y_{k}\right)&=
\frac{\frac{1}{2^m}\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)}
{\sum_{j=1}^k\frac{1}{j^2}+k}
\label{eq:mean_DP}\\
Var\left(p_{m,k}|y_{k}\right)&=
\frac{\left(\sum_{j=1}^k\frac{1}{j^2}+\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)\right)
\left((1-\frac{1}{2^m})\sum_{j=1}^k\frac{1}{j^2}+k-\sum_{j=1}^k\mathbb I\left(y_{j}=m\right)\right)}
{\left(\sum_{j=1}^k\frac{1}{j^2}+k\right)^2\left(\sum_{j=1}^k\frac{1}{j^2}+k+1\right)}.
\label{eq:var_DP}
\end{align}
As before, it easily follows from (\ref{eq:mean_DP}) and (\ref{eq:var_DP}) that
for $m=1,2,3,\ldots$,
\begin{align}
E\left(p_{m,k}|y_{k}\right)&\rightarrow p_{m,0},~~\mbox{and}
\label{eq:mean_DP_convergence}\\
Var\left(p_{m,k}|y_{k}\right)&= O\left(\frac{1}{k}\right)\rightarrow 0,
\label{eq:var_DP_convergence}
\end{align}
almost surely, as $k\rightarrow\infty$.
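The posterior mean (\ref{eq:mean_DP}) is again available in closed form; a small sketch (the function name is ours), with base measure $G(\{m\})=2^{-m}$:

```python
def dp_posterior_mean(y, m):
    """Posterior mean of p_{m,k} under the Dirichlet process prior with
    stage-j concentration 1/j^2 and base measure G({m}) = 2^{-m}."""
    k = len(y)
    base_mass = sum(1.0 / j**2 for j in range(1, k + 1))
    count_m = sum(1 for yj in y if yj == m)
    return (base_mass / 2**m + count_m) / (base_mass + k)
```

With many observations the data term dominates the bounded prior mass $\sum_{j}1/j^2<\pi^2/6$, in keeping with (\ref{eq:mean_DP_convergence}).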
The theorem below characterizes a countably infinite number of limit points of
$S_{1,\infty}$ in terms of the limit of the marginal posterior
probabilities of $p_{m,k}$,
as $k\rightarrow\infty$.
\begin{theorem}
\label{theorem:infinite_limit_points}
$\left\{S_{1,n}\right\}_{n=1}^{\infty}$ has a countably infinite number of limit points almost surely if and only if
for every $\omega\in\mathfrak S\cap \mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
\begin{itemize}
\item[(1)] There exist sequences $\{c_{m,j}(\omega)\}_{j=1}^{\infty}$; $m=0,1,2\ldots$, such that
$(c_{m-1,j}(\omega),c_{m,j}(\omega)]$ partition the real line $\mathbb R$ for every $j\geq 1$ and $m\geq 1$.
\item[(2)] There exists $j_0(\omega)\geq 1$ such that for all $j\geq j_0(\omega)$,
$(c_{m-1,j}(\omega),c_{m,j}(\omega)]$ contains at most one
limit point of $\{S_{1,n}(\omega)\}_{n=1}^{\infty}$, for every $m\geq 1$.
\item[(3)] With $Y_j$ defined as in (\ref{eq:y_infinite}),
\begin{equation}
\pi_m\left(\mathcal N_{p_{m,0}}|y_{k}(\omega)\right)\rightarrow 1,
\label{eq:consistency_at_p_m_0_DP}
\end{equation}
as $k\rightarrow\infty$.
In the above, $\mathcal N_{p_{m,0}}$ is any neighborhood of $p_{m,0}$, with $p_{m,0}$ satisfying
$0\leq p_{m,0}\leq 1$ for $m=1,2,\ldots$ such that $\sum_{m=1}^{\infty}p_{m,0}=1$, with at most finite number
of $m$ such that $p_{m,0}=0$.
\end{itemize}
\end{theorem}
\begin{proof}
Follows using the same ideas as the proof of Theorem \ref{theorem:finite_limit_points}.
\end{proof}
As regards the choice of the quantities $c_{m,j}$, we simply extend the construction detailed
in Section \ref{subsec:choice_c} by only letting $M\rightarrow\infty$, and with obvious replacement
of the posterior means with those associated with the posterior Dirichlet process.
It is useful to remark that our theory with countably infinite number of limit points is readily
applicable to
situations where the number of limit points
is finite but unknown. In such cases, only a finite number of the probabilities $\left\{p_{m,j};~m=1,2,3,\ldots\right\}$
will have posteriors concentrating around positive quantities, while the rest will concentrate around zero.
For known finite number of limit points, it is only required to specify $G$ such that it
gives positive mass to only a specific finite set.
\subsection{Characterization of convergence and divergence with our approach on limit points}
\label{subsec:discussion_limpoints}
Note that for convergent series, $\pi_m\left(\mathcal N_1|y_k\right)\rightarrow 1$ as $k\rightarrow\infty$
for some moderate value of $m$, while for divergent series with $S_{1,\infty}=\infty$
or $S_{1,\infty}=-\infty$, $\pi_m\left(\mathcal N_1|y_k\right)\rightarrow 1$ as $k\rightarrow\infty$
for the largest and the smallest values of $m$, respectively.
We formalize these statements below as the following theorems.
\begin{theorem}
\label{theorem:divergence_limpoints}
Let there be $M$ number of possible limit points of $S_{1,\infty}$, where
$M$ may be infinite.
Then
$S_{1,\infty}=\infty$ almost surely if and only if,
for any
$\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero
probability measure, for any sequences $\{c_{m,j}(\omega)\}_{j=1}^{\infty}$;
$m=1,2,\ldots,M$, such that $(c_{m-1,j}(\omega),c_{m,j}(\omega)]$; $m=1,\ldots,M$, partitions the real line $\mathbb R$
for every $j\geq 1$, it holds that
\begin{equation}
\pi_{m,k}\left(\mathcal N_{1}|y_k(\omega)\right)\rightarrow 1,
\label{eq:post_limpoints1}
\end{equation}
as $k\rightarrow\infty$ and $m\rightarrow M$.
\end{theorem}
\begin{proof}
For $\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
let $S_{1,\infty}(\omega)=\infty$.
Then as $k\rightarrow\infty$,
\begin{equation}
\left(\frac{\exp\left(S_{1,k}(\omega)\right)}{1+\exp\left(S_{1,k}(\omega)\right)}\right)^{\rho(\theta(\omega))}\rightarrow 1.
\label{eq:S_divergence1}
\end{equation}
In other words, for any fixed $M~(>1)$, $y_k(\omega)\rightarrow M$, as $k\rightarrow\infty$. Hence, as
$k\rightarrow\infty$ and $m\rightarrow M$,
it easily follows using the same techniques as before, that (\ref{eq:post_limpoints1}) holds.
Consequently, for infinite number of limit points, (\ref{eq:post_limpoints1}) holds as $m\rightarrow\infty$.
Now assume that (\ref{eq:post_limpoints1}) holds. It then follows from the formula of the posterior
mean that $y_k(\omega)\rightarrow M$, as $k\rightarrow\infty$, for fixed $M$.
Hence, (\ref{eq:S_divergence1}) holds, from which it follows that $S_{1,\infty}(\omega)=\infty$.
\end{proof}
\begin{theorem}
\label{theorem:divergence_limpoints_negative}
Let there be $M$ number of possible limit points of $S_{1,\infty}$, where
$M$ may be infinite.
Then
$S_{1,\infty}=-\infty$ almost surely if and only if
for any
$\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero
probability measure, for any sequences $\{c_{m,j}(\omega)\}_{j=1}^{\infty}$;
$m=1,2,\ldots,M$, such that $(c_{m-1,j}(\omega),c_{m,j}(\omega)]$; $m=1,\ldots,M$, partitions the real line $\mathbb R$
for every $j\geq 1$, it holds that
\begin{equation}
\pi_{m,k}\left(\mathcal N_{1}|y_k(\omega)\right)\rightarrow 1,
\label{eq:post_limpoints1_negative}
\end{equation}
as $k\rightarrow\infty$ and $m\rightarrow 1$.
\end{theorem}
\begin{proof}
For $\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
let $S_{1,\infty}(\omega)=-\infty$.
Then as $k\rightarrow\infty$,
\begin{equation}
\left(\frac{\exp\left(S_{1,k}(\omega)\right)}{1+\exp\left(S_{1,k}(\omega)\right)}\right)^{\rho(\theta(\omega))}\rightarrow 0.
\label{eq:S_divergence1_negative}
\end{equation}
In other words, for any fixed $M~(>1)$, $y_k(\omega)\rightarrow 1$, as $k\rightarrow\infty$. Hence, as
$k\rightarrow\infty$ and $m\rightarrow 1$, it is easily seen that (\ref{eq:post_limpoints1_negative}) holds.
Also, if (\ref{eq:post_limpoints1_negative}) holds, then it follows from the formula of the posterior
mean that $y_k(\omega)\rightarrow 1$, as $k\rightarrow\infty$.
Hence, (\ref{eq:S_divergence1_negative}) holds, from which it follows that $S_{1,\infty}(\omega)=-\infty$.
\end{proof}
\begin{theorem}
\label{theorem:convergence_limpoints}
For all $\omega\in\mathfrak S\cap\mathfrak N^c$, where $\mathfrak N$ has zero probability measure,
$S_{1,\infty}(\omega)$ is convergent if and only if,
for any sequences $\{c_{m,j}(\omega)\}_{j=1}^{\infty}$;
$m=1,2,\ldots,M$, such that $(c_{m-1,j}(\omega),c_{m,j}(\omega)]$; $m=1,\ldots,M$, partitions the real line $\mathbb R$
for every $j\geq 1$, it holds
for some finite $m_0(\omega)\geq 1$, that
\begin{equation}
\pi_{m_0(\omega),k}\left(\mathcal N_{1}|y_k(\omega)\right)\rightarrow 1,
\label{eq:post_limpoints2}
\end{equation}
as $k\rightarrow\infty$.
\end{theorem}
\begin{proof}
Let $S_{1,\infty}(\omega)$ be convergent.
Then as $k\rightarrow\infty$,
\begin{equation}
\left(\frac{\exp\left(S_{1,k}(\omega)\right)}{1+\exp\left(S_{1,k}(\omega)\right)}\right)^{\rho(\theta(\omega))}\rightarrow
c(\omega),
\label{eq:S_convergence1}
\end{equation}
for some constant $0\leq c(\omega)<1$. Hence, there exists some finite $m_0(\omega)\geq 1$ such that
$y_k(\omega)\rightarrow m_0(\omega)$, as $k\rightarrow\infty$.
Using the same techniques as before, it is seen that (\ref{eq:post_limpoints2}) holds.
Now assume that (\ref{eq:post_limpoints2}) holds. It then follows from the formula of the posterior
mean, that $y_k(\omega)\rightarrow m_0(\omega)$, as $k\rightarrow\infty$.
Hence, (\ref{eq:S_convergence1}) holds, from which it follows that $S_{1,\infty}(\omega)$ is convergent.
\end{proof}
According to Theorems \ref{theorem:divergence_limpoints_negative} and \ref{theorem:convergence_limpoints},
$m$ tends to $1$ or to a finite quantity greater than or equal to $1$, according as the series diverges
to $-\infty$ or converges. If the finite quantity in the latter case turns out to be $1$, then it is not
possible to distinguish between convergence and divergence to $-\infty$ by this method. However,
Theorem 4.1 of our main manuscript
can be usefully exploited in this case. If this method based on oscillating
series yields $m=1$, then we suggest checking for convergence using Theorem 4.1,
which would then help us confirm if the series is truly convergent.
\subsection{A rule of thumb for diagnosis of convergence, divergence and oscillations}
\label{subsec:thumb_rule}
Based on the above theorems we propose the following rule of thumb for detecting convergence and divergence
when $M$ is finite:
if $\frac{m}{M}> 0.9$ such that $\pi_{m,k}\left(\mathcal N_{1}|y_k\right)\rightarrow 1$ as $m\rightarrow M$
and $k\rightarrow\infty$, then declare the series as divergent to $\infty$. If
$0.1<\frac{m}{M}\leq 0.9$ such that $\pi_{m,k}\left(\mathcal N_{1}|y_k\right)\rightarrow 1$, then declare the
series as convergent. On the other hand, if $\frac{m}{M}\leq 0.1$, use Theorem 4.1
to check for convergence; in the case of negative result, declare the series as divergent to $-\infty$.
If, instead, there exist $m_{\ell};~\ell=1,\ldots,L$ ($L>1$) such that
$\pi_{m_{\ell},k}\left(\mathcal N_{p_{m_{\ell},0}}|y_k\right)\rightarrow 1$ as
$k\rightarrow\infty$, where $0<p_{m_{\ell},0}<1$ for $\ell=1,\ldots,L$ and $\sum_{\ell=1}^Lp_{m_{\ell},0}=1$, then
say that the sequence $\left\{S_{1,n}\right\}_{n=1}^{\infty}$ has $L$ limit points.
Note that the value of $\frac{m}{M}$ is not important in this situation.
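For concreteness, the rule of thumb can be written as a small decision procedure. This is a sketch only: the function name and the $0.05$ cutoff used to call a posterior mass "positive" are our own illustrative choices, not part of the manuscript.

```python
def thumb_rule(limit_mass, M):
    """Diagnose a series from the limiting posterior masses.
    `limit_mass` maps each label m to the limiting posterior mass that
    concentrates near a positive proportion for that label."""
    heavy = {m: p for m, p in limit_mass.items() if p > 0.05}
    if len(heavy) > 1:
        return "oscillatory with %d limit points" % len(heavy)
    (m,) = heavy  # the single label carrying essentially all the mass
    if m / M > 0.9:
        return "divergent to +infinity"
    if m / M > 0.1:
        return "convergent"
    return "check Theorem 4.1; if not convergent, divergent to -infinity"
```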
\section{Illustration of our Bayesian theory on oscillation}
\label{sec:osc_examples}
We first consider a simple oscillatory series to illustrate our Bayesian idea on detection of limit points
(Section \ref{subsec:osc_series1}).
Next, in Section \ref{subsec:Bayesian_limpoints}, we illustrate our theory on limit points with Example 5,
arguably the most complex series in our
set of examples (other than Riemann Hypothesis) and in Section \ref{sec:Bayesian_limpoints_RH},
validate our result on Riemann Hypothesis with our Bayesian limit point theory.
\subsection{Illustration with a simple oscillatory series}
\label{subsec:osc_series1}
Let us re-consider the series $S_{1,\infty}=\sum_{i=1}^{\infty}\left(-1\right)^{i-1}$,
which we already introduced after Theorem 4.2 of our main manuscript.
We consider the theory based on Dirichlet process
developed in Section \ref{subsec:infinite_limit_points}, assuming for the sake
of illustration that $G$ is concentrated on $M$ values, with $G\left(Y_j=m\right)=\frac{1}{M}$;
$m=1,2,\ldots,M$. We set $M=10$ and $K=10^5$ for our experiments.
With $\rho(\theta)=2$, the results are depicted
in Figure \ref{fig:osc_series1}. Two explicit limit points, with proportions $0.5$ each, are
correctly recognized. The limit points are obviously $0$ and $1$ for this example.
Implementation takes just a fraction of a second, even on an ordinary 32-bit laptop.
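The mechanics of this example can be mimicked in a few lines of code. The sketch below uses fixed equal-width bins in place of the adaptive $c_{m,j}$ (a simplification we introduce for illustration), so the two limit points $0$ and $1$ of the partial sums land in two distinct bins with empirical proportion $0.5$ each; with the adaptive construction the mass appears in the posteriors of $p_{5,k}$ and $p_{6,k}$ instead, as in Figure \ref{fig:osc_series1}.

```python
import math

def label_proportions(n, M=10, rho=2.0):
    """Partial sums of sum_i (-1)^{i-1} alternate between 1 and 0; each is
    transformed by u = (e^S/(1+e^S))^rho and assigned to one of M fixed
    equal-width bins. Returns the empirical proportion of each label."""
    S, counts = 0.0, [0] * (M + 1)
    for i in range(1, n + 1):
        S += (-1) ** (i - 1)
        u = (1.0 / (1.0 + math.exp(-S))) ** rho
        m = max(1, min(M, math.ceil(u * M)))   # bin index in {1,...,M}
        counts[m] += 1
    return [c / n for c in counts]
```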
\begin{figure}
\centering
\subfigure [First limit point: The posterior of $p_{5,k}$ converges to $0.5$ as $k\rightarrow\infty$]
{\label{fig:series1_1}
\includegraphics[width=6cm,height=5cm]{figures/OSC_DP_plots/series1_1-crop.pdf}}
\hspace{2mm}
\subfigure [Second limit point: The posterior of $p_{6,k}$ converges to $0.5$ as $k\rightarrow\infty$.]
{ \label{fig:series1_2}
\includegraphics[width=6cm,height=5cm]{figures/OSC_DP_plots/series1_2-crop.pdf}}
\caption{Illustration of the Dirichlet process based theory on the first oscillating series:
two limit points, each with proportion $0.5$, are captured.}
\label{fig:osc_series1}
\end{figure}
\subsection{Illustration of the Bayesian limit point theory with Example 5}
\label{subsec:Bayesian_limpoints}
Since there is at most one limit point in the cases that we investigated, application
of our ideas to these cases must be able to re-confirm this.
As before we consider the theory based on Dirichlet process
with $G\left(Y_j=m\right)=\frac{1}{M}$;
$m=1,2,\ldots,M$, where we set $M=10$.
Thus, by our rule of thumb, divergence is to be declared only if
$\pi_{m=10,k}\left(\mathcal N_{1}|y_k\right)\rightarrow 1$, as $k\rightarrow\infty$.
As regards implementation, notice that here there is no scope for parallelization since at the $j$-th step
only the $j$-th summand is added to the existing partial sum $S_{1,j-1}$ to form $S_{1,j}$. As such, on our VMware,
using a single processor, only about two seconds are required for $10^5$ iterations associated with the series
(\ref{eq:example5}), for various values of $a~(>0)$ and $b~(>0)$.
\subsubsection{Choice of $\rho(\theta)$ in
$\left(\frac{\exp\left(S_{1,k}\right)}{1+\exp\left(S_{1,k}\right)}\right)^{\rho(\theta)}$}
\label{subsubsec:rho_choice}
In our example, $\theta=(a,b)$. We choose
\begin{equation}
\tilde\rho(\theta)=a-b+\epsilon,
\label{eq:rho_choice}
\end{equation}
and set
\begin{equation}
\left(\frac{\exp\left(S_{1,j}\right)}{1+\exp\left(S_{1,j}\right)}\right)^{\rho(\theta)}
=\min\left\{1,\left(\frac{\exp\left(S_{1,j}\right)}{1+\exp\left(S_{1,j}\right)}\right)^{\tilde\rho(\theta)}\right\}
\label{eq:S_limpoints}
\end{equation}
Recall that the series (\ref{eq:example5}), defined for $a>0$ and $b>0$, converges for $a-b>1$ and diverges for $a+b<1$.
In keeping with this result,
(\ref{eq:S_limpoints}) decreases as $(a-b)$ increases, so that the chance of correctly diagnosing convergence
increases. Moreover, if both $a$ and $b$ are between 0 and 1 such that $a+b<1$, then (\ref{eq:S_limpoints}) tends
to be inflated, thereby increasing the chance of correctly detecting divergence.
The term $\epsilon$ in (\ref{eq:rho_choice}) prevents the power from becoming zero when $a=b$. It is important
to note here that for $a+b=1$ convergence or divergence is not guaranteed, but if $\epsilon=0$ in (\ref{eq:rho_choice}),
then $a=b$ would trivially indicate divergence, even if the series is actually convergent. A positive
value of $\epsilon$ provides protection from such an erroneous decision.
Note that if $a<b-\epsilon$, the convergence criterion $a-b>1$ is not met but the divergence criterion
$a+b<1$ may still be satisfied. Thus, for such instances, greater weight in favour of divergence is indicated.
In our illustration, we set $\epsilon=10^{-10}$.
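A direct transcription of (\ref{eq:rho_choice}) and (\ref{eq:S_limpoints}) makes the role of the truncation visible (the function name is ours, introduced for illustration):

```python
import math

def capped_transform(S, a, b, eps=1e-10):
    """(e^S/(1+e^S))^{rho} with rho = a - b + eps, truncated at 1.
    When rho < 0 the untruncated value exceeds 1, in which case the label
    Y_j is set to the top value M, pointing towards divergence."""
    rho = a - b + eps
    u = (1.0 / (1.0 + math.exp(-S))) ** rho
    return min(1.0, u)
```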
\subsubsection{Results}
\label{subsubsec:results}
Figure \ref{fig:limpoints_example5} shows the results of our Bayesian analysis of the series (\ref{eq:example5})
based on our Dirichlet process model. Based on the rule of thumb proposed in Section \ref{subsec:thumb_rule}
all the results are in agreement with the results based on Figure \ref{fig:example5}.
\begin{figure}
\centering
\subfigure [Convergence: $a=2,b=1$. The posterior of $p_{6,k}$ converges to 1 as $k\rightarrow\infty$]
{\label{fig:limpoints_a_2_b_1}
\includegraphics[width=6cm,height=5cm]{figures/Example5_DP_plots/limpoints_a_2_b_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+2\times 10^{-10},b=10^{-10}$.
The posterior of $p_{6,k}$ converges to 1 as $k\rightarrow\infty$.]
{ \label{fig:limpoints_a12_b01}
\includegraphics[width=6cm,height=5cm]{figures/Example5_DP_plots/limpoints_a12_b01-crop.pdf}}\\
\hspace{2mm}
\subfigure [Convergence: $a=1+3\times 10^{-10},b=2\times 10^{-10}$.
The posterior of $p_{6,k}$ converges to 1 as $k\rightarrow\infty$.]
{\label{fig:limpoints_a13_b02}
\includegraphics[width=6cm,height=5cm]{figures/Example5_DP_plots/limpoints_a13_b02-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=1/2,b=1/2$.
The posterior of $p_{10,k}$ converges to 1 as $k\rightarrow\infty$.]
{ \label{fig:limpoints_a_1_2_b_1_2}
\includegraphics[width=6cm,height=5cm]{figures/Example5_DP_plots/limpoints_a_1_2_b_1_2-crop.pdf}}\\
\subfigure [Divergence: $a=\frac{1}{2}\left(1-10^{-11}\right),
b=\frac{1}{2}\left(1-10^{-11}\right)$.
The posterior of $p_{10,k}$ converges to 1 as $k\rightarrow\infty$.]
{\label{fig:limpoints_a+b_less_1}
\includegraphics[width=6cm,height=5cm]{figures/Example5_DP_plots/limpoints_a+b_less_1-crop.pdf}}
\caption{Illustration of the Dirichlet process based theory with Example 5:
For $(a=2,b=1)$ in the series (\ref{eq:example5}), $\frac{m}{M}=\frac{6}{10}<0.9$, indicating convergence,
for $(a=1+2\times 10^{-10},b=10^{-10})$, $\frac{m}{M}=\frac{6}{10}<0.9$, indicating convergence, for
$(a=1+3\times 10^{-10},b=2\times 10^{-10})$, $\frac{m}{M}=\frac{6}{10}<0.9$, indicating convergence,
for $(a=1/2,b=1/2)$, $\frac{m}{M}=\frac{10}{10}>0.9$, indicating divergence,
and for $\left(a=\frac{1}{2}\left(1-10^{-11}\right),b=\frac{1}{2}\left(1-10^{-11}\right)\right)$,
$\frac{m}{M}=\frac{10}{10}>0.9$, indicating divergence.}
\label{fig:limpoints_example5}
\end{figure}
\section{Application of the Bayesian multiple limit points theory to Riemann Hypothesis}
\label{sec:Bayesian_limpoints_RH}
To strengthen our result on Riemann Hypothesis presented in Section 6 of our main manuscript
we consider application of our Bayesian multiple limit points theory to Riemann Hypothesis.
\subsection{Choice of $\rho(\theta)$ in
$\left(\frac{\exp\left(S_{1,k}\right)}{1+\exp\left(S_{1,k}\right)}\right)^{\rho(\theta)}$}
\label{subsec:rho_choice_RH}
For Riemann Hypothesis, $\theta=a$; we choose
\begin{equation}
\tilde\rho(\theta)=a^6.
\label{eq:rho_choice_RH}
\end{equation}
The reason for such a choice, with a relatively large power, is to allow discrimination between
$\left(\frac{\exp\left(S_{1,k}\right)}{1+\exp\left(S_{1,k}\right)}\right)^{\rho(\theta)}$ for close values of $a$. However,
substantially large powers of $a$ are not appropriate because that would make the aforementioned
term too small to enable detection of divergence. In fact, we have chosen the power after much experimentation.
Implementation of our methods takes about 2 seconds on our VMWare, with $10^5$ iterations.
\subsection{Results}
\label{subsec:results_RH_limpoints}
The results of application of our ideas on multiple limit points are depicted in Figures
\ref{fig:RH_DP_1}, \ref{fig:RH_DP_2} and \ref{fig:RH_DP_3}. The values of $m/M$ and the
thumb rule proposed in Section \ref{subsec:thumb_rule} show that all the results are
consistent with those obtained in Section 6.
For $a=2$ and $a=3$ we obtained $m/M=0.1$, but the existing theory and our results
reported in Section 6
confirm that the series is convergent, and not oscillating, for these values.
There seems to be a slight discrepancy
only regarding the location of the change point of convergence.
In this case, unlike $a=0.72$ as obtained in Section 6,
we obtained $a=0.7$ as the change point (see panel (a) of Figure \ref{fig:RH_DP_2}).
This (perhaps) negligible difference notwithstanding, both of our methods are remarkably
in agreement with each other, emphasizing our point that Riemann Hypothesis can not be
completely supported.
\begin{figure}
\centering
\subfigure [Divergence: $a=0.1$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_01}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_01-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.2$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_02}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_02-crop.pdf}}\\
\subfigure [Divergence: $a=0.3$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_03}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_03-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.4$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_04}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_04-crop.pdf}}\\
\subfigure [Divergence: $a=0.5$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_05}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_05-crop.pdf}}
\hspace{2mm}
\subfigure [Divergence: $a=0.6$, $\frac{m}{M}=\frac{10}{10}$.]{ \label{fig:RH_DP_a_06}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_06-crop.pdf}}
\caption{Riemann Hypothesis based on Bayesian multiple limit points theory: Divergence for
$a=0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$.}
\label{fig:RH_DP_1}
\end{figure}
\begin{figure}
\centering
\subfigure [Convergence: $a=0.7$, $\frac{m}{M}=\frac{9}{10}$.]{ \label{fig:RH_DP_a_07}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_07-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=0.74$, $\frac{m}{M}=\frac{9}{10}$.]{ \label{fig:RH_DP_a_074}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_074-crop.pdf}}\\
\subfigure [Convergence: $a=0.8$, $\frac{m}{M}=\frac{8}{10}$.]{ \label{fig:RH_DP_a_08}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_08-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=0.9$, $\frac{m}{M}=\frac{7}{10}$.]{ \label{fig:RH_DP_a_09}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_09-crop.pdf}}\\
\subfigure [Convergence: $a=1.0$, $\frac{m}{M}=\frac{5}{10}$.]{ \label{fig:RH_DP_a_1}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_1-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=1+10^{-10}$, $\frac{m}{M}=\frac{5}{10}$.]{ \label{fig:RH_DP_a_1_e}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_1_plus_e-crop.pdf}}
\caption{Riemann Hypothesis based on Bayesian multiple limit points theory: Convergence for
$a=0.7$, $0.74$, $0.8$, $0.9$, $1$, $1+10^{-10}$.}
\label{fig:RH_DP_2}
\end{figure}
\begin{figure}
\centering
\subfigure [Convergence: $a=2$, $\frac{m}{M}=\frac{1}{10}$.]{ \label{fig:RH_DP_a_2}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_2-crop.pdf}}
\hspace{2mm}
\subfigure [Convergence: $a=3$, $\frac{m}{M}=\frac{1}{10}$.]{ \label{fig:RH_DP_a_3}
\includegraphics[width=6cm,height=5cm]{figures/RH_DP_plots2/RH_a_3-crop.pdf}}
\caption{Riemann Hypothesis based on Bayesian multiple limit points theory: Convergence for
$a=2$, $3$.}
\label{fig:RH_DP_3}
\end{figure}
\section{Characterization of Riemann Hypothesis based on Bernoulli numbers}
\label{sec:Bernoulli_RH}
Characterization of Riemann Hypothesis by convergence of infinite sums associated with Bernoulli numbers
is provided in \ctn{Carey03} (unpublished, to our knowledge).
In particular, it has been shown that
Riemann hypothesis is true if and only if the following series is convergent:
\begin{equation}
\tilde S_1=\sum_{m=1}^{\infty}\frac{\pi (4m+3)}{2^{4m+1}}
\sum_{k=0}^m(-1)^k\frac{{2m+1\choose k}{4m+2-2k\choose 2m+1}}{2m+2-2k}
\log\left(\frac{\left(2\pi\right)^{2m+2-2k}\left|B_{2m+2-2k}\right|}{2(2m+2-2k)^2(2m-2k)!}\right),
\label{eq:bernoulli_1}
\end{equation}
where $\left\{B_n;~n=0,1,\ldots\right\}$ are Bernoulli numbers characterized by their generating function
$\sum_{n=0}^{\infty}B_nx^n/n!=x/\left(\exp(x)-1\right)$. The Bernoulli numbers are related to the
Riemann zeta function by (see, for example \ctn{Sury03})
\begin{equation}
B_{2m}=(-1)^{m-1}\frac{2(2m)!}{(2\pi)^{2m}}\zeta(2m).
\label{eq:Riemann_Bernoulli}
\end{equation}
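Relation (\ref{eq:Riemann_Bernoulli}) is easy to verify numerically. The sketch below computes exact Bernoulli numbers from the standard recurrence $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ and compares against a crude direct summation of $\zeta(2m)$ (all function names are ours, introduced for illustration):

```python
import math
from fractions import Fraction

def bernoulli_numbers(n_max):
    """Exact Bernoulli numbers B_0,...,B_{n_max} (convention B_1 = -1/2),
    via the recurrence sum_{k=0}^{n} C(n+1,k) B_k = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        s = sum(math.comb(n + 1, k) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

def zeta_direct(s, terms=100000):
    """Crude partial-sum approximation of the Riemann zeta function."""
    return sum(1.0 / i**s for i in range(1, terms + 1))
```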
\ctn{Carey03} further showed that convergence of the related series
\begin{equation}
\tilde S_2=\sum_{m=1}^{\infty}\frac{\pi (4m+3)}{2^{4m+1}}
\sum_{k=0}^m(-1)^k\frac{{2m+1\choose k}{4m+2-2k\choose 2m+1}}{2m+2-2k}
\log\left((2m+1-2k)\frac{\left|B_{2m+2-2k}\right|}{\left|B_{2m+4-2k}\right|}\right),
\label{eq:bernoulli_2}
\end{equation}
is also equivalent to the assertion that Riemann hypothesis is correct.
However, the terms of both the series (\ref{eq:bernoulli_1}) and (\ref{eq:bernoulli_2}) tend to explode very quickly.
Stirling's approximation of the factorials involved in the summands facilitates computation of a larger number
of summands compared to the original terms. In this context, note that Stirling's approximation
applied to the factorials in (\ref{eq:Riemann_Bernoulli}), along with the approximation
$\zeta(2m)\sim 1$, as $m\rightarrow\infty$,
leads to the following asymptotic form of $B_{2m}$ as $m\rightarrow\infty$:
\begin{equation}
B_{2m}\sim (-1)^{m-1}4\sqrt{\pi m}\left(\frac{m}{\pi e}\right)^{2m}.
\label{eq:bernoulli_asymp}
\end{equation}
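The quality of (\ref{eq:bernoulli_asymp}) can be checked against the classical exact values $|B_2|=1/6,\ldots,|B_{10}|=5/66$; the sketch below (names ours) does so:

```python
import math
from fractions import Fraction

def bernoulli_asymptotic(m):
    """Asymptotic form |B_{2m}| ~ 4 sqrt(pi m) (m / (pi e))^{2m}."""
    return 4.0 * math.sqrt(math.pi * m) * (m / (math.pi * math.e)) ** (2 * m)

# Exact |B_{2m}| for small m, for comparison.
EXACT = {1: Fraction(1, 6), 2: Fraction(1, 30), 3: Fraction(1, 42),
         4: Fraction(1, 30), 5: Fraction(5, 66)}
```

Already at $m=5$ the relative error is below one percent, and it decreases with $m$.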
Figure \ref{fig:RH_Bernoulli} shows the logarithms of the first few terms $a_m$ of the above two series,
based on the actual terms $a_m$ and the Stirling-approximated $a_m$ (ignoring a multiplicative constant);
the rest of the terms become
too large to be reliably computed, even with Stirling's approximation.
The bottom line that emerges from
Figure \ref{fig:RH_Bernoulli} is that the series $\tilde S_1$ and $\tilde S_2$ appear to be clearly divergent,
providing some support to our result on Riemann hypothesis.
\begin{figure}
\centering
\subfigure [Actual terms of series $\tilde S_1$.]{ \label{fig:bernoulli_1_actual}
\includegraphics[width=6cm,height=5cm]{figures/RH_Bernoulli/Bernoulli_1_actual-crop.pdf}}
\hspace{2mm}
\subfigure [Stirling based terms of series $\tilde S_1$.]{ \label{fig:bernoulli_1_stirling}
\includegraphics[width=6cm,height=5cm]{figures/RH_Bernoulli/Bernoulli_1_Stirling-crop.pdf}}\\
\subfigure [Actual terms of series $\tilde S_2$.]{ \label{fig:bernoulli_2_actual}
\includegraphics[width=6cm,height=5cm]{figures/RH_Bernoulli/Bernoulli_2_actual-crop.pdf}}
\hspace{2mm}
\subfigure [Stirling based terms of series $\tilde S_2$.]{ \label{fig:bernoulli_2_stirling}
\includegraphics[width=6cm,height=5cm]{figures/RH_Bernoulli//Bernoulli_2_Stirling-crop.pdf}}
\caption{Actual and Stirling-approximated terms $a_m$ of the series $\tilde S_1$ and $\tilde S_2$.}
\label{fig:RH_Bernoulli}
\end{figure}
\catcode`\@=11
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{-0.01 mm}
{\normalfont\large\bfseries}}
\newcommand{[\hspace*{-.5mm}[}{[\hspace*{-.5mm}[}
\newcommand{]\hspace*{-.5mm}]}{]\hspace*{-.5mm}]}
\catcode`\@=11
\renewcommand\subsubsection{\@startsection{subsubsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{-0.01 mm}
{\normalfont\bfseries}}
\newtheorem{example}{Example}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\newtheorem{definition}{Definition}
\newtheorem{proposition}{Proposition}
\newtheorem{lemma}{Lemma}
\newtheorem{remark}{Remark}
\newtheorem{conjecture}{Conjecture}
\def{\em resp.$\ $}{{\em resp.$\ $}}
\def{\em etc.}{{\em etc.}}
\def\medskip\noindent {\it Proof --- \ }{\medskip\noindent {\it Proof --- \ }}
\def\hfill $\Diamond${\hfill $\Diamond$}
\def\medskip\noindent {\it Remark --- \ }{\medskip\noindent {\it Remark --- \ }}
\def\hfill $\Box$ \bigskip{\hfill $\Box$ \bigskip}
\def\mathinner{\mkern2mu\raise1pt\hbox{.{\mathinner{\mkern2mu\raise1pt\hbox{.}
\mkern3mu\raise4pt\hbox{.}\mkern1mu\raise7pt\hbox{.}}}
\def\<{\langle\,}
\def\>{\,\rangle}
\defk\!\not< \! \mathfrak b{A}\!\not>{k\!\not< \! \mathfrak b{A}\!\not>}
\def{\it cf.$\ $}{{\it cf.$\ $}}
\def{\rm Mat}{{\rm Mat}}
\def{\rm Hom}{{\rm Hom}}
\def\underline{\rm Hom}{\underline{\rm Hom}}
\def{\rm Ext}{{\rm Ext}}
\def{\rm End}{{\rm End}}
\def{\em i.e. }{{\em i.e. }}
\def{\em e.g. }{{\em e.g. }}
\def{\rm Tab\, }{{\rm Tab\, }}
\def{\rm Sym}{{\rm Sym}}
\def{\rm inv}{{\rm inv}}
\def\mathfrak S{\mathfrak S}
\def\mathfrak H{\mathfrak H}
\def\mathfrak s{\mathfrak s}
\def\hbox{\sl Sym}{\hbox{\sl Sym}}
\def\lambda{\lambda}
\def\alpha{\alpha}
\def\mathbf a{\mathbf a}
\def\delta{\delta}
\def\beta{\beta}
\def\mathfrak b{\mathfrak{b}}
\def\gamma{\gamma}
\def{\mathbb N}{{\mathbb N}}
\def{\mathbb Z}{{\mathbb Z}}
\def{\mathbb C}{{\mathbb C}}
\def{\mathbb R}{{\mathbb R}}
\def{\mathbb Q}{{\mathbb Q}}
\def{\mathbb F}{{\mathbb F}}
\def{\mathbf T}{{\mathbf T}}
\def{\cal F}{{\cal F}}
\def{\mathbf F}{{\mathbf F}}
\def{\mathbb P}{{\mathbb P}}
\def{\bf P}{{\bf P}}
\def{\bf I}{{\bf I}}
\def{\bf F}{{\bf F}}
\def\widehat{\bf H}{\widehat{\bf H}}
\def{\cal B}{{\cal B}}
\def{\mathbf U}{{\mathbf U}}
\def{\mathbf V}{{\mathbf V}}
\def{\mathbf W}{{\mathbf W}}
\def{\cal L}{{\mathfrak L}}
\def{\cal I}{{\cal I}}
\def{\cal J}{{\cal J}}
\def{\cal S}{{\cal S}}
\def{\cal T}^{(n)}{{\cal T}^{(n)}}
\def{\rm sgn\, }{{\rm sgn\, }}
\def{\rm Tr}{{\rm Tr}}
\def\mbox{\ mod\ }{\mbox{\ mod\ }}
\def{\rm ch\, }{{\rm ch\, }}
\def{\rm wt}{{\rm wt}}
\def\mathfrak g{\mathfrak g}
\def\mathfrak h{\mathfrak h}
\def\mathfrak{gl}{\mathfrak{gl}}
\def\mathfrak{sl}{\mathfrak{sl}}
\def\mathfrak R{\mathfrak R}
\def{\bf n}{\mathfrak n}
\def\mathfrak b{\mathfrak b}
\def\widehat{\mathfrak{sl}}{\widehat{\mathfrak{sl}}}
\def\widehat{\mathfrak{gl}}{\widehat{\mathfrak{gl}}}
\def\widetilde{\mathfrak S}{\widetilde{\mathfrak S}}
\def\widehat{\mathfrak S}{\widehat{\mathfrak S}}
\def{\mathfrak n}{{\mathfrak n}}
\def\widehat H{\widehat H}
\def\widetilde H{\widetilde H}
\def{\cal A}{{\cal A}}
\def{\bf A}{{\bf A}}
\def{\cal G}{{\cal G}}
\def{\cal L}{{\cal L}}
\def\Lambda{\Lambda}
\def{\rm desc}{{\rm desc}}
\def{\cal K}{{\cal K}}
\def{\cal O}{{\cal O}}
\def\overline{\overline}
\def\underline{\underline{\theta}}\,{\underline{\underline{\theta}}\,}
\def{\rm Fr}{{\rm Fr}}
\def{\rm spin}{{\rm spin}}
\def\<{\langle}
\def\>{\rangle}
\def{\rm op}{{\rm op}}
\def{\rm im\,}{{\rm im\,}}
\def{\rm rk\,}{{\rm rk\,}}
\def{\rm pr\,}{{\rm pr\,}}
\def{\ssym P}{{\ssym P}}
\def{\cal E}{{\cal E}}
\def{\bf F}_\infty{{\bf F}_\infty}
\def{\bf H}_\infty{{\bf H}_\infty}
\def{\rm deg}{{\rm deg}}
\def{\cal B}{{\cal B}}
\def{\cal C}{{\cal C}}
\def\mu{\mu}
\def{\bf n}{{\bf n}}
\def{\bf p}{{\bf p}}
\def{\bf q}{{\bf q}}
\def{\bf s}{{\bf s}}
\def{\bf t}{{\bf t}}
\def{\cal M}{{\cal M}}
\def{\cal R}{{\cal R}}
\def{\widetilde{\lambda}}{{\widetilde{\lambda}}}
\def\leqslant{\leqslant}
\def\geqslant{\geqslant}
\def\varepsilon{\varepsilon}
\def{\rm rad}{{\rm rad}}
\def\Sigma{\Sigma}
\def\sigma{\sigma}
\defs{s}
\def\varphi{\varphi}
\def\rm int{\rm int}
\def\underline{\l}{\underline{\lambda}}
\def\underline{\m}{\underline{\mu}}
\def\underline{\nu}{\underline{\nu}}
\def\underline{\a}{\underline{\alpha}}
\def\underline{\b}{\underline{\beta}}
\def\prec{\prec}
\def\succ{\succ}
\def\preceq{\preceq}
\def\succeq{\succeq}
\def\Delta{\Delta}
\def{\rm Irr\,}{{\rm Irr\,}}
\def{\rm Con}{{\rm Con}}
\def{\rm Sy}{{\rm Sy}}
\def{\rm SSy}{{\rm SSy}}
\def{\rm Pa}{{\rm Pa}}
\def\emptyset{\emptyset}
\def{\rm Ind}{{\rm Ind}}
\def\zeta{\zeta}
\def\mathbf i{\mathbf i}
\def\mathbf j{\mathbf j}
\def\kappa{\kappa}
\def\longrightarrow{\longrightarrow}
\def\longleftarrow{\longleftarrow}
\def\longleftrightarrow{\longleftrightarrow}
\def\mathbf 1{\mathbf 1}
\def{\rm Gr}{{\rm Gr}}
\def\stackrel{\sim}{\lra}{\stackrel{\sim}{\longrightarrow}}
\def{\rm Res}{{\rm Res}}
\def\Omega{\Omega}
\def{\rm Stab}{{\rm Stab}}
\def\ra{\rightarrow}
\def\mathbf{dim}{\mathbf{dim}}
\def\varepsilon{\varepsilon}
\newcommand{\begin{smallmatrix}}{\begin{smallmatrix}}
\newcommand{\end{smallmatrix}}{\end{smallmatrix}}
\def\mathfrak f{\mathfrak f}
\def\mathbf{0}{\mathbf{0}}
\def{\rm socle}{{\rm socle}}
\def{\rm head}{{\rm head}}
\def{\rm id}{{\rm id}}
\def\mathfrak{n}_{-}{\mathfrak{n}_{-}}
\def\widetilde{\cal M}{\widetilde{\cal M}}
\def\mathfrak{p}{\mathfrak{p}}
\def\mathfrak{P}{\mathfrak{P}}
\def\mathfrak{q}{\mathfrak{q}}
\def\mathfrak{Q}{\mathfrak{Q}}
\newdimen\Squaresize \Squaresize=14pt
\newdimen\Thickness \Thickness=0.5pt
\def\Square#1{\hbox{\vrule width \Thickness
\vbox to \Squaresize{\hrule height \Thickness\vss
\hbox to \Squaresize{\hss#1\hss}
\vss\hrule height\Thickness}
\unskip\vrule width \Thickness}
\kern-\Thickness}
\def\Vsquare#1{\vbox{\Square{$#1$}}\kern-\Thickness}
\def\omit\hskip\Squaresize{\omit\hskip\Squaresize}
\def\young#1{
\vbox{\smallskip\offinterlineskip
\halign{&\Vsquare{##}\cr #1}}}
\def\shuff#1#2{\mathbin{
\hbox{\vbox{ \hbox{\vrule \hskip#2 \vrule height#1 width 0pt}%
\hrule}%
\vbox{ \hbox{\vrule \hskip#2 \vrule height#1 width 0pt
\vrule}%
\hrule}%
}}}
\def\SHUF{{\mathchoice{\shuff{7pt}{3.5pt}}%
{\shuff{6pt}{3pt}}%
{\shuff{4pt}{2pt}}%
{\shuff{3pt}{1.5pt}}}}%
\def\,\SHUF\,\,{\,\SHUF\,\,}
\title{\bf Verma modules and preprojective algebras}
\author{Christof {\sc Geiss}
\thanks{C. Geiss acknowledges support from DGAPA grant IN101402-3.}
, Bernard {\sc Leclerc}
\thanks{B. Leclerc is grateful to the GDR 2432 and the GDR 2249
for their support.}\ \
and Jan {\sc Schr\"oer}
\thanks{J. Schr\"oer was supported by a research
fellowship from the DFG (Deutsche Forschungsgemeinschaft).}}
\date{}
\begin{document}
\maketitle
\begin{abstract}
We give a geometric construction of the Verma modules of a
symmetric Kac-Moody Lie algebra $\mathfrak g$ in terms of constructible
functions on the varieties of nilpotent finite-dimensional
modules of the corresponding preprojective algebra $\Lambda$.
\end{abstract}
\section{Introduction}
Let $\mathfrak g$ be the symmetric Kac-Moody Lie algebra associated
to a finite unoriented graph $\Gamma$ without loop.
Let $\mathfrak{n}_{-}$ denote a maximal nilpotent subalgebra of $\mathfrak g$.
In \cite[\S 12]{Lu91}, Lusztig has given a geometric construction
of $U(\mathfrak{n}_{-})$ in terms of certain
Lagrangian varieties.
These varieties can be interpreted as module varieties
for the preprojective algebra $\Lambda$ attached to the graph
$\Gamma$ by Gelfand and Ponomarev \cite{GP}.
In Lusztig's construction, $U(\mathfrak{n}_{-})$ gets identified with
an algebra $({\cal M},*)$ of constructible functions on these
varieties, where $*$ is a convolution product
inspired by Ringel's multiplication for Hall algebras.
Later, Nakajima gave a similar construction of the highest
weight irreducible integrable $\mathfrak g$-modules $L(\lambda)$ in terms
of some new Lagrangian varieties which differ from Lusztig's
ones by the introduction of some extra vector spaces $W_k$ for
each vertex $k$ of $\Gamma$, and by considering only stable
points instead of the whole variety \cite[\S 10]{Na}.
The aim of this paper is to extend Lusztig's original construction
and to endow ${\cal M}$ with the structure of a Verma module $M(\lambda)$.
To do this we first give a variant of the geometrical construction
of the integrable $\mathfrak g$-modules $L(\lambda)$, using
functions on some natural open subvarieties
of Lusztig's varieties instead of functions
on Nakajima's varieties (Theorem~\ref{thI}).
These varieties have a simple description in terms of
the preprojective algebra $\Lambda$ and of certain injective
$\Lambda$-modules $q_\lambda$.
Having realized the integrable modules $L(\lambda)$ as quotients
of ${\cal M}$, it is possible, using the comultiplication of $U(\mathfrak{n}_{-})$,
to construct geometrically the raising operators $E_i^\lambda\in{\rm End}({\cal M})$
which make ${\cal M}$ into the Verma module $M(\lambda)$ (Theorem~\ref{conjV}).
Note that we manage in this way to realize Verma modules with
arbitrary highest weight (not necessarily dominant).
Finally, we dualize this setting and give a geometric construction of
the dual Verma module $M(\lambda)^*$ in terms of the delta functions
$\delta_x \in {\cal M}^*$ attached to the finite-dimensional nilpotent
$\Lambda$-modules $x$
(Theorem~\ref{dual}).
\section{Verma modules}
\label{sect1}
\subsection{}
Let $\mathfrak g$ be the symmetric Kac-Moody Lie algebra associated
with a finite unoriented graph $\Gamma$ without loop.
The set of vertices of the graph is denoted by~$I$.
The (generalized) Cartan matrix of $\mathfrak g$ is $A=(a_{ij})_{i,j\in I}$,
where $a_{ii}=2$ and, for $i\not = j$, $-a_{ij}$ is the number
of edges between $i$ and $j$.
\subsection{}
Let $\mathfrak g = {\mathfrak n}\oplus \mathfrak h\oplus \mathfrak{n}_{-}$ be a Cartan decomposition of
$\mathfrak g$, where $\mathfrak h$ is a Cartan subalgebra and $({\mathfrak n},\mathfrak{n}_{-})$ a
pair of opposite maximal
nilpotent subalgebras.
Let $\mathfrak b={\mathfrak n}\oplus\mathfrak h$.
The Chevalley generators of ${\mathfrak n}$ ({\em resp.$\ $} $\mathfrak{n}_{-}$) are denoted
by $e_i\ (i\in I)$ ({\em resp.$\ $} $f_i$) and we set $h_i=[e_i,f_i]$.
\subsection{}
Let $\alpha_i$ denote the simple root
of $\mathfrak g$ associated with $i\in I$.
Let $(-\,;\,-)$ be a symmetric bilinear form on $\mathfrak h^*$ such
that $(\alpha_i\,;\,\alpha_j)=a_{ij}$.
The lattice of integral weights in $\mathfrak h^*$ is denoted by $P$,
and the sublattice spanned by the simple roots is denoted
by $Q$.
We put
\[
P_+=\{\lambda\in P \mid (\lambda\,;\,\alpha_i) \geqslant 0, \ (i\in I)\},
\qquad Q_+=Q\cap P_+.
\]
\subsection{}
Let $\lambda\in P$ and let $M(\lambda)$ be the Verma module
with highest weight $\lambda$.
This is the induced $\mathfrak g$-module defined by
$M(\lambda) = U(\mathfrak g)\otimes_{U(\mathfrak b)} {\mathbb C}\,u_\lambda$,
where $u_\lambda$ is a basis of the one-dimensional representation of
$\mathfrak b$ given by
\[
h\,u_\lambda = \lambda(h)\,u_\lambda,\quad n\,u_\lambda = 0,\qquad (h\in \mathfrak h,\ n\in{\mathfrak n}).
\]
As a $P$-graded vector space $M(\lambda)\cong U(\mathfrak{n}_{-})$
(up to a degree shift by $\lambda$).
$M(\lambda)$ has a unique simple quotient denoted by $L(\lambda)$,
which is integrable if and only if $\lambda\in P_+$.
In this case, the kernel of the $\mathfrak g$-homomorphism
$M(\lambda) \ra L(\lambda)$ is the $\mathfrak g$-module $I(\lambda)$
generated by the vectors
\[
f_i^{(\lambda\,;\,\alpha_i)+1}\otimes u_\lambda,\qquad (i\in I).
\]
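To fix ideas, consider the simplest case $\mathfrak g=\mathfrak{sl}_2$, with a single simple root $\alpha$, and let $\lambda\in P_+$ with $n=(\lambda\,;\,\alpha)$. Then $M(\lambda)$ has basis $\{f^k\otimes u_\lambda \mid k\geqslant 0\}$, the submodule $I(\lambda)$ is spanned by the vectors $f^k\otimes u_\lambda$ with $k\geqslant n+1$, and the quotient $L(\lambda)$ is the $(n+1)$-dimensional irreducible representation of $\mathfrak{sl}_2$.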
\section{Constructible functions}
\subsection{}
Let $X$ be an algebraic variety over ${\mathbb C}$ endowed with its Zariski topology.
A map $f$ from $X$ to a vector space $V$ is said to
be constructible if its image $f(X)$ is finite, and for each $v\in
f(X)$ the preimage $f^{-1}(v)$ is a constructible subset of $X$.
\subsection{}
By $\chi(A)$ we denote the Euler characteristic of a constructible
subset $A$ of $X$.
For a constructible map $f : X \ra V$ one defines
\[
\int_{x\in X} f(x) = \sum_{v\in V} \chi(f^{-1}(v))\,v \in V.
\]
More generally, for a constructible subset $A$ of $X$ we write
\[
\int_{x\in A} f(x) = \sum_{v\in V} \chi(f^{-1}(v)\cap A)\,v.
\]
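For instance, if $X={\mathbb P}^1({\mathbb C})$ and $f$ is the constant map with value $v\in V$, then
\[
\int_{x\in X} f(x) = \chi({\mathbb P}^1({\mathbb C}))\,v = 2v.
\]
Thus $\int$ behaves like integration against a measure for which the Euler characteristic $\chi$ plays the role of the volume of a constructible subset.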
\section{Preprojective algebras}
\subsection{}
Let $\Lambda$ be the preprojective algebra associated to the
graph $\Gamma$ (see for example \cite{Ri,GLS}).
This is an associative ${\mathbb C}$-algebra, which is
finite-dimensional if and only if
$\Gamma$ is a graph of type $A, D, E$.
Let $s_i$ denote the simple one-dimensional $\Lambda$-module
associated with $i\in I$, and let $p_i$ be its projective cover
and $q_i$ its injective hull.
Again, $p_i$ and $q_i$ are finite-dimensional if and only if
$\Gamma$ is a graph of type $A, D, E$.
\subsection{}
A finite-dimensional $\Lambda$-module $x$ is nilpotent if and
only if it has a composition series with all factors of the
form $s_i\ (i\in I)$.
We will identify the dimension vector of $x$ with an element
$\beta\in Q_+$ by setting $\mathbf{dim}(s_i) = \alpha_i$.
\subsection{}\label{embed}
Let $q$ be an injective $\Lambda$-module of the form
\[
q = \bigoplus_{i\in I} q_i^{\oplus a_i}
\]
for some nonnegative integers $a_i\ (i\in I)$.
\begin{lemma}
Let $x$ be a finite-dimensional $\Lambda$-module isomorphic to
a submodule of $q$. If $f_1 : x \ra q$ and $f_2 : x \ra q$
are two monomorphisms, then
there exists an automorphism $g : q \ra q$ such that
$f_2 = gf_1$.
\end{lemma}
\medskip\noindent {\it Proof --- \ }
Indeed, $q$ is the injective hull of its socle
$b=\bigoplus_{i\in I}\,s_i^{\oplus a_i}$.
Let $c_j\ (j=1,2)$ be a complement of
$f_j({\rm socle}(x))$ in $b$.
Then $c_1\cong c_2$ and the maps
\[
h_j := f_j \oplus {\rm id} : \quad x \oplus c_j \ra q, \qquad (j=1,2)
\]
are injective hulls.
The result then follows from the uniqueness of the injective hull.
\hfill $\Box$ \bigskip
Hence, up to isomorphism, there is a unique way
to embed $x$ into $q$.
\subsection{}
Let ${\cal M}$ be the algebra of constructible functions on the varieties
of finite-dimensional nilpotent $\Lambda$-modules defined by
Lusztig \cite{Lu00} to give a geometric realization of $U(\mathfrak{n}_{-})$.
We recall its definition.
For $\beta=\sum_{i\in I}b_i\alpha_i\in Q_+$,
let $\Lambda_\beta$ denote the variety of nilpotent $\Lambda$-modules
with dimension vector $\beta$.
Recall that $\Lambda_\beta$ is endowed with an action of the algebraic
group $G_\beta = \prod_{i\in I}GL_{b_i}({\mathbb C})$, so that two points
of $\Lambda_\beta$ are isomorphic as $\Lambda$-modules if and only if
they belong to the same $G_\beta$-orbit.
Let $\widetilde{\cal M}_\beta$ denote the vector space of constructible
functions from $\Lambda_\beta$ to ${\mathbb C}$ which are constant on $G_\beta$-orbits.
Let
\[
\widetilde{\cal M}=\bigoplus_{\beta\in Q_+} \widetilde{\cal M}_\beta.
\]
One defines a multiplication $*$ on $\widetilde{\cal M}$ as follows.
For $f\in \widetilde{\cal M}_\beta$, $g\in\widetilde{\cal M}_\gamma$ and $x\in \Lambda_{\beta+\gamma}$,
we have
\begin{equation}\label{sta}
(f*g)(x) = \int_{U} f(x') g(x''),
\end{equation}
where the integral
is over the variety of $x$-stable subspaces
$U$ of $x$ of dimension $\gamma$, $x''$ is the $\Lambda$-submodule
of $x$ obtained by restriction to $U$ and $x'=x/x''$.
In the sequel in order to simplify notation, we will not
distinguish between the subspace $U$
and the submodule $x''$ of $x$ carried by~$U$.
Thus we shall rather write
\begin{equation}\label{star}
(f*g)(x) = \int_{x''} f(x/x'') g(x''),
\end{equation}
where the integral
is over the variety of submodules
$x''$ of $x$ of dimension $\gamma$.
For $i\in I$, the variety $\Lambda_{\alpha_i}$ consists of a single
point: the simple module $s_i$.
Denote by $\mathbf 1_i$ the function mapping this point to $1$.
Let ${\cal G}(i,x)$ denote the variety of all submodules
$y$ of $x$ such that $x/y \cong s_i$.
Then by (\ref{star}) we have
\begin{equation}\label{stari}
(\mathbf 1_i*g)(x) = \int_{y\in{\cal G}(i,x)} g(y).
\end{equation}
Let ${\cal M}$ denote the subalgebra of $\widetilde{\cal M}$ generated by the
functions $\mathbf 1_i\ (i\in I)$.
By Lusztig~\cite{Lu00}, $({\cal M},*)$ is isomorphic to $U(\mathfrak{n}_{-})$
by mapping $\mathbf 1_i$ to the Chevalley generator $f_i$.
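As a minimal illustration of this isomorphism, let $\Gamma$ be the graph with a single vertex $1$ and no edge, so that $\mathfrak g=\mathfrak{sl}_2$ and $\Lambda\cong{\mathbb C}$. For $\beta=m\alpha_1$ the variety $\Lambda_\beta$ consists of the single point $s_1^{\oplus m}$, and (\ref{star}) gives
\[
\mathbf 1_1^{*m}(s_1^{\oplus m}) = \chi({\cal F}_m) = m!,
\]
where ${\cal F}_m$ denotes the variety of complete flags in ${\mathbb C}^m$, whose Euler characteristic equals the number $m!$ of fixed points of a maximal torus. This matches the identity $f_1^m=m!\,f_1^{(m)}$ in $U(\mathfrak{n}_{-})$.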
\subsection{}
In the identification of $U(\mathfrak{n}_{-})$ with ${\cal M}$, formula
(\ref{stari}) represents the left multiplication by $f_i$.
In order to endow ${\cal M}$ with the structure of a Verma module
we need to introduce the following important definition.
For $\nu \in P_+$, let
\[
q_\nu = \bigoplus_{i\in I} q_i^{\oplus (\nu\,;\,\alpha_i)}.
\]
Lusztig has shown \cite[\S 2.1]{Lu002} that Nakajima's Lagrangian varieties
for the geometric realization of $L(\nu)$ are isomorphic to the
Grassmann varieties of $\Lambda$-submodules of $q_\nu$ with a
given dimension vector.
Let $x$ be a finite-dimensional nilpotent $\Lambda$-module isomorphic to
a submodule of the injective module $q_\nu$.
Let us fix an embedding $F : x \ra q_\nu$ and identify $x$
with a submodule of $q_\nu$ via $F$.
\begin{definition}\label{def}
For $i\in I$ let
${\cal G}(x,\nu,i)$ be the variety of submodules $y$ of $q_\nu$ containing $x$ and
such that $y/x$ is isomorphic to $s_i$.
\end{definition}
This is a projective variety which, by \ref{embed},
depends only (up to isomorphism) on $i$, $\nu$ and the
isoclass of $x$.
\section{Geometric realization of integrable irreducible $\mathfrak g$-modules}
\label{gri}
\subsection{}
For $\lambda\in P_+$ and $\beta\in Q_+$,
let $\Lambda_{\beta}^{\lambda}$ denote the variety of nilpotent $\Lambda$-modules of dimension
vector $\beta$ which are isomorphic to a submodule of $q_\lambda$.
Equivalently $\Lambda_{\beta}^{\lambda}$ consists of the nilpotent modules of dimension
vector $\beta$ whose socle contains $s_i$ with multiplicity at most $(\lambda\,;\,\alpha_i)$
$(i\in I)$.
This variety has been considered by Lusztig \cite[\S 1.5]{Lu03}.
In particular it is known that $\Lambda_{\beta}^{\lambda}$ is an open subset of
$\Lambda_{\beta}$, and that the number of its irreducible components
is equal to the dimension of the $(\lambda-\beta)$-weight space of $L(\lambda)$.
\subsection{}
Define $\widetilde{\cal M}_\beta^\lambda$ to be the vector space of constructible functions
on $\Lambda_{\beta}^{\lambda}$ which are constant on $G_\beta$-orbits.
Let ${\cal M}_\beta^\lambda$ denote the subspace of $\widetilde{\cal M}_\beta^\lambda$
obtained by restricting elements of ${\cal M}_\beta$ to
$\Lambda_{\beta}^{\lambda}$.
Put $\widetilde{\cal M}^\lambda=\bigoplus_\beta\widetilde{\cal M}_\beta^\lambda$ and
${\cal M}^\lambda=\bigoplus_\beta{\cal M}_\beta^\lambda$.
For $i\in I$ define endomorphisms $E_i, F_i, H_i$ of $\widetilde{\cal M}^\lambda$
as follows:
\begin{eqnarray}
(E_if)(x) &=& \int_{y\in{\cal G}(x,\lambda,i)} f(y),
\qquad (f\in \widetilde{\cal M}_\beta^\lambda,\ x\in \Lambda_{\beta-\alpha_i}^{\lambda}),\label{actE}\\[3mm]
(F_if)(x) &=& \int_{y\in{\cal G}(i,x)} f(y),
\quad\qquad (f\in \widetilde{\cal M}_\beta^\lambda,\ x\in \Lambda_{\beta+\alpha_i}^{\lambda}),\label{actF} \\[3mm]
(H_if)(x) &=& (\lambda-\beta;\alpha_i)\,f(x),
\qquad (f\in \widetilde{\cal M}_\beta^\lambda,\ x\in \Lambda_{\beta}^{\lambda}).\label{actH}
\end{eqnarray}
\begin{theorem}\label{thI}
The endomorphisms $E_i, F_i, H_i$ of $\widetilde{\cal M}^\lambda$ leave stable the
subspace ${\cal M}^\lambda$. Denote again by $E_i, F_i, H_i$ the induced
endomorphisms of ${\cal M}^\lambda$.
Then the assignments $e_i\mapsto E_i$, $f_i\mapsto F_i$, $h_i\mapsto H_i$,
give a representation of $\mathfrak g$ on ${\cal M}^\lambda$ isomorphic to the
irreducible representation $L(\lambda)$.
\end{theorem}
\subsection{}
The proof of Theorem~\ref{thI} will involve a series of lemmas.
\subsubsection{}
For $\mathbf i=(i_1,\ldots ,i_r)\in I^r$ and
$\mathbf a=(a_1,\ldots ,a_r)\in {\mathbb N}^r$, define the
variety ${\cal G}(x,\lambda,(\mathbf i,\mathbf a))$ of flags of $\Lambda$-modules
\[
\mathfrak f = (x=y_0 \subset y_1 \subset \cdots \subset y_r \subset q_\lambda)
\]
with $y_k/y_{k-1} \cong s_{i_k}^{\oplus a_k}\ (1\leqslant k\leqslant r)$.
As in Definition~\ref{def}, this is a projective variety
depending (up to isomorphism) only on $(\mathbf i,\mathbf a)$, $\lambda$ and the isoclass of $x$
and not on the choice of a specific embedding of
$x$ into $q_\lambda$.
\begin{lemma}\label{prod}
Let $f\in \widetilde{\cal M}_\beta^\lambda$ and $x\in
\Lambda_{\beta-a_1\alpha_{i_1}-\cdots-a_r\alpha_{i_r}}^{\lambda}$.
Put $E_i^{(a)}=(1/a!)E_i^a$.
We have
\[
(E_{i_r}^{(a_r)}\cdots E_{i_1}^{(a_1)}f)(x) =
\int_{\mathfrak f\in{\cal G}(x,\lambda,(\mathbf i,\mathbf a))} f(y_r).
\]
\end{lemma}
The proof is standard and will be omitted.
\subsubsection{}
By \cite[12.11]{Lu91} the endomorphisms $F_i$ satisfy the
Serre relations
\[
\sum_{p=0}^{1-a_{ij}} (-1)^p\, F_j^{(p)}\, F_i\, F_j^{(1-a_{ij}-p)}=0
\]
for every $i\not = j$.
A similar argument shows that
\begin{lemma}\label{serre}
The endomorphisms $E_i$ satisfy the
Serre relations
\[
\sum_{p=0}^{1-a_{ij}} (-1)^p\, E_j^{(p)}\, E_i\, E_j^{(1-a_{ij}-p)}=0
\]
for every $i\not = j$.
\end{lemma}
\medskip\noindent {\it Proof --- \ }
Let $f\in\widetilde{\cal M}_\beta^\lambda$ and $x\in \Lambda^\lambda_{\beta- \alpha_i - (1-a_{ij})\alpha_j}$.
By Lemma~\ref{prod},
\[
(E_j^{(p)}\, E_i\, E_j^{(1-a_{ij}-p)}f)(x) =
\int_\mathfrak f f(y_3)
\]
the integral being taken on the variety of flags
\[
\mathfrak f = (x \subset y_1 \subset y_2 \subset y_3 \subset q_\lambda)
\]
with $y_1/x \cong s_j^{\oplus 1-a_{ij}-p}$, $y_2/y_1 \cong s_i$
and $y_3/y_2 \cong s_j^{\oplus p}$.
This integral can be rewritten as
\[
\int_{y_3} f(y_3)\,\chi({\cal F}[y_3;p])
\]
where the integral is now over all submodules $y_3$ of
$q_\lambda$ of dimension $\beta$ containing $x$
and ${\cal F}[y_3;p]$ is the variety of flags $\mathfrak f$
as above with fixed last step $y_3$.
Now, by moding out the submodule $x$ at each step of the flag,
we are reduced to the same situation as in \cite[12.11]{Lu91},
and the same argument shows that
\[
\sum_{p=0}^{1-a_{ij}} \chi({\cal F}[y_3;p]) =0,
\]
which proves the Lemma.
\hfill $\Box$ \bigskip
\subsubsection{}
Let $x\in\Lambda^\lambda_\beta$.
Let $\varepsilon_i(x)$ denote the multiplicity of $s_i$
in the head of $x$.
Let $\varphi_i(x)$ denote the multiplicity of $s_i$
in the socle of $q_\lambda/x$.
\begin{lemma}\label{keylem}
Let $i,j\in I$ (not necessarily distinct).
Let $y$ be a submodule of $q_\lambda$ containing $x$ and
such that $y/x \cong s_j$.
Then
\[
\varphi_i(y)-\varepsilon_i(y) =
\varphi_i(x)-\varepsilon_i(x) - a_{ij}.
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
We have short exact sequences
\begin{eqnarray}
0 &\to& x\ \ \to\ \ \ q_\lambda\ \ \ \to\ \ q_\lambda/x\ \to\ \,0,
\label{eqeq1}\\
0 &\to& y\ \ \to\ \ \ q_\lambda\ \ \ \to\ \ q_\lambda/y\ \to\ \,0,
\label{eqeq1p}\\
0 &\to& x\ \ \to\ \ \ \ y\ \ \ \ \to\ \ \ s_j\ \ \ \,\to\ \ 0, \label{eqeq2}\\
0 &\to& s_j\ \to\ \ q_\lambda/x\ \to\ q_\lambda/y\ \to\ 0. \label{eqeq3}
\end{eqnarray}
Clearly, $\varepsilon_i(x)=|{\rm Hom}_\Lambda(x,s_i)|$, where $|V|$
denotes the dimension of a vector space $V$.
Similarly $\varepsilon_i(y)=|{\rm Hom}_\Lambda(y,s_i)|$,
$\varphi_i(x)=|{\rm Hom}_\Lambda(s_i,q_\lambda/x)|$ and
$\varphi_i(y)=|{\rm Hom}_\Lambda(s_i,q_\lambda/y)|$.
Hence we have to show that
\begin{equation}\label{eqeq4}
|{\rm Hom}_\Lambda(x,s_i)|-|{\rm Hom}_\Lambda(y,s_i)| =
|{\rm Hom}_\Lambda(s_i,q_\lambda/x)| - |{\rm Hom}_\Lambda(s_i,q_\lambda/y)| - a_{ij}.
\end{equation}
In our proof, we will use
a property of preprojective algebras proved
in \cite[\S 1]{CB}, namely,
for any finite-dimensional
$\Lambda$-modules $m$ and $n$ there holds
\begin{equation}\label{eqCB}
|{\rm Ext}_\Lambda^1(m,n)|=|{\rm Ext}_\Lambda^1(n,m)|.
\end{equation}
(a)\ \ If $i=j$ then $a_{ij}=2$, $|{\rm Hom}_\Lambda(s_j,s_i)|=1$ and
$|{\rm Ext}^1_\Lambda(s_j,s_i)|=0$ since $\Gamma$ has no loops.
Applying ${\rm Hom}_\Lambda(-,s_i)$ to (\ref{eqeq2}) we get the exact sequence
\[
0 \to {\rm Hom}_\Lambda(s_j,s_i) \to {\rm Hom}_\Lambda(y,s_i) \to {\rm Hom}_\Lambda(x,s_i) \to 0,
\]
hence
\[
|{\rm Hom}_\Lambda(x,s_i)|-|{\rm Hom}_\Lambda(y,s_i)| = -1.
\]
Similarly applying ${\rm Hom}_\Lambda(s_i,-)$ to (\ref{eqeq3}) we get an exact sequence
\[
0 \to {\rm Hom}_\Lambda(s_i,s_j) \to {\rm Hom}_\Lambda(s_i,q_\lambda/x) \to {\rm Hom}_\Lambda(s_i,q_\lambda/y) \to 0,
\]
hence
\[
|{\rm Hom}_\Lambda(s_i,q_\lambda/x)| - |{\rm Hom}_\Lambda(s_i,q_\lambda/y)|=1,
\]
and (\ref{eqeq4}) follows.
(b)\ \ If $i\not = j$, we have $|{\rm Hom}_\Lambda(s_i,s_j)|=0$ and
$|{\rm Ext}^1_\Lambda(s_i,s_j)|=|{\rm Ext}^1_\Lambda(s_j,s_i)|=-a_{ij}$.
Applying ${\rm Hom}_\Lambda(s_i,-)$ to (\ref{eqeq2}) we get an exact sequence
\[
0 \to {\rm Hom}_\Lambda(s_i,x) \to {\rm Hom}_\Lambda(s_i,y) \to 0,
\]
hence
\begin{equation}\label{eqC}
|{\rm Hom}_\Lambda(s_i,x)|-|{\rm Hom}_\Lambda(s_i,y)| = 0.
\end{equation}
Moreover, by \cite[\S 1.1]{Bo},
$|{\rm Ext}^2_\Lambda(s_i,s_j)|=0$ because there are no relations from
$i$ to $j$ in the defining relations of $\Lambda$.
(Note that the proof of this result in \cite{Bo} only requires
that $I \subseteq J^2$
(here we use the notation of \cite{Bo}).
One does not need the additional assumption $J^n \subseteq I$ for some $n$.
Compare also the discussion in \cite{BK}.)
Since $q_\lambda$ is injective
$|{\rm Ext}^1_\Lambda(s_i,q_\lambda)|=0$,
thus applying ${\rm Hom}_\Lambda(s_i,-)$ to (\ref{eqeq1}) we get an exact
sequence
\[
0\to {\rm Hom}_\Lambda(s_i,x) \to {\rm Hom}_\Lambda(s_i,q_\lambda)\to {\rm Hom}_\Lambda(s_i,q_\lambda/x)
\to {\rm Ext}^1_\Lambda(s_i,x) \to 0,
\]
hence
\begin{equation}\label{eqA}
|{\rm Hom}_\Lambda(s_i,x)|-|{\rm Hom}_\Lambda(s_i,q_\lambda)|+|{\rm Hom}_\Lambda(s_i,q_\lambda/x)|
-|{\rm Ext}^1_\Lambda(s_i,x)|=0.
\end{equation}
Similarly, applying ${\rm Hom}_\Lambda(s_i,-)$ to (\ref{eqeq1p}) we get
\begin{equation}\label{eqB}
|{\rm Hom}_\Lambda(s_i,y)|-|{\rm Hom}_\Lambda(s_i,q_\lambda)|+|{\rm Hom}_\Lambda(s_i,q_\lambda/y)|
-|{\rm Ext}^1_\Lambda(s_i,y)|=0.
\end{equation}
Subtracting (\ref{eqA}) from (\ref{eqB}) and taking into account
(\ref{eqCB}) and (\ref{eqC}) we obtain
\begin{equation}\label{eqD}
|{\rm Ext}^1_\Lambda(x,s_i)|-|{\rm Ext}^1_\Lambda(y,s_i)|=|{\rm Hom}_\Lambda(s_i,q_\lambda/x)|-|{\rm Hom}_\Lambda(s_i,q_\lambda/y)|.
\end{equation}
Now applying ${\rm Hom}_\Lambda(-,s_i)$ to (\ref{eqeq2}) we get
the long exact sequence
\[
0 \to {\rm Hom}_\Lambda(y,s_i) \to {\rm Hom}_\Lambda(x,s_i) \to {\rm Ext}^1_\Lambda(s_j,s_i)
\to {\rm Ext}^1_\Lambda(y,s_i) \to {\rm Ext}^1_\Lambda(x,s_i) \to 0,
\]
hence
\[
|{\rm Hom}_\Lambda(y,s_i)|-|{\rm Hom}_\Lambda(x,s_i)|-a_{ij}
-|{\rm Ext}^1_\Lambda(y,s_i)|+|{\rm Ext}^1_\Lambda(x,s_i)|=0,
\]
thus, taking into account (\ref{eqD}), we have proved (\ref{eqeq4}).
\hfill $\Box$ \bigskip
\begin{lemma}\label{keylem2}
With the same notation we have
\[
\varphi_i(x)-\varepsilon_i(x) = (\lambda-\beta;\alpha_i).
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
We use induction on the height of $\beta$.
If $\beta=0$ then $x$ is the zero module and $\varepsilon_i(x) = 0$.
On the other hand $q_\lambda/x = q_\lambda$ and $\varphi_i(x)=(\lambda;\alpha_i)$
by definition of $q_\lambda$.
Now assume that the lemma holds for $x\in \Lambda^\lambda_\beta$ and let
$y\in \Lambda^\lambda_{\beta+\alpha_j}$ be a submodule
of $q_\lambda$ containing $x$.
Using Lemma~\ref{keylem} we get that
\[
\varphi_i(y)-\varepsilon_i(y) = (\lambda-\beta;\alpha_i)- a_{ij} =
(\lambda-\beta-\alpha_j;\alpha_i),
\]
as required, and the lemma follows.
\hfill $\Box$ \bigskip
\begin{lemma}\label{lemH}
Let $f\in\widetilde{\cal M}^\lambda_\beta$.
We have
\[
(E_iF_j - F_jE_i)(f) = \delta_{ij}(\lambda-\beta ; \alpha_i)f.
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
Let $x\in\Lambda^\lambda_{\beta-\alpha_i+\alpha_j}$.
By definition of $E_i$ and $F_j$ we have
\[
(E_iF_jf)(x) = \int_{\mathfrak{p}\in\mathfrak{P}} f(y)
\]
where $\mathfrak{P}$ denotes the variety of pairs $\mathfrak{p}=(u,y)$
of submodules of $q_\lambda$ with $x \subset u$, $y \subset u$,
$u/x\cong s_i$ and $u/y\cong s_j$.
Similarly,
\[
(F_jE_if)(x) = \int_{\mathfrak{q}\in\mathfrak{Q}} f(y)
\]
where $\mathfrak{Q}$ denotes the variety of pairs $\mathfrak{q}=(v,y)$
of submodules of $q_\lambda$ with $v \subset x$, $v \subset y$,
$x/v\cong s_j$ and $y/v\cong s_i$.
Consider a submodule $y$ such that there exists in $\mathfrak{P}$
({\em resp.$\ $} in $\mathfrak{Q}$) at least one pair of the form $(u,y)$
({\em resp.$\ $} $(v,y)$).
Clearly, the subspaces carrying the submodules $x$ and $y$
have the same dimension $d$ and their intersection has dimension
at least $d-1$. If this intersection has dimension exactly
$d-1$ then there is a unique pair $(u,y)$
({\em resp.$\ $} $(v,y)$), namely $(x+y,y)$ ({\em resp.$\ $} $(x\cap y,y)$).
This means that
\[
\int_{\mathfrak{p}\in\mathfrak{P};\ y\not = x} f(y)
=
\int_{\mathfrak{q}\in\mathfrak{Q};\ y\not = x} f(y).
\]
In particular, since when $i\not = j$ we cannot have $y=x$,
it follows that
\[
(E_iF_j - F_jE_i)(f) = 0, \qquad (i\not = j).
\]
On the other hand if $i=j$ we have
\[
((E_iF_i - F_iE_i)(f))(x) = f(x) (\chi(\mathfrak{P}')-\chi(\mathfrak{Q}'))
\]
where $\mathfrak{P}'$ is the variety of submodules $u$ of $q_\lambda$
containing $x$ such that $u/x \cong s_i$,
and $\mathfrak{Q}'$ is the variety of submodules $v$ of $x$
such that $x/v \cong s_i$.
Clearly we have $\chi(\mathfrak{Q}')=\varepsilon_i(x)$
and $\chi(\mathfrak{P}')=\varphi_i(x)$.
The result then follows from
Lemma~\ref{keylem2}.
\hfill $\Box$ \bigskip
\subsubsection{}
The following relations for the endomorphisms
$E_i, F_i, H_i$ of $\widetilde{\cal M}^\lambda$ are easily checked:
\[
[H_i,H_j]=0, \quad [H_i,E_j]=a_{ij}E_j,
\quad [H_i,F_j]=-a_{ij}F_j.
\]
The verification is left to the reader.
Hence, using Lemmas~\ref{serre} and \ref{lemH}, we have proved
that the assignments $e_i\mapsto E_i$, $f_i\mapsto F_i$, $h_i\mapsto H_i$,
give a representation of $\mathfrak g$ on $\widetilde{\cal M}^\lambda$.
\begin{lemma}
The endomorphisms
$E_i, F_i, H_i$
leave stable the subspace ${\cal M}^\lambda$.
\end{lemma}
\medskip\noindent {\it Proof --- \ }
It is obvious for $H_i$, and it follows from the definition
of ${\cal M}^\lambda$ for $F_i$.
It remains to prove that if $f\in{\cal M}^\lambda_\beta$ then
$E_if\in{\cal M}^\lambda_{\beta-\alpha_i}$.
We shall use induction on the height of $\beta$.
We can assume that $f$ is of the form $F_jg$ for some
$g\in {\cal M}^\lambda_{\beta-\alpha_j}$.
By induction we can also assume that
$E_ig\in{\cal M}^\lambda_{\beta-\alpha_i-\alpha_j}$.
We have
\[
E_if=E_iF_jg=F_jE_ig + \delta_{ij}(\lambda- \beta+\alpha_j;\alpha_i)g,
\]
and the right-hand side clearly belongs to ${\cal M}^\lambda_{\beta-\alpha_i}$.
\hfill $\Box$ \bigskip
\begin{lemma}\label{lem7}
The representation of $\mathfrak g$ carried by ${\cal M}^\lambda$ is isomorphic
to $L(\lambda)$.
\end{lemma}
\medskip\noindent {\it Proof --- \ }
For all $f\in{\cal M}_\beta$ and all
$x\in\Lambda^\lambda_{\beta+(a_i+1)\alpha_i}$, where $a_i=(\lambda\,;\,\alpha_i)$, we have
$(f*\mathbf 1_i^{*(a_i+1)})(x)=0$.
Indeed, by definition of $\Lambda^\lambda$ the socle of $x$ contains
$s_i$ with multiplicity at most $a_i$.
Therefore the left ideal of ${\cal M}$ generated by the functions
$\mathbf 1_i^{*(a_i+1)}$ is mapped to zero by the linear map
${\cal M} \ra {\cal M}^\lambda$ sending a function $f$ on $\Lambda_\beta$ to its
restriction to $\Lambda^\lambda_\beta$.
It follows that for all $\beta$ the dimension of ${\cal M}^\lambda_\beta$
is at most the dimension of the $(\lambda-\beta)$-weight space
of $L(\lambda)$.
On the other hand, the function $\mathbf 1_0$ mapping the zero $\Lambda$-module
to $1$ is a highest weight vector of ${\cal M}^\lambda$ of weight $\lambda$.
Hence $\mathbf 1_0\in{\cal M}^\lambda$ generates a quotient of the Verma module $M(\lambda)$,
and since $L(\lambda)$ is the smallest quotient of $M(\lambda)$ we must
have ${\cal M}^\lambda=L(\lambda)$.
\hfill $\Box$ \bigskip
This finishes the proof of Theorem~\ref{thI}.
\section{Geometric realization of Verma modules}\label{grv}
\subsection{}
Let $\beta\in Q_+$ and $x \in \Lambda_{\beta-\alpha_i}$.
Let $q = \bigoplus_{i\in I} q_i^{\oplus a_i}$
be the injective hull of $x$.
For every $\nu\in P_+$ such that $(\nu;\alpha_i)\geqslant a_i$ for all $i\in I$, the injective
module $q_\nu$ contains a submodule isomorphic to $x$.
Hence, for such a weight $\nu$ and for any $f\in{\cal M}_\beta$, the integral
\[
\int_{y\in{\cal G}(x,\nu,i)} f(y)
\]
is well-defined.
\begin{proposition}\label{conjf}
Let $\lambda\in P$ and choose $\nu\in P_+$ such that $(\nu;\alpha_i)\geqslant a_i$
for all $i\in I$.
The number
\begin{equation}\label{stabil}
\int_{y\in{\cal G}(x,\nu,i)} f(y)\ -\ (\nu - \lambda\,;\, \alpha_i)\, f(x\oplus s_i)
\end{equation}
does not depend on the choice of $\nu$.
Denote this number by $(E_i^\lambda f)(x)$.
Then, the function
\[
E_i^\lambda f : x \mapsto (E_i^\lambda f)(x)
\]
belongs to ${\cal M}_{\beta-\alpha_i}$.
\end{proposition}
Denote by $E^\lambda_i$ the endomorphism of ${\cal M}$ mapping $f\in{\cal M}_\beta$ to
$E_i^\lambda f$.
Notice that Formula~(\ref{actF}), which is nothing but
(\ref{stari}), also defines an endomorphism of ${\cal M}$ independent of
$\lambda$ which we again denote by $F_i$.
Finally Formula~(\ref{actH}) makes sense for any $\lambda$,
not necessarily dominant, and any $f\in{\cal M}_\beta$.
This gives an endomorphism of ${\cal M}$ that we shall denote by $H^\lambda_i$.
\begin{theorem}\label{conjV}
The assignments $e_i\mapsto E^\lambda_i$, $f_i\mapsto F_i$, $h_i\mapsto H^\lambda_i$,
give a representation of $\mathfrak g$ on ${\cal M}$ isomorphic to the
Verma module $M(\lambda)$.
\end{theorem}
The rest of this section is devoted to the proofs of
Proposition~\ref{conjf} and Theorem \ref{conjV}.
\subsection{}
Denote by $e^\lambda_i$ the endomorphism of the Verma module
$M(\lambda)$ implementing the action of the Chevalley generator $e_i$.
Let ${\cal E}^\lambda_i$ denote the endomorphism of $U(\mathfrak{n}_{-})$
obtained by transporting $e^\lambda_i$ via the natural identification
$M(\lambda) \cong U(\mathfrak{n}_{-})$.
Let $\Delta$ be the comultiplication of $U(\mathfrak{n}_{-})$.
\begin{lemma}\label{lemalg1}
For $\lambda,\mu\in P$ and $u\in U(\mathfrak{n}_{-})$ we have
\[
\Delta({\cal E}^{\lambda+\mu}_i u) =
({\cal E}^\lambda_i\otimes 1 + 1\otimes{\cal E}^\mu_i)\Delta u .
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
By linearity it is enough to prove this for $u$
of the form $u=f_{i_1}\cdots f_{i_r}$.
A simple calculation in $U(\mathfrak g)$ shows that
\[
e_if_{i_1}\cdots f_{i_r} =
f_{i_1}\cdots f_{i_r}e_i +
\sum_{k=1}^r\delta_{ii_k}f_{i_1}\cdots f_{i_{k-1}}h_if_{i_{k+1}}\cdots
f_{i_r}
\]
\[
=f_{i_1}\cdots f_{i_r}e_i +
\sum_{k=1}^r\delta_{ii_k}\left(f_{i_1}\cdots f_{i_{k-1}}f_{i_{k+1}}\cdots
f_{i_r}h_i -
\left(\sum_{s=k+1}^r a_{ii_s}\right)f_{i_1}\cdots f_{i_{k-1}}f_{i_{k+1}}\cdots
f_{i_r}\right).
\]
It follows that, for $\nu\in P$,
\[
{\cal E}^\nu_i(f_{i_1}\cdots f_{i_r}) =
\sum_{k=1}^r\delta_{ii_k}\left((\nu;\alpha_i)-\sum_{s=k+1}^r a_{ii_s}\right)
f_{i_1}\cdots f_{i_{k-1}}f_{i_{k+1}}\cdots f_{i_r}.
\]
Now, using that $\Delta$ is the algebra homomorphism defined by
$\Delta(f_i)=f_i\otimes 1 + 1\otimes f_i$, one can finish the proof
of the lemma. Details are omitted.
\hfill $\Box$ \bigskip
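For instance, for $r=2$ and $j\neq i$ this formula specializes to
\[
{\cal E}^\nu_i(f_if_j) = \big((\nu;\alpha_i)-a_{ij}\big)\,f_j,
\qquad
{\cal E}^\nu_i(f_jf_i) = (\nu;\alpha_i)\,f_j,
\]
which agrees with the direct computations
$e_if_if_jv_\nu = h_if_jv_\nu = ((\nu;\alpha_i)-a_{ij})f_jv_\nu$ and
$e_if_jf_iv_\nu = f_jh_iv_\nu = (\nu;\alpha_i)f_jv_\nu$
on a highest weight vector $v_\nu$ of $M(\nu)$.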
\subsection{}
We endow $U(\mathfrak{n}_{-})$ with the $Q_+$-grading given by
${\rm deg}(f_i)=\alpha_i$.
Let $u$ be a homogeneous element of $U(\mathfrak{n}_{-})$.
Write $\Delta u = u \otimes 1 + u^{(i)} \otimes f_i + A$,
where $A$ is a sum of homogeneous terms of the form $u'\otimes u''$
with ${\rm deg}(u'') \not = \alpha_i$.
This defines $u^{(i)}$ unambiguously.
\begin{lemma}\label{lemalg2}
For $\lambda,\mu\in P$ we have
\[
{\cal E}^{\lambda+\mu}_i u = {\cal E}^\lambda_i u + (\mu;\alpha_i)\,u^{(i)}.
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
We calculate in two ways the unique term of the form
$E\otimes 1$ in $\Delta({\cal E}^{\lambda+\mu}_i u)$.
On the one hand, we have obviously
$E\otimes 1={\cal E}^{\lambda+\mu}_i u \otimes 1$.
On the other hand, using Lemma~\ref{lemalg1}, we have
\[
E\otimes 1 = {\cal E}^{\lambda}_i u \otimes 1
+ (1\otimes {\cal E}^{\mu}_i)(u^{(i)}\otimes f_i)
= {\cal E}^{\lambda}_i u \otimes 1 + (\mu;\alpha_i)\,u^{(i)}\otimes 1.
\]
Therefore,
\[
E={\cal E}^{\lambda+\mu}_i u = {\cal E}^\lambda_i u + (\mu;\alpha_i)\,u^{(i)}.
\]
\hfill $\Box$ \bigskip
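For instance, take $u=f_if_j$ with $j\neq i$. Then
\[
\Delta u = f_if_j\otimes 1 + f_j\otimes f_i + f_i\otimes f_j + 1\otimes f_if_j,
\]
hence $u^{(i)}=f_j$, and the formula displayed in the proof of Lemma~\ref{lemalg1} gives
\[
{\cal E}^{\lambda+\mu}_i(f_if_j)
=\big((\lambda+\mu;\alpha_i)-a_{ij}\big)f_j
={\cal E}^{\lambda}_i(f_if_j)+(\mu;\alpha_i)\,u^{(i)},
\]
in accordance with Lemma~\ref{lemalg2}.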
\subsection{}
Now let us return to the geometric realization ${\cal M}$ of $U(\mathfrak{n}_{-})$.
Let $E^\lambda_i$ denote the endomorphism of ${\cal M}$
obtained by transporting $e^\lambda_i$ via the identification
$M(\lambda) \cong {\cal M}$.
\begin{lemma}\label{lem10}
Let $\lambda\in P_+$, $f\in{\cal M}_\beta$ and $x\in\Lambda^\lambda_{\beta-\alpha_i}$.
Then
\[
(E^\lambda_if)(x) = \int_{y\in{\cal G}(x,\lambda,i)} f(y).
\]
\end{lemma}
\medskip\noindent {\it Proof --- \ }
Let $r_\lambda : {\cal M} \ra {\cal M}^\lambda$ be the linear map sending $f\in{\cal M}_\beta$
to its restriction to $\Lambda^\lambda_\beta$.
By Theorem~\ref{thI}, this is a homomorphism of $U(\mathfrak{n}_{-})$-modules
mapping the highest weight vector of ${\cal M} \cong M(\lambda)$ to the
highest weight vector of ${\cal M}^\lambda \cong L(\lambda)$.
It follows that $r_\lambda$ is in fact a homomorphism of $U(\mathfrak g)$-modules,
hence the restriction of $E^\lambda_if$ to $\Lambda^\lambda_{\beta-\alpha_i}$
is given by Formula~(\ref{actE}) of Section~\ref{gri}.
\hfill $\Box$ \bigskip
Let again $\lambda\in P$ be arbitrary, and pick $f\in{\cal M}_\beta$.
It follows from Lemma~\ref{lemalg2}
that for any $\mu\in P$
\[
E^{\lambda+\mu}_i f - (\mu;\alpha_i)\,f^{(i)}
=
E^\lambda_i f.
\]
Let $x\in\Lambda_{\beta-\alpha_i}$.
Choose $\nu=\lambda+\mu$ sufficiently dominant so that
$x$ is isomorphic to a submodule of $q_\nu$.
Then by Lemma~\ref{lem10}, we have
\[
(E^\nu_i f)(x) = \int_{y\in{\cal G}(x,\nu,i)} f(y).
\]
On the other hand, by the geometric description of $\Delta$
given in \cite[\S 6.1]{GLS}, if we write
\[
\Delta f = f \otimes 1 + f^{(i)} \otimes \mathbf 1_i + A
\]
where $A$ is a sum of homogeneous terms of the form $f'\otimes f''$
with ${\rm deg}(f'') \not = \alpha_i$, we have that $ f^{(i)}$ is the
function on $\Lambda_{\beta-\alpha_i}$ given by
$f^{(i)}(x) = f(x\oplus s_i)$.
Hence we obtain that for $x\in\Lambda_{\beta-\alpha_i}$
\[
(E^\lambda_i f)(x) = \int_{y\in{\cal G}(x,\nu,i)} f(y) \ - \ (\nu -
\lambda;\alpha_i)f(x\oplus s_i).
\]
This proves both Proposition~\ref{conjf} and Theorem~\ref{conjV}.
\hfill $\Box$ \bigskip
\subsection{}
Let $\lambda\in P_+$. We note the following consequence of
Lemma~\ref{lem10}.
\begin{proposition}
Let $\lambda\in P_+$. The linear map $r_\lambda : {\cal M} \ra {\cal M}^\lambda$ sending $f\in{\cal M}_\beta$
to its restriction to $\Lambda^\lambda_\beta$ is the geometric realization of the
homomorphism of $\mathfrak g$-modules $M(\lambda) \ra L(\lambda)$. \hfill $\Box$ \bigskip
\end{proposition}
\section{Dual Verma modules}
\subsection{}
Let $S$ be the anti-automorphism of $U(\mathfrak g)$ defined by
\[
S(e_i) = f_i,\quad S(f_i)=e_i,\quad S(h_i) = h_i, \qquad (i\in I).
\]
Recall that, given a left $U(\mathfrak g)$-module $M$, the dual module $M^*$
is defined by
\[
(u\,\varphi)(m) = \varphi(S(u)\,m), \qquad
(u\in U(\mathfrak g),\ m\in M,\ \varphi\in M^*).
\]
This is also a left module.
If $M$ is an infinite-dimensional module with finite-dimensional
weight spaces $M_\nu$, we take for $M^*$ the graded dual
$M^*=\bigoplus_{\nu\in P} M_\nu^*$.
For $\lambda\in P$ we have $L(\lambda)^*\cong L(\lambda)$, hence the quotient
map $M(\lambda) \ra L(\lambda)$ gives by duality an embedding
$L(\lambda) \ra M(\lambda)^*$ of $U(\mathfrak g)$-modules.
\subsection{}
Let ${\cal M}^* = \bigoplus_{\beta\in Q_+} {\cal M}_\beta^*$ denote the vector space
graded dual of ${\cal M}$.
For $x\in \Lambda_\beta$, we denote by $\delta_x$ the delta function
given by
\[
\delta_x(f) = f(x),\qquad (f\in{\cal M}_\beta).
\]
Note that the map $\delta : x \mapsto \delta_x$ is a constructible map from
$\Lambda_\beta$ to ${\cal M}_\beta^*$. Indeed the preimage of $\delta_x$ is the
intersection of the constructible subsets
\[
{\cal M}_{(i_1,\ldots,i_r)}=
\{y \in \Lambda_\beta \mid (\mathbf 1_{i_1}*\cdots *\mathbf 1_{i_r})(y)
= (\mathbf 1_{i_1}*\cdots *\mathbf 1_{i_r})(x)\},
\quad (\alpha_{i_1}+\cdots+\alpha_{i_r}=\beta).
\]
\subsection{}
We can now dualize the results of Sections~\ref{gri}
and \ref{grv} as follows.
For $\lambda\in P$ and $x\in\Lambda_\beta$ put
\begin{eqnarray}
(E_i^*)(\delta_x) &=& \int_{y\in{\cal G}(i,x)} \delta_y,
\label{actEs} \\[3mm]
(F_i^{\lambda*})(\delta_x) &=& \int_{y\in{\cal G}(x,\nu,i)} \delta_y
\ -\ (\nu-\lambda\,;\,\alpha_i)\,\delta_{x\oplus s_i},
\label{actFs}\\[3mm]
(H_i^{\lambda*})(\delta_x) &=& (\lambda-\beta;\alpha_i)\,\delta_x,
\label{actHs}
\end{eqnarray}
where in (\ref{actFs}) the weight $\nu\in P_+$ is such
that $x$ is isomorphic to a submodule of $q_\nu$.
The following theorem then follows immediately from Theorems~\ref{thI} and
\ref{conjV}.
\begin{theorem}\label{dual}
{\rm (i)}\ The formulas above define endomorphisms $E_i^*, F_i^{\lambda*}, H_i^{\lambda*}$
of ${\cal M}^*$, and the assignments $e_i\mapsto E^*_i$, $f_i\mapsto F^{\lambda*}_i$,
$h_i\mapsto H^{\lambda*}_i$,
give a representation of $\mathfrak g$ on ${\cal M}^*$ isomorphic to the
dual Verma module $M(\lambda)^*$.
{\rm (ii)}\ If $\lambda\in P_+$, the subspace ${\cal M}^{\lambda*}$ of ${\cal M}^*$ spanned
by the delta functions $\delta_x$ of the finite-dimensional nilpotent
submodules $x$ of $q_\lambda$ carries the irreducible submodule
$L(\lambda)$.
For such a module $x$, Formula~(\ref{actFs}) simplifies as follows
\[
(F_i^{\lambda*})(\delta_x) = \int_{y\in{\cal G}(x,\lambda,i)} \delta_y\,.
\]
\hfill $\Box$ \bigskip
\end{theorem}
\begin{example}
{\rm
Let $\mathfrak g$ be of type $A_2$.
Take $\lambda=\varpi_1+\varpi_2$, where
$\varpi_i$ is the fundamental weight corresponding to $i\in I$.
Thus $L(\lambda)$ is isomorphic to the 8-dimensional adjoint
representation of $\mathfrak g=\mathfrak{sl}_3$.
A $\Lambda$-module $x$
consists of a pair of linear maps $x_{21} : V_1 \ra V_2$ and
$x_{12} : V_2 \ra V_1$ such that
$x_{12}x_{21}=x_{21}x_{12}=0$.
The injective $\Lambda$-module $q=q_\lambda$ has the following
form:
\[
q=
\begin{pmatrix}
u_1 \longrightarrow u_2 \cr v_1 \longleftarrow v_2
\end{pmatrix}
\]
This diagram means
that $(u_1,v_1)$ is a basis of $V_1$, that $(u_2,v_2)$ is a basis of
$V_2$, and that
\[
q_{21}(u_1)=u_2, \quad q_{21}(v_1)=0, \quad
q_{12}(v_2)=v_1, \quad q_{12}(u_2)=0.
\]
Using the same type of notation, we can exhibit the following submodules of $q$:
\[
x_1=\begin{pmatrix}v_1\end{pmatrix},
\quad x_2=\begin{pmatrix}u_2\end{pmatrix}, \quad
x_3=\begin{pmatrix}v_1 && u_2\end{pmatrix}, \quad
x_4=\begin{pmatrix}u_1 \longrightarrow u_2\end{pmatrix}, \quad
x_5=\begin{pmatrix}v_1 \longleftarrow v_2\end{pmatrix},
\]
\[
x_6=\begin{pmatrix}u_1 \longrightarrow u_2 \cr v_1 \hfill\end{pmatrix},\qquad
x_7=\begin{pmatrix}\hfill u_2 \cr v_1 \longleftarrow v_2\end{pmatrix}.
\]
This is not an exhaustive list. For example,
$x'_4=\begin{pmatrix}(u_1+v_1) \longrightarrow u_2\end{pmatrix}$ is another submodule, isomorphic
to $x_4$.
Denoting by $\mathbf{0}$ the zero submodule, we see that $\delta_\mathbf{0}$
is the highest weight vector of $L(\lambda)\subset M(\lambda)^*$.
Next, writing for simplicity $\delta_i$ instead of $\delta_{x_i}$
and $F_i$ instead of $F^\lambda_i$, Theorem~\ref{dual}~(ii) gives
the following formulas for the action of the $F_i$'s on
$L(\lambda)$.
\[
F_1 \delta_\mathbf{0} = \delta_1,\quad
F_2 \delta_\mathbf{0} = \delta_2,\quad
F_1\delta_2 = \delta_3 + \delta_4,\quad
F_2\delta_1 = \delta_3 + \delta_5,
\]
\[
F_1 \delta_3 = F_1 \delta_4 =\delta_6,\quad
F_2 \delta_3 = F_2 \delta_5 =\delta_7,\quad
F_2 \delta_6 = F_1 \delta_7 = \delta_q,\quad
F_1 \delta_q = F_2 \delta_q = 0.
\]
Now consider the $\Lambda$-module $x = s_1 \oplus s_1$.
Since $x$ is not isomorphic to a submodule of $q_\lambda$,
the vector $\delta_x$ does not belong to $L(\lambda)$.
Let us calculate $F_i \delta_x\ (i = 1,2)$ by means of Formula~(\ref{actFs}).
We can take $\nu = 2\varpi_1$.
The injective $\Lambda$-module $q_\nu$ has the following
form:
\[
q_\nu=
\begin{pmatrix}
w_1 \longleftarrow w_2 \cr v_1 \longleftarrow v_2
\end{pmatrix}
\]
It is easy to see that the variety ${\cal G}(x,\nu,2)$ is isomorphic
to a projective line ${\mathbb P}_1$, and that all points on this
line are isomorphic to
\[
y=
\begin{pmatrix}
w_1\hfill \cr v_1 \longleftarrow v_2
\end{pmatrix}
\]
as $\Lambda$-modules.
Hence,
\[
F_2 \delta_x = \chi({\mathbb P}_1)\,\delta_y - (\nu - \lambda ; \alpha_2)\, \delta_{x\oplus s_2}
= 2\,\delta_y+\delta_{s_1\oplus s_1\oplus s_2}.
\]
On the other hand, ${\cal G}(x,\nu,1)=\emptyset$, so that
\[
F_1 \delta_x = - (\nu - \lambda ; \alpha_1)\, \delta_{x\oplus s_1}
= - \delta_{s_1\oplus s_1\oplus s_1}.
\]
}
\end{example}
\IEEEPARstart{R}{ecent} years have witnessed great progress in Convolutional Neural Networks (CNNs), together with a wide variety of vision applications. For the sake of high performance, these networks rely on carefully designed, complicated structures tailored to different tasks in a domain-specific manner, which makes model learning inflexible across multiple domains.
As a result, given several tasks on different domains, one needs to deploy an equal number of domain-specific models, which is impractical, especially under limited computational resources.
To tackle the problem, \textit{Multi-domain} learning~\cite{rebuffi2017learning,rebuffi2018efficient,li2019efficient, liu2018multi, yang2019multi} emerges as an important approach for better efficiency and generalization of model learning across multiple different yet correlated domains.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{./image/Figure1.pdf}
\caption{Illustration of \textit{multi-domain} learning. The first stage is to seek a \textit{domain-agnostic} model (e.g. VGG) as our common trunk model, and the second is to plug a set of \textit{domain-specific} adapter modules into the former, leading to the final adaptation model. After learning the adapter parameters with the parameters of the trunk model fixed, each domain can be adapted by changing a set of adapter modules.}
\label{fig:multi_domain_learning}
\end{figure}
In principle, \textit{multi-domain learning} (MDL)~\cite{rebuffi2017learning} aims to learn a compact model that works well for many different domains (e.g., internet images, scene text, medical images, satellite images, driving images, etc.). Typically, it is cast as a two-stage learning problem (illustrated in Figure~\ref{fig:multi_domain_learning}), including domain-agnostic model learning and domain-specific model adaptation. Specifically, domain-agnostic model learning is to seek a common trunk neural network model (with structures and parameters shared across domains). In comparison, domain-specific model adaptation aims at plugging a set of extremely lightweight adapter modules into the common trunk model structure for dynamically adapting to different domains. After learning the adapter parameters with
the common model fixed, we obtain domain-specific models that are flexible in that only the adapter modules need to be changed.
In sum, the core of MDL is to design an adapter plugging strategy (i.e., where to plug) together with a set of adapter module structures (i.e., what to plug), which jointly determine the effectiveness and the compactness of the whole adaptation model.
In the research context, the adapter plugging strategy~\cite{berriel2019budget,rebuffi2017learning,rebuffi2018efficient,li2019efficient, bulat2019incremental} is usually fixed, dense, and handcrafted. Consequently, it is less flexible and discriminative, and incurs a higher computational cost in many complicated situations. Moreover, the adapter module structure is also predefined and fixed across domains, leading to weak cross-domain adaptation. Therefore, automatically determining the adapter plugging strategy
and adaptively designing the adapter structures are crucial to effective multi-domain learning.
Motivated by this observation, we propose a novel NAS-driven scheme for multi-domain learning based on Neural Architecture Search (NAS). Specifically,
we accomplish the task of automatically finding the effective adapter plugging strategy by NAS, and meanwhile make full use of NAS to search the adapter structures adaptively. In this way, our scheme has the following advantages: 1) more flexible and sparse adapter plugging with better efficiency and generalization by NAS; and 2) more discriminative adapter modules with better adaptation to different domains. As a result, the multi-domain model we obtain is often more compact, discriminative, and domain adaptive with a relatively low computational cost when compared to previous MDL methods.
In summary, the main contributions of this work are summarized as follows:
\begin{itemize}
\item We propose a NAS-driven scheme for multi-domain learning, which effectively makes model learning seamlessly adapt to different domains by automatically determining where to plug with NAS.
\item We propose a NAS-adapter module which adaptively discovers the adapter module structures by NAS, striking a balance between model effectiveness and compactness for different domains.
\item Extensive experiments over benchmark datasets demonstrate the effectiveness of this work in accuracy and flexibility against the existing approaches.
\end{itemize}
The rest of the paper is organized as follows. We first describe the background in Section~\ref{related_work}, and then explain the details of our proposed scheme in Section~\ref{method}. In Section~\ref{experiments}, we conduct the experiments and discuss their corresponding results. Finally, we conclude this work in Section~\ref{conclusion}.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{./image/Figure2.pdf}
\caption{Illustration of our scheme for multi-domain learning. In the first stage, we search a set of appropriate adapters according to the given domain and the plugging location. In the second stage, we select an adapter plugging strategy (i.e. where to plug the adapter) to further compact the adaptation model.}
\label{fig:our_method}
\end{figure*}
\section{Related Work}\label{related_work}
\subsection{Multi-Domain Learning}
MDL~\cite{rebuffi2017learning,rebuffi2018efficient,li2019efficient, liu2018multi, yang2019multi, gu2018multi, yang2019shared,liu2019compact, fourure2017multi, guo2019depthwise} aims to learn a model that works well for several different visual domains, requiring both effectiveness and efficiency.
Various adapter modules have been proposed to achieve acceptable performance while overcoming the ``catastrophic forgetting''~\cite{mccloskey1989catastrophic,pfulb2019comprehensive,coop2013ensemble} problem.
BN~\cite{bilen2017universal} used the batch-normalization layer as the domain-specific adapter, so that only a few parameters are fine-tuned for each domain.
To tackle more complex visual domains, more powerful adapter modules have been proposed, such as the $1 \times 1$ convolutional adapter~\cite{rosenfeld2018incremental} and the residual adapter (RA)~\cite{rebuffi2017learning}, which achieve significant performance gains at the cost of extra computational resources.
These methods have one thing in common: the adapters are hand-crafted, and the same adapter structure is used for all domains.
In this paper, we argue that the structure of the adapters should itself be adapted as the domain changes.
Our method learns the adapter structures while taking the domain diversity and the possible plugging locations into consideration.
On the other hand, some MDL methods focus on further compressing the parameters to improve efficiency. RA-SVD~\cite{rebuffi2018efficient} proposed to compress the adapter with the singular value decomposition (SVD). CovNorm~\cite{li2019efficient} utilized principal component analysis (PCA) aligned by the data covariance to compress the adapter modules. While these methods concentrate on compression within the adapter modules, we propose to further compact the whole adaptation model in a domain-specific fashion: adapters are plugged only into several selected locations, and the plugging strategy varies across domains. Computational resources can thus be saved without degrading performance.
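As a toy illustration of this kind of low-rank compression (a generic NumPy sketch under our own assumptions, not the actual implementation of RA-SVD or CovNorm): a $1\times1$ convolutional adapter is simply a $C_{out}\times C_{in}$ matrix, and a truncated SVD factorization reduces its parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 1x1-conv adapter is a C_out x C_in matrix; build one of true rank 4.
c_out, c_in, true_rank = 64, 64, 4
W = rng.standard_normal((c_out, true_rank)) @ rng.standard_normal((true_rank, c_in))

def compress(W, rank):
    """Factor W ~= A @ B with A: c_out x rank and B: rank x c_in."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

A, B = compress(W, rank=4)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)  # ~0 at the true rank
saved = W.size - (A.size + B.size)                       # 4096 - 512 parameters
```

Here the compressed adapter stores $A$ and $B$ ($2\cdot 64\cdot 4=512$ numbers) instead of the full $64\times 64$ matrix, at no reconstruction cost when the adapter is effectively low-rank.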
\subsection{Neural Architecture Search}
NAS~\cite{elsken2018neural,cai2018proxylessnas,kandasamy2018neural} aims at designing effective neural network architectures automatically. There is a rich body of work in NAS, mainly based on three strategies: reinforcement learning~\cite{baker2016designing,zoph2016neural,zhang2019customizable,tan2019mnasnet}, evolutionary algorithms~\cite{real2017large}, and gradient-based optimization~\cite{liu2018darts, chen2019progressive, xu2019pc,vahdat2019unas,jin2019rc}. Owing to its remarkable performance, many NAS-based methods have been proposed for specific problems. Auto-DeepLab~\cite{liu2019auto} proposed a hierarchical search space for the semantic segmentation task~\cite{feng2020taplab,chen2020banet,sun2021real,ji2019human,ji2020context}. FP~\cite{newell2019feature} proposed a search strategy for designing efficient multi-task architectures. In life-long learning~\cite{aljundi2018memory,kirkpatrick2017overcoming,li2017learning,zhao2021memory,zhao2006mgsvf}, LTG~\cite{li2019learn} proposed to expand the network architecture by NAS while retaining the previously learned knowledge. BP-NAS~\cite{liu2020block}, PolSAR-DNAS~\cite{dong2020automatic} and BI~\cite{xu2019overview} all focus on architecture design: BP-NAS proposes a two-stage NAS method for classic image classification, PolSAR-DNAS tailors NAS to the PolSAR classification task, and BI reviews architecture design methods concerning bidirectional intelligence.
In contrast, the main focus of our work is to introduce NAS into the multi-domain learning task (i.e. to adaptively learn what and where to plug adapters). Besides the DARTS~\cite{liu2018darts} option currently adopted, our scheme is flexible enough to work with other NAS alternatives.
To implement a typical NAS algorithm, one first constructs an operation set $O$ containing all candidate operations. An architecture graph with $M$ nodes is then defined, where each node is a latent representation (e.g. a feature map in convolutional networks), and each directed edge $(i,j)$ is associated with an operation $o^{(i,j)}$ belonging to $O$.
To make the search space continuous, the categorical choice of a particular operation can be relaxed to the softmax over all possible operations~\cite{liu2018darts}:
\begin{equation}
\label{equ:mix_o}
\overline{o}^{(i,j)}(\cdot) = \sum_{o\in O}\frac{\exp(\alpha_{o}^{(i,j)})}{\sum_{o'\in O}\exp(\alpha_{o'}^{(i,j)})}\,o(\cdot),
\end{equation}
where the weights for a pair of nodes are parameterized by a vector $\alpha^{(i,j)}$ of dimension $|O|$, so that architecture search reduces to learning the variables $\alpha=\{\alpha^{(i,j)}\}$. These variables can be learned with a specific objective function, and the final structure is then obtained by simply selecting the most likely operation, i.e., $o^{(i,j)}=\arg\max_{o\in O} \alpha_{o}^{(i,j)}$. The structure of a model can thus be denoted by a set of architecture weights.
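The relaxation and the final discretization can be sketched in a few lines of plain Python (a self-contained toy with a made-up operation set, independent of any particular NAS codebase):

```python
import math

def softmax(alphas):
    """Numerically stable softmax over the architecture weights alpha^{(i,j)}."""
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

# Toy operation set O for a single edge (i, j); each op maps a feature to a feature.
ops = {"identity": lambda x: x,
       "double":   lambda x: 2.0 * x,
       "zero":     lambda x: 0.0}

def mixed_op(x, alphas):
    """The relaxed edge: softmax-weighted sum of all candidate operations."""
    return sum(w * op(x) for w, op in zip(softmax(alphas), ops.values()))

def discretize(alphas):
    """After search, keep only the most likely operation (argmax over alpha)."""
    names = list(ops.keys())
    return names[max(range(len(alphas)), key=lambda k: alphas[k])]

alphas = [0.0, 2.0, -2.0]    # one learnable weight per candidate operation
y = mixed_op(1.0, alphas)    # continuous output used during search
chosen = discretize(alphas)  # "double": the op with the largest weight
```

During search the output is a convex combination of all candidate operations; after search, only the argmax operation is kept, yielding a discrete architecture.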
\begin{table*}[ht]
\centering
\caption{Main notations and symbols used throughout the paper.}
\resizebox{0.77\textwidth}{!}{
\begin{tabular}{c l l}
\toprule
\textbf{Notation} & \multicolumn{2}{c}{\textbf{Definition}} \\
\midrule
$D$& \multicolumn{2}{l}{The number of domains}\\
$\mathcal{D}_d$& \multicolumn{2}{l}{The $d$-th domain}\\
$(x_d, y_d)$& \multicolumn{2}{l}{A labeled sample from the $d$-th domain}\\
$\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$ & \multicolumn{2}{l}{The MDL model for all the $D$ domains}\\
$\mathcal{A}$& \multicolumn{2}{l}{The selected adapter structures (i.e. what to plug) of $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$\mathcal{B}$& \multicolumn{2}{l}{The selected adapter plugging strategies (i.e. where to plug) of $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$\Theta$& \multicolumn{2}{l}{The parameters of $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$\Psi_0( \cdot ;\mathcal{A}_0, \mathcal{B}_0, \Theta_0)$& \multicolumn{2}{l}{The pretrained network as common trunk model for $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$\Theta_0$& \multicolumn{2}{l}{The parameters of $\Psi_0( \cdot ;\mathcal{A}_0, \mathcal{B}_0, \Theta_0)$}\\
$N$& \multicolumn{2}{l}{The number of domain-agnostic layers for $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$f^n$& \multicolumn{2}{l}{The $n$-th domain-agnostic layer of $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$}\\
$\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$& \multicolumn{2}{l}{The adaptation model for $\mathcal{D}_d$}\\
$a_d^n$& \multicolumn{2}{l}{The adapter to be plugged into the $n$-th location for $\mathcal{D}_d$}\\
$\Theta_d^{a}$& \multicolumn{2}{l}{The parameters of the adapters for $\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$}\\
$\mathcal{A}_d$& \multicolumn{2}{l}{The adapter structure used for $\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$}\\
$\mathcal{B}_d$& \multicolumn{2}{l}{The adapter plugging strategy used for $\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$}\\
$\alpha_{d}^n$& \multicolumn{2}{l}{The structure of $a_d^n$}\\
\bottomrule
\end{tabular}%
}
\label{Notation}%
\end{table*}
\section{Method}\label{method}
\subsection{Overview}\label{Problem_Formulation}
To better understand our representations, we provide detailed explanations of the main notations and symbols used throughout this paper as shown in Table~\ref{Notation}.
In MDL, data is sampled from $D$ domains $\{\mathcal{D}_d\}_{d=1}^D$ with corresponding labels for different tasks, and a sample belonging to the $d$-th domain can then be denoted as $(x_d, y_d)$. The goal is to learn a single compact model $\Psi( \cdot ;\mathcal{A},\mathcal{B}, \Theta)$ that works well for all the $D$ domains, which can be addressed in a two-stage fashion shown in Figure~\ref{fig:multi_domain_learning}. Here, $\mathcal{A}$ denotes the selected adapter structures (i.e. what to plug), $\mathcal{B}$ denotes the selected adapter plugging strategies (i.e. where to plug) for all the domains and $\Theta$ represents the parameters. Both $\mathcal{A}$ and $\mathcal{B}$ affect the structure of the learned MDL model.
In the first stage, we choose a pretrained network $\Psi_0( \cdot ;\mathcal{A}_0, \mathcal{B}_0, \Theta_0)$ which consists of $N$ layers as our trunk model:
\begin{equation}
\Psi_0( \cdot ;\mathcal{A}_0, \mathcal{B}_0, \Theta_0) = f^N \circ f^{N-1} \circ \dots \circ f^1( \cdot ;\Theta_0),
\end{equation}
where $\Theta_0$ represents the pretrained parameters and $f^n$ ($n\in\{1,2,\dots,N\}$) denotes the $n$-th domain-agnostic layer. $\mathcal{A}_0$ and $\mathcal{B}_0$ are not utilized because the pretrained network has no adapters.
In the second stage, for each domain $\mathcal{D}_d$, we need to construct an adaptation model $\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$, which is composed of the trunk model and a set of additional adapters. We use $\Theta_d^{a}$ to represent the parameters of the adapters. The adapter to be plugged into the $n$-th location for domain $\mathcal{D}_d$ is denoted as $a_d^n$. For ease of notation, we write a domain-agnostic layer followed by its adapter module as $f_d^n = a_d^{n} \circ f^n$.
After selecting an appropriate adapter structure and obtaining an appropriate plugging strategy for the adaptation model, the goal is then to find an optimal $\Theta_{d}^{a*}$ of the adapters while fixing $\mathcal{A}_d$ and $\mathcal{B}_d$:
\begin{equation}
\label{equ:mdl_train}
\Theta_{d}^{a*} = \mathop{\arg\min}\limits_{\Theta_d^{a}} \sum\limits_{(x_d, y_d) \in \mathcal{D}_d} ||\Psi_d(x_d;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})-y_d||^2_2,
\end{equation}
where the domain-agnostic parameters $\Theta_0$ are fixed during training and the only term to optimize is $\Theta_d^{a}$, i.e. the domain-specific parameters adapted to the domain $\mathcal{D}_d$. In the end, the MDL model $\Psi(\cdot ;\mathcal{A},\mathcal{B}, \Theta)$ is obtained with the fixed domain-agnostic parameters $\Theta_0$ and the optimal domain-specific parameters $\{\Theta_{d}^{a*}\}_{d=1}^D$.
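To make the two-stage construction concrete, the composition $f_d^n = a_d^n \circ f^n$ with a sparse plugging strategy can be sketched as follows (a one-dimensional toy with made-up layers and adapters, not the actual network):

```python
# Frozen domain-agnostic trunk: layers f^1, f^2, f^3 acting on a toy 1-D feature.
trunk = [lambda x: x + 1.0,   # f^1
         lambda x: 2.0 * x,   # f^2
         lambda x: x - 0.5]   # f^3

def adaptation_model(x, adapters):
    """Apply f_d^n = a_d^n o f^n layer by layer; a None entry means the
    plugging strategy B_d skipped this location (identity adapter)."""
    for layer, adapter in zip(trunk, adapters):
        x = layer(x)
        if adapter is not None:
            x = adapter(x)
    return x

# Domain d=1: plug a single scaling adapter after f^2 (sparse strategy).
adapters_d1 = [None, lambda x: 0.5 * x, None]
# Domain d=2: no adapters selected, so the model reduces to the plain trunk.
adapters_d2 = [None, None, None]

y_trunk = adaptation_model(1.0, adapters_d2)  # 2*(1+1) - 0.5 = 3.5
y_d1 = adaptation_model(1.0, adapters_d1)     # 0.5*(2*(1+1)) - 0.5 = 1.5
```

Training as in Equation~\eqref{equ:mdl_train} would then update only the adapter parameters (here, the scaling factor), leaving the trunk untouched, so switching domains amounts to swapping the adapter list.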
As shown in Figure~\ref{fig:multi_domain_learning} and Equation~\eqref{equ:mdl_train},
the domain-specific adapter parameters $\Theta_d^{a}$ primarily determine the performance on each domain, while bringing extra parameters and complexity. Previous methods usually use the same adapter structure for every domain; in this work, we claim that an MDL model should be equipped with adapters that vary from domain to domain.
A simple adapter module may fail on complex domain transformations, while a complicated one may waste computational resources.
It is therefore a challenge to strike a balance between model effectiveness and compactness with respect to different domains.
To obtain a discriminative MDL model, we propose to find a set of domain-specific adapter structures for each adaptation model $\Psi_d( \cdot ;\mathcal{A}_d,\mathcal{B}_d, \Theta_0,\Theta_d^{a})$, taking both the domain differences and the complexity into consideration, as detailed in Section~\ref{NAS_adaptation}. We also observe that the whole MDL model can be further compacted by removing several specific adapters (i.e. setting those adapters to be identity mappings) without sacrificing performance. Our scheme therefore further introduces an adapter plugging strategy selection to achieve a more compact MDL model, described in Section~\ref{plugging_strategy}. With this selection, the domain-specific adapter modules become more flexible across domains. The illustration of our scheme for MDL is shown in Figure~\ref{fig:our_method}.
\subsection{Adapter Module Selection}\label{NAS_adaptation}
In this section, we introduce the process of searching the adapter module structures, taking the domain diversity and the complexity into consideration.
According to Equation \eqref{equ:mdl_train} and Figure~\ref{fig:multi_domain_learning}, finding the domain-specific adapter modules essentially means seeking an appropriate structure for each adapter in $\{a_{d}^n\}^{N}_{n=1}$.
This problem can be reduced to learning a set of structure weights $\{\alpha_{d}^n\}^{N}_{n=1}$ by NAS~\cite{liu2018darts}, and $\mathcal{A}_d$ then corresponds to $\{\alpha_d^1,\dots,\alpha_d^N\}$. The NAS-adapter used to select the structure of each MDL adapter $a_{d}^n$ is detailed in what follows.
To achieve a compact MDL model, the searching space needs to be properly designed, because the additional complexity and performance of the adaptation model $\Psi_d(\cdot;\mathcal{A}_d, \mathcal{B}_d,\Theta_0,\Theta_d^{a})$ depend on the adapter structures $\{a_d^n\}^{N}_{n=1}$.
The core requirement of MDL is to ensure the simplicity and compactness of the architecture for the sake of practicality.
To achieve this, we adopt NAS with a parameter-constrained prior that limits the search space of the adapter structure (i.e. what to plug).
We draw inspiration from previous popular structures of adapters~\cite{rebuffi2017learning,rebuffi2018efficient} and collect the operation set $O_a$: $1\times1$ convolution operation, batch normalization operation, skip connection and identity shortcut~\cite{he2016identity}. Our NAS-adapter consists of $M$ nodes, the example for $M=3$ is illustrated in Figure~\ref{fig:NAS_adapter}.
For the domain $\mathcal{D}_d$, we search an appropriate set of adapter structures by learning a set of structure weights $\mathcal{A}_d$:
\begin{equation}
\label{equ:alpha}
\mathcal{A}_{d}^{*} = \mathop{\arg\min}\limits_{\mathcal{A}_d} \sum\limits_{(x_d, y_d) \in \mathcal{D}_d} ||\Psi_d(x_d;\mathcal{A}_d,\mathcal{B}_d,\Theta_0,\Theta_d^{a})-y_d||^2_2.
\end{equation}
We utilize a training strategy similar to that in~\cite{liu2018darts}: the training data is equally divided into a validation set and a training set. Specifically, we use the validation set to update the adapter structure weights, while the adapter parameters are optimized on the training set.
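This alternation between the two splits can be illustrated on a toy problem. In the NumPy sketch below (all names illustrative; numerical gradients stand in for backpropagation), a two-way "structure" weight $\alpha$ chooses between an identity op and a learnable scaling op; $\alpha$ is updated on the validation half and the adapter parameter $\theta$ on the training half, mirroring the first-order DARTS-style scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, alpha, theta):
    # Two candidate "adapters": identity (no parameters) vs. scaling by theta.
    w = softmax(alpha)
    return w[0] * x + w[1] * (theta[0] * x)

def loss(x, y, alpha, theta):
    return np.mean((predict(x, alpha, theta) - y) ** 2)

def num_grad(f, v, eps=1e-5):
    # Central-difference gradient, standing in for backprop.
    g = np.zeros_like(v)
    for i in range(v.size):
        d = np.zeros_like(v)
        d.flat[i] = eps
        g.flat[i] = (f(v + d) - f(v - d)) / (2 * eps)
    return g

# Data from y = 3x, split equally into training and validation halves.
x = rng.standard_normal(200)
y = 3.0 * x
x_tr, y_tr, x_val, y_val = x[:100], y[:100], x[100:], y[100:]

alpha = np.zeros(2)        # structure weights (A_d): updated on the validation set
theta = np.array([1.0])    # adapter parameters (Theta_d^a): updated on the training set
for _ in range(300):
    alpha -= 0.1 * num_grad(lambda a: loss(x_val, y_val, a, theta), alpha)
    theta -= 0.1 * num_grad(lambda t: loss(x_tr, y_tr, alpha, t), theta)
```

After the loop, the search settles on the scaling op with $\theta$ large enough to fit $y=3x$, i.e. the structure weight of the useful candidate dominates.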
\begin{table}[b]
\centering
\caption{Performance of different adapter structures with the trunk model VGG-16.}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Method &Flowers&FGVC &CIFAR100 &MITIndoor&Total Param.\\
\midrule
BN~\cite{bilen2017universal}&91.47\%&63.04\%&64.80\%&57.60\%&\textbf{$\approx$1}\\
DAN~\cite{rosenfeld2018incremental}&\textbf{92.65\%}&\textbf{86.80\%}&\textbf{74.45\%}&\textbf{63.02\%}&2.05\\
\bottomrule
\end{tabular}
}
\label{tab:adapt_structure}
\end{table}
Through searching the adapter structure for different domains, our multi-domain learning method aims to build an effective multi-domain model with limited memory cost, that is, to keep a good balance between effectiveness and efficiency. More specifically, our method automatically determines the adapter structure according to the complexity of the domain: it selects simple adapters for simple domains to save memory, and complicated adapters for complex domains to pursue performance. Typically, a complex domain corresponds to a dataset comprising more content-diverse samples with richer textures and complicated background clutter, following a very large multi-class classification setting with massive samples. In contrast, conventional MDL methods usually adopt the same predefined hand-crafted adapter structure for all domains (e.g. BN~\cite{bilen2017universal}, DAN~\cite{rosenfeld2018incremental}). As a result, they can either build a complicated adapter structure with high accuracy to deal with complex domains, or only a simple adapter structure with low accuracy for all domains; they are thus incapable of achieving a good trade-off between effectiveness and efficiency. Suppose a dataset consists of both complex domains (e.g. MITIndoor) and simple domains (e.g. Flowers).
In this situation, adopting the same predefined hand-crafted adapter structure for all domains and learning only its parameters is not enough. As shown in Table~\ref{tab:adapt_structure}, the simple adapter BN achieves high performance on a simple domain (i.e. Flowers) but low performance on complex domains (i.e. FGVC, CIFAR100, MITIndoor), whereas the complicated adapter DAN achieves high performance on all domains but consumes more memory. In contrast, our method adaptively builds adapter structures for different domains and keeps a good trade-off between effectiveness and efficiency in such a situation.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{./image/Figure3.pdf}
\caption{Illustration of our NAS-adapter. A NAS-adapter consists of $M$ nodes ($M=3$ in this figure). The final structure is achieved by optimizing the structure parameters and selecting an operation from a possible set. We, in this paper, construct the set with $1\times1$ convolution operation, batch normalization operation, skip connection, and identity shortcut.}
\label{fig:NAS_adapter}
\end{figure}
\subsection{Plugging Strategy Selection}\label{plugging_strategy}
The learning requirements vary across domains, so using the same plugging strategy may not be optimal for all of them. Prior works plug adapters into every available slot of MDL models, leaving much room to improve compactness; achieving that is the focus of this section. Given a set of adapters with fixed searched structures, we further propose to determine, with NAS, whether or not to plug the adapter into each possible slot for every visual domain.
This problem can be considered a typical NAS problem in which the operation set consists only of the identity operation and the candidate adapter operation. Therefore, for the domain $\mathcal{D}_d$, searching a plugging strategy reduces to learning a set of continuous variables $\mathcal{B}_d=\{\beta_d^1,\beta_d^2,...,\beta_d^N\}$ for the $N$ possible plugging locations:
\begin{equation}
\label{equ:beta}
\mathcal{B}_d^* = \mathop{\arg\min}\limits_{\mathcal{B}_d} \sum\limits_{(x_d, y_d) \in \mathcal{D}_d} ||\Psi_d(x_d;\mathcal{A}_d,\mathcal{B}_d,\Theta_0,\Theta_d^{a})-y_d||^2_2.
\end{equation}
In the training process for domain $\mathcal{D}_d$, we need to optimize the weights of the plugging strategy $\mathcal{B}_d = \{\beta_d^1,\dots,\beta_d^N\}$ and the parameters of the added adapters $\Theta_d^a$, which is a bilevel optimization problem. As in Section~\ref{NAS_adaptation}, we use the validation set to update the plugging strategy weights and the training set for the adapter parameters.
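The plug-or-not choice at a single slot can be sketched as a two-way mixed operation (a NumPy sketch with illustrative names; the actual implementation may differ): during search the slot outputs a $\beta$-weighted blend of the identity and the already-searched adapter, and after search it is discretized by argmax, which is how redundant adapters collapse to identity mappings and the model is compacted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class PluggingGate:
    """NAS edge at one plugging slot: a relaxed choice between the identity
    mapping and the (already searched) adapter, weighted by beta_d^n."""
    def __init__(self, adapter_fn):
        self.adapter_fn = adapter_fn
        self.beta = np.zeros(2)  # [identity weight, adapter weight]

    def forward(self, x):
        w = softmax(self.beta)
        return w[0] * x + w[1] * self.adapter_fn(x)

    def discretize(self):
        # After search: keep the adapter only if it wins the argmax;
        # otherwise the slot collapses to an identity mapping (no plug).
        return self.adapter_fn if np.argmax(self.beta) == 1 else (lambda x: x)
```

With $N$ such gates (15 for VGG-16, 25 for ResNet-26), $\mathcal{B}_d$ is just the collection of their $\beta$ vectors.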
After learning an appropriate adapter structure and an appropriate adapter plugging strategy, we can further update the parameters $\Theta_d^{a}$ with the training data by Equation \eqref{equ:mdl_train}.
We provide the algorithm flow (Algorithm~\ref{nas_ada}) and training details of our method below.
Our method mainly consists of two steps: adapter module selection (lines 2-7) and plugging strategy selection (lines 8-13).
For adapter module selection, we first construct the training set, the validation set, and the adapter structure search space (lines 2-3), then learn the adapter module structure and parameters (lines 4-7). For plugging strategy selection, we first construct the plugging strategy search space (line 8), then learn the plugging strategy and adapter parameters (lines 9-12); finally, we fix the adapter structures and the plugging strategy and continue to update the adapter parameters with the training data (line 13).
\begin{algorithm}[ht]
\caption{Our NAS-driven MDL method}
\label{nas_ada}
\KwIn{The data for $D$ domains $\{\mathcal{D}_d\}_{d=1}^D$, a pretrained network $\Psi_0( \cdot ;\mathcal{A}_0, \mathcal{B}_0, \Theta_0)$ which consists of $N$ domain-agnostic layers $\{f_n\}_{n=1}^{N}$ as the trunk model and the maximum number $T_{max}$ of iterations.}
\everypar={\nl}
\For{domains $1, 2, 3, \dots, D$}{
Divide the training data (the sample $(x_d, y_d)$ is from domain $\mathcal{D}_d$) into a training set and validation set equally\;
\tcp{Adapter Module Selection}
\everypar={\nl}
Create nodes and corresponding edges of the NAS-adapter parametrized by $\alpha_{d}^n$ after each domain-agnostic layer $f_n$\;
\For{iterations $1, 2, 3, \dots, T_{max}$}{
Update the structure weights $\mathcal{A}_d=\{\alpha_{d}^n\}_{n=1}^N$ with data sampled from the validation set by Equation~\eqref{equ:alpha}\;
Update the parameters $\Theta_d^{a}$ with data sampled from the training set by Equation~\eqref{equ:mdl_train}\;
}
\tcp{Plugging Strategy Selection}
\everypar={\nl}
Create the operations parametrized by $\beta_d^n$ after each domain-agnostic layer $f_n$\;
\For{iterations $1, 2, 3, \dots, T_{max}$}{
Update the weights of the plugging strategy $\mathcal{B}_d=\{\beta_d^n\}_{n=1}^{N}$ with data sampled from the validation set by Equation~\eqref{equ:beta}\;
Update the parameters $\Theta_d^a$ with data sampled from the training set by Equation~\eqref{equ:mdl_train}\;
}
Fix the adapter structure and plugging strategy, and update the parameters $\Theta_d^a$ with data sampled from the domain $\mathcal{D}_d$ by Equation~\eqref{equ:mdl_train}\;
}
\KwOut{Derive the MDL model $\Psi(\cdot ;\mathcal{A},\mathcal{B}, \Theta)$ with the fixed domain-agnostic parameters $\Theta_0$ and the optimal domain-specific parameters $\{\Theta_{d}^{a*}\}_{d=1}^D$, a set of adapter structure weights $\{\mathcal{A}_d^*\}_{d=1}^D$, and plugging strategies $\{\mathcal{B}_d^*\}_{d=1}^D$.}
\end{algorithm}
\section{Experiments}\label{experiments}
\subsection{Datasets}
We evaluate our approach on two different benchmarks. The first is the Visual Decathlon benchmark~\cite{rebuffi2017learning}, built from $10$ different datasets ranging from \textbf{ImageNet}~\cite{russakovsky2015imagenet} to \textbf{German Traffic Signs}~\cite{stallkamp2012man}, in which the images are resized to $72 \times 72$.
As for the second benchmark~\cite{li2019efficient}, a set of seven popular vision datasets is collected for evaluation, and this benchmark is used for large CNNs. \textbf{SUN 397}~\cite{xiao2010sun} contains $397$ classes of scene images and over $100{,}000$ images in total. \textbf{MITIndoor}~\cite{valenti2007indoor} is an indoor scene dataset with $67$ classes and $80$ samples per class. \textbf{FGVC-Aircraft Benchmark}~\cite{bilen2017universal} is a fine-grained classification dataset of $10,000$ images of $100$ types of airplanes. \textbf{Flowers102}~\cite{nilsback2008automated} is a fine-grained dataset with $102$ flower categories and $40$ to $258$ images per class. \textbf{CIFAR100}~\cite{krizhevsky2009learning} contains $60,000$ tiny images from $100$ classes. \textbf{Caltech256}~\cite{griffin2007caltech} contains $30,607$ images of $256$ object categories, with at least $80$ samples per class. \textbf{SVHN}~\cite{netzer2011reading} is a digit recognition dataset with $10$ classes and more than $70,000$ samples. In this benchmark, images are rescaled to a common size of $224\times224$, and the training and testing sets are defined by the corresponding dataset where available; otherwise, $75\%$ of the samples are used for training and $25\%$ for testing.
\subsection{Implementation Details}
\paragraph{Network architectures}
For the Visual Decathlon benchmark, we follow~\cite{rebuffi2018efficient} and conduct experiments using a ResNet~\cite{he2016identity} with $26$ layers as the common trunk structure. We employ the same data pre-processing setting and freeze the parameters of our \textbf{ResNet-26} model after pretraining on ImageNet. For the second benchmark, we follow~\cite{li2019efficient} and use a \textbf{VGG-16}~\cite{simonyan2014very} model in all experiments. This model contains layers with dimensions ranging from $64$ to $4096$, and its parameters are also pretrained on ImageNet.
\paragraph{Evaluation protocol} These two benchmarks address classification problems. Similar to~\cite{rebuffi2017learning,rebuffi2018efficient,li2019efficient}, we report the accuracy for each domain (denoted by ``Acc.") and the average accuracy over all domains (denoted by ``Ave. Acc."). The score function~\cite{rebuffi2017learning} $S$ (denoted by ``S.") is also adopted for evaluation, formulated as:
\begin{equation}
S=\sum_{d=1}^{D}\lambda_d\max\{0,E^{max}_d-E_d\}^2,
\end{equation}
where $D$ is the number of different domains and $E_d$ denotes the test error of the MDL method for domain $\mathcal{D}_d$. $E^{max}_d$ is twice the testing error of the baseline, i.e. the fully finetuned network for that domain, and $\lambda_d$ is a coefficient ensuring that the best possible result for each domain is $1000$. The score favors methods that perform well over all domains, while methods that are outstanding on only a few domains are penalized. Furthermore, parameter cost is also taken into consideration following~\cite{li2019efficient, rebuffi2017learning}.
We report the adapter parameter usage for each domain (denoted by ``Ada. Param.") or
report the total number of parameters relative to the initial pretrained trunk model (excluding the classifiers) over all domains (denoted by ``Total Param.").
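The score can be computed as below. Note that the closed form of $\lambda_d$ is our inference from the stated constraint (a perfect domain, $E_d=0$, must score exactly $1000$, which fixes $\lambda_d = 1000 / (E^{max}_d)^2$); it is not quoted from the benchmark definition.

```python
import numpy as np

def decathlon_score(errors, baseline_errors):
    """Score S: E_d^max is twice the baseline (fully finetuned) test error,
    and lambda_d is chosen so a perfect domain (E_d = 0) contributes 1000."""
    e = np.asarray(errors, dtype=float)
    e_max = 2.0 * np.asarray(baseline_errors, dtype=float)
    lam = 1000.0 / e_max ** 2          # best per-domain score is exactly 1000
    return float(np.sum(lam * np.maximum(0.0, e_max - e) ** 2))
```

A method that merely matches the baseline error on every domain collects $250$ points per domain ($\lambda_d (2E_d^{b} - E_d^{b})^2 = 1000/4$), while one whose error reaches $2E_d^{b}$ collects none; this is how the score penalizes methods that are strong on only a few domains.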
\paragraph{Training details}
For the Visual Decathlon benchmark, we train the ResNet-26 model with the same training strategy as in~\cite{rebuffi2017learning}. To select a NAS-adapter module structure, we follow the strategy in~\cite{liu2018darts} with an NVIDIA 1080Ti GPU and divide the training dataset into two parts of equal size: one part is used to optimize the structure weights, while the other optimizes the network parameters. For structure weight learning, we use the Adam optimizer~\cite{kingma2014adam} with weight decay $0.001$, momentum ($0.5$, $0.999$), and an initial learning rate of $0.0003$.
For network parameters optimization, we use SGD optimizer with an initial learning rate $0.01$ (annealed down to zero following a cosine schedule without restart~\cite{loshchilov2016sgdr}), momentum $0.9$, and weight decay $0.0005$.
For plugging strategy selection, the learning rate is initialized by $0.005$ and divided by $10$ after $20$, $40$, $60$ epochs. For the second benchmark, we utilize the training approach in~\cite{li2019efficient} for the VGG-16 model. The rest of the settings for adapter module selection and plugging strategy selection are the same as those on the former.
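The two learning-rate schedules mentioned above can be sketched as follows (function and parameter names are illustrative):

```python
import math

def cosine_lr(step, total_steps, lr0=0.01):
    """Cosine annealing from lr0 down to zero, without restart, as used
    for the network parameter updates."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * step / total_steps))

def step_lr(epoch, lr0=0.005, milestones=(20, 40, 60), gamma=0.1):
    """Step decay used for plugging strategy selection: divide the rate
    by 10 after epochs 20, 40, and 60."""
    return lr0 * gamma ** sum(epoch >= m for m in milestones)
```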
The elapsed time for running our NAS-driven MDL method mainly consists of three parts: the NAS training time for adapter structure selection (denoted by ``NAS-adapter Time"), the NAS training time for plugging strategy selection (denoted by ``NAS-plugging Time"), and the training time for updating the adapter parameters (denoted by ``Adapter-parameters Time"). On the Visual Decathlon benchmark with the trunk model ResNet-26, the total elapsed time is about 36 hours (NAS-adapter Time: 16 hours, NAS-plugging Time: 8 hours, Adapter-parameters Time: 12 hours). On the benchmark of seven domains with the trunk model VGG-16, the total elapsed time is about 185 hours (NAS-adapter Time: 80 hours, NAS-plugging Time: 40 hours, Adapter-parameters Time: 65 hours) with an NVIDIA 1080Ti GPU.
\subsection{Ablation Study}
In this section, we first carry out ablation experiments to validate the effectiveness of our proposed NAS-adapter module (\textbf{ablation experiment-1}) and plugging strategy selection scheme (\textbf{ablation experiment-2}). We then give a detailed analysis of what to plug (i.e. the adapter structure) and where to plug (i.e. the plugging strategy). Regarding what to plug, we report the distribution of learned adapter structures across domains (\textbf{ablation experiment-5}) and show the importance of the adapter structure search (\textbf{ablation experiment-3}). Regarding where to plug, we report the frequency with which each plugging location is selected (\textbf{ablation experiment-6}) and compare our selected plugging strategy with hand-crafted plugging strategies (\textbf{ablation experiment-4}). Finally, we discuss the accuracy of our method with regard to domain diversity (\textbf{ablation experiment-7}) and different paradigms for the task of multi-domain learning (\textbf{ablation experiment-8}).
\begin{table}[!t]
\centering
\caption{Accuracy of different adapter modules with trunk model VGG-16, using ``All" plugging strategy.}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{lcccc}
\toprule
Adapter Structure&Plugging Strategy &MITIndoor &Flowers& FGVC\\
\midrule
Res Adapt &\multirow{4}{*}{All}&72.40$\pm$0.24\% & 96.43$\pm$0.12\%& 88.92$\pm$0.28\%\\
$1 \times 1$ Adapt&~ &63.02$\pm$ 0.26\% &92.65$\pm$0.16\%& 86.80$\pm$0.32\% \\
BN Adapt&~ &57.60$\pm$0.15\% &91.47$\pm$0.11\% & 63.04$\pm$0.25\%\\
NAS Adapt&~ &\textbf{73.05$\pm$ 0.26\%} &\textbf{96.81$\pm$0.15\%}& \textbf{89.08$\pm$0.33\%}\\
\bottomrule
\end{tabular}
}
\label{tab:one}
\end{table}
\begin{table}[!t]
\centering
\caption{Accuracy of different adapter modules with trunk model ResNet-26, using ``All" plugging strategy.}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Adapter Structure &Plugging Strategy& OGlt &SVHN&DTD \\
\midrule
Res Adapt&\multirow{4}{*}{All} & 89.82$\pm$0.13\% &96.17$\pm$0.09\%&57.02$\pm$0.19\% \\
$1 \times 1$ Adapt&~ & 89.67$\pm$0.16\% &96.77$\pm$0.12\% &56.54$\pm$0.22\%\\
BN Adapt&~ & 84.83$\pm$0.11\% & 94.10$\pm$0.07\%&51.60$\pm$0.13\%\\
NAS Adapt&~ & \textbf{90.02$\pm$0.15\%} &\textbf{96.98$\pm$0.11\%}&\textbf{59.30$\pm$0.20\%}\\
\bottomrule
\end{tabular}
}
\label{tab:two}
\end{table}
Previous MDL approaches construct the adaptation module with a fixed hand-crafted structure and directly add it after each layer of the common trunk model. We select three different adapter structures (Res Adapt~\cite{rebuffi2017learning}, $1 \times 1$ Adapt~\cite{rosenfeld2018incremental}, BN Adapt~\cite{bilen2017universal}) and plug the adapters at every possible slot (a plugging strategy denoted by ``All") as baselines.
\begin{table}[t]
\centering
\caption{Accuracy and adapter parameter usage (setting plugging all 15 adapters to be 100\%) of different adapter structures with trunk of VGG-16. The highest accuracy is in \textbf{bold}, and the lowest parameter usage is \underline{underlined}.}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Adapter Structure &Plugging Strategy & Evaluation &MITIndoor &Flowers& FGVC\\
\midrule
\midrule
\multirow{4}{*}{Res Adapt}&\multirow{2}{*}{All} & Acc.&72.40$\pm$0.24\%& 96.43$\pm$0.12\%& 88.92$\pm$0.30\%\\
~&~&Ada. Param.&35.42M (100\%) & 35.42M (100\%) & 35.42M (100\%) \\
\cline{2-6}
~&\multirow{2}{*}{Ours} & Acc.&\textbf{72.61$\pm$0.26\%}&\textbf{96.66$\pm$0.13\%}&\textbf{88.93$\pm$0.30\%}\\
~&~&Ada. Param.&\underline{17.32M (48.91\%)} &\underline{16.80M (47.42\%)}&\underline{18.29M (51.65\%)} \\
\midrule
\multirow{4}{*}{$1 \times 1$ Adapt}&\multirow{2}{*}{All} & Acc. &63.02$\pm$0.26\% &92.65$\pm$0.16\% & 86.80$\pm$0.32\%\\
~&~& Ada. Param. & 35.42M (100\%) &35.42M (100\%) &35.42M (100\%)\\
\cline{2-6}
~&\multirow{2}{*}{Ours} & Acc. &\textbf{67.96$\pm$0.27\%} &\textbf{94.83$\pm$0.18\%}& \textbf{87.42$\pm$0.35\%}\\
~&~& Ada. Param. &\underline{7.61M (21.49\%)} &\underline{16.80M (47.43\%)}&\underline{10.74M (30.30\%)} \\
\midrule
\multirow{4}{*}{BN Adapt} &\multirow{2}{*}{All} & Acc. &57.60$\pm$0.15\% &91.47$\pm$0.11\%&63.04$\pm$0.25\%\\
~&~& Ada. Param.&60 (100\%) &60 (100\%) &60 (100\%)\\
\cline{2-6}
~&\multirow{2}{*}{Ours} & Acc. &\textbf{68.71$\pm$0.16\%} & \textbf{92.75$\pm$0.12\%}& \textbf{68.29$\pm$0.28\%}\\
~&~&Ada. Param. &\underline{24 (40.00\%)} &\underline{15 (25.00\%)}&\underline{26 (43.33\%)}\\
\bottomrule
\end{tabular}
}
\label{tab:three}
\end{table}
\begin{table}[t]
\centering
\caption{Accuracy and adapter parameter usage (setting plugging all 25 adapters to be 100\%) of different adapter structures with trunk of ResNet-26. The highest accuracy is in \textbf{bold}, and the lowest parameter usage is \underline{underlined}.}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Adapter Structure &Plugging Strategy & Evaluation & OGlt &CIFAR100&DTD\\
\midrule
\midrule
\multirow{4}{*}{Res Adapt}&\multirow{2}{*}{All} & Acc. &89.82$\pm$0.13\% &81.31$\pm$0.09\%&57.02$\pm$0.18\%\\
~&~&Ada. Param.&0.69M (100\%) &0.69M (100\%) & 0.69M (100\%) \\
\cline{2-6}
~&\multirow{2}{*}{Ours} & Acc. & \textbf{89.96$\pm$0.17\%} &\textbf{81.45$\pm$0.12\%}&\textbf{57.93$\pm$0.25\%}\\
~&~&Ada. Param.&\underline{0.49M (70.79\%)} &\underline{0.30M (42.98\%)}&\underline{0.28M (40.59\%)} \\
\midrule
\multirow{4}{*}{$1 \times 1$ Adapt}&\multirow{2}{*}{All} & Acc. & \textbf{89.67$\pm$0.16\%} &\textbf{80.07$\pm$0.10\%} &56.54$\pm$0.22\%\\
~&~&Ada. Param.&0.69M (100\%) &0.69M (100\%) & 0.69M (100\%) \\
\cline{2-6}
~&\multirow{2}{*}{Ours} & Acc. & 89.39$\pm$0.14\% &79.44$\pm$0.08\%&\textbf{56.98$\pm$0.20\%}\\
~&~&Ada. Param.&\underline{0.62M (89.90\%)} &\underline{0.34M (49.93\%)}&\underline{0.28M (39.97\%)}\\
\midrule
\multirow{4}{*}{BN Adapt} &\multirow{2}{*}{All} & Acc. & \textbf{84.83$\pm$0.11\%} & 78.62$\pm$0.06\%&\textbf{51.60$\pm$0.13\%}\\
~&~&Ada. Param.&100 (100\%) &100 (100\%) &100 (100\%) \\
\cline{2-6}
~&\multirow{2}{*}{Ours}& Acc. & 83.90$\pm$0.12\% &\textbf{78.75$\pm$0.05\%}&51.54$\pm$0.12\%\\
~&~&Ada. Param.& \underline{48 (48.00\%)} &\underline{61 (61.00\%)}&\underline{63 (63.00\%)} \\
\bottomrule
\end{tabular}
}
\label{tab:four}
\end{table}
\begin{figure*}[t]
\centering
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure4.pdf}
\end{minipage}}
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure5.pdf}
\end{minipage}
}
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure6.pdf}
\end{minipage}
}
\caption{Performance of different plugging strategies (with the same number of adapter modules) on different datasets (VGG-16). \emph{Ours:} the selected plugging strategy. \emph{Top-Down:} select the first $n$ locations to plug in. \emph{Bottom-Up:} select the last $n$ locations to plug in. \emph{Random:} Randomly select $n$ locations to plug in.}
\label{fig:four}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure7.pdf}
\end{minipage}}
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure8.pdf}
\end{minipage}
}
\subfigure{}{
\begin{minipage}[ht]{0.32\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure9.pdf}
\end{minipage}
}
\caption{Performance of different plugging strategies (with the same number of adapter modules) on different datasets (ResNet-26). \emph{Ours:} the selected plugging strategy. \emph{Top-Down:} select the first $n$ locations to plug in. \emph{Bottom-Up:} select the last $n$ locations to plug in. \emph{Random:} randomly select $n$ locations to plug in.}
\label{fig:five}
\end{figure*}
\begin{table}[t]
\centering
\caption{Performance of our method with or without adapter structure search for VGG-16.}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{lcccccc}
\toprule
Plugging Strategy & Adapter Structure Search &Adapter Structure &MITIndoor &Flowers& FGVC & Total Param.\\
\midrule
\multirow{4}{*}{Ours} &\multirow{3}{*}{no}&Res Adapt&72.61\%&96.66\%&88.93\%&1.79\\
&&$1 \times 1$ Adapt&67.96\%&94.83\%&87.42\%&1.79\\
&&BN Adapt&68.71\%&92.75\%&68.29\%&\textbf{$\approx$1}\\
\cline{2-7}
&yes&NAS Adapt&\textbf{73.51\%}&\textbf{96.96\%}&\textbf{89.34\%}&1.33\\
\bottomrule
\end{tabular}
}
\label{tab:vgg:plugging}
\end{table}
\subsubsection{Comparison with hand-crafted adapter structures} We compare our NAS-adapter module with three hand-crafted ones in Table~\ref{tab:one} and Table~\ref{tab:two}. For a fair comparison, we also construct the adaptation model by embedding the NAS-adapter module after each domain-agnostic layer (i.e. NAS Adapt). Taking VGG-16 as the common trunk model in Table~\ref{tab:one}, we observe that the adaptation model with our NAS-adapter achieves the best results; Res Adapt ranks second on all three datasets, followed by $1 \times 1$ Adapt and BN Adapt. When ResNet-26 serves as the common trunk model in Table~\ref{tab:two}, our NAS-adapter module still performs better than the others.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{./image/RFigure2.pdf}
\caption{(a): BN Adapt~\cite{bilen2017universal}, (b): NAS-1, (c): $1 \times 1$ Adapt~\cite{rosenfeld2018incremental}, (d): Res Adapt~\cite{rebuffi2017learning}, (e): NAS-2, (f): NAS-3.}
\label{rfig:novel_module}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure{}{
\begin{minipage}[t]{1\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure16.pdf}
\end{minipage}}
\caption{Frequency of each plugging location to be selected on different datasets with the trunk model VGG-16.}
\label{fig:six}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{./image/RFigure1.pdf}
\caption{The distribution of learned adapter structure on different datasets with the trunk model VGG-16.}
\label{rfig:one}
\end{figure}
\subsubsection{Effectiveness of our plugging strategy selection scheme} We evaluate our plugging strategy selection scheme with three kinds of baseline adapter structures. The plugging strategy controls whether or not to plug the adapter into a possible slot. For VGG-16, there are $15$ possible locations for plugging, while for ResNet-26 the number becomes $25$.
Our plugging strategy selection scheme is compatible with any hand-crafted adapter module. $1 \times 1$ Adapt with the plugging strategy ``Ours" (likewise Res Adapt with ``Ours" or BN Adapt with ``Ours") denotes adding the $1 \times 1$ convolutional adapter modules (residual or batch-normalization adapter modules, respectively) into the common trunk model at the locations decided by our selected plugging strategy. For the adapter parameter cost estimation, we use the plugging strategy ``All'' as the reference point: ``All'' induces a $100\%$ parameter cost since it adds adapters at all possible locations, which makes it possible to calculate the relative cost of ``Ours".
The results of taking VGG-16 as the common trunk model are presented in Table~\ref{tab:three}. It can be noticed that $1 \times 1$ Adapt with our selected plugging strategy (likewise Res Adapt with ``Ours" or BN Adapt with ``Ours") achieves higher accuracy than $1 \times 1$ Adapt with the plugging strategy ``All" (Res Adapt with ``All" or BN Adapt with ``All") while using fewer additional parameters on the three target datasets. This result demonstrates that some adapter modules play a redundant role and can be omitted without sacrificing accuracy. As for ResNet-26, reported in Table~\ref{tab:four}, under a different experimental setting (new trunk model and new datasets) our optimized plugging strategy still consumes fewer extra parameters while obtaining competitive accuracy. This practice is especially important for VGG-16, which contains a particularly large number of parameters and thus leaves ample room for selecting a plugging strategy that reduces the adapter parameter cost while improving accuracy.
\subsubsection{Comparison with hand-crafted plugging strategies} In order to further demonstrate the effectiveness of our selected plugging strategy, we compare ``Ours" with three intuitive hand-crafted strategies. The first adds the adapter modules in a top-down order, i.e. selects the first $n$ locations to plug the adapter modules; we denote this strategy ``Top-Down''. The second does the opposite and adds the adapter modules in a bottom-up order, i.e. selects the last $n$ locations, denoted ``Bottom-Up''. A ``Random" strategy is also evaluated by simply plugging the adapter module into $n$ random locations. For a fair comparison, all strategies use the same number of adapter modules.
As shown in Figure~\ref{fig:four} and Figure~\ref{fig:five}, our strategy achieves the best performance on all datasets. For the Res Adapt module, the accuracies of the three hand-crafted strategies are nearly the same when using ResNet-26 as the trunk, while the Top-Down strategy fails with VGG-16. For the $1 \times 1$ Adapt, the three are also comparable with each other when using ResNet-26, but the Bottom-Up strategy performs worst with VGG-16. Also, with VGG-16 as the trunk, experiments on the batch-normalization adapter module show that the accuracy of the Top-Down and Random plugging strategies is much higher than that of the Bottom-Up strategy. All these results demonstrate that the same plugging strategy performs differently depending on the adapter module structure and the trunk model, which fits our assumption.
We have also included a training time comparison for the entire pipeline against these hand-crafted plugging strategies (e.g. ``Random": randomly plugging adapter modules).
Compared to randomly plugging adapter modules, our method spends 40 hours on plugging strategy selection and improves the average accuracy by $2.27\%$ on the benchmark of seven domains.
\subsubsection{The importance of adapter structure search}
We evaluate our method with and without adapter structure search on three datasets. As shown in Table~\ref{tab:vgg:plugging}, our method with adapter structure search outperforms the variant without it, which shows the importance of structure search for model performance.
\subsubsection{The distribution of learned adapter structure across domains}
We conduct experiments to show the distribution of learned adapter structures across domains on three target datasets (i.e. MITIndoor, FGVC, Flowers) with the trunk model VGG-16 (in terms of domain complexity, MITIndoor $\textgreater$ FGVC $\textgreater$ Flowers); the results are shown in Figure~\ref{rfig:one}. We search the adapter structure 10 times on each domain and calculate the frequency of each adapter structure. Six kinds of adapter structures are obtained in this experiment: BN Adapt, $1 \times 1$ Adapt, Res Adapt, NAS-1, NAS-2, and NAS-3, listed in increasing order of complexity and shown in Figure~\ref{rfig:novel_module}. As shown in Figure~\ref{rfig:one}, the distribution of learned adapter structures varies across domains. For the simple domain Flowers, NAS-1 and BN Adapt are selected more frequently; for the complex domain MITIndoor, our method tends to select Res Adapt and NAS-3. In short, simple adapter structures are usually selected for simple domains, and vice versa. This diversity of search results across domains demonstrates the effectiveness of our method.
\begin{figure*}[t]
\centering
\subfigure{}{
\begin{minipage}[t]{1\textwidth}
\includegraphics[width = 1\columnwidth]{./image/Figure19.pdf}
\end{minipage}}
\caption{Frequency of each plugging location to be selected on different datasets with the trunk model ResNet-26.}
\label{fig:seven}
\end{figure*}
\subsubsection{Frequency of each plugging location to be selected} We report the frequency with which each plugging location is selected. For plugging strategy selection, we run our method several times ($10$ in this experiment) and calculate the frequency of plugging an adapter at each location. For VGG-16, there are $15$ possible plugging locations, while for ResNet-26 the number becomes $25$.
The results of taking VGG-16 as the common trunk model are presented in Figure~\ref{fig:six}. On each dataset, some plugging locations are selected with markedly higher frequency, implying that these locations are important for learning that domain. Moreover, the frequency of the same plugging location differs across datasets. These results demonstrate that a domain-specific plugging strategy is needed for each domain, which fits our motivation. With ResNet-26 as the trunk (Figure~\ref{fig:seven}), we obtain similar observations.
\begin{table*}[t]
\centering
\caption{Comparison of different paradigms on the benchmark of seven domains with the trunk model MobileNet. The best value is in \textbf{bold}.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{lcccccccccc}
\toprule
Method &FGVC &MITIndoor & Flowers & Caltech256 & SVHN & SUN397 &CIFAR100 & Ave. Acc.& Total Param.\\
\midrule
\midrule
mobilenet-FNFT &\textbf{79.63$\pm$0.14\%} &\textbf{68.94$\pm$0.08\%}& \textbf{95.69$\pm$0.11\%} & 82.71$\pm$0.17\% & \textbf{95.56$\pm$0.13\%} & 53.08$\pm$0.12\% &\textbf{78.90$\pm$0.08\%}& \textbf{79.22\%}&7\\
\midrule
mobilenet-Ours & 79.55$\pm$0.05\%&68.43$\pm$0.11\%&94.51$\pm$0.05\% &\textbf{84.09$\pm$0.03\%} &95.42$\pm$0.02\%&\textbf{53.31$\pm$0.09\%} & 78.84$\pm$0.11\%&79.16\% &\textbf{4.55}\\
\bottomrule
\end{tabular}
}
\label{tab:mobilenet:sota:one}
\end{table*}
\begin{table*}[t]
\centering
\caption{Accuracy, average accuracy, score and total parameter cost for seven popular vision datasets with the trunk model VGG-16. The best value is in \textbf{bold}.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{lccccccccccc}
\toprule
Method &FGVC &MITIndoor & Flowers & Caltech256 & SVHN & SUN397 &CIFAR100 & Ave. Acc. &S.& Total Param.&FLOP\\
\midrule
\midrule
FNFT &85.73\% &71.77\% & 95.67\% & 83.73\% & 96.41\% & 57.29\% &80.45\% & 81.58\%&1750&7& \textbf{1}\\
\midrule
\midrule
BN~\cite{bilen2017universal} &63.04\%& 57.60\% & 91.47\% & 73.66\% & 91.10\% & 47.04\% &64.80\% & 69.82\%&253& \textbf{$\approx$ 1}& $\approx$ 1 \\
\midrule
DAN~\cite{rosenfeld2018incremental} &86.80\%& 63.02\% & 92.65\% & 68.63\% & \textbf{96.55\%} & 45.98\% & 74.45\% & 75.44\%&957&2.84 &1.15\\ %
\midrule
RA~\cite{rebuffi2017learning} &88.92\% &72.40\%& 96.43\% & \textbf{84.17\%} & 96.13\% & \textbf{57.38\%}&79.55\%& 82.14\%&1935&2.85&1.15\\
\midrule
PA~\cite{rebuffi2018efficient} &86.23\% &71.41\% & 95.20\% & 84.02\% & 96.05\% & 57.27\% &\textbf{79.85\%} & 81.43\%&1656&2.84&1.15\\
\midrule
BP-NAS~\cite{liu2020block}&89.01\% &72.53\% & 96.27\% & 83.64\% & 96.09\% & 57.14\% &79.36\% &82.01\%&1891&2.41&1.12\\
\midrule
PolSAR-DNAS~\cite{dong2020automatic}&86.59\% &70.13\% & 95.88\% & 83.48\% & 96.34\% & 57.26\% &78.59\% &81.18\%&1715&3.02&1.16\\
\midrule
Ours & \textbf{89.34\%}&\textbf{73.51\%} &\textbf{96.96\%} &83.80\% &96.47\%&57.28\% & 79.48\%& \textbf{82.41\%}&\textbf{2082}&1.84&1.09\\
\bottomrule
\end{tabular}
}
\label{tab:five}
\end{table*}
\subsubsection{The accuracy of our method with regard to domain diversity}
We give a detailed analysis of the accuracy of our method with regard to domain diversity. On the benchmark of seven domains, we construct two datasets: 1) one containing the domains that differ most from ImageNet (i.e., FGVC+Flowers+SVHN, which cover particular fine-grained classes, whereas ImageNet covers general coarse-grained object classes); 2) another containing the domains that are more similar to ImageNet (i.e., Caltech256+SUN397+CIFAR100). On these two datasets, we compare our method with the baseline RA~\cite{rebuffi2017learning}. As shown in Table~\ref{tab:five}, our method outperforms RA by a large margin on the domains of the first dataset, while the two methods perform close to each other on the domains of the second. These results show that the performance gap widens as the domains become more diverse.
\subsubsection{Comparison of different paradigms}
We compare two different paradigms on the benchmark of seven domains: one trains a smaller network from scratch for each domain (denoted by ``mobilenet-FNFT"), and the other employs our NAS-driven multi-domain learning method to plug a set of adapters into the trunk model MobileNet (denoted by ``mobilenet-Ours"). As shown in Table~\ref{tab:mobilenet:sota:one}, compared to mobilenet-FNFT, mobilenet-Ours achieves comparable performance while using only about 65\% of the parameters.
\subsection{Comparison to Previous Methods}
In this section, we evaluate the MDL performance of our proposed scheme on two benchmarks against other methods with different adapter structures and architecture search methods, including RA~\cite{rebuffi2017learning}, BN~\cite{bilen2017universal}, DAN~\cite{rosenfeld2018incremental}, PA~\cite{rebuffi2018efficient}, BP-NAS~\cite{liu2020block}, and PolSAR-DNAS~\cite{dong2020automatic}.
\begin{table*}[t]
\centering
\caption{Accuracy, average accuracy, score and total parameter cost for the Visual Decathlon Challenge with the trunk model ResNet-26. The best value is in \textbf{bold}.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{lccccccccccccc}
\toprule
Method &ImNet &Airc. & C100 & DPed & DTD & GTSR & Flwr &OGlt& SVHN&UCF& Ave. Acc. &S.& Total Param.\\
\midrule
\midrule
FNFT &59.87\% &60.34\% & 82.12\% & 92.82\% & 55.53\% & 97.53\% &81.41\% & 87.69\%&96.55\%&51.20\%& 76.51\% &2500& 10\\
\midrule
\midrule
BN~\cite{bilen2017universal} & 59.87\% &43.05\% & 78.62\% & 92.07\% & 51.60\% & 95.82\% &74.14\% & 84.83\%&94.10\%&43.51\%& 71.76\% &1263& \textbf{$\approx$ 1}\\
\midrule
DAN~\cite{rosenfeld2018incremental} &57.74\% &64.11\% & 80.07\% & 91.29\% & 56.54\% & 98.46\% &86.05\% & 89.67\%&\textbf{96.77\%}&49.38\%& 77.01\% &2851& 2.02\\
\midrule
RA~\cite{rebuffi2017learning}& 59.23\% &63.73\% & 81.31\%& 93.30\% & 57.02\% & 97.47\% &83.43\% & 89.82\%&96.17\%&50.28\%& 77.17\% &2643& 2.03\\
\midrule
PA~\cite{rebuffi2018efficient}& 60.32\% &64.21\% & 81.91\% & \textbf{94.73\%} & 58.83\% & \textbf{99.38\%} &84.68\% & 89.21\%&96.54\%&\textbf{50.94\%}& 78.07\% &3412& 2.02\\
\midrule
BP-NAS~\cite{liu2020block}& 60.35\% &64.19\% & \textbf{81.92\%} & 94.67\%& 58.94\% & 98.77\% &84.64\% & 89.99\%&96.57\%&50.88\%& 78.09\% &3247& 1.86\\
\midrule
PolSAR-DNAS~\cite{dong2020automatic}& 59.97\% &64.14\% & 81.42\% & 93.54\% & 58.47\% & 98.34\% &83.96\% & 89.94\%&96.35\%&50.72\%& 77.69\% &2950& 2.31\\
\midrule
Ours & \textbf{60.43\%} &\textbf{64.32\% }& 81.70\% & 94.61\% & \textbf{59.47\%} &99.34\% & \textbf{84.77\%}& \textbf{90.02\%}&96.63\%&50.87\%& \textbf{78.22\%} &\textbf{3446}&1.54\\
\bottomrule
\end{tabular}
}
\label{tab:six}
\end{table*}
\paragraph{Results on the benchmark of seven domains}
We evaluate methods on the benchmark consisting of seven visual domains, with VGG-16 as the trunk structure. As shown in Table~\ref{tab:five}, FNFT, i.e., finetuning the full network for each domain, takes the most parameters since it uses a separate copy of the trunk for each domain. Our method yields $82.41\%$ average accuracy with $1.84$ times the number of parameters. Compared to RA, this accuracy is on par with their results, but the parameter cost is much lower (saving almost $55\%$ of the additional parameter cost). Although BN adds almost no additional parameters, its average accuracy is about $13\%$ lower than that of our method. As shown in Table~\ref{tab:five}, our method achieves better average accuracy and lower FLOPs than DAN. Compared to RA, PA, BP-NAS and PolSAR-DNAS, our method also achieves lower FLOPs with comparable performance. The lower computational cost comes from two aspects: 1) selecting an appropriate plugging strategy reduces the computational cost because fewer adapters are plugged into the trunk model; 2) searching for a simple adapter structure on some domains reduces the computational cost of each adapter. Our method achieves only comparable accuracy to the strongest baselines because it adaptively keeps a trade-off between performance and memory cost for each domain. As shown in Table~\ref{tab:five}, the average accuracy of our method outperforms that of the baseline with a simple adapter structure (BN) by 12.59\%. Compared to the baseline with a complicated adapter structure (RA), our method achieves comparable performance while using only around 60\% of the memory cost. These results demonstrate that our method achieves a good balance between effectiveness and efficiency. We have also conducted experiments enlarging the proposed model to a total parameter cost similar to that of RA with the VGG-16 trunk; in this setting, the average accuracy of our method increases by a further 0.64\%.
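The quoted parameter savings can be checked directly from the tabulated relative parameter costs (a short sanity-check sketch; only values stated in the text and tables are used):

```python
# Relative total parameter costs (in multiples of the trunk model) from the
# seven-domain comparison with the VGG-16 trunk; the trunk itself counts as 1x,
# so the *additional* adapter cost of a method is (total - 1).
ra_total, ours_total = 2.85, 1.84
additional_saving = 1.0 - (ours_total - 1.0) / (ra_total - 1.0)
print(round(100 * additional_saving))  # about 55% of RA's additional parameters saved

# MobileNet comparison: 4.55x (ours) vs 7x (per-domain finetuning),
# i.e. about 65% of the parameters.
mobilenet_ratio = 4.55 / 7.0
print(round(100 * mobilenet_ratio))
```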
\paragraph{Results on the Visual Decathlon benchmark}
We also analyze the performance on the Visual Decathlon benchmark, taking ResNet-26 as the trunk model. As shown in Table~\ref{tab:six}, BN utilizes the fewest parameters but performs poorly across the tested domains; both the average accuracy and the score of our method are better than those of BN. Compared with the other methods, our method achieves higher accuracy and score with lower total parameter cost. On some specific domains such as DTD, our method achieves the highest accuracy.
\section{Conclusion}\label{conclusion}
In this paper, we have proposed a novel NAS-driven multi-domain learning scheme, which aims to automatically set up the adapter plugging strategy and adaptively fulfill the adapter structure design. The proposed scheme is capable of utilizing NAS to learn where to plug as well as what adapter structure to plug. With the plugging strategy, our scheme is flexible in adapting to different domains. When compared to other methods, the MDL model obtained by our scheme is more compact and discriminative. Comprehensive experiments and analysis demonstrate the effectiveness of our scheme.
\section*{Acknowledgment}
This work is supported in part by the National Key Research and Development Program of China under Grant 2020AAA0107400, the National Natural Science Foundation of China under Grant U20A20222, the Zhejiang Provincial Natural Science Foundation of China under Grant LR19F020004, and a key scientific and technological innovation research project of the Ministry of Education.
\bibliographystyle{IEEEtran}
\section{Introduction}
Bose-Einstein condensation is a macroscopic quantum phenomenon, which occurs spontaneously at thermodynamic equilibrium in a bosonic many-particle system at low temperatures or high particle densities. The phenomenon manifests itself through the emergence of a coherent state of matter, consisting of a macroscopic number of particles with the same energy and momentum at the lowest energy level of the system.
Its existence was predicted for an ideal bosonic quantum gas by Satyendra Nath Bose and Albert Einstein in 1924-1925. \cite{Einstein2005} In 1968, Herbert Fr\"{o}hlich proposed the concept of condensation of electric modes driven out of thermal equilibrium by external excitation. \cite{Frohlich1968} While the Bose-Einstein condensate (BEC) closest to the ideal gas model was first observed in low-density clouds of ultra-cold atoms, \cite{Anderson1995, Davis1995} well-known phenomena such as superfluidity in liquid $^4 \mathrm{He}$ and superconductivity of Cooper pairs can also be related to Bose-Einstein condensation. Later, equilibrium and non-equilibrium BEC-like \cite{Rodrigues2018} states have been found in various quasi-particle systems such as magnons in liquid $^3 \mathrm{He}$ \cite{BorovikRomanov1984, Bunkov2007} and solid-state ferromagnets \cite{Demokritov2006, Serga2014, Safranski2017, Schneider2020, Divinskiy2021}, triplons in dimerized quantum antiferromagnets, \cite{Giamarchi2008} exciton-polaritons, \cite{Kasprzak2006, Lerario2017} phonons, \cite{Rodrigues2006} and photons \cite{Klaers2010, Damm2016}. In many of the quasiparticle systems, supercurrent and superfluid effects, \cite{Borovik-Romanov1988, Volovik2008, Amo2009, Bozhko2016} and related phenomena such as Josephson oscillations, \cite{Lagoudakis2010, Abbarchi2013, Kreil2021, Autti2020} Bogoliubov waves, \cite{Bozhko2019} and quantized vorticity \cite{Nowik-Boltyk2012} were reported.
The use of such quasiparticle condensates in magnonics \cite{Pirro2021, Barman2021, Chumak2022} and spintronics \cite{Hoffmann2015} opens the way to a new generation of functional and logic devices that exploit the spontaneous coherence of Bose-Einstein condensates and their unusual transport properties. \cite{Dzyapko2008, Rezende2009, Nakata2015, Safranski2017, Tserkovnyak2017, Bunkov2020, Noack2021a, Mohseni2022} The essential task is to control the characteristics of magnon condensates.
This work focuses on the employment of supercurrents as a means of transport inside a magnon BEC. The BEC is observed in an in-plane magnetized monocrystalline film of yttrium iron garnet (YIG), a dielectric ferrimagnet with very low magnetic damping. \cite{Cherepanov1993, Chumak2017} In this system, condensation can be accomplished quite easily by overpopulating the magnon gas via parametric electromagnetic pumping. Furthermore, the high, experimentally achievable magnon densities, reaching $10^{19} - 10^{20} \mathrm{cm}^{-3}$, allow the realization of Bose-Einstein condensation conditions even at room temperature. \cite{Demokritov2006, Serga2014}
As the density of the gas increases, its chemical potential rises to the bottom of the magnon spectrum, and two energy-degenerated BEC states with opposite wavevectors $\pm q_\mathrm{BEC}$ and zero group velocities are formed there. \cite{Mohseni2022}
The straightforward approach for spatial manipulation of a BEC is to control the surrounding energy landscape, for instance by applying an additional, spatially confined bias magnetic field. It has been shown that this kind of artificial topology can lead to redistribution of the magnon density inside the condensate \cite{Borisenko2020, Borisenko2020_2, Kreil2021}. However, since the local magnetic fields are created by electric currents flowing through strip conductors on the surface of a YIG film, this method comes with demanding restrictions in terms of landscape design.
On the contrary, changing the saturation magnetization opens up an alternative path. It has been shown that efficient control of the magnon energy landscape and realization of supercurrent BEC transport can be achieved using optical heating. \cite{Bozhko2016, Kreil2018, Bozhko2019} An increased temperature results in a decreased saturation magnetization, which in turn leads to a lowered frequency of the condensate. Although the action of this method on the BEC frequency is weaker and slower compared to the manipulation of the magnetization field, it provides an opportunity to investigate the effect of diverse form factors of reconfigurable energy landscapes \cite{Vogel2015, Vogel2018} on the dynamics of the magnon BEC.
Here, using the optical heating method, we achieved an organized spatial redistribution of the BEC density caused by the flux of condensed magnons from heated to cold regions of the YIG film. In particular, we show that by choosing an appropriate distance between the two higher temperature regions, it is possible to form a pronounced BEC occupancy peak between them, and to extend the condensate lifetime in such a heat trap.
\section{Experimental setup}
The experimental setup shown in Fig.\,\ref{F:Setup} can be divided into three modules: The excitation module for the generation of the condensate via parallel parametric pumping, the optical heating module for forming and projecting optical heating patterns onto the YIG film sample, and the optical detection module using Brillouin light scattering spectroscopy (BLS). The aim of the following sections is to describe the function of each of these modules, and subsequently, their interplay. For the control of each individual module, the automation framework \textit{thaTEC:OS} (THATec Innovation GmbH) was used. The subsequent data evaluation was performed using \textit{Python} along with additional libraries such as \textit{PyThat} and \textit{xarray}. \cite{THATec, PyThat, hoyer2017xarray, 2020SciPy-NMeth}
\subsection{Microwave-setup for magnon excitation}
The excitation module is designed to create magnons using a parallel parametric pumping process.
In this process, the microwave pumping magnetic field $\boldsymbol{h}_\mathrm{p}$ is directed parallel to the external, constant magnetization field $\boldsymbol{H}_\mathrm{ext}$ and, therefore, along the equilibrium direction of the magnetization vector $\boldsymbol{M}$. Due to the ellipticity of the precession motion of $\boldsymbol{M}$, the length of its longitudinal component, oriented along the field $\boldsymbol{H}_\mathrm{ext}$, is not conserved but oscillates with twice the precession frequency. When the pumping frequency coincides with this oscillation frequency, energy transfer to the spin system occurs. \cite{Bracher2014, Schlomann1960} In this case, a microwave photon of the pumping field with a near-zero wavenumber $q_\mathrm{p} \approx 0$ decays into two magnons of half the pumping frequency $\omega_\mathrm{p}/2$ with oppositely directed wave vectors $\pm q_\mathrm{pm}$.
When the pumping power $P_\mathrm{p}$ exceeds the threshold of parametric instability,\cite{Mihalceanu2018} that is, after the influx of parametrically generated magnons exceeds their losses, the magnon number begins to grow exponentially until it saturates, since the nonlinear phase mismatch between the electromagnetic pumping and the longitudinal magnetization oscillations limits further amplification. \cite{Lvov1994book}
Not limited by magnetic losses, this process provides an efficient magnon injection mechanism required for Bose-Einstein condensation. The injected magnons move to the bottom of the spectrum due to cascading four-magnon scattering and kinetic instability processes \cite{Bozhko2015, Clausen2015, Kreil2018}. At a pump power of about 24\,dB above the threshold of the parametric instability, the threshold of BEC formation is reached, and a strong magnon accumulation in a narrow phase volume near the lowest energy state $(\omega_\mathrm{BEC}, \pm q_\mathrm{BEC})$ occurs. \cite{Noack2021a}
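The threshold behavior described above — exponential growth of the magnon number above threshold, limited by a nonlinear saturation mechanism — can be illustrated with a phenomenological rate-equation toy model. This is not the actual S-theory of parametric pumping, and all rate constants here are illustrative:

```python
import numpy as np

def magnon_population(gain, loss, beta, n0=1e-6, dt=1e-3, steps=20000):
    """Forward-Euler integration of the toy rate equation
    dn/dt = (gain - loss) * n - beta * n**2.
    Above threshold (gain > loss) the population grows exponentially
    until the nonlinear term saturates it."""
    n = n0
    trace = np.empty(steps)
    for i in range(steps):
        n += dt * ((gain - loss) * n - beta * n * n)
        trace[i] = n
    return trace

trace = magnon_population(gain=2.0, loss=1.0, beta=1.0)
# The population approaches the steady state (gain - loss) / beta = 1.0.
```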
It should be noted that in our experiment, as in several previous works \cite{Serga2014, Kreil2018, Kreil2019, Noack2021a}, the formation of a coherent magnon condensate evolved during the free evolution of the congested magnon gas only after the pumping action was turned off.
In our experiment (see Fig.\,\ref{F:Setup}), the YIG film sample is magnetized by a bias magnetic field $\boldsymbol{H}_\mathrm{ext}$ of \SI{180}{\milli\tesla}. The pumping field $\boldsymbol{h}_\mathrm{p}$ is induced by a microwave electric current in a \SI{100}{\micro\meter}-wide microstrip resonator tuned to the pumping frequency \hbox{$\omega_\mathrm{p}=2 \pi \cdot \SI{14}{GHz}$} and placed in direct contact with the YIG film of \SI{5}{\micro\meter}-thickness.
The resonator is fed by microwave pulses of \SI{1}{\micro\second} duration and \SI{40}{\watt} power with a repetition frequency of \SI{1}{\kilo\hertz} from a microwave generator and power amplifier.
The chosen pumping parameters allowed the formation of BECs after the end of the pump pulses but did not lead to appreciable heating of the YIG film by relaxing magnons and microwave currents.
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{slm-bls.pdf}
\caption{Schematic depiction of the experimental setup. The optical heating module is shown on the right, the YIG sample and the BLS probing beam in the middle and the microwave components on the left. The optical heating module is fixed on a motorized stage, which offers the possibility to move the intermediate image and therefore the projection of the heating pattern on the sample surface.}
\label{F:Setup}
\end{figure}
\subsection{Heating module for spatial supercurrent manipulation}
The heating module is the core piece for the control of the magnon BEC. It creates reconfigurable energy landscapes in which the dynamics of a BEC can be observed. By projecting complex optical intensity landscapes onto the surface of the sample, the local temperature can be increased. In consequence, the saturation magnetization $M_\mathrm{s}$ is decreased, which results in a local drop of the spin wave frequency spectrum \cite{Bozhko2019, Kreil2018, Vogel2015, Vogel2018}.
These optical intensity landscapes were created by means of phase-based wavefront modulation of light. This was achieved by using a spatial-light-modulator (SLM) \textit{Santec SLM-100}, a 2D liquid crystal display, which can imprint a phase map onto a coherent beam of light by changing the refractive index of each pixel individually. The phase-maps used for the wavefront modulation are calculated with a Gerchberg-Saxton algorithm \cite{Gerchberg1972, Vogel2015, Vogel2018, Alsaka2018, Wang2016}. An intermediate image of the desired intensity pattern is then created by placing a lens at its focal distance behind the SLM (see Fig.\,\ref{F:Setup}). This lens creates an image of the Fourier transform of the previously calculated phase map, which in turn corresponds to the desired intensity pattern. An iris aperture was added at this position, blocking higher orders of the diffraction pattern created by the SLM. The intermediate image is then demagnified by a microscope lens-system, decreasing the size of the projected image on the sample.
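A minimal sketch of the Gerchberg-Saxton loop used to compute such phase maps is shown below. The grid size, iteration count, and the two-spot target are illustrative (a real SLM pattern would use the device resolution), but the alternation between the SLM plane and the Fourier-image plane is the core of the algorithm:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """Iteratively find a pure-phase map whose far-field (Fourier)
    intensity approximates the target pattern."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    source_amplitude = np.ones_like(target_amplitude)  # uniform illumination
    for _ in range(iterations):
        # Propagate to the image plane and impose the target amplitude.
        field = np.fft.fft2(source_amplitude * np.exp(1j * phase))
        field = target_amplitude * np.exp(1j * np.angle(field))
        # Propagate back and keep only the phase (the SLM modulates phase only).
        phase = np.angle(np.fft.ifft2(field))
    return phase

# Illustrative target: two bright spots (cf. the two-spot heating pattern).
target = np.zeros((64, 64))
target[32, 20] = target[32, 44] = 1.0
phase_map = gerchberg_saxton(target)
```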
Since the optical absorption of YIG increases drastically when approaching the UV spectrum\cite{Doormann1984}, a laser source \textit{Cobolt Twist}, which operates at \SI{457}{\nano\meter}, was chosen for the illumination of the SLM. This way, the optical intensity is almost completely absorbed even by thin YIG-layers. Furthermore, the small line-width of the light source and hence long coherence length grant high contrast and resolution of the resulting image.
The thermal landscape was created by continuous laser heating with \SI{20}{\milli\W} of power at the sample. In order to erase the previous structure before the start of a new measurement, the laser heating was disabled for one minute, whenever the heating pattern had been altered.
\subsection{BLS-Module for BEC detection}
The detection of the magnon density at the bottom of the spin-wave spectrum is achieved by means of frequency-, wavevector-, time-, and space-resolved Brillouin light scattering (BLS) spectroscopy \cite{sandercock1975, Sandweg2010, Bozhko2020, Buttner2000}. In this method, the intensity of light inelastically scattered by magnons is proportional to the density of magnons involved in the process, and the frequency shift of light is equal to the frequency of these magnons.
The direction of the external magnetic field is parallel to the projection of the probing beam on the sample surface, which allows for the detection of dipole-exchange magnons in the \textit{backward volume} geometry. \cite{Rezende2020} The magnon wavevectors can be selected with resolution $\pm$ \SI{0.2}{\radian\per\micro\meter} by changing the angle of incidence of the probing light. \cite{Sandweg2010, Bozhko2020, Bozhko2017, Bozhko2019}. The probing laser source is a \textit{Coherent Verdi} laser with a wavelength of \SI{532}{\nano\meter} under an angle of incidence of \SI{11}{\degree}, which corresponds to a detected magnon wavenumber of \SI{4.5}{\radian\per\micro\meter}. The area of detection is determined by the diameter of the laser spot of \SI{50}{\micro\meter}. In order to prevent the heating of the sample due to the optical power induced by the probing laser beam, this beam is switched off using an acousto-optic modulator as soon as the magnon condensate relaxes and is switched on just before the next pumping pulse is delivered. The low repetition rate, along with a short measurement window of a few microseconds, ensures that the average optical power is negligible.
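The quoted wavenumber follows from the standard BLS backscattering relation $q = (4\pi/\lambda)\sin\theta$; this relation is assumed here, as the text quotes only the resulting numbers. A quick check with the stated values:

```python
import math

def bls_wavenumber(theta_deg, wavelength_um):
    """Magnon wavenumber probed in BLS backscattering:
    q = 4 * pi * sin(theta) / lambda."""
    return 4.0 * math.pi * math.sin(math.radians(theta_deg)) / wavelength_um

q = bls_wavenumber(theta_deg=11.0, wavelength_um=0.532)
print(round(q, 2))  # rad/um; consistent with the quoted 4.5 rad/um
```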
\begin{figure}[tb]
\includegraphics[width=1.0\linewidth]{spots.pdf}
\caption{Schematic of the configuration of motion. The filled green and blue circles represent the initial position of the probing spot and the two heating spots, respectively. The dotted blue circles depict the displacement of the heating laser pattern. The probe spot remains in a static position while the heating pattern moves along the antenna, as shown by blue arrows.}
\label{F:movement}
\end{figure}
\subsection{Combination of BLS and SLM}
In order to simultaneously illuminate the sample with the probing beam of the detection module and project an optical intensity landscape with the heating module, a dichroic long-pass mirror with a reflectivity cut-off wavelength of \SI{490}{\nano\meter} is placed directly above the sample (Fig.\,\ref{F:Setup}). While the probing beam can pass through this mirror unperturbed, the heating beam is reflected, creating an additional port for optical coupling. The heating module is mounted on a separate, motorized table, whose position can be controlled independently. The intermediate image created by the Fourier lens serves as an interface between the BLS module and the heating module.
\begin{figure*}[t]
\includegraphics[width=1.0\linewidth]{im2_norm_decay_lines.pdf}
\caption{The four panels on the left side show cross sections of the integrated BLS intensity in the vicinity of the two-spots heating pattern with an inter-spot-distance of \SI{160}{\micro\meter}. The cross sections are depicted for a) \SI{10}{\nano\second} b) \SI{200}{\nano\second} c) \SI{400}{\nano\second} d) \SI{600}{\nano\second} after switching off the pumping pulse. The intensity has been normalized to the corresponding reference signal without heating, shown by the blue line. The time traces as shown in e) are taken at different positions, marked by the red (between the spots), orange (left spot), purple (right spot), and blue (reference) markings. The dotted lines show the regression of an exponential decay: In the upper panel, the time interval between \SI{300}{\nano\second} and \SI{550}{\nano\second} has been used, in the lower panel the interval between \SI{550}{\nano\second} and \SI{1200}{\nano\second}. }
\label{F:Data1}
\end{figure*}
Since the probing beam and the heating beam are mechanically decoupled, the intermediate image of the optic intensity pattern can be moved freely. This offers the possibility to move the intensity pattern over the sample while the location of investigation may remain stationary (see Fig.\,\ref{F:movement}). In this case, the magnification factor of the microscope connecting both modules has to be considered as a scaling factor between the movement of the stage and the movement of the resulting intensity pattern. Note that, since the probing point is fixed and only the structure of the sample changes, the reference case without heating is represented by only one spatial position.
Although the described wavefront modulation technique can be used to create very complex intensity patterns \cite{Wang2016, Vogel2015, Vogel2018}, in this study, we focused on a simple two-point structure with a varying distance. On the one hand, the chosen configuration represents the most fundamental case of a periodic structure. On the other hand, concentrating the available optical power in a small spatial area maximizes the heating effect on the BEC behavior. Since the magnon condensate is located above the microstrip antenna, \cite{Kreil2021, Serga2012} the chosen optical pattern also fits the system's symmetry. Therefore, the probing beam and the heating pattern were accurately positioned over the microstrip resonator before each measurement.
\section{Experimental Results}
\subsection{Spatial distribution of the magnon condensate}
Figure\,\ref{F:Data1} shows the measured BLS intensity as a function of time and space. The data is integrated over the frequency interval between \SI{4.0}{\giga\hertz} and \SI{4.6}{\giga\hertz}. Panels (a), (b), (c), and (d) show the resulting magnon density as a function of the relative position of the heating pattern for different moments in time. While no influence of the temperature landscape is visible right after the end of the pumping pulse at $t=0$ (see Fig.\,\ref{F:Data1}a), the formation of two pronounced dips can be observed, starting at around \hbox{\SI{200}{\nano\second} (b)}. Simultaneously, a pronounced peak emerges between both dips, along with two weaker peaks at the outer edges of the heating spots at \hbox{$y=\pm$\SI{80}{\micro\meter}}. Although the absolute magnitude of the integrated BLS intensity decreases exponentially with time due to the inherent damping of the freely evolving magnon system, the contrast between the dips and peaks only gets more pronounced. While the magnon density far away from the heated region is comparable to the reference case, depicted by the horizontal blue line, the BLS intensity between both heating spots is up to two times as high.
It can be assumed that---although not directly exposed to optical heating---the region between the two heating spots is still significantly hotter than in the case without any optical heating.
This suggests that the difference in magnon density between heated and unheated regions is caused not just by the difference in their temperatures but by the presence of a specific temperature gradient.
Although increasing the temperature of the YIG film may reduce the efficiency of the BLS scattering process, \cite{Olsson2018} it does not explain the increased BLS intensity between the heating spots.
On a side note, in Figs.\,\ref{F:Data1}c-\ref{F:Data1}d, there are also two more intensity dips at \hbox{$\pm$ \SI{240}{\micro\meter}}. These are due to higher diffraction orders in the intensity landscape, which appear with about \SI{10}{\percent} of the intensity of the central order.
\subsection{Temporal dynamics of the magnon condensate}
Figure\,\ref{F:Data1}e compares the temporal behavior of the magnon population in the heated and cold regions. Both regions show the same BLS intensity during pumping and even at the beginning of the condensation process, until \SI{200}{\nano\second}. However, at the peak of the condensation process, \cite{Serga2014} the behavior of the magnon populations in these regions begins to differ. At this point, the magnon density in the heated regions begins to decrease significantly faster than in the reference case. In contrast, the magnon population in the region between the two heating spots decreases significantly more slowly than in the heated regions, and even more slowly than in the reference situation (see the upper panel). This difference persists until about \SI{550}{\nano\second}, when the decay rates begin to equalize again (see the lower panel).
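The decay times behind the dotted regression lines in Fig.\,\ref{F:Data1}e can be extracted by a linear fit in log space within the stated time windows. A minimal sketch on synthetic data (the decay constant here is illustrative, not a measured value):

```python
import numpy as np

def fit_decay_time(t, intensity, t_start, t_stop):
    """Fit I(t) = I0 * exp(-t / tau) on [t_start, t_stop] via a linear
    fit of log(I) versus t; returns the decay time tau."""
    mask = (t >= t_start) & (t <= t_stop)
    slope, _ = np.polyfit(t[mask], np.log(intensity[mask]), 1)
    return -1.0 / slope

# Synthetic trace: 250 ns decay time, sampled every 10 ns.
t = np.arange(0.0, 1200.0, 10.0)        # ns
trace = np.exp(-t / 250.0)
tau = fit_decay_time(t, trace, 300.0, 550.0)  # window as in the upper panel
print(round(tau))  # recovers 250 ns
```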
As a result, it stands to reason that this characteristic magnon distribution is caused by magnon transport from the hot to the cold regions. While a continuous outflow of magnons from the heated region accelerates the local effective decay, the opposite effect occurs in the neighboring colder regions, where an influx of magnons works against the internal attenuation.
The observed phenomenon can be associated with the emergence of magnon supercurrents as found by Bozhko \textit{et al.} \cite{Bozhko2016, Bozhko2019}. In those works, a large decrease in the magnon density in the heated region, caused by the outflow of the magnon BEC, was also recorded.
\subsection{BEC behavior as a function of the inter-spot-distance}
Figure\,\ref{F:Data2} ultimately shows the central benefit of the measurement technique presented above. The distance between the two heating spots can be changed without changing any other parameter, including the position of the probing point and the location of the sample in the magnetization field. As for the data shown in Fig.\,\ref{F:Data1}, the position of the temperature landscape was moved, whilst the position of the probing beam remained static.
In Fig.\,\ref{F:Data2}, the deviation from the mean BLS intensity as a function of position and inter-spot distance is shown in a false color scheme. For all of the investigated inter-spot-distances a similar situation is observed. The magnon density drops strongly in the heated region, whereas it is increased above the reference level in the adjacent colder regions. However, although the magnitude of the intensity dips is comparable for all situations studied, the magnon density between the heating spots increases to a lesser degree for larger distances.
This behavior can be attributed to a dilution effect: As long as the system characteristics responsible for the spatial transport of magnons remain constant, the total number of transported magnons is also constant. As the inter-spot-distance increases, these magnons are spread between the two spots, which leads to a decrease in the observed BEC density. However, for the shortest investigated distance, the two high-temperature areas begin to overlap due to thermal diffusion. This weakens the temperature gradient toward the middle point and, consequently, the transport of magnons. So, the contrast in the magnon density is also weakened in this configuration. In this case, thermal diffusion during continuous heating can be recognized as a limiting factor for optical resolution and feature size.
\begin{figure}[t]
\includegraphics[width=1.0\linewidth]{im3.pdf}
\caption{Color map of cross sections at different inter-spot distances. The color code has been chosen such that white refers to values of the reference signal, while red depicts higher and blue lower intensity. The distance between the two heated regions is varied between \SI{112}{\micro\meter} and \SI{355}{\micro\meter}. For all observed distances, the magnon density shows two local minima at the heated regions and a local maximum between them. While the accumulation is most pronounced for \SI{160}{\micro\meter}, it is weakened for large or very small distances.}
\label{F:Data2}
\end{figure}
\section{Conclusion}
In conclusion, we were able to utilize magnon supercurrents to achieve trapping of a magnon Bose-Einstein condensate at a predefined position. The continuous magnon flux towards the trapped region results in an increased lifetime of the condensate in that area. Consequently, the density of the condensate can be controlled by changing the spacing between the adjacent regions with decreased saturation magnetization. Under optimal conditions, an increase in density of up to 100\,\% could be achieved. It has been shown that the demonstrated method offers the potential to create complex magnetization landscapes, which would be challenging to create by other means.
\begin{acknowledgments}
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)---TRR\,173/2---268565370 (Project B04).
\end{acknowledgments}
\section{Introduction}
Sentiment detection is the process of determining whether a text has
a positive or negative attitude toward a given entity (topic) or in
general. Detecting sentiment on Twitter\textemdash a social network
where users interact via short 140-character messages, exchanging
information and opinions\textemdash is becoming ubiquitous. Sentiment
in Twitter messages (tweets) can capture the popularity level of political
figures, ideas, brands, products and people. Tweets and other social
media texts are challenging to analyze as they are inherently different:
the use of slang, mis-spellings, sarcasm, emojis and the co-mentioning of
other messages poses unique difficulties. Combined with the vast amount of
Twitter data (mostly public), these make sentiment detection on Twitter
a focal point for data science research.
SemEval is a yearly event in which teams compete in natural language
processing tasks. Task 4 is concerned with sentiment analysis in Twitter;
it contains five sub-tasks which include classification of tweets
according to 2, 3 or 5 labels and quantification of sentiment distribution
regarding topics mentioned in tweets; for a complete description of
task 4 see \citet{SemEval:2017:task4}.
This paper describes our system and participation in all sub-tasks
of SemEval 2017 task 4. Our system consists of two parts: a recurrent
neural network trained on a private Twitter dataset, followed by a
task-specific combination of model stacking and logistic regression
classifiers.
The paper is organized as follows: section \ref{sec:RNN-Models} describes
the training of RNN models, data being used and model selection; section
\ref{sec:Features-Extraction} describes the extraction of semantic
features; section \ref{sec:Experiments} describes the task-specific
workflows and scores. We review and summarize in section \ref{sec:Review-and-Conclusions}.
Finally, section \ref{sec:Future-Work} describes our future plans,
mainly the development of an LSTM algorithm.
\section{\label{sec:RNN-Models}RNN Models}
The first part of our system consisted of training recursive-neural-tensor-network
(RNTN) models \citep{socher2013recursive}.
\subsection{Data}
Our training data for this part was created by taking a random sample\footnote{We used Twitter stream API.}
from Twitter and having it manually annotated on a 5-label basis to
produce fully sentiment-labeled parse-trees, much like the Stanford
sentiment treebank. The sample contains twenty thousand tweets with
the following sentiment distribution:\medskip{}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\cline{2-6}
\multicolumn{1}{c|}{} & {\small{}v-neg.} & {\small{}neg.} & {\small{}neu.} & {\small{}pos.} & {\small{}v-pos.}\tabularnewline
\hline
{\small{}Train} & {\small{}$8.4\%$} & {\small{}$23.2\%$} & {\small{}$31.7\%$} & {\small{}$25.3\%$} & {\small{}$11.4\%$}\tabularnewline
\hline
{\small{}Test} & {\small{}$8.6\%$} & {\small{}$23.0\%$} & {\small{}$33.2\%$} & {\small{}$24.8\%$} & {\small{}$10.4\%$}\tabularnewline
\hline
\end{tabular}\medskip{}
\par\end{center}
\subsection{Preprocessing}
First we build a custom dictionary by means of crawling Wikipedia
and extracting lists of brands, celebrities, places and names. The
lists were then pruned manually. We then define the following steps
for preprocessing tweets:
\begin{enumerate}
\item Standard tokenization of the sentences, using the Stanford coreNLP
tools \citep{manning-EtAl:2014:P14-5}.
\item Word-replacement step using the Wiki dictionary with representative
keywords.
\item Lemmatization, using coreNLP.
\item Emojis: removing duplicate emojis, clustering them according to sentiment
and replacing them with representative keywords, e.g. ``happy-emoji''.
\item Regex: removing duplicate punctuation marks, replacing URLs with a
keyword, removing Camel casing.
\item Parsing: parts-of-speech and constituency parsing using a shift-reduce
parser\footnote{\href{http://nlp.stanford.edu/software/srparser.shtml}{http://nlp.stanford.edu/software/srparser.shtml}.},
which was selected for its speed over accuracy.
\item NER: using entity recognition annotator\footnote{\href{http://nlp.stanford.edu/software/CRF-NER.shtml}{http://nlp.stanford.edu/software/CRF-NER.shtml}.},
replacing numbers, dates and locations with representative keywords.
\item Wiki: second step of word-replacement using our custom wiki dictionary.
\end{enumerate}
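To make the regex step concrete, here is a minimal Python sketch; the patterns and the replacement keyword are illustrative stand-ins, not the exact rules used in our system:

```python
import re

def regex_clean(tweet):
    """Illustrative regex step: replace URLs with a keyword, collapse
    duplicate punctuation marks and break CamelCase (patterns are
    simplified stand-ins for the actual rules)."""
    tweet = re.sub(r"https?://\S+", "url-keyword", tweet)   # URLs -> keyword
    tweet = re.sub(r"([!?.])\1+", r"\1", tweet)             # dedupe punctuation
    tweet = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tweet)      # split CamelCase
    return tweet

print(regex_clean("SoCool!!! see http://t.co/x"))  # So Cool! see url-keyword
```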
\subsection{Training}
We used the Stanford coreNLP sentiment annotator, introduced by \citet{socher2013recursive}.
Words are initialized either randomly as $d$ dimensional vectors,
or given externally as word vectors. We used four versions of the
training data; with and without lemmatization and with and without
pre-trained word representations\footnote{Twitter pre-trained word vectors were used, \href{http://nlp.stanford.edu/projects/glove/}{http://nlp.stanford.edu/projects/glove/}}
\citep{pennington2014glove}.
\subsection{Tweet Aggregation}
Twitter messages can comprise several sentences, with different
and sometimes contrary sentiments. However, the trained models predict
sentiment on individual sentences. We aggregated the sentiment for
each tweet by taking a linear combination of the individual sentences
comprising the tweet with weights having the following power dependency:
\begin{align}
h(f,l,\mbox{pol})=(1+f)^{\alpha}\,l^{\,\beta}\,(1+\mbox{pol})^{\gamma}+1,
\end{align}
where $\alpha,\beta,\gamma$ are numerical factors to be found, $f,l,\mbox{pol}$
are the fraction of known words, length of the sentence and polarity,
respectively, with polarity defined by:
\begin{align}
\textrm{pol}=\left|10\cdot\text{vn}+\text{n}-\text{p}-10\cdot\text{vp}\right|,
\end{align}
where vn, n, p, vp are the probabilities as assigned by the RNTN for
very-negative, negative, positive and very-positive label for each
sentence. We then optimized the parameters $\alpha,\beta,\gamma$
with respect to the true labels.
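The aggregation can be sketched in Python as follows; the exponent values passed to \texttt{weight} are placeholders, not the optimized $\alpha,\beta,\gamma$:

```python
def polarity(dist):
    """pol = |10*vn + n - p - 10*vp| for a 5-label distribution
    (vn, n, neu, p, vp)."""
    vn, n, neu, p, vp = dist
    return abs(10 * vn + n - p - 10 * vp)

def weight(f, l, pol, alpha=0.5, beta=1.0, gamma=1.0):
    """Sentence weight h(f, l, pol); the default exponents are
    illustrative, not the fitted values."""
    return (1 + f) ** alpha * l ** beta * (1 + pol) ** gamma + 1

def aggregate(sentences):
    """sentences: list of (dist, f, l) per sentence; returns the
    weighted mean 5-label distribution for the whole tweet."""
    ws = [weight(f, l, polarity(d)) for d, f, l in sentences]
    total = sum(ws)
    return [sum(w * d[k] for w, (d, _, _) in zip(ws, sentences)) / total
            for k in range(5)]
```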
\subsection{Model Selection}
After training dozens of models, we chose to combine only the best
ones using stacking, namely combining the models output using a supervised
learning algorithm. For this purpose, we used the Scikit-learn \citep{scikit-learn}
recursive feature elimination (RFE) algorithm to find both the optimal
number and the actual models, thus choosing the best \uline{five}
models. The models chosen include a representative from each type
of the data we used and they were:
\begin{itemize}
\item Training data without lemmatization step, with randomly initialized
word-vectors of size 27.
\item Training data with lemmatization step, with pre-trained word-vectors
of size 25.
\item 3 sets of training data with lemmatization step, with randomly initialized
word-vectors of sizes 24, 26.
\end{itemize}
The outputs of the five models are concatenated and used as input for the
various tasks, as described in section \ref{subsec:General-Workflow}.
\section{\label{sec:Features-Extraction}Features Extraction}
In addition to the trained RNN models, our system includes a feature
extraction step; we defined a set of lexical and semantic features
to be extracted from the original tweets:
\begin{itemize}
\item In-subject, In-object: whether the entity of interest is in the subject
or object.
\item Containing positive/negative adjectives that describe the entity of
interest.
\item Containing negation, quotations or perfect progressive forms.
\end{itemize}
For this purpose, we used the Stanford deterministic coreference resolution
system \citep{lee2011stanford,recasens_demarneffe_potts2013}.
\section{\label{sec:Experiments}Experiments}
The experiments were developed by using Scikit-learn machine learning
library and Keras deep learning library with TensorFlow backend \citep{abadi2016tensorflow}.
Results for all sub-tasks are summarized in table
\begin{table*}
\begin{centering}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\small{}Task} & {\small{}A } & {\small{}B} & {\small{}C} & {\small{}D} & {\small{}E}\tabularnewline
& {\small{}3-class.} & {\small{}2-class.} & {\small{}5-class.} & {\small{}2-quant.} & {\small{}5-quant.}\tabularnewline
\hline
{\small{}Metric} & $\rho$ & $\rho$ & {\small{}$MAE^{M}$} & {\small{}$KLD$} & {\small{}$EMD$}\tabularnewline
\hline
\hline
{\small{}Score} & {\small{}$0.575$} & {\small{}$0.822$} & {\small{}$0.599$} & {\small{}$0.149$} & {\small{}$0.345$}\tabularnewline
\hline
{\small{}Rank} & {\small{}27/37} & {\small{}11/23} & {\small{}3/15} & {\small{}11/15} & {\small{}6/12}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{tab:Evaluation-results1}Summary of evaluation results, metrics
used and rank achieved, for all sub tasks. $\rho$ is macro-averaged
recall, $MAE^{M}$ is macro-averaged mean absolute error, $KLD$ is
Kullback-Leibler divergence and $EMD$ is earth-movers distance.}
\end{table*}
\ref{tab:Evaluation-results1}.
\subsection{General Workflow\label{subsec:General-Workflow}}
For each tweet, we first ran the RNN models and got a 5-category probability
distribution from each of the trained models, thus a 25-dimensional
vector. Then we extracted sentence features and concatenated them
with the RNN vector. We then trained a Feedforward NN which outputs
a 5-label probability distribution for each tweet. That was the starting
point for each of the tasks; we refer to this process as the pipeline.
\subsection{Task A}
The goal of this task is to classify tweets sentiment into three classes
(negative, neutral, positive) where the measured metric is a macro-averaged
recall.
We used the SemEval 2017 task A data in the following way: using SemEval
2016 TEST as our TEST, partitioning the rest into TRAIN and DEV datasets.
The test dataset went through the previously mentioned pipeline, getting
a 5-label probability distribution.
We anticipated the sentiment distribution of the test data would be
similar to the training data\textemdash as they may be drawn from
the same distribution. Therefore we used re-sampling of the training
dataset to obtain a skewed dataset such that a logistic regression
would predict similar sentiment distributions for both the train and
test datasets. Finally we trained a logistic regression on the new
dataset and used it on the task A test set. We obtained a macro-averaged
recall score of $\rho=0.575$ and accuracy of $Acc=0.587$.
Apparently, our assumption about distribution similarity was misguided
as can be seen in the following table.
\medskip{}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\cline{2-4}
\multicolumn{1}{c|}{} & {\small{}Negative} & {\small{}Neutral} & {\small{}Positive}\tabularnewline
\hline
{\small{}Train} & $15.5\%$ & $41.1\%$ & $43.4\%$\tabularnewline
\hline
{\small{}Test} & $32.3\%$ & $48.3\%$ & $19.3\%$\tabularnewline
\hline
\end{tabular}
\par\end{center}
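The re-sampling step described above can be sketched as follows; this simplified stand-in draws, with replacement, a dataset matching a target label distribution:

```python
import random

def resample_to(dataset, target, size, seed=0):
    """Draw `size` examples whose label distribution matches `target`
    (a dict label -> fraction) by sampling with replacement within each
    label pool; a simplified stand-in for the skewing procedure."""
    rng = random.Random(seed)
    pools = {}
    for x, y in dataset:
        pools.setdefault(y, []).append((x, y))
    out = []
    for label, frac in target.items():
        out.extend(rng.choices(pools[label], k=round(size * frac)))
    rng.shuffle(out)
    return out
```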
\subsection{Tasks B, D}
The goals of these tasks are to classify tweets sentiment regarding
a given entity as either positive or negative (task B) and estimate
sentiment distribution for each entity (task D). The measured metrics
are macro-averaged recall and KLD, respectively.
We started with the training data passing our pipeline. We calculated
the mean distribution for each entity on the training and testing
datasets. We trained a logistic regression from a 5-label to a binary
distribution and predicted a positive probability for each entity
in the test set. This was used as a prior distribution for each entity,
modeled as a Beta distribution. We then trained a logistic regression
where the input is a concatenation of the 5-labels with the positive
component of the probability distribution of the entity's sentiment
and the output is a binary prediction for each tweet. Then we chose
the label\textemdash using the mean positive probability as a threshold.
These predictions are submitted as task B. We obtained a macro-averaged
recall score of $\rho=0.822$ and accuracy of $Acc=0.802$.
Next, we took the mean of the predictions for each entity as the likelihood,
modeled as a Binomial distribution, thus getting a Beta posterior
distribution for each entity. These were submitted as task D. We obtained
a score of $KLD=0.149$.
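The Beta--Binomial update behind tasks B and D can be sketched as follows; the pseudo-count \texttt{strength} used to encode the prior is our own illustrative choice:

```python
def beta_posterior(prior_mean, strength, positives, total):
    """Encode the prior positive probability as Beta(a, b) with a total
    pseudo-count `strength` (an illustrative choice), then update with
    the observed Binomial counts and return the posterior mean."""
    a = prior_mean * strength
    b = (1 - prior_mean) * strength
    a_post = a + positives
    b_post = b + total - positives
    return a_post / (a_post + b_post)
```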
\subsection{Tasks C, E}
The goals of these tasks are to classify tweets sentiment regarding
a given entity into five classes\textemdash very negative, negative,
neutral, positive, very positive\textemdash (task C) and estimate
sentiment distribution over five classes for each entity (task E).
The measured metrics are macro-averaged MAE and earth-movers-distance
(EMD), respectively.
We first calculated the mean sentiment for each entity. We then used
bootstrapping to generate a sample for each entity. Then we trained
a logistic regression model which predicts a 5-label distribution
for each entity. We modified the initial 5-label probability distribution
for each tweet using the following formula:
\begin{align}
p^{\text{new}}(t_{0},c_{0}) & =\sum_{c\in C}\frac{p\left(t_{0},c\right)\cdot p^{\text{entity-LR}}\left(t_{0},c_{0}\right)}{\sum_{t\in T}p\left(t,c\right)},
\end{align}
where $t_{0},c_{0}$ are the current tweet and label, $p^{\text{entity-LR}}$
is the sentiment prediction of the logistic regression model for an
entity, $T$ is the set of all tweets and $C=\left\{ \text{vn, n, neu, p, vp}\right\} $
is the set of labels. We trained a logistic regression on the new
distribution and the predictions were submitted as task C. We obtained
a macro-averaged MAE score of $MAE^{M}=0.599$.
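For concreteness, the modification of Eq.~(3) can be implemented directly; the toy layout (a dict mapping tweets to 5-label probability lists) is our own illustrative representation:

```python
def modified_dist(p, p_lr):
    """Modified 5-label distribution of Eq. (3).
    p, p_lr: dict tweet -> list of 5 probabilities (toy layout);
    p_lr holds the entity-level logistic-regression predictions."""
    labels = range(5)
    colsum = [sum(p[t][c] for t in p) for c in labels]
    return {t: [p_lr[t][c0] * sum(p[t][c] / colsum[c] for c in labels)
                for c0 in labels]
            for t in p}
```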
Next, we defined a loss function as follows:
\begin{align}
\text{loss}(t_{0},c_{0}) & =\sum_{c\in C}\left|c-c_{0}\right|\cdot\frac{p\left(t_{0},c\right)}{\sum_{t\in T}p\left(t,c\right)},
\end{align}
where the probabilities are the predicted probabilities after the
previous logistic regression step. Finally we predicted a label for
each tweet according to the lowest loss, and calculated the mean sentiment
for each entity. These were submitted as task E. We obtained a score
of $EMD=0.345$.
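The label choice by lowest loss, Eq.~(4), can likewise be sketched; the toy layout (a dict mapping tweets to 5-label probability lists) is our own illustrative representation:

```python
def predict_label(p, t0):
    """Label with the lowest loss of Eq. (4).
    p: dict tweet -> list of 5 probabilities (toy layout)."""
    labels = range(5)
    colsum = [sum(p[t][c] for t in p) for c in labels]
    def loss(c0):
        return sum(abs(c - c0) * p[t0][c] / colsum[c] for c in labels)
    return min(labels, key=loss)
```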
\section{\label{sec:Review-and-Conclusions}Review and Conclusions}
In this paper we described our system of sentiment analysis adapted
to participate in SemEval task 4. The highest ranking we reached was
third place on the 5-label classification task. Since we scored lower
on the 2- and 3-label classification tasks while using a similar
workflow for tasks A, B and C, we speculate that the relative
success is due to our sentiment treebank being annotated on a 5-label basis.
This can also explain the relatively superior results in quantification
of 5 categories as opposed to quantification of 2 categories.
Overall, we have had some unique advantages and disadvantages in this
competition. On the one hand, we enjoyed an additional twenty thousand
tweets, where every node of the parse tree was labeled for its sentiment,
and also had the manpower to manually prune our dictionaries, as well
as the opportunity to get feedback from our clients. On the other
hand, we did not use any user information and/or metadata from Twitter,
nor did we use the SemEval data for training the RNTN models. In addition,
we did not ensemble our models with any commercially or freely available
pre-trained sentiment analysis packages.
\section{\label{sec:Future-Work}Future Work}
We have several plans to improve our algorithm and to use new data.
First, we plan to extract more semantic features such as verb and
adverb classes and use them in neural network models as additional
input. Verb classification was used to improve sentiment detection
\citep{chesley2006using}; we plan to label verbs according to whether
their sentiment changes as we change the tense, form and active/passive
voice. Adverbs were also used to determine sentiment \citep{benamara2007sentiment};
we plan to classify adverbs into sentiment families such as intensifiers
(``very''), diminishers (``slightly''), positive (``delightfully'')
and negative (``shamefully'').
Secondly, we can use additional data from Twitter regarding either
the users or the entities-of-interest.
\begin{figure*}
\begin{centering}
\emph{\includegraphics[scale=0.35]{lstm3-plot.pdf}}
\par\end{centering}
{\small{}\caption{\label{fig:LSTM-module} LSTM module; round purple nodes are element-wise
operations, turquoise rectangles are neural network layers, orange
rhombus is a dim-reducing matrix, splitting line is duplication, merging
lines is concatenation.}
}{\small \par}
\end{figure*}
Finally, we plan to implement a long short-term memory (LSTM) network
\citep{hochreiter1997long} which trains on a sentence together with
all the syntax and semantic features extracted from it. There is some
work in the field of semantic modeling using LSTM, e.g. \citet{Palangi:2014aa,Palangi:2016:DSE:2992449.2992457}.
Our plan is to use an LSTM module to extend the RNTN model of \citet{socher2013recursive}
by adding the additional semantic data of each phrase and a reference
to the entity-of-interest. An illustration of the computational graph
for the proposed model is presented in figure \ref{fig:LSTM-module}.
The inputs/outputs are: $V$ is a word vector representation of dimension
$d$, $D$ encodes the parts-of-speech (POS) tagging, syntactic category
and an additional bit indicating whether the entity-of-interest is
present in the expression\textemdash all encoded in a $7$ dimensional
vector, $C$ is a control channel of dimension $d$, $O$ is an output
layer of dimension $d+7$ and $H$ is a sentiment vector of dimension
$s$.
The module functions are defined as follows:
\begin{align}
f_{t} & =\sigma\left[L_{f}\left(\left[V_{t},D_{t}\right],O_{t-1}\right)\right]\nonumber \\
i_{t} & =\sigma\left[L_{i}\left(\left[V_{t},D_{t}\right],O_{t-1}\right)\right]\nonumber \\
C'_{t} & =\tanh\left[L_{C'}\left(\left[V_{t},D_{t}\right],O_{t-1}\right)\right]\nonumber \\
i''_{t} & =\sigma\left[L_{i''}\left(\left[C''_{t-1},D_{t}\right],\left[C_{t-1},D_{t-1}\right]\right)\right]\nonumber \\
C''_{t} & =\tanh\left[L_{C''}\left(\left[C''_{t-1},D_{t}\right],\left[C_{t-1},D_{t-1}\right]\right)\right]\nonumber \\
g_{t} & =\sigma\left[L_{g}\left(\left[V_{t},D_{t}\right],O_{t-1}\right)\right]\nonumber \\
C_{t} & =C_{t-1}\odot f_{t}+C'_{t}\odot i_{t}+i''_{t}\odot C''_{t}\nonumber \\
H_{t} & =W_{\text{out}}\cdot\left(g_{t}\odot\tanh\left(C_{t}\right)\right)\nonumber \\
O_{t} & =\left[D_{t},\left(g_{t}\odot\tanh\left(C_{t}\right)\right)\right],
\end{align}
where $W_{\text{out}}\in\mathbb{R}^{s\times d}$ is a matrix to be
learnt, $\odot$ denotes Hadamard (element-wise) product and $[.,.]$
denotes concatenation. The functions $L_{i}$ are the six NN computations,
given by:
\begin{align}
L^{k}\left(S_{ij}\right) & =S_{ij}T^{k,\left[1:d\right]}S_{ij}^{\top}+I_{0,0}W_{0,0}^{k}S_{ij}^{\top}\nonumber \\
& \quad+I_{0,1}W_{0,1}^{k}S_{ij}^{\top}+I_{1,0}W_{1,0}^{k}S_{ij}^{\top}\nonumber \\
& \quad+I_{1,1}W_{1,1}^{k}S_{ij}^{\top}\nonumber \\
S_{ij} & =\left(\left(v_{i},s_{i},e_{i}\right),\left(v_{j},s_{j},e_{j}\right)\right),
\end{align}
where $\left(v_{i},s_{i},e_{i}\right)$ are the $d$ dimensional word
embedding, 6-bit encoding of the syntactic category and an indication
bit of the entity-of-interest for the $i$th phrase, respectively,
$S_{ij}$ encodes the inputs of a left descendant $i$ and a right
descendant $j$ in a parse tree and $k\in\left\{ 1,\ldots,6\right\} $.
Define $D=2d+14$, then $T^{\left[1:d\right]}\in\mathbb{R}^{D\times D\times d}$
\,is a tensor defining bilinear forms, $I_{I,J}$ with $I,J\in\left\{ 0,1\right\} $
are indication functions for having the entity-of-interest on the
left and/or right child and $W_{I,J}\in\mathbb{R}^{d\times D}$ are
matrices to be learnt.
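To make the element-wise part of the update concrete, here is a minimal Python sketch; the gate vectors are passed in directly, standing in for the outputs of the $L$ networks:

```python
import math

def sigmoid(x):
    """Gates f, i, i'' and g would be produced by sigmoid(L(...))."""
    return 1 / (1 + math.exp(-x))

def cell_update(C_prev, f, i, C_cand, i2, C2, g):
    """Element-wise part of the update: C_t = C_{t-1}*f + C'*i + i''*C'',
    followed by the gated output g*tanh(C_t); arguments are length-d lists."""
    C = [cp * ff + cc * ii + jj * c2
         for cp, ff, cc, ii, jj, c2 in zip(C_prev, f, C_cand, i, i2, C2)]
    gated = [gg * math.tanh(c) for gg, c in zip(g, C)]
    return C, gated
```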
The algorithm processes each tweet according to its parse tree, starting
at the leaves and going up combining words into expressions; this
is different than other LSTM algorithms since the parsing data is
used explicitly. As an example, figure \ref{fig:amobee-graph} presents
the simple sentence ``Amobee is awesome'' with its parsing tree.
The leaves are given by $d$-dimensional word vectors together with
their POS tagging, syntactic categories (if defined for the leaf)
and an entity indicator bit. The computation takes place in the inner
nodes; ``is'' and ``awesome'' are combined in a node marked by
``VP'' which is the phrase category. In terms of our terminology,
``is'' and ``awesome'' are the $i,j$ nodes, respectively for
``VP'' node calculation. We define $C''_{t-1}$ as the cell's state
for the \emph{left} child, in this case the ``is'' node. Left and
right are concatenated as input $V_{t}$ and the metadata $D_{t}$
is from the \emph{right} child while $D_{t-1}$ is the metadata from
the \emph{left} child. The second calculation takes place at the root
``S''; the input $V_{t}$ is now a concatenation of ``Amobee''
word vector, the input $O_{t-1}$ holds the $O_{t}$ output of the
previous step in node ``VP''; the cell state $C''_{t-1}$ comes
from the ``Amobee'' node.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{graph}
\par\end{centering}
\caption{\label{fig:amobee-graph}Constituency-based parse tree; the LSTM module
runs on the internal nodes by concatenating the left and right nodes
as its input.}
\end{figure}
\bibliographystyle{acl_natbib}
\section{I.\hspace{0.5cm} Introduction}
The Gutzwiller projected wave functions (GWFs) are widely used to
approximate the ground state of the $t-J$ model and the Heisenberg
model. In these models, local electronic correlation, as manifested
in the no double occupancy constraint of electrons, plays a vital
role in determining the low energy physics. Such strong local
correlation makes these systems difficult to study analytically. In
the variational approach based on GWFs, these models are first
treated in the mean field approximation in which the local
constraint is relaxed to a global one. The local constraint is then
enforced by the Gutzwiller projection, which simply
filters out the unphysical components with doubly occupied sites in
the mean field state.
The above variational strategy is used extensively in the study of
the high temperature superconductors and quantum antiferromagnets.
After many years of effort, it is now believed that the Gutzwiller
projected d-wave BCS state describes well the superconducting state
of the high temperature
superconductors\cite{yokoyama1,gros,paramekanti1,paramekanti2}.
Quite recently, progress has also been made in the understanding of the
quasiparticle properties above such a
state\cite{yunoki1,yunoki2,randeria,nave}. The same kind of wave
function is also used in recent studies on the exotic orders and
exotic excitations of frustrated quantum
antiferromagnet\cite{ivanov,yunoki3,paramekanti3,sorella}.
An unresolved issue with the GWF is that it is not clear whether the
a posteriori projection can capture the kinematic effect of
the local constraints, even qualitatively. In this paper, we address
this issue with the one dimensional $t-J$ model.
The one dimensional $t-J$ model has been studied extensively by a
broad band of methods including Bethe-Ansatz solution, conformal
field theories\cite{bares,kawakami,kuramoto}, quantum Monte
carlo\cite{assaad}, exact diagonalization\cite{ogata1}, and also
Variational Monte Carlo
calculations\cite{ogata2,gebhard,hellberg,chen1,yokoyama2,kobayashi,chen2}.
Many properties concerning the ground state of this model are now
well established. This give us the unique opportunity to judge the
validity of a given approximation. The one dimensional $t-J$ model
is exactly soluble at $J/t=0$ and $J/t=2$. For $J/t=0$, the spin and
the charge degree of freedom of the system are totally
separated\cite{ogata2}. The spin part is described by the Heisenberg
model on the squeezed chain with doped holes removed, while the
charge part is described by a noninteracting spinless Fermion
system. For $J/t=2$, the system is supersymmetric and it is found
that the GWF provides a fairly accurate approximation for the ground
state of the system\cite{yokoyama2,kuramoto}. For general values of
$J/t$ and electron density $n$, the system is a Tomonaga-Luttinger
liquid (TLL) below a critical value $J_{c}/t$ around 2.5. The
correlation exponent of the TLL varies continuously with $J/t$ and
$n$\cite{ogata1}. For $J/t>J_{c}/t$, the system is unstable toward
phase separation. For small $n$ and $J/t>2$, there is also a small
region in which the system exhibits a spin gap\cite{chen1,chen2}.
The Gutzwiller projected Fermi sea wave function and its variants
have long been used to describe the ground state of the one
dimensional $t-J$ model. It is well known that this wave function
provides an excellent description of the undoped case of the model,
namely the spin $\frac{1}{2}$ Heisenberg spin chain\cite{gebhard}.
However, the same wave function is not that satisfactory for the
doped system, except for the supersymmetric case of $J/t=2$. For
example, it fails to predict the TLL behavior in the small $J/t$
region. A $2\mathrm{k}_{F}$ peak in the spin structure factor is
also missed by this wave function. Since the wave function is
parameter free, it also gives no clue on the origin of the spin gap
state and the phase separation at large $J/t$.
It is generally believed that the problems with the GWF originate
from the insufficient account of the charge correlation in the
system. Along this line of thinking, various kind of Jastrow factor
are proposed to remedy the drawbacks of GWF. For example, Hellberg
and Mele introduced a long range Jastrow factor of the form
$|F(r_{i\uparrow},r_{j\downarrow})|^{\nu} $ and succeeded in
reproducing the TLL behavior, where
$F(r_{i\uparrow},r_{j\downarrow})$ is a Slater determinant
over all the electron positions\cite{hellberg}. Yokoyama and Ogata
found that a short range repulsive Jastrow factor is able to restore the
$2\mathrm{k}_{F}$ peak in the spin structure factor, while a
sufficiently attractive Jastrow factor can cause phase
separation\cite{yokoyama2}. However, both wave functions have
difficulties in reproducing the correct phase diagram. For example,
the spin gap state is missed in both wave functions. At the same
time, both wave functions predict a fully phase-separated state
along the boundary of phase separation, which is in fact an
oversimplification\cite{ogata1}. More importantly, no understanding
on the physical origin of the proposed Jastrow factor is available
and it is hard to judge if a similar modification is relevant for
higher dimensional system.
For the sake of possible extension to higher dimensional case, it is
important to know the reason that the simple GWF fails before any
modification on it is made. As mentioned above, it is the residual
charge correlation in the system which is responsible for the
failure of GWF. In this paper we make this statement more precise by
showing that the GWF has the correct phase structure to describe the
kink nature of the doped holes in the ground state of the one
dimensional $t-J$ model. In fact, we find the spin structure factor
of the GWF in the squeezed chain coordinate is almost identical to
that of a half filled spin chain. Thus the missing $2\mathrm{k}_{F}$
peak in the spin structure factor for small $J/t$ should be
recovered if the removed holes are reinserted into the squeezed
chain in the right manner.
The physical origin of the residual charge correlation can be easily
seen if one reformulates the GWF in terms of the slave Boson
theory\cite{kotliar,lee}. In the slave Boson theory, the constrained
electron operator is decomposed as
$\hat{c}^{\dagger}_{i,\sigma}=f^{\dagger}_{i,\sigma}b_{i}$, in which
$f^{\dagger}_{i,\sigma}$ is a spin $\frac{1}{2}$ neutral Fermion
called spinon and $b_{i}$ is a spinless charge 1 Boson called holon.
The local constraint now takes the form of an equality,
$\sum_{\sigma}f^{\dagger}_{i,\sigma}f_{i,\sigma}+b^{\dagger}_{i}b_{i}=1$.
In terms of the slave Boson theory, the GWF corresponds to a state
with all holon condensed into the zero momentum state. However, the
holon is not a true Boson but a hard core Boson as a result of the
local constraint. For general value of $J/t$, there is also an
effective attraction between the holons caused by the exchange term
of the $t-J$ model. Thus a XXZ-type effective Hamiltonian should be
a good approximation for the residual charge correlation.
Based on these observations, a Pfaffian-type variational wave
function is proposed for the ground state of the one dimensional
$t-J$ model. This wave function, which has only one parameter,
reproduce well the global phase diagram of the model, including the
Luther-Emery(LE) phase in the small $n$ and large $J/t$ region. It
is found that this wave function also reproduces well various
correlation functions of the system and provides a refined picture
for the phase separation at large $J/t$.
The paper is organized as follows. Section II is devoted to the
investigation of the properties of the GWF. In Section III, the new
variational scheme and the Pfaffian-type wave function are
introduced. The phase diagram and correlation functions determined
from this new variational wave function are presented in Section IV.
Section V summarizes the paper and includes a discussion of related
issues.
\section{II.\hspace{0.5cm} The GWF}
The one dimensional $t-J$ model reads
\begin{equation}\label{1}
\mathcal{H}=-t\sum_{i,\sigma}(\hat{c}_{i\sigma}^{\dagger}\hat{c}_{i+1,\sigma}+h.c.)
+J\sum_{i}(\mathbf{S}_{i} \cdot
\mathbf{S}_{i+1}-\frac{1}{4}n_{i}n_{i+1}),
\end{equation}
in which
$\mathbf{S}_{i}=\frac{1}{2}\sum_{\alpha\beta}\hat{c}_{i\alpha}^{\dagger}
\mathbf{\sigma}_{\alpha\beta}\hat{c}_{i\beta}$ and
$n_{i}=\sum_{\alpha}\hat{c}_{i\alpha}^{\dagger}\hat{c}_{i\alpha}$.
The electrons in this model are subject to the constraint of no
double occupancy
\begin{equation}\label{2}
\sum_{\alpha}\hat{c}_{i\alpha}^{\dagger}\hat{c}_{i\alpha}\leq1.
\end{equation}
The ground state of the one dimensional $t-J$ model is governed by a
well defined phase structure. This can be most easily seen at half
filling when the system reduces to the Heisenberg spin chain. For
the Heisenberg model, it is well known that the ground state satisfies
the Marshall sign rule\cite{marshall,weng}. The rule says that the
ground state wave function is real in the Ising basis and its sign
is given by $(-1)^{N_{\downarrow}}$ up to a global phase, where
$N_{\downarrow}$ denotes the number of down spins in the even
sublattice. This sign rule is a manifestation of the
antiferromagnetic spin correlation in the ground state. With such a
sign rule, one easily verifies that
$\langle\mathbf{S}_{i}\cdot\mathbf{S}_{j}\rangle\leq0$ for $i$ and
$j$ belonging to different sublattices and
$\langle\mathbf{S}_{i}\cdot\mathbf{S}_{j}\rangle\geq0$ for $i$ and
$j$ belonging to the same sublattice.
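The sign rule is straightforward to state in code; the string encoding of Ising configurations ('u'/'d') below is our own:

```python
def marshall_sign(config):
    """Marshall sign (-1)^{N_down-on-even-sites} of an Ising configuration;
    config is a string over 'u'/'d', with sites 0, 2, 4, ... forming the
    even sublattice."""
    n_down_even = sum(1 for k, s in enumerate(config) if k % 2 == 0 and s == 'd')
    return (-1) ** n_down_even
```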
The ground state at finite doping is governed by a similar sign
rule. It can be easily checked that all matrix elements of the $t-J$
Hamiltonian are negative definite in a wave function that satisfies
the Marshall sign rule on the squeezed chain. The squeezed chain is
the chain in which the sites occupied by the doped holes are
removed. This can be seen by noting that the motion of holes in this
model does not disturb the spin configuration on the squeezed chain.
Thus, the ground state of the one dimensional $t-J$ model should
satisfy such a modified Marshall sign rule. With such a modified
Marshall sign rule, one easily see that the holes in the ground
state behaves as an antiphase domain wall for spin.
Now we show that the GWF satisfies the Marshall sign rule on the
squeezed chain. The GWF reads
\begin{equation}\label{4}
|\mathrm{GWF}\rangle=\prod_{i}(1-n_{i\uparrow}n_{i\downarrow})|\mathrm{FS}\rangle,
\end{equation}
in which $|\mathrm{FS}\rangle$ denotes the simple Fermi sea. In the
natural basis
$\prod_{i,j}c_{i\uparrow}^{\dagger}c_{j\downarrow}^{\dagger}|0\rangle$,
the amplitude of the GWF is given by the following Vandermonde
determinant
\begin{equation}\label{8}
\Psi(\{i\},\{j\})=\psi_{PW}\prod_{\alpha<\beta}(Z_{i_{\alpha}}-Z_{i_{\beta}})\prod_{l<m}(Z_{j_{l}}-Z_{j_{m}}),
\end{equation}
in which
$\psi_{PW}=\exp[-i\mathrm{k}_{F}(\sum_{\alpha}i_{\alpha}+\sum_{l}j_{l})]$
is a plane wave factor, and $Z_{i_{\alpha}}=\exp(i\frac{2\pi
i_{\alpha}}{N})$ and $Z_{j_{l}}=\exp(i\frac{2\pi j_{l}}{N})$ are the
chord coordinates of the up and down spins.
Now we exchange the up spin at site $i_{1}$ and the down spin at
site $j_{1}$. The resultant change in phase is given by
\begin{equation}
\Delta\Phi=\arg(\prod_{\alpha>1}\frac{Z_{i_{\alpha}}-Z_{j_{1}}}{Z_{i_{\alpha}}-Z_{i_{1}}}\prod_{l>1}
\frac{Z_{j_{l}}-Z_{i_{1}}}{Z_{j_{l}}-Z_{j_{1}}}).
\end{equation}
Since $|Z|=1$,
$\arg(\frac{Z_{i_{\alpha}}-Z_{j_{1}}}{Z_{i_{\alpha}}-Z_{i_{1}}})$ is
nothing but the angle in the segment subtended by the chord
$Z_{i_{1}}-Z_{j_{1}}$ of the unit circle. Noting that in a circle
the angles in the same segment equal one another, and that opposite
angles of a cyclic quadrilateral sum to $\pi$, one easily finds that
$\Delta\Phi=N_{c}\pi$, in which $N_{c}$ denotes the number of
electrons between site $i_{1}$ and site $j_{1}$. Taking into account
the sign due to Fermion exchange, one finds that the change in phase
is in accordance with the modified Marshall sign rule. Following
essentially the same steps, one can also verify the case of
exchanging a hole and an electron.
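This phase identity can be checked numerically for a sample configuration. The following sketch (plain Python; the lattice size and the spin positions are arbitrary illustrative choices, not taken from the text) evaluates the product of chord-coordinate ratios and verifies that its argument equals $N_{c}\pi$ modulo $2\pi$:

```python
import numpy as np

# Ring of N sites; chord coordinates Z_n = exp(2*pi*i*n/N).
N = 8
up = [0, 2]    # up-spin sites; the spin at up[0] is exchanged ...
dn = [5, 7]    # ... with the down spin at dn[0]
Z = lambda n: np.exp(2j * np.pi * n / N)

i1, j1 = up[0], dn[0]
prod = np.prod([(Z(a) - Z(j1)) / (Z(a) - Z(i1)) for a in up[1:]])
prod *= np.prod([(Z(l) - Z(i1)) / (Z(l) - Z(j1)) for l in dn[1:]])

# Number of electrons strictly between sites i1 and j1.
Nc = sum(1 for s in up[1:] + dn[1:] if min(i1, j1) < s < max(i1, j1))

# Delta_Phi = Nc*pi: the product must be real with sign (-1)^Nc.
assert abs(prod.imag) < 1e-9
assert np.sign(prod.real) == (-1) ** Nc
```

Only the argument of the product is constrained by the geometry; its modulus carries no sign information.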
\begin{figure}[h!]
\includegraphics[width=6.5cm,angle=0]{squeezed.eps}
\caption{Spin structure factor of the GWF in the squeezed coordinate
as compared with that of a half filled spin chain. The calculation
is done on a quarter filled lattice of 204 sites.}
\label{fig1}
\end{figure}
Thus the GWF has the right phase structure to describe the ground
state of the one dimensional $t-J$ model and the kink nature of the
doped holes in it. In fact, this conclusion can be made even
stronger. In Figure 1, we plot the spin structure factor of the GWF
in the squeezed coordinate and compare it with that of a half filled
spin chain. We see that the two are almost identical. Since the spin
degree of freedom is described exactly by a Heisenberg model on the
squeezed chain at $J/t=0$, while the GWF provides an exceedingly
good approximation at $J/t=2$, it is natural to expect the same
behavior to hold for arbitrary $J/t$ and $n$.
Two conclusions follow directly from the above reasoning. First,
since the spin correlation on the squeezed chain is already well
described by the GWF, the missing $2\mathrm{k}_{F}$ peak in the spin
structure factor should be recovered if the removed holes are
correctly reinserted into the squeezed chain; that is, the missing
$2\mathrm{k}_{F}$ peak should be attributed to the residual charge
correlation in the system. Second, since the squeezed spin chain
picture is argued to hold for arbitrary $J/t$ and $n$, a single wave
function may suffice to describe the whole phase diagram of the one
dimensional $t-J$ model, including the spin gap phase at small $n$
and large $J/t$.
\section{III.\hspace{0.5cm} The new variational scheme}
The origin of the residual charge correlation can be most easily
seen by reformulating the GWF in terms of the slave Boson theory. In
the slave Boson theory, the constrained electron operator is
decomposed as
$\hat{c}_{i,\sigma}^{\dagger}=f_{i,\sigma}^{\dagger}b_{i}$, in which
$f_{i,\sigma}^{\dagger}$ represents the Fermionic spinon and $b_{i}$
represents the Bosonic holon. In terms of these slave particles, the
$t-J$ model reads
\begin{eqnarray*}
\mathcal{H} &=& \mathcal{H}_{t}+\mathcal{H}_{J} \\
\mathcal{H}_{t} &=& -t\sum_{i,\sigma}(f_{i,\sigma}^{\dagger}f_{i+1,\sigma}b_{i+1}^{\dagger}b_{i}+h.c.) \\
\mathcal{H}_{J} &=& \frac{J}{2}
\sum_{i}b_{i}b_{i}^{\dagger}b_{i+1}b_{i+1}^{\dagger}(\mathbf{S}_{i}^{f} \cdot
\mathbf{S}_{i+1}^{f}-\frac{1}{4}n_{i}^{f}n_{i+1}^{f}),
\end{eqnarray*}
in which
$\mathbf{S}_{i}^{f}=\frac{1}{2}\sum_{\alpha\beta}f_{i\alpha}^{\dagger}
\mathbf{\sigma}_{\alpha\beta}f_{i\beta}$ and
$n_{i}^{f}=\sum_{\alpha}f_{i\alpha}^{\dagger}f_{i\alpha}$. The no
double occupancy constraint now takes the form of an equality
\begin{equation}\label{3}
\sum_{\alpha}f_{i\alpha}^{\dagger}f_{i\alpha}+b_{i}^{\dagger}b_{i}=1.
\end{equation}
When the local constraint Eq.(6) is exactly satisfied, the factor
$b_{i}b_{i}^{\dagger}b_{i+1}b_{i+1}^{\dagger}$ appearing in
$\mathcal{H}_{J}$ plays no role and can be neglected.
In the mean field treatment, an RVB order parameter
$\chi=\sum_{\alpha} \langle f_{i+1\alpha}^{\dagger}f_{i\alpha}\rangle$ is
introduced to decouple the interaction term. At the same time, the
local constraint is relaxed to a global one. The mean field
Hamiltonians for the spinon and holon parts read\cite{lee}
\begin{eqnarray*}
\mathcal{H}^{f} &=& -(tx+\frac{3J\chi}{8})\sum_{i\sigma}(f_{i,\sigma}^{\dagger}f_{i+1\sigma}+h.c.)\\
\mathcal{H}^{b} &=& -t\chi\sum_{i}(b_{i}^{\dagger}b_{i+1}+h.c.),
\end{eqnarray*}
in which $x$ is the hole density. The mean field ground state is
given by the product of the spinon Fermi sea and the holon Bose
condensate
\begin{equation}\label{5}
|\Phi\rangle=(b_{\mathrm{k}=0}^{\dagger})^{N_{h}}
\prod_{\mathrm{k}\leq\mathrm{k}_{F}}f_{\mathrm{k}\uparrow}^{\dagger}
f_{\mathrm{k}\downarrow}^{\dagger}|0\rangle.
\end{equation}
When this state is projected into the subspace that satisfies the
constraint Eq.(6), we get the GWF.
In the mean field theory, the holon is a free Boson and condenses in
the ground state. However, due to the local constraint, the holon is
actually a hard core Boson, which cannot condense in one spatial
dimension. The uncondensed nature of the hard core Boson in 1d
originates from the kinematic effect of the local constraint: due to
this constraint, the Hilbert space of the one dimensional hard core
Boson system becomes disconnected at the single particle level. We
note for comparison that the Hilbert space of the spinon part is
still connected even when the local constraint is enforced. Thus the
holon should be treated as a hard core Boson rather than a free
Boson.
Another source of the residual charge correlation is provided by the
superexchange term of the $t-J$ model. When two electrons are next
to each other, they enjoy an attraction due to the superexchange.
This attraction is not captured by the mean field order parameter
$\chi$ and should be reintroduced.
Combining these considerations, the residual charge correlation
beyond the GWF should be described by the following XXZ-type
effective Hamiltonian
\begin{equation}\label{8}
\mathcal{H}_{v}=-\sum_{i}(\hat{b}_{i}^{\dagger}\hat{b}_{i+1}+h.c.)
-v\sum_{i}\hat{b}_{i}^{\dagger}\hat{b}_{i+1}^{\dagger}\hat{b}_{i+1}\hat{b}_{i},
\end{equation}
in which $\hat{b}_{i}^{\dagger}$ is the creation operator for the
hard core Bosons and $v$ is the rescaled attraction. If we denote
the ground state of $\mathcal{H}_{v}$ as $\Lambda_{v}$, then
$\mathrm{P_{G}}\Lambda_{v}|\mathrm{GWF}\rangle$ should be a good
variational wave function for the one dimensional $t-J$ model.
Although $\mathcal{H}_{v}$ is exactly soluble\cite{yangcn}, an
explicit form for $\Lambda_{v}$ is available only in limited cases.
For $v=0$, $\Lambda_{v}$ is nothing but the Hellberg-Mele Jastrow
factor with $\nu=1$. For $v=1$, $\Lambda_{v}$ is a constant and our
proposed wave function reduces to the GWF. At quarter filling,
$\mathcal{H}_{v}$ exhibits particle-hole symmetry. In a separate
paper\cite{yang}, we show that a Hellberg-Mele-type variational wave
function provides an exceedingly good description of the ground
state of the XXZ model in the $S^{z}=0$ sector. However, away from
the particle-hole symmetric point, the Hellberg-Mele wave function
ceases to be a good approximation.
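The solvable limits above can also be checked by brute force on small systems. The sketch below is an illustration under stated assumptions: it uses an open chain of $L=8$ sites with $N=4$ hard core Bosons (the text works on a ring; the open chain avoids the Jordan-Wigner boundary term), diagonalizes $\mathcal{H}_{v}$ exactly, and verifies that at $v=0$ the ground state energy coincides with that of free spinless Fermions:

```python
import itertools
import numpy as np

L, N = 8, 4  # open chain with N hard core Bosons (illustrative sizes)
states = list(itertools.combinations(range(L), N))
index = {s: i for i, s in enumerate(states)}

def H_v(v):
    """Matrix of H_v = -sum_i (b+_i b_{i+1} + h.c.) - v sum_i n_i n_{i+1}."""
    H = np.zeros((len(states), len(states)))
    for s in states:
        occ = set(s)
        # nearest-neighbour attraction (diagonal part)
        H[index[s], index[s]] -= v * sum(i + 1 in occ for i in occ)
        # hard core hopping: move a Boson to an empty neighbouring site
        for i in range(L - 1):
            if (i in occ) != (i + 1 in occ):
                t = tuple(sorted(occ ^ {i, i + 1}))
                H[index[t], index[s]] -= 1.0
    return H

E0 = np.linalg.eigvalsh(H_v(0.0))[0]
# Free spinless Fermions on the open chain: eps_m = -2 cos(pi m/(L+1)).
free = sorted(-2 * np.cos(np.pi * m / (L + 1)) for m in range(1, L + 1))
assert abs(E0 - sum(free[:N])) < 1e-9
```

The same routine evaluated at general $v$ gives a small-system reference ground state against which a guess for $\Lambda_{v}$ can be tested.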
For general values of $v$ and Boson density, we have to resort to
approximations. Through the Jordan-Wigner transformation, the XXZ
Hamiltonian can be rewritten as
\begin{equation}\label{8}
\mathcal{H}_{v}=-\sum_{i}(c_{i}^{\dagger}c_{i+1}+h.c.)
-v\sum_{i}c_{i}^{\dagger}c_{i+1}^{\dagger}c_{i+1}c_{i},
\end{equation}
in which $c_{i}^{\dagger}$ creates a spinless Fermion. For this
Hamiltonian, we adopt the BCS approximation to decouple the
interaction term. The BCS ground state for the spinless Fermion
reads
\begin{equation}\label{9}
\prod_{\mathrm{k}>0}(u_{\mathrm{k}}+v_{\mathrm{k}}c_{\mathrm{k}}^{\dagger}c_{\mathrm{-k}}^{\dagger})|0\rangle,
\end{equation}
in which
$\frac{v_{\mathrm{k}}}{u_{\mathrm{k}}}=\frac{\Delta_{\mathrm{k}}}{\epsilon_{\mathrm{k}}+\mathrm{E}_{\mathrm{k}}}$,
$\Delta_{\mathrm{k}}=\Delta\sin(\mathrm{k})$,
$\epsilon_{\mathrm{k}}=-2\cos(\mathrm{k})-\mu$ and
$\mathrm{E}_{\mathrm{k}}=\sqrt{\epsilon_{\mathrm{k}}^{2}+\Delta_{\mathrm{k}}^{2}}$.
Here $\Delta$ is the BCS gap for the spinless Fermion and is treated
as the only variational parameter in our theory (the chemical
potential $\mu$ can be determined by the density equation and is not
an independent parameter). In real space, the BCS state for the
spinless Fermion takes the form of a Pfaffian. A Pfaffian is the
square root of the determinant of an antisymmetric matrix of even
order\cite{yunoki4}. In our case, the matrix elements of the
antisymmetric matrix are given by
\begin{equation}\label{10}
\mathrm{f}_{i,j}=\sum_{\mathrm{k}>0}\frac{v_{\mathrm{k}}}{u_{\mathrm{k}}}\sin(\mathrm{k(i-j)}),
\end{equation}
in which $i$ and $j$ denote the coordinates of the spinless
Fermions. Thus our variational wave function for the one dimensional
$t-J$ model is given by
\begin{equation}\label{11}
\Psi=\mathrm{Pf}(\Delta)|\mathrm{GWF}\rangle,
\end{equation}
in which $\mathrm{Pf}(\Delta)$ is the Pfaffian for the holes, which
are now spinless Fermions.
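As a concrete (purely illustrative) construction of the Pfaffian factor, the sketch below assembles the antisymmetric matrix $\mathrm{f}_{i,j}$ from the BCS coherence factors and evaluates $|\mathrm{Pf}|$ through $\mathrm{Pf}(A)^{2}=\det(A)$; the lattice size, gap, chemical potential, hole positions and the $\mathrm{k}>0$ grid are assumptions made for the example, not the production choices of this work:

```python
import numpy as np

Nsite, Delta, mu = 16, 1.0, 0.0  # illustrative parameters
ks = [2 * np.pi * (m + 0.5) / Nsite for m in range(Nsite // 2)]  # k > 0 grid

def vu(k):
    """BCS ratio v_k/u_k = Delta_k / (eps_k + E_k)."""
    dk = Delta * np.sin(k)
    ek = -2.0 * np.cos(k) - mu
    return dk / (ek + np.hypot(ek, dk))

def f(i, j):
    """Pairing amplitude f_{ij} = sum_{k>0} (v_k/u_k) sin(k (i - j))."""
    return sum(vu(k) * np.sin(k * (i - j)) for k in ks)

holes = [0, 3, 7, 12]  # an even number of hole positions
A = np.array([[f(a, b) for b in holes] for a in holes])
assert np.allclose(A, -A.T)              # antisymmetry, required for a Pfaffian
pf_abs = np.sqrt(abs(np.linalg.det(A)))  # |Pf(A)| from Pf(A)^2 = det(A)
```

The square root loses the sign of the Pfaffian, which in a Monte Carlo update would have to be tracked separately.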
\section{IV.\hspace{0.5cm} Results}
\subsection{A. Ground state phase diagram}
The ground state phase diagram determined from the Pfaffian-type
wave function is presented in Figure 2.
\begin{figure}[h!]
\includegraphics[width=6.5cm,angle=0]{phase.eps}
\caption{Ground state phase diagram of the one dimensional $t-J$
model determined from the Pfaffian-type variational wave function.
The dotted lines indicate the boundaries for the existence of
locally stable phases. Here TLL denotes Tomonaga-Luttinger liquid,
LEL denotes Luther-Emery liquid, while PS denotes phase separated
state.}
\label{fig2}
\end{figure}
The phase diagram contains three distinct phases. For small and
intermediate values of $J/t$, the system is in the TLL phase, in
which both the charge and spin excitations are gapless. For larger
values of $J/t$, the system is unstable towards phase separation. At
small $n$ and large $J/t$, there is a small region in which the
system exhibits a spin gap. In the spin gap phase, the charge
excitation is still gapless. Following convention, this phase is
termed a Luther-Emery liquid.
The phase boundaries are determined as follows. To illustrate the
idea, we plot the variational energy as a function of the electron
density for $J/t=2$ and $J/t=3$ in Figure 3. For $J/t=2$, the energy
curve is concave everywhere, so that a homogeneous phase is globally
stable for all electron densities. For $J/t=3$, a convex region
appears in the energy curve at intermediate values of the electron
density. In this case, the boundaries of the globally stable phases
are given by the two tangency points shown in the figure, while the
boundaries of the locally stable phases are given by the two
inflexion points.
\begin{figure}[h!]
\includegraphics[width=6.5cm,angle=0]{psbdry2.eps}
\includegraphics[width=6.5cm,angle=0]{psbdry3.eps}
\caption{Variational energy per site $\varepsilon$ as a function of the
electron density $\mathrm{n_{e}}$ for $J/t=2$ and $J/t=3$. For
clarity's sake, a linearly decreasing background has been subtracted
from the energy.
$\alpha=\frac{\mathrm{d}\varepsilon}{\mathrm{d}\mathrm{n_{e}}}|_{\mathrm{n_{e}}\rightarrow0}$
is the initial slope of the energy curve. The arrows above the curve
indicate the locations of the inflexion points, while the arrows
below the curve indicate the locations of the tangency points. The
phase boundaries are determined from these points as explained in
the text.}
\label{fig3}
\end{figure}
For electron densities that lie between the two tangency points, the
system is unstable towards phase separation. The densities of the
phase separated phases are given by the two tangency points. For
$2.5<J/t<3.2$, the system phase separates into a hole rich phase and
an electron rich phase. For $3.2<J/t<3.4$, the hole rich phase is
replaced by an empty phase. For $J/t>3.4$, a fully phase separated
state is realized, in which the electron rich phase is replaced by a
half filled spin chain.
The convex region of the energy curve diminishes to zero at about
$J/t=2.5$. The phase boundary between the TLL phase and the LEL
phase for $J/t<2.5$ is determined by examining the infrared behavior
of the spin structure factor $S(q)$. In the spin gap phase, $S(q)$
should be quadratic at small $q$, while in the TLL phase a linear
behavior is expected\cite{hohenberg}. For the charge excitation, a
similar criterion exists for the density structure factor $N(q)$.
The existence of the LEL phase is quite unexpected from the point of
view of the mean field theory. In the mean field theory, the spinon
is still described by a filled Fermi sea, which is by definition
gapless. However, after Gutzwiller projection the spinon gets
entangled with the holon. Such entanglement drastically changes the
spin correlation of the system.
\subsection{B. Correlation functions}
Four correlation functions are evaluated in this work. They are the
momentum distribution function defined as
\begin{equation}\label{12}
n(k)=\frac{1}{2N}\sum_{i,j,\sigma}\langle c_{i\sigma}^{\dagger}c_{j\sigma} \rangle
e^{ik(r_{i}-r_{j})},
\end{equation}
the spin structure factor defined as
\begin{equation}\label{12}
S(k)=\frac{4}{N}\sum_{i,j}\langle S_{i}^{z}S_{j}^{z} \rangle
e^{ik(r_{i}-r_{j})},
\end{equation}
the charge structure factor defined as
\begin{equation}\label{12}
C(k)=\frac{1}{N}\sum_{i,j}(\langle n_{i}n_{j} \rangle- \langle n_{i} \rangle \langle n_{j} \rangle)
e^{ik(r_{i}-r_{j})},
\end{equation}
and the pair correlation function defined as
\begin{equation}\label{12}
P(k)=\frac{1}{N}\sum_{i,j}\langle \Delta_{i}^{\dagger}\Delta_{j} \rangle
e^{ik(r_{i}-r_{j})},
\end{equation}
in which $\Delta_{i}$ is the annihilation operator for a
nearest-neighboring pair
\begin{equation}\label{12}
\Delta_{i}=\frac{1}{\sqrt{2}}(c_{i\uparrow}c_{i+1\downarrow}-c_{i\downarrow}c_{i+1\uparrow}).
\end{equation}
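As a quick sanity check of the first of these definitions, the sketch below (spinless and noninteracting, with an arbitrary set of occupied momenta; the spin sum and the factor $1/2$ of the definition above are dropped) confirms that a filled Fermi sea yields a step-function momentum distribution:

```python
import numpy as np

N = 32
occ = [2 * np.pi * m / N for m in range(-4, 5)]  # occupied momenta (illustration)

def G(d):
    """Free-Fermion correlator <c+_i c_j> at separation d = r_i - r_j."""
    return sum(np.exp(1j * k * d) for k in occ) / N

def nk(k):
    """Spinless analogue of the momentum distribution function."""
    s = sum(G(i - j) * np.exp(1j * k * (i - j))
            for i in range(N) for j in range(N))
    return s.real / N

assert abs(nk(0.0) - 1.0) < 1e-9   # inside the Fermi sea
assert abs(nk(np.pi)) < 1e-9       # outside the Fermi sea
```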
First we present the results for the TLL phase. The correlation
functions for $J/t=0,1$ and 2 at quarter filling are shown in Figure
4. For comparison's sake, we also plot the results calculated from
the Hellberg-Mele wave function. From the figure we see that the
correlation functions calculated from the Pfaffian-type wave
function are almost identical to those calculated from the
Hellberg-Mele wave function, apart from small deviations due to
critical fluctuations. Since the Pfaffian is derived from a BCS mean
field approximation in which a gap opens up, the residual charge
correlation described by it is short ranged. Thus the Pfaffian-type
wave function should exhibit Fermi-liquid behavior, as is clear from
Figure 4. To recover the critical fluctuations, one should go beyond
the mean field approximation.
\begin{figure}[h!]
\includegraphics[width=9cm,angle=0]{corr1.eps}
\caption{(a) The momentum distribution function $n(k)$, (b) the spin
structure factor $S(k)$, (c) the charge structure factor $C(k)$, and
(d) the singlet pairing correlation function $P(k)$ at quarter
filling for $J/t=0$ (black square), $J/t=1$ (red up triangle), and
$J/t=2$ (green circle). The solid lines denote the results calculated
from the Hellberg-Mele variational wave function.}
\label{fig4}
\end{figure}
For $J/t>2$, the Pfaffian-type wave function becomes less
satisfactory for the quarter filled system. In Figure 5, we plot the
correlation functions of the quarter filled system at $J/t=2.5$ and
3, the latter of which is very close to the boundary of phase
separation. Near the boundary of phase separation, the Hellberg-Mele
wave function starts to develop a charge instability, as is clear
from Fig. 5(c). This tendency is missed by the Pfaffian-type wave
function. Instead, a structure at $2k_{F}$ remains evident in the
correlation functions. This is to be expected, since we start from a
Fermionic description of the residual charge correlation. In fact,
it is quite amazing that the BCS approximation remains a good
approximation for $J/t$ as high as 2.5 (the optimized value of the
BCS gap $\Delta$ is approximately given by $J/t$ at quarter
filling).
\begin{figure}[h!]
\includegraphics[width=9cm,angle=0]{corr2.eps}
\caption{(a) The momentum distribution function $n(k)$, (b) the spin
structure factor $S(k)$, (c) the charge structure factor $C(k)$, and
(d) the singlet pairing correlation function $P(k)$ at quarter
filling for $J/t=2.5$ (black square) and $J/t=3$ (red up triangle).
The solid lines denote the results calculated from the Hellberg-Mele
variational wave function.}
\label{fig5}
\end{figure}
To quantify the above discussion, we plot in Figure 6 the relative
error in the variational energy for both the Pfaffian-type wave
function and the Hellberg-Mele wave function. For small $J/t$, the
energy of the Pfaffian-type wave function is slightly lower than
that of the Hellberg-Mele wave function. For larger values of $J/t$,
the ordering is reversed. However, both wave functions give good
estimates of the ground state energy before phase separation.
\begin{figure}[h!]
\includegraphics[width=9cm,angle=0]{error-half.eps}
\caption{Relative error in variational energy at quarter filling.
The exact value of the ground state energy is taken from
\cite{yokoyama2}.}
\label{fig6}
\end{figure}
Although the Hellberg-Mele wave function provides a good description
for the quarter filled system, it fails badly at low electron
density. On the other hand, the Pfaffian-type wave function
describes quite well the physics in the low density regime,
including the spin gap phase at large $J/t$. To illustrate this, we
plot in Figure 7 the error in variational binding energy for a
single pair of electrons calculated from both wave functions. From
the figure we see that the Pfaffian-type wave function is almost
exact for all values of $J/t$ in the low density limit. We think this
explains why the spin gap phase can be correctly reproduced by the
Pfaffian-type variational wave function.
\begin{figure}[h!]
\includegraphics[width=9cm,angle=0]{error-bind.eps}
\caption{Error in variational binding energy for a single pair of
electrons calculated from the Pfaffian-type wave function and the
Hellberg-Mele wave function. The inset shows an expanded view of the
$0<J/t<2$ region.}
\label{fig7}
\end{figure}
Now we present the correlation functions for the LEL phase at small
$n$ and large $J/t$. In Figure 8, the correlation functions for
$J/t=2.8$ and $n=0.06$, a system deep inside the LEL phase, are
plotted. As mentioned above, the spin gap manifests itself in the
quadratic behavior of the spin structure factor in the small $q$
limit. We note that the spin gap state is metastable in a much
larger region than the LEL phase itself.
\begin{figure}[h!]
\includegraphics[width=9cm,angle=0]{corr3.eps}
\caption{Correlation functions for $J/t=2.8$ and $n=0.06$, a system
deep inside the spin gap phase. (a) The momentum distribution
function $n(k)$, (b) the spin structure factor $S(k)$, (c) the
charge structure factor $C(k)$, and (d) the singlet pairing
correlation function $P(k)$.}
\label{fig8}
\end{figure}
\section{V.\hspace{0.5cm} Summary and Discussion}
In this paper, we have carried out a variational study of the one
dimensional $t-J$ model. We find that the failure of the simple GWF
should be attributed to the residual charge correlation.
Reformulating the GWF in terms of the slave Boson theory, we find
that the residual charge correlation should be described by an
XXZ-type effective Hamiltonian. Based on these observations, a
Pfaffian-type variational wave function is proposed for the one
dimensional $t-J$ model. We find that this wave function, which has
only one variational parameter, correctly reproduces the global
phase diagram and the corresponding correlation functions.
It is interesting to note the way in which the spin correlation is
affected by the charge degree of freedom in this model. Through the
investigation of the phase structure of the ground state wave
function, we find that the doped holes behave as anti-phase domain
walls for the spin correlation. We show further that the spin degree
of freedom of the system is well approximated by a half filled spin
chain in the squeezed coordinates throughout the phase diagram. For
small electron densities, the effect of the charge degree of freedom
on the spin part can be so drastic as to induce a gap in the
excitation spectrum of the latter. This spin gap is beyond mean
field description and should be attributed to the strong
entanglement of the spin and charge degrees of freedom in the
projected subspace.
It is also interesting to note the effect of the local constraint in
this system. In the conventional GWF, the effect of the local
constraint is taken into account a posteriori by filtering out the
unphysical components of the unprojected state. In this paper, we
find that this procedure may fail when the kinematic effect of the
local constraint is essential for establishing (or, more accurately,
destroying) the mean field correlation in the unprojected state. The
one dimensional $t-J$ model provides a particular example of this
type. In the one dimensional $t-J$ model, the Hilbert space for the
charge degree of freedom is disconnected at the single particle
level due to the local constraint. When the local constraint is
relaxed, the connectivity of the Hilbert space for the charge degree
of freedom changes in a qualitative manner. Such a change in the
connectivity of the Hilbert space is responsible for the appearance
of the Bose condensation of the charged particles in the mean field
theory, and is ultimately responsible for the failure of the GWF to
describe the Tomonaga-Luttinger behavior of the system.
For a full understanding of the residual charge correlation in the
one dimensional $t-J$ model, one should also take into account the
attraction due to the exchange term. In the mean field treatment,
the exchange term is decoupled in the
$f_{i\sigma}^{\dagger}f_{j\sigma}$ channel, which cannot account for
such a charge correlation effect. We find that this attraction
counteracts the effect of the local constraint and cancels it out
around $J/t=2$. This explains the excellent performance of the GWF
at the supersymmetric point.
The critical behavior of the one dimensional $t-J$ model is not
correctly described by the Pfaffian-type wave function. This is
natural, since the Pfaffian itself is derived from a BCS-type mean
field treatment. Although single particle condensation is gone in
this treatment, a condensate of pairs of spinless Fermions still
exists. To recover the correct critical behavior, one must get rid
of such a condensate. One way to achieve this is to introduce a
second charge correlator of the Hellberg-Mele type, as is done in
\cite{chen2} and \cite{kobayashi}. According to our analysis, the
problem of finding a good variational description of the one
dimensional $t-J$ model reduces to that of the much simpler one
dimensional XXZ model. We think the correct critical behavior should
be recovered by a more accurate guess for the ground state wave
function of the latter model.
Finally, we mention a possible generalization of the idea used in
this work to the study of the two dimensional $t-J$ model. In two
spatial dimensions, the kinematic effect of the local constraint
should be less dramatic, since the connectivity of the Hilbert space
is not affected by the local constraint. This can also be seen from
the fact that the two dimensional XXZ model does undergo Bose
condensation at zero temperature. However, it is much subtler to
analyze the interplay between the spin and charge degrees of freedom
in two spatial dimensions, since the two frustrate each other. Thus,
the validity of the simple GWF in two dimensions remains to be seen.
This work is supported by NSFC Grant No.~90303009.
The $A \sim 190$ mass region is a particularly complex one, displaying transitional
behavior such as prolate-oblate deformed shapes, $\gamma$-unstability, triaxial deformation
and/or coexistence of different configurations, which presents a daunting challenge to
nuclear structure models. Despite this complexity, the $A \sim 190$ mass region has
been a rich source of empirical evidence for the existence of dynamical symmetries in
even-even, odd-proton, odd-neutron and odd-odd nuclei, as well as in
supersymmetric pairs \cite{FI,baha} and quartets of nuclei \cite{quartet,metz}.
In this contribution, we present evidence for the existence of a new supersymmetric
quartet in the $A \sim 190$ mass region, consisting of the $^{192,193}$Os and $^{193,194}$Ir
nuclei, and study correlations between different one- and two-nucleon transfer reactions.
\section{Nuclear supersymmetry}
Dynamical supersymmetries (SUSY) were introduced in nuclear physics in
the context of the Interacting Boson Model (IBM) and its extensions \cite{FI}.
The IBM describes collective excitations in even-even nuclei in
terms of a system of interacting monopole ($s^{\dagger}$) and quadrupole
($d^{\dagger}$) bosons \cite{IBM}. The bosons are associated with the number of
correlated proton and neutron pairs, and hence the number of bosons $N$ is
half the number of valence nucleons.
For odd-mass nuclei the IBM was extended to include single-particle
degrees of freedom \cite{olaf}. The ensuing Interacting Boson-Fermion Model
(IBFM) has as its building blocks $N$ bosons with $l=0,2$ and $M=1$ fermion
($a_j^{\dagger}$) with $j=j_1,j_2,\dots$ \cite{IBFM}. The IBM and IBFM can
be unified into a supersymmetry (SUSY) $U(6/\Omega) \supset U(6) \otimes U(\Omega)$
where $\Omega=\sum_j (2j+1)$ is the dimension of the fermion space \cite{FI}.
In this framework, even-even and odd-even nuclei form the members of a
supermultiplet which is characterized by ${\cal N}=N+M$,
i.e., the total number of bosons and fermions.
Supersymmetry distinguishes itself from other symmetries in that it includes,
in addition to transformations among fermions and among bosons, also
transformations that change a boson into a fermion and {\em vice versa}.
The concept of nuclear SUSY was extended in 1985 to include the neutron-proton
degree of freedom \cite{quartet}. In this case, a supermultiplet consists of an
even-even, an odd-proton, an odd-neutron and an odd-odd nucleus. The
best experimental evidence for a supersymmetric quartet is provided by the
$^{194,195}$Pt and $^{195,196}$Au nuclei as an example of the
$U_{\nu}(6/12) \otimes U_{\pi}(6/4)$ supersymmetry \cite{metz,groeger,wirth,barea1,barea2},
in which the odd neutron is allowed to occupy the $3p_{1/2}$, $3p_{3/2}$ and $2f_{5/2}$
orbits of the 82-126 shell, and the odd proton the $2d_{3/2}$ orbit of the 50-82 shell.
This supermultiplet is characterized by ${\cal N}_{\nu}=5$ and ${\cal N}_{\pi}=2$.
The excitation spectra of the nuclei belonging to the supersymmetric
quartet are described simultaneously by the energy formula
\begin{eqnarray}
E &=& A \left[ N_1(N_1+5)+N_2(N_2+3)+N_3(N_3+1) \right]
\nonumber\\
&& + B \left[ \Sigma_1(\Sigma_1+4)+\Sigma_2(\Sigma_2+2)+\Sigma_3^2 \right]
+ B' \left[ \sigma_1(\sigma_1+4)+\sigma_2(\sigma_2+2)+\sigma_3^2 \right]
\nonumber\\
&& + C \left[ \tau_1(\tau_1+3)+\tau_2(\tau_2+1) \right] + D \, L(L+1) + E \, J(J+1) ~.
\label{npsusy}
\end{eqnarray}
The coefficients $A$, $B$, $B'$, $C$, $D$, and $E$ are determined in a simultaneous
fit of the excitation energies of the four nuclei that make up the quartet.
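Since Eq.~(\ref{npsusy}) is a plain quadratic form in the quantum numbers, it is easily evaluated numerically. The sketch below simply transcribes it (with the third $U(6)$ term written in the standard Casimir form $N_{3}(N_{3}+1)$); the coefficients are those quoted for the quartet later in the text, and no assignment of quantum numbers to particular levels is implied:

```python
def energy(coeffs, N, Sigma, sigma, tau, L, J):
    """Eigenvalue formula of the quartet Hamiltonian, Eq. (npsusy)."""
    A, B, Bp, C, D, E = coeffs
    N1, N2, N3 = N
    S1, S2, S3 = Sigma
    s1, s2, s3 = sigma
    t1, t2 = tau
    return (A * (N1 * (N1 + 5) + N2 * (N2 + 3) + N3 * (N3 + 1))
            + B * (S1 * (S1 + 4) + S2 * (S2 + 2) + S3 ** 2)
            + Bp * (s1 * (s1 + 4) + s2 * (s2 + 2) + s3 ** 2)
            + C * (t1 * (t1 + 3) + t2 * (t2 + 1))
            + D * L * (L + 1) + E * J * (J + 1))

# Fitted coefficients in keV: A, B, B', C, D, E.
coeffs = (26.3, 8.7, -33.6, 35.1, 6.3, 4.5)
```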
Recently, the structure of the odd-odd nucleus $^{194}$Ir was investigated by a series
of transfer and neutron capture reactions \cite{balodis}. In particular, the new data
from the polarized $(\vec{d},\alpha)$ transfer reaction provided crucial new information
about and insight into the structure of the spectrum of $^{194}$Ir which led
to significant changes in the assignment of levels as compared to previous work
\cite{joliegarrett}.
The odd-odd nucleus $^{194}$Ir differs from $^{196}$Au by two protons, the
number of neutrons being the same. The latter is crucial, since the dominant
interaction between the odd neutron and the core nucleus is of quadrupole type,
which arises from a more general interaction in the IBFM for very special values
of the occupation probabilities of the $3p_{1/2}$, $3p_{3/2}$ and $2f_{5/2}$
orbits, {\em i.e.} to the location of the Fermi surface for the neutron orbits
\cite{bijker}. This situation is satisfied to a good approximation by the
$^{195}$Pt and $^{196}$Au nuclei, and thus also for $^{193}$Os and $^{194}$Ir.
For this reason, it is reasonable to expect the odd-odd nucleus $^{194}$Ir
to provide another example of the $U(6/12)_{\nu} \otimes U(6/4)_{\pi}$
supersymmetry. Fig.~\ref{ir194} shows the negative parity levels of $^{194}$Ir
in comparison with the theoretical spectrum in which it is assumed that these
levels originate from the $\nu 3p_{1/2}$, $\nu 3p_{3/2}$,
$\nu 2f_{5/2} \otimes \pi 2d_{3/2}$ configuration.
The theoretical energy spectrum is calculated using the energy formula of
Eq.~(\ref{npsusy}) with $A = 26.3$, $B = 8.7$, $B' = -33.6$, $C = 35.1$,
$D = 6.3$, and $E = 4.5$ (all in keV). Given the complex nature of the spectrum
of heavy odd-odd nuclei, the agreement is remarkable. There is an
almost one-to-one correlation between the experimental and theoretical level
schemes \cite{balodis}.
\begin{figure}
\includegraphics[width=80mm]{cgs13_fig1.eps}
\caption{Comparison between the theoretical and experimental spectrum of $^{194}$Ir.}
\label{ir194}
\end{figure}
The successful description of the odd-odd nucleus $^{194}$Ir opens the possibility
of identifying a second quartet of nuclei in the $A \sim 190$ mass region with
$U(6/12)_{\nu} \otimes U(6/4)_{\pi}$ supersymmetry. The new quartet consists of the
nuclei $^{192,193}$Os and $^{193,194}$Ir and is characterized by ${\cal N}_{\nu}=5$
and ${\cal N}_{\pi}=3$. Whereas the $^{192}$Os and $^{193,194}$Ir nuclei are well known
experimentally, the available data for $^{193}$Os are rather scarce. In Fig.~\ref{os193}
we show the predicted spectrum for $^{193}$Os obtained from Eq.~(\ref{npsusy}) using
the same parameter set as for $^{194}$Ir. We note that the ground state of $^{193}$Os
has spin and parity $J^P=\frac{3}{2}^{-}$, which implies that the second band with
labels $[7,1]$, $\langle 7,1,0 \rangle$ is the ground state band, rather than
$[8,0]$, $\langle 8,0,0 \rangle$. The relative ordering
of these bands is determined by the coefficients $A$ and $B+B'$. At present, we are
carrying out a simultaneous fit of the excitation energies of all four nuclei that
make up the quartet to see whether it is possible to reproduce the relative ordering
in $^{193}$Os without affecting the successful description of $^{194}$Ir \cite{osir}.
\begin{figure}
\includegraphics[width=80mm]{cgs13_fig2.eps}
\caption{Prediction of the spectrum of $^{193}$Os for the
$U_{\nu}(6/12) \otimes U_{\pi}(6/4)$ supersymmetry.}
\label{os193}
\end{figure}
\section{Correlations}
The nuclei belonging to a supersymmetric quartet are described by a single Hamiltonian,
and hence the wave functions, transition and transfer rates are strongly
correlated. As an example of these correlations, we consider here the case
of one-neutron transfer reactions between the Pt and Os nuclei.
In a study of the $^{194}$Pt $\rightarrow$ $^{195}$Pt stripping reaction
it was found \cite{bi} that one-neutron $j=3/2$, $5/2$ transfer
reactions can be described in the $U(6/12)_{\nu} \otimes U(6/4)_{\pi}$
supersymmetry scheme by the operator
\begin{eqnarray}
P_{\nu}^{(j) \, \dagger} &=& \alpha_j \frac{1}{\sqrt{2}} \left[
\left( \tilde{s}_{\nu} \times a^{\dagger}_{\nu,j} \right)^{(j)} -
\left( \tilde{d}_{\nu} \times a^{\dagger}_{\nu,1/2} \right)^{(j)} \right] ~.
\end{eqnarray}
It is convenient to take ratios of intensities, since they do not depend on the
value of the coefficient $\alpha_j$ and hence provide a stringent test of the wave
functions. For the stripping reaction $^{194}$Pt $\rightarrow$ $^{195}$Pt
(ee $\rightarrow$ on) the ratio of intensities for the excitation of the $(1,0)$, $L=2$
doublet with $J=3/2$, $5/2$ belonging to the first excited band with $[N+1,1]$,
$(N+1,1,0)$ relative to that of the ground state band $[N+2]$, $(N+2,0,0)$ is given
by \cite{bi}
\begin{eqnarray}
R_j({\rm ee \rightarrow on}) = \frac{(N+1)(N+3)(N+6)}{2(N+4)} ~,
\label{ratio}
\end{eqnarray}
which gives $R_j=29.3$ for $^{194}$Pt $\rightarrow$ $^{195}$Pt ($N=5$),
in comparison with the experimental value of 19.0 for $j=5/2$, and
$R_j=37.8$ for $^{192}$Os $\rightarrow$ $^{193}$Os ($N=6$).
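For concreteness, these theoretical values follow directly by substituting the boson numbers into Eq.~(\ref{ratio}):
\[
R_j = \frac{6 \cdot 8 \cdot 11}{2 \cdot 9} = \frac{528}{18} \simeq 29.3 \quad (N=5)~, \qquad
R_j = \frac{7 \cdot 9 \cdot 12}{2 \cdot 10} = \frac{756}{20} = 37.8 \quad (N=6)~.
\]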
The equivalent ratio for the inverse pick-up reaction is given by
\begin{eqnarray}
R_j({\rm on \rightarrow ee}) = R_j({\rm ee \rightarrow on})
\frac{N_{\pi}+1}{(N+1)(N_{\nu}+1)} ~.
\label{corr}
\end{eqnarray}
This gives $R_j=1.96$ for $^{195}$Pt $\rightarrow$ $^{194}$Pt ($N_{\pi}=1$ and
$N_{\nu}=4$) and $R_j=3.24$ for $^{193}$Os $\rightarrow$ $^{192}$Os ($N_{\pi}=2$
and $N_{\nu}=4$). This means that the mixed symmetry $L=2$ state is predicted
to be excited more strongly than the first excited $L=2$ state.
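These numbers again follow by direct substitution into Eq.~(\ref{corr}):
\[
R_j = 29.3 \times \frac{1+1}{6 \times 5} \simeq 1.96 ~, \qquad
R_j = 37.8 \times \frac{2+1}{7 \times 5} = 3.24 ~.
\]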
This correlation between pick-up and stripping reactions can be derived in a general
way using only the symmetry relations that exist between the wave functions of the
even-even and odd-neutron nuclei of the supersymmetric quartet. It is important
to point out that Eqs.~(\ref{ratio}) and (\ref{corr}) are parameter-independent
predictions which are a direct consequence of nuclear SUSY and which can be
tested experimentally.
\section{Summary and conclusions}
In conclusion, we have presented evidence for the existence of a second quartet
of nuclei in the $A\sim 190$ region with $U_{\nu}(6/12)\otimes U_{\pi}(6/4)$
supersymmetry, consisting of the $^{192,193}$Os and $^{193,194}$Ir nuclei. The
analysis is based on new experimental information on $^{194}$Ir. In particular,
the $(\vec{d},\alpha)$ reaction is important to establish the spin and parity
assignments of the energy levels, and to provide insight into the structure of
the spectrum of $^{194}$Ir. Given the complexity of the $A \sim 190$ mass region,
the simple yet detailed description of $^{194}$Ir in a supersymmetry scheme is
truly remarkable.
Nuclear supersymmetry establishes precise links among the spectroscopic properties
of different nuclei. This relation has been used to predict the energies of
$^{193}$Os. Since the wave functions of the members of a supermultiplet are connected
by symmetry, there exists a high degree of correlation between different one- and
two-nucleon transfer reactions not only between nuclei belonging to the same quartet,
but also for nuclei from different multiplets \cite{barea1,barea2}.
As an example, we studied the correlations between one-neutron transfer reactions
for the Pt and Os isotopes, and predicted that the $L=2$ mixed symmetry states
in the even-even nucleus are populated much stronger than the first excited $L=2$ state.
In order to establish the existence of a second supersymmetric quartet of nuclei
in the $A \sim 190$ mass region, it is crucial that the nucleus $^{193}$Os be studied
in more detail experimentally. The predictions for correlations between one-neutron
transfer reactions in Pt and Os can be tested experimentally by combining for example
$(\vec{d},p)$ stripping and $(p,d)$ pick-up reactions.
\begin{theacknowledgments}
This work was supported in part by PAPIIT-UNAM (grant IN113808),
and in part by the Deutsche Forschungsgemeinschaft (grants JO391/2-3 and GR894/2-3).
\end{theacknowledgments}
The way we see the world is contingent upon the way we move our eyes; behind every eye movement is an inferential process that determines what to place within the central $2^{\circ}$ of the visual field that is processed by 50\% of our primary visual cortex \cite{mason1991central}. Consciously or not, whatever it is that we see, it is usually the case that we sought to see it. This presumption is supported by the pre-motor theory of attention \cite{rizzolatti1987reorienting}, which suggests that the neural mechanisms underlying the planning and execution of eye movements are closely linked with those responsible for attentional modulation. From this we may presume that by analysing eye movements we may gain insight into the inferential processes guiding their deployment.
In the setting of behavioural experiments, observed behaviours are often contextualised by objective task contingencies or rules, yet less frequently considered are the subjective perceptual representations that enable an individual to recognise these contingencies. In this paper we propose a computational model of visual search that facilitates inference on an individual's behaviour and their subjective perceptual inferences, a problem that has previously been described as one of meta-Bayesian inference \cite{daunizeau2010observing}, i.e., making inferences about inferences. We evaluate our model with a gaze-contingent (moving-window) paradigm in which human participants were asked to classify handwritten digits from the MNIST dataset \cite{lecun2010mnist}. In this task the contents of the visual scene were occluded beyond a window that followed participant's gaze.
\begin{figure}[hbt!]
\centering
\includegraphics[width=12cm]{Figure1.png}
\caption{\textbf{Gaze-contingent paradigm.} Human participants were asked to classify 100 digits as quickly and as accurately as possible. Answers were reported by pressing a button and focusing their gaze on the corresponding choice location. Note that the grey areas of the mask are made transparent for illustrative purposes only.}
\end{figure}
\section{Motivation}
Perhaps the most influential model of visual attention is Itti’s implementation \cite{itti1998model} of Koch \& Ullman’s computational theory of saliency \cite{koch1987shifts} in the primate visual system. However, this and other models of attentional selection that omit the influence of an agent’s internal states and intentions will be challenged by the complexity and scope of many behavioural tasks \cite{Yarbus1967}, in part due to an inability to dynamically reattribute salience with respect to new information. The role of top-down attentional modulation has featured more prominently in recent models \cite{einha2008task}\cite{ballard2009modelling} of eye movements, and the concept of ‘salience’ has partly given way to that of ‘priority’. The deployment of spatial attention and salience attribution has been attributed to ‘priority maps’ encoded by interactions between the lateral intraparietal area \cite{gottlieb1998representation}\cite{bisley2010attention}, frontal eye fields \cite{schall2002neural}\cite{thompson2005neuronal} and the superior colliculus \cite{krauzlis2013superior}.
Neural representations of visual salience are not serially processed one fixation at a time \cite{shen2014predictive} but through a process wherein the lateral intraparietal area and frontal eye fields select saccade targets in advance of saccades and project to intermediate layers of the superior colliculus to influence lower motor pathways. We reflect this process in the implementation of feature or priority maps in the state-space of our model; while evaluating potential saccade locations the agent will be influenced by the conditional feature probabilities encoded by these maps, allowing it to allocate attention over regions of visual space that are likely to confirm (or refute) hypotheses about the identity of the digit. In our formulation we assume that the priority maps for each digit class are computed a priori, yet their influence is effectively dynamic, as the agent relies upon ascending messages from the dynamic model to weigh their relative utility. We view this as analogous to participants having knowledge about the general form of each digit and utilising this information to locate salient visual features.
Due to the retinocentric organisation of neurons in the visual cortex, representations of the visual field may switch or change dramatically between fixations. To account for this, the motor cortex generates a copy of its output in the form of ‘corollary discharge’ \cite{sperry1950neural}. These signals are sent to visual areas to inform predictions of reafferent visual feedback, i.e., in distinguishing retinal displacement from movement in the external environment \cite{sommer2008brain}, and of proprioceptive signals from the eye muscles \cite{wang2007proprioceptive}. In our model these signals, induced during policy selection in the Markov Decision Process (MDP), impose empirical priors on the output of a dynamic model that generates predictions about the outcomes of action \cite{friston2017active}. Empirical proprioceptive priors here are simply target locations in 2D visual space, while the empirical exteroceptive priors are defined over the latent space of a variational autoencoder, which acts as a proxy for the forward (generative) models in the sensory cortex that compute visual predictions.
\section{Formulation}
All inference schemes based on probability density functions can be reformulated as optimisation problems under a variational formulation of Bayes rule. In brief, this involves approximating the true posterior with a variational (proposal) density that can be optimised with respect to observed data \cite{beal2003variational}. Optimisation in this context corresponds to minimizing variational free energy, a lower bound \cite{feynman1972statistical} on the approximate log-evidence of a model. This technique is commonly used in machine learning for approximating intractable posterior densities with neural networks \cite{dayan1995helmholtz}\cite{kingma2014semi}\cite{mnih2014neural}. It is also central to the Free Energy Principle, a mathematical formulation of how adaptive systems resist a natural tendency to disorder \cite{friston2010free}\cite{friston2019free} and Active inference, a neurobiologically influenced process theory of how the neuronal mechanisms of action and perception are unified by this objective \cite{friston2017active}. In the following section we show how perception and behaviour (eye movements) can be formulated as inference on the hidden states of the world, and how these processes can be simulated by optimising the variational free energy of a generative model of the environment.
We first construct our visual foraging task as a partially observable Markov decision process with categorical task outcomes. Under this model beliefs are expressed as the joint density of observations, hidden states, policies and precision:
\[ P(\tilde{o}, \tilde{s}, \pi,\gamma) = P(\pi|\gamma)P(\gamma) \prod_{t=1}^{T}P(o_t|s_t) P(s_t|s_{t-1},\pi)\]
Where a likelihood matrix $P\left(o_{t}=i| s_{t}=j\right)=A_{ij}$ defines the probability of an outcome $o$ under every combination of hidden states $ s $, and the transition matrix $P\left(s_{t}=i| s_{t-1}=j,\pi\right)={B\left(u\right)}_{ij}$ defines the probabilistic mapping from the hidden states at the current time step to the hidden states at the next time step under some action $ u $. Under this model, outcomes $ o $ are determined only by the current state $ s_{t} $ and beliefs about state transitions are determined by policies, where each policy $\pi$ comprises a series of actions $u=\pi\left(t\right)$. The mapping between policies and hidden states is influenced by the agent’s prior preferences, or the extrinsic value of each outcome:
\[ P(o):=\sigma(C) \]
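For intuition, these categorical objects can be sketched numerically. The sketch below uses a hypothetical 3-state toy problem (not the task's actual $A$, $B$ or $C$) to show how a likelihood matrix, an action-conditioned transition matrix and a softmax preference vector interact in a single belief update:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def normalise(xs):
    s = sum(xs)
    return [x / s for x in xs]

# Hypothetical 3-state toy problem -- NOT the task's actual matrices.
A = [[0.8, 0.1, 0.1],   # A[o][s] = P(o_t = o | s_t = s)
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
B = [[0.0, 0.0, 1.0],   # B[s2][s] = P(s_t = s2 | s_{t-1} = s, u): cyclic shift
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
C = softmax([3.0, 0.0, 0.0])   # P(o) := sigma(C), prior preference over outcomes

# A single observation updates beliefs over hidden states (Bayes rule, flat prior)
prior = [1 / 3] * 3
o = 0
posterior = normalise([A[o][s] * prior[s] for s in range(3)])

# The transition matrix then yields the empirical prior over the next states
predictive = [sum(B[s2][s] * posterior[s] for s in range(3)) for s2 in range(3)]
print(posterior, predictive)
```

In the full model each policy supplies its own sequence of $B(u)$ matrices, and the preferences $C$ enter only through policy evaluation, not through this instantaneous state update.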
The precision parameter $ \gamma $ determines an agent’s confidence in its decisions, or expected uncertainty, and is assigned a gamma prior with inverse temperature $ \beta $:
\[ P\left(\gamma\right)=\Gamma\left(1,\beta=\frac{1}{\gamma}\right) \]
The state-space of the MDP comprises 3 primary hidden state factors, $ \mathit{digit} $, $ \mathit{where} $ and $ \mathit{report} $. The $ \mathit{digit} $ factor defines the possible target classes. The $ \mathit{where} $ factor defines regions in visual space to which the agent can saccade. There are 49 such locations arranged in a $7\times 7$ grid and an additional location that the agent must saccade to before making a decision. The $ \mathit{report} $ factor defines control states that the agent may invoke to either report the target class or remain undecided. In all trials the undecided state persists through the first transition, giving the agent enough time to forage for information before having to report its decision. Finally, a variable number of \textit{feature} factors define visual features such as contrast or orientation. While the number of factors is determined a priori by a saliency-map algorithm, the possible states within each feature factor represent the presence of the corresponding feature; for example, 1 and 5 may represent ‘None’ and ‘Strong’ contrast, respectively. We found that \cite{itti1998model}\cite{garcia2012saliency} and \cite{harel2007graph} produced suitable class-contingent saliency maps.
The model considers 3 outcome modalities, \textit{digit}, \textit{where} and \textit{feedback}. The \textit{digit} and \textit{where} outcomes are mapped directly to their corresponding hidden state factors, while the third modality provides the agent with \textit{feedback} in the form of 3 possible outcomes: correct, incorrect or undecided. If the invoked control state from the \textit{report} factor is aligned with the target class the model will observe correct feedback, which is associated with high utility; otherwise it will receive incorrect feedback, which is associated with low utility. Note that the causal structure of the hidden states precludes direct influence of extrinsic reward on the instantaneous belief updates that occur within the subordinate (continuous) level.
Having defined a generative model, the agent’s approximate posterior Q can be computed by inverting the generative model, allowing the agent to form expectations about the hidden states:
\[ \begin{gathered}
Q(\tilde{s},\pi)=Q(\pi)\prod_{t=1}^{T}{Q(s_{t}|\pi)}
\end{gathered} \]
In our gaze-contingent task, visual observations depend upon sequences of saccades requiring the generative model to entertain expectations under different policies, or sequences of actions. The equation above states that the approximate posterior can be factorised by taking the product of the marginal state and policy distributions over time, assuming that control states are approximately independent from one another at each time step. The variational free energy may now be defined with respect to this factorised distribution:
\[ \begin{gathered}
F=-E_{Q\left(\tilde{s},\pi\right)}\left[lnP\left(\tilde{o},\tilde{s},\pi| m\right)\right]-H\left[Q\left(\tilde{s},\pi\right)\right] \\[9pt]
=E_{Q\left(\pi\right)}\left[-E_{Q\left(\tilde{s}|\pi\right)}\left[lnP\left(\tilde{o},\tilde{s}|\pi\right)\right]-H\left[Q\left(\tilde{s}|\pi\right)\right]\right]+KL\left[Q\left(\pi\right)||P\left(\pi\right)\right]\ \\[9pt]
=E_{Q\left(\pi\right)}\left[F_\pi\right]+KL\left[Q\left(\pi\right)||P\left(\pi\right)\right]
\end{gathered} \]
Where $F_\pi$ is the energy of a policy over each time-step:
\[ \begin{gathered}
F_\pi=\ \sum_{\tau} F_{\pi \tau}
\\[12pt]
F_{\pi\tau}=-E_{Q(s_{\tau}|\pi)Q(s_{\tau-1}|\pi)}[\left[\tau \le t\right]\cdot lnP\left(o_\tau | s_\tau\right)+lnP\left(s_\tau | s_{\tau-1},\pi\right)-lnQ\left(s_\tau |\pi\right)]
\end{gathered}
\]
Having defined an objective function, beliefs about the hidden states may be iteratively optimised by gradient descent:
\[ \begin{gathered}
{\dot{\hat{s}}}_\tau^\pi=\partial_{\hat{s}}\ s_\tau^\pi\ \cdot \varepsilon_\tau^\pi\ \\[7pt]
s_{\tau}^\pi=\sigma\left({\hat{s}}_\tau^\pi\right) \\[7pt]
\varepsilon_{\tau}^{\pi}=\ (\hat{A} \cdot o_{\tau}+ {\hat{B}}_{\tau-1}^\pi\ \cdot s_{\tau-1}^{\pi}+{\hat{B}}_\tau^\pi\ \cdot s_{\tau+1}^\pi)- \hat{s}_\tau^\pi \\[5pt]
=\ -\partial_sF
\end{gathered} \]
Solutions to the above equations converge toward posterior expectations that minimize free energy, providing Bayesian estimates of the hidden states that minimize prediction errors $\varepsilon_{\tau}^\pi$, expressed here as free energy gradients. This model encapsulates a single trial of the task lasting approximately 2 seconds, or a maximum of 8 saccades.
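A toy numerical sketch of this descent follows. The two-state messages below are hypothetical stand-ins for $\ln(\hat{A}\cdot o)$ and $\ln(\hat{B}\cdot s_{\tau-1})$, the step size is chosen purely for illustration, and the forward message from $s_{\tau+1}$ is omitted for brevity:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Hypothetical two-state messages (stand-ins for ln(A.o) and ln(B.s_{tau-1}))
ln_A_o = [math.log(0.9), math.log(0.2)]
ln_B_s = [math.log(0.5), math.log(0.5)]
kappa = 0.25                      # illustrative gradient-descent step size

s_hat = [0.0, 0.0]                # log-space expectations
for _ in range(64):
    # prediction error = negative free energy gradient
    eps = [ln_A_o[i] + ln_B_s[i] - s_hat[i] for i in range(2)]
    s_hat = [s_hat[i] + kappa * eps[i] for i in range(2)]

s = softmax(s_hat)                # posterior expectation at convergence
print(s)
```

At the fixed point ($\varepsilon = 0$) the log expectations equal the summed log messages, so the softmax recovers the exact Bayesian posterior, here $0.9 \cdot 0.5 / (0.9 \cdot 0.5 + 0.2 \cdot 0.5) \approx 0.818$ for the first state.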
The nature of the task requires the agent to utilise visual information that cannot be evaluated directly within the MDP. This issue is addressed by supplementing the agent’s generative model with an additional (subordinate) continuous-time model that can accumulate evidence from the visual domain \cite{friston2017graphical}, i.e., directly from the attended pixels. In this model, conditional expectations $\tilde{\mu}$ about proprioceptive and exteroceptive sensory information are encoded by the internal states of the agent’s ‘brain’ in the form of a recognition density $q(x,v,a|\mu)$ that approximates the true posterior $ p(x,v,a|y,m) $, where $ y $ are the values of the attended pixels and the angular displacement of the eye. As before, this density can be optimised by maximising Bayesian model evidence, or minimising variational free energy:
\[ F(\tilde{y},\tilde{\mu})=-lnp(y|m)+KL[q(x,v,a|\mu)||p(x,v,a|y,m)] \]
By assuming a Gaussian form for the recognition density $p(x,v,a|y,m)$, we assume a local quadratic form for the variational free energy \cite{friston2008hierarchical} under the generative model:
\[
\begin{gathered}
lnp\left(\tilde{y},\tilde{v},\tilde{x},\tilde{a}|\tilde{\mu}\right)=\frac{1}{2}ln\left|\tilde{\Pi}\right|-\frac{1}{2}{\tilde{\varepsilon}}^T\tilde{\Pi} \tilde{\varepsilon} \\[8pt]
\tilde{\Pi}\ =\left[\begin{matrix}{\tilde{\Pi}}^v&&\\&{\tilde{\Pi}}^x&\\&&{\tilde{\Pi}}^a\\\end{matrix}\right] \\[9pt]
\tilde{\varepsilon}=\left[\begin{matrix}{\tilde{\varepsilon}}^v=\ \left[\begin{matrix}y\\v^{\left(1\right)}\\\end{matrix}\right]-\left[\begin{matrix}{g}\\\eta\\\end{matrix}\right]\\[9pt]{\tilde{\varepsilon}}^x=\tilde{x}\ -\ f\\[7pt]\tilde{\varepsilon}^a=\ a\ -\ \eta\\\end{matrix}\right]
\end{gathered}
\]
This formulation shows that the probabilistic generative model can be expressed in terms of prediction errors $ \tilde{\varepsilon} $ and their precision $ \Pi $. Estimates of the causal states $v$, their dynamics $x$ and action $a$ are derived from response $g(t)$ and state $ f(t) $ functions, described below. Empirical priors $ \eta $ are descending messages that convey the expected outcomes from the \textit{what} and \textit{where} modalities of the superordinate MDP.
\begin{figure}[hbt!]
\centering
\includegraphics[width=13cm]{Figure2.png}
\caption{\textbf{Predictive Coding Scheme.} By integrating this scheme targets of interest are selected and brought within the agent’s receptive field, inducing proprioceptive and exteroceptive stimuli $ y $ and prediction errors $ \varepsilon $. State-units encoding conditional expectations $ \tilde{\mu} $ are illustrated in black while error-units encoding precision-weighted prediction errors $ \xi $ are illustrated in red. The blurry prediction $ {\tilde{\mathbf{y}}}_\mathbf{e}\ $ on the left side of the image is generated from the prior network $ \mathbf{p}_\mathbf{\theta}\ $ under the weighted sum of random (uncertain) hypotheses $ \mathbf{v}_\mathbf{h} $. Exteroceptive prediction errors $ \mathbf{\varepsilon}_\mathbf{e} $ are derived from the absolute difference between this prediction and the observed stimulus at the sampling location $ \mathbf{x}^\mathbf{o} $. Proprioceptive prediction errors $ \mathbf{\varepsilon}_\mathbf{p} $ are derived from the difference between the current foveal location $ \mathbf{x}^\mathbf{o} $ and the saccade target $ \mathbf{v}^\mathbf{o} $ determined by top-down empirical priors $ \eta $ induced by policy selection in the MDP. As prediction errors are minimized, uncertainty is reduced and predictions become more accurate. MATLAB implementations of this optimisation scheme 'spm\_MDP\_VB\_X' and 'spm\_ADEM' are available to download as part of the SPM toolbox at \url{fil.ion.ucl.ac.uk/spm}}
\end{figure}
The agent interprets sensory information as though it were derived from two distinct modalities or streams; proprioceptive information $ y_{p} $ corresponds to the angular displacement of the eye, or the centre of gaze in extrinsic (cartesian) coordinates. Exteroceptive information $ y_{e} $ corresponds to visual stimuli sampled from a uniform grid $ \mathcal{R} $ of $ 8^{2} $ pixels centred around $ y_{p} $.
\[ \begin{gathered}
\tilde{y}\ =\ g = \left[\begin{matrix}{\tilde{y}}_p\\{\tilde{y}}_e\\\end{matrix}\right]\ =\ \left[\begin{matrix}y_o\\R(y_e,y_o)\\\end{matrix}\right]+\omega \\[10pt]
y_o= f = v_o-x_o \\[10pt]
y_e=\sum_{h}{p_\theta\left(y_e|\rho,\exp(v_h)\right)}
\end{gathered} \]
Where the Gaussian innovations $ \omega $ induce small high-frequency perturbations (1-2 pixels) to the foveal sampling location. The causal states v comprise a 2D scalar target location $ v_{o} $ and digit class probabilities $ v_{h} $, both of which are prescribed by the superordinate MDP level. The final equality shows that competing visual hypotheses are scaled to reflect conditional uncertainty using the entropy of their (softmax) probabilities. The hidden states $x$ are the resulting motion of the eye relative to this location and the transitive values of the probability vector describing the subject’s belief about the target digit.
Posterior beliefs about the underlying causes of visual input $q_{\Theta}(z_{c},z_{d}|y) $ are optimised a priori by a neural network with parameters $ \Theta $. This network encodes the sufficient statistics of a joint distribution over discrete $ z_{d}\equiv v_{h} $ and continuous $ z_{c}\equiv\rho $ factors of variation in the MNIST dataset. Categorical class probabilities $ z_{d} $ are encoded by a Gumbel-Softmax \cite{jang2016categorical}, or Concrete \cite{maddison2016concrete}, distribution $ p_{\Theta}(z_{d}|y)=Gumbel(z_d) $. Variations in within-class visual features such as orientation and width are encoded by a multivariate normal distribution $ q_{\Theta}(z_{c_i}|y)\ =\ \mathcal{N}(\mu_i,\sigma_i^2) $ with a unit Gaussian prior $ p(z_c)=\mathcal{N}(0,1) $. The generative (prior) component of this model $ p_\theta(y| z_{c},z_{d}) $ is a neural network with parameters $ \theta $ that maps learned beliefs $ z_{c} $ and $ z_{d} $ to observations in the visual domain. Under the assumption that the discrete and continuous variables are conditionally independent, the objective function for both the posterior $ \Theta $ and prior $ \theta $ networks may be composed as per (Dupont, 2018) to facilitate the regularisation of the discrete and continuous KL divergence terms during training:
\[ \begin{aligned} L(\Theta,\theta)=\ E&_{q_\Theta(z_{c},z_{d}|y)}[logp_{\theta}(y| z_{c},z_{d})] \\[4pt]
-& r_{c}|KL[{\ q}_{\Theta}(z_{c},y)\ ||\ p(z_{c})]-k_{c}| \\[4pt]
-& r_{d}|KL[{\ q}_{\Theta}(z_{d},y)\ ||\ p(z_{d})]-k_{d}|
\end{aligned} \]
Where $r$ and $k$ are free regularisation parameters. Doing so encourages disentanglement \cite{higgins2017beta} of the latent variables, allowing for each factor of variation in the data to be learned and subsequently coupled with one or more outcome modalities in the MDP. Here we are concerned only with the disentanglement of the target classes and their relationship to the $ \mathit{digit} $ outcome modality, but this technique may prove useful for other experimental paradigms. With this model we may describe the dynamics of the task, including semantic content and ocular-motor dynamics, as differential equations that are integrated with respect to sensory input over the duration of a single saccade ($\sim$200\,ms), or a single state transition within the Markov process.
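The Gumbel-Softmax relaxation used for the discrete latent $z_{d}$ can be made concrete with a short sampling sketch. The logits and temperature below are illustrative values, not the trained network's:

```python
import math
import random

def gumbel_softmax(logits, temperature=0.5, rng=random):
    # Perturb logits with Gumbel(0,1) noise, then apply a tempered softmax.
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + n) / temperature for l, n in zip(logits, g)]
    m = max(z)
    es = [math.exp(v - m) for v in z]
    s = sum(es)
    return [e / s for e in es]

random.seed(0)
logits = [2.0, 0.5, 0.1]           # illustrative unnormalised class scores
sample = gumbel_softmax(logits)    # relaxed (nearly one-hot) sample over 3 classes
print(sample)
```

As the temperature is annealed towards zero the samples approach one-hot vectors, while remaining differentiable with respect to the logits, which is what allows the class probabilities to be trained by gradient descent.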
Communication between the Markov process and the dynamic model is mediated by a link function that transforms ascending prediction error messages ${\tilde{\varepsilon}}^v $ from the continuous-time model into posterior expectations $\tilde{o}$ over outcomes in the Markov model and descending predictions from the Markov model into empirical priors $ \eta $ at the continuous level.
We define a set of reduced (competing) models $ \vartheta $ by collapsing a prior density over each possible outcome $R\in[1,10\times 50]$. Each reduced model $ \vartheta_{m} $ is a prior encoding a visual hypothesis at a target saccade location, evaluated over the duration of the saccade:
\[
\begin{aligned}
{E\left(t\right)}_m=&-ln\, o_{\tau,m}-\int_{0}^{T}{{L\left(t\right)}_mdt} \\[3pt]
{L\left(t\right)}_m=&lnP\left(\tilde{y}\left(t\right)|\vartheta_m\right)-lnP\left(\tilde{y}\left(t\right)|\eta\right)
\end{aligned}
\]
\[
\\[4pt]
\begin{array}{cc}
o_\tau=\sum \pi_\pi\cdot o_\pi^\tau & \vartheta=\sum\vartheta_m\cdot o_m^\tau
\end{array}
\]
The free energy E at the last time-step of the sequence takes the place of posterior expectations over outcomes $ o $ in the MDP. See \cite{friston2017graphical} for a neurobiological interpretation of this function.
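The way accumulated free energies score the reduced models can be sketched as a post-hoc Bayesian model comparison. The free energies below are made up for illustration; lower free energy means more evidence for that visual hypothesis:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Made-up accumulated free energies E_m for three competing reduced models
# (visual hypotheses); lower free energy = more evidence for that hypothesis.
E = [1.2, 3.5, 4.0]
posterior_outcomes = softmax([-e for e in E])  # stands in for o in the MDP
print(posterior_outcomes)
```

The resulting probability vector plays the role of the ascending outcome message, so a hypothesis that accumulates substantially lower free energy during a saccade dominates the posterior over outcomes at the discrete level.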
\section{Results}
We first demonstrate that our model can categorize digits based on limited foveated sampling. In this example we see that the model builds its beliefs about the identity of the digit in only three saccades (Figure 3).
\begin{figure}[hbt!]
\centering
\includegraphics[width=11cm]{Figure3.png}
\caption{\textbf{Model Outcomes}. Column 1) Unfoveated task stimulus $ y $, with a red dot marking the foveation location (which neither the model nor human participants could see). Column 2) Task stimulus foveated around $ x_{p} $ (observable by model and human participants). Column 3) The agent’s visual expectation about the global scene generated from the prior network $ p_{\theta} $. Column 4) The agent’s visual expectation about the stimulus at $x_{p} $. Column 5) Posterior expectation about the target location $ o_{l} $. Column 6) Ascending posterior expectation about the target digit $ o_{h} $.}
\end{figure}
To estimate subject-specific parameters from observed behaviour we specify an objective model $ m^{o} $ in terms of the likelihood $ p (y |\theta,\lambda(\vartheta),u,m^{o}) $ of (the participant’s) behavioural responses $ y $ and a prior over the unknown parameters $ p(\vartheta,\theta|m^{o}) $. The implicit generative model $ \lambda(\vartheta) $, with subjective parameters $ \vartheta\in x,v,\rho $, in the likelihood function provides a differentiable mapping from task stimuli $u$ to participant’s behaviour $y$. The unknown parameters of the objective model $ \theta\in C,\beta $ correspond to putative neurobiological quantities that we wish to infer; we focus here on the precision of prior preferences over outcomes C, which we presume to be encoded by the ventromedial prefrontal cortex \cite{paulus2003ventromedial} and the inverse precision of beliefs about control states $ \beta $, which we presume to be encoded by dopaminergic projections from ventral tegmental area and substantia nigra to the striatum \cite{fitzgerald2015dopamine}\cite{schwartenbeck2015dopaminergic}.
We optimise this objective model to recover estimates of $C$ and $\beta$ with respect to observed behaviour (Figure 4) using the same variational technique as the subjective model, i.e., gradient descent on variational free energy \cite{daunizeau2009variational}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=13cm]{Figure4.png}
\caption{\textbf{Model inversion with respect to observed behaviour}. A) The trajectory of the estimated parameters in parameter space. B) The lower bound of the log-model evidence approximated as variational free energy. C) Final conditional parameter estimates.}
\end{figure}
\begin{figure}[hbt!]
\centering
\includegraphics[width=13cm]{Figure5.png}
\caption{\textbf{Behavioural metrics as a function of model parameters.} Red (horizontal) lines indicate the mean human responses. Green (vertical) lines indicate the parameter values recovered from model inversion. The top row displays these metrics as a function of policy precision $ \gamma $ with inverse temperature $ \beta $. As $ \beta $ increases, or as the agent's confidence in its actions decreases, accuracy declines, fixations become longer and the total number of saccades increases. The top-right figure shows the average percentage of (unique) attended pixels as a function of the total number of saccades. The bottom row shows that the accuracy of the model increases as its intrinsic motivation to observe \textit{correct} feedback ($C$) increases, and that for $C \leq 3$, the model is not incentivised to make a decision. The bottom-right figure shows the average free energy of the model as a function of the model parameters.}
\end{figure}
By simulating 100 trials for each digit class with the parameters recovered from model inversion, we find that on average, the correct digit is inferred on 88\% of trials after 5.02 saccades (Figure 5). This corresponds to 35.7\% pixels viewed relative to the total number of pixels in the image. The code used to generate these results is available to download from \url{https://github.com/v2c08/M-BMVS}.
\section{Discussion}
\section{Introduction}
Supernova remnants (SNRs) are the result of the interaction of a supernova explosion with its ambient medium.
The X-ray and radio-bright shell characteristic of young SNRs is composed of shocked ambient medium and
stellar ejecta. Internal to the reverse shock there can be some stellar ejecta that have yet to encounter the reverse shock \citep{mckee74}.
These ejecta were initially heated by the passage of the blast wave inside the star, but have since cooled due to adiabatic expansion.
Because this material is internal to a shell bright in X-rays and likely also in the UV, it can be photoionised.
Several hundreds of years after the supernova event, the remnant still retains some imprint of the explosion;
this is particularly the case for the unshocked ejecta.
SNRs have an effect on their surroundings, not only on the shocked ambient medium, but also on the
still to-be-shocked neighbourhood of the SNR. They are bright X-ray sources, as well as likely the sites of cosmic ray acceleration \citep{hillas05}.
Both the high-energy photons and the cosmic rays can deposit energy into the surroundings of the SNR;
for instance, heating and ionising nearby molecular clouds. Furthermore, during its lifetime and its
pre-SN stage, the progenitor star sculpts its ambient medium; for example, through stellar winds and ionising radiation.
The environment of the SNR is therefore a diagnostic of the star's pre-SN life, and of the SNR itself.
Tycho's SNR (SN 1572, G120.1+1.4, hereafter Tycho) is a young SNR, whose reverse shock might not have yet heated
all of the stellar ejecta from the explosion. It is the result of a Type Ia event, as evidenced
from the historical records of the light curve \citep{baade43}, and from the optical spectrum as recovered from
light echoes \citep{krause08b,rest08}.
From comparison of the X-ray spectra to hydrodynamical and spectral models, \cite{badenes06} concluded that the
scenario that best fit the data is one in which 1.3~$M_{\odot}$\ of material were ejected at the time of the explosion
into an ambient density of $\sim0.6-3$~cm$^{-3}$. There is evidence that the density is higher in the
north-east of the remnant, from H$\alpha$ \citep{ghavamian00}, molecular gas \citep{lee04, zhou16},
and dust observations \citep{williams13}. The work of \cite{woods17} placed strict upper limits on the temperature and luminosity of
Tycho's progenitor from the observed fraction of neutrals in the atomic gas,
pointing to the merger of a double white dwarf binary as the most viable scenario for Tycho's SN explosion.
On the other hand, the molecular shell found in \cite{zhou16} is more consistent with a single-degenerate scenario.
The remnant has been studied extensively, including at wavelengths that probe the unshocked ejecta.
\cite{lopez15} observed it with \textit{NuSTAR}, but they did not detect any emission associated with the decay of
radioactive $^{44}$Ti, point-like or extended.
\cite{gomez12} observed it in the infrared with \textit{Herschel} and \textit{Spitzer}, and did not detect a cool dust component in the
innermost region of unshocked ejecta, although they did not specifically look for line emission from photoionised, cold material.
At low radio frequencies it has been observed with the Very Large Array (VLA) at 330~MHz \citep{katz-stone00}, and several times at
1.4 GHz \citep{reynoso97, katz-stone00, williams16}. It has also been observed at 660~MHz with the Westerbork Synthesis Radio
Telescope \cite[WSRT,][]{duin75}, and, at lower resolution, at 408~MHz as part of the Canadian Galactic Plane Survey \cite[CGPS,][]{kothes06}.
In this paper we present new observations of Tycho with the LOw Frequency ARray \cite[LOFAR, ][]{vanhaarlem13}, both with the
instrument's High-Band Antenna (HBA, $120-168$~MHz) and the Low-Band Antenna (LBA, $40-75$~MHz).
We compare these maps with higher frequency observations, and we detect localised free-free absorption from
free electrons along the line-of-sight, from foreground material, and possibly also
from material internal to the SNR reverse shock.
We cannot use the measured absorption value to estimate how much mass there is in unshocked ejecta, although
our results suggest that if unshocked material is present, it is in a combination of relatively highly ionised, cold, and significantly clumped states.
The ionised ambient material could be either the diffuse cavity surrounding Tycho or its neighbouring molecular clouds.
Both scenarios have implications for the ionising source.
\section{Observations and data reduction}
\subsection{Observations}
We observed Tycho's SNR with LOFAR under project LC10\_011. The Low-Band Antenna (LBA) observations
were centred at RA=00:25:21.5, Dec=+64:08:26.9, with a time on-source of
10 hours. The data were taken on 2018/05/18, in the LBA-Outer configuration, using 8 bit sampling, 1 second integration,
and a frequency resolution of 64 channels per sub-band.
The central frequency was 53.2 MHz, and the total bandwidth was 43.6 MHz.
A second beam was placed on calibrator 3C48 for the length of the observation.
For the High-Band Antenna (HBA) observations we made use of the possibility of co-observing with the LOFAR
Two Metre Sky Survey \cite[LoTSS,][]{shimwell17}. We identified the LoTSS pointing closest to Tycho, P007+64 (centred at
RA=00:30:40.8, Dec=+63:36:57.9), and requested that it be observed during LOFAR cycle 10 as part of LC10\_011.
The observations were made with the standard LoTSS settings: 8 hours on-source, 48 MHz bandwidth, and an additional
10 minutes at the beginning and end of the observations to observe the calibrators (3C48 and 3C147, in this case).
\subsection{Low-Band Antenna}
The LBA data were reduced with the LOFAR Low-Frequency Pipeline \citep{degasperin19}. The pipeline calibrates the calibrator
and transfers the solutions to the target, taking into account the main systematic effects in the LOFAR telescope,
such as clock drift, polarisation misalignment, ionospheric delay, Faraday rotation, ionospheric scintillation, beam shape, and bandpass.
Due to noise, we had to flag all the data at frequencies less than 40~MHz, as well as two LOFAR stations, CS013 and CS031.
From the calibrator solutions we knew that there were very good ionospheric conditions during the observation, with
almost no Faraday rotation (the calibrator was observed for the full duration of the observation,
so we knew the ionosphere was good throughout). This allowed us to perform one round of self-calibration from our first image of the source,
rather than from a sky model made at a different frequency.
The pipeline split the data into two frequency chunks, one centred at 48.3~MHz, and another centred at 67.0~MHz, which
were imaged separately. We imaged the data with \texttt{wsclean} \citep{offringa14}, which allows for multi-scale, multi-frequency
deconvolution with w-projections, and for applying the LOFAR beam. The visibilities were weighted with a Briggs parameter of zero \citep{briggs95}.
In order to filter out large-scale structure and to ensure a common resolution among the maps, we used a $u-v$ range of
$30-5,000~\lambda$. The two \lq full-bandwidth' LBA images centred at 48.3~MHz and 67.0~MHz are shown in Fig. \ref{fig:lba_maps}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{lba_maps-eps-converted-to.pdf}
\caption{Tycho SNR as observed with the LOFAR Low-Band Antenna (LBA). The LBA bandwidth was split to make these two images,
centred at 48.3~MHz (left) and 67.0~MHz (right), each 18~MHz wide. The elliptical beam size is 41\arcsec$\times$31\arcsec, with position angle $56^{\circ}$,
and the pixel size is 10\arcsec\ for both maps. The local rms noise is
0.03 Jy~bm$^{-1}$ for the 67.0~MHz map and 0.08 Jy~bm$^{-1}$ for the 48.3~MHz map. The flux density scale in both maps is in Jy~bm$^{-1}$.
\label{fig:lba_maps}}
\end{figure*}
In addition to the broadband maps, to search for spectral curvature, we made a series of narrow-band images, each 1.3~MHz wide, centred at
40.1, 42.5, 44.8, 47.1, 49.5, 51.8, 54.2, 56.5, 58.9, 61.2, 63.5, 65.8, 66.9, 68.1, 70.5, 72.8, and 75.1 MHz.
These maps were also made with a common $u-v$ range of $30-5,000~\lambda$.
\subsection{High Band Antenna}
The HBA data were reduced in a direction-independent manner with the Pre-Facet Calibration Pipeline \citep{vanweeren16}.
The pipeline obtains diagonal solutions towards the calibrator, performs clock-TEC separation
(distinguishing clock offsets and drifts from signal delays due to the electron column density in the ionosphere),
and transfers the calibrator amplitudes and clock corrections to the target data.
The calibrated data products were then imaged with the latest version of the ddf-pipeline\footnote{Version 2.2, \url{https://github.com/mhardcastle/ddf-pipeline/}}
\citep{shimwell19,tasseinprep},
which is the method used for reducing data from the LoTSS.
The pipeline carries out several iterations of direction-dependent self-calibration, using
DDFacet for imaging \citep{tasse18} and KillMS for calibration \citep{tasse14a,tasse14b,smirnov15}.
The resulting HBA image is shown in Fig. \ref{fig:hba_map}. The pipeline also produced three narrow-band images at 128, 144, and 160 MHz. The LOFAR
HBA in-band spectral index is unreliable, but in order to use these narrow-band maps in our analysis we bootstrapped the maps to the expected
flux densities of neighbouring sources in the field, from the HBA broadband map.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{hba_map-eps-converted-to.pdf}
\caption{Tycho SNR as observed with the LOFAR High-Band Antenna (HBA). The central frequency is 144~MHz, the bandwidth is 48 MHz,
the beam size is 6\arcsec, the pixel size is 1.5\arcsec, and the local rms noise is 1 mJy~bm$^{-1}$. The flux density scale is in Jy~bm$^{-1}$.
\label{fig:hba_map}}
\end{figure}
\subsection{Archival data}
\label{sec:archival}
We obtained the FITS files for the 327 MHz Very Large Array (VLA) observation of Tycho carried out in 1991-1993 \citep{katz-stone00},
as well as for the 1.4~GHz VLA observation carried out in 2013-2015 \citep{williams16}.
\cite{katz-stone00} note that their map is sensitive to scales between 8\arcsec\ and 30\arcmin, which corresponds
to $114-25,800~\lambda$. The \cite{williams16} L-band map, combining the VLA A, B, C, and D configurations, is
sensitive to scales between 1.3\arcsec\ and 16\arcmin\ ($212-15,800~\lambda$).
The integrated flux density of the 1382~MHz map from \cite{williams16} is 41.7~Jy, and this is the value that we used for the
analysis. However, if we directly measure the integrated flux density of the 327~MHz image, it is 121.8~Jy. This is 115\%
of the expected value for $S_\mathrm{1GHz}=56$~Jy and $\alpha=0.58$ \citep{green17}, and 117\% for $S_\mathrm{1GHz}=52.3$~Jy
and $\alpha=0.63$, which are the best-fit values we find from a compilation of literature results (see discussion in section
\ref{sec:flux}). We do not measure a level of background in the FITS image that accounts for this difference.
Unfortunately, \cite{katz-stone00} do not report the integrated flux density for their 327~MHz observation.
Our analysis relies on the localised deviation from power-law behaviour at low frequencies due to free-free absorption from ionised
material along the line-of-sight (we discuss the method in detail in section \ref{sec:method}).
The 327~MHz and 1382~MHz maps provide the fit with the information about the spectral
behaviour of the source when no absorption is present. If we take the flux density at 327~MHz to be the 121.8~Jy that we measure directly
from the FITS file, we find it disproportionately affects the measured absorption, by setting an artificially high spectral index value for
any given pixel\footnote{The 121.8~Jy and 41.7~Jy values at 327~MHz and 1382~MHz correspond to a spectral index $\alpha_{327/1382}=0.74$,
much higher than the overall spectral index of the source.}, which then requires a much larger mass of absorbing material to account
for the flux densities at LOFAR frequencies. For this reason, we normalised the flux density of the 327~MHz map to 105.7~Jy, according to the
best-fit power law results for the compiled literature values as shown in section \ref{sec:flux}.
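The normalisation factor follows directly from the best-fit power law of section \ref{sec:flux}; as a quick numerical check (plain Python, variable names ours):

```python
# Expected 327 MHz flux density from the best-fit power law:
# S_nu = S_1GHz * (nu / 1 GHz)^(-alpha), with S_1GHz = 52.3 Jy, alpha = 0.63
S_1GHz, alpha = 52.3, 0.63
S_327_expected = S_1GHz * (327.0 / 1000.0) ** (-alpha)  # ~105.7 Jy

# Rescaling factor applied to the archival map (measured integrated 121.8 Jy)
rescale = S_327_expected / 121.8
```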
When comparing interferometric maps, it is important to take into account the scales probed by the different instruments.
When the emission is perfectly deconvolved, it
is possible to compare higher resolution maps with lower resolution maps by simply smoothing them to a common resolution.
However, the short-baseline $u-v$ coverage matters if interferometers do not probe the same scales, especially for
Galactic observations, for which the sources might be embedded in large-scale diffuse emission.
We summarise the $u-v$ scales probed by the maps used in our analysis in Table \ref{tb:flux}.
Our LOFAR maps are sensitive to large angular scales, which might result in additional large-scale continuum emission that is
resolved out by the VLA maps. This would result in a spectral index steepening.
We note this issue as a possible source of error.
\section{Results}
\subsection{Total flux density}
\label{sec:flux}
We report the total flux density of Tycho as seen with the LOFAR telescope LBA and HBA in Table \ref{tb:flux}.
We also include the values from the 327~MHz and 1382~MHz VLA observations \citep{katz-stone00, williams16} which we relied on for the analysis.
\begin{deluxetable}{c|cccc}[h]
\tablecaption{Flux densities of Tycho SNR \label{tab:fluxes}}
\tablehead{
\colhead{Freq} & \colhead{Flux density} & \colhead{Error} & \colhead{Year} & \colhead{$\lambda$ coverage}\\
\colhead{(MHz)} & \colhead{(Jy)} & \colhead{(Jy)} & \colhead{} & \colhead{} \\
}
\startdata
48.3 & 334 & 33 & 2018 & $30-5,000~\lambda$ \\
67.0 & 275 & 27 & 2018 & $30-5,000~\lambda$ \\
144.6 & 163 & 16 & 2018 & $50-50,000~\lambda$ \\
\hline
327 & 105.7& 10.5 & 1995 & $114-25,800 \lambda$ \\
1382 & 41.7 & 4.2 & 2013 & $212-15,800 \lambda$ \\
\enddata
\tablecomments{Observations at 327~MHz and 1382~MHz were taken with the
VLA and are described by \cite{katz-stone00} and \cite{williams16}, respectively.
See discussion in section \ref{sec:archival} for 327~MHz flux density.}
\label{tb:flux}
\end{deluxetable}
We compiled a series of radio flux densities in the literature, and plotted the LOFAR values alongside them
(Fig. \ref{fig:radio_spectrum}).
Fitting a function of the form $S_\nu = S_\mathrm{1GHz} \left(\frac{\nu}{\mathrm{1GHz}}\right)^{-\alpha}$ gives
a best-fit $S_\mathrm{1GHz}=52.3\pm2.0$~Jy and $\alpha=0.63\pm0.02$, whereas the value listed in the Green
SNRs catalogue is $S_\mathrm{1GHz}=56$~Jy and $\alpha=0.58$ \citep{green17}.
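The fit can be reproduced with a simple least-squares fit in log-log space; the sketch below is a minimal illustration using only the flux densities of Table \ref{tb:flux} (the fit quoted in the text uses the full literature compilation of Fig. \ref{fig:radio_spectrum}):

```python
import numpy as np

# Flux densities from Table 1 (frequency in GHz, flux density in Jy);
# illustrative subset, not the full literature compilation
nu_ghz = np.array([0.0483, 0.0670, 0.1446, 0.3270, 1.3820])
s_jy = np.array([334.0, 275.0, 163.0, 105.7, 41.7])

# log10 S = log10 S_1GHz - alpha * log10(nu / 1 GHz)
slope, intercept = np.polyfit(np.log10(nu_ghz), np.log10(s_jy), 1)
alpha = -slope
S_1GHz = 10.0 ** intercept
```

With only these five points the fit lands close to, but not exactly at, the literature best-fit values, since the compilation contains many more measurements.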
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{radio_spectrum.pdf}
\caption{Radio spectrum of Tycho, including measurements from this work (in blue). The green line corresponds
to the power-law spectral index (PL SPX) of 0.58 reported in \cite{green17}, and the yellow line is the best-fit (BF) power-law
spectral index from these data points. The literature (lit) values in red are
taken from: \cite{klein79}, \cite{green75}, \cite{hurley-walker09}, \cite{katz-stone00},
\cite{kothes06}, \cite{arnaud16}, \cite{gao11}, \cite{langston00}, \cite{williams66}, \cite{scott71}, \cite{artyukh69},
\cite{bennett63}, \cite{fanti74}, \cite{conway65}, \cite{kellermann69}, \cite{horton69}.
\label{fig:radio_spectrum}}
\end{figure}
The systematic calibration errors in the LOFAR flux scale are of the order of 10\%, and dominate the uncertainties rather than
the noise. For this reason we adopt 10\% errors when we report the integrated flux densities of Tycho in the broadband images
in Table \ref{tb:flux} and in Fig. \ref{fig:radio_spectrum}. However, the 10\% errors apply to the total flux scale rather than to the disagreement between in-band measurements.
They are therefore an over-estimate for the purposes
of our analysis (our fits result in residuals that are much smaller than the error bars).
The fact that we do not know the statistical errors of the flux densities presents an issue for the analysis.
In order to solve this problem,
we artificially shrank the error bars of the LOFAR images (see Fig. \ref{fig:lofar_spectrum}) until
the reduced $\chi^2$ of the best-fit power-law for these points was 1. This provides us with a more meaningful estimate
of the errors in our pixel-by-pixel analysis.
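This rescaling step can be sketched as follows (a minimal illustration; the function and variable names are ours, not from any pipeline):

```python
import numpy as np

def rescale_errors(residuals, sigma, n_params):
    # Scale the error bars so that the reduced chi^2 of the current
    # best fit becomes exactly 1: sigma_new = sigma * sqrt(chi2_red)
    dof = residuals.size - n_params
    chi2_red = np.sum((residuals / sigma) ** 2) / dof
    return sigma * np.sqrt(chi2_red)
```

Shrinking (or inflating) all error bars by the same factor leaves the best-fit parameters unchanged, but it makes the subsequent $\Delta\chi^2$ comparisons between models meaningful.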
The flux densities of the LBA narrow-band maps are plotted in Fig. \ref{fig:lofar_spectrum}.
If we only consider the LOFAR LBA and HBA results, we measure a steeper spectral index than when
we take into account measurements at higher frequencies ($\alpha=0.67$ instead of $\alpha=0.58$ or $\alpha=0.63$).
The best-fit value of $\alpha$ for the LOFAR points ($\alpha=0.63$) results in a $\Delta\chi^2=23.7$ improvement over the fixed $\alpha=0.58$ scenario,
for one additional degree of freedom.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{lofar_spectrum.pdf}
\caption{Radio spectrum of Tycho at LOFAR frequencies. The magenta points correspond to the full bandwidth maps,
and the blue points correspond to the narrow band maps.
The green line corresponds
to the power-law spectral index (PL SPX) of 0.58 reported in \cite{green17}, and the blue line is the best-fit (BF) power-law
spectral index from the LOFAR data points.
The error bars have been normalised so that the reduced
$\chi^2$ of the best-fit power-law (in blue) is equal to 1, but we note that the uncertainties in the
LOFAR in-band have not been systematically analysed and can be unreliable.
Our measurements agree with earlier reports that the radio spectrum of Tycho steepens at low radio frequencies.
\label{fig:lofar_spectrum}}
\end{figure}
\subsection{Model parameters: external absorption}
\label{sec:method}
A synchrotron source with spectrum $S_\nu \propto \nu^{-\alpha}$ that is subject to free-free absorption
from cold, ionised, ISM material along the line of sight results in the following radio spectrum:
\begin{equation}
S_\nu = S_0 \left( \frac{\nu}{\nu_0} \right)^{-\alpha} \, e^{-\tau_{\nu, \mathrm{ISM}}},
\label{fitting}
\end{equation}
where \citep{rybicky79}:
\begin{equation}
\tau_\nu = 3.014 \times 10^{4} \, Z \,\left( \frac{T}{\rm{K}} \right)^{-3/2} \left( \frac{\nu}{\rm{MHz}} \right)^{-2} \left( \frac{{EM}}{\rm{pc \,cm}^{-6}} \right) g_{\mathrm{ff}},
\label{ff_tau}
\end{equation}
$Ze$ is the charge of the free-free absorbing ions, $T$ is the temperature of the plasma, $EM\equiv \int_{0}^{s} n_\mathrm{e}^2 ds'$ is the emission measure,
$n_\mathrm{e}$ is the number density of electrons,
and $g_{\mathrm{ff}}$ is a Gaunt factor, given by
\begin{equation}
g_{\mathrm{ff}} =
\begin{cases}
\ln\left[49.55 \, Z^{-1} \left(\frac{\nu}{\rm{MHz}}\right)^{-1} \right] + 1.5 \ln \frac{T}{\mathrm{K}} & \text{for} \,\,\, \frac{\nu}{\rm{MHz}} \ll \left(\frac{T}{\mathrm{K}}\right)^{3/2}, \\
1 & \text{for} \,\,\, \frac{\nu}{\rm{MHz}} \gg \left(\frac{T}{\mathrm{K}}\right)^{3/2}.
\end{cases}
\end{equation}
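A minimal numerical sketch of equations \ref{fitting} and \ref{ff_tau} (helper names are ours; the default $Z=1$ is appropriate for hydrogen-dominated ISM material, whereas the internal-ejecta fits below assume $Z=3$):

```python
import numpy as np

def gaunt_ff(nu_mhz, temp_k, Z=1.0):
    # Low-frequency Gaunt factor; tends to 1 in the high-frequency limit
    g = np.log(49.55 / (Z * nu_mhz)) + 1.5 * np.log(temp_k)
    return np.maximum(g, 1.0)

def tau_ff(nu_mhz, em_pc_cm6, temp_k, Z=1.0):
    # Free-free optical depth (equation 2); EM in pc cm^-6, T in K
    return (3.014e4 * Z * temp_k ** -1.5 * nu_mhz ** -2.0
            * em_pc_cm6 * gaunt_ff(nu_mhz, temp_k, Z))

def absorbed_power_law(nu_mhz, s0_jy, alpha, em_pc_cm6, temp_k,
                       Z=1.0, nu0_mhz=40.0):
    # Synchrotron power law with external free-free absorption (equation 1)
    return (s0_jy * (nu_mhz / nu0_mhz) ** -alpha
            * np.exp(-tau_ff(nu_mhz, em_pc_cm6, temp_k, Z)))
```

For $T=100$~K and $Z=1$, an emission measure of 1~pc~cm$^{-6}$ gives $\tau_{40}\approx0.13$, so optical depths of order 0.1 at the bottom of the LBA band correspond to quite modest emission measures for cold ionised gas.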
We convolved all the images to a resolution of 41\arcsec, and performed a pixel-by-pixel fit (with a pixel size of 10\arcsec) to equation \ref{fitting}. The
results are plotted in Fig. \ref{fig:results}. For each pixel, we fitted for an amplitude $S_0$, the spectral index $\alpha$,
and the optical depth for the ISM material at 40~MHz $\tau_{40, \mathrm{ISM}}$. As errors, we plot the diagonal term of the covariance
matrix corresponding to each parameter.
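Schematically, the pixel-by-pixel fit loops a three-parameter least-squares fit of equation \ref{fitting} over the convolved maps; below is a toy version on a synthetic $2\times2$ cube (SciPy's \texttt{curve\_fit} stands in for whichever optimiser one prefers, and the $\nu^{-2}$ scaling of $\tau$ neglects the slow Gaunt-factor frequency dependence):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(nu_mhz, s0, alpha, tau40):
    # Equation 1 with tau parameterised by its value at 40 MHz
    # and a nu^-2 scaling (neglecting the slow Gaunt-factor term)
    attenuation = np.exp(-tau40 * (nu_mhz / 40.0) ** -2.0)
    return s0 * (nu_mhz / 40.0) ** -alpha * attenuation

nu = np.array([40.0, 48.3, 67.0, 144.6, 327.0, 1382.0])  # MHz
true_params = (1.0, 0.63, 0.5)                            # s0, alpha, tau40
cube = np.tile(model(nu, *true_params), (2, 2, 1))        # toy (y, x, freq) cube

fit_maps = np.zeros((2, 2, 3))                            # s0, alpha, tau40 maps
for iy in range(2):
    for ix in range(2):
        popt, _ = curve_fit(model, nu, cube[iy, ix], p0=(1.0, 0.6, 0.1))
        fit_maps[iy, ix] = popt
```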
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{results_only_ext_witherr-eps-converted-to.pdf}
\caption{Results of fitting equation \ref{fitting} to the maps. For each pixel we fitted for amplitude $S_0$, the spectral index $\alpha$,
and the optical depth for the ISM material at 40~MHz $\tau_{40, \mathrm{ISM}}$. The units of the $S_0$ map on the left are Jy~bm$^{-1}$.
The errors are the diagonal term of the covariance
matrix corresponding to each parameter.
\label{fig:results}}
\end{figure*}
We also show the fit results for three integrated regions that show external absorption (see Fig. \ref{fig:results}, right panel): the region towards the north-east,
the absorbed region in the centre, and the whole rim of the SNR. These regions are labeled in Fig. \ref{fig:hba_reg}, and their spectral energy distribution (SED)
along with the best-fit
results are shown. The parameters $\alpha$ and $\tau_{40,\mathrm{ext}}$ are correlated (see contour plots in Fig. \ref{fig:conf_intervals}), but for two of the
three regions we require absorption at the $3\sigma$ level or higher.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{abs_regions_smallerr.pdf}
\caption{HBA map with overlaid regions of analysis. The values of $f$, $\tau_{40}$ and $\alpha$ are unitless. For all regions, the errors were rescaled in such a way that the best-fit
power law has a reduced $\chi^2$ of 1. The top plots and the bottom-right plot (corresponding to the green, red, and blue regions as overlaid on Tycho)
are fitted including external absorption (in blue, the
best-fit unabsorbed power-law is in green), and in all cases including the absorption term improves the fit: with a $\Delta\chi^2=16$ for
\lq EXT ABS NORTH', a $\Delta\chi^2=4$ for \lq EXT ABS CENTRE', and a $\Delta\chi^2=10.5$ for \lq RIM' (in all cases, for an additional
degree of freedom). The bottom-left plot corresponds to the region of possible internal absorption.
The mask of the reverse shock radius is plotted in yellow over the map of Tycho. In the legends, \lq UL' stands for \lq upper limit' and
\lq PL' stands for \lq power-law'.
\label{fig:hba_reg}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{conf_intervals}
\caption{Contour plots for the three regions showing external absorption in Fig. \ref{fig:hba_reg}: north (shown in red over Tycho in Fig. \ref{fig:hba_reg}), centre (in blue),
and rim (in red). Plotted are the $1\sigma$, $2\sigma$, and $3\sigma$ confidence intervals for the parameters $\alpha$ and $\tau_{40,\mathrm{ext}}$ for each of the regions.
Only for one region, the centre, is $\tau_{40,\mathrm{ext}}=0$ (no absorption) not excluded at the $3\sigma$ level. For the other two regions, in particular for the northern region
on which we base our analysis, we require the presence of absorption along the line-of-sight at the $3\sigma$ level.
\label{fig:conf_intervals}}
\end{figure*}
\subsection{Model parameters: internal absorption}
A synchrotron source that is subject to internal free-free absorption
from its cold, ionised, unshocked ejecta will have a dimming factor that goes as $(f + (1-f) e^{-\tau_\nu})$, where $f$ is
the fraction of the synchrotron emission that is produced by the front side of the shell and, therefore, cannot be absorbed by its
internal material. This factor multiplies equation \ref{fitting} resulting in the following radio spectrum:
\begin{equation}
S_\nu = S_0 \left( \frac{\nu}{\nu_0} \right)^{-\alpha} (f + (1-f)e ^{-\tau_{\nu, \mathrm{int}}}) \, e^{-\tau_{\nu, \mathrm{ISM}}}.
\label{fitting_all}
\end{equation}
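The internal dimming factor of equation \ref{fitting_all} can be sketched as follows (function names are ours; the $\nu^{-2}$ scaling of $\tau$ again neglects the Gaunt term):

```python
import numpy as np

def internal_dimming(tau_int, f=0.5):
    # A fraction f of the shell's synchrotron emission comes from the
    # near side and escapes unabsorbed; the far side is attenuated
    return f + (1.0 - f) * np.exp(-tau_int)

def shell_spectrum(nu_mhz, s0, alpha, tau40_int, tau40_ism,
                   f=0.5, nu0_mhz=40.0):
    # Equation 3: power law x internal dimming x external absorption
    scale = (nu_mhz / nu0_mhz) ** -2.0
    return (s0 * (nu_mhz / nu0_mhz) ** -alpha
            * internal_dimming(tau40_int * scale, f)
            * np.exp(-tau40_ism * scale))
```

Note the asymptotics: for $\tau_\mathrm{int}\rightarrow\infty$ the spectrum flattens to $f$ times the unabsorbed power law rather than vanishing, which is what distinguishes internal from external absorption at the lowest frequencies.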
Internal free-free absorption can only occur in the region inside the projected reverse shock, since there cannot be unshocked absorbing material outside the reverse shock.
\cite{warren05} found the reverse shock in Tycho's SNR to have a radius of 183\arcsec\ and centre
RA=0:25:19.40, Dec=+64:08:13.98, from principal component analysis of X-ray data. We measured the flux density for
each image for the region internal to the reverse shock, with the aim to look for internal
absorption.
We do not find any external absorption in the region internal to the reverse shock, save for two clumps in the centre
of the SNR (Fig. \ref{fig:results}). To simplify our fit, we therefore
removed the area of absorption in the centre (the blue region in Fig. \ref{fig:hba_reg}) from our area of internal absorption
(the yellow region in Fig. \ref{fig:hba_reg}), and just fitted for an amplitude, the parameter $f$, and an internal optical depth:
\begin{equation}
S_\nu = S_0 \left( \frac{\nu}{\nu_0} \right)^{-\alpha} (f + (1-f)e ^{-\tau_{\nu, \mathrm{int}}}).
\label{fitting_int}
\end{equation}
As described in section \ref{sec:flux}, we rescaled the error bars in such a way that
the reduced $\chi^2$ of the best-fit power-law ($S_\nu = S_0 \left( \frac{\nu}{\nu_0} \right)^{-\alpha}$, with no absorbing component) was 1. The best-fit power-law
for this region corresponds to $\alpha=0.63$. From here, we compared how including an
internal absorbing component improved the fit.
Setting $f=0.5$ (that is, fixing the synchrotron emission such that half comes from the back and half comes from the front
of the shell) gives a best-fit $\alpha=0.63$, $\tau_{40,\mathrm{int}}=3\times10^{-8}$.
This means that the best-fit value for internal absorption with $f=0.5$ corresponds to no internal absorption.
Keeping $f=0.5$ fixed,
$\tau_{40,\mathrm{int}}=0.11$ gives a $\Delta \chi^2=4$ (with respect to the best-fit result).
We take this to be the $2\sigma$ upper limit estimate on the
internal optical depth, and hence on the internal emission measure $EM_\mathrm{int}$ (for $T=100$~K, $Z=3$).
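The $\Delta\chi^2=4$ criterion ($2\sigma$ for one interesting parameter) can be sketched as a simple scan over the optical depth (the quadratic $\chi^2$ curve below is illustrative, not our actual likelihood):

```python
import numpy as np

def delta_chi2_upper_limit(chi2_of_tau, tau_grid, delta=4.0):
    # Walk up a grid in tau until chi^2 exceeds chi2_min + delta;
    # delta = 4 corresponds to a 2-sigma bound on one parameter
    chi2 = np.array([chi2_of_tau(t) for t in tau_grid])
    above = tau_grid[chi2 > chi2.min() + delta]
    return above.min() if above.size else None

# Illustrative chi^2 curve with its minimum at tau = 0
tau_ul = delta_chi2_upper_limit(lambda t: 330.0 * t ** 2,
                                np.linspace(0.0, 0.5, 5001))
```

With this toy curve the scan returns $\tau\approx0.11$, echoing the upper limit quoted above.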
Alternatively, if we fit a region that shows internal absorption with a power-law, the spectral
index flattens due to the presence of absorption. \cite{katz-stone00} found
$\alpha=0.71$, rather than $\alpha=0.63$ for this region, from a 330~MHz to 1.4~GHz spectral index
study. In fact, the higher frequency data points, where no absorption is present, should be the ones
that determine the spectral index. At low frequencies the original spectral index should be recovered, but with the amplitude
dimmed by a factor of $f$. Hence, we fixed the spectral index to $\alpha=0.71$ and fitted for the remaining parameters.
This results in a very high value of the optical depth, $\tau_{40,\mathrm{int}}=61$.
\begin{deluxetable}{c|ccccc}[h]
\tablecaption{Fits to region internal to the reverse shock}
\tablehead{
\colhead{Fit} & \colhead{$\alpha$} & \colhead{$f$} & \colhead{$\tau_{40, \mathrm{int}}$} & \colhead{red $\chi^2$} & \colhead{$\Delta \chi^2$} \\
}
\startdata
PL & 0.63 & $-$ & $-$ & 1.0 & - \\
best-fit int abs & 0.63 & 0.5* & $3\times10^{-8}$ & 1.1 & 0 \\
UL in int abs & 0.64 & 0.5* & 0.11 & 1.3 & 4 \\
Fixed $\alpha$ & 0.71* & 0.76 & 61.1 & 0.4 & 16 \\
\enddata
\tablecomments{The best-fit emission measure $EM$ assumes $T=100$~K and $Z=3$. Parameterised, it corresponds to
$EM = EM_\mathrm{table} \mathrm{\, pc \, cm}^{-6} \left( \frac{g_\mathrm{ff}(T=100, Z=3)}{g_\mathrm{ff}(T/100 \,\mathrm{K}, Z/3)} \right)
\times \left( \frac{Z}{3} \right) \left( \frac{T}{100 \,\mathrm{K}} \right)^{-3/2}$. The reduced $\chi^2$ to the
power-law fit is 1 by definition. Values indicated with * are fixed, not fitted for. The $\Delta \chi^2$ for the \lq Fixed $\alpha$' model
is with respect to the power-law model \lq PL', corresponding to 2 additional degrees of freedom. The upper limit \lq UL' was
derived as discussed in the text.
}
\label{tb:fits_int}
\end{deluxetable}
The results of our fits \cite[power-law, internal absorption, $2\sigma$ upper limit in internal absorption, and $\alpha$ fixed to the value
given by][]{katz-stone00} are tabulated in Table \ref{tb:fits_int}.
We also plotted the results for the power-law fit (in blue), the upper limit to the $EM$ (for $T=100$, $Z=3$, in green)
and the fixed $\alpha$ (in magenta; dashed lines indicate the unabsorbed flux density)
in Fig. \ref{fig:hba_reg}, bottom-left corner. Here we show the rescaled errors
rather than the original error bars.
From Table \ref{tb:fits_int}, fixing $\alpha=0.71$ and adding an absorbing component does seem to significantly improve the fit
(the fact that the reduced $\chi^2$ is equal to 0.4 would normally suggest overfitting, but in this case the reduced $\chi^2$ of the
power-law fit was artificially set to 1). The
required emission measure is unphysical (see discussion in section \ref{ush_mass}), but it is very sensitive to the choice
of $\alpha$ and $f$. We cannot confidently
claim a detection of unshocked ejecta in Tycho's SNR because of our limited knowledge of the errors in the flux densities, and because of the degeneracy of the parameters,
but our data are suggestive that there is indeed some unshocked material inside Tycho's reverse shock\footnote{
In the conference Supernova Remnants: An Odyssey in Space after Stellar Death II (Chania, Greece, June 2019)
we presented preliminary results
of a very high $EM$ detection from Tycho's unshocked ejecta (\url{http://snr2019.astro.noa.gr/wp-content/uploads/2019/08/D3-0940-Arias.pdf}).
This was due to us not noticing at first that the 330~MHz map had a very high flux density value,
which steepened the best-fit spectral index, and thus the required amount of absorbing material.}.
In order to better estimate the $EM$ due to internal absorption we need more high-frequency data points in the few GHz range that can unambiguously
determine the unabsorbed flux density and spectral index for this region. Additional observations in the few-hundred MHz range
would help better model the curvature due to the free-free absorption, and, if it were ever possible, observations at even lower frequencies
would further discriminate between the different models.
In this work we are relying on only the points at 327~MHz and 1382~MHz for
information about the unabsorbed flux density and spectrum,
and the 327~MHz map was rescaled (see discussion in section \ref{sec:archival}). Moreover, the behaviour of the LOFAR
in-band spectrum seems to push the data points towards a steeper spectral index.
For this reason, observations that increase the leverage arm in frequency would allow us to better constrain the amount of $EM$ due to
unshocked material.
Having said that,
the integrated flux densities as measured by LOFAR
are in line with what we expect from the literature.
There are some regions where the maps can have artefacts,
but the flux densities that we are considering in this section are taken from the yellow region in
Fig. \ref{fig:hba_reg}, which is much larger than the resolution
of any given map.
Moreover, the LBA and the HBA data both show the effect of absorption, even though
the two LOFAR antennas are effectively different instruments, and the data were reduced with two independent pipelines.
\section{Discussion}
\subsection{Spectral index}
\cite{katz-stone00} carried out a study of Tycho's spectral index at low radio frequencies (330~MHz and 1.5~GHz), and found that Tycho
has localised spectral variations with regions as flat as $\alpha=0.44$ and as steep as $\alpha=0.72$. Our best-fit spectral index map
(middle panel in Fig. \ref{fig:results}) shows values within this range, and, in a few cases, slightly higher values, $\alpha \lesssim 0.8$.
\cite{duin75} reported a significant steepening of the spectrum near the centre of the SNR and suggested that particles
near the boundary might be accelerated with a flatter spectrum, but \cite{klein79} did not find steepening in their observations at 10 GHz.
We do not find a steepening coincident with the centre of the remnant, but rather we find the
spectrum of the western and north-western region of the remnant to be
steeper than the rest.
The question of whether Tycho has a curved spectrum has been discussed in the literature.
\cite{roger73} modelled Tycho's integrated radio spectrum with two power-law components (which results in a locally concave spectrum),
\cite{reynolds92} modelled it with a non-linear shock model of first-order Fermi acceleration and found agreement with a concave-up
synchrotron spectrum, whereas \cite{vinyaikin87} found that a single power-law can describe the radio spectrum at these frequencies.
As we discussed in section \ref{sec:flux}, the LOFAR data points do show a steeper spectral behaviour than expected, although the
in-band response of the LOFAR LBA has not been systematically analysed, and is not yet reliable.
\subsection{External absorption}
\label{sec:ext_abs}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{EM_temps-eps-converted-to.pdf}
\caption{Maps of external emission measure $EM_\mathrm{ISM}$ made from the measured optical depth $\tau_{40,\mathrm{ISM}}$
(right-hand side map in Fig. \ref{fig:results}) combined with equation \ref{ff_tau}, assuming $Z=1$.
We plot the results for three temperatures,
10~K, 100~K, and 10,000~K, relevant for our discussions of molecular clouds, the diffuse, infrared-emitting medium around Tycho,
and the ISM warm ionised gas, respectively.
The units of
$EM_{\mathrm{ISM}}$ are $\mathrm{\, pc \, cm}^{-6}$.
\label{fig:Ems}}
\end{figure*}
In order to convert the value of optical depth in Fig. \ref{fig:results} into a quantity that allows us to derive physical properties of the
gas we use equation \ref{ff_tau}, from which we obtain an emission measure value, $EM_{\mathrm{ISM}}$.
The emission measure depends on the temperature and ionisation state of the plasma. The ISM has a wide range of temperatures,
from $\sim10$~K in molecular clouds to $\sim10,000$~K in the warm ionised medium \citep{draine11}. We therefore
provide three emission measure maps in Fig. \ref{fig:Ems}, assuming $T=10$~K, $T=100$~K, and $T=10,000$~K, to aid our discussion
in the current section. Since the ISM is primarily composed of hydrogen, for all three maps we assume $Z=1$.
The region to the north-east with the high emission measure value (the region in green in Fig. \ref{fig:hba_reg})
seems to match the position of a molecular cloud
found in \cite{lee04} and \cite{zhou16}, seen most clearly in Fig. 1 of the latter paper at velocities between $-62$~km~s$^{-1}$ and $-66$~km~s$^{-1}$.
At these velocities there are also multiple structures that coincide in position with the rim of the source, which our fit also identifies
as having free-free absorption. The region in the north-east of the remnant where we find the
highest values of the $EM_{\mathrm{ISM}}$ also coincides with the region of high H I absorption seen in \cite{reynoso99b}.
The region in the centre of Tycho has some morphological coincidence with the molecular structure seen at $-56$~km~s$^{-1}$ in
\cite{zhou16}, although the similarity is not striking, and there
does not seem to be any associated neutral hydrogen structure.
Our method traces ionised
material, which one does not expect in molecular clouds but could be present at their outer boundary, so it is not necessary that
our measured $EM_{\mathrm{ISM}}$ matches the structure of molecular/neutral material in detail.
The scale and distance of the ionised features cannot be determined straightforwardly from these observations.
Tycho is the background synchrotron source, so the ionised material must be in front
of it, but in principle it could be local to Tycho, unrelated ISM, or a combination of the two (although it would be a big coincidence if
one of the two did not have a dominant effect).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{cartoon.pdf}
\caption{Cartoon showing the geometry assumed for the discussion in section \ref{sec:ext_abs}.
Tycho is surrounded by a diffuse cavity of length $l_\mathrm{cav}$, and the molecular clouds are in a ring-like
shape around it.
\label{fig:cartoon}}
\end{figure*}
We know from \cite{zhou16} that Tycho is likely inside an expanding wind bubble that is sweeping up molecular material.
We depict the structure we assume for our analysis in a cartoon in Fig. \ref{fig:cartoon}. The remnant is surrounded by,
but its shock is still not interacting with, molecular clouds. This means that there is a cavity of thickness $l$ (and radius $R_\mathrm{SNR}+l$)
of low-density material \cite[$n_\mathrm{H}=0.1-0.2$~cm$^{-3}$,][]{williams13}, surrounded by dense molecular material with
an average density of $10^2-10^3$~cm$^{-3}$ \citep{zhou16}.
We will consider three possibilities: (1) that the ionised material we see in Fig. \ref{fig:results}, right-hand side, is due to ionised material
along the line-of-sight, unrelated to Tycho; (2) that it is the low-density cavity material that is ionised; and (3) that the molecular clouds
are responsible for the free-free absorption.
In section \ref{sec:source} we briefly mention possible ionising sources.
\vspace{0.2cm}
\noindent
\textbf{1. Ionised ISM}
\cite{hwang02} tabulated the $N_\mathrm{H}$ as measured from \textit{Chandra} data, towards Tycho, and found values ranging from
$N_\mathrm{H}=(5.3-7.5)\times10^{21}$~cm$^{-2}$, depending on the model employed.
For the region in green in Fig. \ref{fig:hba_reg} the optical depth at 40~MHz is $\tau_{40, \mathrm{ISM}}=0.65$,
which corresponds to an
emission measure of $EM=0.30$~pc~cm$^{-6}$ for $T=10$~K, and $EM=2469$~pc~cm$^{-6}$ for $T=10,000$~K. Since $EM = n_\mathrm{e}^2 l$,
$N_\mathrm{H} = n_\mathrm{H} l$, and $n_\mathrm{e} = \chi_\mathrm{e} n_\mathrm{H}$ (where $\chi_\mathrm{e}$ is the ionisation fraction,
$0\leq\chi_\mathrm{e}\leq1$),
then, using $l=d$, the distance to Tycho, we find that the required
ionisation fraction of the intervening ISM is $\chi_\mathrm{e} = \frac{\sqrt{EM \, l}}{N_\mathrm{H}} \sim 0.015 \sqrt{\frac{l}{2.5\,\mathrm{kpc}}}$
for $T=10$~K, or alternatively, $\chi_\mathrm{e} \sim 1.35 \sqrt{\frac{l}{2.5\,\mathrm{kpc}}}$ for $T=10,000$~K.
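As a rough numerical cross-check of these ionisation fractions (a sketch; the column density $N_\mathrm{H}=5.7\times10^{21}$~cm$^{-2}$ is an assumed representative value within the \cite{hwang02} range):

```python
import math

PC_CM = 3.0857e18   # one parsec in cm
N_H = 5.7e21        # assumed representative column density (cm^-2)
l = 2500.0          # path length: distance to Tycho (pc)

def chi_e(em):
    """Ionisation fraction chi_e = sqrt(EM * l) / N_H, with EM in pc cm^-6."""
    n_e = math.sqrt(em / l)     # mean electron density (cm^-3)
    n_h = N_H / (l * PC_CM)     # mean hydrogen density (cm^-3)
    return n_e / n_h

print(chi_e(0.30))     # T = 10 K case; the text quotes ~0.015
print(chi_e(2469.0))   # T = 10,000 K case; the text quotes ~1.35
```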
The 10,000~K assumption for the diffuse ISM gas is more reasonable than the 10~K \citep{draine11}, although, of course, this gas does not extend
evenly along the line-of-sight to Tycho, but is likely in a patchy distribution (which would lower $\chi_\mathrm{e}$ to a more
reasonable value). We do not know the relative depth of this warm ionised
gas along the line-of-sight to Tycho, so unfortunately we cannot constrain $\chi_\mathrm{e}$ for this ISM scenario.
Another point to note is that $\tau_{40, \mathrm{ISM}}=0.65$ corresponds to
an optical depth of $\tau_{30.9}=1.2$
at 30.9~MHz, although this is for a very small area (3.6~arcmin$^2$). \cite{kassim89} studied optical depths towards 15 Galactic SNRs,
and found only one source with $\tau_{30.9}>1$.
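The conversion between the two frequencies presumably follows the standard free-free scaling $\tau_{\nu}\propto\nu^{-2.1}$ (an assumption here); a quick check, with the small offset from the quoted value attributable to rounding and the Gaunt-factor approximation:

```python
tau_40 = 0.65                              # measured optical depth at 40 MHz
tau_30_9 = tau_40 * (40.0 / 30.9) ** 2.1   # free-free scaling tau ~ nu^-2.1
print(tau_30_9)                            # ~1.1, i.e. the 30.9 MHz depth exceeds 1
```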
The integrated radio spectrum of Tycho (Fig. \ref{fig:radio_spectrum}) shows no indication of free-free absorption from the ISM kicking in at frequencies
lower than 100 MHz. There is a slight drop visible in the spectrum from LOFAR narrow band maps (Fig. \ref{fig:lofar_spectrum}), although this relies
only on the data point at 40~MHz.
For the integrated spectrum of Tycho's SNR we measure a best-fit $\tau_{30.9}=0.1$, well on the low side of the values measured by \cite{kassim89}.
The relatively high value of the optical depth in the region in green in Fig. \ref{fig:hba_reg}, and its small area suggest that
this is a small clump of ionised material. We cannot know if the clump is relatively close to the source or somewhere along the line-of-sight.
Finally, the low-frequency absorption is only seen in a ring-like structure in the rim of the SNR and in two clumpy regions in the SNR
centre. In the remaining regions in the interior we do not find any detectable absorption. It is unlikely, though, that the foreground
ISM gas has the shape we see over Tycho, with a clear ring and a mostly empty interior. The regular morphology seen in the maps in
Fig. \ref{fig:Ems} does not favour the ionised ISM scenario as the dominant source of absorption.
\vspace{0.2cm}
\noindent
\textbf{2. Ionised diffuse cavity surrounding Tycho}
Consider that it is the cavity surrounding Tycho that is responsible for the ionisation we see at LOFAR frequencies.
The size of the ionised cavity may influence the distribution of the foreground absorption.
As shown in Fig. \ref{fig:cartoon}, the depth of ionised material $l'$ is a function of the projected radius $r$ ($r=0$ at the SNR centre, $r=R$ at the SNR boundary),
the radius of the SNR $R$, and the thickness of the cavity $l$ (at the rim, $l' = \sqrt{l^2+2lR}$), resulting in:
\begin{equation}
l'(R)\approx \begin{cases}
\sqrt{2Rl}, & \text{if $l \ll R$ }\\
l, & \text{if $l \gg R$}
\end{cases}
\end{equation}
\begin{equation}
l'(0)=l.
\end{equation}
If the cavity size were much larger than the SNR radius, we would see a uniform absorption distribution, since $l'(r)\approx l$ for all $r$. The ring-like absorption distribution therefore suggests that the cavity
is thin compared to the SNR radius.
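For completeness, the rim value of $l'$ follows from simple geometry: a line of sight at projected radius $r=R$ traverses the shell between radii $R$ and $R+l$, so that
\begin{equation*}
l'(R)=\sqrt{\left( R+l\right) ^{2}-R^{2}}=\sqrt{l^{2}+2Rl},
\end{equation*}
which reduces to $\sqrt{2Rl}$ for $l\ll R$ and to $l$ for $l\gg R$, as in the limiting cases above.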
\cite{williams13} found that
the ISM density around Tycho is only $n_\mathrm{H}=0.1-0.2$~cm$^{-3}$, and that there is dust with temperature $T=100$~K.
The optical depth value we report for the rim of Tycho (the region in red in Fig. \ref{fig:hba_reg}), $\tau_{40,\mathrm{ISM}}=0.29$, assuming $Z=1$ and
$T=100$~K, corresponds to an emission measure of
$EM=2.1$~pc~cm$^{-6}=n_\mathrm{e}^2 \,l_\mathrm{cav}$, where $l_\mathrm{cav}$ is the size of the cavity.
This implies $n_\mathrm{e}=1.5\sqrt{\frac{l_\mathrm{cav}}{1~\mathrm{pc}}}~\mathrm{cm}^{-3}$.
Recall that $n_\mathrm{e} = \chi_\mathrm{e} n_\mathrm{H}$.
\cite{woods17} measured the ionisation fraction of the ambient hydrogen ahead of the forward shock to be
$ \chi_\mathrm{e} < 0.2$ (the ambient hydrogen is more than 80\% neutral).
They obtained the ionisation fraction for the atomic gas, which has a higher density; they used $ n_\mathrm{H}=1$~cm$^{-3}$.
Setting $\chi_\mathrm{e} = 0.2 = \frac{n_\mathrm{e}}{ n_\mathrm{H}}$
means that the cavity must be very small, $l_\mathrm{cav}<0.02$~pc.
As mentioned above, a thin cavity is supported by the geometry of the external absorption map, which appears to be limb-brightened.
However, this is a very restrictive value, requiring that Tycho be almost but not quite interacting with the
molecular cloud, and not just in one place but around its entire perimeter. This is very unlikely.
\vspace{0.2cm}
\noindent
\textbf{3. Ionised dense molecular environment surrounding Tycho}
In this section we consider whether the ionised structure
is related to the
molecular cloud found by \cite{lee04} and discussed in \cite{zhou16}. The morphological coincidence of the molecular cloud in the north-east with
the region of highest absorption is suggestive of such a relation.
\cite{zhou16} tabulate the molecular hydrogen column density $N_{\mathrm{H}_2}$
for several positions
and find values around $7\times10^{20}$~cm$^{-2}$ in the area where we measure $\tau_{40,\mathrm{ISM}}=0.65$,
implying
$EM=0.30$~pc~cm$^{-6}$ (here the conditions
$Z=1$, $T=10$~K do apply). Since $EM = n_\mathrm{e}^2 l$,
$N_\mathrm{H_2} = n_\mathrm{H_2} l$, and $\chi_\mathrm{e} = \frac{n_\mathrm{e}}{n_\mathrm{H_2}}$, the value
$\frac{EM}{N_{\mathrm{H}_2}} = \chi_\mathrm{e} n_\mathrm{e} = 4.3\times10^{-4}~\mathrm{cm}^{-3}$ is independent of
the size of the molecular cloud.
If we take the size of the molecular clouds to be of the order of Tycho \cite[ $l_\mathrm{MC}\sim5$~pc , see Fig. 1, bottom-right in][]{zhou16},
then $n_\mathrm{e} = 0.25 \sqrt{\frac{l_\mathrm{MC}}{5~\mathrm{pc}}}~\mathrm{cm}^{-3}$, which corresponds to
$\chi_\mathrm{e} = 2\times10^{-3} \left( \frac{l_\mathrm{MC}}{5~\mathrm{pc}} \right)^{-1/2}$.
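As a sanity check on the electron density implied by these numbers (a sketch; only the measured $EM$ and the assumed cloud depth enter):

```python
import math

EM = 0.30    # emission measure (pc cm^-6) for Z = 1, T = 10 K
l_MC = 5.0   # assumed molecular-cloud depth (pc), of the order of Tycho's size

n_e = math.sqrt(EM / l_MC)   # electron density from EM = n_e^2 * l
print(n_e)                   # ~0.25 cm^-3, matching the value in the text
```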
Generally, dense molecular cores have $\chi_\mathrm{e} \sim 10^{-8} - 10^{-6}$ \citep{caselli98}, while translucent and diffuse
molecular gas has typical $\chi_\mathrm{e} \lesssim 10^{-4}$ \cite[][figure 1]{snow06}. A value of $\chi_\mathrm{e} \sim 10^{-3}$ therefore requires
an external ionising source.
\vspace{0.5cm}
\noindent
It is not possible to tell directly from our observations of free-free absorption whether the ionised absorbing component
is in the environs of Tycho or far in the ISM along the line-of-sight.
However, the fact that the absorption occurs where the remnant is
brighter and expanding into a higher density region \citep{reynoso99b, williams13} is suggestive to us of a local effect,
as is the rimmed geometry.
If the thin cavity surrounding Tycho and separating the SNR shock from the molecular ring were responsible
for the absorption, then the cavity would have to be very thin but at the same time the shock could not have reached the
molecular material \textit{anywhere} along its boundary ---a contrived geometry.
The high neutral
fractions inferred by \cite{woods17}, the clear presence of Balmer shocks \citep{ghavamian00}, and the morphological coincidence
with the molecular cloud in the north-east all point towards the molecular material being associated with the absorption.
Finally, the bubble-like distribution of the molecular gas provides a natural explanation for the
rimmed absorption morphology. We conclude that the absorption is most likely
due to the presence of over-ionised molecular clouds.
\subsection{What mechanism is responsible for the ionisation of Tycho's surroundings?}
\label{sec:source}
A SIMBAD query towards the direction of Tycho gives no OB associations or bright stars that could be responsible for the
observed ionisation:
Tycho itself is the only likely ionising source towards this line-of-sight. The sources of ionisation could be the X-ray emission from
Tycho, the cosmic rays accelerated in the SNR, or perhaps the ionising radiation emitted by the supernova progenitor or the event
itself. A full discussion of the different ionisation scenarios requires a detailed treatment of ionisation and recombination in the modelling,
and is beyond the scope of this paper.
\iffalse
If the ionised material is in the neighbourhood of Tycho then we should ask what is the ionising source. Candidates
are the X-ray radiation from the SNR, cosmic rays from the SNR, and Tycho's progenitor white dwarf.
A full discussion of the different ionisation scenarios requires detailed modelling and is beyond the scope of this paper.
Here we limit ourselves to pointing out some straightforward consequences of each ionisation mechanism, namely, that if the ionisation
comes from the molecular clouds, then the X-ray photons are the preferred ionising source, and that if the ionisation comes from the
diffuse cavity surrounding Tycho, then the cosmic rays are preferred.
\vspace{0.2cm}
\noindent
\textbf{The X-ray photons from Tycho}
Tycho is a bright X-ray source. X-ray photons can penetrate large column densities and photoionise nearby molecular
clouds.
\cite{maloney96} evaluate the cross-sections for X-ray photons of different energies, finding that for photons with energies
between 0.5~keV and 7~keV the cross section $\sigma_0 = 2.6 \times 10^{-22}~\mathrm{cm}^2$. For a density of 1~cm$^{-3}$
this corresponds to a mean free path $\lambda_\mathrm{MFP} \approx \frac{1}{n \sigma} \sim 1$~kpc.
However, for a density of 10$^3$~cm$^{-3}$, then $\lambda_\mathrm{MFP} \sim 1$~pc, which is of the order of the
size of the molecular clouds (approximately the size of Tycho, 5~pc).
This means that the X-ray photons would go through the cavity of $l_\mathrm{cav}<0.02$~pc with practically no interactions,
whereas they could interact with the particles present in the molecular cloud. This
implies that if the ionised material is the diffuse cavity and not the molecular clouds, then the X-ray emission is not responsible
for the ionisation.
\vspace{0.2cm}
\noindent
\textbf{Cosmic rays from Tycho}
Cosmic rays diffuse away from the shock front as $l_\mathrm{diff} = \sqrt{2Dt}$, where the diffusion coefficient
is $D=\frac{\eta E c}{3 e B}$, where $\eta$ parameterises the deviation from Bohm diffusion \citep{vink12}.
For a source that, like Tycho, is 450 years old,
\begin{equation}
l_\mathrm{diff} = 3.1 \times 10^{-3} \eta^{1/2} \left( \frac{E}{1 \mathrm{GeV}} \right)^{1/2} \left( \frac{B}{10 \mu\mathrm{G}} \right)^{-1/2} \left( \frac{t}{450~\mathrm{yr}} \right)^{1/2} \mathrm{pc}.
\end{equation}
$\eta$ cannot be very large for young SNRs, as evidenced by their X-ray synchrotron emission being confined to thin rims \citep{vink12}.
This very thin diffusion length scale of order $10^{-3}$~pc could not account for the over-ionisation of the molecular clouds, although it is
compatible with the length we calculated in section \ref{sec:ext_abs} for the absorbing cavity in scenario 2 ($l_\mathrm{cav}<0.02$~pc).
\fi
\subsection{Internal absorption and mass in the unshocked ejecta}
\label{ush_mass}
The amount of mass in ionised material internal to the SNR reverse shock is given by \cite[see][]{arias18a}:
\begin{equation}
M = A S l^{1/2} m_\mathrm{p} \frac{1}{Z} \sqrt{EM},
\label{eq_mass}
\end{equation}
where $A$ is the mass number of the ions, $S$ is the area of the region for which we measure the absorption, $l$ is the depth of the absorbing material, $m_\mathrm{p}$ is the
mass of the proton, $Z$ is the number of charges, and $EM$ is the emission measure.
Making certain assumptions about these values, one can derive a value for the mass in unshocked material from our measured
optical depth.
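For clarity, equation \ref{eq_mass} can be recovered by treating the absorbing region as a uniform slab of volume $Sl$ filled with ions of density $n_\mathrm{i}=n_\mathrm{e}/Z$, where $n_\mathrm{e}=\sqrt{EM/l}$:
\begin{equation*}
M=A\,m_\mathrm{p}\,n_\mathrm{i}\,S\,l=\frac{A\,m_\mathrm{p}\,S\,l}{Z}\sqrt{\frac{EM}{l}}=A\,S\,l^{1/2}\,m_\mathrm{p}\,\frac{1}{Z}\sqrt{EM}.
\end{equation*}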
The easiest parameter to estimate is the mass number of the ions $A$. Tycho is the result of a Type Ia explosion; out of the
$\sim1.4$~$M_{\odot}$\ of ejecta it produced, $0.5-0.8$~$M_{\odot}$\ is expected to be iron \citep{badenes06}.
In a spectroscopic analysis of ASCA data \cite{hwang98} noted that iron is in fact the most recently ionised element,
and so it is likely to compose the bulk of the unshocked material.
\cite{hayato10} also found segregation of Fe in the inner ejecta from a study of the expansion velocities of the X-ray emitting material.
Moreover, the X-ray emission from iron in Tycho
is not as prominent as in other type Ia SNRs \cite[e.g. Kepler, ][]{reynolds07}, suggesting that some of it is not visible in the X-rays yet.
For these reasons we take $A=56$, corresponding to Fe. We take $Z=3$, for three-times ionised Fe.
$S$ is the surface area of the absorbing region (the area in yellow in Fig. \ref{fig:hba_reg}). We do not know the thickness
of the absorbing slab $l$, which is actually critical for the mass determination, because we do not have a way of probing the
three-dimensional structure of the absorbing material. For a homogeneous distribution of material within the sphere of the reverse
shock, the average depth is $l=\frac{4}{3}R$ \cite[where $R$, the radius of the reverse shock, is 2.25~pc for a distance of 2.5~kpc, ][]{tian11}.
Finally, the value of the $EM$ depends on $Z$ and the temperature $T$. We do not know what the temperature conditions in
the unshocked ejecta of Tycho are; an accurate determination would require infrared observations that could measure the ratios between
different forbidden lines of the ionised material. To our knowledge, the only time the temperature from the unshocked ejecta of a
SNR has been measured is in the case of Cas A, whose unshocked ejecta has a temperature of 100~K \citep{raymond18}.
Although it is not clear that the radiation from Tycho's SNR could maintain its internal material heated to 100~K, we will take this to
be the value in our mass estimate.
The $EM$ values in Table \ref{tb:fits_int} correspond to the following mass estimates:
\begin{equation}
\begin{split}
M = & 6.5 \pm 2.1 \, M_{\odot}\, \left(\frac{A}{56}\right) \left(\frac{l}{3.0 \,\rm{pc}}\right)^{1/2} \left(\frac{Z}{3}\right)^{-3/2} \\
& \left(\frac{T}{100~\mathrm{K}}\right)^{3/4} \times \sqrt{\frac{g_{\mathrm{ff}}(T=100 \, \mathrm{K},Z=3)}{g_{\mathrm{ff}}(T,Z)}},
\end{split}
\label{eqn_mass}
\end{equation}
in the case of the upper limit with $EM=0.33$~pc~cm$^{-6}$, and in the case of $EM=179$~pc~cm$^{-6}$, $M=146 \pm 39$~$M_{\odot}$,
with the same parametrisation.
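As a consistency check, the two quoted masses should scale as $\sqrt{EM}$ with all other parameters held fixed (a sketch; the small offset from 146~$M_{\odot}$ reflects rounding of the 6.5~$M_{\odot}$ figure):

```python
import math

M_upper = 6.5                   # mass (M_sun) corresponding to EM = 0.33 pc cm^-6
EM_upper, EM_fit = 0.33, 179.0

M_fit = M_upper * math.sqrt(EM_fit / EM_upper)   # M scales as sqrt(EM)
print(M_fit)                    # ~151 M_sun, consistent with the quoted 146 +/- 39
```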
\subsection{What are the conditions and structure of the ejecta internal to Tycho's reverse shock?}
Our upper limit above is not useful, and
the mass estimate for the $\alpha=0.71$ fit is completely unreasonable, since the total amount of ejecta resulting from the
explosion of Tycho's progenitor was $\sim1.4$~$M_{\odot}$. As we mention above, a determination of the $EM$ depends very much on the
expected flux if no absorption were present, but if there is indeed absorption noticeable at LOFAR HBA frequencies ($\sim150$~MHz), then the
high mass estimate value implies that
the conditions we assumed in the section above
do not describe the actual physical conditions internal to the SNR reverse shock.
Lowering the temperature or invoking a higher ionisation state alone is not sufficient to
arrive at a meaningful mass estimate. A further way to reduce the mass estimate for a given $EM_\mathrm{int}$ is
if not all unshocked material is iron, but
lighter elements are also present. \cite{decourchelle17} notes that the comparison of iron-L complex and Si-K line images indicates good
mixing of the Si and Fe layers synthesised in the supernova. The mass number of Si is half of that of Fe, so if silicon is
present, the mass estimate could be significantly reduced.
The effects of temperature, ionisation conditions, and composition can be important if combined, but the single effect that
can have the largest contribution to the high absorption value is the degree of clumping in the unshocked material.
The estimate in equation \ref{eqn_mass} assumes that the ejecta are distributed homogeneously within the sphere of the reverse shock.
This is what one expects for an ejecta density profile with a flat core and an exponential outer region \citep{chevalier82}, if the reverse shock has already reached
the core.
\cite{sato19} analysed \textit{Chandra} observations of Tycho and found from its genus statistic that Tycho's X-ray ejecta structure
strongly indicates a skewed non-Gaussian distribution of the ejecta clumps, possibly from initially clumped ejecta.
The radioactive decay of elements synthesised in the explosion could also cause the ejecta to have a foamy distribution,
as is the case for Cas A \citep{milisavljevic15}.
If the unshocked ejecta in Tycho are heavily clumped, it is possible to see absorption in the LOFAR HBA even for modest amounts of
unshocked mass.
\section{Conclusions}
In this work we have mapped Tycho's SNR with the LOFAR Low-Band and High-Band Antennae, centred at 58~MHz and 143~MHz,
respectively. These are the lowest-frequency resolved observations of this source to date, even though the angular resolution of our LBA maps
is modest (41\arcsec). We compared these maps to higher frequency VLA observations at 330~MHz and 1400~MHz \citep{katz-stone00, williams16},
and found that in some regions the LOFAR flux is lower than expected for an
unabsorbed synchrotron source.
We identify this effect as low-frequency free-free absorption due to foreground free electrons
absorbing the background synchrotron radiation from Tycho.
It is unlikely, from the observed geometry, that the low-frequency absorption is due to line-of-sight material far away from Tycho,
but rather it must be in the environment of the SNR.
There are two regions that could be responsible for the ionisation: the diffuse, infrared-emitting region immediately
surrounding Tycho, or its neighbouring molecular clouds. If the former is true, and the absorption is due to an ionised cavity
surrounding Tycho, then this cavity must be very thin ($<0.02$~pc), so as to not contradict earlier results on the neutral fraction ahead of the
shock. Alternatively, if the molecular clouds are responsible for the absorption, then the implied ionisation fraction requires an
external ionising source. Tycho itself is the only candidate, through its X-ray emission, its cosmic rays, or possibly from the ionising
flux of its progenitor white dwarf or the supernova explosion.
Finally, we tried to measure the free-free absorption
in the region internal to the SNR reverse shock from its unshocked ejecta. However, we are limited by our knowledge of the
unabsorbed spectral behaviour of the source at these frequencies: the amount of absorption we measure depends on what is the spectral
index in the region, which is poorly constrained due to systematic errors and an incomplete knowledge of the spectral behaviour at high frequencies.
According to our best-fit scenario, the spectral index in the region internal to the reverse shock is relatively
high and a copious amount of free-free absorption is required to explain the LOFAR flux densities.
If real, we attribute the absorption to cold, ionised, unshocked stellar ejecta inside the SNR reverse shock free-free absorbing the synchrotron
emission from the back side of the shell. In order to account for the high value of internal absorption that we measure, we expect the ejecta
to be colder than 100~K, fairly highly ionised, and heavily clumped.
Radio observations in the few GHz range could determine the unabsorbed, resolved spectral index of the source, and observations in the
$200-1000$~MHz range would allow us to better model the parameters responsible for the absorption, which result in a characteristic
spectrum with curvature at these frequencies. Finally, hyperfine structure infrared line observations of these clumps would be necessary
to better understand their temperature and composition, both critical in determining the mass in unshocked ejecta.
\acknowledgments
We thank N. Kassim for the 330~MHz VLA image, and B. Williams for the 1.4~GHz VLA image.
This paper is based (in part) on data obtained with the International LOFAR Telescope (ILT) under project code LC10\_011. LOFAR \citep{vanhaarlem13} is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universit\'e d'Orl\'eans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK.
We acknowledge the use of archival data from the National Radio Astronomy Observatory's Karl G. Jansky Very Large Array (VLA). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\software{LOFAR Low-Frequency Pipeline \citep{degasperin19}, wsclean \citep{offringa14}, Pre-Facet Calibration Pipeline \citep{vanweeren16}, ddf-pipeline
\cite[v2.2;][]{shimwell19}, LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python \citep{newville14}, APLpy: Astronomical Plotting Library in Python \citep{robitaille12}.}
\vspace{5mm}
\facilities{The LOw Frequency ARray (LOFAR), the Karl G. Jansky Very Large Array (VLA).}
\section{Introduction}
Assume that $p$ is a fixed odd prime number. Throughout this paper,
$\mathbb{Z}$, $\mathbb{Z}_{p}$, $\mathbb{Q}_{p}$ and $\mathbb{C}_{p}$ will denote
the ring of rational integers, the ring of $p$-adic integers, the field of $p$-adic rational
numbers and the completion of the algebraic closure of $\mathbb{Q}_{p}$, respectively.
Also, we denote $\mathbb{N}^{\ast }=\mathbb{N}\cup \left\{ 0\right\} $ and $\exp \left( x\right) =e^{x}$.
Let $v_{p}:\mathbb{C}_{p}\rightarrow \mathbb{Q}\cup \left\{ \infty \right\} $
(where $\mathbb{Q}$ is the field of rational numbers) denote the $p$-adic
valuation of $\mathbb{C}_{p}$, normalized so that $v_{p}\left( p\right) =1$.
The absolute value on $\mathbb{C}_{p}$ will be denoted by $\left\vert \cdot \right\vert _{p}$,
with $\left\vert x\right\vert _{p}=p^{-v_{p}\left( x\right) }$ for $x\in \mathbb{C}_{p}$.
When one talks of $q$-extensions, $q$ is considered in many ways,
e.g. as an indeterminate, a complex number $q\in \mathbb{C}$, or a $p$-adic number $q\in \mathbb{C}_{p}$.
If $q\in \mathbb{C}$, we assume that $\left\vert q\right\vert <1$. If $q\in \mathbb{C}_{p}$,
we assume $\left\vert 1-q\right\vert _{p}<p^{-\frac{1}{p-1}}$, so
that $q^{x}=\exp \left( x\log q\right) $ for $\left\vert x\right\vert _{p}\leq 1$.
We use the following notation:
\begin{equation}
\left[ x\right] _{q}=\frac{1-q^{x}}{1-q},\text{ \ }\left[ x\right] _{-q}=\frac{1-\left( -q\right) ^{x}}{1+q},  \label{equation 1}
\end{equation}
where $\lim_{q\rightarrow 1}\left[ x\right] _{q}=x;$ cf. [1-24].
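As a small numerical illustration of the limit $\lim_{q\rightarrow 1}\left[ x\right] _{q}=x$ (a sketch over the real numbers, not the $p$-adic setting used below):

```python
def q_number(x, q):
    """The q-analogue [x]_q = (1 - q**x) / (1 - q)."""
    return (1 - q ** x) / (1 - q)

# As q -> 1, [x]_q approaches x (here x = 3)
for q in (0.5, 0.9, 0.999):
    print(q, q_number(3, q))   # values tend to 3
```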
For a fixed positive integer $d$ with $\left( d,p\right) =1,$ we set
\begin{eqnarray*}
X &=&X_{d}=\lim_{\overleftarrow{N}}\mathbb{Z}/dp^{N}\mathbb{Z}, \\
X^{\ast } &=&\underset{\underset{\left( a,p\right) =1}{0<a<dp}}{\cup }\left( a+dp\mathbb{Z}_{p}\right)
\end{eqnarray*}
and
\begin{equation*}
a+dp^{N}\mathbb{Z}_{p}=\left\{ x\in X\mid x\equiv a\left( \func{mod}dp^{N}\right) \right\} ,
\end{equation*}
where $a\in \mathbb{Z}$ satisfies the condition $0\leq a<dp^{N}.$
It is known that
\begin{equation*}
\mu _{q}\left( x+p^{N}\mathbb{Z}_{p}\right) =\frac{q^{x}}{\left[ p^{N}\right] _{q}}
\end{equation*}
is a distribution on $X$ for $q\in \mathbb{C}_{p}$ with $\left\vert 1-q\right\vert _{p}\leq 1.$
Let $UD\left( \mathbb{Z}_{p}\right) $ be the set of uniformly differentiable functions on
$\mathbb{Z}_{p}.$ We say that $f$ is uniformly differentiable at a point
$a\in \mathbb{Z}_{p}$ if the difference quotient
\begin{equation*}
F_{f}\left( x,y\right) =\frac{f\left( x\right) -f\left( y\right) }{x-y}
\end{equation*}
has a limit $f^{\prime }\left( a\right) $ as $\left( x,y\right) \rightarrow \left( a,a\right) $, and we
denote this by $f\in UD\left( \mathbb{Z}_{p}\right) .$ The $p$-adic $q$-integral of a function $f\in UD\left( \mathbb{Z}_{p}\right) $ is defined by
\begin{equation}
I_{q}\left( f\right) =\int_{\mathbb{Z}_{p}}f\left( x\right) d\mu _{q}\left( x\right) =\lim_{N\rightarrow \infty }\frac{1}{\left[ p^{N}\right] _{q}}\sum_{x=0}^{p^{N}-1}f\left( x\right) q^{x}.
\label{equation 2}
\end{equation}
The bosonic integral is considered by Kim as the bosonic limit $q\rightarrow 1$,
that is, $I_{1}\left( f\right) =\lim_{q\rightarrow 1}I_{q}\left( f\right) .$
Similarly, the $p$-adic fermionic integral on $\mathbb{Z}_{p}$ is defined by Kim as follows:
\begin{equation*}
I_{-q}\left( f\right) =\lim_{q\rightarrow -q}I_{q}\left( f\right) =\int_{\mathbb{Z}_{p}}f\left( x\right) d\mu _{-q}\left( x\right) .
\end{equation*}
Letting $q\rightarrow 1$, we obtain the $p$-adic fermionic integral on $\mathbb{Z}_{p}$ as follows:
\begin{equation*}
I_{-1}\left( f\right) =\lim_{q\rightarrow -1}I_{q}\left( f\right)
=\lim_{N\rightarrow \infty }\sum_{x=0}^{p^{N}-1}f\left( x\right) \left(
-1\right) ^{x}.
\end{equation*}
The Stirling asymptotic series is defined by
\begin{equation}
\log \left( \frac{\Gamma \left( x+1\right) }{\sqrt{2\pi }}\right) =\left( x+\frac{1}{2}\right) \log x+\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n\left( n+1\right) }\frac{B_{n+1}}{x^{n}}-x,  \label{equation 11}
\end{equation}
where $B_{n}$ are the familiar $n$-th Bernoulli numbers, cf. [6, 8, 9, 25].
Recently, Araci et al. defined the modified $q$-Genocchi numbers and polynomials with weight $\alpha $ and $\beta $ in [4, 5] by means of the generating function
\begin{equation}
\sum_{n=0}^{\infty }g_{n,q}^{\left( \alpha ,\beta \right) }\left( x\right) \frac{t^{n}}{n!}=t\int_{\mathbb{Z}_{p}}q^{-\beta \xi }e^{\left[ x+\xi \right] _{q^{\alpha }}t}d\mu _{-q^{\beta }}\left( \xi \right) .  \label{equation 3}
\end{equation}
From the above, we easily get Witt's formula for the modified $q$-Genocchi numbers and polynomials with weight $\alpha $ and $\beta $ as follows:
\begin{equation}
\frac{g_{n+1,q}^{\left( \alpha ,\beta \right) }\left( x\right) }{n+1}=\int_{\mathbb{Z}_{p}}q^{-\beta \xi }\left[ x+\xi \right] _{q^{\alpha }}^{n}d\mu _{-q^{\beta }}\left( \xi \right) ,  \label{equation 12}
\end{equation}
where $g_{n,q}^{\left( \alpha ,\beta \right) }\left( 0\right) :=g_{n,q}^{\left( \alpha ,\beta \right) }$ are the modified $q$-extension of the Genocchi numbers with weight $\alpha $ and $\beta $, cf. [4, 5].
In \cite{Rim}, Rim and Jeong defined the modified $q$-Euler numbers with weight $\alpha $ as follows:
\begin{equation}
\widetilde{\xi }_{n,q}^{\left( \alpha \right) }=\int_{\mathbb{Z}_{p}}q^{-t}\left[ t\right] _{q^{\alpha }}^{n}d\mu _{-q}\left( t\right) .  \label{equation 24}
\end{equation}
From expressions (\ref{equation 12}) and (\ref{equation 24}), we get the following proposition:
\begin{proposition}
The following identity
\begin{equation}
\widetilde{\xi }_{n,q}^{\left( \alpha \right) }=\frac{g_{n+1,q}^{\left( \alpha ,1\right) }}{n+1}  \label{equation 25}
\end{equation}
is true.
\end{proposition}
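Indeed, the proposition follows at once by taking $\beta =1$ and $x=0$ in (\ref{equation 12}):
\begin{equation*}
\frac{g_{n+1,q}^{\left( \alpha ,1\right) }}{n+1}=\int_{\mathbb{Z}_{p}}q^{-\xi }\left[ \xi \right] _{q^{\alpha }}^{n}d\mu _{-q}\left( \xi \right) =\widetilde{\xi }_{n,q}^{\left( \alpha \right) },
\end{equation*}
the last equality being the definition (\ref{equation 24}).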
In a previous paper \cite{Araci 6}, Araci, Acikgoz and Park introduced weighted $q$-analogues of $p$-adic $\log \Gamma $ type functions, and they derived some interesting identities in analytic number theory and $p$-adic analysis. They were motivated by the paper of T. Kim, ``\textit{On a $q$-analogue of the $p$-adic log gamma functions and related integrals}, \textit{J. Number Theory}, 76 (1999), no. 2, 320--329.'' We also introduce a $q$-analogue of the $p$-adic $\log \Gamma $ type function with weight $\alpha $ and $\beta ,$ and we derive some interesting identities for this type of function.
\begin{center}
\textbf{On $p$-adic $\log \Gamma $ functions with weight $\alpha $ and $\beta $}
\end{center}
In this part, from (\ref{equation 2}), we begin with the following nice identity:
\begin{equation}
I_{-q}^{\left( \beta \right) }\left( q^{-\beta x}f_{n}\right) +\left( -1\right) ^{n-1}I_{-q}^{\left( \beta \right) }\left( q^{-\beta x}f\right) =\left[ 2\right] _{q^{\beta }}\sum_{l=0}^{n-1}\left( -1\right) ^{n-1-l}f\left( l\right) ,  \label{equation 6}
\end{equation}
where $f_{n}\left( x\right) =f\left( x+n\right) $ and $n\in \mathbb{N}$ (see \cite{Araci 4}).
In particular, for $n=1$ in (\ref{equation 6}), we easily see that
\begin{equation}
I_{-q}^{\left( \beta \right) }\left( q^{-\beta x}f_{1}\right) +I_{-q}^{\left( \beta \right) }\left( q^{-\beta x}f\right) =\left[ 2\right] _{q^{\beta }}f\left( 0\right) .  \label{equation 7}
\end{equation}
With a simple application, it is easy to see that
\begin{equation}
\left( \left( 1+x\right) \log \left( 1+x\right) \right) ^{\prime }=1+\log \left( 1+x\right) =1+\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n}x^{n},  \label{equation 15}
\end{equation}
where $\left( \left( 1+x\right) \log \left( 1+x\right) \right) ^{\prime }=\frac{d}{dx}\left( \left( 1+x\right) \log \left( 1+x\right) \right) .$
By expression (\ref{equation 15}), we can derive
\begin{equation}
\left( 1+x\right) \log \left( 1+x\right) =\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n\left( n+1\right) }x^{n+1}+x+c,\text{ where }c\text{ is a constant.}  \label{equation 16}
\end{equation}
If we take $x=0,$ we get $c=0.$ By expressions (\ref{equation 15}) and (\ref{equation 16}), we easily see that
\begin{equation}
\left( 1+x\right) \log \left( 1+x\right) =\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n\left( n+1\right) }x^{n+1}+x.  \label{equation 17}
\end{equation}
The following $q$-analogue of a $p$-adic locally analytic function on $\mathbb{C}_{p}\backslash \mathbb{Z}_{p}$ was considered by T. Kim:
\begin{equation}
G_{p,q}\left( x\right) =\int_{\mathbb{Z}_{p}}\left[ x+\xi \right] _{q}\left( \log \left[ x+\xi \right] _{q}-1\right) d\mu _{-q}\left( \xi \right) \text{ (for details, see [5, 6]).}  \label{equation 18}
\end{equation}
With the same motivation as (\ref{equation 18}), in a previous paper \cite{Araci 6} a $q$-analogue of the $p$-adic locally analytic function on $\mathbb{C}_{p}\backslash \mathbb{Z}_{p}$ with weight $\alpha $ was considered:
\begin{equation}
G_{p,q}^{\left( \alpha \right) }\left( x\right) =\int_{\mathbb{Z}_{p}}\left[ x+\xi \right] _{q^{\alpha }}\left( \log \left[ x+\xi \right] _{q^{\alpha }}-1\right) d\mu _{-q}\left( \xi \right) .  \label{equation 19}
\end{equation}
In particular, for $\alpha =1$ in (\ref{equation 19}), we easily see that $G_{p,q}^{\left( 1\right) }\left( x\right) =G_{p,q}\left( x\right) .$
In the same manner, we introduce a $q$-analogue of the $p$-adic locally analytic function on $\mathbb{C}_{p}\backslash \mathbb{Z}_{p}$ with weight $\alpha $ and $\beta $ as follows:
\begin{equation}
G_{p,q}^{\left( \alpha ,\beta \right) }\left( x\right) =\int_{\mathbb{Z}_{p}}q^{-\beta \xi }\left[ x+\xi \right] _{q^{\alpha }}\left( \log \left[ x+\xi \right] _{q^{\alpha }}-1\right) d\mu _{-q^{\beta }}\left( \xi \right) .  \label{equation 22}
\end{equation}
From expressions (\ref{equation 7}) and (\ref{equation 20}), we state the following theorem:

\begin{theorem}
The following identity holds:
\begin{equation*}
G_{p,q}^{\left( \alpha ,\beta \right) }\left( x+1\right) +G_{p,q}^{\left( \alpha ,\beta \right) }\left( x\right) =\left[ 2\right] _{q^{\beta }}\left[ x\right] _{q^{\alpha }}\left( \log \left[ x\right] _{q^{\alpha }}-1\right) .
\end{equation*}
\end{theorem}
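The theorem is obtained by applying (\ref{equation 7}) with $f\left( \xi \right) =\left[ x+\xi \right] _{q^{\alpha }}\left( \log \left[ x+\xi \right] _{q^{\alpha }}-1\right) :$ by (\ref{equation 22}), $I_{-q}^{\left( \beta \right) }\left( q^{-\beta \xi }f_{1}\right) =G_{p,q}^{\left( \alpha ,\beta \right) }\left( x+1\right) $ and $I_{-q}^{\left( \beta \right) }\left( q^{-\beta \xi }f\right) =G_{p,q}^{\left( \alpha ,\beta \right) }\left( x\right) ,$ while $f\left( 0\right) =\left[ x\right] _{q^{\alpha }}\left( \log \left[ x\right] _{q^{\alpha }}-1\right) .$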
It is easy to show that
\begin{eqnarray}
\left[ x+\xi \right] _{q^{\alpha }} &=&\frac{1-q^{\alpha \left( x+\xi \right) }}{1-q^{\alpha }}  \label{equation 20} \\
&=&\frac{1-q^{\alpha x}+q^{\alpha x}-q^{\alpha \left( x+\xi \right) }}{1-q^{\alpha }}  \notag \\
&=&\left( \frac{1-q^{\alpha x}}{1-q^{\alpha }}\right) +q^{\alpha x}\left( \frac{1-q^{\alpha \xi }}{1-q^{\alpha }}\right)  \notag \\
&=&\left[ x\right] _{q^{\alpha }}+q^{\alpha x}\left[ \xi \right] _{q^{\alpha }}.  \notag
\end{eqnarray}
Substituting $x\rightarrow \frac{q^{\alpha x}\left[ \xi \right] _{q^{\alpha }}}{\left[ x\right] _{q^{\alpha }}}$ into (\ref{equation 17}) and using (\ref{equation 20}), we get the interesting formula
\begin{equation}
\left[ x+\xi \right] _{q^{\alpha }}\left( \log \left[ x+\xi \right] _{q^{\alpha }}-1\right) =\left( \left[ x\right] _{q^{\alpha }}+q^{\alpha x}\left[ \xi \right] _{q^{\alpha }}\right) \log \left[ x\right] _{q^{\alpha }}+\sum_{n=1}^{\infty }\frac{\left( -q^{\alpha x}\right) ^{n+1}}{n\left( n+1\right) }\frac{\left[ \xi \right] _{q^{\alpha }}^{n+1}}{\left[ x\right] _{q^{\alpha }}^{n}}-\left[ x\right] _{q^{\alpha }}.  \label{equation 21}
\end{equation}
If we substitute $\alpha =1$ into (\ref{equation 21}), we recover Kim's $q$-analogue of the $p$-adic $\log \Gamma $ function (for details, see [8]). From expressions (\ref{equation 2}) and (\ref{equation 21}), we obtain the following worthwhile and interesting theorems:
\begin{theorem}
For $x\in \mathbb{C}_{p}\backslash \mathbb{Z}_{p},$ the following identity
\begin{equation}
G_{p,q}^{\left( \alpha ,\beta \right) }\left( x\right) =\left( \frac{\left[ 2\right] _{q^{\beta }}}{2}\left[ x\right] _{q^{\alpha }}+q^{\alpha x}\frac{g_{2,q}^{\left( \alpha ,\beta \right) }}{2}\right) \log \left[ x\right] _{q^{\alpha }}+\sum_{n=1}^{\infty }\frac{\left( -q^{\alpha x}\right) ^{n+1}}{n\left( n+1\right) \left( n+2\right) }\frac{g_{n+2,q}^{\left( \alpha ,\beta \right) }}{\left[ x\right] _{q^{\alpha }}^{n}}-\left[ x\right] _{q^{\alpha }}\frac{\left[ 2\right] _{q^{\beta }}}{2}  \label{equation 26}
\end{equation}
is true.
\end{theorem}
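In the proof of this theorem, besides Witt's formula (\ref{equation 12}), one uses the elementary evaluation
\begin{equation*}
\int_{\mathbb{Z}_{p}}q^{-\beta \xi }d\mu _{-q^{\beta }}\left( \xi \right) =\lim_{N\rightarrow \infty }\frac{1}{\left[ p^{N}\right] _{-q^{\beta }}}\sum_{x=0}^{p^{N}-1}\left( -1\right) ^{x}=\lim_{N\rightarrow \infty }\frac{1+q^{\beta }}{1+q^{\beta p^{N}}}=\frac{\left[ 2\right] _{q^{\beta }}}{2},
\end{equation*}
since $\sum_{x=0}^{p^{N}-1}\left( -1\right) ^{x}=1$ for odd $p$ and $q^{\beta p^{N}}\rightarrow 1$ as $N\rightarrow \infty ;$ this accounts for the coefficient $\frac{\left[ 2\right] _{q^{\beta }}}{2}$ of $\left[ x\right] _{q^{\alpha }}$ in (\ref{equation 26}).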
\begin{corollary}
Taking $q\rightarrow 1$ in (\ref{equation 26}), we get the nice identity
\begin{equation*}
G_{p,1}^{\left( \alpha ,\beta \right) }\left( x\right) =\left( x+\frac{G_{2}}{2}\right) \log x+\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n\left( n+1\right) \left( n+2\right) }\frac{G_{n+2}}{x^{n}}-x,
\end{equation*}
where $G_{n}$ are the famous Genocchi numbers.
\end{corollary}
\begin{theorem}
The following nice identity
\begin{equation}
G_{p,q}^{\left( \alpha ,1\right) }\left( x\right) =\left( \frac{\left[ 2\right] _{q}}{2}\left[ x\right] _{q^{\alpha }}+q^{\alpha x}\widetilde{\xi }_{1,q}^{\left( \alpha \right) }\right) \log \left[ x\right] _{q^{\alpha }}+\sum_{n=1}^{\infty }\frac{\left( -q^{\alpha x}\right) ^{n+1}}{n\left( n+1\right) }\frac{\widetilde{\xi }_{n+1,q}^{\left( \alpha \right) }}{\left[ x\right] _{q^{\alpha }}^{n}}-\frac{\left[ 2\right] _{q}}{2}\left[ x\right] _{q^{\alpha }}  \label{equation 27}
\end{equation}
is true.
\end{theorem}
\begin{corollary}
Putting $q\rightarrow 1$ in (\ref{equation 27}), we have the following identity:
\begin{equation*}
G_{p,1}^{\left( \alpha ,1\right) }\left( x\right) =\left( x+E_{1}\right) \log x+\sum_{n=1}^{\infty }\frac{\left( -1\right) ^{n+1}}{n\left( n+1\right) }\frac{E_{n+1}}{x^{n}}-x,
\end{equation*}
where $E_{n}$ are the familiar Euler numbers.
\end{corollary}
\section*{Background}
Nature employs proteins for a vast range of tasks, and their capacity to evolve to perform diverse functions is one of the marvels of biology. Recently, it has become possible to reconstruct convincing scenarios for how new protein functions evolve. One of the most important conclusions of this work is that the initial steps may occur even before the new functions come under selection~\cite{O'Brien1999,Aharoni2005,Kondrashov2005,Chothia2003,Copley2004,Bridgham2006}. The reason is that in addition to their primary biological functions, most proteins are at least modestly effective at performing a range of other ``promiscuous'' functions~\cite{O'Brien1999,Aharoni2005,Copley2003,Khersonsky2006,O'Brien2006,O'Loughlin2007}. In laboratory experiments, selection can rapidly increase these promiscuous functions, often without much immediate cost to a protein's original function~\cite{Aharoni2005}. In a particularly compelling set of experiments, Tawfik and coworkers have shown that selection for promiscuous activity likely explains the origin and evolution of a bacterial enzyme that hydrolyzes a synthetic compound only recently introduced into the environment~\cite{Aharoni2005,Roodveldt2005,Afriat2006}. Mounting evidence therefore supports the idea that new protein functions evolve when selection favors mutations that increase an existing weak promiscuous function.\pb
But for as long as 50 years, since Linus Pauling and Emile Zuckerkandl published their seminal analysis of molecular change in proteins~\cite{Zuckerkandl1965}, it has been clear that just a small fraction of the mutations that accumulate in naturally evolving proteins are driven by selection for a new function. Instead, most of the mutations responsible for natural sequence divergence do not change a protein's primary biological function, but rather are due to either neutral genetic drift~\cite{Kimura1983} or pressure for a subtle recalibration of protein properties unrelated to the acquisition of an entirely new function~\cite{Blundell1975}. However, even though most mutations accumulate under the constraint that they not interfere with a protein's primary function, they could still substantially alter other, promiscuous functions. Such alterations could then aid in the subsequent evolution of new functions.\pb
Here we have experimentally investigated this possibility using a set of enzymes that have undergone genetic drift that is neutral with respect to a well-defined laboratory selection criterion for enzymatic activity on a single substrate~\cite{Bloom2007c}. We have examined how these enzymes have changed in their promiscuous activities on five other substrates. As described below, we find that the enzymes have often undergone substantial changes in their promiscuous activities, suggesting that neutral genetic drift could play an important role in enabling future functional evolution.\pb
\section*{Results and Discussion}
\subsection*{A set of neutrally evolved cytochrome P450 enzymes}
We focused our analysis on cytochrome P450 proteins. P450s are excellent examples of enzymes that can evolve to catalyze new reactions, since they are involved in a wide range of important functions such as drug metabolism and steroid biosynthesis~\cite{Montellano1995,Lewis2001}. We worked with P450 BM3, a cytosolic bacterial enzyme that catalyzes the subterminal hydroxylation of medium- and long-chain fatty acids~\cite{Munro2002}. We have previously described a set of P450 BM3 heme domain variants that were created by laboratory neutral evolution from a common parent sequence~\cite{Bloom2007c}. Here we briefly recap the procedure used to create these P450s in order to explain their origin and why they can properly be viewed as the product of neutral genetic drift.\pb
The essential difference between neutral genetic drift and adaptive evolution is that in the former case mutations that have no substantial effect on fitness spread stochastically in a population, while in the latter case mutations spread because they are beneficial and so favored by selection. Of course, it may be difficult to discern whether a specific mutation in a natural population has spread neutrally or due to favorable selection. But in the laboratory it is possible to define an arbitrary selection criterion to ensure that all mutations spread due to neutral genetic drift. Specifically, we imposed the requirement that the P450s had to hydroxylate the substrate 12-$p$-nitrophenoxydodecanoic acid (12-pNCA) with an activity exceeding a specific threshold~\cite{Bloom2007c}. All mutant P450s were therefore straightforwardly classified as either functional (if they exceeded the threshold) or nonfunctional (if they did not). While this selection criterion is obviously a simplification of natural evolution, we believe that for the current purpose it is a reasonable abstraction of the evolutionary requirement that an enzyme's primary activity exceed some critical level in order to allow its host organism to robustly survive and reproduce. To implement laboratory neutral evolution using this selection criterion, we began with a single parent P450 BM3 heme domain variant (called R1-11) and used error-prone PCR to create random mutants of this parent~\cite{Bloom2007c}. Mutants that failed to yield sufficient active protein to hydroxylate at least 75\% of the 12-pNCA of the R1-11 parent when expressed in \textit{Escherichia coli} were immediately eliminated, while all other mutants were carried over to the next generation with equal probability. Any mutations that spread among the offspring sequences were therefore by definition due to neutral genetic drift, since there was no opportunity for any functional mutant to be favored over any other. 
We emphasize that the fact that the mutations spread due to neutral genetic drift does not mean that they have no effect on the protein's properties. Indeed, one of the growing realizations about protein evolution is that mutations that spread by neutral genetic drift may still have an impact on future evolution~\cite{DePristo2005,Bloom2007}. One mechanism for this impact is that neutral genetic drift can change a protein's stability and so alter its tolerance to future mutations~\cite{Bloom2005,Besenmatter2007,Bloom2006}. As will be demonstrated below, another mechanism is that neutral genetic drift can alter a protein's promiscuous functions.\pb
As described previously~\cite{Bloom2007c}, the end result of the neutral evolution was 44 different P450 variants, each of which satisfied the selection criterion for activity on 12-pNCA (these are the combined final sequences from the monomorphic and polymorphic populations in \cite{Bloom2007c}). For the current study, we analyzed the promiscuous activities of 34 of these neutrally evolved P450 variants. The sequence diversity of these P450s is shown in the phylogenetic tree of Figure \ref{fig:tree}; they have accumulated an average of four nonsynonymous mutations each. \pb
\subsection*{Activities of the neutrally evolved P450 enzymes}
All of the P450 variants had evolved under selection solely for their ability to hydroxylate 12-pNCA. We examined their promiscuous hydroxylation activities on the five other substrates shown at the top of Figure \ref{fig:heatmap}. Two of these promiscuous substrates, propranolol and 2-amino-5-chlorobenzoxazole (also known as zoxazolamine), are drugs that are metabolized by human P450s~\cite{Otey2005,Lasker1982}. The other three promiscuous substrates, 11-phenoxyundecanoic acid, 2-phenoxyethanol, and 1,2-methylenedioxybenzene, are organic compounds of increasing structural dissimilarity to 12-pNCA. The parent P450 possessed at least some hydroxylation activity on all of these substrates (throughout the remainder of this work, ``activity'' refers to total substrate turnovers per enzyme). \pb
We measured the activities of all 34 neutrally evolved P450s on the five promiscuous substrates as well as 12-pNCA. Figure \ref{fig:heatmap} shows the fold change in activity of each of the variants relative to the parent P450 on all six substrates, and Figure \ref{fig:foldchanges} shows the same data with standard errors. As is apparent from these figures, many of the neutrally evolved P450s have undergone changes in their activities that substantially exceeded the standard errors of the measurements. Even on 12-pNCA, some of the variants have undergone modest increases or very mild decreases in activity. The modest increases in 12-pNCA activity were unsurprising, since the parent P450 only hydroxylates 12-pNCA with about a quarter of the activity reported for a P450 engineered for maximal 12-pNCA activity~\cite{Cirino2003}. Likewise, the mild decreases in 12-pNCA activity were due to the fact that during neutral evolution the P450s were only required to maintain this activity above a minimal threshold (75\% of the total 12-pNCA conversion of the parent protein when expressed in \textit{E. coli}~\cite{Bloom2007c}). The changes in the promiscuous activities were often much larger than those on 12-pNCA. For example, several of the neutrally evolved variants have undergone nearly four-fold increases in activity on one or more of 2-phenoxyethanol, 2-amino-5-chlorobenzoxazole, and 1,2-methylenedioxybenzene. Other variants have experienced equally large decreases in one or more of the promiscuous activities.\pb
\subsection*{Broad patterns of change in activity can be rationalized in terms of substrate properties}
The data in Figures \ref{fig:heatmap} and \ref{fig:foldchanges} clearly indicate that some of the P450s have undergone substantial changes in their activities. In an effort to understand the nature of these changes, we sought to determine whether there were any clear patterns in the activities. In Figure \ref{fig:heatmap}, the substrates have been hierarchically clustered so that each successive cluster contains substrates on which the P450s have increasingly similar activities (the clustering is illustrated by the tree-like dendrogram at the top of the figure, with similar substrates in adjacent columns). The clustering of the substrates is readily rationalized in terms of their chemical structures. For example, 2-amino-5-chlorobenzoxazole and 1,2-methylenedioxybenzene cluster, meaning that P450s with high activity on one of these substrates also tend to have high activity on the other. Presumably, they cluster because the similarity of their structures (both are fusions of six and five membered rings) means that they have similar modes of docking in the substrate binding pocket. Likewise, 12-pNCA and 11-phenoxyundecanoic acid are phenoxycarboxylic acids of similar chain length, and are in the same cluster. To a lesser extent, 2-phenoxyethanol resembles 12-pNCA and 11-phenoxyundecanoic acid in its phenolic ether structure, and it falls into a higher level cluster with these two substrates. Propranolol shares a fused ring structure with 2-amino-5-chlorobenzoxazole and 1,2-methylenedioxybenzene, and these three substrates share a common higher level cluster. Overall, the hierarchical clustering indicates that substrates that appear similar to the human eye are also ``seen'' this way by the P450s, since the P450s tend to increase or decrease their activities on these substrates in a coordinated fashion.\pb
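The hierarchical clustering described above can be sketched in a few lines of Python. Everything below is illustrative: the activity profiles are synthetic stand-ins for the measured ones, with two anticorrelated groups playing the roles of the phenolic ether and fused ring substrates.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Synthetic activity profiles: rows are substrates, columns are the 34
# variants. Correlated rows mimic chemically similar substrates.
base = rng.normal(size=34)
profiles = np.vstack([
    base + 0.1 * rng.normal(size=34),    # stands in for 12-pNCA
    base + 0.1 * rng.normal(size=34),    # 11-phenoxyundecanoic acid
    -base + 0.1 * rng.normal(size=34),   # 2-amino-5-chlorobenzoxazole
    -base + 0.1 * rng.normal(size=34),   # 1,2-methylenedioxybenzene
])

# Correlation distance groups substrates whose activities rise and fall
# together across variants; average linkage builds the dendrogram.
tree = linkage(pdist(profiles, metric="correlation"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
```

Cutting the dendrogram into two clusters recovers the two substrate groups, mirroring the column dendrogram of Figure \ref{fig:heatmap}.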
Figure \ref{fig:heatmap} also shows the P450 variants arranged in hierarchical clusters. A visual inspection immediately indicates that there is an overall association among all of the activities. Some of the P450 variants (redder rows) tend to show improved activity on most substrates, while others (bluer rows) tend to show decreased activity on most substrates. Taken together with the clustering of the similar substrates, this overall association suggests that there are two main trends in the activity changes. First, the P450s appear to have undergone general changes in their catalytic abilities that are manifested by broad increases or decreases in activity on all substrates. Second, the P450s appear to have experienced shifts in specificity to favor either the fused ring or the phenolic ether substrates.\pb
To test whether these two apparent trends in activity changes are supported by a quantitative examination of the data, we performed principal component analysis. Principal component analysis is a well-established mathematical technique for finding the dominant components of variation in a data set, essentially by diagonalizing the covariance matrix. As suggested by the foregoing visual inspection, principal component analysis revealed that two components explained most of the changes in P450 activity (Table \ref{tab:pca}). The first component contained positive contributions from all six substrates, and so represents a general improvement in catalytic ability. The second component contained positive contributions from the fused ring substrates and negative contributions from the phenolic ether substrates, and so represents an increased preference for the former class of substrates over the latter. Together, these two components explain 82\% of the variance in activities among the 34 P450 variants. The remaining 18\% of the variance is explained by the four remaining components, which represent more subtle shifts in activity that are less easily rationalized with intuitive chemical arguments.\pb
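The principal component computation amounts to diagonalizing the covariance matrix of the activity data, which can be sketched as follows. The $34\times 6$ matrix here is random stand-in data rather than the measured fold changes, so the loadings and variance fractions are illustrative only (for the real data the first two components explain 82\% of the variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in 34 x 6 matrix of log2 fold changes (variants x substrates);
# the real measurements are those plotted in Figure 2.
activities = rng.normal(size=(34, 6))

# Center each substrate column and diagonalize the covariance matrix.
centered = activities - activities.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order; reverse so components
# come out in decreasing order of explained variance.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variance explained by each component. The signs of
# the loadings in eigvecs[:, 0] and eigvecs[:, 1] are what distinguish a
# "general catalytic ability" component from a "substrate class" component.
explained = eigvals / eigvals.sum()
top_two = explained[:2].sum()
```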
\subsection*{Overall distributions of change in the activities}
The preceding sections have demonstrated that neutral genetic drift can lead to substantial changes in P450 activities, and that many of these changes can be understood as resulting from either fairly general increases/decreases in catalytic ability or shifts in preference for different broad classes of substrate structures. In this section, we examine whether there are any pervasive trends in the distributions of activity changes --- for example, did most of the promiscuous activities tend to increase or decrease? If a property is not under any evolutionary constraint, then during neutral genetic drift its values might be expected to be distributed in a roughly Gaussian fashion, as the neutrally evolving proteins freely sample from the presumably normal underlying distribution. On the other hand, if a property is constrained by selection to remain above a certain threshold, then during neutral genetic drift its values should display a truncated distribution since selection culls proteins with values that fall below the threshold (such a distribution has been predicted for protein stability by simulations~\cite{Taverna2002} and theory~\cite{Bloom2007}). \pb
Figure \ref{fig:changedistribution} shows the distribution of changes in activity for each of the six substrates. The distribution for 12-pNCA appears to be truncated on the left, as expected since the P450s neutrally evolved under a requirement to maintain the ability to hydroxylate 12-pNCA. Some of the P450s have undergone a mild decrease in 12-pNCA activity, reflective of the fact that the neutral evolution selection criterion provided a small amount of latitude by allowing the total amount of hydroxylated 12-pNCA to drop to 75\% of the parental value~\cite{Bloom2007c}. A number of P450s have neutrally evolved 12-pNCA activity that modestly exceeds that of the parent --- again unsurprising, because the parental 12-pNCA activity falls well below the maximal value achievable for this type of protein~\cite{Cirino2003}. The distribution for 11-phenoxyundecanoic acid resembles that for 12-pNCA, probably because activities on these two chemically similar substrates are highly linked, as discussed in the previous section.\pb
The other four promiscuous activities are less linked to 12-pNCA activity, and their distributions are much more symmetric. The symmetric shapes of these distributions suggest that neutral genetic drift has sampled from a roughly Gaussian distribution for these four promiscuous activities. For three of the substrates (propranolol, 2-amino-5-chlorobenzoxazole, and 1,2-methylenedioxybenzene), the distributions of activities are approximately centered around the parental activity. This centering indicates that the promiscuous activities of the parent on these three substrates are typical of what would be expected of a neutrally evolved P450. The distribution for 2-phenoxyethanol, on the other hand, is shifted towards activities higher than that of the parent. This shift indicates that the parent is less active on 2-phenoxyethanol than a typical neutrally evolved P450.\pb
If the activity distributions of Figure \ref{fig:changedistribution} truly reflect what would be expected after a very long period of neutral genetic drift (i.e., if they are ``equilibrium'' distributions), then each variant represents a random sample from the underlying distribution of activities among all P450s that can neutrally evolve under this selection criterion. In this case, there should be no correlation between the extent of change in activity and the number of accumulated mutations, since the P450s should have lost all ``memory'' of the parent's activity. On the other hand, if there has not been enough neutral genetic drift to completely eliminate residual memory of the parent's activity, then variants with fewer mutations should more closely resemble the parent's activity profile. To test whether the activity distributions of the P450 variants had equilibrated, we computed the correlation between the magnitude of each variant's change in activity and the number of nonsynonymous mutations it possessed relative to the parent. Table \ref{tab:mutcorrelation} shows that the magnitude of activity change is positively correlated with the number of mutations for all six substrates. Although the correlations for the individual substrates are mostly not statistically significant due to the small number of samples, the overall correlation for all six substrates is highly significant ($P = 10^{-3}$). Therefore, the P450 activities are still in the process of diverging from the parental values by neutral genetic drift. If the variants were to undergo further neutral genetic drift, we would expect to see even larger changes in their promiscuous activities.\pb
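The correlation analysis in this paragraph can be sketched as follows. The per-variant mutation counts and activity-change magnitudes below are simulated (the real values come from the 34 sequenced variants), and a permutation test stands in for the significance calculation; the paper does not specify its exact test, so this is only one reasonable choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-variant data: nonsynonymous mutation counts and the
# magnitude of activity change, |log2(fold change)| averaged over substrates.
mutations = rng.integers(1, 9, size=34).astype(float)
magnitude = 0.1 * mutations + rng.normal(scale=0.2, size=34)

def pearson(x, y):
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

r_obs = pearson(mutations, magnitude)

# Permutation null: shuffling the mutation counts models "no residual
# memory of the parent", i.e. activity change independent of mutation number.
perm = [pearson(rng.permutation(mutations), magnitude) for _ in range(2000)]
p_value = (1 + sum(r >= r_obs for r in perm)) / (1 + len(perm))
```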
We also examined whether P450 variants with mutations near the substrate binding pocket were more likely to have undergone large changes in their activities. Five of the P450 variants had a mutation to a residue that was within 5 \AA\ of the surrogate substrate in the P450 BM3 crystal structure~\cite{Haines2001}: variant M2 had A74V, M8 had A330V, M13 had M354I, M15 had A74P, and M24 had I263V~\cite{Bloom2007c}. Two of these mutated residues are of clear importance, since mutating residue 74 has previously been shown to shift substrate specificity~\cite{Li2001,Li2001b,Otey2005} and residue 263 plays a role in the substrate-induced conformational shift~\cite{Pylypenko2004}. We compared the activity changes for the five variants with mutations near the binding pocket to those for the 29 variants without any such mutations, computing the magnitude of activity change as the absolute value of the logarithm (base two) of the fold change in activity averaged over all six substrates. The average magnitude of activity change for the five variants with mutations near the active site was 0.88, while the average for the other 29 variants was 0.47. These averages are significantly different, with an unequal variance T-test $P$-value of $10^{-2}$. Therefore, variants with mutations near the substrate binding pocket are especially likely to have altered activities, although many variants without mutations near the pocket also underwent substantial activity changes. \pb
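The unequal variance (Welch) T-test used for the active-site comparison can be sketched as follows; the five "near" and 29 "far" magnitudes below are made up for illustration (the real group averages were 0.88 and 0.47).

```python
import math
import numpy as np

# Made-up magnitudes of activity change for the 5 variants with a mutation
# within 5 Angstroms of the substrate pocket and the 29 variants without.
near = np.array([0.9, 1.1, 0.6, 0.8, 1.0])
far = 0.45 + np.linspace(-0.2, 0.2, 29)

def welch_t(a, b):
    # Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    # which do not assume equal variances in the two groups.
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(near, far)
```

The statistic and degrees of freedom would then be referred to a t distribution to obtain the $P$-value.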
\section*{Conclusions}
We have shown that neutral genetic drift can lead to changes of as much as four-fold in the promiscuous activities of P450 proteins. The ubiquity of these changes is striking --- even though many of the neutrally evolved P450s had only a handful of mutations, most of them had experienced at least some change in their promiscuous activities. P450s may be especially prone to this type of change, since their catalytic mechanism involves large substrate-induced conformational shifts~\cite{Modi1996} that can be modulated by mutations distant from the active site~\cite{Glieder2002,Meinhold2005,Li2001}. In addition, P450s have a tendency to eventually undergo irreversible inactivation that can be promoted by reduced coupling between substrate binding and conformational shifts, as well as by other poorly understood determinants of catalytic stability~\cite{Munro2002,Loida1993,Bernhardt2006}. There are therefore ample opportunities for mutations that spread by neutral genetic drift to cause subtle alterations in a P450's promiscuous activities. But we believe that neutral genetic drift is also likely to cause substantial changes in the promiscuous activities of enzymes with other catalytic mechanisms. In support of this idea, a recent study by Tawfik and coworkers~\cite{Amitai2007} indicates that mutations with little effect on the native lactonase activity of serum paraoxonase can alter this enzyme's promiscuous activities. Taken together, this study and our work suggest that neutral genetic drift allows for changes in promiscuous protein functions. These changes could in turn have important implications for future functional evolution. 
For example, one can easily imagine a scenario in which neutral genetic drift enhances a promiscuous protein function, and then a subsequent gene duplication allows natural selection to transform one of the genes into the template for a protein with a full-fledged new functional role~\cite{O'Brien1999,Aharoni2005,Kondrashov2005,Chothia2003,Copley2004,Bridgham2006}.\pb
One of the most attractive aspects of our study is the degree to which the changes in P450 activities during neutral genetic drift could be understood in terms of the chemical structures of the substrates. Neutral genetic drift did not simply cause unpredictable shifts in activities. Instead, most of the variation was explained by two eminently intuitive components: an overall increase or decrease in catalytic ability, and a preference for either fused ring or phenolic ether substrates. We have suggested that neutral genetic drift under a fixed selection criterion can be viewed as sampling underlying ``equilibrium'' distributions of activities. The distributions for different activities are linked, since we have shown that P450s with good activity on one substrate will frequently also be highly active on chemically similar substrates (similar linkages have been observed in P450s created by recombination~\cite{Landwehr2007}). So while it may be impossible to know exactly how any specific mutation will affect a given activity, measuring a handful of activities allows one to make relatively accurate predictions about other closely linked activities. The prerequisite for making such predictions is an understanding of the linkages among activities in the set of sequences explored by neutral genetic drift (the neutral network). We have made the first steps in elucidating these linkages for P450s that have neutrally evolved under one specific selection regime. The linkages are very similar to those that would have been made by an organic chemist grouping the substrates on the basis of their chemical structures. Knowledge of these linkages is of use in understanding the origins of enzyme specificity~\cite{O'Loughlin2007,Varadarajan2005} --- if an enzyme displays high activity on one substrate but low activity on another, then either these two activities are negatively linked during neutral genetic drift or selection has explicitly disfavored one of them.\pb
Our work also has implications for the general relationship between neutral genetic drift and adaptive evolution. A number of studies focused on RNA~\cite{Huynen1996,Huynen1996b,Fontana1998} or computational systems~\cite{vanNimwegen1997,vanNimwegen2000} have suggested that genetic drift might aid in adaptive evolution. Our study and that of Tawfik and coworkers~\cite{Amitai2007} support this notion for the evolution of new protein functions. However, the way that drift in promiscuous functions promotes adaptive evolution is slightly different than the paradigm proposed for RNA~\cite{Huynen1996,Huynen1996b,Fontana1998} and computational systems~\cite{vanNimwegen1997,vanNimwegen2000}. In those systems, neutral genetic drift is envisioned as allowing a sequence to move along its neutral network until it reaches a position where it can jump to a new higher-fitness and non-overlapping neutral network. In contrast, promiscuous protein functions change even as a protein drifts along a single neutral network. The adaptive benefits of this drift come when new selective pressures suddenly favor a previously irrelevant promiscuous function, in effect creating a new neutral network that overlaps with parts of the old one.\pb
Overall, experiments have now demonstrated two clear mechanisms by which neutral genetic drift can aid in the evolution of protein functions. In the first mechanism, neutral genetic drift fixes a mutation that increases a protein's stability~\cite{Serrano1993,DePristo2005,Bloom2007}, thereby improving the protein's tolerance for subsequent mutations~\cite{Bloom2005,Besenmatter2007,Bloom2006}, some of which may confer new or improved functions~\cite{Bloom2006}. In the second mechanism, which was the focus of this work and the recent study by Tawfik and coworkers~\cite{Amitai2007}, neutral genetic drift enhances a promiscuous protein function. This enhancement poises the protein to undergo adaptive evolution should a change in selection pressures make the promiscuous function beneficial at some point in the future. \pb
\section*{Methods}
\subsection*{Determination of P450 activities}
We attempted to determine the activities of all 44 neutrally evolved P450 variants described in \cite{Bloom2007c} (22 from the final monomorphic populations and 22 from the final polymorphic population). Ten of these variants expressed relatively poorly in the procedure used here (as described in more detail below), and so were eliminated from further analysis since their low expression led to large errors in the activity measurements. That left activity data for the 34 neutrally evolved P450 variants listed in Figures \ref{fig:heatmap} and \ref{fig:foldchanges}, as well as for the R1-11 neutral evolution parent. The activities for each of these P450 variants were measured on all six substrates (12-pNCA, 2-phenoxyethanol, propranolol, 11-phenoxyundecanoic acid, 2-amino-5-chlorobenzoxazole, and 1,2-methylenedioxybenzene). In all cases, the activities represent the total amount of product produced after two hours, and so are in units of total turnovers per enzyme. P450 BM3 enzymes typically catalyze only a finite number of reaction cycles before becoming irreversibly inactivated, and we believe that all reactions were essentially complete after two hours, so these activities should represent the total turnovers of the enzymes during their catalytic lifetimes.\pb
To obtain P450 protein for the activity measurements, we expressed the protein using catalase-free \textit{Escherichia coli}~\cite{Barnes1991} containing the encoding gene on the isopropyl $\beta$-D-thiogalactoside (IPTG) inducible pCWori~\cite{Barnes1991} plasmid (the catalase is removed since it breaks down the hydrogen peroxide used by the P450). The sequences of the P450 variants are detailed in \cite{Bloom2007c}. We used freshly streaked cells to inoculate 2 ml cultures of Luria Broth (LB) supplemented with 100 $\mu$g/ml of ampicillin, and grew these starter cultures overnight with shaking at 37\mbox{$^{\rm{o}}$C}. We then used 0.5 ml from these starter cultures to inoculate 1 L flasks containing 200 ml of terrific broth (TB) supplemented with 100 $\mu$g/ml of ampicillin. The TB cultures were grown at 30\mbox{$^{\rm{o}}$C}\ and 210 rpm until they reached an optical density at 600 nm of $\approx$0.9, at which point IPTG and $\delta$-aminolevulinic acid were added to a final concentration of 0.5 mM each. The cultures were grown for an additional 19 hours, then the cells were harvested by pelletting 50 ml aliquots at 5,500 g and 4\mbox{$^{\rm{o}}$C}\ for 10 min, and stored at -20\mbox{$^{\rm{o}}$C}. To obtain clarified lysate, each pellet was resuspended in 8 ml of 100 mM [4-(2-hydroxyethyl)-1-piperazinepropanesulfonic acid] (EPPS), pH 8.2 and lysed by sonication, while being kept on ice. The cell debris was pelleted by centrifugation at 8,000 g and 4\mbox{$^{\rm{o}}$C}\ for 10 minutes, and the clarified lysate was decanted and kept on ice. \pb
To perform the assays, various dilutions of the clarified lysate were used to construct a standard curve. For each sample, we prepared dilutions of the clarified lysate in the 100 mM EPPS (pH 8.2) buffer to create samples for the standard curves. The dilutions were 100\% clarified lysate (undiluted), 67\% lysate, 40\% lysate, 25\% lysate, 17\% lysate, 10\% lysate, 6.7\% lysate, and 4.0\% lysate. Similar dilutions were also prepared of the clarified lysate of \textit{E. coli} cells carrying a null pCWori plasmid in order to assess the background readings from lysate without any P450. A pipetting robot was then used to dispense 80 $\mu$l of this series of clarified lysate dilutions into 96-well microtiter plates. Duplicate microtiter plates were then assayed for P450 concentration and total enzymatic activity on each of the six substrates. The R1-11 parent was assayed four times rather than in duplicate. To minimize variation, all of these assays were performed in parallel, with the same stock solutions, and on the same day.\pb
The P450 concentration was determined using the carbon monoxide (CO) difference spectrum assay~\cite{Otey2003}. Immediately before use, we prepared a 5$\times$ stock solution of 50 mM sodium hydrosulfite in 1.3 M potassium phosphate, pH 8.0. A multichannel pipette was used to add 20 $\mu$l of this stock solution to each well of the microtiter plates (which contained 80 $\mu$l of a dilution of clarified lysate), so that the final sodium hydrosulfite concentration was 10 mM in each well. The plates were briefly mixed and the absorbances were read at 450 and 490 nm. The plates were then incubated in a CO binding oven~\cite{Otey2003} for 10 minutes to bind CO to the iron. The absorbance was then again read at 450 and 490 nm. The amount of P450 is proportional to the increase in the magnitude of the absorbance at 450 nm minus the absorbance at 490 nm. At each dilution along the standard curve, the reading for the null control (lysate dilutions without P450) was subtracted from the reading for each P450 variant to control for clarified lysate background. Additional file \ref{add:readings} shows the standard curves for all P450 variants. Ten P450 variants had standard curve slopes less than or equal to 0.020, indicating a low P450 concentration. These were the ten P450 variants that we discarded from further analysis, since the low P450 concentration decreased the accuracy of the measurements.\pb
To determine the activity on 12-pNCA, we monitored the formation of the yellow 4-nitrophenolate compound that is released upon hydroxylation of the twelfth carbon in the 12-pNCA molecule~\cite{Schwaneberg1999,Cirino2003}. Immediately before use, we prepared a 6$\times$ stock solution of 12-pNCA by adding 3.6 parts of 4.17 mM 12-pNCA in DMSO to 6.4 parts 100 mM EPPS, pH 8.2. A multichannel pipette was used to add 20 $\mu$l of this stock solution to each well of the microtiter plates (which contained 80 $\mu$l of a dilution of clarified lysate). The plates were briefly mixed, and the absorbance was read at 398 nm. To initiate the reactions, we then prepared a 6$\times$ stock solution of 24 mM hydrogen peroxide in 100 mM EPPS, pH 8.2, and immediately added 20 $\mu$l of this solution to each well of the microtiter plate and mixed. The final assay conditions were therefore 6\% DMSO, 250 $\mu$M 12-pNCA, and 4 mM hydrogen peroxide. The reactions were incubated on the benchtop for two hours, and the total amount of enzymatic product was quantified by the gain in absorbance at 398 nm. At each dilution along the standard curve, the corresponding null control lysate dilution was subtracted from the reading to control for lysate background. Additional file \ref{add:readings} shows the standard curves for all P450 variants.\pb
The activities on 2-phenoxyethanol, propranolol, 11-phenoxyundecanoic acid, 2-amino-5-chlorobenzoxazole, and 1,2-methylenedioxybenzene were determined using the 4-aminoantipyrene (4-AAP) assay~\cite{Otey2003b,Otey2004}, which detects the formation of phenolic compounds. For each of these five substrates, immediately before use we prepared a 6$\times$ substrate stock solution. These stock solutions were 6\% DMSO and 6\% acetone in 100 mM EPPS, pH 8.2, with an amount of substrate added so that the substrate concentrations in the stock solutions were: 150 mM for 2-phenoxyethanol, 30 mM for propranolol, 5 mM for 11-phenoxyundecanoic acid, 12 mM for 2-amino-5-chlorobenzoxazole, and 120 mM for 1,2-methylenedioxybenzene. The stock solutions were prepared by first dissolving the substrate in the DMSO and acetone, and then adding the EPPS buffer. In some cases, the stock solution became cloudy upon addition of the buffer, but there was no immediate precipitation, so we could still pipette the stock solution. A multichannel pipette was used to add 20 $\mu$l of the appropriate substrate stock solution to each well of the microtiter plates (which contained 80 $\mu$l of a dilution of clarified lysate). To initiate the reactions, we then added 20 $\mu$l of the freshly prepared 6$\times$ hydrogen peroxide stock solution (24 mM hydrogen peroxide in 100 mM EPPS, pH 8.2) and mixed. We incubated the plates on the benchtop for two hours. To detect the formation of phenolic products, a pipetting robot was used to add and mix 120 $\mu$l of quench buffer (4 M urea in 100 mM sodium hydroxide) to each well. We then used the robot to add and mix 36 $\mu$l per well of 0.6\% (w/v) of 4-aminoantipyrene in distilled water, and immediately read the absorbance at 500 nm. 
To catalyze formation of the red compound produced by coupling a phenolic compound to 4-aminoantipyrene~\cite{Otey2003b,Otey2004}, we then used the pipetting robot to add and mix 36 $\mu$l per well of 0.6\% (w/v) of potassium peroxodisulfate in distilled water. The plates were incubated on the benchtop for 30 minutes, and the amount of product was quantified by the gain in absorbance at 500 nm. At each dilution along the standard curve, the corresponding null control lysate dilution was subtracted from the reading to control for lysate background. Additional file \ref{add:readings} shows the standard curves for all P450 variants.\pb
In order to extract enzymatic activities from the standard curves, we fit lines to the data points. For some of the substrates (most notably 12-pNCA and 2-phenoxyethanol), many of the P450 variants were sufficiently active to either saturate the substrate or exceed the linear range of absorbance readings. Therefore, we examined each standard curve by eye to determine which points remained in the linear range. Lines were then fit to the points in the linear range. These fits are shown in Additional file \ref{add:readings}. In the plots in this file, all points that were deemed to fall in the linear range (and so were used for the fits) are shown as filled shapes, while all points that were deemed to fall outside the linear range (and so were not used in the fits) are shown as empty shapes. The figures show the slopes of the lines for all replicates (two replicates for all P450 variants except for R1-11, which had four replicates). These slopes are averaged for a best estimate of the slope, and the standard error computed over the replicate measurements is also reported. \pb
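The line-fitting step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis script: the dilution series matches the one described earlier, but the absorbance values, the saturation cutoff, and the second replicate slope are all hypothetical.

```python
import numpy as np

# Hypothetical standard curve: lysate fraction vs. background-subtracted
# absorbance gain. The two most concentrated points are taken to saturate.
lysate_frac = np.array([0.040, 0.067, 0.10, 0.17, 0.25, 0.40, 0.67, 1.00])
absorbance  = np.array([0.050, 0.083, 0.13, 0.21, 0.31, 0.50, 0.72, 0.80])

# Points judged (by eye, as in the text) to remain in the linear range.
in_linear = lysate_frac <= 0.40

# Fit a line through the linear-range points only.
slope, intercept = np.polyfit(lysate_frac[in_linear], absorbance[in_linear], 1)

# Average the slope over replicates and report a standard error
# (two replicates here; R1-11 had four).
slopes = np.array([slope, 1.21])   # second replicate slope: hypothetical
best = slopes.mean()
sem = slopes.std(ddof=1) / np.sqrt(len(slopes))
```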
To compare the activities (total substrate turnovers per enzyme) among the different P450 variants, it is first necessary to normalize to the enzyme concentration. To do this, we took the ratio of the slope for each substrate divided by the slope of the CO difference spectrum, propagating the errors. These normalized slopes are proportional to the activity on each substrate. The normalized slopes are given in Additional file \ref{add:activity_data}. This file also lists the number of nonsynonymous mutations that each P450 variant possesses relative to the R1-11 parent sequence, as originally reported in \cite{Bloom2007c}. These normalized slopes allow for accurate comparisons among the P450 variants, and were used in the analyses in this paper. To convert these normalized slopes into total substrate turnovers per enzyme, it is necessary to multiply them by the ratio of extinction coefficients. The extinction coefficient for the CO difference spectrum reading (the absorbance at 450 nm minus that at 490 nm) is 91 mM$^{-1}$cm$^{-1}$~\cite{Otey2003}, and we calculated the extinction coefficient at 398 nm for the 4-nitrophenolate group in our buffer to be 12,000 M$^{-1}$cm$^{-1}$. Therefore, for 12-pNCA, the total number of substrate turnovers per P450 enzyme is 7.58 times the ratio of the 12-pNCA standard curve slope to the CO difference spectrum slope. This indicates that our parent protein had about 250 12-pNCA turnovers per enzyme, compared to the 1,000 reported for a variant engineered for maximal 12-pNCA activity~\cite{Cirino2003}. For the other substrates assayed with the 4-AAP assay, the extinction coefficient at 500 nm for the 4-AAP/phenol complex has been reported to be 4,800 M$^{-1}$cm$^{-1}$~\cite{Otey2004}. However, we believe that this extinction coefficient could be of dubious accuracy for our data. Depending on the exact type of phenolic compound created by P450 hydroxylation, the extinction coefficient for the 4-AAP/phenol complex may vary. 
Assuming the extinction coefficient of 4,800 M$^{-1}$cm$^{-1}$ is accurate, then the total number of substrate turnovers per P450 enzyme is 19.0 times the ratio of the substrate standard curve slope to the CO difference spectrum slope. Using this coefficient, the parent P450 had roughly 1,000 turnovers on 2-phenoxyethanol, 30 turnovers on propranolol, 400 turnovers on 11-phenoxyundecanoic acid, 50 turnovers on 2-amino-5-chlorobenzoxazole, and 80 turnovers on 1,2-methylenedioxybenzene. The high activities on 2-phenoxyethanol and 11-phenoxyundecanoic acid are presumably due to the fact that lack of polar substituents on the aromatic ring allows these compounds to enter the hydrophobic P450 BM3 binding pocket~\cite{Haines2001} more easily than 12-pNCA. However, we emphasize that the exact numerical values for the turnovers for these five substrates are questionable. Definitive determination of the extinction coefficients would require analytical analysis of the enzymatic products for each P450 variant on each substrate, which is beyond the scope of this study.\pb
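As a quick arithmetic check on the conversion factors quoted above (7.58 and 19.0), one can take the ratios of the extinction coefficients directly; the numbers below are those given in the text.

```python
# Extinction coefficients from the text, all in M^-1 cm^-1.
eps_co_diff = 91_000   # CO difference spectrum (91 mM^-1 cm^-1)
eps_pnca    = 12_000   # 4-nitrophenolate at 398 nm
eps_4aap    = 4_800    # 4-AAP/phenol complex at 500 nm (dubious accuracy)

# Turnovers per enzyme = factor * (substrate slope / CO difference slope).
factor_12pnca = eps_co_diff / eps_pnca   # ~ 7.58
factor_4aap   = eps_co_diff / eps_4aap   # ~ 19.0
```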
\subsection*{Analysis of activity data}
The raw activity values computed for the P450 variants are listed in Additional file \ref{add:activity_data}. To analyze and display this data, we computed the fold change in activity of each variant relative to the R1-11 parent P450. The fold change is simply the variant activity divided by the parent activity on each substrate, with the standard errors propagated to give an error on the fold change. In Figures \ref{fig:heatmap} and \ref{fig:foldchanges}, these fold changes are displayed on a logarithmic scale so that each unit corresponds to a two-fold increase or decrease in activity. In Figure \ref{fig:heatmap}, the substrates and the P450 variants have both been clustered, as shown by dendrograms on the side of the heat map. The clustering was performed using the standard hierarchical clustering function of the R statistical package. This is complete linkage hierarchical clustering, with the distances computed as the Euclidian distance between the logarithms of the fold changes in activity. The standard errors on the fold changes in activity are not incorporated into Figure \ref{fig:heatmap} or any of the related analysis. However, these standard errors are shown in Figure \ref{fig:foldchanges}; it is apparent from this figure that the errors tend to be much less than the fold changes in activity themselves.\pb
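The clustering described above can be reproduced in outline with SciPy, which implements the same complete-linkage algorithm as R's hclust. Only the method and metric are taken from the text; the data here are synthetic log fold changes with two built-in groups, standing in for broadly-increased versus broadly-decreased variants.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic log2 fold changes: rows = variants, columns = 6 substrates.
up   = rng.normal(loc=1.0,  scale=0.3, size=(5, 6))
down = rng.normal(loc=-1.0, scale=0.3, size=(5, 6))
log_fold = np.vstack([up, down])

# Complete-linkage hierarchical clustering on Euclidean distances between
# the log fold changes, as in the R analysis described in the text.
Z = linkage(log_fold, method="complete", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
```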
In Figure \ref{fig:changedistribution}, the histogram bins are logarithmically spaced so that each bin contains a $2^{0.5}$-fold range of activities. For example, the histogram bin centered at one contains all variants with between $2^{-0.25} = 0.84$ and $2^{0.25} = 1.19$ fold the parental activity, while the bin centered at 1.5 contains all variants with between $2^{0.25} = 1.19$ and $2^{0.75} = 1.68$ fold the parental activity.\pb
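The logarithmic binning can be written down explicitly; the edge values below reproduce the 0.84, 1.19 and 1.68 boundaries quoted above. The range of exponents is arbitrary.

```python
import numpy as np

# Bin centres at 2**(k/2); each bin spans a 2**0.5-fold range of activity,
# so its edges sit a factor of 2**0.25 on either side of the centre.
centers = 2.0 ** (np.arange(-4, 5) / 2.0)
edges = np.concatenate([centers * 2**-0.25, [centers[-1] * 2**0.25]])

# The bin centred on 1 runs from 2**-0.25 (about 0.84) to 2**0.25 (about 1.19),
# so a variant with exactly the parental activity lands in that bin.
counts, _ = np.histogram([1.0], bins=edges)
```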
The principal component analysis shown in Table \ref{tab:pca} was performed using the R statistical package, with inputs being the logarithms of the fold changes in activity. Since these log fold changes in activity contained no arbitrary units (they were already normalized to the parent), the data was neither scaled nor zeroed before performing the analysis. Table \ref{tab:pca} shows the composition and the percent of variance explained (the eigenvalue for that component divided by the sum of all eigenvalues) for the first two components. The remaining four components were relatively unimportant, explaining 7\%, 5\%, 4\%, and 2\% of the total variance.\pb
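A minimal version of this analysis can be written with NumPy alone. Since the log fold changes are neither scaled nor centred, an SVD of the raw matrix gives the components directly. The data below are synthetic, with a deliberate shared per-variant shift standing in for the "overall activity" component, so that one component dominates.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic log fold changes: 34 variants x 6 substrates.
X = rng.normal(scale=0.4, size=(34, 6)) + np.outer(rng.normal(size=34),
                                                   np.ones(6))

# Uncentred, unscaled PCA via SVD: rows of Vt are the components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
```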
\subsection*{Phylogenetic tree}
The phylogenetic tree shown in Figure \ref{fig:tree} is based on the number of nonsynonymous mutations the P450 variants have relative to the R1-11 neutral evolution parent, as reported in \cite{Bloom2007c}. Each of the P450s that evolved in a monomorphic population (prefix of M) is known to have diverged independently, and so is drawn on its own branch regardless of any sequence identity to other variants. The exact phylogenetic relationship of the P450s that evolved in the polymorphic population (prefix of P) is not known, so the portion of the tree for these mutants was reconstructed by maximum parsimony. The tree is based only on the nonsynonymous mutations, and all mutations are weighted equally. Full nucleotide and amino acid sequences of the P450s can be found in \cite{Bloom2007c}.
\section*{Authors' contributions}
JDB, PR, and FHA designed the study. JDB, PR, and ZL performed the experiments. JDB and PR analyzed the data. JDB and FHA wrote the paper.
\section*{Acknowledgements}
\ifthenelse{\boolean{publ}}{\small}{}
We thank Andrew Sawayama and Sabine Bastian for helpful comments. JDB was supported by a Howard Hughes Medical Institute predoctoral fellowship. ZL was supported by a summer undergraduate research fellowship from the California Institute of Technology.
{\ifthenelse{\boolean{publ}}{\footnotesize}{\small}
\bibliographystyle{bmc_article}
\section{Introduction}
Despite intensive theoretical efforts over the past decade and more,
we still do not have a quantitative understanding of QCD at large
baryon density. This is primarily due to the sign problem preventing
first-principles Monte Carlo simulations in this r\'egime.
One way of circumventing this
is to study QCD-like theories without a sign problem, and use
these to provide a benchmark for model studies and other methods which
do not suffer from the sign problem. The simplest such theory, which
shares with QCD the properties of confinement and dynamical symmetry
breaking, is 2-colour QCD (QC$_2$D).
In a series of papers \cite{Hands:2006ve,Hands:2010gd,Cotter:2012mb,Boz:2013rca}\ we have studied QC$_2$D with 2 flavours
of Wilson fermion at nonzero baryon chemical potential $\mu$ and
temperature $T$, culminating in a tentative mapping out of the phase
diagram in the $(\mu,T)$ plane \cite{Cotter:2012mb,Boz:2013rca}. Here
we will report on the determination of the phase transition lines
\cite{Boz:2013rca}\ and present new results for the gluon propagator at nonzero
$\mu$ and $T$. Updated results for the equation of state are
presented in a separate talk \cite{Cotter:2013lat}.
We use a standard Wilson gauge and fermion action augmented with a
diquark source term to lift low-lying eigenvalues in the superfluid
phase. The lattice spacing is $a=0.178(6)$fm and $m_\pi/m_\rho$=0.8,
with $am_\pi=0.645(8)$ \cite{Cotter:2012mb}. We have performed simulations at
four fixed temperatures, $T=47, 70, 94$ and 141 MeV, corresponding to
$N_\tau=24, 16, 12$ and 8 respectively, for a range of chemical
potentials $\mu a=$0.0--0.9. At $\mu a=0.35, 0.4, 0.5$ and 0.6 we
have also performed temperature scans on $16^3\times N_\tau$ lattices
with $N_\tau=$4--16. For the diquark source $j$ we have used
$ja=0.02, 0.04$ in order to allow an extrapolation to the physical
$j=0$ limit. We refer to \cite{Cotter:2012mb,Boz:2013rca}\ for further details about the action
and parameters.
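For reference, the fixed temperatures quoted above follow from $T = 1/(a N_\tau)$. The sketch below converts with $\hbar c = 197.327$~MeV\,fm; the small differences from the quoted values are within the $\pm 0.006$~fm uncertainty on the lattice spacing.

```python
hbar_c = 197.327   # MeV fm
a = 0.178          # fm, lattice spacing (quoted as 0.178(6) fm)

for n_tau, quoted in [(24, 47), (16, 70), (12, 94), (8, 141)]:
    T = hbar_c / (a * n_tau)           # temperature in MeV
    print(n_tau, round(T, 1), quoted)  # 46.2, 69.3, 92.4, 138.6 MeV
```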
\section{Superfluid to normal transition}
\begin{figure}[tb]
\includegraphics*[width=\textwidth]{diquark_proc.eps}
\caption{Diquark condensate $\braket{qq}$ as a function of temperature $T$ for
chemical potential $\mu a=0.35, 0.4, 0.5, 0.6$ (top to bottom). The
circles are data extrapolated to $j=0$ using a linear Ansatz for
$ja\leq0.04$; the shaded circles denote the results of a linear
extrapolation using $ja=0.02,0.03$ only.}
\label{fig:diquark}
\end{figure}
Figure~\ref{fig:diquark} shows the order parameter for superfluidity,
the (unrenormalised) diquark condensate $\braket{qq}$,
as a function of the temperature $T$, for $\mu a=0.35, 0.4,0.5$ and 0.6.
Also shown are the results of a linear extrapolation to $j=0$. We can
clearly observe a transition from a superfluid phase, characterised by
$\braket{qq}\neq0$, at low temperature, to a normal phase with $\braket{qq}=0$ at high
temperature, with a transition in the region $0.08\lesssim
Ta\lesssim0.12$ for all four values of $\mu$.
We have estimated the critical temperatures $T_s$
for the superfluid to normal transition by determining the inflection
points for $\braket{qq}$ at $ja=0.02$ and 0.04, and extrapolated the resulting
values to $j=0$ using a linear Ansatz. The results are shown in fig.~\ref{fig:phasediag}.
We see that $T_s$ is remarkably constant over the whole range of
$\mu$-values considered. The indications are that the transition
happens at a somewhat lower temperature at $\mu a=0.35$, but this
point is already very close to the onset from vacuum to superfluid at
$T=0$, $\mu_oa=m_\pi a/2=0.32$, suggesting that $T_s(\mu)$ rises very
rapidly from zero at $\mu=\mu_o$ before suddenly flattening off.
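The inflection-point determination and $j\to0$ extrapolation used above can be sketched as follows; the order-parameter curve and the two inflection temperatures are synthetic stand-ins, chosen only to have the right qualitative shape.

```python
import numpy as np

# Synthetic <qq>(T) with a smooth transition built in at Ta = 0.10.
Ta = np.linspace(0.06, 0.16, 41)
qq = 0.5 * (1.0 - np.tanh((Ta - 0.10) / 0.01))

# Inflection point: extremum of the finite-difference derivative.
dqq = np.gradient(qq, Ta)
Ts = Ta[np.argmin(dqq)]          # steepest descent of <qq>

# Linear extrapolation of Ts(j) to zero diquark source, using the two
# sources simulated (the Ts values per source here are hypothetical).
j = np.array([0.02, 0.04])
Ts_at_j = np.array([0.101, 0.105])
slope, Ts_extrap = np.polyfit(j, Ts_at_j, 1)   # Ts_extrap = value at j = 0
```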
\section{Deconfinement transition}
\label{sec:deconfine}
\begin{figure*}
\includegraphics*[width=\textwidth]{polyakov_Tscan_ren.eps}
\caption{The renormalised Polyakov loop $\braket{L}$ as a function of
temperature $T$ for $ja=0.04$ and $\mu a=0.35,0.4,0.5, 0.6$, with
two different renormalisation schemes: Scheme A (solid symbols) and
Scheme B (open symbols), see text for details. The solid (dashed)
lines are the derivatives of cubic spline interpolations of the data
points for Scheme A (B). The smaller, shaded symbols are results
for $ja=0.02$. The black circles and thick lines in the bottom
right panel are the $\mu=j=0$ results from \cite{Cotter:2012mb}.}
\label{fig:polyakov-allmu}
\end{figure*}
The Polyakov loop $\braket{L}$ serves as the traditional order
parameter for deconfinement in gauge theories, with $\braket{L}\neq0$
signalling the transition to a deconfined phase. Strictly speaking,
$\braket{L}$ is never zero in a theory with dynamical fermions, but it
typically increases with temperature from a very small value in a
fairly narrow region, which may be identified with the deconfinement
transition region.
Unlike the diquark condensate, the renormalisation of the Polyakov
loop depends on temperature; specifically, the relation between
the bare Polyakov loop $L_0$ and the renormalised Polyakov loop $L_R$
is given by
$L_R(T,\mu)= Z_L^{N_\tau}L_0((aN_\tau)^{-1},\mu)$.
In order to investigate the sensitivity of our results to the
renormalisation scheme, we have used two different conditions to
determine the constant $Z_L$ \cite{Boz:2013rca}, $L_R(T=T_0,\mu=0)=c$, with
$T_0=\frac{1}{4}a^{-1}$ and $c=1$ (Scheme A) or $c=0.5$ (Scheme B).
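The two schemes can be made concrete in a few lines. The bare Polyakov-loop value below is hypothetical; the point is only that the condition $L_R(T_0,0)=c$ at $T_0 = \frac{1}{4}a^{-1}$ (the $N_\tau = 4$ lattice) fixes $Z_L$, which then renormalises the loop at any other $N_\tau$.

```python
N_TAU_0 = 4        # N_tau corresponding to T0 = (1/4) a^{-1}
L0_AT_T0 = 0.08    # hypothetical bare Polyakov loop at (T0, mu = 0)

def z_l(c):
    """Z_L fixed by the renormalisation condition L_R(T0, 0) = c."""
    return (c / L0_AT_T0) ** (1.0 / N_TAU_0)

def l_renorm(l0_bare, n_tau, c):
    """Renormalised loop L_R = Z_L^{N_tau} L_0."""
    return z_l(c) ** n_tau * l0_bare

z_a, z_b = z_l(1.0), z_l(0.5)   # Scheme A (c = 1) and Scheme B (c = 0.5)
```

Note that $z_b < z_a$, consistent with the smaller $Z_L$, and hence smaller statistical noise, reported for Scheme B below.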
Figure~\ref{fig:polyakov-allmu} shows $\braket{L}$ evaluated in both
schemes, as a function of temperature. The Scheme B data have been
multiplied by 2 to ease the comparison with the Scheme A data. Also
shown are cubic spline interpolations of the data and the derivative
of these interpolations, with solid lines corresponding to Scheme A
and dotted lines to Scheme B.
At all $\mu$, we see a transition from a low-temperature confined
region to a high-temperature deconfined region. In contrast to the
diquark condensate, we see a clear, systematic shift in the transition
region towards lower temperatures as the chemical potential increases.
For all four $\mu$-values, the Polyakov loop shows a nearly linear
rise as a function of temperature in a broad region, suggesting that
the transition is a smooth crossover rather than a true phase
transition. This is reinforced by the difference between Scheme A and
Scheme B, with the crossover occurring at higher temperatures in Scheme
B. At $\mu=0$, the difference between the two schemes is small, but
increases with increasing $\mu$, suggesting a broadening of the
crossover.
Because of the smaller value of $Z_L$, our results for Scheme B are
considerably less noisy than those for Scheme A. For this reason, we
choose to define the crossover region to be centred on the inflection
point from Scheme B, with a width chosen such that it also encompasses
the onset of the linear region from Scheme A.
The transition region taken from the $ja=0.04$ data
is shown in fig.~\ref{fig:phasediag}. From
Fig.~\ref{fig:polyakov-allmu} we see that at low $T$, the value of $\braket{L}$
increases as $j$ is reduced, and at $\mu a=0.6$, the
crossover region will most likely move to smaller $T$ in the $j\to0$
limit. However, we do not have sufficient statistics for $ja=0.02$ at
low $T$ to make any quantitative statement about this.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{phasediag.eps}
\end{center}
\caption{Phase diagram of QC$_2$D with $m_\pi/m_\rho=0.8$. The black
circles denote the superfluid to normal phase transition; the green
band the deconfinement crossover. The blue diamonds are the
estimates for the deconfinement line from \cite{Cotter:2012mb}.}
\label{fig:phasediag}
\end{figure}
\section{Gluon propagator}
\label{sec:gluon}
\begin{figure*}
\includegraphics*[width=\textwidth]{gluon_T_mu.eps}
\caption{The zeroth (top) and first (bottom) Matsubara mode of the
magnetic (left) and electric (right) gluon propagator as a
function of chemical potential $\mu$ for selected values of the
spatial momentum $q_s=|\vec{q}|$, and different temperatures.}
\label{fig:gluon-compare}
\end{figure*}
One of the main motivations for studying dense QC$_2$D on the lattice
is to provide constraints on approaches which do not suffer from the
sign problem. The gluon propagator provides a key input for several
of these approaches, in particular functional studies using the
functional renormalisation group or Dyson--Schwinger equations. These
are most often carried out in the Landau gauge.
In Landau gauge only the
transverse part of the vacuum propagator is non-zero. However, the
external parameters break manifest Lorentz invariance, hence the gluon
propagator $D$ must be decomposed into chromoelectric and
chromomagnetic modes, $D_E$ and $D_M$, respectively,
\begin{equation}
D_{\mu\nu}(q_0,\vec{q}) = P_{\mu\nu}^{M} D_M(\vec{q}^2,q_0^2) +
P_{\mu\nu}^{E} D_E(\vec{q}^2,q_0^2)\,.
\label{eq:gluon_decomposition}
\end{equation}
The projectors on the longitudinal and
transverse spatial subspaces, $P_{\mu \nu}^{E}$ and $P_{\mu\nu}^{M}$,
are defined by
\begin{align}
P_{\mu \nu}^{M} (\vec{q\,},q_0)
&= \left(1-\delta_{0\mu} \right)\left(1-\delta_{0\nu} \right)
\left(\delta_{\mu\nu} -\frac{q_\mu q_\nu}{\vec{q\,}^2} \right)\,,\nonumber\\
P_{\mu \nu}^{E}(\vec{q\,},q_0)
&= \left(\delta_{\mu\nu}-\frac{q_\mu q_\nu}{q^2} \right)
-P_{\mu \nu}^{M} (\vec{q\,},q_0)\,.
\label{eq:projectors}
\end{align}
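The projector algebra implied by eq.~(\ref{eq:projectors}) is easy to verify numerically: both tensors are idempotent, mutually orthogonal, and transverse to the four-momentum. The momentum components below are arbitrary test values.

```python
import numpy as np

q = np.array([0.9, 0.3, 0.5, 0.7])   # Euclidean (q0, qx, qy, qz), arbitrary
q2 = q @ q
qs2 = q[1:] @ q[1:]                  # |qvec|^2

d = np.eye(4)
PM = np.zeros((4, 4))                # magnetic (spatially transverse) projector
for mu in range(1, 4):
    for nu in range(1, 4):
        PM[mu, nu] = d[mu, nu] - q[mu] * q[nu] / qs2
PE = (d - np.outer(q, q) / q2) - PM  # electric projector

assert np.allclose(PM @ PM, PM) and np.allclose(PE @ PE, PE)  # idempotent
assert np.allclose(PM @ PE, 0)                                # orthogonal
assert np.allclose(PM @ q, 0) and np.allclose(PE @ q, 0)      # transverse
```

The traces confirm that $P^M$ projects onto two magnetic polarisations and $P^E$ onto a single electric one.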
In this section we extend the results presented in \cite{Boz:2013rca}\ to a wider
area of the $(\mu,T)$ plane.
We have fixed our gauge configurations to the minimal Landau gauge
using the standard overrelaxation algorithm. The Landau gauge
condition has been imposed with a precision $|\partial_\mu
A_\mu|<10^{-10}$.
In figure~\ref{fig:gluon-compare} we show the two lowest Matsubara
modes for selected spatial momenta as a function of chemical potential
for $N_\tau=24,16,12,8$. The results shown are for $ja=0.04$, but we
have found no significant difference for $ja=0.02$. We have
investigated the volume dependence
on the $N_\tau=24$ lattices and found it to be very mild \cite{Boz:2013rca}.
At the three lower temperatures, both the electric and magnetic form
factors are roughly independent of $\mu$ up to $\mu a\approx0.5$, and
become suppressed for large $\mu$. This changes dramatically at the
highest temperature shown ($N_\tau=8$), where for the lowest (static)
Matsubara mode the electric form factor becomes strongly suppressed
with increasing $\mu$, while the magnetic form factor for small
spatial momenta has a clear enhancement at intermediate $\mu$ and an
enhancement at large $\mu$ for larger spatial momenta. On closer
inspection it is possible to see the onset of this behaviour also for
$N_\tau=12$. No qualitative differences are seen between the electric
and magnetic form factors for the first nonzero Matsubara mode.
\begin{figure}[tbh]
\begin{center}
\includegraphics*[width=\textwidth]{gluonprop_T_mu0500.eps}
\caption{Thermal behaviour of the zeroth Matsubara mode of the
magnetic (left) and electric (right) propagators at $\mu a=0.5$ and
$ja=0.04$ on $16^3\times N_\tau$ lattices, for selected spatial
momenta $q_s=|\vec{q}|$.}
\label{fig:zeromodes_vac}
\end{center}
\end{figure}
We now turn to the thermal behaviour of the gluon propagator at fixed
chemical potential. Fig.\ \ref{fig:zeromodes_vac} shows the zeroth
Matsubara modes of the propagators for $\mu a=0.5$ and $ja=0.04$ on
$16^3\times N_\tau$ lattices as a function of temperature. The
magnetic component has a very mild enhancement at intermediate
temperatures and a slight suppression at very high $T$. In contrast,
the electric propagator shows a strong suppression with increasing
temperature. We note that the deconfinement crossover for this value
of $\mu$ happens for $0.08\lesssim Ta\lesssim0.20$, and that this
coincides roughly with the region where the magnetic propagator is
enhanced. In contrast to early studies in pure Yang--Mills theory,
but in line with a recent study in QCD with twisted-mass Wilson
fermions \cite{Aouane:2012bk}, there is no enhancement in the electric
mode in the transition region.
\section{Summary and outlook}
We have studied the superfluid and deconfinement transition lines in
QC$_2$D in the $(\mu,T)$ plane. We find that the superfluid
transition temperature is remarkably insensitive to $\mu$ for the
quark mass we are using, while the deconfinement temperature is
clearly decreasing as $\mu$ increases.
At low temperature, the low-momentum modes of both the electric and
magnetic Landau-gauge gluon propagator become suppressed relative to
the (already infrared suppressed) vacuum propagator at large $\mu$,
with no qualitative differences between the two form factors found.
At high temperature, the static electric and magnetic propagators are
found to exhibit very different behaviours, with a strong suppression
of the electric form factor and an enhancement of the magnetic form
factor at intermediate $\mu$.
We are in the process of extending these studies to smaller quark
masses as well as finer lattice spacings. In a forthcoming
publication we will also study the response of the quark propagator to
$\mu$ and $T$. This will enable us to directly confront the results
from functional methods for these quantities.
\section*{Acknowledgments}
This work has been
carried out with the support of Science Foundation Ireland grant
11-RFP.1-PHY3193. We acknowledge the use of the computational
resources provided by the UKQCD collaboration and the
DiRAC Facility jointly funded by STFC, the Large Facilities Capital
Fund of BIS and Swansea University. We thank the DEISA Consortium
(www.deisa.eu), funded through the EU FP7 project RI-222919, for
support within the DEISA Extreme Computing Initiative. The simulation
code was adapted with the help of Edinburgh Parallel Computing Centre
funded by a Software Development Grant from EPSRC.
We thank Pietro Giudice, Simon
Hands and Jan Pawlowski for stimulating discussions and advice.
\section{Introduction}
AdS/CFT has passed many serious tests and does an excellent job of describing a four-dimensional strongly coupled conformal field theory. Shortly after its inception, it was proposed that this correspondence could be used to describe QCD physics. Specifically, by deforming the string theory, the conformal gauge theory would develop confinement behavior.~\cite{Maldacena1998,Polyakov1998} One of the early successes of AdS/CFT was that the geometric scaling of the AdS theory could soften the historically troublesome energy dependence of high energy string scattering.~\cite{Polchinski2002} This in turn indicated that the gauge gravity correspondence might be able to be used for physical processes, like deep inelastic scattering (DIS), where strongly coupled physics plays an important role.~\cite{Polchinski2003} In this holographic picture, glueballs could be described~\cite{Brower2000b} and the AdS Pomeron was unambiguously identified as the Regge trajectory of the graviton.~\cite{Brower2007} The strongly coupled dynamics of this Pomeron and its eikonalization were identified~\cite{Brower2009a}, and then these techniques were extended to the AdS Odderon.~\cite{Brower2009} Pomeron exchange in AdS was then applied to fit small-x HERA data for DIS, DVCS and vector meson production.~\cite{Brower2010,Costa:2012fw,Costa:2013uia}
In this paper, we consider the graviton fluctuations of type IIB string theory in a compactified AdS$_5\times$S$^5$ background, which is geometrically deformed with a soft (gradual) confinement.
\begin{equation}
ds^2= \frac{R^2}{z^2}\left[dz^2+dx\cdot dx\right]+R^2d\Omega_5 \rightarrow e^{2A(z)}\left[dz^2+dx\cdot dx\right]+R^2d\Omega_5
\end{equation}
\noindent Here $R$ is the radius of both spaces, $x$ is the usual 4-dimensional Minkowski coordinate, and $z$ is the AdS radial direction. For a \emph{purely geometric} softwall confinement, the scaling function can be identified as $A(z)=\Lambda^2z^2+\ln(R/z)$, where $\Lambda$ sets the confinement scale.
We examined a DIS process, where a lepton scatters from a proton via the exchange of a virtual photon. In terms of the virtual Compton subprocess, we specifically examined the Regge limit: $s\approx Q^2/x$ large and $Q^2$ fixed. In this so-called small-x limit, confinement effects will play a particularly important role. We will obtain physical results via the optical theorem, where the forward limit of the virtual photon scattering will tell us about the total cross section, $\sigma_{total}=\frac{1}{s}Im\left[\mathcal{A}(s,t=0)\right]\sim \frac{1}{s}Im\left[\chi(s,t=0)\right]$. The total cross section can then be used to fit one of the hadronic structure functions, F$_2$\footnote{The hadronic tensor, the hadronic contribution to a scattering amplitude, can be written in terms of several structure functions. In certain kinematic regimes, the structure functions can be written in terms of each other, and thus a determination of the F$_2$ behavior determines the hadronic behavior of the scattering process.}, from the combined HERA and ZEUS experiments. Explicitly,
\begin{equation}
F_2(x,Q^2) = \frac{Q^2}{4\pi^2\alpha_{em}}\left(\sigma_{trans}+\sigma_{long}\right).
\end{equation}
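As a purely numerical illustration (not part of the original fitting code), the relation between F$_2$ and the photoabsorption cross sections can be encoded directly; the input values in the example are placeholders and the units of $Q^2$ and the cross sections must be chosen consistently:

```python
import math

ALPHA_EM = 1.0 / 137.036  # fine-structure constant (approximate value)

def f2_from_cross_sections(q2, sigma_trans, sigma_long):
    """Structure function F2 from the transverse and longitudinal
    photoabsorption cross sections, following the relation in the text."""
    return q2 / (4.0 * math.pi**2 * ALPHA_EM) * (sigma_trans + sigma_long)
```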
\section{SoftWall Model}
The soft wall model was originally proposed in~\cite{Karch2006a}. It showed what type of AdS confinement would lead to linear meson trajectories. Several dynamical softwall toy models, where the confinement is due to a non-trivial dilaton field, have subsequently been described.~\cite{Batell2008b} There has even been some success in using the softwall model to fit QCD mesons.~\cite{Katz2006} \footnote{These models involve a dynamical dilaton and tachyon field, but there is still debate about the signs of some parameters.~\cite{Karch2011,Teramond2010c} However, for our purpose, we only need to consider graviton fluctuations to describe the AdS Pomeron. For the dynamical soft wall models, the graviton does \emph{not} couple to the dilaton field--and thus a purely geometric confinement model is sufficient to consider.} Significant effort has been put forth to develop standard model and QCD features in these softwall models.~\cite{Batell2008c,Csaki2007,Erlich2005}
In the softwall model, the graviton dynamics involve a spin-dependent mass-like term $\alpha^2(j)=2\sqrt{\lambda}(j-j_0)$. The Pomeron propagator can take several forms: for quantized momentum transfer, $t\rightarrow t_n$, the solution behaves like Laguerre polynomials: $\chi\sim L^{\alpha}_n(2\Lambda^2z^2)$. More generally, for a continuous $t$ spectrum, the solution is a combination of Whittaker functions
\begin{equation}
\chi_P(j,z,z',t)=\frac{M_{\kappa,\mu}(z_<)W_{\kappa,\mu}(z_>)}{W(M_{\kappa,\mu},W_{\kappa,\mu})}
\end{equation}
for $\kappa=\kappa(t)$ and $\mu=\mu(j)$. $\Lambda$ controls the strength of the soft wall and in the limit $\Lambda \rightarrow 0$ one recovers the conformal solution\footnote{This has a similar behavior to the weak coupling BFKL solution, where Im$(\chi(p_{\perp},p_{\perp}',s))\sim\frac{s^{j_0}}{\sqrt{\pi \mathcal{D}\ln(s)}}\exp(-(\ln (p_{\perp}')-\ln(p_{\perp}))^2/\mathcal{D}\ln(s))$.}
\begin{equation}
Im(\chi_P^{conformal}(t=0))=\frac{g_0^2}{16}\sqrt{\frac{\rho^3}{\pi}}(zz')\frac{e^{(1-\rho)\tau}}{\tau^{1/2}}\exp\left(\frac{-(\ln(z)-\ln(z'))^2}{\rho\tau}\right).
\end{equation}
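For orientation, the conformal kernel above is simple to evaluate numerically. The sketch below uses illustrative parameter values of the order of the fitted values in table \ref{tab:fit}; it is not the code used for the fits:

```python
import math

def im_chi_conformal(z, zp, tau, g0_sq=110.0, rho=0.77):
    """Imaginary part of the conformal Pomeron kernel at t = 0.

    z, zp : AdS radial positions of the two vertices
    tau   : rapidity-like variable, tau ~ log(s)
    g0_sq, rho : overall coupling and intercept parameter; the defaults
    are illustrative numbers of the order of the fitted values.
    """
    prefactor = (g0_sq / 16.0) * math.sqrt(rho**3 / math.pi) * z * zp
    diffusion = math.exp(-(math.log(z) - math.log(zp))**2 / (rho * tau))
    return prefactor * math.exp((1.0 - rho) * tau) / math.sqrt(tau) * diffusion
```

The kernel is symmetric under $z \leftrightarrow z'$ and exhibits the diffusive spreading in $\ln z$ characteristic of the strong-coupling Pomeron.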
If we look at the energy dependence of the Pomeron propagator, we can see a softened behavior in the Regge limit. In the forward limit, $t=0$, the conformal amplitude scales as $-s^{\alpha_0}\log^{-1/2}(s)$, but this behavior is softened to $-s^{\alpha_0}\log^{-3/2}(s)$ in the hardwall and softwall models. This corresponds to the softening of a j-plane singularity from $1/\sqrt{j-j_0}\rightarrow\sqrt{j-j_0}$.
\section{Numerics}
The data examined comes from the combined H1 and ZEUS experiments at HERA.~\cite{Aaron2010a} A fit was done with the same methods used previously for the conformal and hardwall models in~\cite{Brower2010}, making the results directly comparable.
\begin{figure}[ht]
\begin{minipage}{0.5\linewidth}
\centerline{\includegraphics[width=0.7\linewidth]{chiconformal.pdf}}
\end{minipage}
\hfill
\begin{minipage}{0.5\linewidth}
\centerline{\includegraphics[width=0.7\linewidth]{chihardwall.pdf}}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\begin{minipage}{0.5\linewidth}
\centerline{\includegraphics[width=0.7\linewidth]{chisoftwall.pdf}}
\end{minipage}
\hfill
\begin{minipage}{0.5\linewidth}
\centerline{\includegraphics[width=0.7\linewidth]{softwallF2resize.pdf}}
\end{minipage}
\hfill
\caption[]{Contour plots of Im($\chi$) for the conformal (top left), hardwall (top right), and softwall (bottom left) models. The softwall was also used to fit the F$_2$ proton structure function (bottom right).}
\label{fig:chi}
\end{figure}
\begin{table}[hb]\footnotesize
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Model & $\rho$ & $g_0^2$ & $z_0$ (GeV$^{-1}$) & Q' (GeV)& $\chi^2_{dof}$ \\ \hline
conformal & $0.774^*\pm$0.0103 & $110.13^*\pm1.93$ & -- & $0.5575^*\pm0.0432$ & 11.7 $(0.75^*)$ \\ \hline
hard wall & $0.7792\pm0.0034$ & $103.14\pm1.68$ & $4.96\pm0.14$ & $0.4333\pm0.0243$ & 1.07 $(0.69^*)$ \\ \hline
softwall & 0.7774 & 108.3616 & 8.1798 & 0.4014 & 1.1035 \\ \hline
softwall*& 0.6741 & 154.6671 & 8.3271 & 0.4467 & 1.1245\\ \hline
\end{tabular}
\caption[]{ Comparison of the best fit (including a $\chi$ sieve) values for the conformal, hard wall, and soft wall AdS models. The final row includes the soft wall with improved intercept.}
\label{tab:fit}
\end{table}
The softwall* row indicates that the fit was run using a Pomeron intercept (which determines $\lambda$) up to order $\mathcal{O}(\lambda^{-5/2})$.~\cite{Brower2013} This quantity has been calculated to high order using integrability and Regge techniques in $\mathcal{N}=4$ SYM~\cite{Basso2011,Gromov2014b,Kotikov2013,Costa2012a}.
\section{Conclusions}
The softwall model continues to fit the known DIS data extremely well. The fits all had similar success to that of the previously investigated hardwall model. In both cases, the models lead to a far better fit than the conformal case, indicating that at the considered $x$ and $Q$ scales confinement plays an important role.
There are still things left to investigate for the softwall model. The propagator in general can be solved to higher order in $j$. This would in principle improve the accuracy, but it requires doing a difficult string calculation. Also, the details of describing mesons and other composite particles are still not complete. Immediately, however, the current softwall model can still be applied to various situations. In the limit $t\rightarrow10\Lambda^2$ the equations of motion greatly simplify and the model reduces to a $1+1$ dimensional conformal model where CFT techniques might be able to improve understanding.~\cite{Alfaro1976}
\section*{Acknowledgements}
The works of C-I.T. and T.R. were funded by DE-SC0010010-Task A. The work of M.D. was partially funded by grants PTDC/FIS/099293/2008 and CERN/FP/116358/2010 and supported by the FCT/Marie Curie Welcome II program. The authors would like to thank Miguel Costa for many beneficial talks on this and related works.
\section*{References}
\section{Introduction}
The physical process known as double-diffusive convection was first described in the 1950s by \citet{stommel1956}, who observed the effect in an experiment. Shortly afterwards, it was also found in astrophysics when
the first detailed stellar models were computed and \citet{schwarzschild_haerm_1958} found irregularities in their calculations concerning whether or not a zone with a gradient in molecular weight was
stable according to the Ledoux criterion or the Schwarzschild criterion.
But even more than fifty years after its discovery, the field is still actively researched, for two reasons: on the one hand, it long lacked the immediate practical incentives that have accelerated the
development of other branches of fluid mechanics. On the other hand, numerical simulations were not possible for a long time because of the considerable computational expense they demand.
For a summary of the historical development of the area see the paper by \citet{huppert_1981}, for a recent physical review about semiconvection see \citet{zaussinger_kupka_muthsam_2012}. \\
Double-diffusive convection occurs in situations where the effect of a thermal gradient on stability and the effect of a molecular weight gradient on stability compete with each other: if the temperature gradient stabilises the
system and the molecular weight gradient destabilises it, thermohaline convection can occur. Its distinguishing property is the appearance of flow structures known as salt-fingers (thus also the name salt-fingering convection).
In the opposite case (temperature gradient unstable and molecular weight gradient stable) layering convection/semiconvection can occur.
Note that we used the term ``can occur''. Whether thermohaline/layering convection really does occur
depends on the ratio of the molecular weight buoyancy frequency to the thermal buoyancy frequency, the so called stability ratio $R_\rho=-N^2_{\mu}/N^2_{T}$. In the incompressible case it is equivalent to the ratio of the Rayleigh
numbers associated with the thermal instability and the instability caused by the molecular weight. In this paper, our focus will be on layering convection. Situations where this process occurs on earth include the convection in
the arctic ocean where cool and fresh melt water from above leads to a destabilising negative temperature gradient and a stabilising negative molecular gradient in salt \citep{turner2010}. Other examples are East-African rift
lakes which are heated from below by volcanic activity. This leads to a temperature gradient (unstable) and
causes dissolved gases like methane and carbon dioxide to be introduced into the system, thus causing a stabilising molecular weight gradient \citep{Schmid2010225}.
But not only systems on earth are prone to double-diffusive convection: it can also occur in astrophysical systems like in icy satellites, giant planets and massive stars.
Very recently, \citet{ORourke2014} have investigated the effects of a stabilising compositional gradient and the resulting double-diffusive convection in Titan.
The role of semiconvection for the interior of giant planets
has been discussed by \citet{stevenson1982a} and recently by \citet{chabrier2007} who suggested that it might be responsible for
the radius anomalies of some hot Jupiters. This thought is further developed by \citet{Leconte2012}; they investigated the effect of semiconvection on the interior structure of planets and showed that it
could explain the luminosity anomaly of Saturn \citep{Leconte2013}.
They also point out: ``Determining the solute transport properties in the regime of
layered convection more precisely, however, will be central to evolutionary calculations. 3D hydrodynamical simulations in a realistic parameter range are thus strongly needed.'' \citep{Leconte2012}.
However, numerical simulations of double-diffusive systems in a realistic astrophysical parameter range pose a serious challenge (and are, in fact, still impossible in the stellar regime with today's computers)
because of the huge spread of length and time scales, of which the smallest length scale --- the size of the diffusive boundary layer --- needs to be resolved. For example, the ratio of the diffusivities of temperature
and solute (the Lewis number) for the plasma
in the interior of a semiconvective region of a star is $ \Le \approx 10^{-9}$ \citep{zaussinger_diss}. This would require an impossible spatial resolution if one were to attempt a DNS of such a zone \citep[also][]{zaussinger_diss}.
While simulations are nowhere near the realistic parameter range for stellar astrophysical conditions yet, the parameter regime of giant planets has become feasible with today's computers since their Prandtl and Lewis numbers are
much more moderate: the Prandtl number ranges from $\Pran = 10^{-2}$ to $1$, the Lewis number is about $\Le = 0.01$ \citep{chabrier2007}.
For idealised microphysics, there are a number of simulations in two dimensions \citep[e.g.][]{zaussinger_scn_2013} and in three dimensions \citep[e.g.][]{wood_2013} in this parameter regime.
Recently, a simulation in a realistic parameter range for the
Atlantic Ocean that correctly reproduces measurements has been conducted by \citet{Flanagan20132466}.
However, all of the mentioned studies have neglected the effect of rotation on the development of double-diffusive convection.
While this may be justifiable in the case of thin layers as they are occurring in the Arctic ocean (layer thickness 1 to 5 m, see \citealt{Timmermanns2008}) or in lake Kivu (average thickness of the mixed layers 0.48 m,
see \citealt{Schmid2010225}),
it might not be negligible for large layers that could be forming in global convection zones on rapidly rotating giant planets and stars.
It might even prove to be essential when trying to determine whether layered convection is indeed occurring in giant planets and stars and what its precise
influences on the transport properties are. Our work is a first step
in the direction of investigating the effects of rotation on semiconvective layers. We note that while \citet{Net2012} did study thermosolutal
convection in rotating spherical shells, they investigated a different parameter regime than the one where layers are expected to form
so their work gives us no lead as to how semiconvective \textit{layers} are influenced by rotation.
We want to give a remark on nomenclature here:
in oceans the molecular weight gradient is caused by dissolved salt. That is why salinity
gradient is another common term for the molecular weight gradient, particularly in oceanography. We will use the term salinity in this paper as well because it is handier than ``molecular weight of the second species''. We
assume salinity to be the concentration of the solute, no matter what exactly the solute is.
The publication is structured as follows: in chapter \ref{sec:model_description} we present the physical model and the underlying equations. We introduce the governing dimensionless numbers and the boundary conditions.
In chapter \ref{sec:numerical_implementation} we discuss the numerical setup. In chapter \ref{sec:results} we present the results of our simulations. First, we show the results for a run with
one set of parameters without rotation to have a reference framework to which we can compare the following runs (chapter \ref{sec:nonrotating}). Next, we present the results of the simulations with rotation
(chapter \ref{sec:rotation}) and highlight some differences before investigating the influence of a change of the Prandtl number $\Pran$ and the density ratio $R_\rho$ in chapter \ref{sec:modifying_pr_and_rrho}.
This is followed by a discussion in chapter \ref{sec:discussion} and conclusions in chapter \ref{sec:conclusion}.
\section{Model description}\label{sec:model_description}
\subsection{The governing equations}
Two concentric spherical shells are maintained at different, constant temperatures:
a hot inner sphere (radius $R_1$, temperature $T_1$) and a cool outer sphere (radius $R_2$, temperature $T_2$). The axis of rotation is taken to be the $z$-axis,
the rate of rotation is constant and parallel to the z-direction: $\boldsymbol{\Omega} = \Omega \boldsymbol{e_z}$. Gravity operates inwards in radial direction: $\boldsymbol{g}= - g \boldsymbol{e_r}$.
The main simulation was run over a simulation time of almost one full
thermal diffusion time scale. We studied the effects of an increase of the rate of rotation on the temporal evolution of a double-diffusive state.
For constant viscosities and diffusivities, the Navier--Stokes equations in the Boussinesq form including rotation and conservation of solute, read
\begin{equation}
\bnabla \bcdot \boldsymbol{u} = 0,
\label{eq:ns1}
\end{equation}
\begin{equation}
\left( \frac{\partial \boldsymbol{u} }{\partial t} \right) + (\boldsymbol{u} \bcdot \bnabla) \boldsymbol{u} = - \frac{\bnabla p}{\rho_0} + \nu \nabla^{2} \boldsymbol{u} + \frac{\rho}{\rho_0} \boldsymbol{g}
- 2 \boldsymbol{\Omega} \times \boldsymbol{u} - \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \boldsymbol{r}),
\label{eq:ns2}
\end{equation}
\begin{equation}
\frac{\partial T}{\partial t} + (\boldsymbol{u} \bcdot \bnabla) T = \kappa_T \nabla^{2} T,
\label{eq:ns3}
\end{equation}
\begin{equation}
\frac{\partial S}{\partial t} + (\boldsymbol{u} \bcdot \bnabla) S = \kappa_S \nabla^{2} S,
\label{eq:ns4}
\end{equation}
with $\rho = \rho_0 [1 - \alpha (T-T_0) + \beta ( S- S_0)] $.\\
$\boldsymbol{u}$ is the velocity of the flow, $p$ the pressure, $\rho$ the density,
$\rho_0$ a reference density, $\nu$ the kinematic viscosity, $\boldsymbol{\Omega}$ the angular velocity, $\boldsymbol{r}$ the position vector, $T$ the temperature, $T_0$ the reference temperature where $\rho = \rho_0$,
$S$ the salinity, $S_0$ the reference salinity where $\rho = \rho_0$, $\kappa_T$ the thermal diffusivity and $\kappa_S$
the molecular diffusivity. $\alpha$ is the thermal expansion
coefficient $-\rho_0^{-1} \, (\partial \rho/\partial T)_S$, $\beta$ is the saline expansion coefficient $\rho_0^{-1} \, (\partial \rho / \partial S)_T$.
With a later application in astrophysics in mind, we concentrated on systems where the centrifugal force is assumed much smaller than the gravitational force.
Hence, the term describing centrifugal forces $(- \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \boldsymbol{r}))$
will be neglected.
The equations are nondimensionalized with the scales
\begin{eqnarray}
r &=& L r^*, \quad T - T_0 = \Delta T T^*, \quad S - S_0 = \Delta S S^*, \quad \boldsymbol{u} = \frac{\kappa_T}{L} \boldsymbol{u^*}, \quad t = \frac{L^2}{\kappa_T} t^*.
\label{eq:nondim}
\end{eqnarray}
$L=R_2 - R_1$ is the difference between outer and inner radius, $\Delta T$ and $\Delta S$ are the differences of temperature and salinity between outer and inner radius. The time scale used is the thermal diffusion time scale.
Inserting (\ref{eq:nondim}) into (\ref{eq:ns1}) -- (\ref{eq:ns4}) and dropping the asterisks leads to the dimensionless form of the equations:
\begin{equation}
\bnabla \bcdot \boldsymbol{u} = 0,
\end{equation}
\begin{equation}
\Pran^{-1} \left[ \left( \frac{\partial \boldsymbol{u} }{\partial t} \right) + (\boldsymbol{u} \bcdot \bnabla) \boldsymbol{u} \right] = - \bnabla p_{\mathrm{eff}} + \nabla^{2} \boldsymbol{u} +
Ra_T T \boldsymbol{e_r} - Ra_S S \boldsymbol{e_r} - \sqrt{\Ta} \boldsymbol{e_z} \times \boldsymbol{u},
\end{equation}
\begin{equation}
\frac{\partial T}{\partial t} + (\boldsymbol{u} \bcdot \bnabla) T = \nabla^2 T,
\end{equation}
\begin{equation}
\frac{\partial S}{\partial t} + (\boldsymbol{u} \bcdot \bnabla) S = \Le \, \nabla^2 S,
\end{equation}
where we introduced the dimensionless numbers
\begin{eqnarray*}
\Pran &=& \frac{\nu}{\kappa_T}, \quad \Le = \frac{\kappa_S}{\kappa_T}, \quad Ra_T = \frac{\alpha L^3 \Delta T g }{ \kappa_T \nu}, \quad Ra_S = \frac{\beta L^3 \Delta S g }{ \kappa_T \nu},
\quad \Ta=\frac{4 \Omega^2 L^4 }{ \nu^2}.
\end{eqnarray*}
$\Pran$ is the Prandtl number, $\Le$ is the Lewis number, $Ra_{T/S}$ are thermal and saline Rayleigh numbers, respectively, and $\Ta$ is the Taylor number.
$Ra_T$ and $Ra_S$ are related to each other by the stability parameter $R_\rho=Ra_S/Ra_T$. Note that $t$ now stands for the time in thermal diffusion time scales, so that $t=1$ means that one thermal
diffusion time scale has passed. We also introduced the effective pressure $p_\mathrm{eff}$, whose gradient is the gradient of the scaled original pressure
plus a constant term. This can be written in this form because the constant term will vanish in the course of solving the equations.
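For concreteness, the definitions of the control parameters above can be collected in a short routine; all input values in the example are placeholders, and only the defining relations are taken from the text:

```python
def dimensionless_numbers(nu, kappa_T, kappa_S, alpha, beta, dT, dS, g, L, Omega):
    """Control parameters of the double-diffusive problem as defined in the
    text: Prandtl, Lewis, thermal/saline Rayleigh and Taylor numbers, plus
    the stability ratio R_rho = Ra_S / Ra_T."""
    Pr = nu / kappa_T                               # Prandtl number
    Le = kappa_S / kappa_T                          # Lewis number
    Ra_T = alpha * L**3 * dT * g / (kappa_T * nu)   # thermal Rayleigh number
    Ra_S = beta * L**3 * dS * g / (kappa_T * nu)    # saline Rayleigh number
    Ta = 4.0 * Omega**2 * L**4 / nu**2              # Taylor number
    return {"Pr": Pr, "Le": Le, "Ra_T": Ra_T, "Ra_S": Ra_S,
            "Ta": Ta, "R_rho": Ra_S / Ra_T}
```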
\subsection{Boundary and initial conditions, parameter regime}
The idea behind our setup is to observe a growing double-diffusive layer and to investigate how it is influenced by rotation. To achieve that, we chose the following boundary and initial conditions.
\subsubsection{Boundary conditions}
We applied no-slip boundary conditions which read
\[ \boldsymbol{u}(R_1) = 0, \quad \boldsymbol{u}(R_2) = 0, \quad T(R_1) = 1, \quad T(R_2) = 0, \quad S(R_1) = 1, \quad S(R_2) = 0. \]
These are reasonable boundary conditions because we assume our spherical shell to be one layer of a so-called double-diffusive stack. The appropriateness of these boundary conditions is explained in
chapter 3.2 of \citet{zaussinger_scn_2013}.
\subsubsection{Initial conditions}
The same assumption (taking the shell to be one layer of many) demands a step-like initial distribution of temperature and salt. However, since the purpose of this work is to investigate the effects of
rotation on a developed double-diffusive layer it was imperative for a layer to form within our simulated spherical shell. We ran some simulations with different initial conditions: step-like initial distributions
of both temperature and salinity and a step-like distribution of one quantity and a linear distribution of the other one. Our goal was to test the influence of these different initial conditions on the thermal Nusselt number.
The thermal and saline Nusselt numbers are measures of the convective heat and salt flux at the boundaries of the system, respectively. For incompressible flows (which we are looking at)
they are defined as the ratio of the total heat or salt flux and the heat or salt flux that would be transported by conduction alone:
\begin{equation}
\Nut = \frac{F_{\mathrm{T}}}{F_{\mathrm{cT}} } \qquad \mathrm{and} \qquad \Nus = \frac{F_{\mathrm{S}}}{F_{\mathrm{cS}}}
\end{equation}
with $F_{\mathrm{T}}$ being the total heat flux, $F_{\mathrm{cT}}$ the flux
that would be transported if the temperature profile was linear between the bottom and the top, $F_{\mathrm{S}}$ the measured saline flux and $F_{\mathrm{cS}}$ the saline flux that would be transported if the concentration
profile was linear between the bottom and the top.
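As a plane-layer caricature of this definition (the spherical-shell evaluation used for the figures involves additional geometry factors), the thermal Nusselt number can be estimated from a horizontally averaged temperature profile as the ratio of the wall gradient to the conductive gradient:

```python
def nusselt_from_profile(T, z):
    """Estimate Nu from a mean temperature profile T on a grid z.

    The conductive reference corresponds to a linear profile between the
    boundaries, so Nu = 1 is recovered for pure conduction.  This is a
    plane-layer sketch, not the spherical-shell diagnostic of the paper.
    """
    grad_wall = (T[1] - T[0]) / (z[1] - z[0])      # gradient at the bottom wall
    grad_cond = (T[-1] - T[0]) / (z[-1] - z[0])    # linear (conductive) gradient
    return grad_wall / grad_cond
```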
The result is shown in figure \ref{fig:ic_vgl}. %
\begin{figure}
\begin{minipage}[]{0.49\textwidth}
\includegraphics[width=\textwidth]{./Dry_initial_conditions.png}
\end{minipage}
\begin{minipage}[]{0.49\textwidth}
\includegraphics[width=\textwidth]{./Initial_conditions_vgl_therm_nusselt_taylor0.png}
\end{minipage}
\caption{Left-hand side: step-like and linear initial distributions of temperature or salinity as discussed in the text.
Right-hand side: average thermal Nusselt number vs. simulation time in thermal diffusion time scales for three different initial conditions for temperature (T) and salinity (S): a step in both T and S,
a step in T and a linear distribution in S and linear distributions for both T and S.
It can easily be seen that each initial condition leads to the same asymptotic range of values for the average thermal Nusselt number. The time that it takes to reach this
asymptotic value differs, however. $\Rat = 10^7, \Pran=1, \Le=0.1$}
\label{fig:ic_vgl}
\end{figure}
Each initial condition leads to the same asymptotic range of values for the average thermal Nusselt number which means that the physical state after relaxation is the same.
Only the time it takes to reach this state differs.
For ``T step, S step'', plumes immediately reached the upper boundary of the shell without any layering in between. But as we wanted to investigate
the effects of rotation on layering, this initial condition was no viable option for us.
To reduce computational costs, we did not choose ``T linear, S linear''.
This left us with the initial condition of a step in the temperature field and a
linear distribution of salinity which offered a good compromise between observing layering and keeping simulation time within affordable limits.
\subsubsection{Parameter regime}
We investigated three different parameter regimes. The main simulations were run with $\Pran=1, \Le=0.1$ and $\Rat= 10^7$. To be able to reach the layered convective state, $R_\rho$ has to be sufficiently
small. There exist two upper bounds for the maximum value of $R_\rho$, for which layer formation occurs. The one given by linear stability analysis is $R_{\rho,\mathrm{max}} = (1+\Pran) / (\Le + \Pran)$ which gives
$R_{\rho,\mathrm{max,lin}} = 1.8$ in our case. For larger values, the flow would be damped by viscous friction.
The other one is given by the model of \citet{spruit_theory_2013} (figure 3 therein) and is $R_{\rho,\mathrm{max,Spruit}} \approx 1.6$ with our parameters; it provides an upper limit for the subcritical instability that triggers
layer formation.
Additionally, to avoid the simple case of convective mixing due to an unstable stratification in the sense of \citet{ledoux_1947}, $R_{\rho}$ should be larger than $1$.
In simulations without rotation an increase of $R_\rho$ has a stabilising effect
on the flow \citep{zaussinger_diss, zaussinger_scn_2013,Rosenblum2011} and different regimes as a function of $R_{\rho}$ are established (figure 3 and chapter 3.2 in \citet{zaussinger_scn_2013} and the schematic
illustration in figure 1 of \citet{mirouh2012}). We have investigated if the same applies when rotation is present and
have chosen $R_\rho=1.3$ for our main simulations and $R_\rho=1.5$ for comparison runs.
In another set of comparison runs we reduced the Prandtl number to $0.5$ to get a hint of the effects of viscosity on the rotational constraints. \\
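The linear-stability bound quoted above is straightforward to evaluate; the function below simply encodes $R_{\rho,\mathrm{max}} = (1+\Pran)/(\Le+\Pran)$:

```python
def r_rho_max_linear(Pr, Le):
    """Upper bound on R_rho for layer formation from linear stability
    analysis, R_max = (1 + Pr) / (Le + Pr), as quoted in the text."""
    return (1.0 + Pr) / (Le + Pr)
```

For the main runs ($\Pran=1$, $\Le=0.1$) this gives $\approx 1.8$, consistent with the value quoted above.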
A dimensionless number commonly used to measure the
relative importance of buoyancy and rotation in a system is the Rossby number
\begin{equation}
\Ro = \frac{V}{2 \Omega L \, \mathrm{sin}(\Lambda)}
\label{eq:rossby}
\end{equation}
where $V$ is the characteristic flow speed and $\Lambda$ is the colatitude.
This can be written as
\begin{equation}
\Ro = \frac{1}{\mathrm{sin}(\Lambda)}\sqrt{\frac{\Rat}{\Pran\,\Ta}}.
\label{eq:ro_taylor}
\end{equation}
In order for a motion to be significantly influenced by rotation the Rossby number must be of
order one or less.
We have chosen a range of Rossby numbers
near unity: $0.1, 0.3, 0.5, 1, 3, 5, 10$ and the case without rotation. Since the code takes the Taylor number as an input parameter, we rewrite (\ref{eq:ro_taylor}) as
\begin{equation}
\Ta = \frac{\Rat}{(\Ro \cdot \mathrm{sin}(\Lambda))^2 \, \Pran}.
\end{equation}
A point to note is that for the same Rossby number, we have
different Taylor numbers when the Prandtl number varies. This is important because we ran simulations with Prandtl numbers $1$ and $0.5$. Accordingly, the Taylor numbers of the simulations with $\Pran = 0.5$ had to be
doubled so that the Rossby number was the same.
The corresponding Taylor numbers for Prandtl numbers 0.5 and 1 at different colatitudes are shown in table \ref{tab:taylor_rossby}.
\begin{table}
\begin{center}
\begin{tabular}{ccccccc}
\hline
\multicolumn{3}{c}{Taylor number at} & \multicolumn{4}{c}{Rossby Number at colatitude} \\
\multicolumn{1}{l}{$Pr=1$} & \multicolumn{1}{l}{$Pr=0.5$} & &\multicolumn{1}{c}{$\Lambda = \pi/2$} & \multicolumn{1}{c}{$\Lambda = \pi/3$} & \multicolumn{1}{c}{$\Lambda=\pi/4$} &
\multicolumn{1}{c}{$\Lambda=\pi/6$} \\ \hline \hline
$1.00\cdot 10^9$ & $2.00\cdot 10^9$ & & 0.10 & 0.12 & 0.14 & 0.20 \\
$1.11\cdot 10^8$ & $2.22\cdot 10^8$ & & 0.30 & 0.35 & 0.42 & 0.60 \\
$4.00\cdot 10^7$ & $8.00\cdot 10^7$ & & 0.50 & 0.58 & 0.71 & 1.00 \\
$1.00\cdot 10^7$ & $2.00\cdot 10^7$ & & 1.00 & 1.15 & 1.41 & 2.00 \\
$1.11\cdot 10^6$ & $2.22\cdot 10^6$ & & 3.00 & 3.46 & 4.24 & 6.00 \\
$4.00\cdot 10^5$ & $8.00\cdot 10^5$ & & 5.00 & 5.77 & 7.07 & 10.00 \\
$1.00\cdot 10^5$ & $2.00\cdot 10^5$ & & 10.00 & 11.55 & 14.14 & 20.00 \\
$0$ & $0$ & & $\infty$ &$\infty$ & $\infty$ & $\infty$ \\ \hline
\end{tabular}
\end{center}
\caption{Rossby numbers and corresponding Taylor numbers for Prandtl numbers $1$ and $0.5$}
\label{tab:taylor_rossby}
\end{table}
\section{Numerical implementation}\label{sec:numerical_implementation}
The numerical solution of the Navier--Stokes equations in the spherical shell is based on a spectral method. The radial direction is discretised on Chebyshev nodes ($k$), which brings the desired clustering at the boundaries, while
the variables are expanded in spherical harmonics along the meridional ($l$) and the zonal ($m$) direction. While the linear parts of the equations are solved in spectral space, the non-linear ones are calculated in real space. The
time integration is based on a modified Runge-Kutta scheme of second order, where the diffusive terms are treated implicitly. This numerical method is fast and accurate especially for double-diffusive flows with high Prandtl
number. Another advantage of the method is the uncoupled resolution of the vector fields and the scalar fields. This means that the expansion of each variable can be treated independently. Mainly low Mach number flows benefit
from this feature. The velocity field is discretised on $k=71$ nodes in radial direction, $l=71$ modes in meridional and $m=71$ modes in zonal direction, respectively, for all simulations. The expansion of the
temperature and the solute equations use $k=71$ radial nodes and $l=m=71$ modes. The time step is set to $\tau=2 \cdot 10^{-6}$ thermal diffusion time scales.
We refer to \citet{hollerbach_2000} for a detailed description of the method sketched briefly here. In contrast to the
original version of the code, the set of equations was extended for solute transport. This introduces the Lewis number $Le=\kappa_S/\kappa_T$ and the stability ratio $R_{\rho}=Ra_S/Ra_T$ as new input parameters.
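The boundary clustering of the Chebyshev radial grid mentioned above can be illustrated with a minimal sketch. This is not part of the production code; the node count and the shell radii ($R_2 = 2R_1$, as used later in the paper) are chosen for demonstration only.

```python
import numpy as np

# Chebyshev-Gauss-Lobatto nodes on [-1, 1], mapped to a shell r in [R1, R2].
# Illustrative sketch of the radial grid described in the text, not the solver.
K = 71              # number of radial nodes used in the simulations
R1, R2 = 1.0, 2.0   # inner and outer shell radius (R2 = 2 R1)

k = np.arange(K)
x = np.cos(np.pi * k / (K - 1))              # nodes cluster near x = +-1
r = 0.5 * (R1 + R2) + 0.5 * (R2 - R1) * x    # linear map to [R1, R2]

# The spacing is much finer at the boundaries than in the interior:
dr = np.abs(np.diff(np.sort(r)))
print(dr[0], dr.max())  # boundary spacing << interior spacing
```

The clustering is what resolves the thin thermal and saline boundary layers at the shell walls without a prohibitively fine uniform grid.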
\section{Results}\label{sec:results}
\subsection{The non-rotating reference run: $\Pran=1, R_{\rho}=1.3, \Le=0.1, Ra_T=10^7,\Ta=0$ }\label{sec:nonrotating}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_00100_bis_step_001500.png}
\end{center}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_02000_bis_step_02500.png}
\end{center}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_08000_bis_step_010000.png}
\end{center}
\caption{Temporal evolution of the temperature field (left above each plot) and the salinity field (right above each plot) for $\Ta=0$ and plots of averaged potential
temperature (T) and salinity (S) in the equatorial plane vs. radius. Note: the snapshots are not equidistant in time.}
\label{fig:taylor0overview}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_12500_bis_step_15000.png}
\end{center}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_17500_bis_step_20000.png}
\end{center}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_25000_bis_step_30000.png}
\end{center}
\caption{Temporal evolution of the temperature field (left above each plot) and the salinity field (right above each plot) for $\Ta=0$ and plots of averaged potential temperature (T) and salinity (S)
in the equatorial plane vs. radius. Note: the snapshots are not equidistant in time.}
\label{fig:taylor0overview2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./Salinity_added_T0overview_eqplane_step_50000_bis_step_235000.png}
\end{center}
\caption{Temporal evolution of the temperature field (left above each plot) and the salinity field (right above each plot) for $\Ta=0$ and plots of averaged potential temperature (T) and salinity (S)
in the equatorial plane vs. radius. Note: the snapshots are not equidistant in time.}
\label{fig:taylor0overview3}
\end{figure}
First, we take a look at the temporal evolution of the reference run with parameters $\Pran=1$ and stability parameter $R_{\rho}=1.3$.
In order to validate the reference simulation we compare the numerical outcome with theoretical results. The convective flux, parameterised as the Nusselt number $Nu_T$, typically follows a power law of the form $Nu_T=a Ra_T^b$,
where $0.1< a < 0.3$ is a constant factor and $b\approx 2/7$. However, dozens of different power laws have been found to date, which makes it nearly impossible to compare two simulations exactly. The $2/7$ power law seems to
be valid for most applications, which gives $a=0.19/\pi$ for our simulations. This is in good agreement with \citet{Castaing1989} and \citet{Kerr1996}. To check the saline flux, we return to the linear stability analysis, e.g.
\citet{huppert1976}, which gives $Nu_S=Le^{-1/2} Nu_T$ for $Nu_T \gg 1$. A correction for $Nu_T=\mathcal O(1)$ was considered by \citet{spruit_theory_2013} and tested by \citet{zaussinger_scn_2013},
\begin{equation}
Nu_S-1=q Le^{-1/2}/R_{\rho} \, (Nu_T-1),
\label{NuS-NuT}
\end{equation}
where $q\approx1$ is a fitting parameter. Our reference simulation gives mean values of $Nu_T=5.77$ and $Nu_S=11.62$, thus (\ref{NuS-NuT}) seems to fit very well for $q=0.915$.
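The quoted checks can be retraced numerically. The following sketch evaluates the $2/7$ power law with $a=0.19/\pi$ and inverts (\ref{NuS-NuT}) for $q$ using the reported mean Nusselt numbers of the reference run; all input values are taken from the text.

```python
import math

# Validation sketch for the flux relations quoted in the text.
Le, R_rho = 0.1, 1.3        # Lewis number and stability ratio of the run
Ra_T = 1e7                  # thermal Rayleigh number
Nu_T, Nu_S = 5.77, 11.62    # reported mean Nusselt numbers

# 2/7 power law: Nu_T = a * Ra_T^b with a = 0.19/pi, b = 2/7
a, b = 0.19 / math.pi, 2.0 / 7.0
Nu_T_pred = a * Ra_T ** b
print(Nu_T_pred)            # about 6, close to the measured 5.77

# Invert Nu_S - 1 = q * Le^{-1/2} / R_rho * (Nu_T - 1) for the fit parameter q
q = (Nu_S - 1) * R_rho * math.sqrt(Le) / (Nu_T - 1)
print(round(q, 3))          # 0.915
```

The recovered $q=0.915$ matches the value stated above, confirming that (\ref{NuS-NuT}) describes the saline flux of the reference run well.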
To get an overview, the temporal evolution of the temperature and salinity fields in a semiconvective setup in a non-rotating spherical shell is shown in figure \ref{fig:taylor0overview}.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Fancy_thrice_nusselt_and_kinetic_rrho13_pr1_onet0_upto_dts01.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Thrice_nusselt_and_kinetic_rrho13_pr1_onet0_upto_dts047.png}
\end{center}
\end{minipage}
\caption{Kinetic energy (highest curve), average saline (middle curve) and average thermal (lowest) Nusselt numbers vs. simulation time for $\Ta=0$ from $t=0$ to $t=0.01$ (left) and from $t=0$ to $t=0.47$ (right)}
\label{fig:thrice_taylor0}
\end{figure}
After starting the simulation it takes some time until convection sets in. The plumes do not, however, rise to the top of the shell: convection is restricted to a zone of thickness $d$,
which increases as time passes until the top of the zone touches the upper boundary at $t\approx0.03$. At $t\approx0.05$ the zone no longer merely touches the upper boundary at
isolated points but fills the whole simulation domain. At $t\approx0.47$, the whole region is thoroughly mixed. Also shown in figure
\ref{fig:taylor0overview} are plots of averaged potential temperature (T) and salinity (S) in the equatorial plane vs. radius. We can see the relaxation of the initial
conditions to the solution, which consists of a plateau of constant temperature and salinity throughout the spherical shell. Such a plateau is typical for a convective region. It is interesting to see that at $t\approx0.02$
there is a similar plateau, but it is not as wide as the one at $t\approx0.47$, meaning that the convective overturning region is limited to a smaller part of the shell. This
is also visible in the temperature field
itself. Another point to note is the height of the plateau, which decreases as time passes. Interestingly, the plateaus of saline and thermal composition reach their final heights at different times. This is visible in
figure \ref{fig:taylor0overview3}. At $t\approx0.1$ the plateau of $T$ is at $\approx 0.25$ while the plateau of $S$ is a bit less than $\approx 0.4$. At $t\approx 0.47$, the thermal plateau has only moved by a very modest amount to
$\approx 0.2$ while the saline plateau has dropped by a comparatively large amount to $\approx 0.25$.
This phenomenon can also be observed when looking at the convective
flux and kinetic energies contained in the system (figure \ref{fig:thrice_taylor0}).
While both the thermal and saline Nusselt numbers reach their maximum at the same time,
$t\approx 0.054$, the thermal Nusselt number keeps that value. The saline Nusselt number, however, decreases until it reaches its asymptotic value at $t\approx 0.2$.
The Nusselt numbers also emphasise the distinct phases through which the convective flow develops. First, there is the ``plume phase'',
which is characterised by the first appearance and upward movement of the convective plumes. Once they break we enter the ``layered convection phase'' at $t\approx 0.006$. In this phase we can identify
two regions,
based on the slope of the Nusselt numbers and the kinetic energy: one region with a low slope, corresponding to the rising of the semiconvective layer towards the upper boundary, and one region with a high slope, corresponding to
the case in which one semiconvective layer fills the whole shell but semiconvection is still taking place. At $t\approx 0.054$, the thermal part of the semiconvective layer turns into a fully mixed layer. The saline
part of the semiconvective layer, however, needs much longer than the thermal part to reach equilibrium. While this difference is not visible in figure \ref{fig:taylor0overview}, it is clearly
visible in figure \ref{fig:thrice_taylor0} (b): while the thermal Nusselt number
reaches its asymptotic limit at $t \approx 0.054$, the saline Nusselt number reaches the equilibrium state at $t \approx 0.2$. This is about the same time at which the kinetic
energy reaches its limiting average value. This is understandable because the flux of both the solute and the temperature add to the kinetic energy, hence they are dependent on each other.
Looking at figure \ref{fig:taylor0overview2} we see that the convective layer has already reached the top boundary at $t\approx 0.06$, so the constant thermal convective flux is no measure of
how long a semiconvective layer lasts until it reaches its final state. Thermal and saline processes clearly have different lifetimes and the longer timescale on which saline processes take place agrees well with $\Le < 1$.
Comparing figures \ref{fig:taylor0overview2} and \ref{fig:thrice_taylor0}, we observe that at $t \approx 0.03$, when the semiconvective layer reaches the top
boundary, the slope of the saline Nusselt number starts to increase. This increase continues up to about $t \approx 0.055$ where it reaches a maximum. Looking at a later simulation time we see that this maximum is in fact
the global maximum of the saline Nusselt number. From there, it slowly decreases until it reaches an asymptotic limit at $t \approx 0.2$.
In summary, we have observed a growing double-diffusive layer that fills out the whole volume at the end of the simulation. We will now study the effects of rotation on this process.
\subsection{The effects of rotation}\label{sec:rotation}
To highlight certain effects, we split the results into three time scales:
$t=0-0.03$ is the time it takes the semiconvective layer to reach the upper
boundary in the non-rotating case (see figures \ref{fig:taylor0overview2} and \ref{fig:ratio_of_nusselts});
$t=0-0.1$ is a time scale on which it becomes clear that the time a simulation needs to run is longer than expected from dynamical (flow-related) timescales; and $t=0-1$ is the
complete simulation time, on which global effects
are visible. Since the fields of temperature and salinity look alike in figures \ref{fig:taylor0overview} to \ref{fig:taylor0overview3}, we restrict ourselves to showing the temperature field from now on.
We start with a discussion of the initial development phase.
\subsubsection{The temporal evolution with rotation up to $t=0.03$ }
By $t=0.03$ the semiconvective layer has reached the upper boundary in the non-rotating case (see figures \ref{fig:taylor0overview} and \ref{fig:taylor0overview2}).
The situation is very different in the rotating case as is shown
in figure \ref{fig:at_dts_003} and in the movie online:
\begin{figure}
\begin{minipage}[]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_015000_taylor_0_bis_111e6.png}
\end{center}
\end{minipage}
\begin{minipage}[ ]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_015000_taylor_1e7_bis_1e9.png}
\end{center}
\end{minipage}
\caption{The temperature field at $t=0.03$ for different Taylor numbers and plots of averaged potential temperature (T) and salinity (S) in the equatorial plane vs. radius. See also the movie online for the temporal
evolution of a few chosen Taylor numbers from $t=0$ up to $t=0.0564$.}
\label{fig:at_dts_003}
\end{figure}
a moderate rotation rate already has a stabilising effect on semiconvection, similar to that of an increase of the salinity gradient.
At $\Ta=10^7$ this effect is clearly visible: the semiconvective zone has a thickness of about 0.75, and further out temperature
and salinity diffuse outward. At a certain critical Taylor number $\Tac$ convection is suppressed completely. A point to note is that for $\Ta < \Tac$ the onset of convection occurs practically simultaneously and is not influenced by
the rate of rotation. This is shown in figure \ref{fig:dts003}: the Nusselt numbers start to increase at the same time, $t\approx0.003$. It can also be seen in the movie supplementing the paper that convection starts at the same
time if $\Ta < \Tac$.
\begin{figure}
\begin{minipage}[]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Saline_nusselt_rrho13_pr1_dts_003_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Nusselt_rrho13_pr1_bunt_disruptedline_dts003.png}
\end{minipage}
\begin{minipage}[]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Ekin_rrho13_pr1_dts_003_bunt_disruptedline.png}
\end{minipage}
\caption{Average saline and thermal Nusselt number and kinetic energy as a function of simulation time for the specified Taylor numbers up to $t=0.03$. Note the practically simultaneous onset of
instability for $\Ta < 10^9$.}
\label{fig:dts003}
\end{figure}
Figure \ref{fig:dts003} also shows the kinetic energy in the system and the average saline and thermal Nusselt numbers as a function of the simulation time up to $t=0.03$.
At higher rotation rates of $\Ta \geq 10^7$ both the Nusselt numbers and the kinetic energy have lower values than at lower rotation rates.
\subsubsection{The temporal evolution with rotation up to $t=0.1$ }
We will now take a look at the real space temperature fields at $t=0.1$ for different rotation rates. These are shown in figure \ref{fig:at_dts_01} together with the averaged values of temperature
and salinity.
For Taylor numbers $0, 10^5, 4 \cdot 10^5$ and $1.11 \cdot 10^6$ we can see that the whole spherical shell is thoroughly mixed and the convective plumes have reached the outer boundary.
This results in a constant thermal Nusselt number as shown in figure \ref{fig:dts01udts1}.
For Taylor numbers $10^7$ and $4 \cdot 10^7$ the simulation is still in the phase of semiconvective layering which is also visible in the real space pictures as well as in the plots of averaged $T$ and $S$.
At $\Ta=1.11 \cdot 10^8$ no convective plateau is observable in the plot of averaged $T$ and $S$, so semiconvection is already seriously damped by the high rate of rotation. This could
correspond to the diffusive turbulent case which \citet[figure 3]{zaussinger_scn_2013} observe for simulations with high $R_\rho$, i.e. $R_\rho > R_{\rho,\mathrm{crit}}$ where $R_{\rho,\mathrm{crit}}$ is the maximum
value of $R_\rho$ for which layer formation can occur \citep{radko2003,spruit_theory_2013}.
At $\Ta=10^9$ there is no convection at all.\\
\begin{figure}
\begin{minipage}[]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_050000_taylor_0_bis_111e6.png}
\end{center}
\end{minipage}
\begin{minipage}[]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_050000_taylor_1e7_bis_1e9.png}
\end{center}
\end{minipage}
\caption{The temperature field at $t=0.1$ for different Taylor numbers
and plots of averaged potential temperature (T) and salinity (S) in the equatorial plane vs. radius.}
\label{fig:at_dts_01}
\end{figure}
The left-hand column of figure \ref{fig:dts01udts1} shows the plots of kinetic energy and
average thermal and saline Nusselt numbers vs. time up to a simulation time of $t=0.1$. As in the temperature fields
three distinctively different behaviours depending on the rotation rate are visible.
The cases $\Ta=0, 10^5, 4 \cdot 10^5$ and
$1.11 \cdot 10^6$ are almost indistinguishable: the average thermal Nusselt number of these runs reaches an asymptotic limit of about $6$, like the non-rotating case.
Moreover, these four simulations have a similar temporal evolution of the convective flow: they all show an increase in energy and Nusselt numbers starting at $t \approx 0.02$ with only slightly varying slopes.
However, they do reach the
asymptotic limit at different times. This will be investigated further when we take a look at the ratios of Nusselt numbers in chapter \ref{sss:ratio_nusselts_rotation}.
The simulation with $\Ta=10^7$ shows a distinct behaviour: energy and Nusselt number rise much more slowly than in the simulations with a lower rotation rate, but they are rising nonetheless.
The simulations with
$\Ta=4 \cdot 10^7, 1.11 \cdot 10^8$ and $10^9$ show an increase of neither kinetic energy nor Nusselt number. From these plots alone, one could think that the simulations at these rotation rates
will lead to a purely diffusive state and hence could be aborted. As we will see later, this would be a grave mistake, because the case with $\Ta=4 \cdot 10^7$ is the most interesting one.
\begin{figure}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Saline_nusselt_rrho13_pr1_bis_dts01_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Saline_nusselt_rrho13_pr1_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Timegetter_for_nusselt_rrho13_pr1_bis_dts01_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Timegetter_for_nusselt_rrho13_pr1_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Ekin_rrho13_pr1_bis_dts01_bunt_disruptedline.png}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{./Ekin_rrho13_pr1_bunt_disruptedline.png}
\end{minipage}
\caption{Average saline and thermal Nusselt number and kinetic energy as a function of simulation time for the specified Taylor numbers up to $t=0.1$ (left) and $t=1$ (right). The vertical bars represent the times when the
average thermal Nusselt number reaches its statistically stable asymptotic state.}
\label{fig:dts01udts1}
\end{figure}
\subsubsection{The temporal evolution from $t=0.1$ to $t=1$ }
In order to get a clear picture of the global temporal evolution one has to look at the final equilibrium or unstable states which the systems reach after a very long simulation time. Figure \ref{fig:dts01udts1} shows the
average saline and thermal Nusselt numbers and the kinetic energy as a function of simulation time for up to one full thermal diffusion time scale for all simulated Taylor numbers. Only after such a long time has each simulation
reached its final state, as indicated by the asymptotically constant Nusselt numbers.
Although Feudel et al. (2011) showed changes in convective patterns after 60 or more thermal time scales, those simulations are located in the laminar parameter space and the Nusselt number changes only in the second digit.
Once a simulation reaches the asymptotic limit of a statistically constant Nusselt number, nothing changes in the convective
state even after a longer simulation time; letting the simulations
run any longer would provide no additional information. We therefore regard every simulation which has reached this limit as finished. As already mentioned,
the simulation with $\Ta = 4 \cdot 10^7$ in particular has attracted our attention. While in the left column of figure \ref{fig:dts01udts1} it looks as if this rate of rotation had a strong enough
damping effect on convection to suppress it completely, the picture is quite different in the right column of figure \ref{fig:dts01udts1} or in the real temperature fields. We therefore need
to point out that a sufficiently long simulation time is mandatory for this
kind of simulation in order to avoid wrong conclusions. \\
The analysis of the time evolution over one full thermal time scale shows very clearly that an increase in rotation slows down the temporal evolution and reduces the convective flux.
The real space temperature fields after $t=0.47$
are shown in figure \ref{fig:at_dts_047}. At that time, only the simulation with $\Ta=4\cdot10^7$ has not reached its final stable state yet. The ones with $\Ta=0 - 10^7$ have all reached the fully convective overturning state while
the ones with $\Ta=1.11 \cdot 10^8$ and $10^9$ show no convection at all and have diffused out. This is, in fact, close to the analytical solution of the
diffusion problem:
when solving the diffusion equation
\begin{equation}
\frac{\partial T }{\partial t } = \kappa_T \nabla^2 T
\label{eq:A}
\end{equation}
for the steady state in spherical coordinates and using as boundary conditions $T(R_1) = 1$ and $T(R_2) = 0$ and the fact that $R_2 = 2 R_1$, we obtain the solution
\begin{equation}
T(r) = \frac{2}{r} - 1 \quad \mathrm{for} \quad r \in [1,2].
\label{eq:T_ana}
\end{equation}
This is plotted in figure \ref{fig:at_dts_047} for the case of $\Ta = 10^9$. We see that our numerical result converges nicely to the analytical one for the case of high rotation rates.
Note that since $\kappa_T$ does not appear in (\ref{eq:T_ana}) --- basically because $\partial T / \partial t = 0$ in the steady state --- the solution for $S(r)$ is the same as for $T(r)$ thanks
to $S(R_1) = 1$ and $S(R_2) = 0$. This immediately follows from comparing (\ref{eq:ns3}) and (\ref{eq:ns4}) with (\ref{eq:A}): $S$ is governed by the same asymptotic laws and thus converges to the same analytical
solution in figure \ref{fig:at_dts_047}.
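That (\ref{eq:T_ana}) indeed solves the steady-state diffusion problem can be checked numerically. The sketch below verifies the boundary conditions and evaluates the radial Laplacian of $T(r) = 2/r - 1$ by central differences; it is a simple consistency check, not part of the simulation code.

```python
# Verify that T(r) = 2/r - 1 solves the steady diffusion equation
# (1/r^2) d/dr (r^2 dT/dr) = T'' + (2/r) T' = 0 with T(1) = 1, T(2) = 0.
def T(r):
    return 2.0 / r - 1.0

# Boundary conditions on the shell r in [1, 2]:
print(T(1.0), T(2.0))  # 1.0 0.0

# Radial Laplacian via central differences at a few interior radii:
h = 1e-4
for r in (1.2, 1.5, 1.8):
    Tpp = (T(r + h) - 2 * T(r) + T(r - h)) / h**2   # T''
    Tp = (T(r + h) - T(r - h)) / (2 * h)            # T'
    print(abs(Tpp + 2.0 / r * Tp) < 1e-4)           # True: ~0 up to round-off
```

Analytically, $T' = -2/r^2$ and $T'' = 4/r^3$, so $T'' + (2/r)T' = 0$ exactly, which the finite-difference residual confirms.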
\begin{figure}
\begin{minipage}[]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_235000_taylor_0_bis_111e6.png}
\end{center}
\end{minipage}
\begin{minipage}[]{\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./at_dts_003_5in1_eqplane_step_235000_taylor_1e7_bis_1e9.png}
\end{center}
\end{minipage}
\caption{The temperature field at $t=0.47$ for different Taylor numbers
and plots of averaged potential temperature (T) and salinity (S) in the equatorial plane vs. radius. For $\Ta=10^9$, we plotted the analytical solution of (\ref{eq:A})
as a means for comparison (see text).}
\label{fig:at_dts_047}
\end{figure}
\subsection{Modifying $\Pran$ and $R_{\rho}$}\label{sec:modifying_pr_and_rrho}
A first conclusion we can draw from the previous observations is that rotation has a stabilising effect on the lifetime of semiconvective layers,
with a variety of cases distinguished by the specific value of $\Ta$.
The next question we look into is whether the effects of rotation would be similar
if we reduced the Prandtl number or increased the density ratio $R_\rho$. As we have seen, only rotation above a critical Taylor number has an effect on semiconvection. Because of this, we have neglected the Taylor numbers
$0$ and $10^5$ in this chapter. This leaves us with six Taylor numbers which can be grouped into three categories based on the rate of rotation: low rotation rates ($\Ta=4\cdot10^5, 2.22\cdot10^6)$,
medium rotation rates ($\Ta=4\cdot 10^6,2\cdot10^7$) and high rotation rates ($\Ta=1.11\cdot10^8, 2 \cdot 10^9$). We will have a look at each regime consecutively.
\subsection{The effect of different $\Pran$ and $R_{\rho}$ at low rotation rates}
\begin{figure}
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta4e5.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta222e6.png}
\end{minipage}%
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta1e7.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta4e7.png}
\end{minipage}%
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta111e8.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Saline_nusselt_therm_vergleich_ta1e9.png}
\end{minipage}%
\caption{Comparison of average saline Nusselt numbers vs. simulation time at low (first row), medium (second row) and high (third row) rotation rates for a variation of Prandtl number $\Pran$ and stability factor $R_{\rho}$. }
\label{fig:nusselt}
\end{figure}
\begin{figure}
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta4e5.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta222e6.png}
\end{minipage}%
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta1e7.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta4e7.png}
\end{minipage}%
\centering
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta111e8.png}
\end{minipage}%
\begin{minipage}[t]{.49\linewidth}
\centering
\includegraphics[width=1.0\textwidth]{./Ekin_vergleich_bunt_ta1e9.png}
\end{minipage}%
\caption{Comparison of kinetic energy vs. simulation time at low (first row), medium (second row) and high (third row) rotation rates for a variation of Prandtl number $\Pran$ and stability factor $R_{\rho}$. }
\label{fig:kinetic}
\end{figure}
\subsubsection{Reduction of $\Pran$}
Looking at figures \ref{fig:nusselt} (a) and (b), we observe that a reduction of $\Pran$ has no significant effect on the Nusselt number. Although the less viscous simulation reaches the asymptotic state of $\Nu\approx 6$
a bit earlier, this could very well be a coincidence. The reason may be that with lower viscosity there is less damping at the onset of convection, so the final convective state can be reached sooner,
but this is not clearly visible at low rotation rates. \\
Comparing the kinetic energies in figures \ref{fig:kinetic} (a) and (b), a significant influence of a lower $\Pran$ is obvious: although the asymptotic limit is again reached at the same time, the value of the kinetic
energy differs significantly. This can be explained by noting that for a more viscous system, more energy has to be stored in the convective motion in order to maintain a specific level of convection because more energy is
converted to heat due to higher friction.
\subsubsection{Increase of $R_{\rho}$}
Even at low rotation rates, the stability parameter $R_{\rho}$ exerts a significant influence on the convective state of the system as well as on the stored kinetic energy. Figures \ref{fig:nusselt} (a) and (b) show that the system
with a higher $R_\rho$ not only reaches a lower asymptotic limit of $\Nu\approx 5.5$ than the reference run ($\Nu \approx 6$), but also reaches this limit at a later time, which hints at a longer-lived layered state.
The same holds true for the time development of kinetic energy, as is shown in figures \ref{fig:kinetic} (a) and (b).
\subsection{The effect of different $\Pran$ and $R_{\rho}$ at medium rotation rates}
\subsubsection{Reduction of $\Pran$}
Looking at figure \ref{fig:nusselt}(c) we see that a reduction of $\Pran$ to $0.5$ has a slight effect on the time when the system reaches the asymptotic limit of the average saline
Nusselt number for $\Ta=1\cdot10^7$. It has no significant effect
on the height of the limit. The effect on the kinetic energy at $\Ta = 1 \cdot 10^7$ is similar to the low rotation rate cases:
for lower Prandtl numbers the asymptotic limit is reached a little sooner and it is lower than in the reference run.\\
At $\Ta=4\cdot10^7$ the effect of a reduced Prandtl number is clearly visible (figure \ref{fig:nusselt} d): the asymptotic limit is approached much earlier than it is in the reference run.
We observe a similar behaviour for the kinetic energy as we did for the Nusselt numbers: the asymptotic limit for the less viscous case is reached sooner and it is lower than in the reference run.
\subsubsection{Increase of $R_{\rho}$}
Increasing $R_\rho$ has a significant effect on the convection at medium rotation rates: at $\Ta=1\cdot 10^7$ the asymptotic limit of $\Nu \approx 4$ is reached much later for $R_\rho=1.5$ than for $R_\rho=1.3$
(see figure \ref{fig:nusselt} c). It is also lower than the reference run ($\Nu \approx 5$). \\
A very clear effect can be seen for $\Ta=4 \cdot 10^7$ (see figure \ref{fig:nusselt} d): while for $R_\rho=1.3$ the Nusselt number and hence the heat transport by convection slowly increases, for $R_\rho=1.5$ it decreases instead
and no convection is taking place. For kinetic energy the effect is similar (see figure \ref{fig:kinetic} d).
\subsection{The effect of different $\Pran$ and $R_{\rho}$ at high rotation rates}\label{sec:modifying_pr_and_rrho_at_higher_rotation_rates}
At high rotation rates of $\Ta=O(10^8)$ and $\Ta=O(10^9)$ convection is hindered so drastically by rotation that the effects of reducing $\Pran$ and increasing $R_\rho$ are negligible.
No convection, and hence no double-diffusive convection, takes place; the Nusselt number approaches $1$ (see figure \ref{fig:nusselt} f)
and the kinetic energy contained in the system is very low.
\section{Discussion}\label{sec:discussion}
\subsection{Rotational Constraints}
\subsubsection{Characterisation through $\Ra$ and $\Ta$}
As we have seen, whether heat and salt are transported by convection or by diffusion alone also depends on the Taylor number. \citet{king2013} have recently suggested that a power law constructed from the product
of Ekman and Rayleigh number can be used to describe how strong rotation affects convection. Although they have studied Rayleigh-B\'enard convection we think it is interesting to compare their results with ours.
They have found three important convection regimes: rotationally constrained convection occurs for $\Ra E^{3/2} \lesssim 10$, weakly rotating convection for $10 \lesssim \Ra E^{3/2} < \infty$
and non-rotating convection for $E^{-1} = 0$. $E$ is the Ekman number $E = \nu / (2 \Omega L^2)$, which is closely related to the Taylor number we used: $E = 1/ \sqrt{\Ta}$. Transferred to our
studies, the limiting value is given by
\begin{equation}
\frac{\Ra}{\Ta^{3/4}}
\label{eq:lim_value}
\end{equation}
and the three regimes are non-rotating convection for $\Ta = 0$, weakly rotating convection for $10 \lesssim \Ra / \Ta^{3/4} < \infty $ and rotationally constrained convection for
$\Ra / \Ta^{3/4} \lesssim 10 $. Which regimes our Taylor numbers belong to is seen in table \ref{tab:rot_constraint}; and indeed, the three regimes coincide very nicely with our results:
convection is effectively prevented for $\Ta = 1.11 \cdot 10^8$ and
$\Ta = 1 \cdot 10^9$ which correspond to the regime of rotationally constrained convection of $\Ra / \Ta^{3/4} \lesssim 10 $.
However, the proposed regimes have to be expanded for the case of semiconvection. We have added an overview of the effect of a change of the density ratio $R_\rho$ on convection to table \ref{tab:rot_constraint}. We
see that for $\Ta = 4 \cdot 10^7$ it crucially depends on $R_\rho$ if a global convective layer forms or if diffusion is the only transport mechanism of heat and salinity.
Although the system should actually be only weakly influenced by rotation since
$\Ra / \Ta^{3/4} = 19.9 > 10$, convection is completely subdued by rotation when $R_\rho = 1.5$. This suggests that the stability ratio $R_\rho$ has to enter (\ref{eq:lim_value}) in a way that a higher $R_\rho$ leads to a
reduced value (because rotation has a stronger influence):
\begin{equation}
\frac{\Ra}{\Ta^{3/4}} \cdot f(R_{\rho}).
\label{eq:rrho_lim_value}
\end{equation}
This is an interesting result and worth studying in greater detail. In this paper, however, we restrict ourselves to the short remark that the convective regimes that \citet{king2013} proposed could also be
applicable to the case of semiconvection in a spherical shell if extended to be also a function of $R_\rho$.
\begin{table}
\begin{center}
\begin{tabular}{ccccc}
& & \multicolumn{2}{c}{Convection for} \\
$\Ta $ \quad &$\Ro_{\pi / 6}$ & $\Ra / \Ta^{3/4}$ & $R_\rho = 1.3$ & $R_\rho = 1.5$ \\[3pt]\hline
$0 $ & $\infty$ & $\infty$ & y & y \\
$1 \cdot 10^5$ & 20 & 1780 & y & y \\
$4 \cdot 10^5$ & 10 & 629 & y & y \\
$1.11 \cdot 10^6$ & 6.0 & 292 & y & y \\ \hline
$1 \cdot 10^7$ & 2.0 & 56.2& y & y \\
$4 \cdot 10^7$ & 1.0 & 19.9 & y & n \\ \hline
$1.11 \cdot 10^8$ & 0.6 & 9.2 & n & n \\
$1 \cdot 10^9$ & 0.2 & 1.78& n & n
\end{tabular}
\end{center}
\caption{ Taylor numbers, Rossby numbers at colatitude $\Lambda=\pi /6$ and corresponding values of the convective regime following \cite{king2013}. The two right columns indicate if convection occurred in our simulations
with the indicated value for the density ratio $R_\rho$. The horizontal lines divide the table into rotationally non constrained (upper part), weakly constrained (middle part) and strongly constrained (lower part) regimes
for $R_\rho=1.3$.
For all simulations: $\Pran=1,\Le=0.1,\Ra = 10^7$. }
\label{tab:rot_constraint}
\end{table}
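For transparency, the regime classification above can be evaluated directly. The following minimal Python sketch (not part of the simulation code) reproduces the values of $\Ra / \Ta^{3/4}$ in table \ref{tab:rot_constraint} for $\Ra = 10^7$; the boundary value of $\approx 10$ is taken from \citet{king2013}, and the proposed $R_\rho$ dependence $f(R_\rho)$ of (\ref{eq:rrho_lim_value}) is deliberately not modelled here.

```python
# Evaluate King et al.'s regime parameter Ra / Ta^(3/4) for the Taylor
# numbers of table 1, with Ra = 1e7 as in all our runs.

Ra = 1.0e7

def regime_parameter(Ta):
    """Ra / Ta^(3/4); returns infinity for the non-rotating case Ta = 0."""
    return float("inf") if Ta == 0 else Ra / Ta ** 0.75

def classify(Ta):
    """Regime classification following King et al. (2013), threshold ~10."""
    p = regime_parameter(Ta)
    if p == float("inf"):
        return "non-rotating"
    return "rotationally constrained" if p <= 10.0 else "weakly rotating"

for Ta in [0, 1e5, 4e5, 1.11e6, 1e7, 4e7, 1.11e8, 1e9]:
    print(f"Ta = {Ta:9.3g}:  Ra/Ta^(3/4) = {regime_parameter(Ta):8.1f}  ({classify(Ta)})")
```

The computed values (1778, 629, 292, 56.2, 19.9, 9.2, 1.78) agree with the table.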
\subsubsection{Characterisation through $\Ro$}
As we have seen in the course of this paper, for $\Pran=1,\Le=0.1,R_\rho=1.3$ we have strongly constrained
convection for Taylor numbers $\Ta \in \{1.11 \cdot 10^8,10^9 \}$, weakly constrained convection for $\Ta \in \{ 10^7, 4 \cdot 10^7\}$ and non constrained convection for $\Ta \in \{0, 10^5, 4 \cdot 10^5, 1.11 \cdot 10^6 \}$. This is indicated by the
horizontal lines in table \ref{tab:rot_constraint}.
It is interesting to compare the Rossby numbers from table \ref{tab:taylor_rossby} to the constraint that rotation exerts on the flow in our simulations. For the stability ratio $R_{\rho} = 1.3$, we get the result that if
$\Ro<0.5$ in the bulk of the shell (indicated by the Rossby number at colatitude $\Lambda=\pi /6$ in table \ref{tab:rot_constraint}),
we have the case of rotationally strongly constrained convection. The weakly constrained (or transition) cases correspond to $0.5 < \Ro < 2$ while the non constrained cases correspond to
$\Ro > 2$.
It is interesting to note that the Reynolds stress correlations investigated in \citet{chan2001} and the structure of the temperature field shown in \citet{chan2007}, both obtained from so-called f-box simulations of rotating
convection with uniform composition and a fully compressible flow,
show a similar transition region at Coriolis numbers (which are defined as $1/\Ro$)
that correspond to exactly the same regime of Rossby numbers as in our case with $R_{\rho}=1.3$: a transition region for $0.5< \Ro<1$ and a rotation dominated
flow for $\Ro < 0.5$, both at equatorial co-latitude.
However, as in the case of King et al.'s model, the Rossby number alone seems to be insufficient for determining the effect of rotation on semiconvection. Again, $R_\rho$ proves to be a crucial factor.
For $R_\rho = 1.5$ rotation has a stronger influence than for $R_\rho=1.3$. An increase to $R_\rho=1.5$ seems to scale the Rossby numbers by a factor of about $0.5$, meaning that the effective Rossby number for $\Ta=4 \cdot
10^7$ would be $\Roeff=0.5$. With this value, it enters the rotationally strongly constrained regime. And indeed, for $\Ta=4 \cdot 10^7$ and $R_{\rho}=1.5$ we have no convection (see figure \ref{fig:nusselt}). So the stability ratio
also affects the Rossby number in a way that a higher $R_\rho$ leads to a lower effective Rossby number \Roeff:
\begin{equation}
\Roeff = \frac{\Ro}{f(R_\rho)}.
\end{equation}
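As a caveat, $f(R_\rho)$ is not determined by our data. The sketch below merely encodes the empirical observation above, namely that $R_\rho = 1.5$ roughly halves the effective Rossby number (i.e. it assumes $f(1.5) \approx 2$), together with the regime boundaries $\Roeff = 0.5$ and $\Roeff = 2$; the assumed values of $f$ are placeholders, not a fitted law.

```python
# Sketch of the effective-Rossby classification. The form of f(R_rho) is
# unknown; we only encode the empirical estimate that R_rho = 1.5 roughly
# halves the Rossby number (f ~ 2), with boundaries Ro_eff = 0.5 and 2.

def ro_eff(Ro, R_rho):
    f = {1.3: 1.0, 1.5: 2.0}[R_rho]   # assumed values, see caveat above
    return Ro / f

def regime(Ro_eff):
    if Ro_eff <= 0.5:
        return "strongly constrained"
    return "weakly constrained" if Ro_eff < 2.0 else "non constrained"

# Ta = 4e7 has Ro = 1.0 at colatitude pi/6 (table 1): convective for
# R_rho = 1.3 but pushed into the constrained regime for R_rho = 1.5.
print(regime(ro_eff(1.0, 1.3)), "/", regime(ro_eff(1.0, 1.5)))
```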
\subsection{The relationship between $\Nus$ and $\Nut$}
\subsubsection{The relationship between $\Nus$ and $\Nut$ for $\Ta=0$}
There exist different models for the relationship between the thermal and saline Nusselt number.
According to \citet{spruit_theory_2013} they are related via
\begin{equation}
\Nus - 1 = \frac{q}{\Le^{1/2} R_\rho} (\Nut - 1)
\label{eq:spruit_nusselt_theory}
\end{equation}
for $R_\rho < \Le^{-1/2}$. $q$ is a fit parameter. According to (32) of \citet{Rosenblum2011} the relationship is
\begin{equation}
\Nus - 1 \approx \frac{1}{\Le R_\rho } (\Nut - 1),
\label{eq:rosenblums_theory}
\end{equation}
which, especially for very low Lewis numbers, is in strong contrast to theoretical results from linear stability theory.
According to (42) and (43) of \citet{wood_2013}, the relation is given by
\begin{equation}
\Nus - 1 = \frac{B}{A} \frac{\Pran^{1/12}}{\Le} \Rat^{0.37-1/3} (\Nut - 1).
\label{eq:woods_theory}
\end{equation}
The latter is given for $\Pran \ll 1$ which is not the case in our simulations, so a deviation can be expected. Typical values for $A$ and $B$ are given as $A\approx 0.1$ and $B \approx 0.03$, so
$B/A \approx 0.3$.
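Before comparing with the data, the three predicted slopes $(\Nus-1)/(\Nut-1)$ can be evaluated for our reference parameters in a few lines. Identifying $\Rat$ with the $\Ra = 10^7$ of our runs, and using $q = 0.95$ and $B/A = 0.3$, are assumptions for this sketch.

```python
# Compare the slope (Nus - 1)/(Nut - 1) predicted by the three models
# for the reference parameters Pr = 1, Le = 0.1, R_rho = 1.3.

Pr, Le, R_rho = 1.0, 0.1, 1.3
Rat = 1.0e7   # assumption: thermal Rayleigh number taken equal to Ra

def slope_spruit(q=0.95):
    return q / (Le ** 0.5 * R_rho)

def slope_rosenblum():
    return 1.0 / (Le * R_rho)

def slope_wood(B_over_A=0.3):
    return B_over_A * Pr ** (1.0 / 12.0) / Le * Rat ** (0.37 - 1.0 / 3.0)

print(f"Spruit: {slope_spruit():.2f}, Rosenblum: {slope_rosenblum():.2f}, "
      f"Wood: {slope_wood():.2f}")
```

With these numbers Rosenblum et al.'s slope is by far the steepest, which is consistent with it missing our data below.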
These three models are tested here against our results of average thermal and saline Nusselt numbers.
The result for the simulation without rotation is seen in figure \ref{fig:theory_data}.
\begin{figure}
\begin{center}
\begin{minipage}[]{0.7\linewidth}
\includegraphics[width=\textwidth]{./Function_nusselts_t0_rrho13_pr1_onet0_upto_dts04_q095.png}
\end{minipage}
\caption{Average saline Nusselt number vs. average thermal Nusselt number for $\Ta=0$. The black dots are data points from our simulation, the (red) solid line is a plot of
(\ref{eq:spruit_nusselt_theory}) with $q=0.95$,
the (green) dotted line is a plot of (\ref{eq:rosenblums_theory}), the (blue) dash-dotted line is a plot of (\ref{eq:woods_theory}) with $B/A=0.3$, the (black)
dashed line a plot of (\ref{eq:woods_theory}) with $B/A=0.128$.}
\label{fig:theory_data}
\end{center}
\end{figure}
We see that Spruit's theoretical prediction (solid line in figure \ref{fig:theory_data})
lies in the vicinity of the data points but does not exactly reproduce them except in one area with a large number of data points at $\Nut \approx 6$.
This area of abundant data represents the system once it has reached its statistically stable
end state. The line of data points, on the other hand, represents the system while it is relaxing to its end state.
Rosenblum et al.'s model (dotted line in figure \ref{fig:theory_data}) does not fit our data. Since it does not have a fitting parameter, it cannot be adjusted to fit the data, either.
Wood et al.'s model does not fit our data with their proposed values for $A$ and $B$. It does, however, fit the data equally well as Spruit's model does when $A$ and $B$ are adjusted accordingly.
We can therefore conclude that Spruit's model and the adjusted version of Wood et al.'s model both make successful predictions for the ratio of saline and thermal Nusselt numbers in the parameter range that we simulated in the
non-rotating case.
However, they do so only after the system has reached its equilibrium state.
Taking Spruit's model as a starting point, we calculate the interval that $\Nus/\Nut$ has to lie in.
Starting from (\ref{eq:spruit_nusselt_theory}) after a few elementary transformations we get
\begin{equation}
\frac{\Nus}{\Nut} = \frac{q}{\Le^{1/2} R_\rho} - \frac{1}{\Nut} \left( \frac{q}{\Le^{1/2} R_\rho} - 1 \right) \equiv g(\Nut).
\end{equation}
The smallest possible value of $\Nut$ is one, which corresponds to the case of pure diffusion:
\begin{equation}
g(1) = \frac{q}{\Le^{1/2} R_\rho} - \frac{q}{\Le^{1/2} R_\rho} + 1 = 1.
\end{equation}
In the limit of $\Nut \rightarrow \infty$ we get
\begin{equation}
\lim_{\Nut \rightarrow \infty} g(\Nut) = \frac{q}{\Le^{1/2} R_\rho}.
\end{equation}
Provided that $q \ge \Le^{1/2}R_\rho$ the ratio of Nusselt numbers $g(\Nut)$ is therefore bounded by
\begin{equation}
1 \le \frac{\Nus}{\Nut} \le \frac{q}{\Le^{1/2} R_\rho}.
\end{equation}
In our reference case, where $\Le=0.1$ and $R_\rho=1.3$, this gives
\begin{equation}
1 \le \frac{\Nus}{\Nut} \le 2.43 \, q.
\end{equation}
Setting the fit parameter $q=0.95$, which is the value that fits the data in figure \ref{fig:theory_data}, we end up with
\begin{equation}
1 \le \frac{\Nus}{\Nut} \le 2.31.
\end{equation}
Our data confirms this for all times except for the initial plume phase, as can be observed from figures \ref{fig:thrice_taylor0} and \ref{fig:ratio_of_nusselts}. The maximal value of $\overline{\Nus}/\overline{\Nut}$ is $\approx 2.25$.
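The bound can also be checked numerically; the function $g$ below is simply the rewritten form of (\ref{eq:spruit_nusselt_theory}) with the values $q = 0.95$, $\Le = 0.1$, $R_\rho = 1.3$ used above.

```python
# Numerical check of the bound on Nus/Nut derived from Spruit's relation.

Le, R_rho, q = 0.1, 1.3, 0.95

def g(Nut):
    """Nus/Nut as a function of Nut (the rewritten Spruit relation)."""
    s = q / (Le ** 0.5 * R_rho)       # upper bound q / (Le^{1/2} R_rho)
    return s - (s - 1.0) / Nut

print(g(1.0))    # pure diffusion: the ratio is 1
print(g(1e9))    # large-Nut limit: approaches the upper bound ~2.31
```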
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Ratio_nusselts_t0_rrho13_pr1_onet0_upto_dts01.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Ratio_nusselts_t0_rrho13_pr1_onet0_upto_dts04.png}
\end{center}
\end{minipage}
\caption{Ratio of average saline and thermal Nusselt number vs. simulation time up to $t=0.1$ (left) and up to $t=0.4$ (right) for $\Ta=0$. The state of the system can be
separated into different regions of sharply rising and slowly declining ratio of Nusselt numbers that correspond to states of layered convection, boundary layer creation and overturning convection.}
\label{fig:ratio_of_nusselts}
\end{figure}
The plots show another interesting result: the ratio of Nusselt numbers appears to be a good indicator of the state of the flow.
After the plumes have broken, a convective layer is established (at $t\approx 0.013$ in
figure \ref{fig:ratio_of_nusselts}). The thickness of the layer $d_s$ increases with time until the top of the layer reaches the upper boundary of the shell (at $t\approx 0.034$ ). Then, thermal and saline diffusive
boundary layers at the shell boundary are established. This is indicated by a rise of $\overline{\Nus} / \overline{\Nut}$. We suspect that if we had a second layer on top,
these would then start to merge at this point. But since we have imposed boundary
conditions there, the state of the system relaxes to a homogeneous convective layer embedded between diffusive transition ranges at both top and bottom, a state that is reached at $t \approx 0.2$ .
We note here that during layer formation and extension $q$ and, likewise, $A$ and $B$ in the models of Spruit and Wood et al., respectively, may not remain constant and their values may not be the same for differently sized stacks
of layers or during a ``merging process'' or for different boundary conditions.
Whether and how rotation and changes of $\Pran$ and $R_\rho$ influence $\overline{\Nus} / \overline{\Nut}$ will be investigated next.
\subsubsection{Influence of rotation on the relationship between $\Nus$ and $\Nut$}\label{sss:ratio_nusselts_rotation}
The first row of figure \ref{fig:nusselt_relation} shows $\overline{\Nus} / \overline{\Nut}$ against the simulation time for different rotation rates up to $t=0.2$ (left) and up to $t=1$ (right).
\begin{figure}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho13_pr1_bunt_disruptedline_bis_dts02.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho13_pr1_bunt_disruptedline.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho13_pr05_bunt_disruptedline_bis_dts02.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho13_pr05_bunt_disruptedline_bis_dts1.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho15_pr1_bunt_disruptedline_bis_dts02.png}
\end{center}
\end{minipage}
\begin{minipage}{0.5\linewidth}
\begin{center}
\includegraphics[width=\textwidth]{./Relation_nusselt_rrho15_pr1_bunt_disruptedline_bis_dts1.png}
\end{center}
\end{minipage}
\caption{The ratio of saline and thermal Nusselt numbers for the three parameter pairs $(\Pran=1, R_\rho=1.3)$ (upper row), $(\Pran=0.5, R_\rho=1.3)$ (middle row) and $(\Pran=1, R_\rho=1.5)$ (lower row) for simulation
times up to $t=0.2$ (left column) and $t=1$ (right column).}
\label{fig:nusselt_relation}
\end{figure}
For no rotation, at low simulation times, the ratio of saline and thermal Nusselt numbers rises monotonically with a high slope until it relaxes to a slightly falling ``plateau of (slowly growing) layered convection''
only to sharply rise again and relax to a statistically continuous value. Rotation seems to stretch the plateau which exists while the thickness $d_s$ of the layer grows, turns it into
more of a sink, and makes it much wider for higher rotation rates. For the highest rotation rates,
the ones where the final state is one of diffusion, there is no plateau/sink at all; the ratio of Nusselt numbers falls monotonically and approaches one.
The maximum value for $\overline{\Nus} / \overline{\Nut}$ does not change for $\Ta \le 1.11 \cdot 10^6$. For $\Ta=10^7$ it decreases from $\approx 2.25$ obtained for all lower Taylor numbers to $\approx 2$. For
$\Ta=4 \cdot 10^7$ it decreases further to $\approx 1.8$. For $\Ta=1.11 \cdot 10^8$ the ratio of Nusselt numbers reaches its global maximum not after $t\approx 0.05$ but before. This is before the system is in statistical
equilibrium. When it reaches the equilibrium state, convection is suppressed and both Nusselt numbers tend to one. The same is true for $\Ta=10^9$. \\
Since fast rotation does have a significant effect on the ratio of Nusselt numbers and hence the flux ratio, a measure of the rate of rotation has to enter the theoretical prediction (\ref{eq:spruit_nusselt_theory}) or the fit
formula (\ref{eq:woods_theory}).
\subsubsection{Modifying $\Pran$ and $R_\rho$}
Looking at the middle and last row of figure \ref{fig:nusselt_relation} we get the same result that was seen in sections \ref{sec:modifying_pr_and_rrho} to \ref{sec:modifying_pr_and_rrho_at_higher_rotation_rates}. Lowering the Prandtl
number decreases the influence
that a higher rotation rate has on the stability of
semiconvection. While for $\Ta=4\cdot 10^7$ the asymptotic state of the ratio of Nusselt numbers occurs at $t \approx 0.7$, for a Prandtl number half as high, the corresponding case with $\Ta=8\cdot 10^7$ reaches the asymptotic state
already at $t \approx 0.35$. So the phases of layered convection and the creation of a diffusive boundary layer happen on a shorter time scale for lower Prandtl numbers. Increasing $R_{\rho}$ to $1.5$, on the other hand,
slows down the time development for $\Ta = 10^7$ and turns $\Ta = 4 \cdot 10^7$ into a diffusive case where $\Nus/\Nut$ drops to 1 after an early maximum value. The saturation of $\Nus/\Nut$ for the non-diffusive cases occurs at the same
time as for $\Nus$.
\subsection{The influence of rotation on the lifetime of a layer}
Next, we take note of the time at which the
thermal Nusselt number reaches its asymptotic value. We choose the thermal Nusselt number because it has the sharpest kink when reaching the statistically stable state.
The results are summarised in table \ref{tab:asymptotic_times}. The times were taken from figure \ref{fig:dts01udts1}.
\begin{table}
\begin{center}
\begin{tabular}{rccc}
$\Ta \cdot \Pran$ \quad & \multicolumn{3}{c}{$t_{\mathrm{asymptotic}}$} \\[3pt]\hline
& \multicolumn{1}{l}{ $\quad \Pran = 1, R_\rho = 1.3 \quad $} & $\Pran = 1, R_\rho = 1.5$ \quad & $\Pran = 0.5, R_\rho = 1.3$ \quad \\ \hline
\multicolumn{1}{r}{0} & 0.056 & n/a & n/a \\
$1 \cdot 10^5$ & 0.056 & n/a & n/a \\
$4 \cdot 10^5$ & 0.059 & \multicolumn{1}{c}{0.110} & \multicolumn{1}{c}{0.046} \\
$1.11 \cdot 10^6$ & 0.060 & \multicolumn{1}{c}{0.124} & \multicolumn{1}{c}{0.048} \\
$1 \cdot 10^7$ & 0.132 & \multicolumn{1}{c}{0.428} & \multicolumn{1}{c}{0.106} \\
$4 \cdot 10^7$ & 0.724 & $\infty$ & \multicolumn{1}{c}{0.328} \\
$1.11 \cdot 10^8$ & \multicolumn{1}{c}{$\infty$} & $\infty$ & $\infty$ \\
$1 \cdot 10^9$ & \multicolumn{1}{c}{$\infty$} & $\infty$ & $\infty$ \\
\end{tabular}
\end{center}
\caption{Time in thermal diffusion time scales when thermal Nusselt number reaches the statistically stable asymptotic state. n/a means that we did not run simulations for these parameters. $\infty$ means that there is no
statistically stable convective state for these parameters.}
\label{tab:asymptotic_times}
\end{table}
Obviously, increasing $R_{\rho}$ delays the time development of $\Nut$ while lowering $\Pran$ accelerates it.
But since these are far too few data points to infer an underlying law, we restrict ourselves to just listing them.
\section{Summary and Outlook}\label{sec:conclusion}
We have studied the influence of rotation on semiconvection in a three dimensional spherical shell. First, we have run simulations without rotation with $\Pran=1$ and a stability ratio $R_\rho=1.3$
to set up a reference calculation.
Then, we compared simulations with different rates of rotation to the non-rotating case and compared the values for thermal and saline Nusselt numbers and kinetic energies.
We concluded that slow rotation has hardly any influence on layer formation while fast rotation suppresses convective transport completely. At intermediate rotation rates
the temporal development of layers may take an entire thermal diffusion time scale at some critical $\Ta$. This is why short simulation times can be highly misleading as there may be a long relaxation phase.
At low rotation rates
the relaxation of $\Nut$ to its quasi-equilibrium value occurs on a much smaller timescale than for $\Nus$.
Furthermore, for higher $\Ta$ the equilibrium state is no longer quasi-adiabatic but becomes increasingly more
superadiabatic with ever more extended, diffusive transition regions at the layer boundaries.
We have also compared results from simulations with a modified Prandtl number or a modified stability ratio $R_\rho$ and conclude that
the critical value of the Taylor number $\Tac$ at which convection is suppressed
depends weakly on $\Pran$ and strongly on $R_{\rho}$. For lower $\Pran$ the equilibrium state requires less (turbulent) kinetic energy, since less viscous friction occurs which converts kinetic energy into heat.
For higher $R_{\rho}$ the maximum $\Ta$ for which convection
develops, drops, so the critical $R_{\rho}$ in the sense of \citet{spruit_theory_2013} and \citet{radko2003}
should depend on $\Ta$. We have also compared our results with the three regimes of rotational constraints that \citet{king2013} suggested and conclude that they are a valid means of classifying the effect of rotation on
semiconvection but have to be extended by a dependence on $R_\rho$. Similarly, if the Rossby number is used as a measure of the influence of rotation, it has to be extended by a dependence on $R_\rho$ as well.
We have studied the relationship of $\Nut$ and $\Nus$ which seems to be a good indication for the state of the flow in terms of layered convection, boundary layer creation, relaxation to thoroughly mixed convection and the final
state of overturning convection.
For the case without rotation, we compared our data with model predictions by \citet{spruit_theory_2013}, \citet{Rosenblum2011} and \citet{wood_2013}.
For the equilibrium state of the system, Spruit's model fits our data perfectly, while Wood et al.'s model does so only after readjusting their fitting parameters. Rosenblum et al.'s model does not deliver a satisfactory fit to our
data. For the rotating case, the models have to be readjusted, however, since fast rotation is found to significantly affect the ratio of Nusselt numbers.
Our work is a step on the ladder of correctly understanding the influence of double-diffusive convection on the heat and solute transport in rapidly rotating systems like some stars and planets. We already
know that semiconvection significantly constrains heat transport. If we now take into account that high rates of rotation can further decrease the effective heat and solute fluxes, the assumed
overall heat flux in a rapidly rotating system undergoing double-diffusive convection has to be lowered even further when modeling planets or stars. Future work could include a more precise study of the influence
of rotation on the lifetime of a layer and an investigation of the opposite regime of double-diffusive convection: that of salt-fingering. \\
PB \& FK are grateful to financial support from the Austrian Science Fund (FWF) through project P25229-N27. RH is supported by STFC grant ST/K000853/1.
\bibliographystyle{chicago}
\section{Introduction}
\label{sec:intro}
Face anti-spoofing (FAS) systems have been successfully established in face authentication, and widely used in online banking, electronic payments, and securities as a crucial technique. Despite its substantial success, FAS still shows vulnerability to various presentation attacks (PAs) such as printed materials, replay-videos, and 3D-masks. To alleviate such vulnerability, previous deep learning-based FAS methods ~\cite{Liu_2018_CVPR, yu2020searching} learn discriminative features for distinguishing real faces against PAs, and such methods mostly treat the FAS problem as a binary classification of whether a given face is real or a spoof, as shown in Fig.~\ref{fig:fig1}(a).
However, such binary classification-based approaches suffer from non-trivial attacks because they are prone to an over-fitting to the training data, resulting in poor generalization~\cite{Liu_2018_CVPR}. To mitigate the over-fitting problem, regression-based methods~\cite{feng2018prn,MEGC,yu2021dual,yu2020searching} have been proposed, which find sparse evidence for known spoof types and generalize to unseen attacks. For regression-based neural networks, two approaches are considered:
First, pseudo-define based supervision~\cite{DBEL,MEGC,yu2021dual,yu2020searching,Liu_2018_CVPR} is designed for context-agnostic discrimination describing the local cues from the pixel level, such as the depth and reflection.
For example, a pseudo-map based CNN~\cite{feng2018prn} utilizes pseudo-depth supervision using the mean square error (MSE) to reconstruct a sparse depth-map and a flattened-map for a real and spoof image, respectively, as illustrated in Fig.~\ref{fig:fig1}(a).
Secondly, user-define based supervision~\cite{ordinalreg, Wang_2022_CVPR} is designed for constrained learning using the relative distances among real and PAs to improve the generalization ability. For instance, ordinal regression~\cite{ordinalreg} introduces user-defined ordinal labels. Based on the user-defined labels, the model is trained to finely constrain the relative distances among the features of different spoof categories within the latent variable. Another example is PatchNet~\cite{Wang_2022_CVPR}, which subdivides binary labels (a real or a spoof) into fine-grained labels (reals or spoofs). Despite previous efforts, we found that pseudo-define based supervisions depend on the accuracy of additional works (e.g., depth-~\cite{feng2018prn} and texture-based~\cite{zhang2018single}),
and that user-define based supervision relies on user-specified guides whose correctness is not guaranteed.
\begin{figure}[]
\centering
{\includegraphics[width=\columnwidth]{images/fig_0_pdf.pdf}}\vspace{-1.3em}
\caption{Comparison between previous methods and our method for face anti-spoofing. (a) Previous methods utilize either binary supervision to detect spoof cues, or pseudo depth supervision, or both. (b) Our method discretizes binary labels and exchanges real and spoof images for our expected liveness score. The discretized label $\lambda$ indicates the ratio of a real image over an image.}
\label{fig:fig1}
\end{figure}
\begin{figure*}[htb]
\includegraphics[width=\textwidth]{images/fig_2_pdf.pdf}\\[-0.1pt]
\vspace{-2.5em}
\caption{Overview of our approach for a value regression neural network. Our framework consists of a label encoding (PDLE) for the data and label expansion, an encoder network for the feature extractor, an expected liveness score estimator for the regression network learning, and a discriminator for the domain-invariant latent-variable learning.}
\label{fig:fig2}
\end{figure*}
In this paper, as described in Fig.~\ref{fig:fig1}(b), we introduce a discretized label encoding for increasing the data distribution and generating data relationships, which has no dependencies on prior works. For the proposed label encoding, we present a novel pre-processing method, called the pseudo-discretized label encoding (PDLE) scheme, in which an image is randomly selected in a mini-batch, the oppositely labeled image is then arbitrarily chosen from the whole batch, and parts of the images are exchanged to generate a new image and its discretized dense label.
Our main contributions are as follows:
\begin{itemize}
\item We re-formulate face anti-spoofing as a value regression problem that directly optimizes a deep neural network with mean square error for improving performance, instead of using binary cross-entropy.
\item We propose a simple yet effective pseudo-discretized label encoding (PDLE), which forces the regression network to represent the ratio of real-image information to that of the given input image for the prediction of the liveness score.
\item We conduct extensive experiments, and obtain state-of-the-art and outstanding performance on the intra- and cross-dataset evaluations, respectively.
\end{itemize}
\section{Proposed Method}
\label{sec:method}
\subsection{Overview}
For an expansion of the training image and label distribution without information corruption, we introduce a discretized label encoding scheme that preserves the spoof and real cues in the images and indicates the amount of real-image information over that of the input image. To leverage the PDLE, we propose learning a value regression neural network using the MSE between the expected liveness scores and the pseudo-labels. In addition, we apply a domain-invariant learning scheme (GRL)~\cite{ganin2015unsupervised} as an adversarial training to our regression neural network using the domain labels. The framework of our method is illustrated in Fig.~\ref{fig:fig2}.
\subsection{Pseudo-Discretized Label Encoding}
We assume that $X=\{x_{s},x_{r}\}\in $ \(\mathbb{R}^{H\times W\times 3} \) and $Y=\{y_{s}=0.0,y_{r}=1.0\}$ denote the spoof and real color image space and the class label space, respectively. To sample the discretized labels between $y_{s}$ and $y_{r}$, we use the following formula:
\begin{equation}
\begin{aligned}
u &\sim \mathcal{U}\{1, K\}, \\
\lambda &= \frac{u}{K},
\end{aligned}
\label{eq:eq1}
\end{equation}
where $u$ is sampled from the discrete uniform distribution $\mathcal{U}\{1, K\}$, and $K$ is a pre-defined discretization level, the cardinality of the encoded label set $\tilde{Y}$, and the number of outputs of the last $FC$ in Fig.~\ref{fig:fig2}. $\lambda$ is a pseudo-discretized label representing the amount of a partial real image over a whole image. Inspired by CutMix~\cite{yun2019CutMix}, we first exchange a real image and a spoof image through a random rectangular box as follows:
\begin{equation}
\begin{aligned}
\tilde{x} &= M\odot x_{a} + (1-M)\odot x_{b}, \text{where } y_{a} \neq y_{b}\\
\tilde{y} &=
\begin{cases}
1-\lambda,& \text{if } x_{a} = x_{r}\\
\lambda, & \text{otherwise},
\end{cases}
\end{aligned}
\label{eq:eq3}
\end{equation}
where $M \in \{0,1\}^{H\times W}$ is a random rectangular mask based on $\lambda$, with $0$ and $1$ indicating inside and outside the mask. $\odot$ is an element-wise multiplication operator, and $x_{a}$ is an anchor to choose a sample from a mini-batch, whereas $x_{b}$ is the opposite sample selected from the entire training set. $\tilde{x}$ indicates the exchanged image, and $\tilde{y}$ is the pseudo-discretized label determined based on whether $x_{a}$ is a real image or not. We exchange between images with different labels ($y_{a} \neq y_{b}$) to expand data and label distribution. As shown in Fig.~\ref{fig:fig2}, we use $\tilde{X} \in (x_{s}, \tilde{x}, x_{r})$ and $\tilde{Y} \in (y_{s}, \tilde{y}, y_{r})$ as the training data and the supervision for the regression network to learn the liveness score.
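For concreteness, the PDLE exchange of (\ref{eq:eq1}) and (\ref{eq:eq3}) can be sketched as follows. The exact placement and aspect ratio of the rectangular box $M$ are our assumptions, since only its area fraction is specified above; the returned label is the fraction of the output image that originates from the real sample.

```python
import numpy as np

def pdle_exchange(x_a, x_b, a_is_real, K=10, rng=None):
    """Sketch of the PDLE exchange: swap a rectangular region between an
    image pair with opposite labels and return the pseudo-discretized label.

    x_a, x_b  : HxWx3 arrays with opposite labels (one real, one spoof).
    a_is_real : True if x_a is the real image.
    """
    if rng is None:
        rng = np.random.default_rng()
    H, W, _ = x_a.shape
    u = rng.integers(1, K + 1)                 # u ~ U{1, K}
    lam = u / K                                # pseudo-discretized label
    cut_h = int(round(H * np.sqrt(lam)))       # box of area ~ lam * H * W
    cut_w = int(round(W * np.sqrt(lam)))
    cy = rng.integers(0, H - cut_h + 1)        # assumed: uniform placement
    cx = rng.integers(0, W - cut_w + 1)
    mask = np.ones((H, W, 1))                  # 1 outside the box (keep x_a)
    mask[cy:cy + cut_h, cx:cx + cut_w] = 0.0   # 0 inside the box (take x_b)
    x_tilde = mask * x_a + (1.0 - mask) * x_b
    inside = 1.0 - mask.mean()                 # realized area fraction
    y_tilde = 1.0 - inside if a_is_real else inside
    return x_tilde, y_tilde
```

In practice the realized label is computed from the integer-rounded box so that image and supervision stay exactly consistent.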
\subsection{Expected Liveness Score}
Let $\mathbb{P}:\mathbb{R}^{H\times W \times 3} \rightarrow \mathbb{R}^{K}$ denote the probability of the liveness evidence estimated using
$SoftMax$, $FC_{k}$, and $Encoder(\tilde{X})$, as illustrated in Fig.~\ref{fig:fig2}. We employ $K$ in Eq.~\ref{eq:eq1} to formulate a
random variable $C$ with a finite list $\{c_{0}, ..., c_{K}\}$ whose $i^{th}$ element $c_{i}$ is denoted as follows:
\begin{equation}
\begin{aligned}
c_i &=
\begin{cases}
0.0,& \text{if } i = 0\\
interval\times i ,& \text{if } i > 0 \text{ and } i < K\\
1.0,& \text{if } i = K\\
\end{cases}
\end{aligned}
\label{eq:eq4}
\end{equation}
where $interval=\frac{y_{r} - y_{s}}{K}$. The random variable $c_{i}$ and its probability $p_{i}$ are exploited to calculate the expected liveness score as follows:
\begin{equation}
\mathbb{E}[C] = \rho = \sum_{i=0}^{K} c_{i} \cdot p_{i},
\label{eq:eq5}
\end{equation}
where $p_{i}$ is the $i^{th}$ element of $P$, which is the predicted probability vector of real cues from the input $\tilde{X}$. We denote $\mathbb{E}[C]$ by $\rho$, which is calculated as the sum over the element-wise multiplication between the random variable values and their corresponding probabilities.
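A minimal sketch of the score computation in (\ref{eq:eq4}) and (\ref{eq:eq5}) is given below. Since the sum runs over $i = 0, \dots, K$, we assume here that the last $FC$ layer outputs $K+1$ logits, one per support point $c_i$.

```python
import numpy as np

def liveness_score(logits, K):
    """Expected liveness score rho = sum_i c_i * p_i.

    Assumption: len(logits) == K + 1, matching the K + 1 support points
    c_0 = 0, c_1 = 1/K, ..., c_K = 1.
    """
    logits = np.asarray(logits, dtype=float)
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # softmax over the evidence bins
    c = np.arange(K + 1) / K            # support points of C
    return float(np.sum(c * p))
```

Uniform evidence yields the mid score $0.5$, while evidence concentrated in the last (first) bin drives the score towards $1$ ($0$).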
\subsection{Objective Function}
Our objective function is defined as follows:
\begin{equation}
L^{\rho}_{mse} = \frac{1}{N}\sum_{j=1}^{N}(\rho_{j} - \tilde{Y}_{j})^{2},
\label{eq:eq6}
\end{equation}
where $N$ is a mini-batch size, and $\tilde{Y}_{j}$ and $\rho_{j}$ are the $j^{th}$ supervision and expected liveness score in the mini-batch. We calculate the distance between $\tilde{Y}_{j}$ and $\rho_{j}$ for our main objective function $L^{\rho}_{mse}$.
To further improve the performance, we exploit not only the regression network but also an adversarial learning technique, the gradient reversal layer (GRL)~\cite{ganin2015unsupervised}.
Finally, our overall loss function can be formulated as follows:
\begin{equation}
\begin{aligned}
L_{final} &= \alpha \, L^{\rho}_{mse} + (1-\alpha) \, L_{adv},
\end{aligned}
\label{eq:eq7}
\end{equation}
where $L^{\rho}_{mse}$ is the liveness score-based regression training loss and $L_{adv}$ is an adversarial training loss for jointly training our liveness score-based regression neural network. $\alpha$ is a non-negative parameter that balances the importance of the two losses, and we empirically set $\alpha$ to $0.5$.
\begin{table}[b]
\centering
\caption{Evaluation results for ACER (\%) in comparison with the previous methods and the proposed \textbf{PDLE} approach within the intra-dataset (OULU-NPU protocols).}
\label{tab:tab1}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c||c|c|c|c|}
\hline
\multirow{2}{*}{Method} & Protocol 1 & Protocol 2 & Protocol 3 & Protocol 4 \\ \cline{2-5}
& ACER(\%) & ACER(\%) & ACER(\%) & ACER(\%) \\ \hline \hline
Auxiliary~\cite{Liu_2018_CVPR} & 1.6 & 2.7 & 2.9±1.5 & 9.5±6.0 \\ \hline
CDCN~\cite{yu2020searching} & 1.0 & 1.45 & 2.3±1.4 & 6.9±2.9 \\ \hline
FaceDs~\cite{eccv18jourabloo} & 1.5 & 4.3 & 3.6±1.6 & 5.6±5.7 \\ \hline
DC-CDN~\cite{yu2021dual} & 0.4 & 1.3 & 1.9±1.1 & 4.3±3.1 \\ \hline
LMFD-PAD~\cite{LMFDPAD} & 1.5 & 2.0 & 3.4±3.1 & 3.3±3.1 \\ \hline
NAS-FAS~\cite{yu2020nasfas} & 0.2 & \textbf{1.2} & 1.7±0.6 & 2.9±2.8 \\ \hline
PatchNet~\cite{Wang_2022_CVPR} & \textbf{0} & \textbf{1.2} & 1.18±1.26 & 2.9±3.0 \\ \hline
\rowcolor{Gray} Ours & \textbf{0} & \textbf{1.2} & \textbf{0.96±1.03} &\textbf{ 0.63±1.04} \\ \hline
\end{tabular}%
}
\end{table}
\section{Experiments}
\label{sec:expr}
We demonstrate the effectiveness of the proposed approach on intra- and cross-dataset testing. Based on the experimental results, we discuss the characteristics of our algorithm in this section.
\begin{table*}[]
\caption{Comparison results of cross-domain testing on MSU-MFSD (M), CASIA-MFSD (C), Replay-Attack (I), and OULU-NPU (O). PE and LE mean patch-exchange and label-encoding, respectively.}
\label{tab:tab3}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c||cc||cc||cc||cc||}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c||}{O\&C\&I to M} & \multicolumn{2}{c||}{O\&M\&I to C} & \multicolumn{2}{c||}{O\&C\&M to I} & \multicolumn{2}{c||}{I\&C\&M to O} \\ \cline{2-9}
& \multicolumn{1}{c|}{HTER(\%)} & AUC(\%)& \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) & \multicolumn{1}{c|}{HTER(\%)} & AUC(\%) \\ \hline \hline
NAS-FAS~\cite{yu2020nasfas} & \multicolumn{1}{c|}{19.53} & 88.63 & \multicolumn{1}{c|}{16.54} & 90.18 & \multicolumn{1}{c|}{14.51} & 93.84 & \multicolumn{1}{c|}{13.80} & 93.43 \\ \hline
NAS-FAS w/ D-Meta ~\cite{yu2020nasfas} & \multicolumn{1}{c|}{16.85} & 90.42 & \multicolumn{1}{c|}{15.21} & 92.64 & \multicolumn{1}{c|}{11.63} & 96.98 & \multicolumn{1}{c|}{13.16} & 94.18 \\ \hline
DRDG~\cite{georgecvpr2021} & \multicolumn{1}{c|}{12.43} & 95.81 & \multicolumn{1}{c|}{19.05} & 88.79 & \multicolumn{1}{c|}{15.56} & 91.79 & \multicolumn{1}{c|}{15.63} & 91.75 \\ \hline
ANRL~\cite{ANRL} & \multicolumn{1}{c|}{10.83} & 96.75 & \multicolumn{1}{c|}{17.83} & 89.26 & \multicolumn{1}{c|}{16.03} & 91.04 &
\multicolumn{1}{c|}{15.67} & 91.90 \\ \hline
LMFD-PAD~\cite{LMFDPAD} & \multicolumn{1}{c|}{10.48} & 94.55 & \multicolumn{1}{c|}{12.50} & 94.17 & \multicolumn{1}{c|}{18.49} & 84.72 & \multicolumn{1}{c|}{12.41} & 94.95 \\ \hline
CAFD~\cite{CAFD} & \multicolumn{1}{c|}{11.64} & 95.27 & \multicolumn{1}{c|}{17.51} & 89.98 & \multicolumn{1}{c|}{15.08} & 91.92 & \multicolumn{1}{c|}{14.27} & 93.04 \\ \hline
DBEL~\cite{DBEL} & \multicolumn{1}{c|}{8.57} & 95.01 & \multicolumn{1}{c|}{20.26} & 85.80 & \multicolumn{1}{c|}{13.52} & 93.22 & \multicolumn{1}{c|}{20.22} & 88.48 \\ \hline
SSDG-R~\cite{Jia_2020_CVPR_SSDG} & \multicolumn{1}{c|}{7.38} & 97.17 & \multicolumn{1}{c|}{10.44} & 95.94 & \multicolumn{1}{c|}{11.71} & 96.59 & \multicolumn{1}{c|}{15.61} & 91.54 \\ \hline
SSAN-R~\cite{ssan} & \multicolumn{1}{c|}{6.67} & 98.75 & \multicolumn{1}{c|}{\textbf{10.00}} & \textbf{96.67} & \multicolumn{1}{c|}{8.88} & 96.79 & \multicolumn{1}{c|}{13.72} & 93.63 \\ \hline
PatchNet~\cite{Wang_2022_CVPR} & \multicolumn{1}{c|}{7.10} & 98.46 & \multicolumn{1}{c|}{11.33} & 94.58 & \multicolumn{1}{c|}{13.40} & 95.67 & \multicolumn{1}{c|}{11.82} & 95.07 \\ \hline
Ours w/o PE\&LE & \multicolumn{1}{c|}{10.83} & 94.58 & \multicolumn{1}{c|}{15.08} & 91.14 & \multicolumn{1}{c|}{14.50} & 93.55 & \multicolumn{1}{c|}{13.88} & 93.16 \\ \hline
Ours w/o PE & \multicolumn{1}{c|}{10.41} & 94.93 & \multicolumn{1}{c|}{13.59} & 91.04 & \multicolumn{1}{c|}{11.17} & 93.92 & \multicolumn{1}{c|}{12.50} & 94.35 \\ \hline
Ours w/o LE & \multicolumn{1}{c|}{9.58} & 94.47 & \multicolumn{1}{c|}{12.47} & 92.28 & \multicolumn{1}{c|}{12.25} & 94.55 & \multicolumn{1}{c|}{13.29} & 93.62 \\ \hline
\rowcolor{Gray} Ours & \multicolumn{1}{c|}{\textbf{5.41}} & \textbf{98.85} & \multicolumn{1}{c|}{10.05} & 94.27 & \multicolumn{1}{c|}{\textbf{8.62}} & \textbf{97.60} & \multicolumn{1}{c|}{\textbf{11.42}} & \textbf{95.52} \\ \hline
\end{tabular}%
}
\end{table*}
\subsection{Datasets and Metrics}
\textbf{Datasets.} We utilized four public datasets, CASIA-FASD (labeled C)~\cite{casiamfsd}, OULU-NPU (labeled O)~\cite{oulunpu}, MSU-MFSD (labeled M)~\cite{msumfsd}, and Replay-Attack (labeled I)~\cite{replayattak} for our experiments. OULU-NPU is a high-resolution database with four protocols for verifying the improved performance on the intra-dataset. The videos of each dataset are saved under different scenarios with various devices and subjects, and they are employed for cross-dataset testing to validate the generalization ability for testing examples with unconstrained distribution shifts.
\textbf{Evaluation Metrics.}
We employed average classification error rate (ACER) for the intra-dataset testing on OULU-NPU. The half total error rate (HTER) and area under curve (AUC) are measured for the cross-dataset testing protocols.
\subsection{Implementation Details}
\textbf{Primitive Data Preparation and Augmentation.}
Because the four FAS datasets are in video format, we extracted images at certain intervals. After obtaining the images, we used RetinaFace~\cite{deng2019retinaface} to detect faces, and then cropped and resized the color image to a resolution of 256$\times$256. Data augmentation, including horizontal flipping and random cropping, was used for training, and center cropping was employed for testing.
We empirically set $K$ to 10 for our approach after testing various values of $K$, as depicted in Fig.~\ref{fig:graph1}.
\textbf{Experimental Setting.}
To train the FAS task, we used ResNet18~\cite{resnet18} as the encoder with the Adam optimizer under an initial learning rate and weight decay of 1e-4 and 2e-4, respectively, for all testing protocols. We trained the models with a batch size of 32 and a maximum of 200 epochs, while decaying the learning rate via an exponential schedule with a gamma of 0.99. For the domain labels on the intra-dataset, we used the number of sessions in each protocol.
\subsection{Intra-Dataset Testing on OULU-NPU}
OULU-NPU has four protocols for evaluating the generalization ability under mobile scenarios with previously unseen sensors and spoof types. As shown in Table~\ref{tab:tab1}, our PDLE approach presents the best performance for all protocols, and the expected liveness scores clearly validate the ability to generalize better latent embedding features. In particular, our proposed PDLE achieves a significant performance improvement for protocol 4 (unseen lighting, spoof type, and sensor type).
The results demonstrate the effectiveness of training a liveness score-based regression neural network using the amount of swapping as pseudo-discrete labels. Note that our proposed PDLE improves the overall ACER performance over the previous SOTA approach (PatchNet~\cite{Wang_2022_CVPR}).
\subsection{Cross-Dataset Testing}
To evaluate our proposed method, we select three out of four datasets for training and use the remaining one for testing, denoted by $\{\cdot\&\cdot\&\cdot\}$ to $\{\bullet\}$. We compare our proposed method with the latest methods as shown in Table~\ref{tab:tab3}. With our method, the O\&C\&I to M, O\&C\&M to I, and I\&C\&M to O protocols show the best performance, and the remaining protocol, O\&M\&I to C, displays very competitive performance.
By split testing on each capture device in dataset C, we found that our method shows relatively low performance on low-quality images (93.73\% AUC) compared to normal-quality (94.79\% AUC) and high-quality (96.47\% AUC) images.
Nevertheless, these results suggest that the proposed method achieves satisfactory performance on all protocols because our liveness score-based regression network estimates the probabilities of real cues under various presentation attacks.
\subsection{Ablation Study}
We conducted ablation studies on cross-dataset testing to explore the contribution of each component in our method, as depicted in Table~\ref{tab:tab3}. To analyze the effect of discretization, we separated the proposed PDLE into patch exchange (PE) and label encoding (LE). We confirmed that each of them is an essential element for improving performance, and we observed the best performance when both were used.
In addition, we verified the influence of the pre-defined $K$ in PDLE for determining the representation power of the liveness against an input image. As shown in Fig.~\ref{fig:graph1}, we tested various values of $K$ on the O\&C\&M to I protocol to investigate the impact of $K$ on AUC. With $K$ between $2$ and $17$, our method outperforms the baseline.
\begin{figure}[htp]
\centering
{\includegraphics[width=\columnwidth]{images/fig_3_pdf.pdf}}\vspace{-1.2em}%
\caption{Ablation study on the discretized level $K$.}
\label{fig:graph1}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed the PDLE approach for training a face anti-spoofing regression model. The regression model uses the predicted probabilities to estimate our liveness score. Our approach not only has the effect of data augmentation, because images with different labels and domains are densely exchanged, but also creates new data combinations, which results in improved domain generalization. Through our experiments, we confirm the effectiveness, robustness, and generalization of the proposed PDLE and the expected liveness score.
\section{Introduction}
\label{sec:intro}
Gaussian process (GP) models are widely popular nonlinear models ubiquitous in spatial statistics \cite{gelfand2010handbook, stein2012interpolation} and commonly utilized in machine learning applications \cite{rasmussen2006gaussian, liu2020gaussian}.
Gaussian process regression (GPR) tractably accounts for the correlation among all observed data, making it a favored model for the interpolation of highly-nonlinear responses.
GPR is popular in practice due to its sample efficiency and closed-form posterior distribution inference.
Consequently, GPs are widely employed in applications where observations are expensive to record or simulate and where overconfident prediction or extrapolation has a high cost.
A process is said to follow a GP if any finite set of $n$ realizations of the process follows a multivariate normal distribution.
We conventionally assume that the covariance has a parametric form determined by a pairwise \emph{kernel function} $K_\theta(\cdot, \cdot)$ that depends upon hyperparameters $\theta$.
Unfortunately, learning $\theta$ is typically difficult and computationally expensive.
Computing and storing the covariance matrix has $O(n^2)$ cost, while realizing GPR predictions and evaluating the likelihood required in the conventional training of $\theta$ have $O(n^3)$ cost.
Consequently, training a full GP model is prohibitively expensive for large $n$ using common hardware.
The design of computationally efficient methods of Gaussian process estimation is an active area of research.
Many approximate methods for GP inference attempt to sparsify the correlation matrix.
Some approaches partition the domain into spatially contiguous partition blocks and estimate independent models within those blocks \cite{gramacy2007tgp, fuentes2002spectral, sang2011covariance}.
Alternately, others randomly sample data partitions across the domain for independent Bayesian stationary GP analyses and subsequently use the geometric median of the subset posteriors as a means of model averaging \cite{guhaniyogi2018meta}.
Covariance tapering \cite{furrer2006covariance, furrer2010spam} alters the covariance matrix so that near-zero entries are considered independent and therefore set to zero.
Finally, the locally approximate GP (laGP) \cite{gramacy2015local, gramacy2016lagp} uses a similar sparsification - although it does not realize an overall covariance matrix - by fitting independent models within a local window of each prediction location and treating all other observations as independent.
Other methods assume instead that the precision matrix, the inverse of the covariance matrix, is sparse.
One approach is to assume that data observed on a regular grid are spatially autoregressive, which yields a sparse precision matrix, and to employ multiple resolutions of radial basis functions for efficient prediction \cite{nychka2015multiresolution}.
Next, \cite{lindgren2011explicit} rely on the equivalence between stochastic partial differential equations (PDEs) and the Mat\'ern covariance fields.
This method induces sparsity in the precision matrix by fitting piecewise linear basis functions on a triangularization of the domain.
Still other approaches rewrite the multivariate normal likelihood in terms of chained conditional distributions, inducing sparsity by modeling only a subset of conditioning sets, such as those induced by the nearest-neighbor structure of the data \cite{vecchia1988estimation, datta2016hierarchical}.
This is mathematically equivalent to a sparse precision matrix defined through its Cholesky decomposition.
Spectral methods form effective models on gridded data.
Such methods forgo modeling the correlation function in favor of the spectral density, which is the Fourier transform of the covariance function.
This allows estimating $\theta$ via the Whittle Likelihood \cite{whittle1954stationary} or stochastic score approximation \cite{stein2013stochastic}.
Although the Whittle likelihood requires a full grid, \cite{guinness2019spectral} use an iterative imputation scheme to allow for missing values.
Further, \cite{muyskens2018non} demonstrate a fine grid can be used to approximately apply spectral methods to irregularly-spaced observations.
Variational low-rank approximations to the covariance are particularly popular in the machine learning literature \cite{lazaro2011variational, tran2015variational, gibbs2000variational}.
The spatial statistics literature refers to such models as predictive processes, where a small number of knot locations across the domain induce a low-rank Nystr\"om approximation to the dense covariance matrix \cite{banerjee2008gaussian}.
A related approach fits compactly supported basis functions on recursively partitioned subregions of the domain, using computations that can be parallelized \cite{katzfuss2017multi, jurek2019multiresolution}.
This basis function selection uses a low-rank approach similar to that of the predictive process \cite{banerjee2008gaussian}.
Another method that relies on multiple resolutions for estimation uses a spatial process assumed to be the sum of a set of resolutions of Gaussian basis functions, which they refer to as basic areal units (BAUs) \cite{zammit2017frk, zammitmangion2018frk}.
Other methods do not rely on Gaussian processes, or any statistical distribution at all.
For example, the Gapfill method relies on quantile regression within a local window of the observation \cite{gerber2018predicting}.
Although this method was originally described for space-time data, the data are often over space only, so the ``time'' dimension is approximated by shifting the original image \cite{heaton2019case}.
Heaton et al. \cite{heaton2019case} directly compared many of these methods on a benchmark land surface temperature dataset.
This survey is particularly notable because the authors coordinated a blind competition, where each team implemented the method that they invented or in which they otherwise had expertise.
Therefore, there was no chance of a method underperforming due to misunderstanding or misuse of the model or method.
The central data problem on which the paper focuses is the prediction of missing measurements from a large land surface temperature gridded dataset sourced from the MODIS satellite.
Instead of sampling the testing set randomly across the domain, the authors use realistic cloud coverage from an image taken on another day to form a realistic and challenging, irregularly-shaped region of missing data.
Ultimately, the authors conclude that there is no one best method for computationally efficient GP estimation as some methods produce estimates quickly while others are more accurate in terms of root mean squared error (RMSE).
Additionally, \cite{edwards2020precision} demonstrate their new methods on the same dataset.
These methods improve upon most of the original methods in terms of estimation time, but are not the top competitors in terms of RMSE.
\begin{figure}
\includegraphics[width=\linewidth]{traintest.png}
\caption{Full (left) and training (right) land surface temperature datasets used in \cite{heaton2019case} as a benchmark competition and sourced from the MODIS satellite on August 4, 2016.}
\label{fig:data}
\end{figure}
We have developed a GP estimation method designed to remain both accurate and fast as data size grows.
We invoke sparsity through locality neighborhoods inspired by methods like \cite{gramacy2015local}.
However, instead of assuming independent models within those local neighborhoods, we assume an overall stationary GP model across the domain.
Further, we entirely avoid maximum likelihood evaluation in favor of leave-one-out cross-validation, where we produce predictions using only the local neighborhoods.
This amounts to the assumption that the weights that are applied to the data vector in order to obtain the predictions (kriging weights) are sparse, rather than the covariance matrix or precision matrix as in previous approximation methods.
We formally optimize the hyperparameter values against the leave-one-out cross-validation objective function.
We limit ourselves in this work to mean squared error loss for our objective functions, but other loss functions are possible.
We then use these estimated hyperparameters to realize responses at the prediction locations, which are similarly formulated with only their nearest neighbors.
Leave-one-out cross-validation is typically a computationally expensive process when predictions are obtained using the entire dataset.
In fact, it can be more expensive than the original maximum likelihood estimation problem ($O(n^4)$), since a large $(n-1)\times (n-1)$ matrix must be inverted for each prediction.
However, by limiting predictions to be based on only their $k$ nearest neighbors, computation of each prediction reduces to many small, parallelizable solves instead of one prohibitively large solve.
We can further improve our time-to-solution by relying upon a batch of $b$ training examples for the leave-one-out training procedure, reducing the overall time complexity to $O(bk^3)$.
In addition to making the time complexity formally independent of $n$, this dramatically improves cross validation speed when $b \ll n$ and $k \ll n$.
Additionally, obtaining the $k$ nearest neighbors is more efficient to compute than likelihood evaluation, and can be further accelerated by approximate KNN algorithms.
We find in our experiments that this deceptively simple method is effective, fast, and leaves open the opportunity for further future acceleration by way of distributed computation.
In this manuscript, we develop a novel method of approximate Gaussian processes hyperparameter estimation that is efficient, scalable and accurate.
In section \ref{sec:gp} we review a general stationary GP model.
In section \ref{sec:muygp} we describe our novel hyperparameter estimation method.
In section \ref{sec:sim} we demonstrate the performance of our method in the benchmark dataset from \cite{heaton2019case}, and show it performs favorably in accuracy and computation time in comparison to all of the existing state-of-the-art competitors.
Finally in \ref{sec:discuss}, we summarize our work and outline future advancements that could be made in order to improve on our methods.
\section{Background: Gaussian Processes}
\label{sec:gp}
We will consider throughout a univariate response $Y : \mathcal{X} \rightarrow \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^p$ is the observation space.
For notational convenience and by convention we assume that $Y$ is de-trended and therefore has zero mean.
Extensions to non-zero and multivariate processes are trivial, so we will avoid them for the sake of clarity.
We say that $Y$ follows a Gaussian process if the response at any finite set of $n$ points $X = (\mathbf{x}_1, \dots, \mathbf{x}_n) \in \mathcal{X}^n$ follows a multivariate normal distribution.
That is,
\begin{equation} \label{eq:gp_prior}
Y(X) = (Y(\mathbf{x}_1), \dots, Y(\mathbf{x}_n))^T \sim \mathcal{N} \left ( \widetilde{0}, K_\theta(X, X) \right ),
\end{equation}
where $\mathcal{N}$ is the multivariate Gaussian distribution, $\widetilde{0}$ is the $n$-dimensional zero vector, and $K_\theta(X, X)$ is an $n \times n$ positive definite, symmetric covariance matrix between the elements of $X$ that is controlled non-linearly through kernel function $K_\theta(\cdot, \cdot)$ and hyperparameters $\theta$.
Similarly, any finite set of $n^*$ unobserved data $X^* = (\mathbf{x}^*_1, \dots, \mathbf{x}^*_{n^*}) \in \mathcal{X}^{n^*}$ is also jointly normal with observed data $X$ by the GP assumption.
Thus, the conditional distribution for the response at the new locations $X^*$ given responses observed at $X$ is also multivariate normal with mean and variance
\begin{align}
\widehat{Y}(X^* \mid X) &= K_\theta(X^*, X) K_\theta(X, X)^{-1} Y(X), \text{ and}
\label{predmean}\\
\text{Var}(\widehat{Y}(X^* \mid X)) &= K_\theta(X^*, X^*) - K_\theta(X^*, X) K_\theta(X, X)^{-1} K_\theta(X, X^*),
\label{predvar}
\end{align}
where $K_\theta(X^*, X) = K_\theta(X, X^*)^T$ is the cross-covariance matrix between the elements of $X^*$ and $X$, and $K_\theta(X^*, X^*)$ is the covariance matrix between the elements of $X^*$, similar to $K_\theta(X, X)$.
Note that the conditional mean in Equation~\ref{predmean} is the best linear unbiased predictor (BLUP) for $Y(X^*|X)$, the conditional distribution of the response at $X^*$ given data at $X$, even when the normality assumption of the Gaussian process is violated.
The $n$-dimensional row vectors of $K_\theta(X^*, X) K_\theta(X, X)^{-1}$ are referred to as the \emph{kriging weights}, and these are the vectors our method assumes are sparse for computational efficiency.
Taking the posterior mean prediction given in Equation~\ref{predmean} consists of computing the inner product of the kriging weights with the de-trended data vector $Y(X)$.
Most covariances exhibit no general closed form solution to directly compute these weights without forming and inverting the matrix $K_\theta(X, X)$, which can be prohibitively expensive in large training data.
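For illustration, the posterior mean and covariance of Equations~\ref{predmean} and \ref{predvar} amount to a handful of linear solves. The sketch below uses an RBF kernel and a small nugget for numerical stability, both illustrative choices; any valid kernel $K_\theta$ could be substituted.

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    """Illustrative RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_star, X, y, kernel=rbf, nugget=1e-8):
    """Posterior mean and covariance of a zero-mean GP (Eqs. 2-3)."""
    K_xx = kernel(X, X) + nugget * np.eye(len(X))
    K_sx = kernel(X_star, X)
    # Rows of K_sx K_xx^{-1} are the kriging weights.
    kriging_weights = np.linalg.solve(K_xx, K_sx.T).T
    mean = kriging_weights @ y
    cov = kernel(X_star, X_star) - kriging_weights @ K_sx.T
    return mean, cov
```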
By the GP assumption, the observations $Y(X)$ follow a jointly multivariate normal distribution, yielding the log-likelihood:
\begin{equation}
\label{ll}
\log(L(\theta, Y(X))) = - \frac{n}{2}\log(2 \pi) - \frac{1}{2} \log(|K_\theta(X, X)|) - \frac{1}{2} Y(X)^T K_\theta(X, X)^{-1} Y(X).
\end{equation}
Often $K_\theta(X, X)$ is assumed to be stationary and isotropic.
That is, the covariance function is assumed to be a function of only the distances between the elements of $X$.
Stationarity implies that for locations $\mathbf{x}_i$ and $\mathbf{x}_j$,
\begin{equation}
K_\theta (\mathbf{x}_i, \mathbf{x}_j) = \phi_\theta(||\mathbf{x}_i - \mathbf{x}_j||_2),
\end{equation}
where $\phi_\theta$ is a functional form with parameters $\theta$.
One of the most common covariance forms $\phi_\theta$ is the Mat\'ern covariance function.
With covariance hyperparameters $\theta = \{\sigma^2, \rho, \nu, \tau^2\}$, and where $d$ is the distance between two locations in $X$,
\begin{equation} \label{eq:matern}
\phi_\theta(d) = \sigma^2 \left[\frac{2^{1- \nu}}{\Gamma(\nu)} \Bigg( \sqrt{2\nu} \frac{d}{\rho} \Bigg)^{\nu} K_\nu \Bigg( \sqrt{2\nu} \frac{d}{\rho} \Bigg) + \tau^2 \mathbb{I} \{d=0\}\right],
\end{equation}
where $K_\nu$ is the modified Bessel function of the second kind.
As $\nu \to \infty$, this form converges pointwise to the well-known Gaussian (RBF) covariance function.
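For half-integer values of $\nu$, the Mat\'ern form admits well-known closed forms that avoid evaluating the Bessel function; a sketch restricted to those cases (general $\nu$ would require, e.g., \texttt{scipy.special.kv}):

```python
import numpy as np

def matern(d, sigma2=1.0, rho=1.0, nu=0.5, tau2=0.0):
    """Matern covariance phi_theta(d) for half-integer nu via closed forms."""
    d = np.asarray(d, dtype=float)
    s = np.sqrt(2.0 * nu) * d / rho
    if nu == 0.5:
        base = np.exp(-s)                          # exponential kernel
    elif nu == 1.5:
        base = (1.0 + s) * np.exp(-s)
    elif nu == 2.5:
        base = (1.0 + s + s**2 / 3.0) * np.exp(-s)
    else:
        raise NotImplementedError("general nu requires scipy.special.kv")
    # Nugget tau^2 contributes only at zero distance.
    return sigma2 * (base + tau2 * (d == 0.0))
```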
Conventional approaches to training GP models estimate the covariance parameters using the log-likelihood in Equation \ref{ll} via maximum likelihood estimation, Bayesian analysis using Markov Chain Monte Carlo (MCMC), or by way of grid search cross validation.
However, these estimation methods are too expensive to compute in large data since they require at least $O(n^3)$ computation and $O(n^2)$ memory.
Investigators have developed scalable approximate methods and models like those aforementioned in the introduction in order to perform such estimation in large datasets.
However, trends thus far in the literature indicate a trade-off between speed and accuracy: those methods with the fastest-time-to-solution on benchmark data yield posterior mean predictions that are not competitive in terms of RMSE with more expensive but accurate approximate methods \cite{heaton2019case}.
We aim to obtain the best of both worlds: accurate stationary Gaussian process predictions that achieve best-in-class efficiency and scalability, while maintaining accurate uncertainty quantification.
\section{\texttt{MuyGPs}}
\label{sec:muygp}
We describe a novel approximate method for training stationary Gaussian process hyperparameters using leave-one-out cross-validation restricted to local predictions.
Our methodology derives from the union of two insights: optimization by way of leave-one-out cross-validation allows us to avoid evaluating the likelihood in Equation~\ref{ll}, and restriction to the $k$ nearest neighbors of a prediction location limits the cost of computing the kriging weights $K_\theta(X^*, X) K_\theta(X, X)^{-1}$ in Equation~\ref{predmean} to $O(k^3)$.
While other investigators may have exploited both of these observations in different ways, our \texttt{MuyGPs} estimation method is the first to take advantage of both simultaneously to accelerate kernel hyperparameter estimation by enforcing sparsity in the kriging weights.
Leave-one-out cross-validation seeks hyperparameter values that minimize the sum of an out-of-sample loss associated with computing the posterior distribution of each training observation conditioned on all of the others.
To formally describe the procedure, let $\theta$ denote the hyperparameters that require estimation and $K_\theta(\cdot,\cdot)$ a GP kernel function of interest.
For the Mat\'ern kernel, define $\theta=(\sigma^2, \nu, \rho, \tau^2)^T$, but this method is applicable to other kernel forms.
Here, we fix $\sigma^2=1$ in the estimation of the other parameters, since posterior mean predictions do not depend on the overall scale parameter $\sigma^2$.
Hence, $\sigma^2$ is not estimable via any such cross-validation method for any kernel form.
We will introduce an efficient estimation protocol for $\sigma^2$ at the end of this section, after its fellow parameters have been estimated.
We must formulate the leave-one-out prediction of $Y(\mathbf{x}_i)$ given the set of all training points excluding $\mathbf{x}_i$.
Define $X_{-i}=(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{i-1}, \mathbf{x}_{i+1}, \dots, \mathbf{x}_n)$ and $Y(X_{-i}) = (Y(\mathbf{x}_1), Y(\mathbf{x}_2), \dots, Y(\mathbf{x}_{i-1}), Y(\mathbf{x}_{i+1}), \dots, Y(\mathbf{x}_n))$ to be the training locations and observed responses excluding the $i$th training observation.
Then we modify Equation \ref{predmean}
to obtain the mean GP prediction by regressing the training labels on the corresponding inputs,
\begin{equation}
\label{cvpred1}
\widehat{Y}(\mathbf{x}_i \mid X_{-i}) = K_\theta(\mathbf{x}_i, X_{-i}) K_\theta(X_{-i}, X_{-i})^{-1} Y(X_{-i}).
\end{equation}
Here $K_\theta(\mathbf{x}_i, X_{-i})$ is the cross-covariance between $\mathbf{x}_i$ and $X_{-i}$, while $K_\theta(X_{-i}, X_{-i})$ is the covariance among points in $X_{-i}$, both in terms of $K_\theta(\cdot, \cdot)$.
Although many criteria for prediction accuracy are possible, we select the mean squared error criterion
\begin{equation} \label{eq:loss}
Q(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left ( Y(\mathbf{x}_i) - \widehat{Y}(\mathbf{x}_i \mid X_{-i}) \right )^2.
\end{equation}
The hyperparameters are estimated to minimize the leave-one-out cross-validation loss
\begin{equation} \label{eq:objective}
\widehat{\theta}=\min_{\theta} Q(\theta).
\end{equation}
In our experiments we utilize the L-BFGS-B algorithm \cite{zhu1997algorithm} to minimize Equation~\ref{eq:objective}.
When there are a large number of observations, realizing an instance of the cross-validation loss in Equation~\ref{eq:loss} so described is even more computationally expensive than the loglikelihood in Equation~\ref{ll};
the procedures have complexity $O(n^4)$ and $O(n^3)$, respectively.
This suggests that the optimization given by Equation~\ref{eq:objective} is even less efficient than traditional maximum likelihood estimation.
In order to achieve our scalability objectives, we replace the full kriging of Equation~\ref{cvpred1} with \emph{local kriging}, using the $k$ nearest neighbor locations of $\mathbf{x}_i$ instead of all of the $n-1$ locations denoted by $X_{-i}$.
This modifies the complexity of leave-one-out cross-validation (computing $Q(\theta)$) to $O(n k^3)$, which is much more scalable than likelihood-based approaches when $k \ll n$.
We now illustrate our method precisely.
Let $N_i$ be the set of $k$ indices in $\{1, \dots, i-1, i+1, \dots, n\}$ indicating those elements of $X_{-i}$ that are nearest to $\mathbf{x}_i$ in terms of distance.
Similarly, define $X_{N_i}$ as the set of training observations nearest to $\mathbf{x}_i$, and let $Y(X_{N_i})$ be their corresponding responses.
This allows us to modify our leave-one-out prediction in Equation~\ref{cvpred1} to
\begin{equation}
\label{cvpred2}
\widehat{Y}_{NN}(\mathbf{x}_i \mid X_{N_i}) = K_\theta(\mathbf{x}_i, X_{N_i}) K_\theta(X_{N_i}, X_{N_i})^{-1} Y(X_{N_i}),
\end{equation}
where the kriging weights and observed responses are defined in terms of $N_i$.
Modifying Equation~\ref{eq:loss} in terms of the nearest neighbor leave-one-out predictions from Equation~\ref{cvpred2}, the resulting objective function requires $O(n k^3)$ operations to evaluate.
We can further reduce the cost of evaluating $Q(\theta)$ by utilizing batching, a common technique in machine learning.
Let $B$ be a subset of $b$ indices randomly sampled without replacement from $\{1, \dots, n\}$.
We can then modify Equation~\ref{eq:loss} by summing only over the nearest neighbor leave-one-out squared error of locations $\mathbf{x}_i$ such that $i \in B$, whereby we obtain the modified loss function
\begin{equation} \label{eq:batch_loss}
Q_{B}(\theta) = \frac{1}{b} \sum_{i \in B} \left ( Y(\mathbf{x}_i) - \widehat{Y}_{NN}(\mathbf{x}_i \mid X_{N_i}) \right )^2.
\end{equation}
This approximation introduces some additional variability into the optimization problem since the batch indices are randomly selected, but the computational savings can be significant if $b \ll n$, bringing the cost of evaluating the loss function to $O(bk^3)$, which importantly is independent of $n$.
It is also important to note that the nearest neighbor index sets $N_i$ utilized in the evaluations of $\widehat{Y}_{NN}(\mathbf{x}_i)$ for $i \in B$ in Equation~\ref{eq:batch_loss} still range over the full $n$ data points $X$, rather than only those data indicated by $B$.
If nearest neighbor candidates were restricted to batched points only, then the nearest neighbors of each batched observation would be artificially distant from one another.
Hence, more data is ultimately used to estimate the hyperparameters than is included directly in the batch.
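The batched objective $Q_B(\theta)$ of Equation~\ref{eq:batch_loss} can then be sketched as follows (again a hypothetical illustration rather than the authors' code). Note that the neighbor search for each batched point ranges over all $n$ training locations, not just the batch.

```python
import numpy as np

def exp_kernel(A, B, rho=1.0):
    # Unit-variance exponential kernel (stand-in for Eq. (matern)).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / rho)

def batched_loo_loss(rho, X, Y, k=10, b=20, rng=None):
    # Q_B(theta): mean squared nearest neighbor LOO error over a random
    # batch B sampled without replacement (Eq. batch_loss). Neighbors
    # are drawn from the FULL training data, not only the batch.
    rng = np.random.default_rng(rng)
    B = rng.choice(len(X), size=b, replace=False)
    sq_err = 0.0
    for i in B:
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        N_i = np.argsort(d)[:k]                 # neighbors among all n points
        Kc = exp_kernel(X[i:i + 1], X[N_i], rho)
        Kin = exp_kernel(X[N_i], X[N_i], rho) + 1e-6 * np.eye(k)
        sq_err += (Y[i] - (Kc @ np.linalg.solve(Kin, Y[N_i]))[0]) ** 2
    return sq_err / b
```

In practice one would minimize this loss over the kernel hyperparameters with a generic numerical optimizer; here only $\rho$ is exposed for simplicity.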
Although $\sigma^2$ is not needed for the mean predictions, the prediction errors depend linearly on its estimate, since for batched observations the prediction variance is defined as
\begin{equation} \label{cverr}
\text{Var}(\widehat{Y}_{NN}(\mathbf{x}_i \mid X_{N_i})) = K_\theta(\mathbf{x}_i, \mathbf{x}_i) - K_\theta(\mathbf{x}_i, X_{N_i}) K_\theta(X_{N_i}, X_{N_i})^{-1} K_\theta(X_{N_i}, \mathbf{x}_i) .
\end{equation}
After all other parameters are estimated via minimization of \eqref{eq:batch_loss}, we obtain $\widehat{\sigma^2}$.
Define $\Omega_\theta= \frac{K_\theta}{\sigma^2}$.
In general, there is a closed form expression for the maximum likelihood estimate of $\sigma^2$ given $\theta$.
However, this closed form solution involves forming and inverting the full training covariance matrix $K_\theta(X, X)$, which is not possible for large $n$.
Our approximate estimate is the mean scale parameter within each batched $k$ neighborhood or precisely
\begin{equation}
\widehat{\sigma^2} = \frac{1}{kb} \sum_{i \in B} Y(X_{N_i})^T \Omega_\theta(X_{N_i}, X_{N_i})^{-1} Y(X_{N_i}).
\end{equation}
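Under the same simplified exponential-kernel setup as above (a hypothetical sketch, not the \texttt{MuyGPs} source), the $\widehat{\sigma^2}$ estimator averages the local quadratic forms over the batched $k$-neighborhoods:

```python
import numpy as np

def exp_corr(A, B, rho=1.0):
    # Correlation kernel Omega_theta = K_theta / sigma^2
    # (unit-variance exponential stand-in).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / rho)

def sigma2_hat(X, Y, batch, k=10, rho=1.0):
    # sigma2_hat = (1 / (k b)) * sum_{i in B} Y(X_Ni)^T Omega^{-1} Y(X_Ni),
    # the mean scale parameter within each batched k-neighborhood.
    total = 0.0
    for i in batch:
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        N_i = np.argsort(d)[:k]
        Om = exp_corr(X[N_i], X[N_i], rho) + 1e-6 * np.eye(k)
        total += float(Y[N_i] @ np.linalg.solve(Om, Y[N_i]))
    return total / (k * len(batch))
```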
Finally, predictions at testing (prediction) locations are computed using a similarly local design.
Define $X_{N_i^{\star}}$ as the set of training observations nearest to the $i$th testing location $\mathbf{x}_i^{\star}$, and let $Y(X_{N_i^{\star}})$ be their corresponding responses.
Then, approximate predictions and prediction errors are obtained via \texttt{MuyGPs} as
\begin{align}
\widehat{Y}(X^* \mid X) &\approx K_{\hat{\theta}}(X^*, X_{N_i^{\star}}) K_{\hat{\theta}}(X_{N_i^{\star}}, X_{N_i^{\star}})^{-1} Y(X_{N_i^{\star}}), \text{ and}
\label{predmean2}\\
\text{Var}(\widehat{Y}(X^* \mid X)) &\approx K_{\hat{\theta}}(X^*, X^*) - K_{\hat{\theta}}(X^*, X_{N_i^{\star}}) K_{\hat{\theta}}(X_{N_i^{\star}}, X_{N_i^{\star}})^{-1} K_{\hat{\theta}}(X_{N_i^{\star}}, X^*),
\label{predvar2}
\end{align}
where $\hat{\theta}$ are the covariance parameters trained as aforementioned.
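Putting the trained hyperparameters to use, the local prediction equations \ref{predmean2} and \ref{predvar2} amount to one small linear solve per test location. The following sketch again assumes the exponential kernel and is illustrative only:

```python
import numpy as np

def exp_kernel(A, B, rho=1.0, sigma2=1.0):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / rho)

def predict(x_star, X, Y, k=10, rho=1.0, sigma2=1.0, tau2=1e-6):
    # Local kriging mean (Eq. predmean2) and variance (Eq. predvar2) at a
    # single test location, using only its k nearest training points.
    d = np.linalg.norm(X - x_star, axis=1)
    N = np.argsort(d)[:k]
    Kc = exp_kernel(x_star[None, :], X[N], rho, sigma2).ravel()
    Kin = exp_kernel(X[N], X[N], rho, sigma2) + tau2 * np.eye(k)
    w = np.linalg.solve(Kin, Kc)               # local kriging weights
    mean = float(w @ Y[N])
    var = float(sigma2 - Kc @ w)
    return mean, var
```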
\section{Numerical Studies}
\label{sec:sim}
We demonstrate the effectiveness of our novel GP hyperparameter estimation method on a benchmark temperature dataset.
The dataset considered is sourced from \cite{heaton2019case}, is pictured in Figure \ref{fig:data}, and is available for download at https://github.com/finnlindgren/heatoncomparison.
The data were collected using the MODIS instrument onboard the Terra satellite.
It is composed of daytime land surface temperatures for August 4, 2016 in longitudes -95.91153 to -91.28381 and latitudes 34.29519 to 37.06811 observed on a $500 \times 300$ grid.
This region is attractive due to its completeness, as 98.9\% of locations were observed in the area of interest on August 4, 2016.
The cloud coverage pattern from August 6, 2016 was applied to the data in order to create a realistic missingness pattern for the training/testing data.
This yielded 105,569 training observations, 42,740 testing observations, and 1,691 missing (non-testing) observations, which are excluded from analysis.
We normalize the gridded observations to be in $[0,1]^2$ while maintaining the relationship between latitude and longitude by
$$ \mathbf{x}_i = \frac{(\text{latitude}_i, \text{longitude}_i)+218}{464},$$
for $i=1,2,\dots,105{,}569$, i.e., for all observations in the training dataset.
This transformation creates a range of (0.2102833, 0.8078952) in latitude and (0.001297902, 0.998651208) in longitude in the training observations.
We apply this same normalizing transformation to the testing locations.
We compute the same statistics to evaluate our methods used in the competition in \cite{heaton2019case}.
We consider mean absolute error (MAE), root mean squared error (RMSE), continuous rank probability score (CRPS) \cite{gneiting2007strictly}, interval score (INT) \cite{gneiting2007strictly}, and empirical coverage of 95\% confidence intervals (COV).
We computed CRPS and INT using source code drawn directly from \cite{heaton2019case} for consistency.
The timings of our method were obtained on a 2016 MacBook Pro with a 2.9 GHz Quad-Core Intel Core i7 and 16 GB of RAM.
In \cite{heaton2019case}, a machine with 256 GB of RAM and 2 Intel Xeon E5-2680 v4 2.40GHz CPUs with 14 cores each and 2 threads per core, totaling 56 possible threads for use in parallel computing, was utilized for comparison.
We implemented our newly-developed \texttt{MuyGPs} methodology as described in Section \ref{sec:muygp}.
Prior to fitting we must make modeling decisions including the selection of a mean function and covariance function, the choice of batch size, and the number of nearest neighbors to be employed.
We realized models using a variety of different choices in order to demonstrate the robustness of our results.
First, recall that we have assumed throughout this manuscript that the response function $Y$ has mean zero.
Although a mean-zero response is a common assumption when fitting stationary GPs, for real-world data it is conventional and often necessary to first remove mean trends prior to GP fitting.
Therefore, define $Y^{obs}$ to be the vector of non-zero mean temperature responses,
and $Y_k(X) = Y^{obs}(X) - \mu_k(X)$, where $Y_k(X)$ is the mean zero response with mean function $\mu_k(X)$ removed (previously notated $Y(X)$).
We consider three simple mean functions so that $k=1,2,3$.
First, we consider a constant mean function. Let
$$ \mu_1(X) = c.$$
This simple mean is estimated by the sample mean of the training data, $\hat{c}=\frac{1}{n} \sum_{i=1}^nY^{obs}(\mathbf{x}_i)$.
Next, we consider a linear mean function with an interaction term.
Define $Z$ to be the $n \times 4$ matrix that contains an $n\times 1$ vector of ones, $X$, and the elementwise product of the two coordinate columns of $X$, so that $Z=[1_n, X^T, X^T[,1]*X^T[,2]]$. Then define
$$\mu_2(X)=Z\beta,$$
where $\beta$ is a $4 \times 1$ vector of mean parameters.
These parameters are estimated using the ordinary least squares solution of the training data.
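For concreteness, the OLS estimation of $\beta$ for $\mu_2$ can be sketched as follows (a hypothetical illustration; the function name and the column ordering of $Z$ are our own, matching the description above):

```python
import numpy as np

def linear_interaction_mean(X, y):
    # mu_2: ordinary least squares fit of y on Z = [1, x1, x2, x1 * x2],
    # the n x 4 design matrix described in the text.
    Z = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return Z @ beta, beta
```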
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{mu1}
\caption{$\mu_1$}
\label{fig:mu1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{y1}
\caption{$Y_1$}
\label{fig:y1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{mu2}
\caption{$\mu_2$}
\label{fig:mu2}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{y2}
\caption{$Y_2$}
\label{fig:y2}
\end{subfigure}
\hfill \begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{mu3}
\caption{$\mu_3$}
\label{fig:mu3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{y3}
\caption{$Y_3$}
\label{fig:y3}
\end{subfigure}
\caption{Mean functions considered, and the residual observations that are fit via \texttt{MuyGPs} using the testing data. }
\label{fig:mean}
\end{figure}
Finally, we consider a mean that is based on kernel smoothing.
In particular, we utilize the Nadaraya-Watson kernel smoother \cite{nadaraya1964estimating,watson1964smooth}.
Define
$$\mu_3(X)= \frac{\sum_{i=1}^nG(X,\mathbf{x}_i)Y(\mathbf{x}_i)}{\sum_{i=1}^n G(X,\mathbf{x}_i)},$$
where $G(X,\mathbf{x}_i) = \exp(-||X-\mathbf{x}_i||/\rho)$ is the Gaussian kernel evaluated at locations $X$ and $\mathbf{x}_i$.
We use the \texttt{smooth.2d} function from the \texttt{fields} package to implement this smoothing with length scale $\rho=25$ on the $500 \times 300$ grid \cite{fields}. This implementation utilizes a 2-D fast Fourier transform (FFT) to efficiently estimate the smoother using the partially gridded training data \cite{fields}.
Employing this mean is similar to the multi-resolution principle in \cite{nychka2015multiresolution,katzfuss2017multi}, where very smooth low-frequency trends are fit by this filtering mean, and the high-frequency trends are fit on the resulting residuals.
The three mean functions ($\mu_k$) and the corresponding residual data fit under each mean ($Y_k$) are plotted in Figure \ref{fig:mean}.
Note that because $\mu_2(X)$ and $\mu_3(X)$ depend on $X$, they lead to first order non-stationary predictions.
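A direct (non-FFT) evaluation of the Nadaraya-Watson smoother $\mu_3$ can be sketched as below; the \texttt{fields} implementation cited above instead exploits the grid structure via a 2-D FFT. This is a hypothetical sketch using the Gaussian kernel form stated above.

```python
import numpy as np

def nadaraya_watson(X_train, y_train, X_eval, rho=0.1):
    # mu_3: Gaussian-kernel weighted average of the training responses,
    # G(x, x_i) = exp(-||x - x_i|| / rho), evaluated at each row of X_eval.
    d = np.linalg.norm(X_eval[:, None, :] - X_train[None, :, :], axis=-1)
    G = np.exp(-d / rho)
    return (G @ y_train) / G.sum(axis=1)
```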
\texttt{MuyGPs} can theoretically estimate the parameters of most kernel functions.
For its flexibility, we employ the Mat\'ern covariance as stated in Equation \ref{eq:matern} in our experiments.
Since the impact on prediction is low, we fix $\tau^2=0.001$, and if otherwise not stated, we fix the length scale $\rho = 1.0$.
In our experiments, $\nu$ and $\rho$ proved difficult to estimate simultaneously, and we found better performance when $\nu$ is allowed to vary.
Further, we explore other fixed values of $\rho$ in the results in Table \ref{tab:perf}.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{batching.png}
\caption{Empirical 90\% confidence intervals of RMSE over 100 simulation iterations for 50 exact nearest neighbors with $\mu_3$.}
\label{fig:batch}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{nn.png}
\caption{Performance in RMSE and computing time for 500 batched values using approximate nearest neighbors and $\mu_3$.}
\label{fig:nn}
\end{figure}
Next, we must select a batch size for the \texttt{MuyGPs} estimation procedure.
In all of our experiments we sampled batch elements from our training data randomly and without replacement.
Using exact nearest neighbor sets of size 50 and $\mu_3$, we demonstrate the variability in prediction performance using RMSE for various batch sizes in Figure~\ref{fig:batch} for 100 independent simulation iterations at each batch size.
We then compute empirical 95\% confidence intervals at each batch size to determine the variability of estimates from that batch size.
Even with extremely small batch sizes of 25 members, the variability in the RMSE is very small, with a 95\% empirical confidence interval of width approximately 0.014 across the simulations.
At batch sizes of 500 there is less than 0.001 in variability, and at batch sizes of 2,000 there is nearly no variability in prediction RMSE.
Note that this dataset has 105,569 training observations, so a batch size of 2,000 offers significant savings: we can use less than 1.9\% of the data in the leave-one-out cross-validation optimization without impacting the results.
\begin{figure}
\includegraphics[width=\linewidth]{compare.png}
\caption{Comparison of RMSE and computing time of \texttt{MuyGPs} against all methods in \cite{heaton2019case} and \cite{edwards2020precision}.
The multiple \texttt{MuyGPs} observations are the results from Figure \ref{fig:nn} with nearest neighbor set sizes varying between 25 and 200 in increments of 25.
}
\label{fig:compare}
\end{figure}
Having examined the tradeoffs associated with batch size selection, we next demonstrate selection of the number of nearest neighbors.
As more neighbors are incorporated into the model, the predictions become more accurate, but the computing time is increased.
We can achieve practical speed improvements by employing approximate nearest neighbor algorithms such as the hierarchical navigable small world (HNSW) algorithm \cite{malkov2018efficient}.
HNSW offers further scaling benefits when the number of variables in $\mathbf{x}_i$ is large.
We implement this via bindings for the \texttt{hnswlib} library available at https://github.com/nmslib/hnswlib.
We demonstrate this performance tradeoff across different numbers of approximate nearest neighbors in Figure \ref{fig:nn}, using $\mu_3$ as an example.
There is a large accuracy gap between 25 and 50 nearest neighbors; beyond that, each increase in the number of nearest neighbors further improves prediction accuracy but increases total computing time due to the $O(k^3)$ cost of solving the nearest neighbor systems of linear equations.
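For reference, the exact brute-force neighbor search that HNSW approximates can be sketched as follows; at the scale of this study, the $O(n)$ cost per query of the exact search is what motivates an approximate index. The function name here is our own.

```python
import numpy as np

def knn_brute(X, queries, k):
    # Exact k nearest neighbors by brute force: O(n) distance evaluations
    # per query. An approximate index (e.g., HNSW) replaces this at scale.
    d = np.linalg.norm(queries[:, None, :] - X[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]
```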
\begin{table}[ht]
\centering
\caption{Numerical scoring for various versions of \texttt{MuyGPs} with competing method original results from \cite{heaton2019case} and \cite{edwards2020precision} for comparison.}
\begin{tabular}{rrrrrrrr}
Method& $\ell $ & MAE & RMSE & CRPS& INT & COV & Time (min)\\
\hline
&0.1& 1.14 & 1.66 & 0.86 & 9.25 & 0.95 & 0.70 \\
& 0.25 & 1.15 & 1.64 & 0.84 & 8.40 & 0.95 & 0.68 \\
\texttt{MuyGPs}, $\mu_1$ & 0.5 & 1.19 & 1.67 & 0.85 & 8.02 & 0.93 & 0.75 \\
&0.75 & 1.21 & 1.69 & 0.86 & 7.90 & 0.93 & 0.75 \\
& 1.0 & 1.22 & 1.68 & 0.86 & 7.85 & 0.92 & 0.63 \\
\hline
& 0.1 & 1.12 & 1.64 & 0.86 & 9.38 & 0.95 & 0.64 \\
& 0.25 & 1.13 & 1.62 & 0.83 & 8.31 & 0.94 & 0.64 \\
\texttt{MuyGPs}, $\mu_2$ & 0.5 & 1.15 & 1.62 & 0.83 & 8.00 & 0.94 & 0.64 \\
&0.75 & 1.19 & 1.65 & 0.85 & 7.85 & 0.93 & 0.63 \\
& 1.0 & 1.19 & 1.64 & 0.84 & 7.80 & 0.93 & 0.63 \\
\hline
& 0.1 & 1.07 & 1.54 & 0.84 & 9.35 & 0.95 & 0.76 \\
& 0.25 & 1.08 & 1.53 & 0.80 & 8.24 & 0.94 & 0.66 \\
\texttt{MuyGPs}, $\mu_3$ & 0.5 & 1.12 & 1.55 & 0.81 & 7.95 & 0.94 & 0.62 \\
& 0.75 & 1.14 & 1.56 & 0.81 & 7.78 & 0.93 & 0.63 \\
& 1.0 & 1.15 & 1.57 & 0.82 & 7.71 & 0.93 & 0.65 \\
\hline
\hline
FRK& &1.96 & 2.44 & 1.44 & 14.08 & 0.79 & 2.32 \\
Gapfill && 1.33 & 1.86 & 1.17 & 34.78 & 0.36 & 1.39 \\
LatticeKrig& &1.22 & 1.68 & 0.87 & 7.55 & 0.96 & 27.92 \\
LAGP&& 1.65 & 2.08 & 1.17 & 10.81 & 0.83 & 2.27 \\
Metakriging&& 2.08 & 2.50 & 1.44 & 10.77 & 0.89 & 2888.52 \\
MRA&& 1.33 & 1.85 & 0.94 & 8.00 & 0.92 & 15.61 \\
NNGP& & 1.21 & 1.64 & 0.85 & 7.57 & 0.95 & 2.06 \\
NNGP2& & 1.24 & 1.68 & 0.87 & 7.50 & 0.94 & 42.85 \\
Partition && 1.41 & 1.80 & 1.02 & 10.49 & 0.86 & 79.98 \\
Pred. Proc. && 2.15 & 2.64 & 1.55 & 15.51 & 0.83 & 160.24 \\
SPDE & & 1.10 & 1.53 & 0.83 & 8.85 & 0.97 & 120.33 \\
Tapering& &1.87 & 2.45 & 1.32 & 10.31 & 0.93 & 133.26 \\
Per Embed& &1.29 & 1.79 & 0.91 & 7.44 & 0.93 & 9.81 \\
\hline
\hline
PALM & &1.59 & 1.93 & 1.15 & 11.78 & 0.78 & 4.64\\
Global + PALM2 & &1.44& 1.76 & 1.03 & 9.28 & 0.84 & 4.64 \\
\end{tabular}
\label{tab:perf}
\end{table}
Finally, we compare the performance of our method to all previous methods computed on this data in \cite{heaton2019case} and \cite{edwards2020precision}.
We plot the tradeoffs between RMSE and time visually in Figure \ref{fig:compare}, where we consider the multiple nearest neighbor counts from Figure \ref{fig:nn}, with $\mu_3$, 500 batched values, and $\rho=1$.
Notably, \texttt{MuyGPs} with a small number of nearest neighbors is the fastest method presented and remains among the top five methods in terms of RMSE.
Additionally, \texttt{MuyGPs} with a large number of nearest neighbors is the most accurate method considered in all manuscripts.
Further, \texttt{MuyGPs} with 50 or more nearest neighbors ranks second or better in accuracy among all methods.
In \cite{heaton2019case}, SPDE \cite{lindgren2011explicit} is the most accurate method, but it took approximately 120 minutes to compute, more than 12 times slower than the slowest presented \texttt{MuyGPs} setting.
A similarly accurate solution is obtained by \texttt{MuyGPs} in approximately only 5 minutes.
We consider the other statistics based on uncertainty in the results in Table \ref{tab:perf}.
In all previous studies, the parameter $\ell$ from the Mat\'ern covariance has been fixed at 1.0.
This is because the $\ell$ and $\nu$ parameters are not practically identifiable when estimated simultaneously.
Had this been a blind competition, as it was for the participants of \cite{heaton2019case}, this value would likely have been fixed at some reasonable value.
To show the robustness of our method, we demonstrate the performance of \texttt{MuyGPs} for several selections of fixed $\ell$ values.
In fact, our previous selection of $\ell=1.0 $ is the least favorable reasonable value for $\ell$ in the main results of Table \ref{tab:perf}.
With all three of the mean functions, even with only 50 approximate nearest neighbors, the accuracy of \texttt{MuyGPs} is competitive.
Further, all \texttt{MuyGPs} settings compared required under a minute of computing time, which is faster than any method previously tested on this data.
In all cases, the coverage is near the expected 0.95, and the other uncertainty interval statistics perform favorably relative to the field of methods.
In summary, the \texttt{MuyGPs} method offers settings that are both more accurate and faster than any existing method and, no matter the modeling choices, is competitive with all known scalable GP computation methods.
\section{Discussion}
\label{sec:discuss}
We have presented in this manuscript \texttt{MuyGPs}, a new method for stationary Gaussian process hyperparameter estimation and prediction that is both computationally efficient and highly performant when compared to state-of-the-art approximate GP estimation methods.
\texttt{MuyGPs} minimizes the leave-one-out cross-validated mean squared error using only data from the $k$ nearest neighbors for selected locations across the domain.
Similarly, predictions depend only upon their $k$ nearest neighbor training observations.
We demonstrate that although this method is simple, it is powerful and performs well against state-of-the-art approximation methods.
Using this method requires the selection of a mean function, covariance function, batch size, and number of nearest neighbors.
We demonstrate that our method performs well with very few batched points.
In fact, when exact nearest neighbors are employed, the accuracy has nearly no variance with a batch size of about 2,000, which is approximately 1.9\% of the training data.
Increasing the number of nearest neighbors improves performance in terms of RMSE at the cost of increasing computing time.
Finally, we show that a non-stationary mean function improves the accuracy of our model, but the magnitude of this effect is small compared to changes in GP estimation methods.
Although these selections ultimately define a tradeoff between performance in accuracy and computation time, our method performs favorably compared to existing methods regardless of these modeling choices.
Settings of our method yield both the fastest and the most accurate results among all known methods, and our method performs among the top few methods in all other considered statistics.
To date we have optimized our method only with respect to MSE, but other objective functions are possible.
Using optimal parameters for this testing dataset yields an RMSE of 1.42 with 0.95 coverage via our model with only 50 approximate nearest neighbors (although these are not the parameters selected by our training method).
This implies there may be room for improvement of parameter estimation using a more complex objective function.
One possible idea could be to incorporate other statistics from Table \ref{tab:perf} into the objective function.
Although we achieve first-order non-stationary predictions through non-stationary mean functions, our hyperparameter estimation method is amenable to estimation of second-order non-stationary models that have previously been considered too expensive for large data.
For example, one could hierarchically assume that the $\nu$ parameter itself follows a GP over the domain.
Although we found small batch sizes relatively effective with a random sample under a stationarity assumption, more care may be needed in order to select the batched observations and their size under this more complex non-stationary model case.
\section*{Acknowledgments}
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 with IM release number LLNL-JRNL-822013.
Funding for this work was provided by LLNL Laboratory Directed Research and Development grant 19-SI-004.
This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.
\bibliographystyle{siamplain}
\section{Introduction}\label{sec:01}
In classical Iwasawa theory we usually study various modules up to pseudo-null modules.
For instance, Iwasawa main conjecture is often formulated as a relation between the characteristic ideals of Iwasawa modules and $p$-adic $L$-functions, and in general characteristic ideals ignore pseudo-null modules.
On the other hand, Greenberg's conjecture claims that the unramified Iwasawa modules are pseudo-null in certain situations (see \cite[Conjecture 3.4.1]{BCG+}).
In that case, therefore, there has been little satisfactory research on the more detailed structure of unramified Iwasawa modules.
Against this background, F.~M.~Bleher, T.~Chinburg, R.~Greenberg, M.~Kakde, G.~Pappas, R.~Sharifi, and M.~J.~Taylor \cite{BCG+} began studying unramified Iwasawa modules that are assumed to be pseudo-null.
More concretely, when the base field is an imaginary quadratic field,
they obtained a relation between unramified Iwasawa modules and pairs of $p$-adic $L$-functions.
In subsequent work \cite{BCG+b}, an analogous result is obtained when the base field is a CM-field, though the unramified Iwasawa modules must be replaced by certain alternatives defined via exterior powers.
An analogue for Selmer groups of elliptic curves is also studied by A.~Lei and B.~Palvannan \cite{LP19}.
In this paper, we extend and refine the results of \cite{BCG+} and \cite{BCG+b} from (abelian) equivariant perspective.
The term ``equivariant'' means that we allow $p$ to divide the order of the finite abelian extension concerned.
In that case, the ring theoretic property of the Iwasawa algebra gets worse; for instance, it is no longer normal, so the characteristic ideals are not defined in a classical way.
We obtain equivariant versions of the results of \cite{BCG+} and \cite{BCG+b} by studying arithmetic complexes whose cohomology groups know the Iwasawa modules.
Our method also gives refined results even in the non-equivariant setting.
For example, a main result of \cite{BCG+b} (which we recall in Theorem \ref{thm:81}) is formulated using the second Chern classes of modules, so it studies the codimension two behavior only.
We generalize this theorem in a form without localization at height two prime ideals, so it also knows higher codimension behavior.
This paper has two kinds of main results, which respectively generalize main results of \cite{BCG+b} and \cite{BCG+}.
In \S \ref{subsec:24} and \S \ref{subsec:15}, we illustrate those main results.
In \S \ref{subsec:49}, we explain the central idea of this paper to prove them.
\subsection{The first main theorem}\label{subsec:24}
We state the first main theorem in a special case; the general statement is Theorem \ref{thm:103}.
In order not to impair the readability, for now we omit to introduce all the notations that are necessary for the precise statement and instead refer to later sections for them.
Let $p$ be a fixed odd prime number.
Let $E$ be a CM-field.
We write $S_p(E)$ (resp.~$S_{\infty}(E)$) for the set of $p$-adic primes (resp.~infinite places) of $E$.
A subset $\mathcal{S} \subset S_p(E)$ is called a ($p$-adic) CM-type if $S_p(E)$ is the disjoint union of $\mathcal{S}$ and $\ol{\mathcal{S}}$, where $\ol{\mathcal{S}}$ denotes the set of the complex conjugates of primes in $\mathcal{S}$.
We assume that there exists a CM-type for $E$, namely, that $E$ satisfies the $p$-ordinary condition.
We write $\tilde{E}$ for the compositum of all $\mathbb{Z}_p$-extensions of $E$.
We consider an abelian extension $K/E$ such that $K$ contains $\tilde{E}(\mu_p)$ and $K/\tilde{E}$ is a finite extension.
Here, $\mu_p$ denotes the group of $p$-th roots of unity.
A key point is that we deal with the ``equivariant'' case, namely, we allow the $p \mid [K: \tilde{E}]$ case.
The articles \cite{BCG+} and \cite{BCG+b} deal with the $p \nmid [K: \tilde{E}]$ case only.
For a finite set $S$ of finite primes of $E$, let $X_{S}(K)$ denote the $S$-ramified Iwasawa module for $K$, which is by definition the Galois group of the maximal abelian $p$-extension of $K$ that is unramified at all primes not lying above $S$.
We also put $X(K) = X_{\emptyset}(K)$.
It is known that $X_S(K)$ is a finitely generated module over the Iwasawa algebra $\mathcal{R} = \mathbb{Z}_p[[\Gal(K/E)]]$.
It is expected that $X_{\mathcal{S}}(K)$ is a torsion $\mathcal{R}$-module (i.e., annihilated by a non-zero-divisor) if $\mathcal{S}$ is a CM-type (see Assumption \ref{ass:41} and Remark \ref{rem:92}).
In this introduction, we assume this property.
Let $\Sigma$ be a finite set of places of $E$ that contains $S_p(E) \cup S_{\infty}(E)$.
We also impose an additional condition labeled as \eqref{eq:60}.
Note that if $p \nmid [K: \tilde{E}]$, this condition is trivial so we may take $\Sigma = S_p(E) \cup S_{\infty}(E)$.
We write $\Sigma_f$ for the set of finite primes in $\Sigma$.
Let $\mathcal{S}$ be a CM-type.
Then we have a natural exact sequence of $\mathcal{R}$-modules
\[
D_{\mathcal{S}}(K) \oplus D_{\ol{\mathcal{S}}}(K) \to X_{\Sigma_f}(K) \to X_{\Sigma_f \setminus S_p(E)}(K) \to 0.
\]
Here, the modules $D_{\mathcal{S}}(K)$ and $D_{\ol{\mathcal{S}}}(K)$ are defined using local information (see \S \ref{subsec:60}).
This sequence implies that the first map offers information on the Iwasawa module $X_{\Sigma_f \setminus S_p(E)}(K)$.
We can check that (Lemma \ref{lem:rank}) the generic ranks over $\mathcal{R}$ of $D_{\mathcal{S}}(K)$, $D_{\ol{\mathcal{S}}}(K)$, and $X_{\Sigma_f}(K)$ are all $d = [E: \mathbb{Q}]/2$.
Then, inspired by \cite{BCG+b},
we take the $d$-th exterior powers, which gives rise to a map
\[
\bigwedge_{\mathcal{R}}^d D_{\mathcal{S}}(K) \oplus \bigwedge_{\mathcal{R}}^d D_{\ol{\mathcal{S}}}(K)
\to \bigwedge_{\mathcal{R}}^d X_{\Sigma_f}(K).
\]
The first main theorem studies the cokernel of this map or, more accurately, the cokernel after taking the quotient of the target module modulo its torsion part, which is denoted by $(-)_{/\tor}$.
We will introduce the other notations used in the statement afterward.
\begin{thm}\label{thm:102}
We have an exact sequence
\[
0 \to \left(\frac{\left(\bigwedge_{\mathcal{R}}^d X_{\Sigma_f}(K) \right)_{/\tor}}
{\bigwedge_{\mathcal{R}}^d D_{\mathcal{S}}(K) + \bigwedge_{\mathcal{R}}^d D_{\ol{\mathcal{S}}}(K) }\right)_{\mathfrak{q}}
\to \frac{\mathcal{R}_{\mathfrak{q}}}{(\mathcal{L}^{\alg}_{\Sigma, \Sigma_f \setminus \mathcal{S}}, \mathcal{L}^{\alg}_{\Sigma, \Sigma_f \setminus \ol{\mathcal{S}}})}
\to \frac{\mathcal{R}_{\mathfrak{q}}}{\Fitt_{\mathcal{R}_{\mathfrak{q}}}(H^2_{\Iw}(K_{\Sigma}/K, \mathbb{Z}_p)^{\iota}_{\mathfrak{q}})}
\to 0
\]
for each prime ideal $\mathfrak{q}$ of $\Lambda$ which is not in the support of
\[
\bigoplus_{\mathfrak{p} \in S_p(E)} Z_{\mathfrak{p}}(K) \oplus \bigoplus_{\mathfrak{p} \in S_p(E)} Z_{\mathfrak{p}}(K)^{\iota}(1).
\]
\end{thm}
We briefly explain the notations (see also \S \ref{subsec:130}):
\begin{itemize}
\item
$\Lambda$ is an auxiliary subalgebra of $\mathcal{R}$ (see \S \ref{subsec:131}), which is introduced only to suitably formulate ``avoiding the supports of $Z_{\mathfrak{p}}(K)$ and $Z_{\mathfrak{p}}(K)^{\iota}(1)$.''
\item
$Z_{\mathfrak{p}}(K)$ is an $\mathcal{R}$-module which is isomorphic to $\mathbb{Z}_p[[\Gal(K/E)/G_{\mathfrak{p}}(K/E)]]$, where $G_{\mathfrak{p}}(K/E)$ denotes the decomposition group (see \S \ref{subsec:60}).
\item
$H^2_{\Iw}(K_{\Sigma}/K, \mathbb{Z}_p)$ is the Iwasawa cohomology group, which is closely related to $X(K)$ (see \S \ref{subsec:60}).
\item
For a subset $S \subset \Sigma_f$ such that $S \cap S_p(E)$ is a CM-type, $\mathcal{L}_{\Sigma, S}^{\alg}$ denotes the algebraic $p$-adic $L$-function (see \S \ref{sec:23}).
Note that in this paper we do not study a main conjecture (i.e., a relation with analytic aspects), but see the final paragraph of \S \ref{sec:23}.
\end{itemize}
In \S \ref{subsec:132}, we will explain how to recover a main theorem of \cite{BCG+b} from our theorem.
A main tool is the additivity of the (second) Chern classes with respect to exact sequences, though we will also need an additional algebraic proposition proved in \S \ref{sec:87}.
Let us also stress that our main theorem does not require localization at height two prime ideals, which is an improvement on \cite{BCG+b}.
It is natural to ask if the exterior powers in Theorem \ref{thm:102} have arithmetic interpretations.
In \cite[Theorem D]{BCG+b}, an answer is given when $d = 2$ (more generally when $n = 2$ and $l = 2$ in the notation of Theorem \ref{thm:103}).
This answer is actually also valid in our equivariant situation without changing the discussion, so we omit the details.
\subsection{The second main theorem}\label{subsec:15}
Contrary to the first main theorem, our second main theorem (Theorem \ref{thm:105}) describes, under more restrictions (including $d = 1$), unramified Iwasawa modules themselves without avoiding any prime ideal $\mathfrak{q}$.
\begin{thm}\label{thm:101}
Suppose that $E$ is an imaginary quadratic field (i.e., $d = 1$).
Let $S_p(E) = \{\mathfrak{p}, \ol{\mathfrak{p}}\}$.
Suppose that we may take $\Sigma = S_p(E) \cup S_{\infty}(E)$ (note that this is unfortunately restrictive; see condition \eqref{eq:60}).
Also, suppose that $X(K)$ is pseudo-null.
Then we have an exact sequence of $\mathcal{R}$-modules
\[
\mathbb{Z}_p(1) \to X(K) \to \frac{\mathcal{R}}{(\mathcal{L}_{\Sigma, \{\mathfrak{p}\}}^{\alg}, \mathcal{L}_{\Sigma, \{\ol{\mathfrak{p}}\}}^{\alg})} \to E^2(X(K))^{\iota}(1) \to 0.
\]
Moreover, the image of the first map from $\mathbb{Z}_p(1)$ is finite.
\end{thm}
Here, we put $E^2(-) = \Ext^2_{\mathcal{R}}(-, \mathcal{R})$ (see \S \ref{subsec:130} for the convention on $\mathcal{R}$-module structure).
When $p \nmid [K: \tilde{E}]$, the character-component of Theorem \ref{thm:101} recovers \cite[Theorem 5.2.1]{BCG+}.
We also have the following corollary of Theorem \ref{thm:101} (see Corollary \ref{cor:73} for a generalization).
Let $X(K)_{\fin}$ denote the maximal finite submodule of $X(K)$.
\begin{cor}\label{cor:72}
In the situation of Theorem \ref{thm:101},
$X(K)_{\fin}$ is a quotient of $\mathbb{Z}_p(1)$ as an $\mathcal{R}$-module.
\end{cor}
\subsection{The key idea of the proof}\label{subsec:49}
We outline the proofs of the main theorems, putting focus on difference from \cite{BCG+}, \cite{BCG+b}.
\subsubsection{Review of methods of \cite{BCG+} and \cite{BCG+b}}
First we explain a key idea of \cite{BCG+} and \cite{BCG+b}.
We assume $p \nmid [K: \tilde{E}]$ and $\Sigma_f = S_p(E)$.
In this case, $\mathcal{R}$ is a finite product of regular local rings.
The following algebraic proposition is a key observation.
Recall that a finitely generated module $M$ over a regular local ring is said to be reflexive if the canonical homomorphism $\alpha_M: M \to M^{**}$ is an isomorphism, where in general $(-)^*$ denotes the linear dual.
It is known that $M^{**}$ is automatically reflexive, and $M^{**}$ is called the reflexive hull of $M$.
\begin{prop}\label{prop:51}
Let $A$ be a regular local ring and $M$ a reflexive $A$-module.
\begin{itemize}
\item[(1)] If $\dim(A) \leq 2$, then $M$ is free.
\item[(2)] (\cite[Lemma A.1]{BCG+}). If the generic rank of $M$ over $A$ is one, then $M$ is free.
\end{itemize}
\end{prop}
Then a key idea of \cite[\S 5]{BCG+} and \cite[\S 4]{BCG+b} is to replace the arithmetic modules $X_{S_p(E)}(K)$ and $D_{\mathcal{S}}(K)$ by their reflexive hulls.
We consider a natural commutative diagram
\begin{equation}\label{eq:55}
\xymatrix{
\bigwedge_{\mathcal{R}}^d D_{\mathcal{S}}(K) \oplus \bigwedge_{\mathcal{R}}^d D_{\ol{\mathcal{S}}}(K) \ar[r] \ar[d]
& \bigwedge_{\mathcal{R}}^d D_{\mathcal{S}}(K)^{**} \oplus \bigwedge_{\mathcal{R}}^d D_{\ol{\mathcal{S}}}(K)^{**} \ar[d]\\
\bigwedge_{\mathcal{R}}^d X_{S_p(E)}(K) \ar[r]
& \bigwedge_{\mathcal{R}}^d X_{S_p(E)}(K)^{**}.
}
\end{equation}
Concerning Theorem \ref{thm:102}:
After taking $(-)_{/\tor}$ of the lower left module, the cokernel of the left vertical arrow is what we want to understand.
Thanks to Proposition \ref{prop:51}(1), the modules on the right are free after localization at primes of height two.
Hence the right vertical arrow is easy to understand, yielding the middle term of the sequence in Theorem \ref{thm:102} that involves the algebraic $p$-adic $L$-functions.
Another important point is that the cokernels of the maps to the reflexive hulls can be explicitly described via spectral sequences arising from duality theorems (Proposition \ref{prop:20}).
Then we can also investigate the cokernels of the horizontal arrows in the above diagram.
By the assumption on $\mathfrak{q}$, we can see that the cokernel of the upper arrow vanishes, and that the cokernel of the lower arrow yields the last term of the sequence in Theorem \ref{thm:102} that involves the Iwasawa cohomology group.
Using these facts, we obtain the result by applying the snake lemma.
Concerning Theorem \ref{thm:101} (so $d = 1$):
The idea is the same, but we can remove several defects in the statement of Theorem \ref{thm:102}, as follows.
By Proposition \ref{prop:51}(2), we do not have to take localization at height two primes.
It is known that $X_{S_p(E)}(K)$ is torsion-free (under our hypothesis that $X(K)$ is pseudo-null), so we do not have to take $(-)_{/\tor}$.
Finally, the vertical arrow between the cokernels of the horizontal arrows can be studied directly, so we do not have to avoid $\mathfrak{q}$ as in Theorem \ref{thm:102}.
These facts imply the result.
\subsubsection{Idea of this paper}
A difficulty in generalization to the $p \mid [K: \tilde{E}]$ case is that, contrary to Proposition \ref{prop:51}, reflexive modules cannot be expected to be free over our algebra $\mathcal{R}$.
Our key idea in this paper is to use, instead of exterior powers of reflexive hulls, determinant modules of perfect complexes, which are free of rank one by definition.
More concretely, in Proposition \ref{prop:30}, we introduce a natural homomorphism
\[
\Psi_C: \bigwedge_{\mathcal{R}}^d H^1(C) \to \Det_{\mathcal{R}}^{-1}(C)
\]
for a perfect complex $C$ over $\mathcal{R}$ that satisfies several conditions.
The kernel of $\Psi_C$ is precisely the torsion part, and a description of the cokernel of $\Psi_C$ is also obtained.
Note that, contrary to the reflexive hull, the cokernel of $\Psi_C$ is not necessarily pseudo-null, but this does not matter for our purpose.
The arithmetic modules $X_{\Sigma_f}(K)$ and $D_{\mathcal{S}}(K)$ have interpretations as the first cohomology of perfect complexes $C_{\Sigma, \Sigma_f}[1]$ and $C_{\mathcal{S}}^{\loc}$ respectively (\S \ref{subsec:60}), and we can apply the general construction of the maps to the determinant modules.
As a consequence, we obtain a commutative diagram
\begin{equation}\label{eq:56}
\xymatrix{
\bigwedge_{\mathcal{R}}^d D_{\mathcal{S}}(K) \oplus \bigwedge_{\mathcal{R}}^d D_{\ol{\mathcal{S}}}(K) \ar[r] \ar[d]
& \Det_{\mathcal{R}}^{-1}(C_{\mathcal{S}}^{\loc}) \oplus \Det_{\mathcal{R}}^{-1}(C_{\ol{\mathcal{S}}}^{\loc}) \ar[d]\\
\bigwedge_{\mathcal{R}}^d X_{\Sigma_f}(K) \ar[r]
& \Det_{\mathcal{R}}(C_{\Sigma, \Sigma_f})
}
\end{equation}
(see Proposition \ref{prop:46}).
This diagram plays the same role as \eqref{eq:55}.
The cokernel of the right vertical arrow can be described by the algebraic $p$-adic $L$-functions (this is almost the definition), yielding the middle term of Theorem \ref{thm:102}.
The cokernels of the horizontal arrows can be described by using duality theorems.
By these facts, we obtain Theorem \ref{thm:102} again by the snake lemma.
The proof of Theorem \ref{thm:101}, however, requires reflexive hulls.
A main contribution of this paper is, under the assumption of Theorem \ref{thm:101}, to prove that the reflexive hulls of $X_{S_p(E)}(K)$ and $D_{\mathcal{S}}(K)$ are free of rank one.
In fact, we will show that the reflexive hulls are isomorphic to the determinant modules that appeared in \eqref{eq:56} (recall $d = 1$).
Then the proof proceeds similarly as in the $p \nmid [K: \tilde{E}]$ case.
\subsection{Organization of this paper}\label{subsec:141}
In \S \ref{sec:02}, we will establish the key algebraic tool for the main theorems, as explained in \S \ref{subsec:49}.
In \S \ref{sec:14}, we review several facts on arithmetic complexes, including duality theorems.
In \S \ref{sec:23}, we introduce the algebraic $p$-adic $L$-functions, which are used in the statements of the main theorems.
In \S \S \ref{sec:33}, \ref{sec:11}, we prove the two main theorems.
\S \ref{sec:87} is an appendix on some properties of Fitting ideals that are used in \S \ref{subsec:132}.
\subsection{A list of notations}\label{subsec:130}
For a field $k$, let $\ol{k}$ denote an algebraic closure of $k$.
The superscript $(-)^{\vee}$ denotes the Pontryagin dual for modules which are either compact or discrete.
For a (commutative) ring $A$, let $\dim(A)$ denote the Krull dimension of $A$.
Let $Q(A)$ denote the total ring of fractions of $A$.
For a prime ideal $\mathfrak{p}$ of $A$, let $\height(\mathfrak{p})$ denote its height.
Let $M$ be a finitely generated module over a noetherian ring $A$.
For an integer $l \geq 0$, let $\bigwedge_{A}^l M$ denote the $l$-th exterior power of $M$.
Let $\pd_A(M)$ denote the projective dimension of $M$.
Let $\Ann_A(M)$ denote the annihilator ideal of $M$.
Let $\Fitt_A(M)$ denote the (initial) Fitting ideal of $M$.
By definition, if $A^a \overset{H}{\to} A^b \to M \to 0$ is a finite presentation of $M$, then $\Fitt_A(M)$ is generated by all $b \times b$ minors of the presentation matrix of $H$.
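As a quick illustration of this definition (a toy computation of our own, with arbitrary choices of $f$ and $g$, not objects from this paper), a direct sum of two cyclic modules over the one-variable Iwasawa algebra has Fitting ideal equal to the product of the annihilators of the summands:

```latex
% Toy example: A = \mathbb{Z}_p[[T]] and M = A/(f) \oplus A/(g) with f, g \in A \setminus \{0\}.
% A finite presentation and the resulting Fitting ideal:
\[
A^2 \overset{H}{\to} A^2 \to M \to 0,
\qquad
H = \begin{pmatrix} f & 0 \\ 0 & g \end{pmatrix},
\qquad
\Fitt_A(M) = (fg).
\]
% The unique 2x2 minor of H is fg. Note the general inclusion
% \Fitt_A(M) \subset \Ann_A(M) = (f) \cap (g),
% which is an equality here exactly when f and g are coprime in the UFD A.
```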
We put $E^i(M) = \Ext^i_{A}(M, A)$ for integers $i \geq 0$.
In particular, we define the linear dual by $M^* = E^0(M) = \Hom_A(M, A)$.
Then $E^i(M)$ is an $A$-module; when $i = 0$, we define the module structure by $(a\phi)(x) = \phi(a x)$ for $a \in A$, $\phi \in \Hom_{A}(M, A)$, and $x \in M$.
Note that this is the opposite of the convention adopted in \cite{BCG+} and \cite{BCG+b}.
Let $\mathcal{G}$ be a compact abelian group which contains an open subgroup $\Gamma$ that is (non-canonically) isomorphic to $\mathbb{Z}_p^d$ for some $d \geq 1$.
Let $M$ be a finitely generated module over $\mathbb{Z}_p[[\mathcal{G}]]$.
We say that $M$ is torsion over $\mathbb{Z}_p[[\mathcal{G}]]$ if it is annihilated by a non-zero-divisor of $\mathbb{Z}_p[[\mathcal{G}]]$.
We also say that $M$ is pseudo-null if $M_{\mathfrak{Q}} = 0$ for any prime $\mathfrak{Q}$ of $\mathbb{Z}_p[[\mathcal{G}]]$ with $\height(\mathfrak{Q}) \leq 1$.
Then being torsion (resp.~pseudo-null) over $\mathbb{Z}_p[[\mathcal{G}]]$ is equivalent to being torsion (resp.~pseudo-null) over $\mathbb{Z}_p[[\Gamma]]$.
Therefore, no confusion about the base ring arises for these properties.
Let $M_{\tor}$ (resp.~$M_{\PN}$, resp.~$M_{\fin}$) denote the maximal torsion (resp.~pseudo-null, resp.~finite) submodule of $M$.
We also put $M_{/\bullet} = M/M_{\bullet}$ for $\bullet \in \{\tor, \PN, \fin\}$.
\section{The key algebraic propositions}\label{sec:02}
As discussed in \S \ref{subsec:49}, this section constitutes the technical heart of this paper.
\subsection{Preliminaries on perfect complexes}\label{subsec:a21}
For a noetherian ring $A$, let $\De^{\perf}(A)$ be the derived category of perfect complexes over $A$.
For integers $a \leq b$, let $D^{[a, b]}(A)$ be the full subcategory of $\De^{\perf}(A)$ that consists of complexes
which are quasi-isomorphic to complexes of the form
\[
[\cdots \to 0 \to C^a \to \dots \to C^b \to 0 \to \cdots],
\]
concentrated in degrees $a, a+1, \dots, b$, where $C^i$ is finitely generated and projective for each $a \leq i \leq b$.
For each $C \in \De^{\perf}(A)$ which is quasi-isomorphic to the displayed complex, we define the Euler characteristic of $C$ by
\[
\chi_{A}(C) = \sum_i (-1)^{i-1} \rank_A(C^i).
\]
Here, $\rank_A(-)$ denotes the (locally constant) rank for projective modules.
The determinant module of $C$ is defined by
\[
\Det_{A}(C) = \bigotimes_i \ddet_{A}^{(-1)^i}(C^i),
\]
where, for a finitely generated projective $A$-module $F$, we put
\[
\ddet_{A}(F) = \bigwedge_{A}^{\rank_A(F)} F,
\qquad \ddet_{A}^{-1}(F) = \ddet_{A}(F)^*
\]
(recall that $(-)^*$ denotes the linear dual).
We also put
\[
\Det_{A}^{-1}(C) = \Det_{A}(C)^*.
\]
Since this causes no problems, we ignore the degrees on determinant modules throughout this paper.
Given an $A$-algebra $A'$, for $C \in D^{[a, b]}(A)$, let $A' \otimes^{\mathbb{L}}_A C \in D^{[a, b]}(A')$ denote the derived tensor product.
Then we have a natural isomorphism $A' \otimes_A \Det_A(C) \simeq \Det_{A'}(A' \otimes^{\mathbb{L}}_A C)$.
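As a minimal sanity check on these conventions (a toy example of our own, with $A = \mathbb{Z}_p$), note that the sign $(-1)^{i-1}$ makes $\chi_A(C)$ count the generic rank of $H^1$ rather than that of $H^0$:

```latex
% Toy example: C = [\mathbb{Z}_p \overset{p}{\to} \mathbb{Z}_p] in degrees 0, 1, over A = \mathbb{Z}_p.
% Then H^0(C) = 0 and H^1(C) = \mathbb{Z}/p, while
\[
\chi_A(C) = (-1)^{0-1} \cdot 1 + (-1)^{1-1} \cdot 1 = 0,
\qquad
\Det_A(C) = \ddet_A(C^0) \otimes_A \ddet_A^{-1}(C^1) \simeq A.
\]
% In general, for C \in D^{[0,1]}(A) with H^0(C) = 0 and A a domain, one has
% \chi_A(C) = \rank_A(C^1) - \rank_A(C^0), the generic rank of H^1(C);
% the determinant module is always free of rank one, whatever the cohomology is.
```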
\subsection{The key propositions}\label{subsec:a22}
Let $\mathcal{R}$ be a (commutative) ring which contains a regular local ring $\Lambda$ such that $\mathcal{R}$ is free of finite rank over $\Lambda$ and moreover there exists an isomorphism
\[
\Hom_{\Lambda}(\mathcal{R}, \Lambda) \simeq \mathcal{R}
\]
as $\mathcal{R}$-modules.
Note that this implies that $\mathcal{R}$ is Gorenstein.
Moreover, we obtain isomorphisms $\Ext_{\mathcal{R}}^i(M, \mathcal{R}) \simeq \Ext_{\Lambda}^i(M, \Lambda)$ for $\mathcal{R}$-modules $M$, so no confusion about the base ring occurs when we write $E^i(M)$.
The following is a special case of Proposition \ref{prop:30}.
\begin{prop}\label{prop:31}
Let $C \in D^{[0, 1]}(\mathcal{R})$ with $H^0(C) = 0$ and set $l = \chi_{\mathcal{R}}(C)$.
Then we have a natural homomorphism
\[
\Psi_C: \bigwedge_{\mathcal{R}}^l H^1(C) \to \Det_{\mathcal{R}}^{-1}(C)
\]
which satisfies the following.
\begin{itemize}
\item[(1)]
We have $\Ker(\Psi_C) = \left( \bigwedge_{\mathcal{R}}^l H^1(C)\right)_{\tor}$.
\item[(2)]
We have an isomorphism
\[
\Coker(\Psi_C) \simeq \mathcal{R} / \Fitt_{\mathcal{R}}(E^1(H^1(C))).
\]
\end{itemize}
\end{prop}
\begin{proof}
We consider a natural homomorphism
\[
\ol{\Psi_C}: \bigwedge_{\mathcal{R}}^l H^1(C)
\to Q(\mathcal{R}) \otimes_{\mathcal{R}} \bigwedge_{\mathcal{R}}^l H^1(C)
\simeq \Det_{Q(\mathcal{R})}^{-1}(Q(\mathcal{R}) \otimes^{\mathbb{L}}_{\mathcal{R}} C)
\simeq Q(\mathcal{R}) \otimes_{\mathcal{R}} \Det_{\mathcal{R}}^{-1}(C),
\]
where the middle isomorphism follows from $Q(\mathcal{R}) \otimes^{\mathbb{L}}_{\mathcal{R}} C \in D^{[1, 1]}(Q(\mathcal{R}))$.
We shall show that $\Image(\ol{\Psi_C}) \subset \Det_{\mathcal{R}}^{-1}(C)$, and define $\Psi_C$ as the induced homomorphism.
In fact, we will give a more explicit description of $\Psi_C$.
Let us take a quasi-isomorphism $C \simeq [C^0 \overset{\alpha}{\to} C^1]$ with $C^0$ and $C^1$ finitely generated and projective.
Put $a = \rank_{\mathcal{R}}(C^0)$, so $\rank_{\mathcal{R}}(C^1) = a+l$.
We consider the exact sequence
\[
0 \to C^0 \overset{\alpha}{\to} C^1 \overset{\beta}{\to} H^1(C) \to 0.
\]
Then we are able to define a homomorphism
\[
\Psi_C': \bigwedge_{\mathcal{R}}^{a} C^0 \otimes_{\mathcal{R}} \bigwedge_{\mathcal{R}}^l H^1(C)
\to \bigwedge_{\mathcal{R}}^{a + l} C^1
\]
by
\[
(x_1 \wedge \dots \wedge x_a) \otimes (\beta(y_1) \wedge \dots \wedge \beta(y_l))
\mapsto \alpha(x_1) \wedge \dots \wedge \alpha(x_a) \wedge y_1 \wedge \dots \wedge y_l
\]
for $x_1, \dots, x_a \in C^0$ and $y_1, \dots, y_l \in C^1$.
The well-definedness of $\Psi_C'$ follows from $\bigwedge_{\mathcal{R}}^{a + 1} C^0 = 0$ as in \cite[Lemma 2.1]{Sak18}.
Applying $\Det_{\mathcal{R}}^{-1}(C^0) \otimes_{\mathcal{R}} (-)$ to both sides, we obtain a homomorphism
\[
\bigwedge_{\mathcal{R}}^l H^1(C)
\to \Hom_{\mathcal{R}} \left( \bigwedge_{\mathcal{R}}^{a} C^0, \bigwedge_{\mathcal{R}}^{a + l} C^1 \right) \simeq \Det_{\mathcal{R}}^{-1}(C).
\]
By construction, the homomorphism $\ol{\Psi_C}$ coincides with this map (followed by the canonical inclusion).
This implies $\Image(\ol{\Psi_C}) \subset \Det_{\mathcal{R}}^{-1}(C)$ as claimed.
Then assertion (1) is clear from the construction of $\Psi_C$.
For assertion (2), we first note an isomorphism
\[
\Coker(\Psi_C)
\simeq \Coker(\Psi_C').
\]
Then we consider maps
\[
\Det_{\mathcal{R}}(C^0) \otimes_{\mathcal{R}} \bigwedge_{\mathcal{R}}^l C^1
\overset{\id \otimes \bigwedge^l \beta}{\twoheadrightarrow} \Det_{\mathcal{R}}(C^0) \otimes_{\mathcal{R}} \bigwedge_{\mathcal{R}}^l H^1(C)
\overset{\Psi_C'}{\to} \Det_{\mathcal{R}}(C^1).
\]
By a direct computation after choosing bases of $C^0$ and $C^1$, we see that $\Image(\Psi_C')$
in $\Det_{\mathcal{R}}(C^1) \simeq \mathcal{R}$ is generated by the $a \times a$ minors of the matrix $\alpha$.
By the induced exact sequence
\[
0 \to H^1(C)^* \to (C^1)^* \overset{\alpha^*}{\to} (C^0)^* \to E^1(H^1(C)) \to 0,
\]
and by the definition of Fitting ideals, we obtain
\[
\Coker(\Psi_C') \simeq \mathcal{R} / \Fitt_{\mathcal{R}}(E^1(H^1(C))).
\]
This completes the proof.
\end{proof}
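A small worked instance of the proposition (our own illustration, taking $\Lambda = \mathcal{R} = \mathbb{Z}_p$, which satisfies the hypotheses of \S \ref{subsec:a22}) may clarify the construction:

```latex
% Let C = [\mathbb{Z}_p \overset{\alpha}{\to} \mathbb{Z}_p^2] with \alpha(x) = (px, 0),
% so a = 1, l = \chi(C) = 1, H^0(C) = 0, and H^1(C) \simeq \mathbb{Z}/p \oplus \mathbb{Z}_p.
% With e_1, e_2 the standard basis of C^1, the map \Psi_C' is given by
\[
1 \otimes \beta(y) \mapsto \alpha(1) \wedge y = p\, e_1 \wedge y \qquad (y \in C^1),
\]
% whose image in \bigwedge^2_{\mathbb{Z}_p} C^1 \simeq \mathbb{Z}_p is p\mathbb{Z}_p,
% the ideal of 1x1 minors of \alpha. Accordingly,
% \Ker(\Psi_C) = H^1(C)_{\tor} \simeq \mathbb{Z}/p and
\[
\Coker(\Psi_C) \simeq \mathbb{Z}/p \simeq \mathbb{Z}_p / \Fitt_{\mathbb{Z}_p}(E^1(H^1(C))),
\]
% since E^1(H^1(C)) = \Ext^1_{\mathbb{Z}_p}(\mathbb{Z}/p, \mathbb{Z}_p) \simeq \mathbb{Z}/p
% has Fitting ideal (p), in accordance with assertions (1) and (2).
```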
For a prime $\mathfrak{q}$ of $\Lambda$, the subscript $(-)_{\mathfrak{q}}$ denotes the localization with respect to the multiplicative set $\Lambda \setminus \mathfrak{q}$.
The following is the key proposition, which we prove by using Proposition \ref{prop:31}.
\begin{prop}\label{prop:30}
Let $C$ be a perfect complex over $\mathcal{R}$ such that $H^i(C)$ is pseudo-null for any $i \neq 1$.
We set $l = \chi_{\mathcal{R}}(C)$.
Then we have a natural homomorphism
\[
\Psi_C: \bigwedge_{\mathcal{R}}^l H^1(C) \to \Det_{\mathcal{R}}^{-1}(C)
\]
which satisfies the following.
\begin{itemize}
\item[(1)]
We have $\Ker(\Psi_C) = \left( \bigwedge_{\mathcal{R}}^l H^1(C)\right)_{\tor}$.
\item[(2)]
Let $\mathfrak{q}$ be a prime ideal of $\Lambda$ such that $\mathcal{R}_{\mathfrak{q}} \otimes^{\mathbb{L}}_{\mathcal{R}} C \in D^{[0, 1]}(\mathcal{R}_{\mathfrak{q}})$ and that $H^0(C)_{\mathfrak{q}} = 0$.
Then we have an isomorphism
\[
\Coker(\Psi_C)_{\mathfrak{q}} \simeq \mathcal{R}_{\mathfrak{q}} / \Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^1(H^1(C))_{\mathfrak{q}}).
\]
\item[(3)]
Let $\mathfrak{q}$ be a prime ideal of $\Lambda$ such that $\mathcal{R}_{\mathfrak{q}} \otimes^{\mathbb{L}}_{\mathcal{R}} C \in D^{[1, 2]}(\mathcal{R}_{\mathfrak{q}})$ and that $\pd_{\mathcal{R}_{\mathfrak{q}}}(H^2(C)_{\mathfrak{q}}) \leq 2$.
Then $\left(\Psi_C\right)_{\mathfrak{q}}$ is bijective.
\end{itemize}
\end{prop}
\begin{proof}
Since $H^i(C)$ is torsion for any $i \neq 1$, as in the proof of Proposition \ref{prop:31}, we have a natural homomorphism
\[
\ol{\Psi_C}: \bigwedge_{\mathcal{R}}^l H^1(C)
\to
Q(\mathcal{R}) \otimes_{\mathcal{R}} \Det_{\mathcal{R}}^{-1}(C).
\]
We claim that $\Image(\ol{\Psi_C}) \subset \Det_{\mathcal{R}}^{-1}(C)$.
Let $\mathfrak{q}$ be any prime ideal of $\Lambda$ with $\height(\mathfrak{q}) \leq 1$.
By the assumption, we have $H^i(C)_{\mathfrak{q}} = 0$ for $i \neq 1$.
Hence the complex $\mathcal{R}_{\mathfrak{q}} \otimes^{\mathbb{L}}_{\mathcal{R}} C$ satisfies the condition in Proposition \ref{prop:31} over $\mathcal{R}_{\mathfrak{q}}$ (since $\dim(\mathcal{R}_{\mathfrak{q}}) \leq 1$).
Therefore, Proposition \ref{prop:31} implies that $\Image(\ol{\Psi_C})_{\mathfrak{q}} \subset \Det_{\mathcal{R}}^{-1}(C)_{\mathfrak{q}}$.
Since $\Det_{\mathcal{R}}^{-1}(C)$ is reflexive, this proves $\Image(\ol{\Psi_C}) \subset \Det_{\mathcal{R}}^{-1}(C)$.
Therefore, we are able to define $\Psi_C$ as the homomorphism induced by $\ol{\Psi_C}$.
Now assertion (1) is clear and assertion (2) follows directly from Proposition \ref{prop:31}(2).
We show assertion (3).
By the assumptions, the module $H^1(C)_{\mathfrak{q}}$ is projective over $\mathcal{R}_{\mathfrak{q}}$ of rank $l$.
Therefore, $\left(\Psi_C\right)_{\mathfrak{q}}$ is a homomorphism between free modules of rank one.
For any height one prime $\mathfrak{q}'$ of $\Lambda$ contained in $\mathfrak{q}$, since $\mathcal{R}_{\mathfrak{q}'} \otimes^{\mathbb{L}}_{\mathcal{R}} C \in D^{[1, 1]}(\mathcal{R}_{\mathfrak{q}'})$ and $H^1(C)_{\mathfrak{q}'}$ is projective, assertion (2) implies that $\left(\Psi_C\right)_{\mathfrak{q}'}$ is surjective.
Hence we see that $\left(\Psi_C \right)_{\mathfrak{q}}$ is bijective.
\end{proof}
\subsection{Relations with reflexive hulls}\label{subsec:a23}
Let $\mathcal{R} \supset \Lambda$ be as in \S \ref{subsec:a22}.
Recall that, for a finitely generated $\mathcal{R}$-module $M$, the reflexive hull is defined as $M^{**}$ together with the natural map $\alpha_M: M \to M^{**}$.
It is known that the kernel of $\alpha_M$ is $M_{\tor}$ and the cokernel of $\alpha_M$ is pseudo-null, and that these properties characterize $\alpha_M$ up to isomorphism.
For that reason, by slight abuse of notation, we say that a homomorphism $f$ from $M$ to a reflexive $\mathcal{R}$-module is the reflexive hull of $M$ if the kernel of $f$ is $M_{\tor}$ and the cokernel of $f$ is pseudo-null.
The following lemma gives a relation between Proposition \ref{prop:30} and the reflexive hull.
\begin{lem}\label{lem:40}
In the situation of Proposition \ref{prop:30},
the map $\Psi_C$ is the reflexive hull of $\bigwedge_{\mathcal{R}}^l H^1(C)$ if and only if $H^1(C)_{\tor}$ is pseudo-null.
\end{lem}
\begin{proof}
By definition, $\Psi_C$ is the reflexive hull of $\bigwedge_{\mathcal{R}}^l H^1(C)$ if and only if $\Coker(\Psi_C)$ is pseudo-null.
For each prime $\mathfrak{q}$ of $\Lambda$ with $\height(\mathfrak{q}) \leq 1$, we have
\begin{align}
\Coker(\Psi_C)_{\mathfrak{q}} = 0
& \Leftrightarrow \Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^1(H^1(C))_{\mathfrak{q}}) = \mathcal{R}_{\mathfrak{q}}\\
& \Leftrightarrow E^1(H^1(C))_{\mathfrak{q}} = 0\\
&\Leftrightarrow \text{$H^1(C)_{\mathfrak{q}}$ is projective over $\Lambda_{\mathfrak{q}}$}\\
&\Leftrightarrow \text{$H^1(C)_{\mathfrak{q}}$ is torsion-free over $\Lambda_{\mathfrak{q}}$},
\end{align}
where the first equivalence follows from Proposition \ref{prop:30}(2) and the final one from the fact that $\Lambda_{\mathfrak{q}}$ is either a field or a discrete valuation ring.
Hence the lemma follows.
\end{proof}
We also mention a relation with the notion of exterior power biduals, which was developed by Burns, Sano, and Sakamoto in the theory of Euler systems (e.g., \cite{BS19}, \cite{Sak18}).
In general, for a finitely generated $\mathcal{R}$-module $M$, we define the $l$-th exterior power bidual by
\[
\bigcap_{\mathcal{R}}^l M = \left( \bigwedge_{\mathcal{R}}^l M^* \right)^*.
\]
Then we have a natural map
\[
\alpha_M^l: \bigwedge_{\mathcal{R}}^l M \to \bigcap_{\mathcal{R}}^l M
\]
defined by
\[
x_1 \wedge \dots \wedge x_l \mapsto [\phi_1 \wedge \dots \wedge \phi_l \mapsto \det(\phi_i(x_j))_{ij}]
\]
for $x_1, \dots, x_l \in M$ and $\phi_1, \dots, \phi_l \in M^*$.
For example, when $l = 1$, the map $\alpha_M^1$ is identified with the natural map $\alpha_M: M \to M^{**}$.
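To see that $\alpha_M^l$ can fail to be injective for $l \geq 2$ even when $M$ is torsion-free (an illustrative example of our own, with $\mathcal{R} = \Lambda = \mathbb{Z}_p[[T]]$):

```latex
% Example: M = (p, T) \subset \Lambda = \mathbb{Z}_p[[T]], an ideal of generic rank one.
% From the Koszul presentation \Lambda \overset{(T, -p)}{\to} \Lambda^2 \to M \to 0
% (sending e_1 \mapsto p, e_2 \mapsto T), one computes
\[
\bigwedge_{\Lambda}^2 M \simeq \Lambda/(p, T) \neq 0,
\qquad
\bigcap_{\Lambda}^2 M = \Bigl( \bigwedge_{\Lambda}^2 M^* \Bigr)^* = 0,
\]
% since M^* \simeq \Lambda is free of rank one, so \bigwedge^2 M^* = 0.
% Hence \alpha_M^2 is the zero map, with nonzero (pseudo-null) kernel \Lambda/(p, T).
```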
Now we prove the following.
\begin{prop}\label{prop:43}
In the situation of Proposition \ref{prop:30}, if moreover $H^1(C)_{\tor}$ is pseudo-null, then we have a natural isomorphism
\[
\bigcap_{\mathcal{R}}^l H^1(C) \simeq \Det^{-1}_{\mathcal{R}}(C).
\]
In particular, $\bigcap_{\mathcal{R}}^l H^1(C)$ is free of rank one over $\mathcal{R}$.
\end{prop}
\begin{proof}
Since $H^i(C)$ is torsion for any $i \neq 1$ and the exterior power bidual is torsion-free, we have a natural injective map
\[
\bigcap_{\mathcal{R}}^l H^1(C)
\hookrightarrow Q(\mathcal{R}) \otimes_{\mathcal{R}} \bigcap_{\mathcal{R}}^l H^1(C)
\simeq \Det_{Q(\mathcal{R})}^{-1}(Q(\mathcal{R}) \otimes^{\mathbb{L}}_{\mathcal{R}} C)
\simeq Q(\mathcal{R}) \otimes_{\mathcal{R}} \Det_{\mathcal{R}}^{-1}(C).
\]
We shall show that the image of this composite map coincides with $\Det_{\mathcal{R}}^{-1}(C)$.
For each prime ideal $\mathfrak{q}$ with $\height(\mathfrak{q}) \leq 1$, the assumption and the Auslander-Buchsbaum formula imply that $H^1(C)_{\mathfrak{q}}$ is free.
Hence the claim is true after localization at $\mathfrak{q}$.
Since both $\bigcap_{\mathcal{R}}^l H^1(C)$ and $\Det_{\mathcal{R}}^{-1}(C)$ are reflexive, we obtain the claim.
\end{proof}
The $l = 1$ case of this proposition will be used to prove Theorem \ref{thm:101}.
The author does not think that we can essentially simplify the proof of the proposition even if we assume $l = 1$.
Note that if $H^1(C)_{\tor}$ is pseudo-null, Proposition \ref{prop:43} implies that we can use $\bigcap_{\mathcal{R}}^l H^1(C)$ instead of $\Det^{-1}_{\mathcal{R}}(C)$ in \eqref{eq:56}.
However, we will have to deal with the case where $H^1(C)_{\tor}$ is not pseudo-null, and then this alternative is not available.
\section{Facts on complexes}\label{sec:14}
In this section, we review facts on arithmetic complexes.
The statements have essentially already appeared in \cite{BCG+} and \cite{BCG+b}, but we need slight modifications, for example, replacing $S_p(E)$ by the larger set $\Sigma_f$.
For more details, see also Nekov\'{a}\v{r}'s book \cite{Nek06}.
We consider the following situation.
Let $E$ be a number field (in this section we do not assume that $E$ is a CM-field).
Let $K$ be an abelian extension of $E$ that is a $\mathbb{Z}_p^r$-extension ($r \geq 1$) of a number field.
Suppose that $K$ contains $\mu_{p^{\infty}} = \bigcup_{n} \mu_{p^n}$.
Put $\mathcal{R} = \mathbb{Z}_p[[\Gal(K/E)]]$.
\subsection{Generalities}\label{subsec:59}
As in the introduction, we write $S_p(E)$ and $S_{\infty}(E)$ for the sets of $p$-adic primes and infinite places of $E$, respectively.
We define $S_{\ram, p}(K/E)$ as the set of finite primes $v \not \in S_p(E)$ of $E$ such that the ramification index of $v$ in $K/E$ is divisible by $p$.
Let $\Sigma$ be a finite set of places of $E$ such that
\begin{equation}\label{eq:60}
\Sigma \supset S_{\infty}(E) \cup S_p(E) \cup S_{\ram, p}(K/E).
\end{equation}
For example, if $p \nmid [K: \tilde{E}]$, then $S_{\ram, p}(K/E) = \emptyset$, so we only have to assume $\Sigma \supset S_{\infty}(E) \cup S_p(E)$.
This condition on $\Sigma$ will be needed in Proposition \ref{prop:a23} below.
For such a set $\Sigma$, we put $\Sigma_f = \Sigma \setminus S_{\infty}(E)$.
We define $K_{\Sigma}$ as the maximal algebraic extension of $K$ which is unramified at any prime not lying above a prime in $\Sigma$.
Let $T$ be a finitely generated free $\mathbb{Z}_p$-module equipped with an action of $\Gal(K_{\Sigma}/E)$.
(We will need only the case $T = \mathbb{Z}_p(1)$ in this paper.)
Let $\derR\Gamma_{\Iw}(K_{\Sigma}/K, T)$ be the complex over $\mathcal{R}$ that computes the global Iwasawa cohomology groups
\[
H^i_{\Iw}(K_{\Sigma}/K, T) = \varprojlim_{F} H^i(K_{\Sigma}/F, T),
\]
where $F$ runs over all intermediate number fields in $K/E$.
As local counterparts, for each finite prime $v$ of $E$, let $\derR\Gamma_{\Iw}(K_{v}, T)$ be the complex over $\mathcal{R}$ that computes the local Iwasawa cohomology groups
\[
H^i_{\Iw}(K_{v}, T) = \varprojlim_{F} H^i(F \otimes_E E_{v}, T).
\]
The actual construction of these complexes will be given in the proof of the next proposition.
\begin{prop}\label{prop:a23}
We have
\[
\derR\Gamma_{\Iw}(K_{\Sigma}/K, T) \in D^{[0, 2]}(\mathcal{R})
\]
and
\[
\derR\Gamma_{\Iw}(K_{v}, T) \in D^{[0, 2]}(\mathcal{R})
\]
for each finite prime $v$ of $E$.
\end{prop}
\begin{proof}
We can define the complexes concerned by
\[
\derR\Gamma_{\Iw}(K_{\Sigma}/K, T) = \derR\Gamma(K_{\Sigma}/E, T \otimes_{\mathbb{Z}_p} \mathcal{R})
\]
and
\[
\derR\Gamma_{\Iw}(K_v, T) = \derR\Gamma(E_v, T \otimes_{\mathbb{Z}_p} \mathcal{R}),
\]
where $T \otimes_{\mathbb{Z}_p} \mathcal{R}$ is regarded as a Galois representation of $\Gal(\ol{E}/E)$ over $\mathcal{R}$
by defining the action of the Galois group on $\mathcal{R}$ via
\[
\Gal(\ol{E}/E) \twoheadrightarrow \Gal(K/E) \hookrightarrow \mathcal{R}^{\times} \overset{\iota}{\to} \mathcal{R}^{\times},
\]
where the first two maps are the natural ones and $\iota$ denotes the involution which inverts every group element.
Then Shapiro's lemma implies the above-mentioned descriptions of the cohomology groups as inverse limits.
By \cite[Proposition 4.2.9]{Nek06}, it is therefore enough to show that $\cd_p \Gal(K_{\Sigma}/E) \leq 2$ and $\cd_p \Gal(\ol{E_v}/E_v) \leq 2$, where $\cd_p$ denotes the $p$-cohomological dimension.
The latter claim $\cd_p \Gal(\ol{E_v}/E_v) \leq 2$ is a well-known fact (see \cite[Theorem (7.1.8)]{NSW08}).
For $\cd_p \Gal(K_{\Sigma}/E) \leq 2$, let $E_1$ be the maximal intermediate number field of $K/E$ such that $p \nmid [E_1: E]$.
Thanks to assumption \eqref{eq:60}, the extension $K/E_1$ is unramified outside $\Sigma$.
Therefore, $K_{\Sigma}$ coincides with the maximal algebraic extension of $E_1$ which is unramified outside $\Sigma$.
Then we have $\cd_p \Gal(K_{\Sigma}/E_1) \leq 2$ (see \cite[Proposition (8.1.18)]{NSW08}).
Since $p \nmid [E_1:E]$, we obtain $\cd_p \Gal(K_{\Sigma}/E) \leq 2$ (see \cite[Proposition (3.3.5)]{NSW08}), as claimed.
\end{proof}
\begin{defn}\label{defn:32}
For each subset $S \subset \Sigma_f$, we define a complex $C_{\Sigma, S}(T)$ by a triangle
\begin{equation}\label{eq:16}
C_{\Sigma, S}(T) \to
\derR\Gamma_{\Iw}(K_{\Sigma}/K, T)
\to \bigoplus_{v \in S} \derR\Gamma_{\Iw}(K_{v}, T),
\end{equation}
where the second map is given by the natural restriction morphisms.
\end{defn}
For example, we have
\[
C_{\Sigma, \emptyset}(T) \simeq \derR\Gamma_{\Iw}(K_{\Sigma}/K, T).
\]
By the Poitou-Tate duality, we also have a triangle
\begin{equation}\label{eq:17}
C_{\Sigma, S}(T)
\to \bigoplus_{v \in \Sigma_f \setminus S} \derR\Gamma_{\Iw}(K_{v}, T)
\to \derR\Gamma(K_{\Sigma}/K, T^{\vee}(1))^{\vee}[-2],
\end{equation}
so in particular
\[
C_{\Sigma, \Sigma_f}(T) \simeq \derR\Gamma(K_{\Sigma}/K, T^{\vee}(1))^{\vee}[-3].
\]
The following duality theorem plays an important role.
\begin{prop}\label{prop:44}
Let $T^{\sharp}$ be the $\mathbb{Z}_p$-linear dual of $T$.
\begin{itemize}
\item[(1)]
For each subset $S \subset \Sigma_f$, we have a quasi-isomorphism
\[
\derR\Hom_{\mathcal{R}}(C_{\Sigma, S}(T), \mathcal{R})^{\iota} \simeq C_{\Sigma, \Sigma_f \setminus S}(T^{\sharp}(1))[3].
\]
\item[(2)]
For each finite prime $v$ of $E$, we have a quasi-isomorphism
\[
\derR\Hom_{\mathcal{R}}(\derR\Gamma_{\Iw}(K_v, T), \mathcal{R})^{\iota} \simeq \derR\Gamma_{\Iw}(K_v, T^{\sharp}(1))[2].
\]
\end{itemize}
\end{prop}
\begin{proof}
See \cite[Proposition 2.1]{BCG+b}.
\end{proof}
\begin{prop}\label{prop:18}
If $S = \emptyset$, we have $C_{\Sigma, \emptyset}(T) \in D^{[0, 2]}(\mathcal{R})$.
If $S = \Sigma_f$, we have $C_{\Sigma, \Sigma_f}(T) \in D^{[1,3]}(\mathcal{R})$.
If $\emptyset \subsetneqq S \subsetneqq \Sigma_f$, we have $C_{\Sigma, S}(T) \in D^{[1, 2]}(\mathcal{R})$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:a23}, we have $C_{\Sigma, S}(T) \in D^{[0, 3]}(\mathcal{R})$ in general.
If $S \subsetneqq \Sigma_f$, then \eqref{eq:17} implies that $H^3(C_{\Sigma, S}(T)) = 0$, so we have $C_{\Sigma, S}(T) \in D^{[0, 2]}(\mathcal{R})$.
If $S \neq \emptyset$, then Proposition \ref{prop:44}(1) and $C_{\Sigma, \Sigma_f \setminus S}(T^{\sharp}(1)) \in D^{[0, 2]}(\mathcal{R})$ imply that $C_{\Sigma, S}(T) \in D^{[1, 3]}(\mathcal{R})$.
Then the proposition follows.
\end{proof}
\subsection{Applications to $T = \mathbb{Z}_p(1)$}\label{subsec:60}
We specialize the facts in \S \ref{subsec:59} to $T = \mathbb{Z}_p(1)$.
For each finite prime $v$ of $E$, we put $K_v = K \otimes_E E_v$.
Then we define $\mathcal{R}$-modules $D_v(K)$ and $Z_v(K)$ by
\[
D_{v}(K) = H^1(K_{v}, \mathbb{Q}_p/\mathbb{Z}_p)^{\vee}
\]
and
\[
Z_{v}(K) = H^0(K_{v}, \mathbb{Q}_p/\mathbb{Z}_p)^{\vee} \simeq \mathbb{Z}_p[[\Gal(K/E)/G_{v}(K/E)]],
\]
where $G_{v}(K/E)$ denotes the decomposition group of $K/E$ at $v$.
For each finite set $S$ of finite primes of $E$, we also put
\[
D_{S}(K) = \bigoplus_{v \in S} D_{v}(K),
\qquad Z_{S}(K) = \bigoplus_{v \in S} Z_{v}(K).
\]
For a finite set $S$ of finite primes of $E$, let $X_{S}(K)$ denote the Iwasawa module defined as the Galois group over $K$ of the maximal abelian pro-$p$ extension of $K$ which is totally split outside $S$.
It is known that $X_S(K)$ is a finitely generated $\mathcal{R}$-module.
Also, we put $X(K) = X_{\emptyset}(K)$.
Note that, concerning the Iwasawa modules, we are considering the ``totally split'' condition rather than the ``unramified'' condition.
Since $K$ contains the cyclotomic $\mathbb{Z}_p$-extension of $E$, the residue degree of $K/E$ at any non-$p$-adic prime is divisible by $p^{\infty}$.
Moreover, if $E$ is a $p$-ordinary CM-field and $K$ contains $\tilde{E}$, then the residue degree at any $p$-adic prime is also divisible by $p^{\infty}$ by \cite[Lemma 3.1(ii)]{BCG+b}.
Therefore, the notation $X_S(K)$ is compatible with that in the introduction.
Now we review descriptions of the local and global cohomology groups (see, e.g., \cite[\S 2]{BCG+b}).
We put
\[
C_v^{\loc} = \derR\Gamma_{\Iw}(K_v, \mathbb{Z}_p(1))
\]
for each finite prime $v$ of $E$ and also put $C_S^{\loc} = \bigoplus_{v \in S} C_v^{\loc}$.
Then by the local duality, the local cohomology groups are described as
\begin{equation}\label{eq:42}
H^i(C_v^{\loc}) \simeq
\begin{cases}
D_{v}(K) & (i=1)\\
Z_{v}(K) & (i = 2)\\
0 & (i \neq 1, 2).
\end{cases}
\end{equation}
For each subset $S \subset \Sigma_f$, we put
\[
C_{\Sigma, S} = C_{\Sigma, S}(\mathbb{Z}_p(1)).
\]
Then by the Poitou-Tate duality, we obtain the following description.
For $S = \Sigma_f$, we have
\begin{equation}\label{eq:75}
H^i(C_{\Sigma, \Sigma_f}) \simeq
\begin{cases}
X_{\Sigma_f}(K) & (i=2)\\
\mathbb{Z}_p & (i = 3)\\
0 & (i \neq 2, 3).
\end{cases}
\end{equation}
If $S \subsetneqq \Sigma_f$, then we have $H^i(C_{\Sigma, S}) = 0$ for $i \neq 1, 2$ and also exact sequences
\begin{equation}\label{eq:76}
0 \to H^1(C_{\Sigma, S}) \to H^1_{\Iw}(K_{\Sigma}/K, \mathbb{Z}_p(1))
\to \bigoplus_{v \in S} H^1_{\Iw}(K_{v}, \mathbb{Z}_p(1))
\end{equation}
and
\begin{equation}\label{eq:77}
0 \to X_{S}(K) \to H^2(C_{\Sigma, S}) \to Z_{\Sigma_f \setminus S}^0(K) \to 0.
\end{equation}
Here, when $S$ is a nonempty finite set of finite primes of $E$, we put
\[
Z_S^0(K) = \Ker(Z_S(K) \to \mathbb{Z}_p),
\]
the augmentation kernel.
In particular, \eqref{eq:77} for $S = \emptyset$ implies
\[
0 \to X(K) \to H^2_{\Iw}(K_{\Sigma}/K, \mathbb{Z}_p(1)) \to Z_{\Sigma_f}^0(K) \to 0
\]
as mentioned just after Theorem \ref{thm:102}.
Finally we record the results of Proposition \ref{prop:44}:
because of $\mathbb{Z}_p(1)^{\sharp}(1) \simeq \mathbb{Z}_p \simeq (\mathbb{Z}_p(1))(-1)$,
we have isomorphisms
\begin{equation}\label{eq:93}
\derR\Hom_{\mathcal{R}}(C_{\Sigma, S}, \mathcal{R})^{\iota}(1)[-3] \simeq C_{\Sigma, \Sigma_f \setminus S}
\end{equation}
and
\begin{equation}\label{eq:94}
\derR\Hom_{\mathcal{R}}(C_v^{\loc}, \mathcal{R})^{\iota}(1)[-2] \simeq C_v^{\loc}.
\end{equation}
\section{Algebraic $p$-adic $L$-functions associated to CM-fields}\label{sec:23}
In the rest of this paper, we consider the same situation as \S \ref{subsec:24}:
$E$ is a CM-field that satisfies the $p$-ordinary condition and $K$ is an abelian extension of $E$ which is a finite extension of $\tilde{E}(\mu_p)$.
We put $2d = [E: \mathbb{Q}]$.
We also assume the following.
\begin{ass}\label{ass:41}
For each CM-type $\mathcal{S}$ of $E$, the module $X_{\mathcal{S}}(K)$ is torsion over $\mathcal{R}$.
\end{ass}
\begin{rem}\label{rem:92}
Assumption \ref{ass:41} is a fundamental property in the study of Iwasawa theory for CM-fields.
In fact, it is claimed to be true in general by \cite[Theorem 1.2.2]{HT94}.
However, the proof does not seem complete because the auxiliary algebraic lemma \cite[Lemma 1.2.4]{HT94}, which is stated without proof, has counter-examples when the closed subgroup $H$ has torsion.
On the other hand, Assumption \ref{ass:41} holds when either $K$ is a $\mathbb{Z}_p^r$-extension of a CM-field or $E$ is an imaginary quadratic field, for the following reasons.
When $K$ is a $\mathbb{Z}_p^r$-extension of a CM-field, the proof of \cite[Theorem 1.2.2]{HT94} is valid because we only have to apply \cite[Lemma 1.2.4]{HT94} for $H$ without torsion.
When $E$ is an imaginary quadratic field, the proof is easier since the $\mathfrak{p}$-adic Leopoldt conjecture is known to be true for finite abelian extensions of $E$.
\end{rem}
Let $\Sigma$ be a set satisfying \eqref{eq:60}.
The Euler characteristics of the arithmetic complexes defined in \S \ref{subsec:60} are computed as follows.
For each prime $\mathfrak{p} \in S_p(E)$, we put $\deg(\mathfrak{p}) = [E_{\mathfrak{p}}:\mathbb{Q}_p]$.
\begin{lem}\label{lem:42}
For a subset $S \subset \Sigma_f$,
we have
\[
\chi_{\mathcal{R}}(C_{\Sigma, S}) = d - \sum_{\mathfrak{p} \in S \cap S_p(E)} \deg(\mathfrak{p}).
\]
\end{lem}
\begin{proof}
By the global and local Euler characteristic formulas (see, e.g., \cite[(7.3.1), (8.7.4)]{NSW08}), we obtain the following.
\begin{itemize}
\item We have $\chi_{\mathcal{R}}(\derR\Gamma_{\Iw}(K_{\Sigma}/K, \mathbb{Z}_p(1))) = d$.
\item For each $\mathfrak{p} \in S_p(E)$, we have $\chi_{\mathcal{R}}(\derR\Gamma_{\Iw}(K_{\mathfrak{p}}, \mathbb{Z}_p(1))) = \deg(\mathfrak{p})$.
\item For each finite prime $v$ of $E$ not lying above $p$, we have $\chi_{\mathcal{R}}(\derR\Gamma_{\Iw}(K_{v}, \mathbb{Z}_p(1))) = 0$.
\end{itemize}
Then the lemma follows from the definition of $C_{\Sigma, S}$.
\end{proof}
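For instance, if $S \cap S_p(E)$ is a CM-type $\mathcal{S}$, then $\mathcal{S} \sqcup \ol{\mathcal{S}} = S_p(E)$ and $\deg(\mathfrak{p}) = \deg(\ol{\mathfrak{p}})$, so
\[
\sum_{\mathfrak{p} \in \mathcal{S}} \deg(\mathfrak{p}) = \frac{1}{2} \sum_{\mathfrak{p} \in S_p(E)} \deg(\mathfrak{p}) = \frac{1}{2}[E: \mathbb{Q}] = d,
\qquad \text{hence} \qquad \chi_{\mathcal{R}}(C_{\Sigma, S}) = d - d = 0.
\]
This vanishing will be used in the proof of Lemma \ref{lem:37}.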
\begin{lem}\label{lem:37}
Let $S \subset \Sigma_f$ be a subset such that $S \cap S_p(E)$ is a CM-type for $E$.
Then we have $C_{\Sigma, S} \in D^{[1, 2]}(\mathcal{R})$,
$H^1(C_{\Sigma, S}) = 0$, and $H^2(C_{\Sigma, S})$ is torsion.
\end{lem}
\begin{proof}
By Proposition \ref{prop:18}, we have $C_{\Sigma, S} \in D^{[1, 2]}(\mathcal{R})$.
Since $X_S(K)$ is torsion by Assumption \ref{ass:41}, the exact sequence \eqref{eq:77} implies that $H^2(C_{\Sigma, S})$ is torsion.
Then we also deduce $H^1(C_{\Sigma, S}) = 0$ from $C_{\Sigma, S} \in D^{[1, 2]}(\mathcal{R})$ and $\chi_{\mathcal{R}}(C_{\Sigma, S}) = 0$ by Lemma \ref{lem:42}.
This completes the proof.
\end{proof}
In general, for a complex $C \in \De^{\perf}(\mathcal{R})$ whose cohomology groups are all torsion, we have a trivialization map
\[
\iota_C: \Det_{\mathcal{R}}^{-1}(C) \hookrightarrow \Det_{Q(\mathcal{R})}^{-1}(Q(\mathcal{R}) \otimes^{\mathbb{L}}_{\mathcal{R}} C) \simeq Q(\mathcal{R}),
\]
where the isomorphism follows from the assumption that $Q(\mathcal{R}) \otimes^{\mathbb{L}}_{\mathcal{R}} C$ is acyclic.
We put
\[
\mathrm{d}_{\mathcal{R}}(C) = \iota_C(\Det_{\mathcal{R}}^{-1}(C)),
\]
which is an invertible fractional ideal of $\mathcal{R}$.
If moreover $C \in D^{[1, 2]}(\mathcal{R})$ and $H^1(C) = 0$, then we have
\[
\mathrm{d}_{\mathcal{R}}(C) = \Fitt_{\mathcal{R}}(H^2(C)) \subset \mathcal{R}.
\]
This is because $C$ can be regarded as a finite presentation of $H^2(C)$ (we omit the details; see \cite[\S 3]{Kata_10}).
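To illustrate this in the simplest case: suppose that $C$ is represented by a two-term complex $[\mathcal{R}^a \overset{M}{\to} \mathcal{R}^a]$ in degrees $1$ and $2$ (the two ranks agree because $\chi_{\mathcal{R}}(C) = 0$), and that the square matrix $M$ is injective, i.e., $H^1(C) = 0$. Then $H^2(C) = \Coker(M)$, the trivialization map identifies $\Det_{\mathcal{R}}^{-1}(C)$ with $\det(M) \mathcal{R} \subset Q(\mathcal{R})$, and therefore
\[
\mathrm{d}_{\mathcal{R}}(C) = \det(M) \mathcal{R} = \Fitt_{\mathcal{R}}(\Coker(M)) = \Fitt_{\mathcal{R}}(H^2(C)).
\]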
Therefore, we can define the algebraic $p$-adic $L$-function as follows.
\begin{defn}\label{defn:38}
Let $S \subset \Sigma_f$ be a subset such that $S \cap S_p(E)$ is a CM-type for $E$.
Then we define the algebraic $p$-adic $L$-function $\mathcal{L}^{\alg}_{\Sigma, S} \in \mathcal{R}$ (defined up to a unit) as a generator of $\mathrm{d}_{\mathcal{R}}(C_{\Sigma, S}) = \Fitt_{\mathcal{R}}(H^2(C_{\Sigma, S}))$.
\end{defn}
By Lemma \ref{lem:37} and \eqref{eq:77}, the element $\mathcal{L}^{\alg}_{\Sigma, S} \in \mathcal{R}$ carries information about $X_{S}(K)$ (up to the easy factor $Z_{\Sigma_f \setminus S}^0(K)$).
It is then natural to formulate a main conjecture as a relation between $\mathcal{L}^{\alg}_{\Sigma, S}$ and a certain analytically defined $p$-adic $L$-function.
In fact, in the article \cite{Kata_10} of the author, such a kind of main conjecture is obtained when $E$ is an imaginary quadratic field.
However, in this paper we do not study the analytic aspects and focus on the algebraic side only.
\section{The first main theorem}\label{sec:33}
\subsection{A choice of CM-types}\label{subsec:choice}
In our two main theorems, we consider the following situation (motivated by the work \cite{BCG+b}).
We keep the notations in \S \ref{sec:23}.
Let $\mathcal{V}$ be a set satisfying $S_p(E) \subset \mathcal{V} \subset \Sigma_f$.
Let $\mathcal{S}_1, \dots, \mathcal{S}_n$ be distinct CM-types for $E$ with $n \geq 2$.
Put
\[
\mathcal{U} = \bigcup_{i=1}^n \mathcal{S}_i,
\qquad \mathcal{T}_i = \mathcal{U} \setminus \mathcal{S}_i
\]
and put
\[
l = \sum_{\mathfrak{p} \in \mathcal{T}_i} \deg(\mathfrak{p}) = \sum_{\mathfrak{p} \in \mathcal{U}} \deg(\mathfrak{p}) - d.
\]
For a subset $S$ of $S_p(E)$, we put $S^c = S_p(E) \setminus S$.
Note that a CM-type $\mathcal{S}$ satisfies $\mathcal{S}^c = \ol{\mathcal{S}}$.
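For example, in the setting of Theorem \ref{thm:102}, which will be deduced from Theorem \ref{thm:103} by taking $\mathcal{V} = \Sigma_f$, $n = 2$, $\mathcal{S}_1 = \mathcal{S}$, and $\mathcal{S}_2 = \ol{\mathcal{S}}$, we have
\[
\mathcal{U} = \mathcal{S} \cup \ol{\mathcal{S}} = S_p(E),
\qquad \mathcal{T}_1 = \ol{\mathcal{S}}, \quad \mathcal{T}_2 = \mathcal{S},
\qquad l = \sum_{\mathfrak{p} \in S_p(E)} \deg(\mathfrak{p}) - d = 2d - d = d.
\]
Note that $l$ is independent of $i$ in general because $\sum_{\mathfrak{p} \in \mathcal{S}_i} \deg(\mathfrak{p}) = d$ for every CM-type $\mathcal{S}_i$.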
The following is a motivation for introducing the integer $l$.
\begin{lem}\label{lem:rank}
The following are true.
\begin{itemize}
\item[(1)]
For each $1 \leq i \leq n$,
the module $D_{\mathcal{T}_i}(K)$ is of rank $l$ over $\mathcal{R}$.
\item[(2)]
The module $X_{\mathcal{V} \setminus \mathcal{U}^c}(K)$ is of rank $l$ over $\mathcal{R}$.
\end{itemize}
\end{lem}
\begin{proof}
(1)
For each $\mathfrak{p} \in \mathcal{T}_i$, we have the complex $C_{\mathfrak{p}}^{\loc}$ satisfying \eqref{eq:42}.
Since $Z_{\mathfrak{p}}(K)$ is torsion (see also Lemma \ref{lem:a13} below) and $\chi_{\mathcal{R}}(C_{\mathfrak{p}}^{\loc}) = \deg(\mathfrak{p})$ as in the proof of Lemma \ref{lem:42}, the rank of $D_{\mathfrak{p}}(K)$ is $\deg(\mathfrak{p})$.
Then the claim follows.
(2)
We have the complex $C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}$ whose second cohomology has the same rank as $X_{\mathcal{V} \setminus \mathcal{U}^c}(K)$ by \eqref{eq:75} and \eqref{eq:77}.
By Lemma \ref{lem:42}, we have $\chi_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) = d - \sum_{\mathfrak{p} \in \mathcal{U}} \deg(\mathfrak{p}) = - l$.
We also have $H^1(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) = 0$ since, for any $i$, we have $H^1(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \subset H^1(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}) = 0$ by Lemma \ref{lem:37}.
Then the claim follows.
\end{proof}
We also record a lemma that will be used later.
\begin{lem}\label{lem:a13}
For each $\mathfrak{p} \in S_p(E)$, the module $Z_{\mathfrak{p}}(K)$ is pseudo-null over $\mathcal{R}$.
\end{lem}
\begin{proof}
By \cite[Lemma 3.1(i)]{BCG+b}, the $\mathbb{Z}_p$-rank $r_{\mathfrak{p}}$ of $G_{\mathfrak{p}}(\tilde{E}/E)$ satisfies $r_{\mathfrak{p}} = 1 + \deg(\mathfrak{p})$.
In particular, we have $r_{\mathfrak{p}} \geq 2$, which implies the lemma.
\end{proof}
\subsection{The statement and the proof}\label{subsec:131}
Let us take an auxiliary intermediate number field $E'$ of $K/E$ such that
$E'/E$ is a $p$-extension and $\Gal(K/E')$ is $p$-torsion-free.
Put $\Lambda = \mathbb{Z}_p[[\Gal(K/E')]]$, which is a subring of $\mathcal{R}$.
For example, if $p \nmid [K: \tilde{E}]$, then we may take $E' = E$ and then $\Lambda = \mathcal{R}$.
Note that $\Lambda$ is a finite product of regular local rings.
For a prime ideal $\mathfrak{q}$ of $\Lambda$, the localization $\Lambda_{\mathfrak{q}}$ is then a regular local ring, but $\mathcal{R}_{\mathfrak{q}}$ is not even a domain in general.
The following is the first main theorem of this paper.
\begin{thm}\label{thm:103}
Let $\mathfrak{q}$ be a prime ideal of $\Lambda$ such that
\[
\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) \leq 2,
\qquad Z_{\mathfrak{p}}(K)^{\iota}(1)_{\mathfrak{q}} = 0
\]
for every $\mathfrak{p} \in \mathcal{T}_i$ ($1 \leq i \leq n$).
Then we have an exact sequence
\[
0 \to \left(\frac{\left(\bigwedge_{\mathcal{R}}^l H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \right)_{/\tor}}
{\sum_{i=1}^n \bigwedge_{\mathcal{R}}^l D_{\mathcal{T}_i}(K)}\right)_{\mathfrak{q}}
\to \frac{\mathcal{R}_{\mathfrak{q}}}{\sum_{i=1}^n (\mathcal{L}^{\alg}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})}
\to \frac{\mathcal{R}_{\mathfrak{q}}}{\Fitt_{\mathcal{R}_{\mathfrak{q}}}(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}})}
\to 0.
\]
\end{thm}
Recall that the descriptions of $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})$ and of $H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})$ are given in \S \ref{subsec:60}.
We immediately deduce Theorem \ref{thm:102} from Theorem \ref{thm:103} by putting $\mathcal{V} = \Sigma_f$, $n = 2$, $\mathcal{S}_1 = \mathcal{S}$, and $\mathcal{S}_2 = \ol{\mathcal{S}}$.
Concerning the condition $\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) \leq 2$ in Theorem \ref{thm:103}, we observe the following.
\begin{prop}\label{prop:80'}
Let $\mathfrak{p} \in S_p(E)$ and let $\mathfrak{q}$ be a prime ideal of $\Lambda$.
We have $\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) = - \infty$ unless $\mathfrak{q} \supset \Ann_{\Lambda}(Z_{\mathfrak{p}}(K))$.
If $\mathfrak{q} \supset \Ann_{\Lambda}(Z_{\mathfrak{p}}(K))$ holds, then we have
\[
\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}})
= \begin{cases}
\deg(\mathfrak{p}) + 1 & (\text{if $p \not \in \mathfrak{q}$ or $G_{\mathfrak{p}}(K/E)$ is $p$-torsion-free})\\
+ \infty & (\text{if $p \in \mathfrak{q}$ and $G_{\mathfrak{p}}(K/E)$ is not $p$-torsion-free}).
\end{cases}
\]
Moreover, $G_{\mathfrak{p}}(K/E)$ is $p$-torsion-free unless $E_{\mathfrak{p}}$ contains a primitive $p$-th root of unity.
\end{prop}
\begin{proof}
We write $\mathcal{G} = \Gal(K/E)$, $\mathcal{G}' = \Gal(K/E')$, $\mathcal{G}_0 = G_{\mathfrak{p}}(K/E)$, and $\mathcal{G}_0' = \mathcal{G}' \cap \mathcal{G}_0$.
Then we have $\mathcal{R} = \mathbb{Z}_p[[\mathcal{G}]]$, $\Lambda = \mathbb{Z}_p[[\mathcal{G}']]$, and $Z_{\mathfrak{p}}(K) = \mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]$.
We also put $\mathcal{R}_0 = \mathbb{Z}_p[[\mathcal{G}_0]]$ and $\Lambda_0 = \mathbb{Z}_p[[\mathcal{G}_0']]$.
As we used in the proof of Lemma \ref{lem:a13}, by \cite[Lemma 3.1(i)]{BCG+b}, the $\mathbb{Z}_p$-rank $r_{\mathfrak{p}}$ of the $p$-component of $\mathcal{G}_0$ is $r_{\mathfrak{p}} = \deg(\mathfrak{p}) + 1$.
If $\mathfrak{q} \not \supset \Ann_{\Lambda}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]])$, then $\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]_{\mathfrak{q}} = 0$, so the claim is clear.
Let us assume $\mathfrak{q} \supset \Ann_{\Lambda}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]])$.
First we show the inequality $\leq$ of the displayed assertion.
Note that $\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]] \simeq \mathbb{Z}_p \otimes_{\mathcal{R}_0} \mathcal{R}$ implies $\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]_{\mathfrak{q}} \simeq (\mathbb{Z}_p)_{\mathfrak{q}_0} \otimes_{(\mathcal{R}_0)_{\mathfrak{q}_0}} \mathcal{R}_{\mathfrak{q}}$, where we put $\mathfrak{q}_0 = \mathfrak{q} \cap \Lambda_0$.
Therefore, we have
\[
\pd_{\mathcal{R}_{\mathfrak{q}}}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]_{\mathfrak{q}}) \leq \pd_{(\mathcal{R}_0)_{\mathfrak{q}_0}}((\mathbb{Z}_p)_{\mathfrak{q}_0}).
\]
Using the well-known description of $\mathcal{R}_0$ as a ring of power series,
we see that $\pd_{\mathcal{R}_0}(\mathbb{Z}_p) = r_{\mathfrak{p}}$ if $\mathcal{G}_0$ is $p$-torsion-free.
We also have $\pd_{\mathcal{R}_0[1/p]}(\mathbb{Q}_p) = r_{\mathfrak{p}}$, which implies $\pd_{(\mathcal{R}_0)_{\mathfrak{q}_0}}((\mathbb{Z}_p)_{\mathfrak{q}_0}) \leq r_{\mathfrak{p}}$ if $p \not \in \mathfrak{q}$.
Therefore, we have the inequality $\leq$ of the displayed assertion.
Next we show the opposite inequality $\geq$.
For each $i \geq 0$, we have
\[
\Ext^i_{\mathcal{R}}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]], \mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]])
\simeq \Ext^i_{\mathcal{R}_0}(\mathbb{Z}_p, \mathbb{Z}_p) \otimes_{\mathcal{R}_0} \mathcal{R},
\]
so
\[
\Ext^i_{\mathcal{R}_{\mathfrak{q}}}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]_{\mathfrak{q}}, \mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]]_{\mathfrak{q}})
\simeq \Ext^i_{\mathcal{R}_0}(\mathbb{Z}_p, \mathbb{Z}_p)_{\mathfrak{q}_0} \otimes_{(\mathcal{R}_0)_{\mathfrak{q}_0}} \mathcal{R}_{\mathfrak{q}}.
\]
If $p \not \in \mathfrak{q}$ or $\mathcal{G}_0$ is $p$-torsion-free, then the description of $\mathcal{R}_0$ as a ring of power series again implies $\Ext^{r_{\mathfrak{p}}}_{\mathcal{R}_0}(\mathbb{Z}_p, \mathbb{Z}_p)_{\mathfrak{q}_0} \simeq (\mathbb{Z}_p)_{\mathfrak{q}_0}$.
This does not vanish because the assumption $\mathfrak{q} \supset \Ann_{\Lambda}(\mathbb{Z}_p[[\mathcal{G}/\mathcal{G}_0]])$ implies $\mathfrak{q}_0 \supset \Ann_{\Lambda_0}(\mathbb{Z}_p)$.
If $\mathcal{G}_0$ is not $p$-torsion-free,
then we can show that $\Ext^i_{\mathcal{R}_0}(\mathbb{Z}_p, \mathbb{Z}_p)$ is a non-zero finite module for arbitrarily large $i \geq 0$, which implies $\Ext^i_{\mathcal{R}_0}(\mathbb{Z}_p, \mathbb{Z}_p)_{\mathfrak{q}_0} \neq 0$ if moreover $p \in \mathfrak{q}_0$.
Therefore, we obtain the claimed inequality.
We show the final assertion.
By class field theory, the decomposition group $G_{\mathfrak{p}}(K/E)$ is isomorphic to a quotient of the profinite completion $\widehat{E_{\mathfrak{p}}^{\times}}$ of $E_{\mathfrak{p}}^{\times}$.
If $E_{\mathfrak{p}}$ does not contain a primitive $p$-th root of unity, then the pro-$p$-component of $\widehat{E_{\mathfrak{p}}^{\times}}$ is a free $\mathbb{Z}_p$-module of rank $\deg(\mathfrak{p}) + 1$.
Combining this with the fact $r_{\mathfrak{p}} = \deg(\mathfrak{p}) + 1$, we conclude that the pro-$p$-component of $G_{\mathfrak{p}}(K/E)$ is a free $\mathbb{Z}_p$-module of rank $\deg(\mathfrak{p}) + 1$.
This completes the proof.
\end{proof}
\begin{cor}\label{cor:a55}
We have $\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) \leq 2$ if and only if
either $\mathfrak{q} \not \supset \Ann_{\Lambda}(Z_{\mathfrak{p}}(K))$ or $\deg(\mathfrak{p}) = 1$.
\end{cor}
\begin{proof}
If $\deg(\mathfrak{p}) = 1$, then $E_{\mathfrak{p}} \simeq \mathbb{Q}_p$ does not contain a primitive $p$-th root of unity (since $p \geq 3$).
Therefore, this corollary follows from Proposition \ref{prop:80'}.
\end{proof}
The proof of Theorem \ref{thm:103} occupies the rest of this subsection.
The following is the key diagram (see \eqref{eq:56}).
\begin{prop}\label{prop:46}
We have a commutative diagram with exact rows
\[
\xymatrix{
0 \ar[r]
& \bigoplus_{i=1}^n \left(\bigwedge_{\mathcal{R}}^l H^1(C^{\loc}_{\mathcal{T}_i})\right)_{/\tor} \ar[r]^-{\oplus \Psi} \ar[d]_{f_1}
& \bigoplus_{i=1}^n \Det^{-1}_{\mathcal{R}}(C^{\loc}_{\mathcal{T}_i}) \ar[d]_{f_2} \ar[r]
& \bigoplus_{i=1}^n \Coker (\Psi_{C^{\loc}_{\mathcal{T}_i}}) \ar[r] \ar[d]_{f_3}
& 0\\
0 \ar[r]
& \left(\bigwedge_{\mathcal{R}}^l H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \right)_{/\tor} \ar[r]_-{\Psi}
& \Det_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \ar[r]
& \Coker (\Psi_{C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]}) \ar[r]
& 0.
}
\]
\end{prop}
\begin{proof}
First, in order to construct the lower sequence, we show that Proposition \ref{prop:30} is applicable to $C = C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]$.
If $\mathcal{U}^c = \emptyset$ and $\mathcal{V} = \Sigma_f$, then we know by \eqref{eq:75} that $H^i(C_{\Sigma, \Sigma_f})$ is pseudo-null unless $i = 2$ (in fact it vanishes except for $H^3 \simeq \mathbb{Z}_p$).
If either $\mathcal{U}^c \neq \emptyset$ or $\mathcal{V} \subsetneqq \Sigma_f$, then by \eqref{eq:76} and Lemma \ref{lem:37}, we have $H^i(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) = 0$ unless $i = 2$.
Thus the assumptions of Proposition \ref{prop:30} hold for $C = C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]$.
Moreover, $l = \chi_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1])$ holds by Lemma \ref{lem:42}.
Therefore, we have the homomorphism
\[
\Psi_{C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]}:
\bigwedge_{\mathcal{R}}^l H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})
\to \Det_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}).
\]
Note that Proposition \ref{prop:31} is also applicable unless $\mathcal{U}^c = \emptyset$ and $\mathcal{V} = \Sigma_f$.
Next we construct the upper sequence by applying Proposition \ref{prop:30} to $C = C^{\loc}_{\mathcal{T}_i}$.
We know $H^i(C^{\loc}_{\mathcal{T}_i})$ is pseudo-null unless $i = 1$ by the description \eqref{eq:42} and Lemma \ref{lem:a13}.
We also have $l = \chi_{\mathcal{R}}(C^{\loc}_{\mathcal{T}_i})$ by the formula in the proof of Lemma \ref{lem:42}.
Therefore, we have the homomorphism
\[
\Psi_{C^{\loc}_{\mathcal{T}_i}}:
\bigwedge_{\mathcal{R}}^l H^1(C^{\loc}_{\mathcal{T}_i}) \to \Det^{-1}_{\mathcal{R}}(C^{\loc}_{\mathcal{T}_i}).
\]
Finally we construct the vertical arrows.
Since $\mathcal{U}^c \cup \mathcal{T}_i = \mathcal{S}_i^c$, we have a triangle
\begin{equation}\label{eq:137}
C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c} \to C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c} \to C^{\loc}_{\mathcal{T}_i}
\end{equation}
by Definition \ref{defn:32}.
Then the arrow $f_1$ is induced by the connecting homomorphism between $H^1$ and $H^2$.
The arrow $f_2$ is defined by
\[
\Det^{-1}_{\mathcal{R}}(C^{\loc}_{\mathcal{T}_i})
\simeq \Det_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})
\otimes_{\mathcal{R}} \Det^{-1}_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})
\hookrightarrow \Det_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}),
\]
where the last arrow is induced by the trivialization map
\[
\iota_{C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}}: \Det^{-1}_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}) \hookrightarrow \mathcal{R}
\]
introduced just before Definition \ref{defn:38}.
The commutativity of the diagram is clear, and we define $f_3$ as the induced map.
\end{proof}
We study the diagram in Proposition \ref{prop:46}.
\begin{prop}\label{prop:47}
We have an isomorphism
\[
\Coker(f_2) \simeq \frac{\mathcal{R}}{\sum_{i=1}^n (\mathcal{L}^{\alg}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})}.
\]
\end{prop}
\begin{proof}
By the construction of $f_2$ we immediately obtain
\[
\Coker(\Det^{-1}_{\mathcal{R}}(C^{\loc}_{\mathcal{T}_i}) \hookrightarrow \Det_{\mathcal{R}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}))
\simeq \mathcal{R} / (\mathcal{L}^{\alg}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}).
\]
Then the proposition follows.
\end{proof}
\begin{prop}\label{prop:45}
Let $\mathfrak{q}$ be a prime ideal of $\Lambda$.
\begin{itemize}
\item[(1)]
In case $\mathcal{U}^c = \emptyset$ and $\mathcal{V} = \Sigma_f$, we suppose that $(\mathbb{Z}_p)_{\mathfrak{q}} = 0$ (i.e., $\mathfrak{q}$ does not contain the augmentation ideal of $\Lambda$).
Then we have
\[
\Coker (\Psi_{C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]})_{\mathfrak{q}}
\simeq \mathcal{R}_{\mathfrak{q}} / \Fitt_{\mathcal{R}_{\mathfrak{q}}}(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}}).
\]
\item[(2)]
Suppose that $Z_{\mathfrak{p}}(K)_{\mathfrak{q}} = 0$ for each $\mathfrak{p} \in \mathcal{T}_i$.
Then we have
\[
\Coker (\Psi_{C^{\loc}_{\mathcal{T}_i}})_{\mathfrak{q}} \simeq \mathcal{R}_{\mathfrak{q}} / \Fitt_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathcal{T}_i}(K)^{\iota}(1)_{\mathfrak{q}}).
\]
\item[(3)]
Suppose that $\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) \leq 2$ and that $Z_{\mathfrak{p}}(K)^{\iota}(1)_{\mathfrak{q}} = 0$ for each $\mathfrak{p} \in \mathcal{T}_i$.
Then $(\Psi_{C^{\loc}_{\mathcal{T}_i}})_{\mathfrak{q}}$ is an isomorphism.
\end{itemize}
\end{prop}
\begin{proof}
(1)
By the assumption, we have $H^1(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) = 0$ and $H^3(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})_{\mathfrak{q}} = 0$ (when $\mathcal{U}^c \neq \emptyset$ or $\mathcal{V} \subsetneqq \Sigma_f$, we do not have to take the localization).
Hence we can apply Proposition \ref{prop:30}(2) to obtain
\[
\Coker (\Psi_{C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}[1]})_{\mathfrak{q}}
\simeq \mathcal{R}_{\mathfrak{q}} / \Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^1(H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}))_{\mathfrak{q}}).
\]
By \eqref{eq:93}, we also have
\[
E^1(H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}))_{\mathfrak{q}}
\simeq H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}}.
\]
(2)
Since $H^0(C^{\loc}_{\mathcal{T}_i}) = 0$ and $H^2(C^{\loc}_{\mathcal{T}_i})_{\mathfrak{q}} = 0$ by assumption, we can apply Proposition \ref{prop:30}(2) to obtain
\[
\Coker (\Psi_{C^{\loc}_{\mathcal{T}_i}})_{\mathfrak{q}}
\simeq \mathcal{R}_{\mathfrak{q}} / \Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^1(H^1(C^{\loc}_{\mathcal{T}_i}))_{\mathfrak{q}}).
\]
By \eqref{eq:94}, we also have
\[
E^1(H^1(C^{\loc}_{\mathcal{T}_i}))_{\mathfrak{q}}
\simeq H^2(C_{\mathcal{T}_i}^{\loc})^{\iota}(1)_{\mathfrak{q}}
\simeq Z_{\mathcal{T}_i}(K)^{\iota}(1)_{\mathfrak{q}}.
\]
(3) Since $H^2((C_{\mathcal{T}_i}^{\loc})^{\iota}(1))_{\mathfrak{q}} = 0$ by assumption, we have $(C_{\mathcal{T}_i}^{\loc})^{\iota}(1)_{\mathfrak{q}} \in D^{[0, 1]}(\mathcal{R}_{\mathfrak{q}})$.
By the duality \eqref{eq:94}, it follows that $(C_{\mathcal{T}_i}^{\loc})_{\mathfrak{q}} \in D^{[1, 2]}(\mathcal{R}_{\mathfrak{q}})$.
We also have $\pd_{\mathcal{R}_{\mathfrak{q}}}(H^2(C_{\mathcal{T}_i}^{\loc})_{\mathfrak{q}}) \leq 2$ by assumption.
Therefore, we may apply Proposition \ref{prop:30}(3).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:103}]
The snake lemma applied to the diagram in Proposition \ref{prop:46}, together with Proposition \ref{prop:47}, gives us an exact sequence
\begin{align}\label{eq:5.9}
0 \to \Coker \left(\Ker(f_2) \to \Ker(f_3) \right)
& \to \frac{\left(\bigwedge_{\mathcal{R}}^l H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \right)_{/\tor}}
{\sum_{i=1}^n \bigwedge_{\mathcal{R}}^l D_{\mathcal{T}_i}(K)}\\
& \qquad \to \frac{\mathcal{R}}{\sum_{i=1}^n (\mathcal{L}^{\alg}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})}
\to \Coker(f_3)
\to 0.
\end{align}
By Proposition \ref{prop:45}(3), the domain of $f_3$ vanishes after localization at $\mathfrak{q}$.
Moreover, we know the description of the target module of $f_3$ after localization at $\mathfrak{q}$ by Proposition \ref{prop:45}(1).
Therefore, Theorem \ref{thm:103} follows from \eqref{eq:5.9} and localization at $\mathfrak{q}$.
\end{proof}
Note that the exact sequence \eqref{eq:5.9} can be viewed as a more general version of Theorem \ref{thm:103} that incorporates the idea of \cite[Theorem 5.9]{BCG+b}.
However, the descriptions of $\Ker(f_2)$, $\Ker(f_3)$, and $\Coker(f_3)$ seem to be complicated, so we stated only Theorem \ref{thm:103}.
\subsection{How to recover the previous result}\label{subsec:132}
In order to illustrate a relation with the work \cite{BCG+b}, we recall the notion of higher Chern classes, which is a key idea in \cite{BCG+}, \cite{BCG+b}.
For the sake of brevity, we only define Chern classes over (finite products of) regular local rings.
\begin{defn}\label{defn:136}
Let $A$ be a finite product of regular local rings and $m$ a positive integer.
We define $Z^m(A)$ as the free $\mathbb{Z}$-module on the set of height $m$ prime ideals of $A$.
For each $A$-module $M$ whose codimension is at least $m$, we define the $m$-th Chern class of $M$ by
\[
c_m(M) = \sum_{\mathfrak{q}} \length_{A_{\mathfrak{q}}}(M_{\mathfrak{q}}) [\mathfrak{q}] \in Z^m(A),
\]
where $\mathfrak{q}$ runs over the height $m$ primes of $A$.
\end{defn}
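As a point of orientation, when $m = 1$ and $M$ is a finitely generated torsion $A$-module, the first Chern class is nothing but the divisor of the characteristic ideal:
\[
c_1(M) = \sum_{\height(\mathfrak{q}) = 1} \length_{A_{\mathfrak{q}}}(M_{\mathfrak{q}})\,[\mathfrak{q}],
\qquad
\cha_A(M) = \prod_{\height(\mathfrak{q}) = 1} \mathfrak{q}^{\length_{A_{\mathfrak{q}}}(M_{\mathfrak{q}})}.
\]
In this sense, $c_2$ detects codimension two phenomena that characteristic ideals cannot see.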
We now recall a main theorem of \cite{BCG+b} and will deduce it from Theorem \ref{thm:103}.
Recall that when $p \nmid [K: \tilde{E}]$, the ring $\mathcal{R}$ is a product of regular local rings, so we have the notions of greatest common divisors and of characteristic ideals.
\begin{thm}[{\cite[Theorem 5.6]{BCG+b}}]\label{thm:81}
Suppose $p \nmid [K: \tilde{E}]$ and set $\Sigma_f = \mathcal{V} = S_p(E)$.
Let $\theta \in \mathcal{R}$ be a greatest common divisor of $\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i}$ for $1 \leq i \leq n$.
Let $\theta_0 \in \mathcal{R}$ be a generator of the characteristic ideal $\cha_{\mathcal{R}}(X_{\mathcal{U}}(K)_{\tor})$.
Then $\theta_0$ divides $\theta$ and we have
\begin{align}
& c_2 \left(\frac{\mathcal{R}}{\sum_{i=1}^n \theta^{-1} (\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i})}\right)
\equiv c_2
\left(\left(\frac{\left(\bigwedge_{\mathcal{R}}^l X_{\mathcal{U}}(K) \right)_{/\tor}}{\sum_{i=1}^n \bigwedge_{\mathcal{R}}^l D_{\mathcal{T}_i}(K)}\right)_{\PN}\right)
+ c_2 \left( \frac{\theta}{\theta_0} \frac{\mathcal{R}}{\Fitt_{\mathcal{R}}(E^2(X_{\mathcal{U}^c}(K))^{\iota}(1))}\right)
\end{align}
in $Z^2(\mathcal{R})$, where the congruence means the equality outside the support of $Z_{\mathfrak{p}}(K)$ for $\mathfrak{p} \in \mathcal{U}^c$ and that of $Z_{\mathfrak{p}}(K)^{\iota}(1)$ for $\mathfrak{p} \in \mathcal{U}$.
\end{thm}
Before the proof, we observe the following proposition.
\begin{prop}\label{prop:20}
\begin{itemize}
\item[(1)]
We have an exact sequence
\begin{align}
0 & \to E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}))^{\iota}(1)
\to H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})
\to H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{**}\\
& \to E^2(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}))^{\iota}(1)
\to W_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c},
\end{align}
where in general we put $W_S = \mathbb{Z}_p$ if $S = \emptyset$ and $W_S = 0$ otherwise.
\item[(2)]
For each $\mathfrak{p} \in S_p(E)$, we have an exact sequence
\[
0 \to E^1(H^2_{\Iw}(K_{\mathfrak{p}}, \mathbb{Z}_p))^{\iota}
\to D_{\mathfrak{p}}(K) \to D_{\mathfrak{p}}(K)^{**}
\to E^2(H^2_{\Iw}(K_{\mathfrak{p}}, \mathbb{Z}_p))^{\iota}
\to 0.
\]
\end{itemize}
\end{prop}
\begin{proof}
This proposition is a direct generalization of \cite[Propositions 2.7 and 2.11]{BCG+b} (or \cite[Corollary 4.1.6 and Theorem 4.1.14]{BCG+} in a special case).
The key ingredients are the duality theorems \eqref{eq:93} and \eqref{eq:94}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:81}]
We deduce Theorem \ref{thm:81} from Theorem \ref{thm:103}.
Let $\mathfrak{q}$ be a prime ideal of $\mathcal{R}$ with $\height(\mathfrak{q}) = 2$ outside the supports of the stated modules.
By Corollary \ref{cor:a55} and $r_{\mathfrak{p}} = \deg(\mathfrak{p}) + 1$, the assumption $\height(\mathfrak{q}) = 2$ implies $\pd_{\mathcal{R}_{\mathfrak{q}}}(Z_{\mathfrak{p}}(K)_{\mathfrak{q}}) \leq 2$ for any $\mathfrak{p} \in S_p(E)$.
Therefore, the condition on $\mathfrak{q}$ required in Theorem \ref{thm:103} holds, so we may apply Theorem \ref{thm:103}.
By the assumption on $\mathfrak{q}$, we have isomorphisms
\[
X_{\mathcal{U}^c}(K)^{\iota}(1)_{\mathfrak{q}} \simeq H^2(C_{\Sigma, \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}}
\]
by \eqref{eq:77}, and
\[
X_{\mathcal{U}}(K)_{\mathfrak{q}} \simeq H^2(C_{\Sigma, \mathcal{U}})_{\mathfrak{q}}
\]
by \eqref{eq:75} and \eqref{eq:77} (note that $\Sigma_f \setminus \mathcal{U}^c = \mathcal{U}$).
Combining these with Proposition \ref{prop:20}(1), we obtain
\[
E^1(X_{\mathcal{U}^c}(K))^{\iota}(1)_{\mathfrak{q}} \simeq (X_{\mathcal{U}}(K)_{\tor})_{\mathfrak{q}}.
\]
Therefore,
\[
\cha_{\mathcal{R}_{\mathfrak{q}}}(X_{\mathcal{U}^c}(K)^{\iota}(1)_{\mathfrak{q}})
= \cha_{\mathcal{R}_{\mathfrak{q}}}(E^1(X_{\mathcal{U}^c}(K))^{\iota}(1)_{\mathfrak{q}})
= \theta_0 \mathcal{R}_{\mathfrak{q}}.
\]
Then Proposition \ref{prop:91} implies that
\begin{align}\label{eq:133}
\Fitt_{\mathcal{R}_{\mathfrak{q}}}(H^2(C_{\Sigma, \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}})
&= \cha_{\mathcal{R}_{\mathfrak{q}}}(X_{\mathcal{U}^c}(K)^{\iota}(1)_{\mathfrak{q}})
\Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^2(X_{\mathcal{U}^c}(K))^{\iota}(1)_{\mathfrak{q}})\\
&= \theta_0 \Fitt_{\mathcal{R}_{\mathfrak{q}}}(E^2(X_{\mathcal{U}^c}(K))^{\iota}(1)_{\mathfrak{q}}).
\end{align}
It is easy to see that
\[
\left( \frac{\mathcal{R}}{\sum_{i=1}^n (\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i})} \right)_{\PN}
= \frac{\theta \mathcal{R}}{\sum_{i=1}^n (\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i})}
\simeq \frac{\mathcal{R}}{\sum_{i=1}^n (\theta^{-1}\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i})}.
\]
Hence Theorem \ref{thm:103} gives us an exact sequence
\[
0 \to \left(\frac{\left(\bigwedge_{\mathcal{R}}^l X_{\mathcal{U}}(K) \right)_{/\tor}}
{\sum_{i=1}^n \bigwedge_{\mathcal{R}}^l D_{\mathcal{T}_i}(K)}\right)_{\PN, \mathfrak{q}}
\to \frac{\mathcal{R}_{\mathfrak{q}}}{\sum_{i=1}^n \theta^{-1} (\mathcal{L}^{\alg}_{\Sigma, \mathcal{S}_i})}
\to \theta \frac{\mathcal{R}_{\mathfrak{q}}}{\Fitt_{\mathcal{R}_{\mathfrak{q}}}(H^2(C_{\Sigma, \mathcal{U}^c})^{\iota}(1)_{\mathfrak{q}})}
\to 0.
\]
By applying \eqref{eq:133} to the final module, we obtain the theorem.
\end{proof}
\begin{rem}\label{rem:K-gp}
As explained above, the main results of \cite{BCG+} and \cite{BCG+b} are formulated using the Chern classes of modules.
Let us now briefly discuss a generalization of the notion of Chern classes to equivariant settings.
For each positive integer $m$, let $\mathcal{C}_{\mathcal{R}}^m$ denote the category of finitely generated $\mathcal{R}$-modules whose projective dimensions are finite and whose codimensions are at least $m$.
Note that, if $\mathcal{R}$ is a finite product of regular local rings, then the finiteness of the projective dimension always holds.
Let us consider the Grothendieck group $K_0(\mathcal{C}_{\mathcal{R}}^m)$ of $\mathcal{C}_{\mathcal{R}}^m$ as an exact category.
Recall that $K_0(\mathcal{C}_{\mathcal{R}}^m)$ is defined by describing generators and relations: the generators are $[M]$ for objects $M$ and the relations are $[M] = [M'] + [M'']$ for exact sequences $0 \to M' \to M \to M'' \to 0$.
For $m = 1$, it is well-known that we have a natural isomorphism $K_0(\mathcal{C}_{\mathcal{R}}^1) \simeq Q(\mathcal{R})^{\times}/\mathcal{R}^{\times}$, and this group is often used to formulate equivariant main conjectures in Iwasawa theory.
For general $m \geq 1$, if $p \nmid [K: \tilde{E}]$, one can also check that the map $[M] \mapsto c_m(M)$ gives an isomorphism $K_0(\mathcal{C}_{\mathcal{R}}^m) \simeq Z^m(\mathcal{R})$ (we omit the proof here).
These observations lead us to the idea of generalizing the Chern class $c_m(M) \in Z^m(\mathcal{R})$ in the non-equivariant settings to the class $[M] \in K_0(\mathcal{C}_{\mathcal{R}}^m)$ in the equivariant settings.
However, a crucial obstruction is that, in general, the modules in the main results of this paper (e.g., Theorem \ref{thm:102}) are not necessarily of finite projective dimension, so we cannot deduce a formula in the Grothendieck group.
Note also that, even in the non-equivariant settings, the exact sequences carry more information than equalities of Chern classes.
Therefore, it is reasonable to formulate the main results in the form of exact sequences.
In \cite[\S 1.1]{BCG+}, the $m$-th Chern classes are also defined for complexes that are exact in codimension less than $m$.
This notion also seems to be amenable to equivariant situations, but it is unclear how to apply it to our arithmetic situation.
This is because we cannot expect that the arithmetic complexes concerned are exact in codimension less than $2$.
\end{rem}
\section{The second main theorem}\label{sec:11}
Our second main theorem focuses on the case where $l = 1$ in \S \ref{subsec:choice}.
Note that we have $l = 1$ if and only if $n = 2$ and $\mathcal{T}_1$ (and thus $\mathcal{T}_2$) consists of a unique prime of degree one.
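Concretely, when $d = 1$ this is the situation of Theorem \ref{thm:105}(1) below: writing $S_p(E) = \{\mathfrak{p}, \ol{\mathfrak{p}}\}$, the two CM-types are $\mathcal{S}_1 = \{\mathfrak{p}\}$ and $\mathcal{S}_2 = \{\ol{\mathfrak{p}}\}$, so
\[
\mathcal{T}_1 = \{\ol{\mathfrak{p}}\}, \qquad \mathcal{T}_2 = \{\mathfrak{p}\}, \qquad
l = \deg(\mathfrak{p}) + \deg(\ol{\mathfrak{p}}) - d = 2 - 1 = 1.
\]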
Let $\Gal(K/\widetilde{E})^{(p')}$ denote the maximal subgroup of $\Gal(K/\widetilde{E})$ whose order is prime to $p$.
Let $\overline{\mathbb{Q}_p}$ be a fixed algebraic closure of $\mathbb{Q}_p$.
For each character $\psi: \Gal(K/\widetilde{E})^{(p')} \to \overline{\mathbb{Q}_p}^{\times}$, let $\mathcal{R}^{\psi}$ denote the $\psi$-component of $\mathcal{R}$.
Then we have a decomposition
\[
\mathcal{R} = \prod_{\psi} \mathcal{R}^{\psi},
\]
where $\psi$ runs over equivalence classes of characters of $\Gal(K/\widetilde{E})^{(p')}$; two characters $\psi, \psi'$ are said to be equivalent if $\psi' = \psi^{\sigma}$ for some $\sigma \in \Gal(\ol{\mathbb{Q}_p}/\mathbb{Q}_p)$.
Note that $\mathcal{R}^{\psi}$ is a local ring but not a domain unless $\Gal(K/\widetilde{E})^{(p')} = \Gal(K/\widetilde{E})$, i.e., $p \nmid [K: \widetilde{E}]$.
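The partition of characters into equivalence classes can be made concrete for a cyclic group: when the group is $\mathbb{Z}/n$ with $p \nmid n$, the action of $\Gal(\overline{\mathbb{Q}_p}/\mathbb{Q}_p)$ on character values factors through the Frobenius $x \mapsto x^p$, so the equivalence classes correspond to orbits of multiplication by $p$ on $\mathbb{Z}/n$. A minimal illustrative sketch (the function name and the sample values $n = 5$, $p = 3$ are our own choices, not taken from the text):

```python
def conjugacy_orbits(n, p):
    """Orbits of k -> p*k (mod n) on Z/n; each orbit is one
    Q_p-conjugacy class of characters of a cyclic group of order n."""
    seen, orbits = set(), []
    for k in range(n):
        if k in seen:
            continue
        orbit, j = [], k
        while j not in orbit:
            orbit.append(j)
            j = (p * j) % n
        seen.update(orbit)
        orbits.append(sorted(orbit))
    return orbits

# For n = 5, p = 3: the trivial character alone, and one orbit of size 4.
print(conjugacy_orbits(5, 3))  # → [[0], [1, 2, 3, 4]]
```

Accordingly, the factor $\mathcal{R}^{\psi}$ attached to a class of size $> 1$ is a local ring that is not a domain, matching the remark above.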
The following is the second main theorem, of which Theorem \ref{thm:101} is a special case.
\begin{thm}\label{thm:105}
Keep the notation in \S \ref{subsec:choice} and suppose $l = 1$.
Let $\psi$ be a character of $\Gal(K/\widetilde{E})^{(p')}$ such that
\begin{equation}\label{eq:134}
Z_v(K)^{\omega\psi^{-1}} = 0
\end{equation}
for any $v \in \mathcal{V} \setminus S_p(E)$, and moreover that $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$ is pseudo-null.
\begin{itemize}
\item[(1)]
Suppose $d = 1$ and $\mathcal{V} = S_p(E)$ (note that condition \eqref{eq:134} is trivial in this case).
We write $S_p(E) = \{\mathfrak{p}, \ol{\mathfrak{p}}\}$.
Then there exists an $\mathcal{R}^{\psi}$-module $\mathcal{A}$ which fits in an exact sequence
\[
0 \to X(K)^{\psi} \to \mathcal{A} \to Z_{\Sigma_f \setminus S_p(E)}^0(K)^{\psi} \to 0
\]
such that we have an exact sequence
\[
\mathbb{Z}_p(1)^{\psi}
\to \mathcal{A}
\to \frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \{\mathfrak{p}\}}, \mathcal{L}^{\alg, \psi}_{\Sigma, \{\ol{\mathfrak{p}}\}})}
\to E^2(X_{\Sigma_f \setminus S_p(E)}(K)^{\omega \psi^{-1}})^{\iota}(1) \to 0.
\]
Moreover, the image of the first map from $\mathbb{Z}_p(1)^{\psi}$ is finite.
\item[(2)]
Suppose either $d \geq 2$ or $\mathcal{V} \supsetneqq S_p(E)$.
Then there exist $\mathcal{R}^{\psi}$-modules $\mathcal{A}$ and $\mathcal{B}$ which fit in exact sequences
\begin{equation}\label{eq:62}
0 \to X_{\ol{\mathcal{U}^c}}(K)^{\psi} \to \mathcal{A} \to Z_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}^0(K)^{\psi} \to 0
\end{equation}
and
\begin{equation}\label{eq:63}
0 \to X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}} \to \mathcal{B} \to Z_{\ol{\mathcal{U}^c}}^0(K)^{\omega \psi^{-1}} \to 0
\end{equation}
such that we have an exact sequence
\[
0 \to \mathcal{A} \to
\frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_1^c}, \mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_2^c})}
\to E^2(\mathcal{B})^{\iota}(1) \to 0.
\]
\end{itemize}
\end{thm}
We will give explicit constructions of $\mathcal{A}$ and $\mathcal{B}$ in the proof.
Before that, we give a remark and a corollary.
\begin{rem}\label{rem:BCG+b_main}
We can deduce the main result of \cite{BCG+b} from Theorem \ref{thm:105} as follows.
The result \cite[Theorem 5.12]{BCG+b} deals with the case where $p \nmid [K: \tilde{E}]$ and $\Sigma = S_p(E) \cup S_{\infty}(E)$, and concerns the second Chern classes.
The condition \eqref{eq:134} holds trivially by $\Sigma_f = S_p(E)$.
Then we only have to recall that $c_2$ is additive with respect to exact sequences, and that $c_2(E^2(M)) = c_2(M)$ for a pseudo-null module $M$ (see \cite[Remark 5.11]{BCG+b}).
\end{rem}
\begin{cor}\label{cor:73}
Suppose $l = 1$.
Let $\psi$ be a character of $\Gal(K/\widetilde{E})^{(p')}$.
Suppose \eqref{eq:134} for any $v \in \mathcal{V} \setminus S_p(E)$ and that $Z_v(K)^{\psi} = 0$ for $v \in \Sigma_f \setminus \mathcal{V}$.
Then the following are equivalent.
\begin{itemize}
\item[(a)]
The module
\[
\frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_1^c}, \mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_2^c})}
\]
is pseudo-null.
\item[(b)]
Both $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$ and $X_{\ol{\mathcal{U}^c}}(K)^{\psi}$ are pseudo-null.
\end{itemize}
Moreover, if these conditions hold, we have
\[
X_{\ol{\mathcal{U}^c}}(K)^{\psi}_{\fin} = 0
\]
unless $d = 1$, $\mathcal{V} = S_p(E)$, and $\psi = \omega$, in which case $X(K)^{\omega}_{\fin}$ is cyclic.
\end{cor}
\begin{proof}
This is a generalization of \cite[Proposition 5.10]{BCG+b}, whose proof we will follow.
Firstly, Theorem \ref{thm:105} directly shows (b) $\Rightarrow$ (a).
The final assertion on $X_{\ol{\mathcal{U}^c}}(K)^{\psi}_{\fin}$ under (a) and (b) follows from Theorem \ref{thm:105} and an algebraic proposition (e.g., \cite[Lemma A.3]{BCG+}) that $\left(\frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_1^c}, \mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_2^c})}\right)_{\fin} = 0$.
It remains to show (a) $\Rightarrow$ (b), so let us suppose (a).
We first observe that, by Definition \ref{defn:38}, the element $\mathcal{L}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}^{\alg}$ annihilates $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})$ for each $i = 1, 2$.
We shall show the pseudo-nullity of $X_{\ol{\mathcal{U}^c}}(K)^{\psi}$.
By \eqref{eq:77} and the natural surjective homomorphism
\[
X_{\mathcal{V} \setminus \mathcal{S}_i^c}(K) \twoheadrightarrow X_{\mathcal{V} \setminus \ol{\mathcal{U}}}(K),
\]
the element $\mathcal{L}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}^{\alg}$ also annihilates $X_{\mathcal{V} \setminus \ol{\mathcal{U}}}(K)$.
Therefore, the pseudo-nullity of $X_{\ol{\mathcal{U}^c}}(K)^{\psi} = X_{\mathcal{V} \setminus \ol{\mathcal{U}}}(K)^{\psi}$ (as in Lemma \ref{lem:135}) follows from (a).
Next we show the pseudo-nullity of $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$.
As in the proof of Proposition \ref{prop:46}, triangle \eqref{eq:137} induces an exact sequence
\[
0 \to D_{\mathcal{T}_i}(K) \to H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}) \to H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}),
\]
where the injectivity follows from Lemma \ref{lem:37}.
We know that $E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}))^{\iota}(1)$ coincides with $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})_{\tor}$ by Proposition \ref{prop:20}(1).
Moreover, $D_{\mathcal{T}_i}(K)$ is torsion-free by Proposition \ref{prop:22}(2).
From these observations, we see that $E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}))^{\iota}(1)$ maps injectively into $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c})$.
In particular, the element $\mathcal{L}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_i^c}^{\alg, \psi}$ annihilates $E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}})^{\iota}(1)$.
By assumption (a), it follows that $E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}})^{\iota}(1)$ is pseudo-null.
Therefore, $H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}}$ is pseudo-null.
\end{proof}
In the rest of this section, we prove Theorem \ref{thm:105}.
We first show a couple of propositions that are valid without assuming $l = 1$.
The following proposition is a motivation for the conditions in Theorem \ref{thm:105}.
Note that the pseudo-nullity of $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$ is a stronger condition than Greenberg's conjecture.
In fact, it is not necessarily true; in \cite{Kata_04}, the author obtained examples for which the pseudo-nullity does not hold for tamely ramified Iwasawa modules (when $E$ is imaginary quadratic, $K = \widetilde{E}(\mu_p)$, and $\psi = \omega$).
\begin{prop}\label{prop:22}
The following are true.
\begin{itemize}
\item[(1)]
Let $\psi$ be a character of $\Gal(K/\widetilde{E})^{(p')}$.
The module $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi}$ is torsion-free if and only if \eqref{eq:134} holds for any $v \in \mathcal{V} \setminus S_p(E)$ and moreover $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$ is pseudo-null.
\item[(2)]
$D_{\mathfrak{p}}(K)$ is torsion-free for each $\mathfrak{p} \in S_p(E)$.
\end{itemize}
\end{prop}
\begin{proof}
This proposition can be proved in the same way as in \cite[Remark 4.2.5]{BCG+}, as follows.
By Proposition \ref{prop:20}, the module $H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi}$ (resp.~$D_{\mathfrak{p}}(K)$) is torsion-free if and only if $E^1(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}))^{\omega \psi^{-1}} = 0$ (resp.~$E^1(H^2_{\Iw}(K_{\mathfrak{p}}, \mathbb{Z}_p)) = 0$),
that is, $H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}}$ (resp.~$H^2_{\Iw}(K_{\mathfrak{p}}, \mathbb{Z}_p)$) is pseudo-null.
Therefore, claim (1) follows from \eqref{eq:77} and $Z_v(K)_{\PN} = 0$ for each finite prime $v$ outside $p$, and claim (2) follows from \eqref{eq:42} and Lemma \ref{lem:a13}.
\end{proof}
\begin{lem}\label{lem:135}
Let $\psi$ be a character of $\Gal(K/\widetilde{E})^{(p')}$.
Suppose that \eqref{eq:134} holds for any $v \in \mathcal{V} \setminus S_p(E)$.
Then we have $X_{\mathcal{V} \setminus \mathcal{U}^c}(K)^{\psi} \simeq X_{\mathcal{U}}(K)^{\psi}$.
\end{lem}
\begin{proof}
We have an exact sequence
\[
\bigoplus_{v \in \mathcal{V} \setminus S_p(E)} D_v(K)
\to X_{\mathcal{V} \setminus \mathcal{U}^c}(K)
\to X_{\mathcal{U}}(K) \to 0.
\]
For each $v \in \mathcal{V} \setminus S_p(E)$, assumption \eqref{eq:134} and the duality \eqref{eq:94} imply that $D_v(K)^{\psi} = 0$.
Hence the lemma follows.
\end{proof}
\begin{prop}\label{prop:74}
The following are true.
\begin{itemize}
\item[(1)]
Let $\psi$ be a character of $\Gal(K/\widetilde{E})^{(p')}$.
Suppose that \eqref{eq:134} holds for any $v \in \mathcal{V} \setminus S_p(E)$ and moreover $X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}$ is pseudo-null.
Then we have a natural isomorphism
\[
\bigcap_{\mathcal{R}^{\psi}}^l H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi} \simeq \Det_{\mathcal{R}^{\psi}}(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi}.
\]
\item[(2)]
For each $1 \leq i \leq n$, we have a natural isomorphism
\[
\bigcap_{\mathcal{R}}^l D_{\mathcal{T}_i}(K) \simeq \Det_{\mathcal{R}}^{-1}(C_{\mathcal{T}_i}^{\loc}).
\]
\end{itemize}
\end{prop}
\begin{proof}
Recall that, in Proposition \ref{prop:46}, we checked that the conditions of Proposition \ref{prop:30} hold for $C = C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c}^{\psi}[1]$ and for $C = C_{\mathcal{T}_i}^{\loc}$.
Then by Proposition \ref{prop:22} we can apply Proposition \ref{prop:43} to deduce the proposition.
\end{proof}
\begin{prop}\label{prop:a18}
Suppose $d = 1$ (i.e., $E$ is an imaginary quadratic field).
Then we have the following.
\begin{itemize}
\item[(1)]
The $\mathcal{R}$-module $\mathbb{Z}_p(1)$ does not appear as a submodule or a quotient module of $X(K)$.
\item[(2)]
If we assume that $X(K)^{\omega}$ is pseudo-null, then $\mathbb{Z}_p(1)$ does not appear as a submodule or a quotient module of $E^2(X(K))$.
\end{itemize}
\end{prop}
\begin{proof}
(1) follows from \cite[Proposition 4.1.15]{BCG+}.
For (2), we observe the duality
\[
X(K)^{\omega}_{/\fin} \simeq E^2(E^2(X(K)^{\omega}_{/\fin}))
\]
(see Proposition \ref{prop:88}).
Note also that $E^2(\mathbb{Z}_p(1)) \simeq \mathbb{Z}_p(1)$.
Then it is easy to see that $\mathbb{Z}_p(1)$ is a sub (resp.~a quotient) of $E^2(X(K))^{\omega}$ if and only if $\mathbb{Z}_p(1)$ is a quotient (resp.~a sub) of $X(K)^{\omega}$.
Therefore, claim (2) follows from claim (1).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:105}]
For the moment, we treat all cases simultaneously.
Let $\mathcal{T}_1 = \{\mathfrak{p}\}$ and $\mathcal{T}_2 = \{\ol{\mathfrak{p}}\}$.
By Propositions \ref{prop:20} and \ref{prop:22} we have a commutative diagram with exact rows
\[
\xymatrix{
0 \ar[r]
& D_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\psi} \ar[r] \ar[d]_{f_1}
& D_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\psi, **} \ar[d]_{f_2} \ar[r]
& E^2(Z_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\omega \psi^{-1}})^{\iota}(1) \ar[r] \ar[d]_{f_3}
& 0 \\
0 \ar[r]
& H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi} \ar[r]
& H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi, **} \ar[r]
& E^2(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}})^{\iota}(1) \ar[r]
& W_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}^{\psi}.
}
\]
By Proposition \ref{prop:74}, this diagram can be identified with the $\psi$-component of that in Proposition \ref{prop:46}.
We shall show that the snake lemma applied to this diagram proves Theorem \ref{thm:105}.
By Proposition \ref{prop:47}, we know that
\[
\Coker(f_2) \simeq \frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_1^c}, \mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_2^c})}.
\]
To study $f_1$, we use the exact sequence
\begin{equation}\label{eq:a16}
0 \to X_{\mathcal{U}}(K)^{\psi}
\to H^2(C_{\Sigma, \mathcal{V} \setminus \mathcal{U}^c})^{\psi}
\to Z_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}^0(K)^{\psi} \to 0
\end{equation}
obtained by \eqref{eq:77} and Lemma \ref{lem:135}.
Putting $\mathcal{A} = \Coker(f_1)$, we deduce an exact sequence
\begin{equation}\label{eq:a17}
0 \to X_{\ol{\mathcal{U}^c}}(K)^{\psi} \to \mathcal{A} \to Z_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}^0(K)^{\psi} \to 0,
\end{equation}
which is claimed in the theorem.
Similarly, to study $f_3$, we use the exact sequence
\begin{equation}\label{eq:61}
0 \to X_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}(K)^{\omega \psi^{-1}}
\to H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}}
\to Z_{\mathcal{U}}^0(K)^{\omega \psi^{-1}} \to 0
\end{equation}
obtained by \eqref{eq:77} and \eqref{eq:134}.
All modules in \eqref{eq:61} are pseudo-null by assumption.
Now we consider (1) (so $\mathcal{U} = S_p(E)$).
In this case, since we have $E^i(\mathbb{Z}_p) = 0$ and $E^i(Z_{S_p(E)}(K)) = 0$ for $i \neq 2$, we deduce from \eqref{eq:61} an exact sequence
{\small
\[
0 \to E^2(\mathbb{Z}_p)^{\omega \psi^{-1}} \to E^2(Z_{S_p(E)}(K))^{\omega \psi^{-1}}
\to E^2(H^2(C_{\Sigma, \Sigma_f \setminus S_p(E)}))^{\omega \psi^{-1}}
\to E^2(X_{\Sigma_f \setminus S_p(E)}(K))^{\omega \psi^{-1}} \to 0.
\]
}
Hence we have
\[
\Ker(f_3) \simeq \mathbb{Z}_p(1)^{\psi},
\qquad \Coker(f_3) \simeq E^2(X_{\Sigma_f \setminus S_p(E)}(K)^{\omega \psi^{-1}})^{\iota}(1).
\]
Putting these together, the snake lemma induces an exact sequence
\[
\mathbb{Z}_p(1)^{\psi} \to \mathcal{A}
\to \frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \{\mathfrak{p}\}}, \mathcal{L}^{\alg, \psi}_{\Sigma, \{\ol{\mathfrak{p}}\}})} \to E^2(X_{\Sigma_f \setminus S_p(E)}(K)^{\omega \psi^{-1}})^{\iota}(1) \to W_{\Sigma_f \setminus S_p(E)}^{\psi}.
\]
The last map to $W_{\Sigma_f \setminus S_p(E)}^{\psi}$ is zero because we only have to consider the case $\Sigma_f = S_p(E)$, and then Proposition \ref{prop:a18}(2) applies.
Let us moreover show that the image of the first map from $\mathbb{Z}_p(1)^{\psi}$ is finite.
It is easy to see that the module $Z_{\Sigma_f \setminus S_p(E)}(K)$ does not contain $\mathbb{Z}_p(1)$ as a submodule.
Then Proposition \ref{prop:a18}(1) implies that $\mathcal{A}$ does not contain $\mathbb{Z}_p(1)$ as a submodule, so the claim holds.
Now we consider (2).
We first observe that the natural map
\[
Z_{\mathcal{U}}^0(K)^{\omega \psi^{-1}} \to Z_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\omega \psi^{-1}}
\]
is surjective.
If $d \geq 2$, this follows from $\mathcal{U} \supsetneqq \{\mathfrak{p}, \ol{\mathfrak{p}}\}$.
If $\mathcal{V} \supsetneqq S_p(E)$, then condition \eqref{eq:134} clearly implies $(\mathbb{Z}_p)^{\omega \psi^{-1}} = 0$ (i.e., $\psi \neq \omega$).
Thus the surjectivity holds.
Therefore, by \eqref{eq:61}, we can define a module $\mathcal{B}$ by the following exact sequence
\begin{equation}\label{eq:301}
0 \to \mathcal{B} \to H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}}
\to Z_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\omega \psi^{-1}} \to 0,
\end{equation}
and then $\mathcal{B}$ fits in the short exact sequence (with $\mathcal{B}$ in the middle) claimed in the theorem.
On the other hand, from \eqref{eq:301} we deduce an exact sequence
\[
0 \to E^2(Z_{\{\mathfrak{p}, \ol{\mathfrak{p}}\}}(K)^{\omega \psi^{-1}}) \to E^2(H^2(C_{\Sigma, (\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c})^{\omega \psi^{-1}}) \to E^2(\mathcal{B}) \to 0.
\]
Hence we have
\[
\Ker(f_3) = 0,
\qquad \Coker(f_3) \simeq E^2(\mathcal{B})^{\iota}(1).
\]
Putting these together, the snake lemma induces an exact sequence
\[
0 \to \mathcal{A} \to
\frac{\mathcal{R}^{\psi}}{(\mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_1^c}, \mathcal{L}^{\alg, \psi}_{\Sigma, \mathcal{V} \setminus \mathcal{S}_2^c})}
\to E^2(\mathcal{B})^{\iota}(1) \to W_{(\Sigma_f \setminus \mathcal{V}) \cup \mathcal{U}^c}^{\psi}.
\]
Finally we show that the last map is zero.
We only have to deal with the case where $\Sigma_f = \mathcal{V}$ and $\mathcal{U}^c = \emptyset$ (so $d = 1$).
In that case, we have $\mathcal{B} \simeq X(K)^{\omega \psi^{-1}}$, and Proposition \ref{prop:a18}(2) again concludes the proof.
\end{proof}
\renewcommand{\thesection}{\Alph{section}}
\setcounter{section}{0}
\section{Properties of Fitting ideals}\label{sec:87}
Let $\mathcal{R}$ be a ring which contains a regular local ring $\Lambda$ as in the first paragraph of \S \ref{subsec:a22}.
Let $\mathcal{P}^1_{\mathcal{R}}$ be the category of finitely generated torsion $\mathcal{R}$-modules $P$ with $\pd_{\mathcal{R}}(P) \leq 1$.
This is equivalent to saying that $P$ admits an exact sequence of the form
\begin{equation}\label{eq:pres}
0 \to \mathcal{R}^a \overset{H}{\to} \mathcal{R}^a \to P \to 0
\end{equation}
(with an integer $a \geq 0$).
Then the Fitting ideal $\Fitt_{\mathcal{R}}(P)$ is by definition generated by the single element $\det(H)$, which is a non-zero-divisor, so in particular $\Fitt_{\mathcal{R}}(P)$ is invertible as a fractional ideal.
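As a toy instance of this definition (with $\mathcal{R} = \mathbb{Z}$ standing in for the Iwasawa algebra, and with a hand-rolled determinant helper of our own), the finite module $\mathbb{Z}/2 \oplus \mathbb{Z}/6$ has the square presentation $\mathrm{diag}(2,6)$, and its Fitting ideal is generated by the determinant $12$, which also equals the order of the module:

```python
def det(m):
    """Determinant of a square integer matrix by Laplace expansion
    along the first row (adequate for small presentation matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

# Presentation 0 -> Z^2 --H--> Z^2 -> Z/2 + Z/6 -> 0 with H = diag(2, 6):
H = [[2, 0], [0, 6]]
print(det(H))  # → 12, a generator of Fitt_Z(Z/2 + Z/6)
```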
\begin{prop}\label{prop:A1}
For each $P \in \mathcal{P}^1_{\mathcal{R}}$, the following hold.
\begin{itemize}
\item[(1)]
We have $E^1(P) \in \mathcal{P}^1_{\mathcal{R}}$ and $E^1(E^1(P)) \simeq P$.
\item[(2)]
We have $\Fitt_{\mathcal{R}}(E^1(P)) = \Fitt_{\mathcal{R}}(P)$.
\end{itemize}
\end{prop}
\begin{proof}
This proposition is well-known to experts, but for the convenience of the reader we give a proof.
Let us take a sequence of the form \eqref{eq:pres}, which yields an exact sequence
\begin{equation}\label{eq:pres'}
0 \to (\mathcal{R}^a)^* \overset{H^*}{\to} (\mathcal{R}^a)^* \to E^1(P) \to 0,
\end{equation}
where $H^*$ is the map induced by $H$.
This implies that $E^1(P) \in \mathcal{P}^1_{\mathcal{R}}$ and
\[
\Fitt_{\mathcal{R}}(E^1(P)) = (\det(H^*)) = (\det(H)) = \Fitt_{\mathcal{R}}(P).
\]
Moreover, \eqref{eq:pres'} yields an exact sequence
\[
0 \to (\mathcal{R}^a)^{**} \overset{H^{**}}{\to} (\mathcal{R}^a)^{**} \to E^1(E^1(P)) \to 0.
\]
Comparing this sequence with \eqref{eq:pres}, together with $(\mathcal{R}^a)^{**} \simeq \mathcal{R}^a$, shows $E^1(E^1(P)) \simeq P$.
\end{proof}
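In down-to-earth terms, the computation behind claim (2) is only that the presentation matrix $H^*$ of $E^1(P)$ is the transpose of $H$, and transposition preserves determinants. A tiny numeric sanity check (the matrix is chosen arbitrarily, and the helpers are ours):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum(
        (-1) ** j * entry * det([row[:j] + row[j + 1:] for row in m[1:]])
        for j, entry in enumerate(m[0])
    )

def transpose(m):
    return [list(col) for col in zip(*m)]

H = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]
print(det(H), det(transpose(H)))  # → 25 25
```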
Now we consider codimension two analogues of Proposition \ref{prop:A1}.
Let $\mathcal{P}^2_{\mathcal{R}}$ be the category of pseudo-null $\mathcal{R}$-modules $M$ with $\pd_{\mathcal{R}}(M) \leq 2$.
\begin{prop}\label{prop:88}
For each $M \in \mathcal{P}^2_{\mathcal{R}}$, the following hold.
\begin{itemize}
\item[(1)]
We have $E^2(M) \in \mathcal{P}^2_{\mathcal{R}}$ and $E^2(E^2(M)) \simeq M$.
\item[(2)]
We have $\Fitt_{\mathcal{R}}(E^2(M)) = \Fitt_{\mathcal{R}}(M)$.
\end{itemize}
\end{prop}
First we show claim (1).
\begin{proof}[Proof of Proposition \ref{prop:88}(1)]
This is proved in a similar way as Proposition \ref{prop:A1}(1).
Let us choose a module $P \in \mathcal{P}^1_{\mathcal{R}}$ and a surjective homomorphism from $P$ to $M$; for instance, if $f \in \mathcal{R}$ is an annihilator of $M$ that is a non-zero-divisor, we may take $P$ as a direct sum of copies of $\mathcal{R}/(f)$.
Then, letting $Q$ denote the kernel of $P \to M$, we obtain an exact sequence
\begin{equation}\label{eq:pres2}
0 \to Q \to P \to M \to 0.
\end{equation}
By $M \in \mathcal{P}^2_{\mathcal{R}}$ and $P \in \mathcal{P}^1_{\mathcal{R}}$, we have $Q \in \mathcal{P}^1_{\mathcal{R}}$.
Then sequence \eqref{eq:pres2} induces an exact sequence
\begin{equation}\label{eq:pres2'}
0 \to E^1(P) \to E^1(Q) \to E^2(M) \to 0.
\end{equation}
This sequence, together with the fact $E^1(P), E^1(Q) \in \mathcal{P}^1_{\mathcal{R}}$ by Proposition \ref{prop:A1}(1), implies that $\pd_{\mathcal{R}}(E^2(M)) \leq 2$.
Since $M$ is pseudo-null, so is $E^2(M)$.
Therefore, we have $E^2(M) \in \mathcal{P}^2_{\mathcal{R}}$.
Moreover, comparing \eqref{eq:pres2} with the sequence
\[
0 \to E^1(E^1(Q)) \to E^1(E^1(P)) \to E^2(E^2(M)) \to 0
\]
induced by \eqref{eq:pres2'}, and using the facts $E^1(E^1(P)) \simeq P$ and $E^1(E^1(Q)) \simeq Q$ by Proposition \ref{prop:A1}(1), we obtain $E^2(E^2(M)) \simeq M$.
\end{proof}
For the proof of Proposition \ref{prop:88}(2), we need an auxiliary lemma.
\begin{lem}\label{lem:a88}
For any finitely generated torsion $\mathcal{R}$-module $M$, there exists a finite presentation of $M$
\[
\mathcal{R}^a \overset{H}{\to} \mathcal{R}^b \to M \to 0
\]
such that all $b \times b$ minors of the presentation matrix of $H$ are non-zero-divisors of $\mathcal{R}$.
\end{lem}
\begin{proof}
If $\Lambda$ has only finitely many elements, then $\Lambda$ is a finite field and the only torsion module is the zero module, so the assertion is trivial.
Therefore, we may assume that $\Lambda$ has infinitely many elements.
Let $\mathcal{R}^m \overset{H}{\to} \mathcal{R}^n \to M \to 0$ be any finite presentation of $M$ ($m \geq n$).
We identify $H$ with its presentation matrix in $M_{m, n}(\mathcal{R})$.
Let us fix a non-zero-divisor $f \in \mathcal{R}$ that annihilates $M$.
For an element $X = (x_{i j})_{i, j} \in M_{m, n}(\Lambda)$, we consider a matrix
\[
H_X = \begin{pmatrix}
f I_n \\ H + f X
\end{pmatrix} \in M_{m+n, n}(\mathcal{R}),
\]
where $I_n$ denotes the identity matrix of size $n$.
Then $H_X$ can also be regarded as a finite presentation of $M$.
We will find an element $X \in M_{m, n}(\Lambda)$ such that all $n \times n$ minors of $H_X$ are non-zero-divisors, which would complete the proof of the lemma.
By a pair $(I, J)$, we will mean a pair of subsets $I \subset \{1, 2, \dots, m\}$ and $J \subset \{1, 2, \dots, n\}$ such that $\# I = \# J$.
For a matrix $A = (a_{ij})_{1 \leq i \leq m, 1 \leq j \leq n}$ of size $m \times n$, let us write
\[
A_{I, J} = (a_{ij})_{i \in I, j \in J},
\]
i.e., the square submatrix of $A$ obtained by picking the rows in $I$ and the columns in $J$.
We observe that any $n \times n$ minor of $H_X$ is the product of a power of $f$ and a minor of $H + fX$ (not necessarily of degree $n$).
Therefore, the required property of $X$ is equivalent to that $\det((H + fX)_{I, J}) \in \mathcal{R}$ is a non-zero-divisor for any pair $(I, J)$.
Let us write $\mathcal{R}[X]$ (resp.~$\Lambda[X]$) for the polynomial ring over $\mathcal{R}$ (resp.~$\Lambda$) in variables $\{x_{ij}\}_{1 \leq i \leq m, 1 \leq j \leq n}$; here we regard $\{x_{ij}\}_{i, j}$ as indeterminates.
For example, $\det(X_{I, J}) \in \Lambda[X]$ is a nonzero homogeneous polynomial of degree $\# I (= \# J)$.
By the definition of the determinant, we have
\begin{equation}\label{eq:Adet2}
\det((H + fX)_{I, J}) = f^{\# I} \det(X_{I, J}) + (\text{lower degree}),
\end{equation}
where (lower degree) denotes a polynomial in $\mathcal{R}[X]$ whose degree is strictly less than $\# I$.
We put
\[
D_X = \prod_{(I, J)} \det((H + fX)_{I, J}),
\]
where $(I, J)$ runs over all the pairs we are considering.
By taking the product of \eqref{eq:Adet2}, we obtain
\begin{equation}\label{eq:Adet}
D_X = f^N F(X) + (\text{lower degree})
\end{equation}
with $N = \sum_{(I, J)} \# I$, $F(X) = \prod_{(I, J)} \det(X_{I, J})$, and (lower degree) denotes a polynomial in $\mathcal{R}[X]$ whose degree is strictly less than that of $F(X)$.
Since $\mathcal{R}$ is free of finite rank over $\Lambda$, we have the norm map $\mathsf{N}: \mathcal{R} \to \Lambda$; for $a \in \mathcal{R}$, the norm $\mathsf{N}(a)$ is defined as the determinant of the presentation matrix of the $\Lambda$-homomorphism $a: \mathcal{R} \to \mathcal{R}$ (with respect to any basis).
Note that an element $a \in \mathcal{R}$ is a non-zero-divisor if and only if $\mathsf{N}(a) \neq 0$.
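A concrete instance of this criterion (our own example, with $\Lambda = \mathbb{Z}$ and $\mathcal{R} = \mathbb{Z}[i]$, free of rank $2$ over $\mathbb{Z}$): multiplication by $a + bi$ has matrix $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ in the basis $\{1, i\}$, so $\mathsf{N}(a + bi) = a^2 + b^2$, and $a + bi$ is a non-zero-divisor exactly when this norm is nonzero:

```python
def norm(a, b):
    """Norm of a + b*i in Z[i] over Z: the determinant of the
    multiplication-by-(a+bi) matrix [[a, -b], [b, a]]."""
    return a * a - (-b) * b  # = a^2 + b^2

print(norm(3, 4))  # → 25: 3 + 4i is a non-zero-divisor
print(norm(0, 0))  # → 0: only 0 itself fails the criterion in this domain
```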
By \eqref{eq:Adet}, we find
\[
\mathsf{N}(D_X) = \mathsf{N}(f)^N F(X)^{\rank_{\Lambda}(\mathcal{R})} + (\text{lower degree}),
\]
where (lower degree) denotes a polynomial in $\Lambda[X]$ whose degree is strictly less than that of $F(X)^{\rank_{\Lambda}(\mathcal{R})}$.
Now it is enough to show that $\mathsf{N}(D_X) \neq 0$ for some $X \in M_{m, n}(\Lambda)$.
Since $\det(X_{I, J})$ is a nonzero homogeneous polynomial in $\Lambda[X]$, so is $F(X)$.
Recall that we assume $\Lambda$ has infinitely many elements.
Therefore, there exists an element $X_0 \in M_{m, n}(\Lambda)$ such that $F(X_0) \neq 0$.
Once we fix such an $X_0$, we may regard $\mathsf{N}(D_{\lambda X_0})$ as a polynomial in $\lambda \in \Lambda$ whose leading coefficient is $\mathsf{N}(f)^N F(X_0)^{\rank_{\Lambda}(\mathcal{R})} \neq 0$.
Then we find an element $\lambda \in \Lambda$ such that $\mathsf{N}(D_{\lambda X_0}) \neq 0$.
This completes the proof of the lemma.
\end{proof}
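The final step of the proof uses only that a nonzero one-variable polynomial over an infinite domain has finitely many roots, so scanning $\lambda = 0, 1, 2, \dots$ must hit a non-root within $\deg + 1$ attempts. An illustrative sketch over $\mathbb{Z}$ (the helper name and the sample polynomial are our own):

```python
def nonvanishing_point(coeffs):
    """Given coefficients (constant term first) of a nonzero polynomial
    over Z, return some integer lam with p(lam) != 0.  At most deg
    candidates can be roots, so the scan terminates."""
    assert any(coeffs), "polynomial must be nonzero"
    for lam in range(len(coeffs) + 1):
        value = sum(c * lam ** k for k, c in enumerate(coeffs))
        if value != 0:
            return lam, value
    raise AssertionError("unreachable for a nonzero polynomial")

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2) vanishes at 1 and 2 but not at 0.
print(nonvanishing_point([2, -3, 1]))  # → (0, 2)
```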
\begin{proof}[Proof of Proposition \ref{prop:88}(2)]
By claim (1), it is enough to show a single inclusion, say $\supset$.
Let us take a finite presentation $\mathcal{R}^a \overset{H}{\to} \mathcal{R}^b \overset{\pi}{\to} M \to 0$ as in Lemma \ref{lem:a88}.
Take any $b \times b$ submatrix of $H$ and let $\alpha: \mathcal{R}^b \to \mathcal{R}^b$ be the corresponding homomorphism.
By the definition of Fitting ideals, it is enough to show $\det(\alpha) \in \Fitt_{\mathcal{R}}(E^2(M))$.
Put $P = \Coker(\alpha)$, so we have $\Fitt_{\mathcal{R}}(P) = (\det(\alpha))$.
Since $\alpha$ is injective by the choice of $H$, we also have $P \in \mathcal{P}^1_{\mathcal{R}}$.
On the other hand, we have $\pi \circ \alpha = 0$, so $\pi$ induces a surjective homomorphism $P \to M$.
We define $Q$ as its kernel, so we obtain an exact sequence
\begin{equation}\label{eq:90}
0 \to Q \to P \to M \to 0.
\end{equation}
Observe that $M \in \mathcal{P}^2_{\mathcal{R}}$ and $P \in \mathcal{P}^1_{\mathcal{R}}$ imply $Q \in \mathcal{P}^1_{\mathcal{R}}$.
Then \eqref{eq:90} induces an exact sequence
\[
0 \to E^1(P) \to E^1(Q) \to E^2(M) \to 0.
\]
In particular, since $E^2(M)$ is a quotient of $E^1(Q)$, we have
\[
\Fitt_{\mathcal{R}}(E^2(M)) \supset \Fitt_{\mathcal{R}}(E^1(Q)).
\]
Since $Q \in \mathcal{P}^1_{\mathcal{R}}$, Proposition \ref{prop:A1}(2) implies $\Fitt_{\mathcal{R}}(E^1(Q)) = \Fitt_{\mathcal{R}}(Q)$.
Finally we show $\Fitt_{\mathcal{R}}(Q) = \Fitt_{\mathcal{R}}(P)$.
Let $\mathfrak{q}$ be any height one prime of $\Lambda$, and recall that the subscript $(-)_{\mathfrak{q}}$ denotes the localization with respect to the multiplicative set $\Lambda \setminus \mathfrak{q}$.
By \eqref{eq:90} and the pseudo-nullity of $M$, we have $Q_{\mathfrak{q}} \simeq P_{\mathfrak{q}}$, so
$\Fitt_{\mathcal{R}}(Q)\mathcal{R}_{\mathfrak{q}} = \Fitt_{\mathcal{R}}(P)\mathcal{R}_{\mathfrak{q}}$.
Then the claim $\Fitt_{\mathcal{R}}(Q) = \Fitt_{\mathcal{R}}(P)$ follows from this, since both sides are invertible (moreover principal) fractional ideals of $\mathcal{R}$.
\end{proof}
\begin{rem}\label{rem:95}
As mentioned in Remark \ref{rem:BCG+b_main}, when $\mathcal{R}$ is a product of regular local rings, we have $c_2(E^2(M)) = c_2(M)$ for a pseudo-null module $M$.
Although the statement of Proposition \ref{prop:88} is quite similar, the proofs have nothing in common.
\end{rem}
Let us suppose that $\mathcal{R}$ is a product of regular local rings.
Then, for each finitely generated torsion $\mathcal{R}$-module $M$, we have a classical definition of characteristic ideal $\cha_{\mathcal{R}}(M)$, using the structure theorem of finitely generated modules (up to pseudo-null modules).
By definition $\cha_{\mathcal{R}}(M)$ is principal.
The characteristic ideals satisfy the additivity properties with respect to exact sequences, and we have $\cha_{\mathcal{R}}(P) = \Fitt_{\mathcal{R}}(P)$ for $P \in \mathcal{P}^1_{\mathcal{R}}$.
In fact, these properties characterize $\cha_{\mathcal{R}}(-)$.
The following proposition is used in \S \ref{subsec:132}.
\begin{prop}\label{prop:91}
Suppose that $\dim(\Lambda) = 2$ and that $\mathcal{R}$ is a product of regular local rings.
For each finitely generated torsion $\mathcal{R}$-module $M$, we have
\[
\Fitt_{\mathcal{R}}(M) = \cha_{\mathcal{R}}(M) \Fitt_{\mathcal{R}}(E^2(M)).
\]
\end{prop}
\begin{proof}
Consider the exact sequence $0 \to M_{\PN} \to M \to M_{/\PN} \to 0$.
We have $M_{/\PN} \in \mathcal{P}^1_{\mathcal{R}}$ by applying the Auslander-Buchsbaum formula.
Then a well-known property of Fitting ideals (see, e.g., \cite[Lemma 3]{CG98}) tells us
\[
\Fitt_{\mathcal{R}}(M) = \Fitt_{\mathcal{R}}(M_{/\PN}) \Fitt_{\mathcal{R}}(M_{\PN}).
\]
By basic properties of characteristic ideals, we have
\[
\Fitt_{\mathcal{R}}(M_{/\PN}) = \cha_{\mathcal{R}}(M_{/\PN}) = \cha_{\mathcal{R}}(M).
\]
On the other hand, we have $M_{\PN} \in \mathcal{P}^2_{\mathcal{R}}$, so Proposition \ref{prop:88} implies
\[
\Fitt_{\mathcal{R}}(M_{\PN}) = \Fitt_{\mathcal{R}}(E^2(M_{\PN})) = \Fitt_{\mathcal{R}}(E^2(M)).
\]
This completes the proof.
\end{proof}
\section*{Acknowledgments}
I would like to thank Masato Kurihara for encouraging me in this research and giving valuable comments.
I am also grateful to Mahiro Atsuta and Ryotaro Sakamoto for discussion on the papers \cite{BCG+} and \cite{BCG+b}.
This research was supported by JSPS KAKENHI Grant Number 19J00763.
{
\bibliographystyle{abbrv}
Let $f:I\subseteq \mathbb{R}\rightarrow \mathbb{R}$ be a convex function defined on
the interval $I$ of real numbers and $a,b\in I$ with $a<b$. The following
inequality
\begin{equation}
f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\dint\limits_{a}^{b}f(x)dx
\leq \frac{f(a)+f(b)}{2} \label{1-1}
\end{equation}
holds. This double inequality is known in the literature as the Hermite-Hadamard
integral inequality for convex functions. Note that some of the classical
inequalities for means can be derived from (\ref{1-1}) for appropriate
particular selections of the mapping $f$. Both inequalities hold in the
reversed direction if $f$ is concave. For some results which generalize,
improve and extend the inequalities (\ref{1-1}) we refer the reader to the
recent papers (see \cite{I13,I13a,I13aa,KOA11,SOD10}) and references
therein.
In \cite{I13}, Iscan gave the definition of harmonic convexity as follows:
\begin{definition}
Let $I\subseteq
\mathbb{R}
\backslash \left\{ 0\right\} $ be a real interval. A function $
f:I\rightarrow
\mathbb{R}
$ is said to be harmonically convex if
\begin{equation}
f\left( \frac{xy}{tx+(1-t)y}\right) \leq tf(y)+(1-t)f(x) \label{1-2}
\end{equation}
for all $x,y\in I$ and $t\in \lbrack 0,1]$. If the inequality in (\ref{1-2})
is reversed, then $f$ is said to be harmonically concave.
\end{definition}
The following result of the Hermite-Hadamard type holds.
\begin{theorem}[\protect\cite{I13}]
\label{1.1} Let $f:I\subseteq
\mathbb{R}
\backslash \left\{ 0\right\} \rightarrow
\mathbb{R}
$ be a harmonically convex function and $a,b\in I$ with $a<b.$ If $f\in
L[a,b]$, then the following inequalities hold:
\begin{equation*}
f\left( \frac{2ab}{a+b}\right) \leq \frac{ab}{b-a}\dint\limits_{a}^{b}\frac{
f(x)}{x^{2}}dx\leq \frac{f(a)+f(b)}{2}.
\end{equation*}
\end{theorem}
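As a quick illustration (our addition, not part of the original statement), take $f(x)=x$, which is harmonically convex because the weighted harmonic mean never exceeds the weighted arithmetic mean. Theorem \ref{1.1} then reduces to a classical chain of means:

```latex
% Specialization of Theorem \ref{1.1} to f(x)=x (illustrative example)
\begin{equation*}
\frac{2ab}{a+b}\leq \frac{ab}{b-a}\dint\limits_{a}^{b}\frac{dx}{x}
=\frac{ab}{b-a}\ln \frac{b}{a}\leq \frac{a+b}{2},
\end{equation*}
```

the middle expression being $ab/L(a,b)$, where $L(a,b)=(b-a)/\ln (b/a)$ is the logarithmic mean.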
\begin{lemma}[\protect\cite{I13}]
\label{1.2} Let $f:I\subseteq
\mathbb{R}
\backslash \left\{ 0\right\} \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ and $a,b\in I$ with $a<b$. If
$f^{\prime }\in L[a,b]$ then
\begin{eqnarray}
&&\frac{f(a)+f(b)}{2}-\frac{ab}{b-a}\dint\limits_{a}^{b}\frac{f(x)}{x^{2}}dx
\label{1-3} \\
&=&\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{1-2t}{\left(
tb+(1-t)a\right) ^{2}}f^{\prime }\left( \frac{ab}{tb+(1-t)a}\right) dt.
\notag
\end{eqnarray}
\end{lemma}
In \cite{I13}, Iscan proved the following results connected with the right
part of (\ref{1-2}).
\begin{theorem}
\label{1.3}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$, $a,b\in I$ with $a<b,$ and $
f^{\prime }\in L[a,b].$ If $\left\vert f^{\prime }\right\vert ^{q}$ is
harmonically convex on $[a,b]$ for $q\geq 1,$ then
\begin{eqnarray}
&&\left\vert \frac{f(a)+f(b)}{2}-\frac{ab}{b-a}\dint\limits_{a}^{b}\frac{f(x)
}{x^{2}}dx\right\vert \label{1-4a} \\
&\leq &\frac{ab\left( b-a\right) }{2}\lambda _{1}^{1-\frac{1}{q}}\left[
\lambda _{2}\left\vert f^{\prime }\left( a\right) \right\vert ^{q}+\lambda
_{3}\left\vert f^{\prime }\left( b\right) \right\vert ^{q}\right] ^{\frac{1}{
q}}, \notag
\end{eqnarray}
where
\begin{eqnarray*}
\lambda _{1} &=&\frac{1}{ab}-\frac{2}{\left( b-a\right) ^{2}}\ln \left(
\frac{\left( a+b\right) ^{2}}{4ab}\right) , \\
\lambda _{2} &=&\frac{-1}{b\left( b-a\right) }+\frac{3a+b}{\left( b-a\right)
^{3}}\ln \left( \frac{\left( a+b\right) ^{2}}{4ab}\right) , \\
\lambda _{3} &=&\frac{1}{a\left( b-a\right) }-\frac{3b+a}{\left( b-a\right)
^{3}}\ln \left( \frac{\left( a+b\right) ^{2}}{4ab}\right) \\
&=&\lambda _{1}-\lambda _{2}.
\end{eqnarray*}
\end{theorem}
\begin{theorem}
\label{1.4}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$, $a,b\in I$ with $a<b,$ and $
f^{\prime }\in L[a,b].$ If $\left\vert f^{\prime }\right\vert ^{q}$ is
harmonically convex on $[a,b]$ for $q>1,\;\frac{1}{p}+\frac{1}{q}=1,$ then
\begin{eqnarray}
&&\left\vert \frac{f(a)+f(b)}{2}-\frac{ab}{b-a}\dint\limits_{a}^{b}\frac{f(x)
}{x^{2}}dx\right\vert \label{1-4} \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \frac{1}{p+1}\right) ^{\frac{1}{p
}}\left( \mu _{1}\left\vert f^{\prime }\left( a\right) \right\vert ^{q}+\mu
_{2}\left\vert f^{\prime }\left( b\right) \right\vert ^{q}\right) ^{\frac{1}{
q}}, \notag
\end{eqnarray}
where
\begin{eqnarray*}
\mu _{1} &=&\frac{\left[ a^{2-2q}+b^{1-2q}\left[ \left( b-a\right) \left(
1-2q\right) -a\right] \right] }{2\left( b-a\right) ^{2}\left( 1-q\right)
\left( 1-2q\right) }, \\
\mu _{2} &=&\frac{\left[ b^{2-2q}-a^{1-2q}\left[ \left( b-a\right) \left(
1-2q\right) +b\right] \right] }{2\left( b-a\right) ^{2}\left( 1-q\right)
\left( 1-2q\right) }.
\end{eqnarray*}
\end{theorem}
We recall the following special functions and inequality:
(1) The Beta function
\begin{equation*}
\beta \left( x,y\right) =\frac{\Gamma (x)\Gamma (y)}{\Gamma (x+y)}
=\dint\limits_{0}^{1}t^{x-1}\left( 1-t\right) ^{y-1}dt,\ \ x,y>0,
\end{equation*}
(2) The hypergeometric function
\begin{equation*}
_{2}F_{1}\left( a,b;c;z\right) =\frac{1}{\beta \left( b,c-b\right)}
\dint\limits_{0}^{1}t^{b-1}\left( 1-t\right) ^{c-b-1}\left( 1-zt\right)
^{-a}dt,\ c>b>0,\ \left\vert z\right\vert <1\text{ (see \cite{KST06}).}
\end{equation*}
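This representation is the main computational tool of Section 2: with $A_{t}=ta+(1-t)b=b(1-zt)$ and $z=1-a/b\in (0,1)$, the integrals appearing below evaluate directly. For instance (spelled out here for the reader's convenience; the parameters follow by matching $b=1$, $c=\alpha p+2$, $a=2p$ in the representation above):

```latex
\begin{equation*}
\dint\limits_{0}^{1}\frac{(1-t)^{\alpha p}}{A_{t}^{2p}}dt
=b^{-2p}\dint\limits_{0}^{1}(1-t)^{\alpha p}\left( 1-zt\right) ^{-2p}dt
=\frac{b^{-2p}}{\alpha p+1}\ _{2}F_{1}\left( 2p,1;\alpha p+2;z\right) .
\end{equation*}
```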
\begin{lemma}[\protect\cite{PBM81,WZZ13}]
\label{1.5}For $0<\alpha \leq 1$ and $0\leq a<b$, we have
\begin{equation*}
\left\vert a^{\alpha }-b^{\alpha }\right\vert \leq \left( b-a\right)
^{\alpha }.
\end{equation*}
\end{lemma}
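For completeness, this follows from the subadditivity of $t\mapsto t^{\alpha }$ on $[0,\infty )$ for $0<\alpha \leq 1$ (a one-line argument, added here for the reader):

```latex
\begin{equation*}
b^{\alpha }=\left( (b-a)+a\right) ^{\alpha }\leq \left( b-a\right) ^{\alpha
}+a^{\alpha }\ \Longrightarrow \ \left\vert a^{\alpha }-b^{\alpha
}\right\vert =b^{\alpha }-a^{\alpha }\leq \left( b-a\right) ^{\alpha }.
\end{equation*}
```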
In the following, we will give some necessary definitions and mathematical
preliminaries of fractional calculus theory which are used further in this
paper.
\begin{definition}
Let $f\in L\left[ a,b\right] $. The Riemann-Liouville integrals $
J_{a+}^{\alpha }f$ and $J_{b-}^{\alpha }f$ of order $\alpha >0$ with $a\geq 0$
are defined by
\begin{equation*}
J_{a+}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\dint\limits_{a}^{x}\left(
x-t\right) ^{\alpha -1}f(t)dt,\ x>a
\end{equation*}
and
\begin{equation*}
J_{b-}^{\alpha }f(x)=\frac{1}{\Gamma (\alpha )}\dint\limits_{x}^{b}\left(
t-x\right) ^{\alpha -1}f(t)dt,\ x<b
\end{equation*}
respectively, where $\Gamma (\alpha )$ is the Gamma function defined by $
\Gamma (\alpha )=$ $\dint\limits_{0}^{\infty }e^{-t}t^{\alpha -1}dt$ and $
J_{a^{+}}^{0}f(x)=J_{b^{-}}^{0}f(x)=f(x).$
\end{definition}
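A basic worked example (standard, included here only as an illustration): for a power function the Riemann-Liouville integral can be computed in closed form via the Beta function, using the substitution $t=a+s(x-a)$,

```latex
\begin{equation*}
J_{a+}^{\alpha }\left[ (t-a)^{\beta }\right] (x)=\frac{1}{\Gamma (\alpha )}
\dint\limits_{a}^{x}\left( x-t\right) ^{\alpha -1}\left( t-a\right) ^{\beta
}dt=\frac{\Gamma (\beta +1)}{\Gamma (\alpha +\beta +1)}\left( x-a\right)
^{\alpha +\beta },\ \ \beta >-1,
\end{equation*}
```

which for $\alpha =1$ reduces to the ordinary antiderivative $(x-a)^{\beta +1}/(\beta +1)$.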
Because of the wide applicability of Hermite-Hadamard type inequalities and
fractional integrals, many researchers have extended their studies to
Hermite-Hadamard type inequalities involving fractional integrals rather
than only integer-order integrals. Recently, more and more Hermite-Hadamard
inequalities involving fractional integrals have been obtained for different
classes of functions; see \cite{WZZ13,I13b,I13c,I13d,SSYB13,S12,WFZ12}.
The aim of this paper is to establish Hermite--Hadamard-type inequalities for
harmonically convex functions via Riemann--Liouville fractional integrals,
together with some other integral inequalities obtained by using an identity
for fractional integrals. These results have some relationships with \cite{I13}.
\section{Main results}
Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$, the interior of $I$,
throughout this section we will take
\begin{eqnarray*}
&&I_{f}\left( g;\alpha ,a,b\right) \\
&=&\frac{f(a)+f(b)}{2}-\frac{\Gamma (\alpha +1)}{2}\left( \frac{ab}{b-a}
\right) ^{\alpha }\left\{ J_{1/a-}^{\alpha }\left( f\circ g\right)
(1/b)+J_{1/b+}^{\alpha }\left( f\circ g\right) (1/a)\right\}
\end{eqnarray*}
where $a,b\in I$ with $a<b$, $\alpha >0$, $g(x)=1/x$ and $\Gamma $ is the
Euler Gamma function.
Hermite--Hadamard's inequalities for harmonically convex functions can be
represented in fractional integral forms as follows:
\begin{theorem}
\label{2.0}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a function such that $f\in L[a,b]$, where $a,b\in I$ with $a<b$. If $f$
is a harmonically convex function on $[a,b]$, then the following
inequalities for fractional integrals hold
\begin{equation}
f\left( \frac{2ab}{a+b}\right) \leq \frac{\Gamma (\alpha +1)}{2}\left( \frac{
ab}{b-a}\right) ^{\alpha }\left\{ J_{1/a-}^{\alpha }\left( f\circ g\right)
(1/b)+J_{1/b+}^{\alpha }\left( f\circ g\right) (1/a)\right\} \leq \frac{
f(a)+f(b)}{2} \label{2-0}
\end{equation}
with $\alpha >0$.
\end{theorem}
\begin{proof}
Since $f$ is a harmonically convex function on $[a,b]$, we have for all $
x,y\in \lbrack a,b]$ (with $t=1/2$ in the inequality (\ref{1-2}))
\begin{equation*}
f\left( \frac{2xy}{x+y}\right) \leq \frac{f(x)+f(y)}{2}.
\end{equation*}
Choosing $x=\frac{ab}{tb+(1-t)a}$, $y=\frac{ab}{ta+(1-t)b}$, we get
\begin{equation}
f\left( \frac{2ab}{a+b}\right) \leq \frac{f\left( \frac{ab}{tb+(1-t)a}
\right) +f\left( \frac{ab}{ta+(1-t)b}\right) }{2}. \label{2-0a}
\end{equation}
Multiplying both sides of (\ref{2-0a}) by $t^{\alpha -1}$, then integrating
the resulting inequality with respect to $t$ over $[0,1]$, we obtain
\begin{eqnarray*}
f\left( \frac{2ab}{a+b}\right) &\leq &\frac{\alpha }{2}\left\{
\dint\limits_{0}^{1}t^{\alpha -1}f\left( \frac{ab}{tb+(1-t)a}\right)
dt+\dint\limits_{0}^{1}t^{\alpha -1}f\left( \frac{ab}{ta+(1-t)b}\right)
dt\right\} \\
&=&\frac{\alpha }{2}\left( \frac{ab}{b-a}\right) ^{\alpha }\left\{
\dint\limits_{1/b}^{1/a}\left( x-\frac{1}{b}\right) ^{\alpha -1}f\left(
\frac{1}{x}\right) dx+\dint\limits_{1/b}^{1/a}\left( \frac{1}{a}-x\right)
^{\alpha -1}f\left( \frac{1}{x}\right) dx\right\} \\
&=&\frac{\alpha \Gamma (\alpha )}{2}\left( \frac{ab}{b-a}\right) ^{\alpha
}\left\{ J_{1/a-}^{\alpha }\left( f\circ g\right) (1/b)+J_{1/b+}^{\alpha
}\left( f\circ g\right) (1/a)\right\} \\
&=&\frac{\Gamma (\alpha +1)}{2}\left( \frac{ab}{b-a}\right) ^{\alpha
}\left\{ J_{1/a-}^{\alpha }\left( f\circ g\right) (1/b)+J_{1/b+}^{\alpha
}\left( f\circ g\right) (1/a)\right\} ,\ \text{where }g(x)=1/x.
\end{eqnarray*}
and the first inequality is proved.
For the proof of the second inequality in (\ref{2-0}) we first note that if $
f$ is a harmonically convex function, then, for $t\in \left[ 0,1\right] $,
it yields
\begin{equation*}
f\left( \frac{ab}{tb+(1-t)a}\right) \leq tf(a)+(1-t)f(b)
\end{equation*}
and
\begin{equation*}
f\left( \frac{ab}{ta+(1-t)b}\right) \leq tf(b)+(1-t)f(a).
\end{equation*}
By adding these inequalities we have
\begin{equation}
f\left( \frac{ab}{tb+(1-t)a}\right) +f\left( \frac{ab}{ta+(1-t)b}\right)
\leq f(a)+f(b). \label{2-0b}
\end{equation}
Then multiplying both sides of (\ref{2-0b}) by $t^{\alpha -1}$, and
integrating the resulting inequality with respect to $t$ over $\left[ 0,1
\right] $, we obtain
\begin{equation*}
\dint\limits_{0}^{1}f\left( \frac{ab}{tb+(1-t)a}\right) t^{\alpha
-1}dt+\dint\limits_{0}^{1}f\left( \frac{ab}{ta+(1-t)b}\right) t^{\alpha
-1}dt\leq \left[ f(a)+f(b)\right] \dint\limits_{0}^{1}t^{\alpha -1}dt
\end{equation*}
i.e.
\begin{equation*}
\Gamma (\alpha +1)\left( \frac{ab}{b-a}\right) ^{\alpha }\left\{
J_{1/a-}^{\alpha }\left( f\circ g\right) (1/b)+J_{1/b+}^{\alpha }\left(
f\circ g\right) (1/a)\right\} \leq f(a)+f(b).
\end{equation*}
The proof is completed.
\end{proof}
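As a consistency check (our remark), for $\alpha =1$ both fractional integrals above reduce to the same classical integral: with $g(x)=1/x$ and the substitution $x=1/u$,

```latex
\begin{equation*}
J_{1/a-}^{1}\left( f\circ g\right) (1/b)=J_{1/b+}^{1}\left( f\circ g\right)
(1/a)=\dint\limits_{1/b}^{1/a}f\left( \frac{1}{x}\right) dx=\dint
\limits_{a}^{b}\frac{f(u)}{u^{2}}du,
\end{equation*}
```

so that (\ref{2-0}) with $\alpha =1$ reduces exactly to Theorem \ref{1.1}.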
\begin{lemma}
\label{2.1}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I$ with $a<b$. Then the following equality for
fractional integrals holds
\begin{eqnarray}
&&I_{f}\left( g;\alpha ,a,b\right) \label{2-1} \\
&=&\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{\left[ t^{\alpha
}-(1-t)^{\alpha }\right] }{\left( ta+(1-t)b\right) ^{2}}f^{\prime }\left(
\frac{ab}{ta+(1-t)b}\right) dt. \notag
\end{eqnarray}
\end{lemma}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. It suffices to note that
\begin{eqnarray}
I_{f}\left( g;\alpha ,a,b\right) &=&\frac{ab\left( b-a\right) }{2}
\dint\limits_{0}^{1}\frac{\left[ t^{\alpha }-(1-t)^{\alpha }\right] }{
A_{t}^{2}}f^{\prime }\left( \frac{ab}{A_{t}}\right) dt \notag \\
&=&\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{t^{\alpha }}{
A_{t}^{2}}f^{\prime }\left( \frac{ab}{A_{t}}\right) dt-\frac{ab\left(
b-a\right) }{2}\dint\limits_{0}^{1}\frac{(1-t)^{\alpha }}{A_{t}^{2}}
f^{\prime }\left( \frac{ab}{A_{t}}\right) dt \notag \\
&=&I_{1}+I_{2}. \label{2-1a}
\end{eqnarray}
By integrating by parts, we have
\begin{eqnarray}
I_{1} &=&\frac{1}{2}\left[ \left. t^{\alpha }f\left( \frac{ab}{A_{t}}\right)
\right\vert _{0}^{1}-\alpha \dint\limits_{0}^{1}t^{\alpha -1}f\left( \frac{ab
}{A_{t}}\right) dt\right] \notag \\
&=&\frac{1}{2}\left[ f\left( b\right) -\alpha \left( \frac{ab}{b-a}\right)
^{\alpha }\dint\limits_{1/b}^{1/a}\left( \frac{1}{a}-x\right) ^{\alpha
-1}f\left( \frac{1}{x}\right) dx\right] \notag \\
&=&\frac{1}{2}\left[ f\left( b\right) -\Gamma (\alpha +1)\left( \frac{ab}{b-a
}\right) ^{\alpha }J_{1/b+}^{\alpha }\left( f\circ g\right) (1/a)\right]
\label{2-1b}
\end{eqnarray}
and similarly we get,
\begin{eqnarray}
I_{2} &=&-\frac{1}{2}\left[ \left. (1-t)^{\alpha }f\left( \frac{ab}{A_{t}}
\right) \right\vert _{0}^{1}+\alpha \dint\limits_{0}^{1}(1-t)^{\alpha
-1}f\left( \frac{ab}{A_{t}}\right) dt\right] \notag \\
&=&-\frac{1}{2}\left[ -f\left( a\right) +\alpha \left( \frac{ab}{b-a}\right)
^{\alpha }\dint\limits_{1/b}^{1/a}(x-\frac{1}{b})^{\alpha -1}f\left( \frac{1
}{x}\right) dx\right] \notag \\
&=&\frac{1}{2}\left[ f\left( a\right) -\Gamma (\alpha +1)\left( \frac{ab}{b-a
}\right) ^{\alpha }J_{1/a-}^{\alpha }\left( f\circ g\right) (1/b)\right] .
\label{2-1c}
\end{eqnarray}
Using (\ref{2-1b}) and (\ref{2-1c}) in (\ref{2-1a}), we get equality (\ref
{2-1}).
\end{proof}
\begin{remark}
If, in Lemma \ref{2.1}, we let $\alpha =1$, then equality (\ref{2-1}) becomes
equality (\ref{1-3}) of Lemma \ref{1.2}.
\end{remark}
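Indeed, for $\alpha =1$ the kernel becomes $t-(1-t)=2t-1$, and the change of variable $t\rightarrow 1-t$, under which $A_{t}=ta+(1-t)b$ becomes $tb+(1-t)a$, turns the right-hand side of (\ref{2-1}) into that of (\ref{1-3}) (an explicit check, added for clarity):

```latex
\begin{equation*}
\dint\limits_{0}^{1}\frac{2t-1}{\left( ta+(1-t)b\right) ^{2}}f^{\prime
}\left( \frac{ab}{ta+(1-t)b}\right) dt=\dint\limits_{0}^{1}\frac{1-2t}{
\left( tb+(1-t)a\right) ^{2}}f^{\prime }\left( \frac{ab}{tb+(1-t)a}\right)
dt.
\end{equation*}
```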
Using Lemma \ref{2.1}, we can obtain the following fractional integral
inequality.
\begin{theorem}
Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I^{\circ }$ with $a<b$. If $\left\vert f^{\prime
}\right\vert ^{q}$ is harmonically convex on $\left[ a,b\right] $ for some
fixed $q\geq 1$, then the following inequality for fractional integrals
holds
\begin{eqnarray}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \label{2-2} \\
&\leq &\frac{ab\left( b-a\right) }{2}C_{1}^{1-1/q}(\alpha ;a,b)\left(
C_{2}(\alpha ;a,b)\left\vert f^{\prime }(b)\right\vert ^{q}+C_{3}(\alpha
;a,b)\left\vert f^{\prime }(a)\right\vert ^{q}\right) ^{1/q}, \notag
\end{eqnarray}
where
\begin{eqnarray*}
C_{1}(\alpha ;a,b) &=&\frac{b^{-2}}{\alpha +1}\left[ _{2}F_{1}\left(
2,1;\alpha +2;1-\frac{a}{b}\right) +_{2}F_{1}\left( 2,\alpha +1;\alpha +2;1-
\frac{a}{b}\right) \right] , \\
C_{2}(\alpha ;a,b) &=&\frac{b^{-2}}{\alpha +2}\left[ \frac{1}{\alpha +1}
._{2}F_{1}\left( 2,2;\alpha +3;1-\frac{a}{b}\right) +_{2}F_{1}\left( 2,\alpha +2;\alpha +3;1-
2,\alpha +2;\alpha +3;1-\frac{a}{b}\right) \right] , \\
C_{3}(\alpha ;a,b) &=&\frac{b^{-2}}{\alpha +2}\left[ _{2}F_{1}\left(
2,1;\alpha +3;1-\frac{a}{b}\right) +\frac{1}{\alpha +1}._{2}F_{1}\left(
2,\alpha +1;\alpha +3;1-\frac{a}{b}\right) \right] .
\end{eqnarray*}
\end{theorem}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. From Lemma \ref{2.1}, using the property of the
modulus, the power mean inequality and the harmonic convexity of $\left\vert
f^{\prime }\right\vert ^{q}$, we find
\begin{eqnarray*}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \\
&\leq &\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime
}\left( \frac{ab}{A_{t}}\right) \right\vert dt \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\frac{
\left\vert (1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}dt\right)
^{1-1/q}\left( \dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha
}-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime }\left( \frac{ab}{
A_{t}}\right) \right\vert ^{q}dt\right) ^{1/q}
\end{eqnarray*}
\begin{equation*}
\leq \frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\frac{\left[ (
1-t)^{\alpha }+t^{\alpha }\right] }{A_{t}^{2}}dt\right) ^{1-1/q}\left(
\dint\limits_{0}^{1}\frac{\left[ (1-t)^{\alpha }+t^{\alpha }\right] }{
A_{t}^{2}}\left[ t\left\vert f^{\prime }(b)\right\vert ^{q}+(1-t)\left\vert
f^{\prime }(a)\right\vert ^{q}\right] dt\right) ^{1/q}
\end{equation*}
\begin{equation}
\leq \frac{ab\left( b-a\right) }{2}C_{1}^{1-1/q}(\alpha ;a,b)\left(
C_{2}(\alpha ;a,b)\left\vert f^{\prime }(b)\right\vert ^{q}+C_{3}(\alpha
;a,b)\left\vert f^{\prime }(a)\right\vert ^{q}\right) ^{1/q}. \label{2-2a}
\end{equation}
Calculating $C_{1}(\alpha ;a,b)$, $C_{2}(\alpha ;a,b)$ and $C_{3}(\alpha
;a,b)$, we have
\begin{eqnarray}
C_{1}(\alpha ;a,b) &=&\dint\limits_{0}^{1}\frac{\left[ (1-t)^{\alpha
}+t^{\alpha }\right] }{A_{t}^{2}}dt \notag \\
&=&\frac{b^{-2}}{\alpha +1}\left[ _{2}F_{1}\left( 2,1;\alpha +2;1-\frac{a}{b}
\right) +_{2}F_{1}\left( 2,\alpha +1;\alpha +2;1-\frac{a}{b}\right) \right] ,
\label{2-2b}
\end{eqnarray}
\begin{eqnarray}
C_{2}(\alpha ;a,b) &=&\dint\limits_{0}^{1}\frac{\left[ (1-t)^{\alpha
}+t^{\alpha }\right] }{A_{t}^{2}}tdt \notag \\
&=&\frac{b^{-2}}{\alpha +2}\left[ \frac{1}{\alpha +1}._{2}F_{1}\left(
2,2;\alpha +3;1-\frac{a}{b}\right) +_{2}F_{1}\left( 2,\alpha +2;\alpha +3;1-
\frac{a}{b}\right) \right] , \label{2-2c}
\end{eqnarray}
\begin{eqnarray}
C_{3}(\alpha ;a,b) &=&\dint\limits_{0}^{1}\frac{\left[ (1-t)^{\alpha
}+t^{\alpha }\right] }{A_{t}^{2}}(1-t)dt \notag \\
&=&\frac{b^{-2}}{\alpha +2}\left[ _{2}F_{1}\left( 2,1;\alpha +3;1-\frac{a}{b}
\right) +\frac{1}{\alpha +1}._{2}F_{1}\left( 2,\alpha +1;\alpha +3;1-\frac{a
}{b}\right) \right] , \label{2-2d}
\end{eqnarray}
Thus, if we use (\ref{2-2b}), (\ref{2-2c}) and (\ref{2-2d}) in (\ref{2-2a}),
we obtain the inequality of (\ref{2-2}). This completes the proof.
\end{proof}
When $0<\alpha \leq 1$, using Lemma \ref{1.5} and Lemma \ref{2.1} we shall
give another result for harmonically convex functions as follows.
\begin{theorem}
\label{2.3}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I^{\circ }$ with $a<b$. If $\left\vert f^{\prime
}\right\vert ^{q}$ is harmonically convex on $\left[ a,b\right] $ for some
fixed $q\geq 1$, then the following inequality for fractional integrals
holds
\begin{eqnarray}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \label{2-3} \\
&\leq &\frac{ab\left( b-a\right) }{2}C_{1}^{1-1/q}(\alpha ;a,b)\left(
C_{2}(\alpha ;a,b)\left\vert f^{\prime }(b)\right\vert ^{q}+C_{3}(\alpha
;a,b)\left\vert f^{\prime }(a)\right\vert ^{q}\right) ^{1/q}, \notag
\end{eqnarray}
where
\begin{eqnarray*}
&&C_{1}(\alpha ;a,b) \\
&=&\frac{b^{-2}}{\alpha +1}\left[ _{2}F_{1}\left( 2,\alpha +1;\alpha +2;1-
\frac{a}{b}\right) -_{2}F_{1}\left( 2,1;\alpha +2;1-\frac{a}{b}\right)
\right. \\
&&\left. +_{2}F_{1}\left( 2,1;\alpha +2;\frac{1}{2}\left( 1-\frac{a}{b}
\right) \right) \right] ,
\end{eqnarray*}
\begin{eqnarray*}
&&C_{2}(\alpha ;a,b) \\
&=&\frac{b^{-2}}{\alpha +2}\left[ _{2}F_{1}\left( 2,\alpha +2;\alpha +3;1-
\frac{a}{b}\right) -\frac{1}{\alpha +1}._{2}F_{1}\left( 2,2;\alpha +3;1-
\frac{a}{b}\right) \right. \\
&&\left. +\frac{1}{2\left( \alpha +1\right) }._{2}F_{1}\left( 2,2;\alpha +3;
\frac{1}{2}\left( 1-\frac{a}{b}\right) \right) \right] ,
\end{eqnarray*}
\begin{eqnarray*}
&&C_{3}(\alpha ;a,b) \\
&=&\frac{b^{-2}}{\alpha +2}\left[ \frac{1}{\alpha +1}._{2}F_{1}\left(
2,\alpha +1;\alpha +3;1-\frac{a}{b}\right) -_{2}F_{1}\left( 2,1;\alpha +3;1-
\frac{a}{b}\right) \right. \\
&&\left. +_{2}F_{1}\left( 2,1;\alpha +3;\frac{1}{2}\left( 1-\frac{a}{b}
\right) \right) \right]
\end{eqnarray*}
and $0<\alpha \leq 1.$
\end{theorem}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. From Lemma \ref{2.1}, using the property of the
modulus, the power mean inequality and the harmonic convexity of $
\left\vert f^{\prime }\right\vert ^{q}$, we find
\begin{eqnarray*}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \\
&\leq &\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime
}\left( \frac{ab}{A_{t}}\right) \right\vert dt \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\frac{
\left\vert (1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}dt\right)
^{1-1/q}\left( \dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha
}-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime }\left( \frac{ab}{
A_{t}}\right) \right\vert ^{q}dt\right) ^{1/q} \\
&\leq &\frac{ab\left( b-a\right) }{2}K_{1}^{1-1/q}\left( \dint\limits_{0}^{1}
\frac{\left\vert (1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}\left[
t\left\vert f^{\prime }(b)\right\vert ^{q}+(1-t)\left\vert f^{\prime
}(a)\right\vert ^{q}\right] dt\right) ^{1/q}
\end{eqnarray*}
\begin{equation}
\leq \frac{ab\left( b-a\right) }{2}K_{1}^{1-1/q}\left( K_{2}\left\vert
f^{\prime }(b)\right\vert ^{q}+K_{3}\left\vert f^{\prime }(a)\right\vert
^{q}\right) ^{1/q}, \label{2-3a}
\end{equation}
where
\begin{eqnarray*}
K_{1} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}dt, \\
K_{2} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}tdt, \\
K_{3} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}(1-t)dt.
\end{eqnarray*}
Calculating $K_{1}$, $K_{2}$ and $K_{3}$, by Lemma \ref{1.5}, we have
\begin{eqnarray*}
K_{1} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}dt \\
&=&\dint\limits_{0}^{1/2}\frac{(1-t)^{\alpha }-t^{\alpha }}{A_{t}^{2}}
dt+\dint\limits_{1/2}^{1}\frac{t^{\alpha }-(1-t)^{\alpha }}{A_{t}^{2}}dt \\
&=&\dint\limits_{0}^{1}\frac{t^{\alpha }-(1-t)^{\alpha }}{A_{t}^{2}}
dt+2\dint\limits_{0}^{1/2}\frac{(1-t)^{\alpha }-t^{\alpha }}{A_{t}^{2}}dt
\end{eqnarray*}
\begin{eqnarray*}
&\leq &\dint\limits_{0}^{1}t^{\alpha
}A_{t}^{-2}dt-\dint\limits_{0}^{1}(1-t)^{\alpha
}A_{t}^{-2}dt+2\dint\limits_{0}^{1/2}(1-2t)^{\alpha }A_{t}^{-2}dt \\
&=&\dint\limits_{0}^{1}t^{\alpha
}A_{t}^{-2}dt-\dint\limits_{0}^{1}(1-t)^{\alpha
}A_{t}^{-2}dt+\dint\limits_{0}^{1}(1-u)^{\alpha }b^{-2}\left( 1-u\frac{1}{2}
(1-\frac{a}{b})\right) ^{-2}du
\end{eqnarray*}
\begin{eqnarray}
&=&\frac{b^{-2}}{\alpha +1}\left[ _{2}F_{1}\left( 2,\alpha +1;\alpha +2;1-
\frac{a}{b}\right) -_{2}F_{1}\left( 2,1;\alpha +2;1-\frac{a}{b}\right)
\right. \notag \\
&&\left. +_{2}F_{1}\left( 2,1;\alpha +2;\frac{1}{2}\left( 1-\frac{a}{b}
\right) \right) \right] \notag \\
&=&C_{1}(\alpha ;a,b) \label{2-3b}
\end{eqnarray}
and similarly we get
\begin{eqnarray}
K_{2} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}tdt \notag \\
&\leq &\dint\limits_{0}^{1}t^{\alpha
+1}A_{t}^{-2}dt-\dint\limits_{0}^{1}(1-t)^{\alpha
}tA_{t}^{-2}dt+2\dint\limits_{0}^{1/2}(1-2t)^{\alpha }tA_{t}^{-2}dt \notag
\\
&=&\frac{b^{-2}}{\alpha +2}\left[ _{2}F_{1}\left( 2,\alpha +2;\alpha +3;1-
\frac{a}{b}\right) -\frac{1}{\alpha +1}._{2}F_{1}\left( 2,2;\alpha +3;1-
\frac{a}{b}\right) \right. \notag \\
&&\left. +\frac{1}{2\left( \alpha +1\right) }._{2}F_{1}\left( 2,2;\alpha +3;
\frac{1}{2}\left( 1-\frac{a}{b}\right) \right) \right] \notag \\
&=&C_{2}(\alpha ;a,b) \label{2-3c}
\end{eqnarray}
\begin{eqnarray*}
K_{3} &=&\dint\limits_{0}^{1}\frac{\left\vert (1-t)^{\alpha }-t^{\alpha
}\right\vert }{A_{t}^{2}}(1-t)dt \\
&\leq &\dint\limits_{0}^{1}t^{\alpha
}(1-t)A_{t}^{-2}dt-\dint\limits_{0}^{1}(1-t)^{\alpha
+1}A_{t}^{-2}dt+2\dint\limits_{0}^{1/2}(1-2t)^{\alpha }(1-t)A_{t}^{-2}dt
\end{eqnarray*}
\begin{eqnarray}
&=&\frac{b^{-2}}{\alpha +2}\left[ \frac{1}{\alpha +1}._{2}F_{1}\left(
2,\alpha +1;\alpha +3;1-\frac{a}{b}\right) \right. \notag \\
&&\left. -_{2}F_{1}\left( 2,1;\alpha +3;1-\frac{a}{b}\right)
+_{2}F_{1}\left( 2,1;\alpha +3;\frac{1}{2}\left( 1-\frac{a}{b}\right)
\right) \right] \notag \\
&=&C_{3}(\alpha ;a,b). \label{2-3d}
\end{eqnarray}
Thus, if we use (\ref{2-3b}), (\ref{2-3c}) and (\ref{2-3d}) in (\ref{2-3a}),
we obtain the inequality of (\ref{2-3}). This completes the proof.
\end{proof}
\begin{remark}
If we take $\alpha =1$ in Theorem \ref{2.3}, then inequality (\ref{2-3})
becomes inequality (\ref{1-4a}) of Theorem \ref{1.3}.
\end{remark}
\begin{theorem}
Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I^{\circ }$ with $a<b$. If $\left\vert f^{\prime
}\right\vert ^{q}$ is harmonically convex on $\left[ a,b\right] $ for some
fixed $q>1$, then the following inequality for fractional integrals holds
\begin{eqnarray}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \label{2-4} \\
&\leq &\frac{a\left( b-a\right) }{2b}\left( \frac{1}{\alpha p+1}\right)
^{1/p}\left( \frac{\left\vert f^{\prime }(b)\right\vert ^{q}+\left\vert
f^{\prime }(a)\right\vert ^{q}}{2}\right) ^{1/q} \notag \\
&&\times \left[ _{2}F_{1}^{1/p}\left( 2p,1;\alpha p+2;1-\frac{a}{b}\right)
+_{2}F_{1}^{1/p}\left( 2p,\alpha p+1;\alpha p+2;1-\frac{a}{b}\right) \right]
, \notag
\end{eqnarray}
where $1/p+1/q=1.$
\end{theorem}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. From Lemma \ref{2.1}, using the H\"{o}lder inequality
and the harmonic convexity of $\left\vert f^{\prime }\right\vert ^{q}$,
we find
\begin{eqnarray*}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \\
&\leq &\frac{ab\left( b-a\right) }{2}\left[ \dint\limits_{0}^{1}\frac{(1-t)^{\alpha
}}{A_{t}^{2}}\left\vert f^{\prime }\left( \frac{ab}{A_{t}}\right)
\right\vert dt+\dint\limits_{0}^{1}\frac{t^{\alpha }}{A_{t}^{2}}\left\vert
f^{\prime }\left( \frac{ab}{A_{t}}\right) \right\vert dt\right] \\
&\leq &\frac{ab\left( b-a\right) }{2}\left\{ \left( \dint\limits_{0}^{1}
\frac{(1-t)^{\alpha p}}{A_{t}^{2p}}dt\right) ^{1/p}\left(
\dint\limits_{0}^{1}\left\vert f^{\prime }\left( \frac{ab}{A_{t}}\right)
\right\vert ^{q}dt\right) ^{1/q}\right.
\end{eqnarray*}
\begin{eqnarray}
&&\left. +\left( \dint\limits_{0}^{1}\frac{t^{\alpha p}}{A_{t}^{2p}}
dt\right) ^{1/p}\left( \dint\limits_{0}^{1}\left\vert f^{\prime }\left(
\frac{ab}{A_{t}}\right) \right\vert ^{q}dt\right) ^{1/q}\right\} \notag \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( K_{4}^{1/p}+K_{5}^{1/p}\right)
\left( \dint\limits_{0}^{1}\left[ t\left\vert f^{\prime }(b)\right\vert
^{q}+(1-t)\left\vert f^{\prime }(a)\right\vert ^{q}\right] dt\right) ^{1/q}
\notag \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( K_{4}^{1/p}+K_{5}^{1/p}\right)
\left( \frac{\left\vert f^{\prime }(b)\right\vert ^{q}+\left\vert f^{\prime
}(a)\right\vert ^{q}}{2}\right) ^{1/q}. \label{2-4a}
\end{eqnarray}
Calculating $K_{4}$ and $K_{5}$, we have
\begin{eqnarray}
K_{4} &=&\dint\limits_{0}^{1}\frac{(1-t)^{\alpha p}}{A_{t}^{2p}}dt \notag \\
&=&\frac{b^{-2p}}{\alpha p+1}._{2}F_{1}\left( 2p,1;\alpha p+2;1-\frac{a}{b}
\right) , \label{2-4b}
\end{eqnarray}
\begin{eqnarray}
K_{5} &=&\dint\limits_{0}^{1}\frac{t^{\alpha p}}{A_{t}^{2p}}dt \notag \\
&=&\frac{b^{-2p}}{\alpha p+1}._{2}F_{1}\left( 2p,\alpha p+1;\alpha p+2;1-
\frac{a}{b}\right) \label{2-4c}
\end{eqnarray}
Thus, if we use (\ref{2-4b}) and (\ref{2-4c}) in (\ref{2-4a}), we obtain the
inequality of (\ref{2-4}). This completes the proof.
\end{proof}
\begin{theorem}
Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I^{\circ }$ with $a<b$. If $\left\vert f^{\prime
}\right\vert ^{q}$ is harmonically convex on $\left[ a,b\right] $ for some
fixed $q>1$, then the following inequality for fractional integrals holds
\begin{eqnarray}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \label{2-5} \\
&\leq &\frac{b-a}{2\left( ab\right) ^{1-1/p}}L_{2p-2}^{2-2/p}(a,b)\left(
\frac{1}{\alpha q+1}\right) ^{1/q}\left( \frac{\left\vert f^{\prime
}(b)\right\vert ^{q}+\left\vert f^{\prime }(a)\right\vert ^{q}}{2}\right)
^{1/q}, \notag
\end{eqnarray}
where $1/p+1/q=1$ and $L_{2p-2}(a,b)=\left( \frac{b^{2p-1}-a^{2p-1}}{
(2p-1)(b-a)}\right) ^{1/(2p-2)}$ is the $2p-2$-logarithmic mean.
\end{theorem}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. From Lemma \ref{2.1} and Lemma \ref{1.5}, using the
H\"{o}lder inequality and the harmonic convexity of $\left\vert f^{\prime
}\right\vert ^{q}$, we find
\begin{eqnarray*}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \\
&\leq &\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime
}\left( \frac{ab}{A_{t}}\right) \right\vert dt \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\frac{1}{
A_{t}^{2p}}dt\right) ^{1/p}\left( \dint\limits_{0}^{1}\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert ^{q}\left\vert f^{\prime }\left(
\frac{ab}{A_{t}}\right) \right\vert ^{q}dt\right) ^{1/q} \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\frac{1}{
A_{t}^{2p}}dt\right) ^{1/p}\left( \dint\limits_{0}^{1}\left\vert
1-2t\right\vert ^{\alpha q}\left[ t\left\vert f^{\prime }(b)\right\vert
^{q}+(1-t)\left\vert f^{\prime }(a)\right\vert ^{q}\right] dt\right) ^{1/q}
\end{eqnarray*}
\begin{equation}
\leq \frac{ab\left( b-a\right) }{2}K_{6}^{1/p}\left( K_{7}\left\vert
f^{\prime }(b)\right\vert ^{q}+K_{8}\left\vert f^{\prime }(a)\right\vert
^{q}\right) ^{1/q}, \label{2-5a}
\end{equation}
where
\begin{eqnarray}
K_{6} &=&\dint\limits_{0}^{1}\frac{1}{A_{t}^{2p}}dt=b^{-2p}\dint
\limits_{0}^{1}\left( 1-t\left( 1-\frac{a}{b}\right) \right) ^{-2p}dt \notag
\\
&=&b^{-2p}._{2}F_{1}\left( 2p,1;2;1-\frac{a}{b}\right) =\frac{
L_{2p-2}^{2p-2}(a,b)}{\left( ab\right) ^{2p-1}}, \label{2-5b}
\end{eqnarray}
\begin{eqnarray}
K_{7} &=&\dint\limits_{0}^{1}\left\vert 1-2t\right\vert ^{\alpha q}tdt
\notag \\
&=&\dint\limits_{0}^{1/2}\left( 1-2t\right) ^{\alpha
q}tdt+\dint\limits_{1/2}^{1}\left( 2t-1\right) ^{\alpha q}tdt \notag \\
&=&\frac{1}{2\left( \alpha q+1\right) }, \label{2-5c}
\end{eqnarray}
and
\begin{eqnarray}
K_{8} &=&\dint\limits_{0}^{1}\left\vert 1-2t\right\vert ^{\alpha q}(1-t)dt
\notag \\
&=&\frac{1}{2\left( \alpha q+1\right) }. \label{2-5d}
\end{eqnarray}
Thus, if we use (\ref{2-5b}), (\ref{2-5c}) and (\ref{2-5d}) in (\ref{2-5a}),
we obtain the inequality of (\ref{2-5}). This completes the proof.
\end{proof}
\begin{theorem}
\label{2.6}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow
\mathbb{R}
$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in
L[a,b]$, where $a,b\in I^{\circ }$ with $a<b$. If $\left\vert f^{\prime
}\right\vert ^{q}$ is harmonically convex on $\left[ a,b\right] $ for some
fixed $q>1$, then the following inequality for fractional integrals holds
\begin{eqnarray}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \leq \frac{a\left(
b-a\right) }{2b}\left( \frac{1}{\alpha p+1}\right) ^{1/p} \label{2-6} \\
&&\times \left( \frac{_{2}F_{1}\left( 2q,2;3;1-\frac{a}{b}\right) \left\vert
f^{\prime }(b)\right\vert ^{q}+_{2}F_{1}\left( 2q,1;3;1-\frac{a}{b}\right)
\left\vert f^{\prime }(a)\right\vert ^{q}}{2}\right) ^{1/q}, \notag
\end{eqnarray}
where $1/p+1/q=1$.
\end{theorem}
\begin{proof}
Let $A_{t}=ta+(1-t)b$. From Lemma \ref{2.1} and Lemma \ref{1.5}, using the
H\"{o}lder inequality and the harmonic convexity of $\left\vert f^{\prime
}\right\vert ^{q}$, we find
\begin{eqnarray*}
&&\left\vert I_{f}\left( g;\alpha ,a,b\right) \right\vert \\
&\leq &\frac{ab\left( b-a\right) }{2}\dint\limits_{0}^{1}\frac{\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert }{A_{t}^{2}}\left\vert f^{\prime
}\left( \frac{ab}{A_{t}}\right) \right\vert dt \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\left\vert
(1-t)^{\alpha }-t^{\alpha }\right\vert ^{p}dt\right) ^{1/p}\left(
\dint\limits_{0}^{1}\frac{1}{A_{t}^{2q}}\left\vert f^{\prime }\left( \frac{ab
}{A_{t}}\right) \right\vert ^{q}dt\right) ^{1/q} \\
&\leq &\frac{ab\left( b-a\right) }{2}\left( \dint\limits_{0}^{1}\left\vert
1-2t\right\vert ^{\alpha p}dt\right) ^{1/p}\left( \dint\limits_{0}^{1}\frac{1
}{A_{t}^{2q}}\left[ t\left\vert f^{\prime }(b)\right\vert
^{q}+(1-t)\left\vert f^{\prime }(a)\right\vert ^{q}\right] dt\right) ^{1/q}
\end{eqnarray*}
\begin{equation}
\leq \frac{ab\left( b-a\right) }{2}K_{9}^{1/p}\left( K_{10}\left\vert
f^{\prime }(b)\right\vert ^{q}+K_{11}\left\vert f^{\prime }(a)\right\vert
^{q}\right) ^{1/q}, \label{2-6a}
\end{equation}
where
\begin{equation}
K_{9}=\dint\limits_{0}^{1}\left\vert 1-2t\right\vert ^{\alpha p}dt=\frac{1}{
\alpha p+1} \label{2-6b}
\end{equation}
\begin{eqnarray}
K_{10}
&=&\dint\limits_{0}^{1}tA_{t}^{-2q}dt=b^{-2q}\dint\limits_{0}^{1}t\left(
1-t\left( 1-\frac{a}{b}\right) \right) ^{-2q}dt \notag \\
&=&\frac{1}{2b^{2q}}._{2}F_{1}\left( 2q,2;3;1-\frac{a}{b}\right)
\label{2-6c}
\end{eqnarray}
and
\begin{equation}
K_{11}=\dint\limits_{0}^{1}(1-t)A_{t}^{-2q}dt=\frac{1}{2b^{2q}}
._{2}F_{1}\left( 2q,1;3;1-\frac{a}{b}\right) \label{2-6d}
\end{equation}
Thus, if we use (\ref{2-6b}), (\ref{2-6c}) and (\ref{2-6d}) in (\ref{2-6a}),
we obtain the inequality of (\ref{2-6}). This completes the proof.
\end{proof}
\begin{remark}
If we take $\alpha =1$ in Theorem \ref{2.6}, then inequality (\ref{2-6})
becomes inequality (\ref{1-4}) of Theorem \ref{1.4}.
\end{remark}
\section{Introduction}
Our understanding of physics has consistently advanced thanks to the study of exactly solvable models -- from the Kepler problem \cite{babelon2003introduction}, to the statistical mechanics of lattice systems \cite{baxter2016exactly}, to the latest advances in interacting quantum field theories (QFTs). A powerful illustration of this approach is given by integrable QFTs (IQFTs) in two spacetime dimensions; see, e.g., refs. \cite{Dorey:1996gd, Bombardelli:2016rwb} for reviews. These systems possess infinitely many independent, mutually commuting conserved quantities \cite{Bazhanov:1994ft,Bazhanov:1996aq,Bazhanov:1996dr,Negro:2016yuu}. Their existence constrains the dynamics to the point of allowing the efficient computation of a wealth of observables -- something very remarkable for an interacting QFT. Another interesting development of modern physics has been the understanding of the phenomena of chaos and thermalization \cite{maldacena2016remarks,maldacena2016bound,gross2021chaotic,deutsch1991quantum,Motamarri:2021zwf}. These last two terms are usually assumed to be interconnected and the corresponding concepts are often conflated, implying that thermalization necessarily entails chaos and that chaos inevitably leads to thermalization. As a consequence, it is commonly believed that integrable models are incompatible with chaos, since the large number of conservation laws prevents the system from thermalizing. In this letter, we will present evidence of chaotic behavior in integrable models, calling into question their supposed complete incompatibility and arguing that the relationship between the concepts of chaos, thermalization and integrability deserves to be studied more in depth.
Integrable systems come in various forms, the simplest of these being the exactly solvable models of classical mechanics, whose phase space is finite-dimensional \cite{babelon2003introduction}. Due to this last property one can prove that these systems are indeed incompatible with the concept of chaos. However this proof hinges on the fact that the dimension of the phase space is finite, and therefore cannot be applied to integrable theories on general grounds. An intriguing question is then whether such a statement applies to integrable systems with an infinite-dimensional phase space. In this note we will present evidence supporting a negative answer to this question. Before diving right in, we wish to clarify the notions of \emph{integrability} and \emph{chaos} we are going to employ, in order for the significance of our findings and statements to be clear. We will use the following definitions:
\begin{enumerate}
\item {\it Integrability}. We refer to a system as \emph{integrable} if there exists a method that allows one to solve the theory -- \emph{i.e.} find all physical observables -- with a finite number of quadratures \cite{babelon2003introduction} or algebraic manipulations.
\item {\it Chaos}. We will say that a system is \emph{chaotic} -- equivalently, that it exhibits \emph{deterministic chaos} -- if, for any initial state, one can find a small deformation that drives the system away under time evolution, at least in the weak sense that the deviation grows and is unbounded \cite{guckenheimer2013nonlinear}.
\end{enumerate}
Let us elaborate on these definitions. The above notion of chaos implies that if we know the initial state of our system only with some finite precision, then the precision with which we can predict the state of the system will deteriorate as we let the latter evolve in time. Stated differently, any error in the determination of the initial state will become arbitrarily large after the system has evolved for a sufficient amount of time. This feature is known as the \emph{butterfly effect}. It is possible to define a similar concept of chaos also for maps between metric spaces. Later we are going to encounter instances of such maps, so let us complement the above definition with the following one
\begin{itemize}
\item[2'.] \emph{Chaotic maps}. We will say that a map $f$ between two metric spaces $(d_1,M_1)$ and $(d_2,M_2)$ is \emph{chaotic} -- equivalently, that it exhibits the \emph{butterfly effect} -- if there exist nearby points in $M_1$ that are sent to distant points in $M_2$. More rigorously: $$\forall \epsilon>0\;,\; M>0\;,\quad \exists x,y \in M_1 \;:\; d_1(x,y) <\epsilon\;,\; d_2\Big(f(x),f(y)\Big)>M\;.$$
%
\end{itemize}
Clearly, here there is a strong dependence on the definition of \emph{distance} employed. For example, a map that induces the metric on the target space from that of the source will not exhibit the butterfly effect. However, the vast majority of physical problems comes equipped with a natural metric, making the concept of chaotic map presented above a relevant one. Note that for smooth, finite-dimensional spaces $M_1,M_2$, a smooth map $f$ cannot be chaotic. For infinite-dimensional spaces, on the other hand, chaotic maps appear to be generic.
For what concerns integrability, its definition is a notoriously slippery one \cite{zakharov1991integrability, hitchin2013integrable}, which is often used interchangeably with the concept of \emph{exact solvability}, though the two are not necessarily synonymous \cite{Torrielli:2016ufi}. We will not address this matter, but will content ourselves with the above working definition. This is purposefully broad to include the exactly solvable models, the standard Liouville integrable systems and integrable Partial Differential Equations (PDEs) that will be considered in the following. Importantly, the above definition of integrability does not exclude the possibility of chaos. For instance, the free quantum field theory in Rindler space, though integrable, exhibits some thermal properties and has a non-zero Lyapunov exponent \cite{zheng2003observer}. From these definitions we see that integrability and chaos do not necessarily exclude each other.
The paper is organized as follows. In the next Section \ref{sec:din_sys} we begin investigating how chaotic features can appear in integrable systems. We first discuss theories with finite-dimensional phase spaces. We will see that the differentiability of the latter is a pivotal criterion for the absence of chaotic behavior. Next we consider two examples of exactly solvable discrete-time dynamical systems that exhibit deterministic chaos. Their simplicity will allow us to pinpoint the source of such chaotic behavior. We will see that this arises from a map between the position variables and a set of auxiliary ones in which the dynamics of the system becomes, in a sense, linear. Section \ref{sec:IFT} is devoted to integrable systems with an infinite-dimensional phase space: integrable field theories or integrable PDEs. We will consider two famous examples: the KdV equation and the sine-Gordon model. Here too we will see that there is chaotic behavior associated to a map -- the \emph{inverse scattering transform} \cite{zakharov1980inverse, faddeev1987hamiltonian} -- between two sets of variables, in this case the field variables on one side and the conserved charges on the other. We relate the chaotic features of this map to the fact that the propagator of free particles in $1$ dimension is IR-divergent. This causes the inverse scattering map to be very sensitive to small perturbations. We conclude and present a number of open questions in Section \ref{sec:conc_out}. We have tried to keep the content of this letter accessible to non-specialist readers and dedicate the appendices to the more technical aspects. In Appendix \ref{app:pert_scatt} we briefly present a general result concerning the instability of the inverse scattering transform under small variations of the initial conditions. Appendix \ref{app:mon_mat} is devoted to a lightning review of the procedure to compute the conserved charges in classical integrable field theories.
\section{Integrability and Chaos in Dynamical Systems}
\label{sec:din_sys}
\subsection{Finite Dimensional Integrable Systems}\label{subsec:FDIS}
We start from the case of finite-dimensional integrable systems. Specifically, let us first consider models whose phase space is a differentiable manifold. Then we can conclude that their dynamics cannot be chaotic. Indeed, let us fix the dimension of the phase space to be $2n$. The system being integrable, it is possible to find at least $n$ algebraically independent conserved charges $I_j$ in involution: $\left\{I_i, I_j \right\} = 0$. Consequently, there exists a canonical map to action-angle variables in terms of which the dynamics takes place on an $n$-torus \cite{babelon2003integrable}. Since this map is finite-dimensional and, at generic points, smooth, it cannot generate arbitrarily large deviations from small ones. Therefore the map is not chaotic. We conclude that finite-dimensional integrable systems whose phase space is a differentiable manifold cannot exhibit deterministic chaos.
This argument relies importantly on two points: the finite dimensionality and the differentiability of the phase space manifold. If we relax the latter, it becomes easy to find a counter-example to the incompatibility of chaos and integrability.
In fact, let us consider the \emph{pin-ball problem}: a two dimensional system of a point particle scattering rigidly against three disks (see fig. \ref{fig:pinball}).
\begin{figure}
\centering
\includegraphics[scale=.4]{pinball.pdf}
\caption{Two trajectories of a pin-ball scattering, issuing with the same velocity $\vec{v}$ from nearby initial positions $x=(0,-2)$ (the blue, full line) and $x'=(0,-2-5\cdot 10^{-4})$ (the purple, dashed line). The trajectories are indistinguishable at first, begin diverging after a few scatterings, and end up in completely different regions at late times.}
\label{fig:pinball}
\end{figure}
This problem obeys our definition of integrability: from any initial position and velocity it is possible to determine the position at any later time through a finite number of algebraic manipulations. Indeed, in-between the collisions the particle moves freely and at the collision points we know exactly what happens: the component of the velocity normal to the disk at the collision point is reversed. However, it is well known that this system is chaotic \cite{smilansky1991classical}. A small change of the initial position and velocity vectors can drastically change the trajectory at later times. This simple system has even been observed to exhibit some kind of fractal behavior \cite{eckhardt1987fractal}. The reason for the appearance of chaotic behavior, even though the system is finite-dimensional, is to be found in the non-differentiability of the phase space manifold.
\subsection{Deterministic Chaos in Discrete-Time Dynamical Systems}
The other way in which the argument we presented above can fail is if the phase space of the system is infinite-dimensional. In this case a smooth map can indeed send small variations to large ones. Let us then investigate infinite-dimensional systems in more detail. As a warm-up, we first consider some toy examples, so that we can appreciate in a simple setting how chaos manifests itself. The first example is the Bernoulli dynamical system \cite{guckenheimer2013nonlinear}, defined as follows
\begin{equation}
R\;:\;\begin{array}{l c l}
\left[0,1\right) & \longrightarrow & \left[0,1\right) \\
x & \longmapsto & R(x) = (2x) \mod 1
\end{array}\;.
\end{equation}
This system is integrable. Indeed any real number $x\in \left[0,1\right)$ can be mapped into an infinite binary sequence $\lbrace\sigma_i\rbrace_{i=1}^{\infty}$:
\begin{equation}
x = \sum^\infty_{i=1} \frac{\sigma_i}{2^i}\;.
\end{equation}
The Bernoulli map $R(x)$ acts on the binary sequence by shifting it to the left, $\sigma_i \to \sigma_{i+1}$, and discarding $\sigma_1$. So from a given initial condition $x_0$ we can determine the future state $R^n(x_0)$ for any finite $n$ by performing a finite number of manipulations on the sequence $\lbrace\sigma_i\rbrace_{i=1}^{\infty}$. Nonetheless, the system clearly presents a chaotic behavior. Indeed, a rational initial condition $x_0$ yields an eventually periodic trajectory\footnote{If $x_0$ is an inverse power of $2$, the trajectory actually collapses to the fixed point $x=0$ after a finite number of iterations.}, i.e. $R^{m+n}(x_0)=R^{m}(x_0)$ for some non-negative integer $m$ and positive integer $n$. However, an irrational $x_0$ generates an aperiodic trajectory. Now, since the sets of rational and irrational numbers are both dense, we conclude that a slight change of the initial conditions can drastically change the behavior of the system. Notice that the binary representations of rational and irrational numbers differ vastly: rational numbers possess a finite or recurring binary representation, while the binary representation of an irrational number is aperiodic. In other words, slightly altering a number $x$ can greatly affect its binary representation, and we can say that the map $x\to \left\lbrace\sigma_i\right\rbrace$ is chaotic. This is the basic reason underlying the chaotic properties of the Bernoulli map, and we will see that the same principle is at play in general integrable systems.
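To make the butterfly effect concrete, here is a short numerical sketch (the initial point and the perturbation size $2^{-40}$ are arbitrary choices) that iterates the map exactly on rational numbers using Python's \texttt{fractions} module; as long as the two orbits stay on the same branch of the map, their separation doubles at every step:

```python
from fractions import Fraction

def bernoulli(x):
    """One step of the doubling map R(x) = 2x mod 1, exact on rationals."""
    y = 2 * x
    return y if y < 1 else y - 1

x = Fraction(1, 3)              # periodic orbit: 1/3 -> 2/3 -> 1/3 -> ...
y = x + Fraction(1, 2**40)      # an exponentially close neighbour

dist0 = abs(x - y)              # initial separation: 2^-40 ~ 9e-13
for _ in range(38):
    x, y = bernoulli(x), bernoulli(y)
dist = abs(x - y)               # separation after 38 iterations

print(float(dist0), float(dist))
```

The two orbits agree to about twelve digits initially, yet after $38$ iterations their separation is $2^{38}\cdot 2^{-40} = 1/4$: an $O(1)$ distance on the interval.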
In order to get closer to the systems that possess an infinite number of conserved charges, we can consider the Baker's map, defined in the following way
\begin{equation}
B\;:\;\begin{array}{l c l}
\left[0,1\right)^2 & \longrightarrow & \left[0,1\right)^2 \\
\left( x, y \right) & \longmapsto & B(x,y) = \left\lbrace \begin{array}{l r} \left(2x, \frac{y}{2}\right)\;, & x \in \left[0,\frac{1}{2}\right) \\ \left(2x-1, \frac{y+1}{2}\right) \;, & x \in \left[\frac{1}{2},1\right) \end{array} \right.
\end{array}\;.
\end{equation}
This map is integrable as well: as before, we can map the position variables $(x,y)$ to a binary sequence, this time doubly infinite
%
\begin{equation}
I\;:\;\begin{array}{l c l}
\left[0,1\right)^{2} & \longrightarrow & \left\lbrace0,1\right\rbrace^{\mathbb Z} \\
(x,y) & \longmapsto & \left\lbrace\sigma_i\right\rbrace_{i=-\infty}^{\infty}\;.
\end{array}
\end{equation}
%
where
%
\begin{equation}
x = \sum_{i=0}^{\infty} \frac{\sigma_{i+1}}{2^{i+1}}\;,\qquad y = \sum_{i=0}^{\infty}\frac{\sigma_{-i}}{2^{i+1}}\;.
\end{equation}
%
Then the composition of this map with the Baker's one simply acts as a left shift of the binary sequence
\begin{equation}
I\circ B\circ I^{-1}\;:\;\left\lbrace\sigma_i\right\rbrace_{i=-\infty}^{\infty}\;\longmapsto\;\left\lbrace\tilde{\sigma}_i = \sigma_{i+1}\right\rbrace_{i=-\infty}^{\infty}\;.
\end{equation}
In this system, any totally symmetric function of the $\sigma_i$ is a conserved quantity, e.g.
\begin{equation}
\mathfrak{q}_{2n+1} = \lim_{N\rightarrow\infty}\frac{1}{2N+1} \sum_{j=-N}^{N} \prod_{k=-n}^n s_{j+k}\;,\qquad s_j = \sigma_j - \frac{1}{2}\;.
\end{equation}
However, just as for the Bernoulli map, the behavior of the trajectories is chaotic, and the cause is the same. When described in the binary-sequence ``system of coordinates'', the Baker's map is not chaotic: any small change in the initial binary sequence will remain small in the subsequent dynamics. The chaotic behavior appears in the translation from the position variables $(x,y)$ to the binary sequence $\left\{\sigma_i\right\}^\infty_{i=-\infty}$. A small deviation in $x$ can cause an arbitrarily large change in the binary sequence which, in turn, produces a large difference in the dynamics. One might argue that it is entirely possible to perform all the necessary calculations to determine the trajectory at any point in the future using only the binary-digit ``coordinates'' and that, consequently, the system is inherently non-chaotic. However, let us suppose, for the sake of the argument, that we are using the Baker's map to model a real-world physical system. In this case we might want to make predictions on the position of the system as a function of time and of its initial position. Then we cannot ignore the fact that trajectories issuing from nearby initial conditions can diverge after a finite amount of time. From this perspective, the position variables $(x,y)$ have a clear physical interpretation, while the binary digits are a mathematical construct, a helpful tool that solves the specific model we are dealing with. Mapping from the latter to the former can produce chaotic behavior in the system.
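The doubling dynamics of the previous example gives a concrete feel for how a conserved charge of this type can jump under a tiny perturbation. In the sketch below (the specific rationals are arbitrary choices) the analogue of the lowest charge $\mathfrak{q}_1$ is estimated as the long-time average of $s_j = \sigma_j - 1/2$ along the forward orbit; two initial points separated by less than $10^{-10}$ end up on orbits with different values of the charge:

```python
from fractions import Fraction

def double(x):
    """The doubling map 2x mod 1, exact on rationals."""
    y = 2 * x
    return y if y < 1 else y - 1

def q1_estimate(x, skip=100, window=3000):
    """Time average of s_j = sigma_j - 1/2 along the forward orbit,
    discarding an initial transient of `skip` steps."""
    for _ in range(skip):
        x = double(x)
    total = Fraction(0)
    for _ in range(window):
        total += (1 if x >= Fraction(1, 2) else 0) - Fraction(1, 2)
        x = double(x)
    return total / window

a = Fraction(1, 7)                  # binary 0.(001): charge -1/6
b = a + Fraction(1, 7 * 2**31)      # shifted by ~7e-11: lands on 0.(011)
print(float(q1_estimate(a)), float(q1_estimate(b)))
```

The unperturbed orbit averages to $-1/6$, while the perturbed one, after a short transient, settles on an orbit with average $+1/6$: an $O(1)$ jump of a conserved quantity produced by an exponentially small shift of the initial point.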
As we will see later, this feature is also present in integrable PDEs, such as the KdV equation and the sine-Gordon model that we will consider below. Before moving to these cases, we wish to consider another example where integrability and chaos seem to coexist: the problem of reconstructing an analytic function from its values on a compact set. Take some function $\alpha(t)$ defined on the unit interval $t\in \left[0,1\right]$. We wish to find a function $f(z)$, analytic in the whole complex plane $z\in\mathbb{C}$ and such that
\begin{equation}
f(t) = \alpha(t), \qquad t\in \left[0,1\right]\;.
\end{equation}
This problem is integrable: a solution exists, is unique and can be constructed by a finite number of integrations. It is also easy to see that it is chaotic. Indeed, let us introduce a small change in the ``initial conditions'' $\alpha(t)$ as follows
%
\begin{equation}
\alpha(t)\;\longrightarrow\;\alpha'(t) = \alpha(t) + \frac{C}{t^2+1}, \quad C \ll 1\;.
\end{equation}
The deformation $\delta\alpha = \alpha' - \alpha$ is smaller than or equal to $C$ everywhere in the interval $t\in[0,1]$ where the initial conditions are defined. However, the analytic function $f(z)$ receives arbitrarily large corrections in the vicinity of $z= \pm i$. In other words, this problem is unstable, or chaotic, under small deformations.
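This instability can be checked numerically in a couple of lines (the value of $C$ and the sampling are arbitrary choices):

```python
import numpy as np

C = 1e-6
t = np.linspace(0.0, 1.0, 1001)
delta_alpha = C / (t**2 + 1)           # perturbation of the data on [0, 1]
delta_f = C / ((0.9999999j)**2 + 1)    # induced correction to f near z = i

print(delta_alpha.max())               # bounded by C = 1e-6 on the interval
print(abs(delta_f))                    # of order one near the pole
```

The perturbation of the data never exceeds $C = 10^{-6}$ on $[0,1]$, while the induced correction to the continued function close to $z = i$ is of order one.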
\section{Chaotic Behavior in Integrable Field Theories}
\label{sec:IFT}
\subsection{The KdV Equation}
Now let us move to classical integrable systems with infinite dimensional phase space, that is, integrable field theories or integrable PDEs \cite{zakharov1980inverse}. We will start with one of the most renowned examples: the Korteweg–de Vries (KdV) equation
\begin{equation}
\partial_\tau u(x,\tau) + 6 u(x,\tau) \partial_x u(x,\tau) + \partial_x^3 u(x,\tau) = 0\;.
\label{eq:KdVeq}
\end{equation}
This system is famously integrable and can be solved by means of the \emph{inverse scattering transform} \cite{zakharov1980inverse, faddeev1987hamiltonian}. Namely, let us consider the following scattering problem
\begin{align}
&\Big(-\partial_x^2 + u(x)\Big) \psi_k(x) = k^2 \psi_k(x), \notag\\
&\psi_k(x) \sim \left\{
\begin{array}{l l}
e^{-ik x} & x \to -\infty \\
\frac{1}{t_k} e^{-i k x} + \frac{r_k}{t_k} e^{i k x} & x \to +\infty
\end{array}
\right.\;,
\label{eq:KdVscattering}
\end{align}
where we take the potential $u(x)$ to correspond to the initial condition for the KdV equation: $u(x) = u(x,\tau = 0)$. Then, if we demand that $u(x,\tau)$ evolves in time according to the KdV equation, the scattering data -- that is, the transmission and reflection coefficients $t_k$ and $r_k$ -- will have the following simple time dependence
\begin{equation}
r_k(\tau) = r_k(0) e^{8 i k^3 \tau}\;,\quad t_k(\tau) = t_k(0)\;.
\end{equation}
For the case of a bound state, the scattering problem reads
\begin{align}
&\Big(-\partial_x^2 + u(x)\Big) \psi_n(x) = -\kappa_n^2 \psi_n(x), \notag\\
&\psi_n(x) \sim \left\{
\begin{array}{l l}
e^{\kappa_n x} & x \to -\infty, \\
b_n e^{- \kappa_n x} & x \to +\infty
\end{array}
\right. \;,
\label{eq:KdVscatteringbound}
\end{align}
and the coefficients $b_n$ will also have a simple time dependence:
\begin{equation}
b_n(\tau) = b_n(0) e^{8 \kappa_n^3 \tau}\;.
\end{equation}
Now, the task of the inverse scattering method is to reconstruct the potential $u(x,\tau)$ from the scattering data $t_k(\tau)$, $r_k(\tau)$ and $b_n(\tau)$. This can be achieved thanks to the \emph{Gel'fand-Levitan-Marchenko} integral equation \cite{gel1951determination, marchenko2011sturm}. So the inverse scattering method acts as a ``non-linear Fourier transform'' of sorts, mapping the coordinates $u(x,\tau)$ to the scattering data $t_k(\tau)$, $r_k(\tau)$, $\kappa_n$ and $b_n(\tau)$ in terms of which the time evolution is almost trivial. However this map is unstable, meaning that a small perturbation in the scattering data can generate arbitrarily large deviations in the reconstructed potential \cite{dorren1994stability,carrion1986stability,feinberg2004response}. This is a very well-known fact in geophysics where the inverse scattering method is used to reconstruct the internal structure of a medium from the back-scattered signal \cite{dorren1994stability,berryman1980discrete}.
The direct scattering problem -- that is, the process of passing from the non-linear equation \eqref{eq:KdVeq} to the scattering representation (\ref{eq:KdVscattering}, \ref{eq:KdVscatteringbound}) -- also turns out to be chaotic. Indeed, it is a well-known phenomenon that in $1$ and $2$ space dimensions any small dip or well in $u(x)$ supports a bound state \cite{landau2013quantum}. Therefore a small change in $u(x)$ or in $u'(x)$ might create bound states and drastically alter the scattering data $t_k$, $r_k$, $\kappa_n$ and $b_n$. Let us stress the fact that not all small variations of the potential $u(x)$ are bound to cause drastic changes in the scattering data. For instance, if we change $u(x)$ by letting it evolve with respect to the KdV equation, the scattering data will only change slightly. The important point is that the overwhelming majority of changes in the initial conditions modifies considerably the future behavior of the system.
The reason for the presence of chaotic behavior, from the perspective of the scattering problem, can be traced back to the fact that in $1$ and $2$ dimensions the latter is ``strongly coupled'', so that perturbation theory cannot be employed. Indeed, let us consider again the scattering problem \eqref{eq:KdVscattering} and take a very small potential $u(x)$, which we can consider as a small deformation of zero. At first order in perturbation theory we can write
\begin{equation}
\psi_k(x) = e^{-i k x} + \delta \psi(x)\;, \quad \left(\partial_x^2 + k^2\right) \delta \psi(x) = u(x) e^{-i k x}\;.
\end{equation}
The solution of this equation is
\begin{equation}
\delta \psi(x) = \intop \frac{dy}{2 i k}\, e^{ik|x-y|}\, u(y)\, e^{-ik y}\;,
\end{equation}
%
and, as one immediately sees, in the vicinity of $k=0$ this correction can become arbitrarily large, meaning that perturbation theory breaks down. While the reconstruction of the potential is not sensitive to changes in the UV range of the scattering data, changes in the IR range can drastically modify the resulting potential. We show this fact in more generality in Appendix \ref{app:pert_scatt}. Let us stress that this instability occurs only in $1$ and $2$ dimensions, where the propagator of a free particle diverges at large distances.
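The IR sensitivity can be checked directly. The sketch below evaluates the first-order correction at $x=0$ for a small Gaussian potential (an arbitrary choice; the overall normalization of the Green's function is convention-dependent): its magnitude grows like $1/k$ as $k \to 0$.

```python
import numpy as np

y = np.linspace(-20.0, 20.0, 4001)
dy = y[1] - y[0]
u = 0.1 * np.exp(-y**2)          # a small, smooth potential

def delta_psi0(k):
    """First-order correction at x = 0:
    int dy e^{ik|y|} u(y) e^{-iky} / (2ik)."""
    integrand = np.exp(1j * k * np.abs(y)) * u * np.exp(-1j * k * y)
    return (integrand.sum() * dy) / (2j * k)

mags = [abs(delta_psi0(k)) for k in (1e-1, 1e-2, 1e-3)]
print(mags)   # each tenfold decrease of k increases the correction ~tenfold
```

Since the integrand tends to $u(y)$ as $k \to 0$, the correction behaves as $\intop u\, dy / (2k)$ and diverges in the IR, no matter how small the potential is.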
Notice how the situation described above resembles that of the Bernoulli and Baker's maps of the previous section. Here as well we have a map from the field variables $u(x)$ and $\partial_x u(x)$ -- the ones in which the initial conditions of the system are given -- to a set of auxiliary variables in which the dynamics is easily solvable -- the scattering data. As was the case in the previous examples, this map turns out to be chaotic. We can gain a more concrete feel for this instability by computing numerically the conserved charges of the KdV equation for two very similar initial profiles. An expression for the conserved charges can be obtained using the following recursive relation \cite{zakharov1980inverse} (see also Appendix \ref{app:mon_mat}):
\begin{equation}
Q_n = \intop_{-\infty}^{\infty} w_{n-1}(x) dx\;,\qquad \left\lbrace \begin{array}{l}
w_0(x) = u(x) \\
w_1(x) = \partial_x u(x) \\
w_n(x) = \partial_x w_{n-1}(x) + \sum\limits^{n-2}_{k=0} w_k(x) w_{n-2-k}(x)
\end{array}\right.\;.
\end{equation}
It is possible to check that $Q_{2n} = 0$ -- since $w_{2n-1}$ are total derivatives. We numerically evaluated the first few low-lying charges for the trivial solution $u(x) = 0$ and for a small perturbation in the form of a Gaussian distribution
\begin{equation}
\tilde{u}(x) = 0.1 \exp\left(-x^2\right)\;.
\end{equation}
The charges for the trivial solutions are obviously all vanishing
\begin{equation}
Q_{2n-1} = 0\;,\quad \forall n\geq1\;,
\end{equation}
while the numerical values of the charges for $\tilde{u}(x)$ are
\begin{align}
& \tilde{Q}_1 = 1.772\cdot 10^{-1}\;,\quad \tilde{Q}_3 = 1.253\cdot 10^{-2}\;,\quad \tilde{Q}_5 = -1.049\cdot 10^{-2}\;,\quad \tilde{Q}_7 = 3.122\cdot 10^{-2}\;, \notag \\
& \tilde{Q}_9 = -1.527\cdot 10^{-1}\;, \quad \tilde{Q}_{11} = 1.041 \;,\quad \tilde{Q}_{13}= -9.066 \;, \quad \tilde{Q}_{15} = 9.582\cdot 10^1 \;, \\
& \tilde{Q}_{17} = -1.186 \cdot 10^{3}\;,\quad \tilde{Q}_{19} = 1.6804 \cdot 10^{4}\;,\quad \tilde{Q}_{21} = 2.670\cdot 10^{5} \;,\quad \ldots \;. \notag
\end{align}
While the magnitude of the first five charges remains of the order of $10^{-1}$ or smaller, we see that as we increase the index the charges grow larger and larger. This suggests that the map from the initial profile $u(x)$ to the set of conserved charges is indeed chaotic: a perturbation by a small Gaussian distribution leads to a drastic change of the charges.
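The low-lying charges above can be reproduced with a short script (a sketch: it implements the recursion for $w_n$ with spectral derivatives on a periodic grid, which is accurate here because the Gaussian profile decays fast; the domain size and resolution are arbitrary choices):

```python
import numpy as np

L, N = 40.0, 1024
x = np.arange(N) * (L / N) - L / 2
modes = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def ddx(f):
    """Spectral derivative on the periodic grid."""
    return np.real(np.fft.ifft(1j * modes * np.fft.fft(f)))

u = 0.1 * np.exp(-x**2)
w = [u, ddx(u)]                                  # w_0, w_1
for n in range(2, 7):                            # build w_2 .. w_6
    w.append(ddx(w[n - 1]) + sum(w[j] * w[n - 2 - j] for j in range(n - 1)))

dx = L / N
Q = {n: w[n - 1].sum() * dx for n in (1, 3, 5, 7)}
print(Q)   # Q1 ~ 1.772e-1, Q3 ~ 1.253e-2, Q5 ~ -1.049e-2, Q7 ~ 3.122e-2
```

As a cross-check, the first values can be obtained analytically: $Q_1 = \intop u = 0.1\sqrt{\pi}$ and $Q_3 = \intop u^2 = 0.01\sqrt{\pi/2}$, in agreement with the numbers quoted above.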
\subsection{The sine-Gordon Model}
\label{sec:sG_model}
The previous considerations also apply to other integrable PDEs. Here we consider another famous example: the sine-Gordon model. This model can also be solved with the inverse scattering method. Therefore, due to the instability of the inverse scattering problem, we should observe some chaotic properties. In particular we expect chaos to arise in the map from the initial configuration to the space of conserved charges. At the quantum level we propose the following interpretation. The sine-Gordon model has a factorized S-matrix describing the scattering between solitons. While we are not claiming that this $S$-matrix is chaotic, we propose that the map from a given initial quantum state to the basis of states with definite soliton number does indeed display chaotic features. In other words, we expect that a slight deformation of the initial state might lead to the appearance of an arbitrarily large number of additional solitons. These will participate in the scattering process, and thus drastically change its outcome.
Let us play the same game we did for the KdV equation and evaluate numerically the first few conserved charges for slightly different initial conditions. We will consider the sine-Gordon equation in the ``light-cone'' form
\begin{equation}
\partial_+ \partial_- \phi(x_+,x_-) = - \sin \phi(x_+,x_-)\;,
\label{eq:sG_equation}
\end{equation}
and interpret $x_+$ as the time and $x_-$ as the spatial coordinate. The conserved charges for this equation can be computed following a recursive procedure \cite{candu2013introduction} similar to the one we presented above for the KdV equation (see also Appendix \ref{app:mon_mat})
\begin{equation}
Q_n = \intop_{-\infty}^{\infty} p_n(x) dx\;,\qquad \left\lbrace \begin{array}{l}
p_0(x) = \frac{i}{2}\partial_-\phi(0,x) \\
p_1(x) = p^2_0(x) - \partial_- p_0(x) \\
p_n(x) = -\partial_- p_{n-1}(x) - \sum\limits ^{n-2}_{k=1} p_k(x) p_{n-1-k}(x)
\end{array}\right.\;.
\end{equation}
Here too it is possible to check that $Q_{2n} = 0$. First we are going to compare the charges associated with the stationary soliton
\begin{equation}
\phi(0,x_-) = 4 \arctan e^{x_-}\;,
\end{equation}
and to the following, slightly altered profile
\begin{equation}
\tilde{\phi}(x_-) = \pi\left(\tanh\frac{2x_-}{\pi} + 1\right)\;.
\end{equation}
\begin{figure}
\centering
\includegraphics{sG_1Sol_vs_almost1Sol.pdf}
\caption{A plot of the two profiles $\phi(x) = 4\arctan e^{x}$ and $\tilde{\phi}(x) = \pi\left(\tanh\frac{2x}{\pi} + 1\right)$. The inset plot displays the absolute difference $\vert\delta\phi(x)\vert = \vert\tilde{\phi}(x) - \phi(x)\vert$.}
\label{fig:sG_solitonVSalmost_solition}
\end{figure}
As one can see from fig. \ref{fig:sG_solitonVSalmost_solition}, these two profiles are quite similar for any real value of $x_-$. The charges for the soliton can actually be computed analytically to be inversely proportional to their index
\begin{equation}
Q_{2n-1} = -\frac{2}{2n-1}\;.
\end{equation}
On the other hand, for the profile $\tilde{\phi}$ we resort to a numerical evaluation that yields
\begin{align}
& \tilde{Q}_1 = -2.094\;,\quad \tilde{Q}_3 = -7.571\cdot 10^{-1}\;,\quad \tilde{Q}_5 = -5.205\cdot 10^{-1}\;,\quad \tilde{Q}_7 = -2.896\cdot 10^{-1}\;, \notag \\
& \tilde{Q}_9 = -1.042\;, \quad \tilde{Q}_{11} = 6.168 \;,\quad \tilde{Q}_{13}= -7.461\cdot 10^{1} \;, \quad \tilde{Q}_{15} = 1.048\cdot 10^{3} \;, \\
& \tilde{Q}_{17} = -1.775 \cdot 10^4 \;,\quad \tilde{Q}_{19} = 3.539 \cdot 10^5 \;,\quad \tilde{Q}_{21} = -8.182 \cdot 10^6 \;,\quad \ldots \;. \notag
\end{align}
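The recursion and the analytic soliton values $Q_{2n-1} = -2/(2n-1)$ can be cross-checked numerically (a sketch: it uses the soliton initial data $\partial_-\phi(0,x) = 2\,\mathrm{sech}\, x$ and spectral derivatives on a periodic grid; domain size and resolution are arbitrary choices):

```python
import numpy as np

L, N = 80.0, 1024
x = np.arange(N) * (L / N) - L / 2
modes = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def ddx(f):
    """Spectral derivative on the periodic grid (complex-valued fields)."""
    return np.fft.ifft(1j * modes * np.fft.fft(f))

# stationary soliton phi = 4 arctan(e^x), so (i/2) d phi/dx = i sech x
p = [1j / np.cosh(x)]                            # p_0
p.append(p[0]**2 - ddx(p[0]))                    # p_1
for n in range(2, 8):                            # p_2 .. p_7
    p.append(-ddx(p[n - 1])
             - sum(p[j] * p[n - 1 - j] for j in range(1, n - 1)))

dx = L / N
Q = {n: (p[n].sum() * dx).real for n in (1, 3, 5, 7)}
print(Q)   # close to -2/(2n-1): -2, -2/3, -2/5, -2/7
```

For instance $Q_1 = \intop (p_0^2 - \partial_- p_0)\, dx_- = -\intop \mathrm{sech}^2 x\, dx = -2$, matching the analytic formula.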
The other pair of profiles we wish to compare are the trivial solution $\phi(x_-) = 0$ and the linear superposition of a soliton and an antisoliton, separated by a small distance $2x_0$
\begin{equation}
\tilde{\phi}(x_-) = 4 \arctan e^{x_- - x_0} - 4 \arctan e^{x_- + x_0}\;,\quad x_0 \ll 1\;.
\end{equation}
\begin{figure}
\centering
\includegraphics{sG_2_shifted_1sol.pdf}
\caption{A plot of the profiles $\tilde{\phi}(x) = 4 \left(\arctan e^{x_- - 10^{-2}} - \arctan e^{x_- + 10^{-2}}\right)$ compared to the trivial function $\phi(x) = 0$.}
\label{fig:sG_zeroVSalmost_zero}
\end{figure}
Figure \ref{fig:sG_zeroVSalmost_zero} shows that this profile differs from zero by an amount of order $x_0$, which can thus be made arbitrarily small. The charges for $\phi(x_-) = 0$ are clearly all vanishing, while those for $\tilde{\phi}(x_-)$, computed numerically for $x_0 = 10^{-2}$, are
\begin{align}
& \tilde{Q}_1 = -2.667\cdot 10^{-4} \;,\quad \tilde{Q}_3 = 3.733\cdot 10^{-4} \;,\quad \tilde{Q}_5 = -1.180\cdot 10^{-3}\;,\quad \tilde{Q}_7 = 6.771\cdot 10^{-3}\;, \notag \\
& \tilde{Q}_9 = -6.191\cdot 10^{-2}\;, \quad \tilde{Q}_{11} = 8.285\cdot 10^{-1} \;,\quad \tilde{Q}_{13}= -1.528\cdot 10^1 \;, \quad \tilde{Q}_{15} = 3.715 \cdot 10^2 \;, \notag\\
& \tilde{Q}_{17} = -1.152 \cdot 10^4\;,\quad \tilde{Q}_{19} = 4.434 \cdot 10^5 \;,\quad \tilde{Q}_{21} = -2.075 \cdot 10^7\;,\quad \ldots \;.
\end{align}
Just as for the KdV equation, we see that a small deformation of the initial conditions produces a seemingly arbitrarily large change in the conserved charges. Again, we argue that this behavior is a consequence of the instability of the inverse scattering method. A related way to see this is to notice that the conserved charges can be obtained from the expansion of an analytic function -- the \emph{transfer matrix} -- about one of its singularities \cite{babelon2003introduction, candu2013introduction}; see Appendix \ref{app:mon_mat} for a quick review. The rigidity of analytic functions makes such a procedure very sensitive to small deformations.
As further evidence suggesting the presence of deterministic chaos in integrable systems, we conduct the following numerical experiment. We consider the sine-Gordon equation, this time on a cylinder of circumference $L$, and compare the evolution of the following two Cauchy problems
\begin{align}
&\left\{
\begin{array}{l r}
\left(\partial_t^2 - \partial_x^2\right) \phi_1(t,x) = \sin \phi_1(t,x)& \quad x+L \sim x,\\
\partial_t \phi_1 (0,x) = 0 & \\ \phi_1(0,x) = \cos \frac{2\pi x}{L} &
\end{array}\right.\;, \label{eq:numdev1} \\
&\left\{
\begin{array}{l r}
\left(\partial_t^2 - \partial_x^2\right) \phi_2(t,x) = \sin \phi_2(t,x)& \quad x+L \sim x, \\
\partial_t \phi_2 (0,x) = 0 & \\ \phi_2(0,x) = \cos \frac{2\pi x}{L} + \epsilon \vartheta\left(\frac{x}{L}\right) &
\end{array}\right.\;,
\label{eq:numdev2}
\end{align}
where we introduced $\vartheta(z) = \sum\limits^\infty_{n=-\infty} \exp\left(-(z-n)^2\right)$, and we take $\epsilon\ll1$. In Figure \ref{fig:dif} we plot the difference $\phi_1 - \phi_2$. We can see that this quantity grows larger -- in absolute value -- with time, and the solutions become increasingly different.
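A minimal version of this experiment can be set up with a leapfrog integrator (a sketch: the circumference, grid, time step, final time and $\epsilon$ are arbitrary choices, and the comb $\vartheta$ is truncated to a few terms):

```python
import numpy as np

L, N, dt, steps = 10.0, 256, 0.02, 2000   # arbitrary discretization choices
x = np.arange(N) * (L / N)
eps = 1e-2

def lap(f):
    """Periodic second difference."""
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / (L / N)**2

def theta(z):
    """Truncated Gaussian comb; the neglected terms are ~e^{-9}."""
    return sum(np.exp(-(z - n)**2) for n in range(-3, 4))

def evolve(phi0):
    """Leapfrog for phi_tt = phi_xx + sin phi with phi_t(0, x) = 0."""
    prev = phi0.copy()
    cur = phi0 + 0.5 * dt**2 * (lap(phi0) + np.sin(phi0))   # first half step
    for _ in range(steps):
        prev, cur = cur, 2 * cur - prev + dt**2 * (lap(cur) + np.sin(cur))
    return cur

phi1 = evolve(np.cos(2 * np.pi * x / L))
phi2 = evolve(np.cos(2 * np.pi * x / L) + eps * theta(x / L))
diff = np.sqrt(np.mean((phi1 - phi2)**2))
print(diff)   # root-mean-square difference at the final time
```

The integration is stable for $dt$ below the grid spacing, and tracking the root-mean-square difference over time produces the kind of growth displayed in fig. \ref{fig:dif}.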
\begin{figure}
\centering
\includegraphics[scale=0.6]{dif.pdf}\quad
\includegraphics[scale=0.8]{sglyap.pdf}
\caption{Left: A plot of the difference between the solutions $\phi_1$ and $\phi_2$ to the Cauchy problems (\ref{eq:numdev1}, \ref{eq:numdev2}). Right: the $L_2$-norm of the difference between two solutions as a function of time. The difference grows with time suggesting that small perturbations can exhibit the {\it butterfly effect}.}
\label{fig:dif}
\end{figure}
\section{Conclusions and Outlook}
\label{sec:conc_out}
In this note we have shown that, contrary to the common lore that integrable systems are incompatible with chaos, they can display chaotic behavior. In particular, we have shown that the map from the initial conditions to the set of conserved charges is chaotic, in the sense that it can map small differences to arbitrarily large ones. This happens only when the conserved charges are infinite in number, corresponding to systems with an infinite-dimensional phase space.
There are several open questions that deserve further investigation. Amongst these we highlight the following:
\begin{itemize}
\item[--] It would be interesting to explore and classify the types of small deformations of integrable systems of the kind explored in this note, and in particular to identify which of these do not yield a chaotic map between the initial conditions and the conserved charges.
\item[--] In this note we have considered the interplay between chaotic behavior and integrability in the context of classical systems. Naturally one would like to explore this for quantum systems as well.
\item[--] Recently, an integrable $N$-body system called the \emph{zigzag model} was proposed \cite{Donahue:2019adv,Donahue:2019fgn,Donahue:2022jxu} as a high-energy description of long confining strings in massive adjoint 2D QCD. This model possesses a non-differentiable phase space containing boundaries and distinct topological sectors. In light of the discussion in \S \ref{subsec:FDIS} a natural question concerning the presence of chaos in the zigzag model arises.
\item[--] Other classical models that have a non-differentiable phase spaces and in which it will be interesting to probe for the existence of chaotic features are the $\mathrm{T}\overline{\mathrm{T}}$-deformed theories \cite{Smirnov:2016lqw,Cavaglia:2016oda} and their generalizations to ``higher $\mathrm{T}\overline{\mathrm{T}}$'' deformations \cite{Conti:2019dxg,Hernandez-Chifflet:2019sua,Camilo:2021gro,Cordova:2021fnr} and to quantum-mechanical $\mathrm{T}\overline{\mathrm{T}}$-like deformations \cite{Gross:2019ach}.
\item[--] Free theories in any space-time dimension also admit an infinite number of conserved charges. An interesting question is whether the map between these and the initial conditions exhibit some chaotic feature also for these systems.
\item[--] It is well known that ``ordinary chaotic behavior'' characterizes physical systems that are described by integrable theories perturbed by integrability-breaking terms. Examples of theories of this type are the massless Schwinger model perturbed by a mass term and the sine-Gordon model with an additional $\phi^2$ term. The relation between this type of chaos and the one discussed in this work may shed new insight on the concept of chaos in general.
\end{itemize}
\section{Acknowledgement}
The work of S.N. is partially supported by the NSF grant PHY-2210349 and by the Simons Collaboration on Confinement and QCD Strings. S.N. wishes to thank P. Dorey, R. Tateo and A. Zamolodchikov for precious discussions and suggestions, and the department of physics of the Universit\`{a} degli Studi di Torino for kind hospitality.
F.K.P. is currently a Simons Junior Fellow at NYU and supported by a grant 855325FP from the Simons Foundation.
The work of J.S. was supported in part by a center of excellence of the Israel Science Foundation (grant number 2289/18). J.S. would like to thank M. Bianchi, M. Firrota and D. Weissman for useful discussions. This work was carried out while J.S. stayed in NYU and the Simons Center. He would like to thank both institutes for their warm hospitality.
We are grateful as well to A. Dymarsky, V. Rosenhaus, A. Gorsky, B. Harrop-Griffiths for insightful discussions and suggestions throughout the project.
\section{Introduction}
In 1940, Mahler \cite{Mah} gave the first idea for introducing continued fractions in the field of $p$--adic numbers $\mathbb Q_p$. Starting from this, several authors studied the problem of defining an algorithm for expanding elements of $\mathbb Q_p$ in continued fractions. The most notable results were provided by Browkin \cite{BI}, Ruban \cite{RUB} and Schneider \cite{SCH}, who defined different $p$--adic continued fraction algorithms with the aim of obtaining the same good properties that hold in the real case. However, all these algorithms fail to characterize quadratic irrationals by periodic continued fractions, as happens in $\mathbb R$. The periodicity of these algorithms has been studied in depth by several authors. Schneider's algorithm is not periodic for all quadratic irrationals, but there is an effective criterion to predict when periodicity occurs (see \cite{VP, TIL, DEWII}).
Ooto \cite{OO} proved that an analogue of Lagrange's Theorem does not hold for Ruban's continued fractions and Capuano et al. \cite{CVZ} gave an effective condition to check the periodicity. Moreover, Ruban and Schneider algorithms provide finite or periodic expansion for rationals.
Browkin's algorithm is of particular interest since it always gives finite representations for rational numbers, but it is not known whether an analogue of Lagrange's Theorem holds. In \cite{BEI, BEII}, the authors proved some results about the periodicity of this algorithm, and Capuano et al. \cite{CMT} gave some necessary and sufficient conditions for periodicity, but such conditions do not allow one to prove that an analogue of Lagrange's Theorem fails. From experimental results, it seems very unlikely that Browkin's algorithm provides a periodic expansion for every quadratic irrational. For this reason, in 2000, Browkin himself defined a new algorithm \cite{BII}, and it has been proved in \cite{BCMI} that this second algorithm also produces a finite continued fraction for rational numbers. Browkin's second algorithm works better on quadratic irrationals, but also in this case they do not always have periodic expansions in continued fractions. The periodicity of this algorithm has been investigated further in \cite{MRS}. Further studies on $p$--adic continued fractions can be found in \cite{DEA, LAO, WANI, WANII}. Thus, it is worth studying the definition of new algorithms for $p$--adic continued fractions. It is believed that some slight modification of Browkin's second algorithm \cite{BII} can give a periodic continued fraction for all quadratic irrationals in $\mathbb{Q}_p$, without losing the finite representation for rationals.
With this purpose in mind, the first condition that a new method needs to fulfill is the convergence in $\mathbb Q_p$ of the continued fractions produced by the algorithm.
In this paper, we give a sufficient condition on the partial quotients of a $p$--adic continued fraction for convergence in $\mathbb{Q}_p$. In particular, we study a condition that allows us to extend the idea of Browkin in \cite{BII}, opening the way to several possible new definitions of $p$--adic continued fractions. Exploiting this condition, we then propose a new $p$--adic continued fraction algorithm that is a natural generalization of the construction performed in \cite{BII} for the second algorithm of Browkin. Moreover, we also prove that this new algorithm terminates in a finite number of steps on each $\alpha\in\mathbb{Q}$.
\vspace{-0.07cm}
\section{Preliminaries}
Let us denote with $v_p(\cdot)$ and $|\cdot|_p$, respectively, the $p$--adic valuation and the $p$--adic absolute value over $\mathbb{Q}$, where $p$ is an odd prime. The Euclidean norm will be denoted as usual by $|\cdot|$.
We denote a continued fraction of a value $\alpha$ with the usual notation as
\[\alpha = b_0 + \cfrac{1}{b_1 + \cfrac{1}{b_2 + \cfrac{1}{\ddots}}} = [b_0, b_1, b_2, \ldots].\]
Moreover, we call $\frac{A_n}{B_n}$, for all $n\in\mathbb{N}$, the convergents of the continued fraction, that may be defined recursively by using the well-known formulas
\[
\begin{cases}
A_0=b_0,\\
A_1=b_1b_0+1,\\
A_n=b_nA_{n-1}+A_{n-2} \text{ for } n \geq 2,
\end{cases}
\begin{cases}
B_0=1,\\
B_1=b_1,\\
B_n=b_nB_{n-1}+B_{n-2} \text{ for } n \geq 2.
\end{cases}
\]
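These recurrences are easy to check numerically. The following sketch is ours, not part of the original text: it compares the recurrences for $A_n$ and $B_n$ with a direct evaluation of the finite continued fraction, using exact rational arithmetic; all function names are our own choices.

```python
from fractions import Fraction

def convergents(bs):
    # A_n and B_n via the standard recurrences
    A = [Fraction(bs[0]), bs[1] * bs[0] + 1]
    B = [Fraction(1), Fraction(bs[1])]
    for b in bs[2:]:
        A.append(b * A[-1] + A[-2])
        B.append(b * B[-1] + B[-2])
    return A, B

def cf_value(bs):
    # evaluate b_0 + 1/(b_1 + 1/(...)) from the innermost term outwards
    x = Fraction(bs[-1])
    for b in reversed(bs[:-1]):
        x = b + 1 / x
    return x

# the first partial quotients of the continued fraction of pi
bs = [Fraction(c) for c in (3, 7, 15, 1, 292)]
A, B = convergents(bs)
```

The last convergent of a finite continued fraction recovers its exact value, and the classical identity $A_nB_{n-1}-A_{n-1}B_n=(-1)^{n-1}$ can be verified along the way.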
The first important requirement when designing an algorithm for $p$--adic continued fractions is that all the expansions converge to a $p$--adic number, that is
\[\lim\limits_{n\rightarrow +\infty} \frac{A_n}{B_n}=\alpha\in\mathbb{Q}_p.\]
The first algorithm proposed by Browkin in \cite{BI} works as follows.
Starting from an input $\alpha_0\in\mathbb{Q}_p$ then the partial quotients of the $p$--adic continued fraction are evaluated by
\begin{equation}\label{Br1}
\begin{cases}
b_n=s(\alpha_n)\\
\alpha_{n+1}=\frac{1}{\alpha_n-b_n},
\end{cases} \quad n \geq0
\end{equation}
where $s:\mathbb{Q}_p\rightarrow \mathbb{Q}$ is defined by
\[s(\alpha)=\sum\limits_{n=-r}^{0} a_n p^n\in\mathbb{Q},\]
for a $p$--adic number $\alpha=\sum\limits_{n=-r}^{+\infty} a_np^n\in\mathbb{Q}_p$, with $r\in\mathbb{Z}$ and $a_n\in \{-\frac{p-1}{2},\ldots,\frac{p-1}{2}\}$. In this algorithm, the function $s$ plays the same role of the floor function in the classical algorithm of continued fractions in $\mathbb R$. Ruban's algorithm \cite{RUB} employs the same function $s$, with the only difference that the representatives are taken in $\{0,\ldots, p-1 \}$. More than 20 years later, Browkin defines another algorithm in \cite{BII}, where starting from $\alpha_0\in\mathbb{Q}_p$, the partial quotients $b_n$, for $n \geq 0$, are evaluated by
\begin{align}
\begin{cases}\label{Br2}
b_n=s(\alpha_n) \ \ \ \ \ & \textup{if} \ n \ \textup{even}\\
b_n=t(\alpha_n) & \textup{if} \ n \ \textup{odd}\ \textup{and} \ v_p(\alpha_n-t(\alpha_n))= 0\\
b_n=t(\alpha_n)-sign(t(\alpha_n)) & \textup{if} \ n \ \textup{odd} \ \textup{and} \ v_p(\alpha_n-t(\alpha_n))\neq 0\\
\alpha_{n+1}=\frac{1}{\alpha_n-b_n},
\end{cases}
\end{align}
where $t:\mathbb{Q}_p\rightarrow \mathbb{Q}$ is another function defined for any $p$--adic value $\alpha=\sum\limits_{n=-r}^{+\infty} a_np^n$ as
\[ t(\alpha)=\sum\limits_{n=-r}^{-1}a_np^n, \]
with $r\in\mathbb{Z}$ and $a_n\in \{-\frac{p-1}{2},\ldots,\frac{p-1}{2}\}$.
In the following we will refer to \eqref{Br1} and \eqref{Br2} respectively as \textit{Browkin I} and \textit{Browkin II}.
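For rational inputs, the balanced $p$--adic digits, and hence the maps $s$ and $t$, can be computed exactly, so both algorithms can be run in exact arithmetic. The sketch below is ours (the paper only defines the algorithms; the function names and the step cap are our choices): $s(\alpha)$ is obtained as the balanced representative of $\alpha p^r$ modulo $p^{r+1}$, divided by $p^r$.

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def s(alpha, p):
    # s(alpha) = sum_{n=-r}^{0} a_n p^n with balanced digits a_n
    r = max(0, -vp(alpha, p))
    a = alpha * p ** r                 # now v_p(a) >= 0
    m = p ** (r + 1)
    c = a.numerator * pow(a.denominator, -1, m) % m
    if c > m // 2:
        c -= m                         # balanced representative mod p^(r+1)
    return Fraction(c, p ** r)

def t(alpha, p):
    # t(alpha) = s(alpha) without its constant digit a_0
    r = max(0, -vp(alpha, p))
    if r == 0:
        return Fraction(0)
    c = int(s(alpha, p) * p ** r) % p ** r
    if c > p ** r // 2:
        c -= p ** r                    # balanced representative mod p^r
    return Fraction(c, p ** r)

def browkin(alpha, p, variant=1, maxsteps=500):
    # Browkin I (variant=1) or Browkin II (variant=2) on a rational input
    bs = []
    for n in range(maxsteps):
        if variant == 1 or n % 2 == 0:
            b = s(alpha, p)
        else:
            b = t(alpha, p)
            if alpha == b or vp(alpha - b, p) != 0:
                b -= 1 if b > 0 else -1      # b = t(alpha) - sign(t(alpha))
        bs.append(b)
        if alpha == b:
            return bs                  # finite expansion found
        alpha = 1 / (alpha - b)
    raise RuntimeError("no termination within maxsteps")

def cf_value(bs):
    x = Fraction(bs[-1])
    for b in reversed(bs[:-1]):
        x = b + 1 / x
    return x
```

For instance, with $p=5$, Browkin I expands $1/3$ as $[2,-3/5]$ and Browkin II as $[2,2/5,-1]$; in both cases the finite expansion evaluates back to $1/3$.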
The convergence in $\mathbb{Q}_p$ of the continued fractions generated by \textit{Browkin I} is based on the following lemma.
\begin{Lemma}[\cite{BI}, Lemma 1]\label{ConvBr1}
Let $b_0,b_1,\ldots\in \mathbb{Z}[\frac{1}{p}]$ be an infinite sequence such that $v_p(b_{n})<0$, for all $n \geq 1$. Then the continued fraction $[b_0,b_1,\ldots]$ converges to a $p$--adic number.
\end{Lemma}
In fact, the partial quotients $b_n$ arising from \textit{Browkin I}, for $n\geq 1$, all have negative valuations.\\
As for \textit{Browkin II}, the $p$--adic convergence relies on the following lemma.
\begin{Lemma}[\cite{BII}, Lemma 1]\label{ConvBr2}
Let $b_0,b_1,\ldots\in \mathbb{Z}[\frac{1}{p}]$ be an infinite sequence such that, for all $n\in\mathbb{N}$,
\begin{equation}
\begin{cases}
v_p(b_{2n})=0\\
v_p(b_{2n+1})<0.
\end{cases}
\end{equation}
Then the continued fraction $[b_0,b_1,\ldots]$ is convergent to a $p$--adic number.
\end{Lemma}
\begin{Remark}\label{rema2}
The proofs of Lemma \ref{ConvBr1} and Lemma \ref{ConvBr2} exploit the strict decrease of the sequence of valuations $v_p(B_n{B_{n+1}})$.
Moreover, requiring that the sequence $v_p(B_{n}B_{n+1})$ is strictly decreasing is equivalent to asking that $v_p(B_{n+1})<v_p(B_{n-1})$ for all $n\geq 1$. Thus, the divergence to $-\infty$ of this sequence of valuations implies the convergence of the corresponding $p$--adic continued fraction. Indeed, in this way we have that
\[ \lim_{n \to \infty} v_p\left( \frac{A_{n+1}}{B_{n+1}} - \frac{A_n}{B_n} \right) = \lim_{n \to \infty} - v_p(B_nB_{n+1}) = +\infty, \]
and
\[\left|\frac{A_{m}}{B_{m}}-\frac{A_n}{B_n}\right|_p=\left|\frac{A_{n+1}}{B_{n+1}}-\frac{A_n}{B_n}\right|_p=\left|\frac{(-1)^n}{B_{n}B_{n+1}}\right|_p=p^{v_p(B_nB_{n+1})},\]
proving that $\left\{ \frac{A_n}{B_n} \right\} _{n\in\mathbb{N}}$ is a Cauchy sequence and therefore convergent in $\mathbb Q_p$.
\end{Remark}
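The criterion of the remark can be illustrated numerically. The sketch below is ours: for an arbitrary sequence of partial quotients with $v_p(b_n)<0$ for all $n\geq 1$ (the hypothesis of Lemma \ref{ConvBr1}), the sequence $v_p(B_nB_{n+1})$ is strictly decreasing, hence divergent to $-\infty$.

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p = 5
# an arbitrary sequence with v_p(b_n) < 0 for n >= 1 (our concrete choice)
bs = [Fraction(1)] + [Fraction(1 + (3 * n) % 4, p ** (1 + n % 2)) for n in range(1, 12)]
B = [Fraction(1), bs[1]]
for b in bs[2:]:
    B.append(b * B[-1] + B[-2])
vals = [vp(B[n] * B[n + 1], p) for n in range(len(B) - 1)]
```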
\section{Convergence of $p$--adic continued fractions}
Reducing the number of partial quotients with negative valuation yields better properties in terms of the periods of quadratic irrationals, as pointed out in \cite{BII}. Therefore, a promising approach for the definition of a new algorithm is a further modification of \textit{Browkin II}: we may define a ``$3$-step'' algorithm that generates the partial quotients so that, for all $n \in\mathbb{N}$,
\begin{equation}\label{Br3}
\begin{cases}
v_p(b_{3n+1})<0\\
v_p(b_{3n+2})=0\\
v_p(b_{3n+3})=0.
\end{cases}
\end{equation}
Such a construction turns out to be more delicate than the two algorithms previously defined by Browkin. In the following example we show that a sequence satisfying these constraints need not converge without a stronger hypothesis. In particular, for every prime $p$, we may construct a suitable continued fraction that does not converge to any $p$--adic number.
\begin{Example}\label{controex}
Let $p$ be an odd prime. We are going to show that there exists a sequence $b_0,b_1,\ldots\in\mathbb{Q}_p$ with, for all $n\in\mathbb{N}$,
\[\begin{cases}
v_p(b_{3n+1})<0\\
v_p(b_{3n+2})=0\\
v_p(b_{3n+3})=0,
\end{cases}\]
such that the sequence $v_p(B_nB_{n+1})$ does not diverge to $-\infty$. Let us define $b_{1}=\frac{1}{p}$. The first denominators of the convergents are
\begin{align*}
B_0&=1,\\
B_1&=b_1=\frac{1}{p},\\
B_2&=b_2B_1+B_0=\frac{b_2+p}{p},\\
B_3&=b_3B_2+B_1=\frac{(b_3b_2+1)+b_3p}{p}.
\end{align*}
Their valuations are
\begin{align*}
v_p(B_0)&=v_p(1)=0,\\
v_p(B_1)&=v_p\Big(\frac{1}{p}\Big)=-1,\\
v_p(B_2)&=v_p\Big(\frac{b_2+p}{p}\Big)=-1,\\
v_p(B_3)&=v_p\Big(\frac{(b_3b_2+1)+b_3p}{p}\Big).
\end{align*}
Let us choose suitable $b_2$ and $b_3$ such that $b_3b_2+1=p$ (for example, $b_2=2$ and $b_3=\frac{p-1}{2}$). Then
\[v_p(B_3)=v_p\Big(\frac{b_3p+p}{p}\Big)=v_p(b_3+1)\geq 0.\]
At this point,
for a generic $n\in\mathbb{N}$ for which
\[v_p(B_{3n+1})=-1, \ v_p(B_{3n+2})=-1, \ v_p(B_{3n+3})\geq 0,\]
we are going to show that there exists a choice for the partial quotients such that
\[v_p(B_{3(n+1)+1})=-1, \ v_p(B_{3(n+1)+2})=-1, \ v_p(B_{3(n+1)+3})\geq 0.\]
We can write
\begin{align*}
B_{3n+1}&=\frac{a_1}{p}, \ &\textup{with} \ v_p(a_1)&=0,\\
B_{3n+2}&=\frac{a_2}{p}, \ &\textup{with} \ v_p(a_2)&=0,\\
B_{3n+3}&=a_3, \ &\textup{with} \ v_p(a_3)&\geq 0.
\end{align*}
We have two cases:
\begin{itemize}
\item
In the case that $v_p(a_3+a_2)=0$, we choose $b_{3n+4}=\frac{1}{p}$. Therefore,
\[B_{3n+4}=b_{3n+4}B_{3n+3}+B_{3n+2}=\frac{a_3+a_2}{p}.\]
Its valuation is
\[v_p(B_{3n+4})=v_p(a_3+a_2)-v_p(p)=-1,\]
so that we can write $B_{3n+4}=\frac{a_4}{p}$, with $v_p(a_4)=0$. Subsequently,
\[
B_{3n+5}=b_{3n+5}B_{3n+4}+B_{3n+3}=b_{3n+5}\frac{a_4}{p}+a_3=\frac{b_{3n+5}a_4+a_3p}{p},
\]
so that $v_p(B_{3n+5})=-1$. It means that $B_{3n+5}=\frac{a_5}{p}$, with $v_p(a_5)=0$. At the following step,
\[B_{3n+6}=b_{3n+6}B_{3n+5}+B_{3n+4}=\frac{b_{3n+6}a_5+a_4}{p}.\]
Notice that $a_4$ and $a_5$ are arbitrary nonzero elements and we can choose a suitable $b_{3n+6}$ such that
\[b_{3n+6}a_5+a_4\equiv 0 \bmod p.\]
We obtain that $p$ divides $b_{3n+6}a_5+a_4$ and so $v_p(B_{3n+6})\geq 0$.
In this case we have obtained that, starting from
\[v_p(B_{3n+1})=-1, \ v_p(B_{3n+2})=-1, \ v_p(B_{3n+3})\geq 0,\]
then
\[v_p(B_{3(n+1)+1})=-1, \ v_p(B_{3(n+1)+2})=-1, \ v_p(B_{3(n+1)+3})\geq 0.\]
\item
Let us examine also the case $v_p(a_3+a_2)>0$. Here we choose $b_{3n+4}=\frac{2}{p}$. Since $v_p(a_2)=0$ and $v_p(a_3+a_2)>0$, necessarily also $v_p(a_3)=0$. The next denominator is
\[B_{3n+4}=b_{3n+4}B_{3n+3}+B_{3n+2}=\frac{2a_3+a_2}{p}.\]
Notice that since $p$ divides $a_3+a_2$ but does not divide $a_3$, it can not divide $2a_3+a_2$. In this way $v_p(2a_3+a_2)=0$ and
\[v_p(B_{3n+4})=v_p(2a_3+a_2)-v_p(p)=-1.\]
Then we get
\[v_p(B_{3n+5})=v_p(b_{3n+5}B_{3n+4}+B_{3n+3})=-1,\]
and so we can write
\begin{align*}
B_{3n+4}&=\frac{a_4}{p}, \ &\textup{with} \ v_p(a_4)&=0,\\
B_{3n+5}&=\frac{a_5}{p}, \ &\textup{with} \ v_p(a_5)&=0.
\end{align*}
At the next step we have
\[B_{3n+6}=b_{3n+6}B_{3n+5}+B_{3n+4}=\frac{b_{3n+6}a_5+a_4}{p}.\]
As before, we choose $b_{3n+6}$ such that
\[b_{3n+6}a_5+a_4\equiv 0 \bmod p.\]
In this way we get $v_p(B_{3n+6})\geq 0$. Hence, also in this second case we have obtained that
\[v_p(B_{3(n+1)+1})=-1, \ v_p(B_{3(n+1)+2})=-1, \ v_p(B_{3(n+1)+3})\geq 0.\]
\end{itemize}
We have just constructed a sequence of denominators $B_n$ such that the sequence of valuations $v_p(B_{n}B_{n+1})=v_p(B_{n})+v_p(B_{n+1})$ can not diverge to $-\infty$. In fact, in particular, $v_p(B_n)\geq -1$ for all $n\in\mathbb{N}$ and the $p$--adic continued fraction is not convergent.
\end{Example}
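The construction in the example can be carried out explicitly. The sketch below is ours: it implements the choices described above for $p=5$ and checks that $v_p(B_n)\geq -1$ for all computed denominators, so that the sequence $v_p(B_nB_{n+1})$ stays bounded from below and cannot diverge to $-\infty$.

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def res(x, p):
    # residue mod p of a rational with nonnegative p-adic valuation
    return x.numerator * pow(x.denominator, -1, p) % p

p = 5
b = [Fraction(1), Fraction(1, p), Fraction(2), Fraction((p - 1) // 2)]
B = [Fraction(1), b[1]]
for k in (2, 3):
    B.append(b[k] * B[-1] + B[-2])

for n in range(12):
    a2, a3 = B[-2] * p, B[-1]                 # B_{3n+2} = a2/p, B_{3n+3} = a3
    case1 = a3 + a2 != 0 and vp(a3 + a2, p) == 0
    bn = Fraction(1, p) if case1 else Fraction(2, p)   # b_{3n+4}, v_p < 0
    b.append(bn)
    B.append(bn * B[-1] + B[-2])
    bn = Fraction(1)                          # b_{3n+5}: any value with v_p = 0
    b.append(bn)
    B.append(bn * B[-1] + B[-2])
    a4, a5 = B[-2] * p, B[-1] * p
    c = -res(a4, p) * pow(res(a5, p), -1, p) % p
    bn = Fraction(c)                          # b_{3n+6}: chosen so that p | c*a5 + a4
    b.append(bn)
    B.append(bn * B[-1] + B[-2])
```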
Starting from the observations in the last example, we would like to characterize the strict decrease of the sequence $v_p(B_nB_{n+1})$ in general. By Remark \ref{rema2}, it is sufficient to investigate the condition $v_p(B_{n+1})<v_p(B_{n-1})$ for all $n\geq 1$.\\
In the following, $b_0,b_1,\ldots$ are elements of $\mathbb{Q}_p$ with $v_p(b_n)\leq 0$ for all $n\geq 1$, as is the case for all the algorithms considered here. In fact, as we are going to see in the next results, Browkin's hypothesis that $b_n\in\mathbb{Z}[\frac{1}{p}]$ for all $n\in\mathbb{N}$, appearing in Lemma \ref{ConvBr1} and Lemma \ref{ConvBr2}, is not needed.
\begin{Lemma}\label{lem1}
For all $n\geq 1$, if $v_p(B_{n+1})<v_p(B_{n-1})$, then
\[v_p(B_{n+1})\leq v_p(B_n).\]
\begin{proof}
Let us recall that
\[v_p(B_{n+1})=v_p(b_{n+1}B_n+B_{n-1})\geq \min \{v_p(b_{n+1}B_n),v_p(B_{n-1}) \},\]
with equality when $v_p(b_{n+1}B_n)\neq v_p(B_{n-1})$.\\
If $v_p(b_{n+1}B_n)< v_p(B_{n-1})$, then
\[v_p(B_{n+1})=v_p(b_{n+1}B_n)=v_p(b_{n+1})+v_p(B_n)\leq v_p(B_n),\]
since $v_p(b_{n+1})\leq 0$. Instead, if $v_p(b_{n+1}B_n)\geq v_p(B_{n-1})$,
\[v_p(B_{n+1})\geq \min \{v_p(b_{n+1}B_n),v_p(B_{n-1})\}= v_p(B_{n-1}),\]
which contradicts the hypothesis $v_p(B_{n+1})<v_p(B_{n-1})$; hence this second case cannot occur.
\end{proof}
\end{Lemma}
On the other hand it is also possible to prove the following equivalence.
\begin{Lemma}\label{lem2}
For all $n\geq 1$, $v_p(B_{n+1})<v_p(B_{n-1})$ if and only if
\[v_p(b_{n+1}B_{n})<v_p(B_{n-1}).\]
\begin{proof}
If $v_p(B_{n+1})<v_p(B_{n-1})$ and $v_p(b_{n+1}B_{n})\geq v_p(B_{n-1})$, then
\[v_p(B_{n+1})\geq\min \{v_p(b_{n+1}B_{n}),v_p(B_{n-1})\}=v_p(B_{n-1}),\]
but this contradicts the hypothesis.\\
Conversely, if $v_p(b_{n+1}B_{n})<v_p(B_{n-1})$, then
\[v_p(B_{n+1})=v_p(b_{n+1}B_{n})<v_p(B_{n-1}),\]
and the claim is proved.
\end{proof}
\end{Lemma}
Using the results obtained above, we may prove the following theorem on the characterization of the strict decrease of the sequence $v_p(B_nB_{n+1})$.
\begin{Theorem} \label{teoconve}
The following conditions are equivalent:
\begin{enumerate}
\item[i)]$v_p(b_{n+1}B_{n})<v_p(B_{n-1})$, for all $n\geq 1$,
\item[ii)]$v_p(b_nb_{n+1})<0$, for all $n\geq 1$.
\end{enumerate}
\begin{proof}
$i)\Rightarrow ii)$\\
Let us suppose that $v_p(b_{n+1}B_{n})<v_p(B_{n-1})$ for all $n\geq 1$.
\\If $v_p(b_{n+1})<0$, then $v_p(b_{n+1}b_{n})=v_p(b_{n+1})+v_p(b_{n})<0$ and the claim is proved. Therefore, let us assume $v_p(b_{n+1})=0$ and we prove that $v_p(b_{n})<0$.
Since $v_p(b_{n+1})=0$ and
\[v_p(b_{n+1}B_{n})<v_p(B_{n-1}),\]
then $v_p(B_{n})<v_p(B_{n-1})$. The latter means that:
\[v_p(B_{n})=v_p(b_{n}B_{n-1}+B_{n-2})<v_p(B_{n-1}).\]
Moreover, $v_p(B_n)=v_p(b_{n}B_{n-1})$ because otherwise $v_p(B_n)\geq v_p(B_{n-2})$ and this leads to a contradiction, by Lemma \ref{lem2}. Hence, we have obtained that
\[v_p(B_n)=v_p(b_{n}B_{n-1})=v_p(b_{n}) +v_p(B_{n-1})<v_p(B_{n-1}),\]
where the last inequality implies $v_p(b_{n})<0$ and this concludes the proof.\\
$ii)\Rightarrow i)$\\
Conversely, let us suppose that $v_p(b_nb_{n+1})<0$ for all $n\geq 1$. We prove the claim by induction on $n$.\\
\textbf{Base step:}\\
By hypothesis, we have that $v_p(b_1b_2)<0$ and $v_p(b_2b_3)<0$. Hence, for $n=1$ and $n=2$, we have that:
\begin{align*}
v_p(b_2B_1)&=v_p(b_2b_1)<0=v_p(1)=v_p(B_0),\\
v_p(b_3B_2)&=v_p(b_3b_2b_1+b_3)=v_p(b_3b_2b_1)=v_p(b_3b_2)+v_p(b_1)<\\
&<v_p(b_1)=v_p(B_1).
\end{align*}
\textbf{Induction step:}\\
Let us suppose that the claim holds up to a step $n\geq 2$, and let us show it for $n+1$.
From $v_p(b_{n+2}b_{n+1})<0$ we get that either $v_p(b_{n+2})<0$ or $v_p(b_{n+1})<0$ (or both).\\ \ \\
\textbf{Case $v_p(b_{n+2})<0$:}\\
In this case, using inductive hypothesis and Lemma \ref{lem1} we get that $v_p(B_{n+1})\leq v_p(B_n)$, hence:
\[v_p(b_{n+2}B_{n+1})=v_p(b_{n+2})+v_p(B_{n+1})<v_p(B_{n+1})\leq v_p(B_n).\]\ \\
\textbf{Case $v_p(b_{n+1})<0$:}\\
In this case we have
\[
b_{n+2}B_{n+1}=b_{n+2}\left(b_{n+1}B_{n} + B_{n-1} \right),
\]
therefore, since $v_p(b_{n+2})\leq 0$,
\[
v_p \left(b_{n+2}B_{n+1} \right) \leq v_p\left(b_{n+1}B_{n} + B_{n-1} \right).
\]
The inductive hypothesis ensures that $v_p\left(b_{n+1}B_{n} \right) < v_p(B_{n-1})$, so, using also that $v_p(b_{n+1})<0$,
\[
v_p \left(b_{n+2}B_{n+1} \right) \leq v_p\left(b_{n+1}B_{n} \right) < v_p(B_n)
\]
and this concludes the proof.
\end{proof}
\end{Theorem}
We easily obtain the following corollary, fully characterizing the strict decrease of the sequence of denominators.
\begin{Corollary}
The sequence $\{v_p(B_nB_{n+1})\}_{n\in\mathbb{N}}$ is strictly decreasing if and only if $v_p(b_nb_{n+1})<0$ for all $n\in\mathbb{N}$.
\end{Corollary}
In other words, we have proved that the definition of two consecutive partial quotients with zero valuation makes us lose the strict decrease of the valuation. Moreover, the sufficiency of this condition means that every possible definition in this range works. It would be interesting to study some algorithms that satisfy this hypothesis, different from \textit{Browkin I} and \textit{Browkin II}. For example, it is possible to define $2$ negative partial quotients each $3$ steps or partial quotients that are not in $\mathbb{Z}[\frac{1}{p}]$, as long as the condition, $v_p(b_nb_{n+1})<0$ for all $n\in\mathbb{N}$, is satisfied.
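The sufficiency direction can be probed numerically. The sketch below is our own check, not a proof: we draw random partial quotients with $v_p(b_n)\leq 0$ and no two consecutive zero valuations, so that $v_p(b_nb_{n+1})<0$ for all $n\geq 1$, and verify that $v_p(B_nB_{n+1})$ is strictly decreasing.

```python
import random
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def denominator_valuations(bs, p):
    # the sequence v_p(B_n B_{n+1}) for the denominators of the convergents
    B = [Fraction(1), bs[1]]
    for b in bs[2:]:
        B.append(b * B[-1] + B[-2])
    return [vp(B[n] * B[n + 1], p) for n in range(len(B) - 1)]

random.seed(0)
p = 7
results = []
for trial in range(20):
    bs, prev_e = [Fraction(1)], 1
    for n in range(1, 15):
        # exponents never vanish twice in a row, hence v_p(b_n b_{n+1}) < 0
        e = random.randint(0, 2) if prev_e > 0 else random.randint(1, 2)
        bs.append(Fraction(random.randint(1, p - 1), p ** e))
        prev_e = e
    vals = denominator_valuations(bs, p)
    results.append(all(vals[i + 1] < vals[i] for i in range(len(vals) - 1)))
```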
\section{Design of a new algorithm}
In Example \ref{controex} we showed that an algorithm generating the partial quotients as in $(\ref{Br3})$ does not by itself ensure the $p$--adic convergence of the continued fraction. Moreover, we have characterized the strict decrease of the sequence $v_p(B_nB_{n+1})$.
However, divergence to $-\infty$ does not require this sequence to be strictly decreasing, so we may ask in which cases it diverges even though it is not strictly decreasing.
We are going to see that, by adding one constraint on the two partial quotients with zero valuation, it is possible to prevent the growth of the valuations of the denominators $B_n$. In this way we obtain the convergence of a $p$--adic continued fraction with only one partial quotient of negative valuation every three steps, as defined in $(\ref{Br3})$.
\begin{Theorem}\label{ConvBr3}
Let $b_0,b_1,\ldots \in \mathbb{Q}_p$ such that, for all $n\in\mathbb{N}$:
\[\begin{cases}
v_p(b_{3n+1})<0\\
v_p(b_{3n+2})=0\\
v_p(b_{3n+3})=0.
\end{cases}\]
If $v_p(b_{3n+3}b_{3n+2}+1)=0$ for all $n \in \mathbb{N}$, then, for all $n\geq 1$,
\[v_p(B_{3n-2})=v_p(B_{3n-1})=v_p(B_{3n})>v_p(B_{3n+1}).\]
\begin{proof}
Let us prove the claim by induction on $n$.\\
\textbf{Base step}:
\begin{align*}
v_p(B_0)&=v_p(1)=0,\\
v_p(B_{1})&=v_p(b_1)<0,\\
v_p(B_2)&=v_p(b_2b_1+1)=v_p(b_2)+v_p(b_1)=v_p(b_1)=v_p(B_1),\\
v_p(B_3)&=v_p(b_3B_2+B_1)=v_p((b_3b_2+1)B_1+b_3B_0)\\
&=v_p((b_3b_2+1)B_1)=v_p(B_1)=v_p(B_2),\\
v_p(B_4)&=v_p(b_4B_3+B_2)=v_p(b_4)+v_p(B_3)<v_p(B_3)=\\
&=v_p(B_1)=v_p(B_2),
\end{align*}
where we employed that $v_p(b_4)<0$ and $v_p(b_3b_2+1)=0$.\\
\textbf{Induction step:}\\
Let us suppose that:
\[v_p(B_{3n-2})=v_p(B_{3n-1})=v_p(B_{3n})>v_p(B_{3n+1}).\]
In fact, the valuation of $B_{3n+1}$ is:
\[v_p(B_{3n+1})=v_p(b_{3n+1}B_{3n}+B_{3n-1})=v_p(b_{3n+1})+v_p(B_{3n}) < v_p(B_{3n}),\]
since, by induction hypothesis, $v_p(B_{3n})=v_p(B_{3n-1})$ and $v_p(b_{3n+1})<0$.\\ Recalling that $v_p(b_{3n+4})<0$ and $v_p(b_{3n+3}b_{3n+2}+1)=0$, at the following steps we obtain:
\begin{align*}
v_p(B_{3n+2})&=v_p(b_{3n+2}B_{3n+1}+B_{3n})=v_p(b_{3n+2})+v_p(B_{3n+1})=\\
&=v_p(B_{3n+1})<v_p(B_{3n}),\\
v_p(B_{3n+3})&=v_p(b_{3n+3}B_{3n+2}+B_{3n+1})=\\
&=v_p((b_{3n+3}b_{3n+2}+1)B_{3n+1}+b_{3n+3}B_{3n})=\\
&=v_p((b_{3n+3}b_{3n+2}+1)B_{3n+1})=v_p(B_{3n+1})=\\&=v_p(B_{3n+2})<v_p(B_{3n}),\\
v_p(B_{3n+4})&=v_p(b_{3n+4}B_{3n+3}+B_{3n+2})=v_p(b_{3n+4})+v_p(B_{3n+3})<\\
&<v_p(B_{3n+3})=v_p(B_{3n+1})=v_p(B_{3n+2}).
\end{align*}
Hence, we have obtained that
\[v_p(B_{3n+4})<v_p(B_{3n+3})=v_p(B_{3n+2})=v_p(B_{3n+1})<v_p(B_{3n}),\]
and this proves the claim.
\end{proof}
\end{Theorem}
Theorem \ref{ConvBr3} easily leads to the following corollary, achieving the convergence of a $p$--adic continued fraction generating the partial quotients as in (\ref{Br3}).
\begin{Corollary}\label{CorConvBr3}
Let $b_0,b_1,\ldots$ as in Theorem \ref{ConvBr3}. Then the continued fraction $[b_0,b_1,\ldots]$ is convergent to a $p$--adic number.
\begin{proof}
We know from Remark \ref{rema2} that the continued fraction $[b_0,b_1,\ldots]$ converges to a $p$--adic number if and only if
\[\lim\limits_{n\rightarrow +\infty} v_p(B_nB_{n+1})= -\infty.\]
Notice that, for all $n\in\mathbb{N}$,
\[v_p(B_{3n}B_{3n+1})>v_p(B_{3n+1}B_{3n+2}), \]
since $v_p(B_{3n+1})<v_p(B_{3n})$ and $v_p(B_{3n+1})=v_p(B_{3n+2})$. Then
\[v_p(B_{3n+1}B_{3n+2})=v_p(B_{3n+2}B_{3n+3}), \]
since all the three valuations are equal. Moreover,
\[v_p(B_{3n+2}B_{3n+3})>v_p(B_{3n+3}B_{3n+4}), \]
since $v_p(B_{3n+4})<v_p(B_{3n+3})$ and $v_p(B_{3n+3})=v_p(B_{3n+2})$. So, the sequence $v_p(B_nB_{n+1})$ is decreasing and divergent.
\end{proof}
\end{Corollary}
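The valuation pattern of Theorem \ref{ConvBr3} can be checked on random sequences satisfying its hypotheses. The sketch below is our own sanity check: at the step with index divisible by three, the partial quotient avoids the single residue class that would make $v_p(b_{3n+2}b_{3n+3}+1)>0$.

```python
import random
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

random.seed(1)
p = 5
ok = []
for trial in range(20):
    bs = [Fraction(1)]
    for n in range(1, 17):
        if n % 3 == 1:
            bs.append(Fraction(random.randint(1, p - 1), p))   # v_p(b_{3n+1}) < 0
        elif n % 3 == 2:
            bs.append(Fraction(random.randint(1, p - 1)))      # v_p(b_{3n+2}) = 0
        else:
            # v_p(b_{3n+3}) = 0, avoiding the residue with p | b*b_prev + 1
            bad = -pow(int(bs[-1]), -1, p) % p
            bs.append(Fraction(random.choice([c for c in range(1, p) if c != bad])))
    B = [Fraction(1), bs[1]]
    for b in bs[2:]:
        B.append(b * B[-1] + B[-2])
    ok.append(all(
        vp(B[3 * n + 1], p) == vp(B[3 * n + 2], p) == vp(B[3 * n + 3], p) > vp(B[3 * n + 4], p)
        for n in range(4)
    ))
```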
\section{Some new algorithms}
Starting from Theorem \ref{ConvBr3} and Corollary \ref{CorConvBr3}, we propose some new algorithms. We use three different functions. For
\[a=\sum\limits_{n=-r}^{+\infty}a_np^n\in\mathbb{Q}_p, \ \ \ \ a_n\in\Big\{ 0, \pm 1,\pm 2,\ldots,\pm \frac{p-1}{2}\Big\},\]
the first two functions are the same $s$ and $t$ of \textit{Browkin II}, that are
\[
s(a)=\sum\limits_{n=-r}^{0}a_np^n, \ \ \ t(a)=\sum\limits_{n=-r}^{-1}a_np^n,\]
and then the third is:
\begin{align*}
u(a)=\begin{cases} +1 \ \ &\textup{if} \ a_0\in\Big\{+2,\ldots,\dfrac{p-1}{2}\Big\}\cup \{-1\}\\
-1 &\textup{if} \ a_0\in\Big\{-\dfrac{p-1}{2},\ldots,-2\Big\}\cup \{+1\}.
\end{cases}
\end{align*}
We can now design the shape of two new algorithms.
\begin{Definition}[First new algorithm]\label{firstnew}
On input $\alpha_0=\alpha$, for $n\geq 0$, our first new algorithm works as follows:
\begin{align*}\begin{cases}
b_n=s(\alpha_n) \ \ \ \ \ &\textup{if} \ n \equiv 0\bmod 3\\
b_n=t(\alpha_n) &\textup{if} \ n \equiv 1 \bmod 3 \ \textup{and} \ v_p(\alpha_n-t(\alpha_n))= 0\\
b_n=t(\alpha_n)-sign(t(\alpha_n)) & \textup{if} \ n \equiv 1 \bmod 3 \ \textup{and} \ v_p(\alpha_n-t(\alpha_n))\neq0\\
b_n=u(\alpha_n) & \textup{if} \ n \equiv 2 \bmod 3\\
\alpha_{n+1}=\frac{1}{\alpha_n-b_n}.
\end{cases}
\end{align*}
\end{Definition}
\begin{Definition}[Second new algorithm]\label{secondnew}
On input $\alpha_0=\alpha$, for $n\geq 0$, our second new algorithm works as follows:
\begin{align*}\begin{cases}
b_n=s(\alpha_n) \ \ \ \ \ &\textup{if} \ n \equiv 0\bmod 3\\
b_n=t(\alpha_n) &\textup{if} \ n \equiv 1 \bmod 3 \ \textup{and} \ v_p(\alpha_n-t(\alpha_n))= 0\\
b_n=t(\alpha_n)-sign(t(\alpha_n)) & \textup{if} \ n \equiv 1 \bmod 3 \ \textup{and} \ v_p(\alpha_n-t(\alpha_n))\neq0\\
b_n=s(\alpha_n)-u(\alpha_n) & \textup{if} \ n \equiv 2 \bmod 3\\
\alpha_{n+1}=\frac{1}{\alpha_n-b_n}.
\end{cases}
\end{align*}
\end{Definition}
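Both definitions can be run in exact arithmetic on rational inputs. The sketch below is ours and implements the second new algorithm (Definition \ref{secondnew}); the helpers compute balanced $p$--adic digits of rationals, `u` follows the definition above, and the step cap is our own safeguard.

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    if x == 0:
        raise ValueError("v_p(0) is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def s(alpha, p):
    # s(alpha): balanced digits a_n p^n for -r <= n <= 0
    r = max(0, -vp(alpha, p))
    a = alpha * p ** r
    m = p ** (r + 1)
    c = a.numerator * pow(a.denominator, -1, m) % m
    if c > m // 2:
        c -= m
    return Fraction(c, p ** r)

def t(alpha, p):
    # t(alpha): s(alpha) minus its constant digit a_0
    r = max(0, -vp(alpha, p))
    if r == 0:
        return Fraction(0)
    c = int(s(alpha, p) * p ** r) % p ** r
    if c > p ** r // 2:
        c -= p ** r
    return Fraction(c, p ** r)

def u(alpha, p):
    # the third function, read off the constant balanced digit a_0
    a0 = int(s(alpha, p) - t(alpha, p))
    return 1 if a0 >= 2 or a0 == -1 else -1

def new_algorithm2(alpha, p, maxsteps=500):
    # the second new algorithm (Definition 2) on a rational input
    bs = []
    for n in range(maxsteps):
        if n % 3 == 0:
            b = s(alpha, p)
        elif n % 3 == 1:
            b = t(alpha, p)
            if alpha == b or vp(alpha - b, p) != 0:
                b -= 1 if b > 0 else -1      # b = t(alpha) - sign(t(alpha))
        else:
            b = s(alpha, p) - u(alpha, p)
        bs.append(b)
        if alpha == b:
            return bs
        alpha = 1 / (alpha - b)
    raise RuntimeError("no termination within maxsteps")

def cf_value(bs):
    x = Fraction(bs[-1])
    for b in reversed(bs[:-1]):
        x = b + 1 / x
    return x
```

For instance, with $p=5$ the input $1/3$ yields the finite expansion $[2,\,2/5,\,-2,\,1]$, which evaluates back to $1/3$; the first new algorithm differs only in the branch $n\equiv 2 \bmod 3$, where $b_n=u(\alpha_n)$.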
\begin{Remark}
The choice of the third function $u$ is a little tricky. The function $t$ takes all the negative powers, leaving out the constant term. The function $u$ acts on a $p$--adic number with zero valuation, but it has to leave out another term with zero valuation, otherwise the next partial quotient would not have zero valuation.
Clearly, this function can be chosen in several ways: there are many ways to split the constant term $a_0\in\{-\frac{p-1}{2},\ldots,\frac{p-1}{2} \}$ into two nonzero parts.
Here we have presented two proposals, but it would certainly be interesting to analyze other options as well.
\end{Remark}
Both of the constructions in Definition \ref{firstnew} and Definition \ref{secondnew} produce a sequence of partial quotients $b_0,b_1,\ldots\in\mathbb{Q}_p$ such that, for all $n\in\mathbb{N}$,
\[\begin{cases}
v_p(b_{3n+1})<0\\
v_p(b_{3n+2})=0\\
v_p(b_{3n+3})=0.
\end{cases}\]
We are going to see that also the additional condition required by Theorem \ref{ConvBr3}, i.e.
\[v_p(b_{3n+2}b_{3n+3}+1)=0, \ \textup{for} \ \textup{all} \ n \in \mathbb{N},\]
is satisfied for both algorithms.
\begin{Proposition}\label{Alg1}
Let $\alpha\in\mathbb{Q}_p$. Then the partial quotients generated by the new algorithms in Definition \ref{firstnew} and Definition \ref{secondnew} satisfy the conditions of Theorem \ref{ConvBr3}.
\begin{proof}
To prove the claim, we are left to show that \[v_p(b_{3n+2}b_{3n+3}+1)=0, \, \text{ for all } n\in\mathbb{N}.\]
We prove it only for the second algorithm; the proof for the first is similar. First we notice that, by construction,
\[v_p(b_{3n+2}b_{3n+3})=v_p(b_{3n+2})+v_p(b_{3n+3})=0,\]
so that
$v_p(b_{3n+2}b_{3n+3}+1)\geq \min \{ v_p(b_{3n+2}b_{3n+3}), v_p(1)\}=0$.
Let us show that the case $v_p(b_{3n+2}b_{3n+3}+1)>0$ cannot occur.
For all $n\in\mathbb{N}$,
\[\alpha_{3n+2}= \frac{1}{\alpha_{3n+1}-b_{3n+1}}=a_0+a_1p+a_2p^2+\cdots, \]
and
\begin{align*}
b_{3n+2}&=s(\alpha_{3n+2})-u(\alpha_{3n+2})=a_0 \mp 1,\\
b_{3n+3}&=s(\alpha_{3n+3})=s\Big(\frac{1}{\alpha_{3n+2}-b_{3n+2}}\Big)=(a_0-b_{3n+2})^{-1}=\pm 1.
\end{align*}
Therefore, the condition $v_p(b_{3n+2}b_{3n+3}+1)=0$ is satisfied if and only if
\[b_{3n+2}(a_0-b_{3n+2})^{-1} \equiv (\pm 1) (a_0 \mp 1 ) \equiv -1 \bmod p\]
is not fulfilled.
However, this would imply that $a_0 \equiv 0 \bmod p$, but this cannot happen, due to the constraints in the algorithm when using the function $t$.
\end{proof}
\end{Proposition}
Finally, we prove that the second new algorithm succeeds in producing a finite expansion for rational numbers, as happens for \textit{Browkin I} and \textit{Browkin II}. We state this in the following theorem.
\begin{Theorem}\label{finito}
If $\alpha \in \mathbb Q$, then the second new algorithm (Definition \ref{secondnew}) stops in a finite number of steps.
\end{Theorem}
\begin{proof}
Let us consider $\alpha\in\mathbb{Q}$. We are going to show that the algorithm from Definition \ref{secondnew} stops in a finite number of steps when the input is $\alpha$. By construction we have,
\[v_p(\alpha_{3k+1})<0, \ v_p(\alpha_{3k+2})=v_p(\alpha_{3k+3})=0,\]
so that we can write
\begin{align*}
\alpha_{3k+1}&=\frac{N_{3k+1}}{D_{3k+1}p^l}, & \ &\text{with} \ (N_{3k+1},D_{3k+1})=1, \ \ p\not| N_{3k+1}D_{3k+1}, \ \ l\geq 1,\\
\alpha_{3k+2}&=\frac{N_{3k+2}}{D_{3k+2}}, & &\text{with} \ (N_{3k+2},D_{3k+2})=1, \ \ p\not| N_{3k+2}D_{3k+2},\\
\alpha_{3k+3}&=\frac{N_{3k+3}}{D_{3k+3}}, & &\text{with} \ (N_{3k+3},D_{3k+3})=1, \ \ p\not| N_{3k+3}D_{3k+3}.\\
\end{align*}
Let us notice that for this algorithm, for all $n\in\mathbb{N}$, the partial quotients are such that $b_{3n+2}\in\{-\frac{p-1}{2}+1,\ldots,-1,1,\ldots,\frac{p-1}{2}-1\}$ and $b_{3n+3}=\pm 1$, so that
\begin{equation*}
|b_{3n+2}|\leq \frac{p-3}{2},\quad
|b_{3n+3}|=1.
\end{equation*}
Since $v_p(b_{3n+1})<0$, we can write
\[b_{3n+1}=\frac{c_{3n+1}}{p^l}, \ \text{with} \ v_p(c_{3n+1})=0, \ l\geq 1.\]
The partial quotients $b_{3n+1}$ are generated by the function $t$ and it has been shown in \cite{BCMI} that
\[|c_{3n+1}|\leq p^l\left(1-\frac{1}{p^l}\right). \]
For the sake of simplicity, we also write $c_{3k+2}=b_{3k+2}$ and $c_{3k+3}=b_{3k+3}$, so that the coefficients $c_n$ always have zero valuation.\\
Exploiting $\alpha_{k+1}=\frac{1}{\alpha_k-b_k}$, we get
\begin{align*}
N_{3k+1}(N_{3k}-c_{3k}D_{3k})&=p^lD_{3k}D_{3k+1},\\
N_{3k+2}(N_{3k+1}-c_{3k+1}D_{3k+1})&=p^lD_{3k+1}D_{3k+2},\\
N_{3k+3}(N_{3k+2}-c_{3k+2}D_{3k+2})&=D_{3k+2}D_{3k+3}.
\end{align*}
Since $(|N_n|,p|D_n|)=1$ for all $n\in\mathbb{N}$, then
\[|N_{3k+1}|=|D_{3k}|, \ |N_{3k+2}|=|D_{3k+1}|,\ |N_{3k+3}|=|D_{3k+2}|,\]
and
\begin{align*}
|D_{3k+1}|&=\frac{|N_{3k}-c_{3k}D_{3k}|}{p^l}\leq \frac{|N_{3k}|+|c_{3k}D_{3k}|}{p^l} =\frac{1}{p^l}|N_{3k}|+\frac{1}{p^l}|D_{3k}|,\\
|D_{3k+2}|&=\frac{|N_{3k+1}-c_{3k+1}D_{3k+1}|}{p^l}\leq \frac{1}{p^l}|N_{3k+1}|+\left( 1- \frac{1}{p^l}\right)|D_{3k+1}|,\\
|D_{3k+3}|&=|N_{3k+2}-c_{3k+2}D_{3k+2}|\leq |N_{3k+2}|+\left(\frac{p-3}{2} \right)|D_{3k+2}|.
\end{align*}
By using the formulas above we may write
\begin{align*}
&|N_{3k+3}|+|D_{3k+3}|\leq |D_{3k+1}|+\frac{p-1}{2}|D_{3k+2}|\leq \\
&\leq |D_{3k+1}|+\frac{p-1}{2}\left(\frac{1}{p^l}|N_{3k+1}|+ \frac{p^l-1}{p^l}|D_{3k+1}| \right)=\\
&=\frac{p-1}{2p^l}|N_{3k+1}|+\frac{p^{l+1}+p^l-p+1}{2p^l}|D_{3k+1}|\leq\\
&\leq \frac{p-1}{2p^l}|D_{3k}|+\frac{p^{l+1}+p^l-p+1}{2p^l} \cdot \left(\frac{1}{p^l}|N_{3k}|+\frac{1}{p^l}|D_{3k}|\right)=\\
&=\left(\frac{p^{l+1}+p^l-p+1}{2p^{2l}}\right)|N_{3k}|+\left(\frac{2p^{l+1}-p+1}{2p^{2l}}\right)|D_{3k}|.
\end{align*}
We have that $2p^{l+1}-p+1<2p^{2l}$, since $2p^{2l}\geq 2p^{l+1}$ for every $l \geq 1$ and $p>1$; moreover, as $p^l\leq p^{l+1}$, we also have $p^{l+1}+p^l-p+1\leq 2p^{l+1}-p+1<2p^{2l}$.
Thus, we obtain, for all $k\in\mathbb{N}$, that
\[|N_{3k+3}|+|D_{3k+3}|<|N_{3k}|+|D_{3k}|.\]
Since a strictly decreasing sequence of natural numbers must be finite, the sequence $\{|N_{3n}|+|D_{3n}|\}_{n\in\mathbb{N}}$ is finite and hence $\alpha$ has a finite continued fraction expansion.
\end{proof}
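The two coefficient bounds invoked at the end of the proof are elementary; they can also be checked numerically over illustrative ranges of $p$ and $l$:

```python
# Check 2p^(l+1) - p + 1 < 2p^(2l) and p^(l+1) + p^l - p + 1 < 2p^(2l)
# for odd primes p and exponents l >= 1 (illustrative ranges only).
primes = [3, 5, 7, 11, 13, 17, 19, 23]

for p in primes:
    for l in range(1, 20):
        assert 2 * p ** (l + 1) - p + 1 < 2 * p ** (2 * l)
        assert p ** (l + 1) + p ** l - p + 1 < 2 * p ** (2 * l)
```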
\section{Generalization to $n$ steps}
The aim of this section is to generalize Theorem \ref{ConvBr3} to a generic $n$-step algorithm. For this purpose, we also need several additional conditions on the valuations; thus, we introduce the following notation for a family of sequences.
For $n,m\in\mathbb{N}$, with $m\geq 2$, we define the family of sequences $U_m^{(n)}$ as
\[U_m^{(0)}=1, \ \ U_m^{(1)}=b_m, \ \ U_m^{(n+1)}=b_{m+n}U_m^{(n)}+U_m^{(n-1)}. \]
\begin{Lemma}\label{seqden}
For every $n\geq 2$, the partial denominators $B_n$ can be obtained as:
\[B_n=U_2^{(n-1)}B_1+U_3^{(n-2)}B_0.\]
\begin{proof}
Let us prove the claim by induction on $n$. For $n=2$ and $n=3$ it holds since:
\begin{align*}
B_2&=b_2B_1+B_0=U_2^{(1)}B_1+U_3^{(0)}B_0,\\
B_3&=b_3B_2+B_1=(b_3b_2+1)B_1+b_3B_0=U_2^{(2)}B_1+U_3^{(1)}B_0.
\end{align*}
Now let us suppose that the claim holds at the steps $n$ and $n+1$, that is:
\begin{align*}
B_n&=U_2^{(n-1)}B_1+U_3^{(n-2)}B_0,\\
B_{n+1}&=U_2^{(n)}B_1+U_3^{(n-1)}B_0.
\end{align*}
We are going to show that it is true also for $B_{n+2}$. In fact:
\begin{align*}
B_{n+2}&=b_{n+2}B_{n+1}+B_{n}=\\
&=b_{n+2}(U_2^{(n)}B_1+U_3^{(n-1)}B_0)+(U_2^{(n-1)}B_1+U_3^{(n-2)}B_0)=\\
&=(b_{n+2}U_2^{(n)}+U_2^{(n-1)})B_1+(b_{n+2}U_3^{(n-1)}+U_3^{(n-2)})B_0=\\
&=U_2^{(n+1)}B_1+U_3^{(n)}B_0.
\end{align*}
It follows that the claim holds for all $n\geq 2$.
\end{proof}
\end{Lemma}
\begin{Remark}\label{seqdenk}
Notice that Lemma \ref{seqden} also holds starting from a generic step $k$; that is, for all $k\in\mathbb{N}$ and $n\geq 2$,
\[B_{k+n}=U_{k+2}^{(n-1)}B_{k+1}+U_{k+3}^{(n-2)}B_k,\]
and the proof is similar to the case $k=0$ seen in Lemma \ref{seqden}.
\end{Remark}
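Lemma \ref{seqden} and Remark \ref{seqdenk} are purely algebraic identities, so they can be sanity-checked with random integer partial quotients; a minimal sketch (the seeds $B_0=1$, $B_1=b_1$ are the usual initial convergent denominators):

```python
import random

random.seed(0)
b = [random.randint(-5, 5) or 1 for _ in range(30)]  # random partial quotients b_0, b_1, ...

def U(m, n):
    # U_m^{(0)} = 1, U_m^{(1)} = b_m, U_m^{(n+1)} = b_{m+n} U_m^{(n)} + U_m^{(n-1)}
    if n == 0:
        return 1
    u_prev, u = 1, b[m]
    for j in range(1, n):
        u_prev, u = u, b[m + j] * u + u_prev
    return u

# convergent denominators: B_0 = 1, B_1 = b_1, B_n = b_n B_{n-1} + B_{n-2}
B = [1, b[1]]
for n in range(2, 25):
    B.append(b[n] * B[n - 1] + B[n - 2])

# Lemma: B_n = U_2^{(n-1)} B_1 + U_3^{(n-2)} B_0 for n >= 2
assert all(B[n] == U(2, n - 1) * B[1] + U(3, n - 2) * B[0] for n in range(2, 25))
# Remark: B_{k+n} = U_{k+2}^{(n-1)} B_{k+1} + U_{k+3}^{(n-2)} B_k
assert all(B[k + n] == U(k + 2, n - 1) * B[k + 1] + U(k + 3, n - 2) * B[k]
           for k in range(10) for n in range(2, 10))
```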
\begin{Theorem}\label{ConvBrN}
Let us consider $r\in\mathbb{N^+}$ and $b_0,b_1,\ldots \in \mathbb{Q}_p$ such that, for all $n\in\mathbb{N}$:
\[\begin{cases}
v_p(b_{rn+1})<0\\
v_p(b_{rn+i})=0, \ \forall i\in\{2,\ldots,r\}\\
\end{cases}.\]
Moreover let us suppose that, for all $n\in\mathbb{N}$,
\begin{align*}
v_p(U_{rn+2}^{(i)})&=0 \ \textup{for} \ \textup{all} \ i\in \{2,\ldots,r-1\} \text{ and for } r \geq 3,\\
v_p(U_{rn+3}^{(i)})&=0 \ \textup{for} \ \textup{all}\ i\in \{2,\ldots,r-2\} \text { and for } r \geq 4.
\end{align*}
Then we have, for all $n\in\mathbb{N}$,
\[v_p(B_{rn+1})=v_p(B_{rn+2})=\ldots=v_p(B_{rn+r})>v_p(B_{rn+r+1}).\]
\begin{proof}
Let us prove the claim by induction on $n$.\\
\textbf{Base step:}\\
We prove the thesis for $n=0$. The valuation of the first denominator is:
\[v_p(B_1)=v_p(b_1)<0.\]
By Lemma \ref{seqden}, for $i\in\{2,\ldots,r\}$, since $v_p(U_2^{(i-1)})=v_p(U_3^{(i-2)})=0$ while $v_p(B_1)<0\leq v_p(B_0)$,
\begin{align*}
v_p(B_i)&=v_p(U_2^{(i-1)}B_1+U_3^{(i-2)}B_0)=v_p(U_2^{(i-1)})+v_p(B_1)=v_p(B_1).
\end{align*}
At the following step, since $v_p(b_{r+1})<0$, we get:
\[v_p(B_{r+1})=v_p(b_{r+1}B_r+B_{r-1})=v_p(b_{r+1})+v_p(B_r)<v_p(B_r).\]
Hence, the claim is true for $n=0$.\\
\textbf{Induction step:}\\
Let us suppose that the thesis holds for a generic $n\in\mathbb{N}$, that is:
\[v_p(B_{rn+1})=v_p(B_{rn+2})=\ldots=v_p(B_{rn+r})>v_p(B_{rn+r+1}).\]
We want to prove the claim for $n+1$.
Note that the inductive hypothesis already gives $v_p(B_{r(n+1)+1})<v_p(B_{r(n+1)})$, since $rn+r=r(n+1)$ and $rn+r+1=r(n+1)+1$.
Using Remark \ref{seqdenk} with $k=r(n+1)$, for $i\in\{2,\ldots,r\}$,
\begin{align*}
v_p(B_{r(n+1)+i})&=v_p(U_{r(n+1)+2}^{(i-1)}B_{r(n+1)+1}+U_{r(n+1)+3}^{(i-2)}B_{r(n+1)})=\\
&=v_p(U_{r(n+1)+2}^{(i-1)}B_{r(n+1)+1})=\\
&=v_p(U_{r(n+1)+2}^{(i-1)})+v_p(B_{r(n+1)+1})=v_p(B_{r(n+1)+1}).
\end{align*}
At the following step, since $v_p(b_{r(n+2)+1})<0$, then:
\begin{align*}
v_p(B_{r(n+2)+1})&=v_p(b_{r(n+2)+1}B_{r(n+2)}+B_{r(n+2)-1})=\\
&=v_p(b_{r(n+2)+1}B_{r(n+2)})<v_p(B_{r(n+2)}).
\end{align*}
The induction is then complete and the claim holds for all $n\in\mathbb{N}$.
\end{proof}
\end{Theorem}
\begin{Corollary}\label{nsteps}
Let $r\in\mathbb{N^+}$ and $b_0,b_1,\ldots$ be as in Theorem \ref{ConvBrN}. Then the continued fraction $[b_0,b_1,\ldots]$ converges to a $p$--adic number.
\begin{proof}
Using Remark \ref{rema2}, the continued fraction $[b_0,b_1,\ldots]$ converges in $\mathbb{Q}_p$ if and only if
\[\lim\limits_{n\rightarrow +\infty} v_p(B_nB_{n+1})= -\infty.\]
By Theorem \ref{ConvBrN} we have that, for all $n\in\mathbb{N}$,
\[v_p(B_{rn+1}B_{rn+2})=\ldots=v_p(B_{rn+r-1}B_{rn+r})>v_p(B_{rn+r}B_{rn+r+1}),\]
so that the sequence $v_p(B_nB_{n+1})$ is nonincreasing, strictly decreases from each block of $r$ steps to the next, and hence diverges to $-\infty$.
\end{proof}
\end{Corollary}
By Corollary \ref{nsteps}, we obtain the convergence of a $p$--adic continued fractions algorithm generating the partial quotients as
\begin{equation}
\begin{cases}
v_p(b_{rn+1})<0\\
v_p(b_{rn+2})=0\\
v_p(b_{rn+3})=0\\
\ldots\\
v_p(b_{rn+r})=0.
\end{cases}
\end{equation}
With a construction similar to the one made in Example \ref{controex}, it can be proved that the conditions of Theorem \ref{ConvBrN} are necessary for the $p$--adic convergence.
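The valuation pattern guaranteed by Theorem \ref{ConvBrN} can also be observed with exact rational arithmetic; the sketch below uses the illustrative choices $p=5$, $r=3$ and the simplest admissible partial quotients $b_{rn+1}=1/p$, $b_{rn+i}=1$ (for which the hypotheses on the $U$ sequences hold):

```python
from fractions import Fraction

p, r, nblocks = 5, 3, 7  # illustrative choice: prime p, step length r

def v_p(x):
    # p-adic valuation of a nonzero rational
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# simplest admissible partial quotients: v_p(b_{rn+1}) = -1, all others units
b = [Fraction(1, p) if n % r == 1 else Fraction(1) for n in range(r * nblocks + 2)]

# convergent denominators: B_0 = 1, B_1 = b_1, B_n = b_n B_{n-1} + B_{n-2}
B = [Fraction(1), b[1]]
for n in range(2, r * nblocks + 2):
    B.append(b[n] * B[n - 1] + B[n - 2])

for n in range(nblocks):
    block = [v_p(B[r * n + i]) for i in range(1, r + 1)]
    assert len(set(block)) == 1              # equal valuations inside each block
    assert block[0] > v_p(B[r * n + r + 1])  # strict drop at the next step
```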
\section{Conclusions}
In this paper we have analyzed the convergence of $p$--adic continued fractions in order to give a better understanding of the design of an optimal algorithm, which at present does not exist. In Theorem \ref{teoconve}, we have characterized the strict decrease of the valuations $v_p(B_n B_{n+1})$, used by Browkin in \cite{BI} and \cite{BII}. This characterization guarantees the $p$--adic convergence of all the algorithms generating partial quotients such that $v_p(b_n)+v_p(b_{n+1})<0$ for all $n\in\mathbb{N}$. Beyond this hypothesis, we have also obtained some effective conditions for the convergence of $p$--adic continued fractions with only one partial quotient of negative valuation every $r$ steps. In particular, Browkin's continued fractions in \cite{BI} and \cite{BII} correspond to the cases $r=1$ and $r=2$, respectively. For the case $r=3$ we have proposed some concrete algorithms, proving that one of them terminates in a finite number of steps when processing a rational number.
\section{Introduction}\label{sec:intro}
Anomalous X-ray pulsars (AXPs) and soft-gamma repeaters are neutron stars
many of whose attributes at X-ray, gamma-ray, and infrared wavelengths
\citep[see][for a review]{wt06} are best understood in the context of
the magnetar model \citep{dt92a}, according to which their high-energy
emission results from the rearrangement and decay of ultra-strong
magnetic fields. Much remains to be learned about magnetars, of which
only a dozen are known.
XTE~J1810--197\ is an AXP with spin period $P=5.54$\,s, unusual in being transient.
Identified in early 2003 when its X-ray luminosity increased 100-fold
\citep{ims+04}, by 2007 it had returned to the quiescent state it had
maintained for at least 24 years \citep{gh05}. Uniquely for a magnetar, it
emits radio waves \citep{hgb+05}, which turned on by early 2004. Unlike in
ordinary rotation-powered pulsars, the radio pulses have a flat spectrum
and vary in luminosity and shape on daily timescales \citep{crh+06}.
Radio emission from XTE~J1810--197\ links magnetars and ordinary pulsars, and
provides a new window for learning about the physical characteristics
of a magnetar. For instance, while in principle radio emission could
be generated in the corona from closed or open magnetic field lines, the
large changes in pulse profile and flux density observed on short timescales
\citep{ccr+07} appear to point to the latter \citep[cf.][]{bt07}.
Here we report on observations of the polarized emission from XTE~J1810--197\
in an attempt to shed some light on the geometry of the radio-emitting
regions of this magnetar.
\section{Observations and Analysis}\label{sec:obs}
We have observed XTE~J1810--197\ with the Parkes 64-m telescope in New South Wales,
Australia, in full-Stokes polarimetry mode for a total of 20\,hr on-source
between 2006 April and November. Table~\ref{tab:obs} summarizes the
relevant observations.
\begin{deluxetable}{llclr}
\tablewidth{0.86\linewidth}
\tablecaption{\label{tab:obs} Parkes polarimetric observations of XTE~J1810--197\ }
\tablecolumns{5}
\tablehead{
\colhead{Date} &
\colhead{Frequency} &
\colhead{Integration} &
\colhead{Backend} &
\colhead{$S_{\rm peak}$} \\
\colhead{(MJD/mmdd)} &
\colhead{(GHz)} &
\colhead{(hr)} &
\colhead{} &
\colhead{(mJy)}
}
\startdata
53852/0427 & 1.369 & 0.9 & WBC\tablenotemark{a} & 650 \\
53862/0507 & 1.369 & 0.2 & WBC\tablenotemark{a} & 350 \\
53879/0524 & 1.369 & 0.2 & WBC\tablenotemark{a} & 320 \\
53913/0627 & 1.369\tablenotemark{b} & 2.0 & DFB & 1500 \\
53986/0908 & 1.369 & 1.2 & DFB & 70 \\
53989/0911 & 8.356\tablenotemark{c} & 5.0 & WBC\tablenotemark{d} & 45 \\
53993/0915 & 3.222 & 0.3 & DFB\tablenotemark{d} & 20 \\
54002/0924 & 1.369 & 3.2 & DFB & 50 \\
54021/1013 & 1.369 & 0.4 & DFB\tablenotemark{d} & 20 \\
54021/1013 & 3.222 & 1.8 & DFB\tablenotemark{d} & 10 \\
54022/1014 & 3.222 & 3.3 & DFB & 20 \\
54060/1121 & 1.369\tablenotemark{b} & 1.6 & DFB & 20 \\
\enddata
\tablecomments{Observations at 1.4\,GHz used the central beam of the
multibeam receiver at a variety of feed angles, unless otherwise noted.
Observations at 3.2\,GHz were performed with the 10/50\,cm receiver,
and 8.4\,GHz observations used the Mars receiver. We used the wide-band
correlator (WBC) or the digital filterbank (DFB) spectrometer. The
integration times were divided into scans interspersed with calibration
observations. Scans were divided into subscans where an integer number
of pulsar periods (minimum of two) were folded modulo the pulsar period.
Unless noted otherwise, the total bandwidth recorded was 256\,MHz (with
128, 512, or 1024 channels across it), and there were 2048 phase bins
across each folded pulse profile (2.7\,ms resolution).
}
\tablenotetext{a}{1024 phase bins.}
\tablenotetext{b}{H-OH receiver.}
\tablenotetext{c}{512\,MHz bandwidth.}
\tablenotetext{d}{Data folded at half the pulse period.}
\end{deluxetable}
We have used the three available receiver/feed combinations that have
well-characterized polarization properties: H-OH (1.4\,GHz), 10/50\,cm
(3.2\,GHz), and Mars (8.4\,GHz). Due to common availability, at 1.4\,GHz
we have also used the central beam of the multibeam receiver, although
its polarimetric characteristics are less ideal compared to those of
the H-OH receiver \citep{joh02}. The H-OH and 10/50\,cm systems have
orthogonal linear feeds, while the Mars package receives dual circular
polarizations. In all cases a pulsed calibrating signal can be injected
at an angle of 45\arcdeg\ to the feed probes.
To record data we used either the digital filterbank (DFB) or the
wide-band correlator (WBC). The bandwidth, frequency- and time-resolution
varied depending on receiver and spectrometer, but typical values were,
respectively, 256\,MHz, 128 channels, and 2048 bins across the pulse
profile (see Table~\ref{tab:obs} for details). An integer number of
pulse periods were folded and recorded to disk in PSRFITS format for
off-line analysis. Because the dump time of the spectrometers is $\ge
10$\,s, a minimum of two pulse periods were folded in each subscan,
with $\sim 10$ more common. Typically, scans lasting up to $\sim
1$\,hr were interspersed with $\sim 1$\,min observations of the pulsed
calibrator in order to determine the relative gain and phase between
the two feed probes. For our purposes, the main difference between
the 3-level sampling/correlation WBC and the 8-bit precision DFB was
the latter's much greater sensitivity to radio frequency interference,
which we excised in the frequency- and time-domain during analysis.
We used existing observations of the flux calibrator Hydra~A, whose flux
density is 43.1, 20.3 and 8.4\,Jy at 1.4, 3.2 and 8.4\,GHz respectively,
to determine the system equivalent flux density for the receivers and
to flux-calibrate the pulse profiles.
All data were analyzed with the {\sc psrchive} software package
\citep{hvm04}. As part of the analysis we corrected the Stokes parameters
($Q$, $U$, total intensity $I$, and circular polarization $V$) for
the position of the feed probes relative to the telescope meridian and
for the parallactic angle of the observation. We also observed strong
pulsars with known polarization characteristics (such as the Vela pulsar)
to provide a check on our polarimetric calibration. Analysis of these
pulsars yielded linear polarization $L = \sqrt{Q^2 + U^2}$, position
angle of linear polarization $\mbox{PA} = \frac{1}{2} \arctan\ (U/Q)$,
and $V$ matching those in the literature \citep[e.g.,][]{jhv+05,jw06}.
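The Stokes-to-polarization conversion used above is a one-liner in any language; the sketch below (a hypothetical helper, not part of {\sc psrchive}) uses the two-argument arctangent so that the position angle lands in the correct quadrant:

```python
import math

def linear_pol(Q, U):
    """Linear polarization L and position angle PA (radians) from Stokes Q, U."""
    L = math.hypot(Q, U)             # L = sqrt(Q^2 + U^2)
    PA = 0.5 * math.atan2(U, Q)      # PA = (1/2) arctan(U/Q), quadrant-safe
    return L, PA

# fully linearly polarized signal at PA = 45 deg: Q = 0, U = I
L, PA = linear_pol(0.0, 1.0)
# L == 1.0, PA == pi/4
```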
\section{Results and Discussion}\label{sec:res}
To complete polarization calibration for XTE~J1810--197\ we had to compute the
amount of Faraday rotation suffered by the radiation in its passage
through the Galactic magnetic field. We determined the rotation measure
by measuring PA as a function of frequency within the 256-MHz band at
1.4\,GHz when the pulsar was strong. The resulting value, $\mbox{RM} =
+76 \pm 4$\,rad\,m$^{-2}$, did not vary within the quoted uncertainty
either as a function of pulse phase or time. The RM was then used to
correct the measured PAs and frequency-integrated $L$ at all frequencies
to their values at infinite frequency so that a comparison could be made
between frequencies \citep[e.g.,][]{kj06}. The PAs and $L$ shown in
Figure~\ref{fig:pol} therefore represent those emitted at the pulsar.
We also display in the Figure the Stokes $I$ and $V$.
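The size of the in-band Faraday rotation removed by this correction follows directly from the measured RM, since the position angle rotates by $\mbox{RM}\,\lambda^2$. A rough sketch (the band edges of the 256\,MHz band are assumed values, and the uniform-$\lambda^2$ depolarization factor is only a crude approximation):

```python
import math

c = 2.998e8                       # speed of light, m/s
RM = 76.0                         # measured rotation measure, rad m^-2
f_lo, f_hi = 1.241e9, 1.497e9     # assumed band edges around 1.369 GHz, Hz

lam2_lo = (c / f_hi) ** 2         # shortest wavelength -> smallest lambda^2
lam2_hi = (c / f_lo) ** 2

delta_pa = RM * (lam2_hi - lam2_lo)   # PA swing across the band, rad
swing_deg = math.degrees(delta_pa)    # close to 80 deg across the band

# crude uniform-lambda^2 estimate of the resulting depolarization factor
depol = math.sin(delta_pa) / delta_pa  # ~0.7: of the order of the ~65% measured
                                       # before in-band Faraday correction
```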
Together with the integrated column density of free electrons to XTE~J1810--197,
$\mbox{DM}=178$\,cm$^{-3}$\,pc \citep{crh+06}, the RM can be used to
determine the average magnetic field strength parallel to the line of
sight weighted by electron density, $1.2\,\mbox{RM/DM} = 0.5\,\mu$G. This
fairly small value appears reasonable given the location of the pulsar,
$(l,b) = (10\fdg73, -0\fdg16)$ and $d=3.5$\,kpc, for which the large-scale
Galactic field is mostly in the perpendicular direction, and with at
least one reversal along the line of sight \citep[e.g.,][]{hml+06}.
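The field estimate follows from the standard relation $\langle B_\parallel\rangle \approx 1.232\,\mbox{RM}/\mbox{DM}\ \mu$G (RM in rad\,m$^{-2}$, DM in cm$^{-3}$\,pc); the rounded prefactor 1.2 used in the text gives the same value to one significant figure:

```python
RM = 76.0    # rad m^-2
DM = 178.0   # cm^-3 pc

B_par = 1.232 * RM / DM   # mean line-of-sight field, microgauss
# B_par ~ 0.53 microgauss, i.e. the ~0.5 muG quoted in the text
```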
In spite of the variability of the profiles shown in Figure~\ref{fig:pol}
there are three striking and constant aspects to the polarization
profiles. First, the fractional linear polarization is extremely
high, close to 100\%, and remains high at all frequencies measured
here\footnote{\citet{crh+06} reported that the pulsar was 65\% linearly
polarized at 1.4\,GHz. The discrepancy arises from then-uncorrected
Faraday rotation within the observing band.}. Secondly, there is a
shallow increase in the PA as a function of rotational phase, which
remains essentially unchanged regardless of time or frequency of the
observations. The rate of change is reasonably constant over the ``main''
pulse profile components and is around $0\fdg5$\,deg$^{-1}$. Finally,
there is little or no circular polarization ($\la 5\%$) in the integrated
profiles at any frequency (Fig.~\ref{fig:pol}), or in individual pulses
at 1.4\,GHz except for occasional levels up to $\sim 10\%$ of total
intensity in the ``precursor'' pulse components (cf. Fig.~\ref{fig:sp}).
The emission from XTE~J1810--197\ changed in character in late 2006 July
\citep{ccr+07}. While daily variations continue unabated, generally
the pulse profiles are broader (compare Figs.~\ref{fig:pol}~[a]
and [b]) and the fluxes are lower (the peak flux densities listed
in Table~\ref{tab:obs} attest to this). In contrast, the general
polarization characteristics do not seem to vary. This suggests that the
gross observed changes in profile morphology are not due to detectable
changes in the underlying magnetic field geometry of the emission regions.
With the variability of the integrated profiles as a caveat, we
nevertheless attempt to compare the profiles at 1.4, 3.2 and 8.4\,GHz.
In order to isolate long-term variations, we consider for this purpose
only data taken in a one-week period in 2006 September. The double
peaked profile gets narrower as the frequency increases and the ratio of
the leading to trailing component becomes larger (Fig.~\ref{fig:pa}).
Also, the slow PA sweep (and absolute value of the PA) is identical
at all frequencies, as expected in the ``rotating vector model''
of \citet{rc69a}. This is consistent with the radius-to-frequency
mapping paradigm in which lower frequencies are emitted farther from the
star than higher frequencies \citep[e.g.,][]{cor78}. Without detailed
geometrical information, however, it is difficult to quantify this effect
\citep[for a brief discussion of radius-to-frequency mapping concerning
XTE~J1810--197, see][]{crh+06,drr07}.
In the very early days of pulsar astronomy, it was realized that
the observed PA swing could be used to derive the geometry of the
star under the assumption that the PA was related to the projection
of the dipolar field lines on the plane of the sky \citep{rc69a}.
Unfortunately, it is difficult to determine the geometry in the majority
of pulsars mainly because of the small longitude range over which they
emit \citep[e.g.,][]{ew01}. This is true of XTE~J1810--197\ also. In addition,
it is a priori unclear whether a dipolar field structure holds true in
this pulsar, although we proceed on the assumption that it might and
see where that leads us. In our post-2006 July data, neither $\alpha$
(the angle between the magnetic and rotation axes) nor $\beta$ (the
angle of closest approach of the line of sight to the magnetic axis)
can be constrained. The earlier data, with the appearance of pulse
components far from the main component (Fig.~\ref{fig:sp}), are more
promising in this regard. Here, however, the main uncertainty is whether
there is 90\arcdeg\ of PA rotation between the widely spaced components.
Formal fits to the data both with and without an extra 90\arcdeg\ of
phase are reasonable (see Fig.~\ref{fig:pafit}). In the former case,
the fits yield values of $\alpha$ near 70\arcdeg\ and high values of
$\beta$ near 20\arcdeg--25\arcdeg. Without the added orthogonal jump,
the fits yield $\alpha \sim 4\arcdeg$ and $\beta \sim 4\arcdeg$ implying
that the magnetic and rotation axes are almost aligned.
The polarization characteristics of XTE~J1810--197\ are very similar to those
seen in young pulsar profiles \citep{jw06}. They too are highly linearly
polarized, often have double profiles, and show a slow swing of PA across
a wide profile. \cite{jw06} showed that a single cone of emission
originating from relatively high in the pulsar magnetosphere could
explain the observed characteristics of young pulsars and it is tempting
to make the same case here. However, there is a significant difference
in the polar cap radius, $\propto P^{-1/2}$ (and light cylinder radius
$c P/2\pi$), between a young pulsar with a period of 0.1\,s and XTE~J1810--197\
with its 5.5\,s period. This makes it difficult to see how such a wide
($\approx 0.15\,P$) observed pulse profile can be produced unless (a)
the emission height is very large or (b) the magnetic and rotation axes
are almost aligned. We will discuss these two possibilities in turn.
In the first case, knowledge of the geometry and the observed pulse
width can be used to compute an emission height. For values of $\alpha$
near 70\arcdeg\ and $\beta \approx 25\arcdeg$, one can use equation~(2)
of \citet{ggr84} to derive the cone opening angle $\rho$. In turn the
emission height can be computed as $\sim 2cP \rho^2 / 9\pi$, or $\sim
20000$\,km. This is about 10\% of the light cylinder radius --- similar
to the value in other young pulsars \citep{jw06}. If this scenario
were typical of magnetars in general then the beaming fraction would be
high and most bright radio active magnetars would likely be detectable
in pulsations. In the second case, for small values of $\alpha$, the
line of sight could remain wholly within the emission beam leading to the
observed wide profile. In this case, if $\alpha$ were to vary slightly
with time (for reasons unknown), there could be a large effect on both the
observed beam shape and torque, both of which have been observed to vary
significantly \citep{ccr+07}. Perhaps the emission in this particular
magnetar could be in part a direct function of the quasi-alignment
between the rotation axis and magnetic axis, or perhaps the alignment
might occur as a natural process in magnetars. In either case, the small
polar cap size would make the beaming fraction of such magnetars rather
low. The main difficulty with this interpretation is that the radio and
X-ray beams appear to be nearly aligned \citep{ccr+07} and the observed
modulation of thermal X-rays is very large \citep[$\sim 50\%$;][]{gh06},
which would be hard to obtain from a nearly aligned rotator.
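The numbers in the first scenario can be reproduced with a short calculation. Here $\rho\approx24\arcdeg$ is an assumed value, chosen for illustration to be consistent with the quoted geometry and pulse width; only the formulas $h\sim 2cP\rho^2/9\pi$ and $R_{\rm LC}=cP/2\pi$ come from the text:

```python
import math

P = 5.54                     # spin period, s
c = 2.998e5                  # speed of light, km/s
rho = math.radians(24.0)     # assumed cone half-opening angle

R_lc = c * P / (2 * math.pi)               # light-cylinder radius, km
h = 2 * c * P * rho ** 2 / (9 * math.pi)   # emission height ~ 2cP rho^2 / (9 pi), km

# R_lc ~ 2.6e5 km; h ~ 2e4 km, i.e. roughly 10% of the light-cylinder radius
```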
In summary, the polarized emission from XTE~J1810--197\ shares many characteristics
of those in young pulsars generally. The emission is highly linearly
polarized with little evolution with frequency, the pulse profile
is wide and double, and there is only a shallow swing of PA through
the main pulse. This leads to the possibility that the ``standard''
pulsar ideas of emission along open magnetic field lines also hold here.
In this case, either the magnetic and rotation axes are almost aligned,
or the emission originates high above the surface of the star, which
is our preferred interpretation. Obvious remaining differences between
XTE~J1810--197\ and other pulsars are its pulse profile variability (which does not
appear to be accompanied by corresponding gross changes in the magnetic
field geometry), fluctuating flux density, and flat spectrum.
\acknowledgements
We thank John Sarkissian for help with observations, and Aidan Hotan
and Aris Karastergiou for discussions. The Parkes Observatory is
part of the Australia Telescope, which is funded by the Commonwealth
of Australia for operation as a National Facility managed by CSIRO.
FC acknowledges the NSF for support through grant AST-05-07376.
\section{Introduction}
Coupled oscillator models have been used to study different aspects of biology, chemistry and engineering, for example chemical waves \cite{kuramoto2003chemical}, flashing of fireflies \cite{mirollo1990synchronization}, laser arrays \cite{winful1988stability, wang1988dynamics}, power system networks \cite{dorfler2013synchronization}, neural networks \cite{kopell1988coupled,hansel1993phase,crook1997role,park2016weakly}, movement of a slime mold \cite{TFE}, and coupled
predator-prey systems \cite{wall2013synchronization,zhang2015robust}.
Time delays in the connections between the oscillators are inescapable due to the time
for a signal to propagate from one element to the other.
Many of these systems exhibit phase-locking behaviour, i.e., all the oscillators have
similar waveforms and frequencies, but with some fixed phase difference between different
oscillators. To study the existence and stability of such phase-locked solutions and how
they are related to the time delay and other parameters, one must formulate a model
for the system. We discuss two approaches below.
One approach to study connected networks of oscillators is through phase models
\cite{DorflerB14}. In these models, each oscillator is represented only by
its phase along its limit cycle,
with amplitude variation neglected \cite{hoppensteadt2012weakly,Porter2014Dynamical,schwemmer2012theory}.
Phase models take the general form \cite{campbell2018phase,hoppensteadt2012weakly}:
\begin{equation}
\label{General_Phase_Model}
\frac{d \theta_{i}}{d \xi}=\Omega_{i}+ H_{i}\left(\theta_{1}(\xi), \ldots, \theta_{n}(\xi)\right), \quad i=1, \ldots, n,
\end{equation}
where $\theta_i\in[0,2\pi)$ is the phase of the $i^{\rm th}$ oscillator, $\Omega_i>0$
the natural frequency and $H_i$ are the connection functions.
Motivated by the famous \textit{Kuramoto model} \cite{kuramoto2003chemical},
in the literature the functions $H_i$ often take the form:
\begin{equation}
\label{fun_H_KM}
H_{i}\left(\theta_{1}(\xi), \ldots, \theta_{n}(\xi)\right)= \sum_{j=1}^{n} K_{ij}H \left(\theta_{j}(\xi)-\theta_{i}(\xi)\right), \quad i=1, \ldots, n,
\end{equation}
where $K_{ij}$ is the adjacency matrix of
an unweighted network \cite{Porter2014Dynamical,earl2003synchronization}.
In the original Kuramoto model \cite{kuramoto2003chemical} the function $H$ in (\ref{fun_H_KM}) is the sine function. Usually, transmission time delay is introduced as an explicit delay in the argument of the phases
\cite{earl2003synchronization,kim1997multistability,Luz,NSK,yeung1999time,Schuster1989Mutual}:
\begin{equation}
\label{Kuramoto_model_Exp_Delay}
\frac{d \theta_{i}}{d \xi}=\Omega_{i}+ \sum_{j=1}^{n} K_{ij}H\left(\theta_{j}(\xi-\tau)-\theta_{i}(\xi)\right), \quad i=1, \ldots, n.
\end{equation}
Most studies of this model focus only on synchronization
\cite{earl2003synchronization} or use simplifications such as
$H(\cdot)=\sin(\cdot)$ \cite{ermentrout2009delays,kim1997multistability,Luz,NSK,Schuster1989Mutual,yeung1999time} or $n=2$
\cite{kim1997multistability,Schuster1989Mutual,yeung1999time}.
Other authors introduce additional processes into system (\ref{Kuramoto_model_Exp_Delay}). For instance, in \cite{kim1997multistability},
the dynamic behavior of coupled oscillators with time delayed interaction under a pinning force is studied. In \cite{NSK,yeung1999time}, the authors study time delayed phase models with $H=\sin(\cdot)$ and random noise forcing.
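Model (\ref{Kuramoto_model_Exp_Delay}) can be explored directly by forward Euler integration with a history buffer holding the delayed phases. The sketch below (two identical oscillators, $H=\sin$, arbitrary illustrative parameters) shows the phase difference locking:

```python
import math

# two identical oscillators, H = sin, symmetric coupling (illustrative parameters)
Omega, K, tau, dt, T = 1.0, 0.5, 1.0, 0.01, 100.0
d = int(round(tau / dt))                      # delay measured in time steps
theta = [[0.0, 0.5] for _ in range(d + 1)]    # constant history on [-tau, 0]

for k in range(d, d + int(T / dt)):
    cur, lag = theta[k], theta[k - d]         # current and delayed phases
    theta.append([cur[i] + dt * (Omega + K * math.sin(lag[1 - i] - cur[i]))
                  for i in range(2)])

psi = (theta[-1][1] - theta[-1][0]) % (2 * math.pi)
gap = min(psi, 2 * math.pi - psi)
# gap ends near zero: the oscillators phase-lock (in phase) for these parameters
```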
Finally, a \textit{phase shift} is sometimes included in the model of a network of
connected oscillators to represent the temporal distance between the oscillators.
In general, the phase shift between two oscillators $\alpha_{ij}$ is incorporated
in the phase model as, see e.g., \cite{brede2016frustration,ermentrout2009delays,sakaguchi1986soluble},
\begin{equation}
\label{Kuramoto_model_phasShift}
\frac{d \theta_{i}}{d \xi}=\Omega_{i}+ \sum_{j=1}^{n} K_{ij}H\left(\theta_{j}(\xi)-\theta_{i}(\xi)-\alpha_{ij}\right), \quad i=1, \ldots, n.
\end{equation}
In the case where $H(\cdot)=\sin(\cdot)$ this model is called the Kuramoto-Sakaguchi
model\cite{sakaguchi1986soluble}.
In fact, there is a relation between such phase shifts and the transmission time delay.
In \cite{izhikevich1998phase,ermentrout1994introduction}, the authors have shown how the model with delay and the model with the phase shift are linked. We will review the
details of this link later in this section.
Models of coupled oscillators are also formulated as physically or biological derived
differential equations \cite{wall2013synchronization,zhang2015robust,campbell2018phase}.
These models are of the form
\begin{equation}
\label{phase_eq1}
\review{\frac{d \mathbf{X}_{i}}{d \rho}={\mathbf{F}}_{i}\left(\mathbf{X}_{i}(\rho)\right)+\epsilon {\mathbf{G}}_{i}\big(\mathbf{X}_{1}(\rho), \ldots,\mathbf{X}_{i}(\rho),\ldots, \mathbf{X}_{n}(\rho)\big), \quad i=1, \ldots, n,\quad \mathbf{X}_i\in\mathbb{R}^m,}
\end{equation}
and are such that when $\epsilon=0$ the dynamical system of each uncoupled oscillator
has an exponentially asymptotically stable $T_i-$periodic limit cycle with corresponding
(natural) frequency $\Omega_i$. In these models, \review{$\mathbf{X}_{i}$ represents the state of the
$i^{th}$ oscillator of the system}, ${\mathbf{G}}_i$ are the coupling functions
and $\epsilon>0$ is the coupling strength \cite{ET10,hoppensteadt2012weakly,izhikevich1998phase,KE02}. Note that $\mathbf{X}_{i}$ is a vector of dimension at least $2$, but
can be high dimensional.
For example, in a pendulum model $\mathbf{X}_{i}$ represents the position and velocity of the
$i^{th}$ pendulum, while in a neural model $\mathbf{X}_{i}$ represents the voltage and gating
variables of the $i^{th}$ neuron.
If the coupling is weak, $0<\epsilon\ll 1$, then the theory of weakly coupled
oscillators can be used to connect the physical model \eqref{phase_eq1} to a
phase model \cite{crook1997role,ET10,KE02,galan2009phase,ZS09}.
More precisely,
the dynamics of each oscillator in the network can be rigorously reduced to a single
equation that indicates how the phase of the oscillator changes in time
\cite{hoppensteadt2012weakly,izhikevich1998phase,schwemmer2012theory}.
One form of weakly coupled oscillator theory is \textit{Malkin's Theorem} where the
connection functions in the phase model are determined explicitly in terms of ${\mathbf{G}}_i$ and
the limit cycles of the uncoupled system, (\ref{phase_eq1}) with $\epsilon=0$.
Let $\varphi_i(t)\in\mathbb{S}^1$ be the {\em phase deviation} of the $i^{\rm th}$ oscillator
of (\ref{phase_eq1}), i.e., the change in the phase due to the coupling.
It then follows from Malkin's Theorem (see e.g., \cite[Theorem 9.2]{hoppensteadt2012weakly}) that
the dynamics of (\ref{phase_eq1}) can be described by the phase deviation model:
\begin{equation}
\begin{aligned}
\label{intor_21}
\frac{d\varphi_i}{d{t}}&=H_i\big(\varphi_{1}\left(t\right)-\varphi_{i}\left(t\right), \ldots, \varphi_{n}\left(t\right)-\varphi_{i}\left(t\right)\big)+\mathcal{O}(\epsilon), \quad i=1, \ldots, n,
\end{aligned}
\end{equation}
where $H_i$ are the phase interaction functions and the variable $t:=\epsilon \rho$
represents slow time because the phase deviations $\varphi_i$ are slow
variables. The references
\cite{KE02,hoppensteadt2012weakly,schwemmer2012theory} provide other forms
of the theory and give further references.
\review{We also refer the reader to the recent articles \cite{R1pietras2019network,R2nakao2016phase,R3ashwin2016mathematical} for an overview of various numerical and analytical techniques for phase reduction.}
In \cite{izhikevich1998phase}, Izhikevich generalizes Malkin's theorem to weakly connected oscillators with fixed delay, $\tau$, in their interaction:
\begin{equation} \begin{aligned}
\label{phase_eq2}
\review{\frac{d \mathbf{X}_{i}}{d \rho}={\mathbf{F}}_{i}\left(\mathbf{X}_{i}(\rho)\right)+\epsilon {\mathbf{G}}_{i}\big(\mathbf{X}_{1}(\rho-\tau), \ldots, \mathbf{X}_{i}(\rho-\tau), \ldots \mathbf{X}_{n}(\rho-\tau)\big), \quad i=1, \ldots, n,\quad \mathbf{X}_{i}\in\mathbb{R}^m}
\end{aligned}\end{equation}
where all uncoupled oscillators have nearly identical natural frequencies.
Assuming the natural frequency is $1$, Izhikevich shows that the phase deviation model corresponding to (\ref{phase_eq2}) is
\begin{equation}
\begin{aligned}
\label{intor_2}
\frac{d\varphi_i}{d{t}}&=H_i\big(\varphi_{1}\left(t-\eta\right)-\varphi_{i}\left(t\right)-\zeta, \ldots, \varphi_{n}\left(t-\eta\right)-\varphi_{i}\left(t\right)-\zeta\big)+\mathcal{O}(\epsilon), \quad i=1, \ldots, n,
\end{aligned}
\end{equation}
where $\eta:=\epsilon\tau$ and $\zeta:=\tau\mod2\pi$.
The functions $H_i$ are still defined explicitly in terms of ${\mathbf{G}}_i$ and the uncoupled limit cycle in (\ref{phase_eq2}).
It is clear that the time delay $\tau$ enters the phase model (\ref{intor_2}) as both
an explicit delay, $\eta$, and a phase shift, $\zeta$.
The major result that Izhikevich proved in \cite{izhikevich1998phase} is that if the delay $\tau$ in (\ref{phase_eq2}) satisfies $\epsilon\tau=\mathcal{O}(1)$ (large delay), then
the explicit delay occurs in the phase model (\ref{intor_2}).
However, when the delay satisfies $\tau=\mathcal{O}(1)$ with respect to $\epsilon$ (small delay), no delay appears in the argument of the phases. Hence, (\ref{intor_2})
becomes:
\begin{equation}
\begin{aligned}
\label{intor_7}
\frac{d\varphi_i}{d{t}}&=H_i\big(\varphi_{1}\left(t\right)-\varphi_{i}\left(t\right)-\zeta, \ldots, \varphi_{n}\left(t\right)-\varphi_{i}\left(t\right)-\zeta\big)+\mathcal{O}(\epsilon), \quad i=1, \ldots, n.\end{aligned}
\end{equation}
We refer the reader to the review article
\cite{ermentrout2009delays} and the references therein for different scenarios where large or small delay appears in-phase models.
\review{In this article we focus on physical models with the following particular form
\begin{equation}
\frac{{d{{\mathbf{X}}_i}}}{{d\rho}} = {\mathbf{F}}({{\mathbf{X}}_i}(\rho)) + \epsilon\sum\limits_{j = 1}^n {{K_{ij}}} {\mathbf{G}}({{\mathbf{X}}_i}(\rho),{{\mathbf{X}}_j}(\rho - \tau )), \quad i = 1, \ldots n,\ {{\mathbf{X}}_i} \in {\mathbb{R}^m}
\label{newmodel}
\end{equation}
where $K_{ii}=0$.
This represents the following modelling assumptions: the oscillators are
identical; the coupling occurs pairwise between the oscillators, with no
coupling from an oscillator to itself; and the coupling to the $i^{th}$
oscillator occurs close to that oscillator, so the time delay represents the
time it takes for information to travel from the $j^{th}$ oscillator to the
$i^{th}$ oscillator. Such structure occurs in models of
biological systems \cite{crook1997role,wall2013synchronization}.}
\review{Assuming the uncoupled
oscillators in \eqref{newmodel} have a natural frequency $\Omega$ and
that $K_{ij}=\mathcal{O}(1)$ with respect to $\epsilon$, we show in the
appendix that the approach of \cite{izhikevich1998phase} can be applied to
yield
\begin{equation}
\frac{d\varphi_i}{dt}=\frac{1}{\Omega}\sum_{j=1}^{n} K_{ij} H(\varphi_j(t-\eta)-\varphi_i(t)-\zeta)
+\mathcal{O}(\epsilon)
\label{pairwise}
\end{equation}
where $\eta:=\epsilon\Omega\tau$ and $\zeta:=\Omega\tau\mod2\pi$, in the case of large delay, i.e., when $\epsilon\Omega\tau=\mathcal{O}(1)$.
In the case of small delay \eqref{pairwise}
becomes
\begin{equation}\label{smaleDealy_intro}
\frac{{d{\varphi _i}(t)}}{{dt}} = \frac{1}{{\Omega}}\sum\limits_{j = 1}^n {{K_{ij}}}H\left( {{\varphi _j}(t) - {\varphi _i}(t) - \Omega \tau } \right)+ \mathcal{O}(\epsilon ).
\end{equation}
}
To see how the phase deviation model relates to the standard phase model, note
that the phases of the oscillators $\theta_i$ in (\ref{newmodel}) have the form:
\begin{equation}
\label{eq1234}
\theta_i(\xi)=\Omega\xi+\varphi_i(t), \quad i=1, \ldots, n,
\end{equation}
where $t=\epsilon \Omega\xi$, see \cite{izhikevich1998phase,hoppensteadt2012weakly}. Notice
that the natural frequency of each uncoupled oscillator in (\ref{eq1234}) is $\Omega$.
Then,
\begin{equation} \begin{aligned}
\label{New_equation_1}
\frac{d \theta_{i}}{d \xi}=\Omega+ \epsilon \Omega\frac{d \varphi_i} {d t}=\Omega +\epsilon \sum_{j=1}^{n} K_{ij}H\left(\theta_{j}(\xi-\tau)-\theta_{i}(\xi)\right)+\mathcal{O}(\epsilon^2).
\end{aligned}\end{equation}
Similarly, when the time delay is small, we have
\begin{equation} \begin{aligned}
\label{New_equation_2}
\frac{d \theta_{i} }{d \xi}=\Omega+ \epsilon\sum_{j=1}^{n} K_{ij}H
\left(\theta_{j}(\xi)-\theta_{i}(\xi)-\zeta\right)+\mathcal{O}(\epsilon^2).
\end{aligned}\end{equation}
Thus in the phase model formulation, the coupling strength parameter $\epsilon$
explicitly appears in front of the connection function $H$.
Regarding the dynamics,
it follows from (\ref{eq1234}) that
\[
\theta_{i+1}-\theta_{i}=\varphi_{i+1}-\varphi_{i},\quad i=1,\ldots,{n-1}
\]
i.e., phase-locked solutions are the same as phase deviation locked solutions \cite{hoppensteadt2012weakly}. The existence and stability of phase-locked solutions of
system (\ref{New_equation_2}) have been studied in the case of two oscillators
\cite{campbell2012phase,ermentrout2009delays} and many oscillators with
structured coupling \cite{campbell2018phase,ermentrout2009delays,ko2004wave}.
The goals in this paper are twofold. First, the majority of studies of coupled
oscillators with large delays have been done in the context of isolated phase models, often with just sine function coupling. Thus we will revisit and extend
this analysis in the case where the phase model is explicitly connected to a
physical differential equation model and the function $H$ is general. In
particular, we will show that
multiple stable phase-locked solutions of the same type may occur even when the
coupling is weak.
Second, note that the small delay phase deviation model
\eqref{smaleDealy_intro} is a system of ordinary differential equations, while the large
delay phase model \eqref{pairwise} is a system of delay differential equations. Thus the
spectrum of Floquet multipliers of a periodic solution is finite for the former and countably
infinite for the latter. Nevertheless, several studies have verified numerically
that the model \eqref{smaleDealy_intro} gives an accurate description of the existence and stability
of phase-locked periodic solutions of \eqref{newmodel} in the case of weak coupling
and small delay
\cite{campbell2012phase,campbell2018phase}. Here we will show why this is the case.
In particular we will show how the solutions of system (\ref{newmodel}) reduce to those of system (\ref{smaleDealy_intro}) if the delay is small.
In this article, \review{we will focus on (\ref{newmodel})} when $n=2$ as this is enough to illustrate our main points.
The paper is organized as follows. In the next section, we reduce the model of two weakly connected oscillators with large time delay to a phase model, and study the existence of
phase-locked solutions. In Section \ref{sec_stability}, we give a complete
description of the stability criteria for all phase-locked solutions and
describe the potential bifurcations
that can occur in the system. Then we compare our results with the
stability criteria in \cite{campbell2012phase} when the time delay is small.
In Section \ref{Sec3},
we consider a particular application to Morris-Lecar oscillators with diffusive coupling. Numerically, we derive the corresponding
phase model, calculate the phase-locked solutions, determine their stability and explore the existence of bifurcations.
We also compare the predictions of the phase model with solutions of the full model. Finally, we examine the behaviour when the time delay is small.
In Section \ref{sec_conc}, we discuss our results.
\section{Phase Model}
Consider the system of ODEs
\begin{equation}
\label{ODE}
\frac{d\mathbf{X}_i}{d\rho}={\mathbf{F}}({\mathbf{X}_i}(\rho)),\quad i=1,2,\quad \mathbf{X}_i\in\mathbb{R}^m.
\end{equation}
Assume that the system (\ref{ODE}) admits an exponentially asymptotically stable periodic
orbit given by $\mathbf{X}=\hat{\mathbf{X}}(\rho)$ with natural frequency $\Omega$, $0\le \rho\le T=2\pi/\Omega$.
Next, consider a weakly connected system
of two identical coupled oscillators of the form (\ref{ODE}) with time delayed coupling:
\begin{equation} \begin{aligned}
\label{Full_Mod}
\frac{d\mathbf{X}_1}{d{\rho}}&=\mathbf{F}\left(\mathbf{X}_1(\rho) \right)+\epsilon \mathbf{G}\left(\mathbf{X}_1(\rho),\mathbf{X}_2(\rho-\tau);\epsilon\right),\\
\frac{d\mathbf{X}_2}{d\rho}&=\mathbf{F}\left(\mathbf{X}_2(\rho) \right)+\epsilon \mathbf{G}\left(\mathbf{X}_2(\rho),\mathbf{X}_1(\rho-\tau);\epsilon\right),
\end{aligned}\end{equation}
where $\mathbf{G}:\mathbb{R}^m\times\mathbb{R}^m\to \mathbb{R}^m $ describes the coupling between the two oscillators and $\epsilon$ is the coupling strength.
Assume that $\epsilon$ is sufficiently small and $\eta:=\epsilon\Omega\tau=\mathcal{O}(1)$. Let
${t}=\epsilon\Omega\rho$ be slow time and $\varphi_i(t)\in \mathbb{S}^1$ be the phase deviation from the
natural oscillation $\hat{\mathbf{X}}(\rho)$, $\rho\ge 0$. Then, by applying the theory of weakly coupled oscillators with delayed interactions in \cite{izhikevich1998phase}, $(\varphi_1,\varphi_2)^T\in \mathbb{T}^2$ is a solution to
\begin{equation} \begin{aligned}
\label{phase_ModEpsilon}
\frac{d\varphi_1}{d{t}}&=\frac{1}{\Omega} H(\varphi_2(t-\eta)-\varphi_1(t)-\Omega\tau)+\mathcal{O}(\epsilon),\\
\frac{d\varphi_2}{d{t}}&=\frac{1}{\Omega} H(\varphi_1(t-\eta)-\varphi_2(t)-\Omega\tau)+\mathcal{O}(\epsilon),
\end{aligned}\end{equation}
where $H$ is a $2\pi-$periodic function defined by
\begin{equation}\label{New:funH}
H(\phi ) = \frac{1}{{2\pi }}\int\limits_0^{2\pi } {\hat{\mathbf{Z}}{{(\rho)}^T}\mathbf{G}\left( {\hat {\mathbf X}(\rho),\hat {\mathbf X}(\rho + \phi )} \right)} d\rho.
\end{equation}
Here $\hat{\mathbf{Z}}{(\rho)}$ is the unique nontrivial $2\pi-$periodic solution to the adjoint linear system
\[\frac{{d\hat{\mathbf{Z}}}}{{d\rho}} = - {\left[ {D\mathbf{F}\left( {\hat {\mathbf{X}}(\rho)} \right)} \right]^T}\hat{\mathbf{Z}}\]
satisfying the normalization condition
\[\frac{1}{{2\pi }}\int\limits_0^{2\pi } {\hat{\mathbf{Z}}(\rho) \cdot } \mathbf{F}\left( {\hat {\mathbf{X}}(\rho)} \right)d\rho = 1.\]
\review{The derivation of system (\ref{phase_ModEpsilon}) from (\ref{Full_Mod}) follows from the Appendix with $n=2$ and $K_{12}=K_{21}=1$}.
Dropping the terms $\mathcal{O}(\epsilon)$ in (\ref{phase_ModEpsilon}), we obtain the phase deviation model:
\begin{equation} \begin{aligned}
\label{phase_Mod}
\frac{d\varphi_1}{d{t}}&=\frac{1}{\Omega} H(\varphi_2(t-\eta)-\varphi_1(t)-\Omega\tau),\\
\frac{d\varphi_2}{d{t}}&=\frac{1}{\Omega} H(\varphi_1(t-\eta)-\varphi_2(t)-\Omega\tau).
\end{aligned}\end{equation}
For simplicity, in the rest of the paper we will refer to (\ref{phase_Mod}) as the \textit{phase model} instead of the phase deviation model.
We study the dynamics of the model (\ref{phase_Mod}) by exploring \textit{phase
locking} in (\ref{phase_Mod}), that is, solutions of (\ref{phase_Mod}) such that $\varphi_2-\varphi_1=\text{constant}$ \cite{hoppensteadt2012weakly}.
We suppose that
\begin{equation}
\varphi_1(t)=\omega t\qquad\text{and}\qquad \varphi_2(t)=\omega t+\psi
\label{phases}
\end{equation}
where $\omega$ is the frequency deviation of the oscillator and $\psi$ is the
natural phase difference \cite{hoppensteadt2012weakly}. Substituting (\ref{phases}) into (\ref{phase_Mod}) leads to
\begin{equation} \begin{aligned}
\label{sol_sys}
\omega-\frac{1}{\Omega} H(\psi-\omega\eta-\Omega\tau)&=0,\\
\omega-\frac{1}{\Omega} H(-\psi-\omega\eta-\Omega\tau)&=0.
\end{aligned}\end{equation}
We rewrite this as
\begin{equation} \begin{aligned}
F(\omega,\psi)=0=F(\omega,-\psi)
\label{Fsys}
\end{aligned}\end{equation}
where
\begin{equation} \begin{aligned}
F(\omega,\cdot):=\omega-\frac{1}{\Omega} H(\cdot-\omega\eta-\Omega\tau).
\label{Fdef}
\end{aligned}\end{equation}
In this article, we are interested in exploring how the solutions \review{($\psi$ and $\omega$) of (\ref{sol_sys}) vary with $\tau$ when the coupling strength ($\epsilon$) and frequency ($\Omega$) are fixed. Note that we need only investigate $\psi$
in $[0,2\pi)$, due to the $2\pi$ periodicity of $H$, and $\omega\in\mathbb{R}$.}
First, by subtracting the equations of (\ref{sol_sys}), we obtain
\begin{equation}
\label{eq_H}
H(\psi-\omega\eta-\Omega\tau)-H(-\psi-\omega\eta-\Omega\tau)=0.
\end{equation}
Since $H$ is a $2\pi-$periodic function, equation (\ref{eq_H}) always has
the solutions $\psi=0,\pi$.
The corresponding frequency deviation is determined from the equation
\begin{equation}
\label{omega0}
F(\omega,0)=\omega-\frac{1}{\Omega} H(-\omega\eta-\Omega\tau)=0
\end{equation}
when $\psi=0$ and
\begin{equation}
\label{omegapi}
F(\omega,\pi)=\omega-\frac{1}{\Omega} H(\pi-\omega\eta-\Omega\tau)=0
\end{equation}
when $\psi=\pi$.
Equations (\ref{omega0}) and (\ref{omegapi}) are guaranteed to have at least one
solution due to the continuity and $2\pi$ periodicity of $H$.
In fact, if $\tau$ is sufficiently large, they may have multiple solutions. To see
this, recall that $\eta=\epsilon\Omega\tau$ and note that
\begin{equation} \begin{aligned}
F_\omega(\omega,0)=1+\epsilon \tau H'(-\omega \epsilon\Omega \tau-\Omega\tau),
\label{Fp0def}
\end{aligned}\end{equation}
\review{where $F_\omega$ is the partial derivative of $F$ with respect to $\omega$.
If
there exists $\overline{\omega}$ such that $F(\overline{\omega},0)=0$ and
$F_\omega(\overline{\omega},0)<0$ then \eqref{omega0} has more than one solution.
Similar arguments apply to equation (\ref{omegapi}).
This may be possible if $\tau$ is sufficiently large.}
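The possibility of multiple roots can be illustrated numerically. The following Python sketch is illustrative only: the choice $H(\phi)=\sin\phi$ and all parameter values are assumptions, not quantities derived above. It scans $F(\omega,0)$ for sign changes and refines each bracket by bisection.

```python
import math

def count_roots(eps, Omega, tau, lo=-1.5, hi=1.5, steps=600):
    """Locate roots of F(omega, 0) = omega - H(-omega*eta - Omega*tau)/Omega
    for the illustrative choice H(phi) = sin(phi), via a sign-change scan
    followed by bisection refinement."""
    eta = eps * Omega * tau
    F = lambda w: w - math.sin(-w * eta - Omega * tau) / Omega
    roots = []
    h = (hi - lo) / steps
    for k in range(steps):
        a, b = lo + k * h, lo + (k + 1) * h
        if F(a) * F(b) < 0:
            for _ in range(60):          # bisection refinement
                m = 0.5 * (a + b)
                if F(a) * F(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

small = count_roots(eps=0.1, Omega=1.0, tau=1.0)    # small delay
large = count_roots(eps=0.1, Omega=1.0, tau=60.0)   # large delay
print(len(small), len(large))   # prints: 1 3 for these parameter choices
```

For small delay $F(\cdot,0)$ is monotone and there is a single frequency deviation, while for the large delay there are three, consistent with the argument above.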
\begin{remark}
The solutions $\psi^*=0$ and $\psi^*=\pi$ of (\ref{eq_H}) correspond to \textbf{in-phase} and \textbf{anti-phase} periodic solutions of the original model (\ref{Full_Mod}), respectively. By an {in-phase} solution we mean that both oscillators reach their highest peak at
the same time, whereas an anti-phase solution means that one oscillator reaches its
highest peak one half-period after the other oscillator.
Examples of these solutions are given in Figure \ref{Fig_case}.
\end{remark}
\begin{figure}[hbt!]
\centering
\hspace{1cm}\includegraphics[width=0.9\textwidth]{Phase_Solution.pdf}
\caption{Illustrations of the phase-locked dynamics of model (\ref{Full_Mod}).}
\label{Fig_case}
\end{figure}
In fact, system (\ref{Full_Mod}) could have other phase-locked solutions (neither in-phase nor anti-phase) corresponding to the solutions $\psi$ of (\ref{sol_sys}) such that $\psi\notin \{0,\pi\}$.
As in \cite{Scholarpedia1}, we will refer to these solutions of (\ref{Full_Mod}) as \textbf{out-of-phase} solutions. Let
$(\omega^*,\psi^*)$ be a solution of (\ref{sol_sys}) at $\tau=\tau^*$ such that $\psi^*\notin \{0,\pi\}$. Then $\omega^*$ and $\psi^*$ satisfy \eqref{Fsys}, that is,
$(\omega^*,\psi^*)$ is an intersection point of the contours $F(\omega,\psi)=0$ and $F(\omega,-\psi)=0$ in the $\omega\psi-$plane.
Suppose that $\psi^*\in (0,\pi)$ and a corresponding $\omega^*$ solve (\ref{sol_sys}) at $\tau=\tau^*$. Then
\begin{equation*} \begin{aligned}
\frac{1}{\Omega} H(\psi^*-2\pi-\omega^*\eta^*-\Omega\tau^*)&=\frac{1}{\Omega} H(\psi^*-\omega^*\eta^*-\Omega\tau^*)=\omega^*
\end{aligned}\end{equation*}
and
\begin{equation*} \begin{aligned}
\frac{1}{\Omega} H(2\pi-\psi^*-\omega^*\eta^*-\Omega\tau^*)&=\frac{1}{\Omega} H(-\psi^*-\omega^*\eta^*-\Omega\tau^*)=\omega^*
\end{aligned}\end{equation*}
due to the periodicity of $H$. Thus, $2\pi-\psi^*$ is also a solution of (\ref{sol_sys}) with corresponding $\omega^*$.
This leads to the following.
\begin{proposition}[\textup{Existence of phase-locked solutions}]
For any interaction function $H$ and any values of $\Omega$, $\epsilon$ and $\tau$
the phase model (\ref{phase_Mod}) has the solutions $\psi^*=0$ and $\psi^*=\pi$ with corresponding frequency deviations determined by (\ref{omega0}) and (\ref{omegapi}), respectively.
If $\psi^*\in(0,\pi)$ with corresponding $\omega^*$ is a solution of (\ref{sol_sys}) at $\tau=\tau^*$, then so is $2\pi-\psi^*$ with $\omega^*$, i.e., out-of-phase solutions come in pairs.
\label{prop_1}
\end{proposition}
\section{Stability }
\label{sec_stability}
In this section, we discuss the linear stability of the solutions (\ref{phases}) of (\ref{phase_Mod}). The linearization of (\ref{phase_Mod}) about the solution (\ref{phases}) is
\begin{equation} \begin{aligned}
\label{LinSys}
\frac{{d{u_1}}}{{dt}} &= - a{u_1}(t) + a{u_2}(t - \eta ),\\
\frac{{d{u_2}}}{{dt}} &= - b{u_2}(t) + b{u_1}(t - \eta ),
\end{aligned}\end{equation}
where
\begin{equation}
\label{ab}
a=\frac{1}{\Omega} H'(\psi-\omega\eta-\Omega\tau)\qquad\text{and}\qquad b=\frac{1}{\Omega} H'(-\psi-\omega\eta-\Omega\tau).
\end{equation}
In (\ref{ab}), $H'$ represents the derivative of $H$ with respect to its argument. It is
useful for our analysis to scale time so the delay becomes one. Applying the scaling
\[ t=\eta s,\quad U_1(s)=u_1(t),\quad U_2(s)=u_2(t), \]
results in
\begin{equation} \begin{aligned}
\label{LinSys_Scaled}
\frac{{d{U_1}}}{{ds}} &= - \eta a {U_1}(s) + \eta a{U_2}(s - 1 ),\\
\frac{{d{U_2}}}{{ds}} &= - \eta b{U_2}(s) + \eta b{U_1}(s -1 ).
\end{aligned}\end{equation}
It follows that the corresponding characteristic equation is
\begin{equation}\label{chactEq}
\Delta (\lambda ;\eta ) = {\lambda ^2} + \eta(a + b)\lambda + \eta^2 ab - \eta^2 ab{e^{ - 2\lambda }}=0.
\end{equation}
In the following we study the distribution of roots of this equation.
\begin{proposition}\label{prop000A}
Assume $ab=0$. Then $\Delta(\lambda;\eta)$ has:
\begin{enumerate}
\item [i.] One positive root and one zero root when $a+b<0$;
\item [ii.] Two zero roots when $a+b=0$;
\item [iii.] One negative root and one zero root when $a+b>0$.
\end{enumerate}
\end{proposition}
\begin{proof} The characteristic equation in this case reduces to
\[ {\lambda ^2} + \eta(a + b)\lambda =0. \]
The result follows.
\end{proof}
\begin{proposition}
$\Delta(\lambda;\eta)$ has a positive real root when one of the following holds.
\begin{enumerate}
\item [i.] $ab>0$ and $a+b<0$;
\item [ii.] $ab< 0$ and $a+b\le0$;
\item [iii.] $ab<0$, $a+b>0$ and $a+b+2\eta a b<0$.
\end{enumerate}
\end{proposition}
\begin{proof} Define
\begin{equation}
\label{f&g}
f(\lambda)=(\lambda+\eta a)(\lambda+\eta b) \quad \text{and}
\quad g(\lambda)=\eta^2 a b e^{-2\lambda}.
\end{equation}
Then $f(0)=g(0)=\eta^2 a b$ and
\begin{equation}
\label{eq1543}
\Delta(\lambda;\eta)=0 \quad \iff \quad f(\lambda)= g(\lambda).
\end{equation}
\begin{enumerate}
\item [i.] It follows from $ab>0$ and $a+b<0$ that $a<0$ and $b<0$. Since (\ref{chactEq}) is symmetric in $a$ and $b$, without loss of generality, we may assume $b<a<0$.
Note that $f(-\eta b)=0<g(-\eta b)$. Since $f$ is positive and increasing for $\lambda>-\eta b>0$ and $g$ is positive and decreasing for $\lambda>0$, there exists $\lambda^*>-\eta b$ such that $f(\lambda^*)=g(\lambda^*)$, see Figure \ref{Fig0_A}.
\item[ii.] Assume $a> 0$ and $b<0$. When $a+b<0$, $f$ is decreasing for $\lambda\in \left( 0,-\frac{a+b}{2}\eta\right)$ and is increasing for $\lambda> -\frac{a+b}{2}\eta$. Further, $g$ increases for $\lambda> 0$ and $\lim_{\lambda\to \infty}g(\lambda)=0$, thus there exists $\lambda^*\in \left( -\frac{a+b}{2}\eta,-\eta b\right)$ such that $f(\lambda^*)=g(\lambda^*)$, see Figure \ref{Fig0_B}.
When $a+b=0$, $f(0)=g(0)=\eta^2 a b$, $f'(0)=0<g'(0)$ and $f$ is increasing for $\lambda> 0$. Thus, with the same arguments, $\lambda^*$ lies in $\left(0,-\eta b\right)$.
\item[iii.] Assume $a>0$ and $b<0$. In this case $f$ and $g$ are increasing for $\lambda>0$ and $g<0$ for $\lambda\ge 0$. Since $f'(0)=\eta(a+b)<-2\eta^2 a b=g'(0)$, then there exists $\lambda^*\in (0,-\eta b)$ such that $f(\lambda^*)=g(\lambda^*)$, see Figure \ref{Fig0_C}.
\end{enumerate}
\end{proof}
\begin{figure}[hbt!]
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=1\textwidth]{Fig0_A.pdf}
\caption{$ab>0$ and $a+b<0$.}
\label{Fig0_A}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.80\textwidth]{Fig0_B.pdf}
\caption{$ab<0$ and $a+b<0$.}
\label{Fig0_B}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.90\textwidth]{Fig0_C.pdf}
\caption{$ab<0$ and $a+b>0$.}
\label{Fig0_C}
\end{subfigure}
\caption{Positive real roots of $\Delta(\lambda;\eta)=0$.}
\label{Fig0}
\end{figure}
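The existence of a positive real root established in these cases can be confirmed numerically. In the Python sketch below, the values of $a$, $b$ and $\eta$ are illustrative assumptions chosen to satisfy cases i and iii; the routine marches along the positive real axis until a sign change of $\Delta(\lambda;\eta)$ is bracketed and then refines it by bisection.

```python
import math

def Delta(lam, eta, a, b):
    """Characteristic function Delta(lambda; eta) from (chactEq)."""
    return lam**2 + eta*(a + b)*lam + eta**2*a*b*(1.0 - math.exp(-2.0*lam))

def positive_root(eta, a, b, lo=1e-6, hi=50.0, step=0.01):
    """March over (lo, hi) until a sign change of Delta is bracketed,
    then bisect; returns a positive real root or None."""
    x, fx = lo, Delta(lo, eta, a, b)
    while x < hi:
        y = x + step
        fy = Delta(y, eta, a, b)
        if fx * fy < 0:
            for _ in range(80):          # bisection refinement
                m = 0.5 * (x + y)
                if Delta(x, eta, a, b) * Delta(m, eta, a, b) <= 0:
                    y = m
                else:
                    x = m
            return 0.5 * (x + y)
        x, fx = y, fy
    return None

r1 = positive_root(eta=1.0, a=-1.0, b=-2.0)   # case i:  ab > 0, a + b < 0
r3 = positive_root(eta=2.0, a=2.0,  b=-1.0)   # case iii: ab < 0, a + b > 0, a + b + 2*eta*a*b < 0
```

Case ii can be checked in exactly the same way with suitable signs of $a$ and $b$.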
\begin{proposition}
When $ab>0$ and $a+b>0$, $\Delta(\lambda;\eta)$ has no roots with positive real part.
\end{proposition}
\begin{proof}
Since $ab>0$ and $a+b>0$, we have $a>0$ and $b>0$. Assume there is a root $\lambda^*=x+iy$ of $\Delta(\lambda;\eta)=0$ with $x>0$. Then, it follows from (\ref{f&g}) and (\ref{eq1543}) that
\begin{equation}
|f(\lambda^*)|=|g(\lambda^*)|.
\label{eq1201}
\end{equation}
Notice that, due \review{to} the positivity of $x$ we get
\[\left| {f\left( {{\lambda ^*}} \right)} \right| = \sqrt {{{(x + \eta a)}^2} + {y^2}} \sqrt {{{(x + \eta b)}^2} + {y^2}} > {\eta ^2}ab\]
and
\[\left| {g\left( {{\lambda ^*}} \right)} \right| = {\eta ^2}ab{e^{ - 2x}} < {\eta ^2}ab.\]
Hence, $|f(\lambda^*)|>|g(\lambda^*)|$, which contradicts (\ref{eq1201}). Thus, all roots of $\Delta(\lambda;\eta)=0$ have nonpositive real parts when $ab>0$ and $a+b>0$.
\end{proof}
\begin{proposition}
\label{prop_zero_root}
$\lambda=0$ is a root of (\ref{chactEq}) for any $\eta$. If $\eta\neq \eta^*:=-\frac{a+b}{2ab}$ then $\lambda=0$ is a simple root. Otherwise, it is a double root. The double multiplicity of $\lambda=0$ occurs only in the following cases.
\begin{enumerate}
\item [i.] $ab>0$ and $a+b<0$;
\item [ii.] $ab<0$ and $a+b>0$.
\end{enumerate}
\end{proposition}
\begin{proof}
It is clear that $\Delta(0;\eta)=0$ and
$\Delta'(0;\eta)=\eta\left(a+b+2ab\eta\right)$
where $'$ is the derivative with respect to $\lambda$. If $\eta\neq \eta^*$ then $\Delta'(0;\eta)\neq0$, and hence $\lambda=0$ is a simple root. When $\eta=\eta^*$, we have $\Delta'(0;\eta^*)=0$ and
\[\Delta''(0;\eta^*)=-\frac{a^2+b^2}{ab}\neq 0.\]
Thus, $\lambda=0$ has double multiplicity.
\noindent It is clear that $\eta^*$ exists if and only if
\[-\frac{a+b}{2ab}>0\iff \{ab>0\ \text{and}\ a+b<0\}\ \text{or}\ \{ab<0\ \text{and}\ a+b>0\}.\]
\end{proof}
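Proposition \ref{prop_zero_root} is easy to check numerically. The Python fragment below uses the illustrative assumption $a=-1$, $b=-2$ (so that $ab>0$, $a+b<0$ and $\eta^*$ exists) and evaluates $\Delta$ and its first two derivatives at $\lambda=0$.

```python
import math

def Delta(lam, eta, a, b):
    """Characteristic function from (chactEq)."""
    return lam**2 + eta*(a + b)*lam + eta**2*a*b*(1.0 - math.exp(-2.0*lam))

def dDelta(lam, eta, a, b):      # d Delta / d lambda
    return 2.0*lam + eta*(a + b) + 2.0*eta**2*a*b*math.exp(-2.0*lam)

def ddDelta(lam, eta, a, b):     # second derivative with respect to lambda
    return 2.0 - 4.0*eta**2*a*b*math.exp(-2.0*lam)

a, b = -1.0, -2.0                # ab > 0 and a + b < 0, so eta* exists
eta_star = -(a + b) / (2.0*a*b)  # eta* = 0.75

# lambda = 0 is a root for every eta ...
for eta in (0.3, eta_star, 2.0):
    assert abs(Delta(0.0, eta, a, b)) < 1e-14

# ... simple when eta != eta*, double when eta = eta*
assert abs(dDelta(0.0, 0.3, a, b)) > 1e-6
assert abs(dDelta(0.0, eta_star, a, b)) < 1e-12
assert abs(ddDelta(0.0, eta_star, a, b) + (a*a + b*b)/(a*b)) < 1e-12
```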
\begin{proposition}\label{prop_new1}
When $ab<0$, $a+b>0$ and $a+b+2\eta ab\ge 0$, $\Delta(\lambda;\eta)$ has no roots with positive real part.
\end{proposition}
\begin{proof}
\review{Note that the characteristic equation \eqref{chactEq} can be written as
\[{\Delta }(\lambda ;\eta )= \lambda^2+\eta(a+b)\lambda +\eta^2 ab\int_0^2 \lambda e^{-u\lambda} du=0.\]
Suppose that ${\Delta }(\lambda ;\eta )=0$ has root $\bar{\lambda}$ with ${\rm{Re}}(\bar{\lambda})>0$. Then
\[\left| {\bar \lambda (\bar \lambda + \eta(a+b))} \right| = \left| {{\eta ^2}ab\bar \lambda \int\limits_0^2 {{e^{ - u\bar \lambda }}du} } \right| \le {\eta ^2}\left| {ab} \right||\bar \lambda |\left| {\int\limits_0^2 {{e^{ - u({\rm{Re}}(\bar \lambda ))}}du} } \right| \le 2{\eta ^2}\left| {ab} \right||\bar \lambda |.\]
Since $ab<0$ and $a+b+2\eta ab\ge 0$, we have
\[|\bar \lambda (\bar \lambda + \eta(a+b))| \le - 2{\eta ^2}ab|\bar \lambda | \le \eta (a + b)|\bar \lambda |\]
which is satisfied if $\bar \lambda=0$ (a contradiction) or
\[|\bar{\lambda}+\eta(a+b)| \le \eta(a+b).\]
This implies that $\bar{\lambda}$ is in the disk of radius $\eta ( {a + b} )$ centred at the point $-\eta ( {a + b} )$ in the complex
plane. Thus, ${\rm{Re}}(\bar{\lambda})<0$ or $\bar{\lambda}=0$. In both cases we arrive at a
contradiction.}
\end{proof}
\review{Finally, we show that (\ref{chactEq}) does not have pure imaginary roots for any value of the parameters.}
\begin{proposition}
\label{prop_imag_roots}
The characteristic equation (\ref{chactEq})
has no pure imaginary roots.
\end{proposition}
\begin{proof}
Assume $\lambda=iy$ ($y>0$) is a root of (\ref{chactEq}).
Separating the real and imaginary parts, we obtain
\begin{align*}
\eta^2ab-y^2&=\eta^2ab\cos(2y),\\
\eta(a+b)y&=-\eta^2ab\sin(2y).
\end{align*}
Squaring and adding these equations leads to
\begin{equation*}
y^2\left(y^2+\eta^2(a^2+b^2)\right)=0,
\end{equation*}
which has no positive real roots. Thus, there are no roots of the form $iy$.
\end{proof}
The distribution of roots in (\ref{chactEq}) is summarized in Figure \ref{Fig2}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\textwidth]{Trees15.pdf}
\caption{The distribution of roots in (\ref{chactEq}) \review{as discussed in Propositions \ref{prop000A}$-$\ref{prop_imag_roots}}.}
\label{Fig2}
\end{figure}
Recall the structure of the phase-locked solutions (\ref{phases}) of the phase model
(\ref{phase_Mod}). From this we see that a phase-locked periodic solution of the
original model (\ref{Full_Mod}) corresponds to a line in the phase model
(\ref{phase_Mod}), that is, when $\psi^*$ and $\omega^*$ are solutions
of (\ref{sol_sys}), it follows that
\[\left\{ {\begin{array}{*{20}{c}}
{{\varphi _1} = {\omega ^*}t\qquad}&{\left( {\bmod 2\pi } \right)}\\
{{\varphi _2} = {\omega ^*}t + {\psi ^*}}&{\left( {\bmod 2\pi } \right)}
\end{array}} \right. \Rightarrow {\varphi _2} = {\varphi _1} + {\psi ^*}\left( {\bmod 2\pi } \right).\]
From Proposition \ref{prop_zero_root},
we know that for any $\tau>0$, $\Delta(\lambda;\eta(\tau))=0$ has a zero root.
The simple zero root corresponds to motion along these lines, that is, to
the Floquet multiplier $1$ associated with the periodic solution
of the original model \eqref{Full_Mod}. Thus phase-locked solutions will be
asymptotically stable if $\lambda=0$ is a simple root of the characteristic equation
\eqref{chactEq} and all other roots have negative real part.
\begin{remark}
\label{remark2}
The solution $\psi^*\ne 0,\pi$ is asymptotically stable for values of $a,b$ such
that $a>0$ and $b>0$ or $ab<0,\ a+b>0$ and $a+b+2\eta ab>0$.
Since $H'$ is a $2\pi-$periodic function, the solutions
$\psi^*$ and $2\pi-\psi^*$ have the same stability.
\end{remark}
\begin{remark}
\label{remark1}
Since $H$ is a $2\pi-$periodic function, $a=b=\frac{1}{\Omega}H'(\psi^*-\omega^*\eta-\Omega\tau)$ in (\ref{ab}) when $\psi^*=0,\pi$. Hence, the stability of solutions when $\psi^*=0,\pi$ is determined by the sign of $H'(\psi^*-\omega^*\eta-\Omega\tau)$, that is, the solution is asymptotically stable when $H'(\psi^*-\omega^*\eta-\Omega\tau)>0$ and unstable when $H'(\psi^*-\omega^*\eta-\Omega\tau)<0$.
\end{remark}
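The stability criteria in Remarks \ref{remark2} and \ref{remark1} can be probed by direct simulation of the phase model (\ref{phase_Mod}). The forward-Euler scheme with a history buffer below is a rough sketch: the choice $H(\phi)=\sin\phi$ and all parameter values are assumptions. With $\cos(\Omega\tau)>0$ we have $H'>0$ at the in-phase solution, so it should attract, while for $\Omega\tau=\pi$ the anti-phase solution should.

```python
import math

def simulate(Omega, eps, tau, psi0, T=200.0, dt=0.005):
    """Forward-Euler integration of the delayed phase model (phase_Mod)
    with H(phi) = sin(phi); returns phi2 - phi1 (mod 2*pi) at slow time T."""
    eta = eps * Omega * tau                       # delay in slow time
    d = max(1, round(eta / dt))                   # delay measured in steps
    h1 = [0.0] * (d + 1)                          # constant initial history
    h2 = [psi0] * (d + 1)
    for _ in range(int(T / dt)):
        p1, p2 = h1[-1], h2[-1]                   # current values
        q1, q2 = h1[0], h2[0]                     # values delayed by eta
        h1 = h1[1:] + [p1 + dt * math.sin(q2 - p1 - Omega * tau) / Omega]
        h2 = h2[1:] + [p2 + dt * math.sin(q1 - p2 - Omega * tau) / Omega]
    return (h2[-1] - h1[-1]) % (2.0 * math.pi)

psi_in = simulate(Omega=1.0, eps=0.05, tau=0.5, psi0=0.5)        # cos(0.5) > 0
psi_anti = simulate(Omega=1.0, eps=0.05, tau=math.pi, psi0=3.0)  # Omega*tau = pi
# psi_in settles at 0 (mod 2*pi) and psi_anti at pi
```

Of course such a simulation only probes stability for one initial condition; it complements, rather than replaces, the analysis above.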
\subsection{Bifurcation}\label{bif:sec}
Suppose that $\Omega$ and $\epsilon$ are fixed, but $\tau$ may be varied. From the
discussion above, potential bifurcation points of the model \eqref{phase_Mod}
are values $\tau=\tau^*$ where the characteristic equation for a particular phase-locked
solution, $\psi^*,\omega^*$ has a double zero root. Let $\eta^*=\epsilon\Omega\tau^*$.
When $\psi^*=0$ or $\pi$ there are two types of potential bifurcation points:
\begin{itemize}
\item[(1)] $\tau^*$ where $H'(\psi^*-\omega^*\eta^*-\Omega\tau^*)=0$ (see Remark \ref{remark1});
\item[(2)] $\tau^*$ where $1+\eta^* \frac{1}{\Omega}H'(\psi^*-\omega^*\eta^*-\Omega\tau^*)=0$ (see Proposition~\ref{prop_zero_root}).
\end{itemize}
For other values of $\psi^*$, Proposition~\ref{prop_zero_root} indicates there is a potential bifurcation point at
\begin{itemize}
\item[(3)] $\tau^*$ where $\eta^*=-\frac{a+b}{2ab}$.
\end{itemize}
Note that it is impossible to find an
explicit \review{expression} for the bifurcation values because each of these conditions is an
implicit equation for $\tau^*$.
Now we consider what types of bifurcations may occur at these points. We do not give
a rigorous proof, which would require centre manifold and normal form theory. However,
we can make some plausible arguments based on the equations for the equilibrium
solutions. Recall that $(\psi^*,\omega^*)$ with $\psi^*=0$ or $\pi$ defines
a phase-locked solution at $\tau$ if $F(\omega^*,\psi^*;\tau)=0$ where
\[ F(\omega,\psi^*;\tau)=\omega-\frac{1}{\Omega} H(\psi^*-\omega\eta-\Omega\tau). \]
Differentiating $F$ with respect to $\omega$ shows that the condition (2)
corresponds to $F_\omega(\omega^*,\psi^*;\tau^*)=0$, that is, $\omega^*$ is
a double root of $F$ when $\tau=\tau^*$. Thus as $\tau$ varies near $\tau^*$
we may expect that there should be two roots of $F$ near $\omega^*$ or none
\footnote{More precisely, we expect this will occur if $F$ satisfies the further conditions
$F_{\tau}(\omega^*,\psi^*;\tau^*)=-\Omega(1+\epsilon\omega^*)/\eta^*\ne 0$ and
$F_{\omega \omega}(\omega^*,\psi^*;\tau^*)=-(\eta^*)^2H''(\psi^*-\omega^*\eta^*-\Omega\tau^*)/\Omega\ne 0$
\cite{Kuznetsov}.}.
Thus the bifurcation associated with condition (2) should be a saddle-node bifurcation
involving two different phase-locked solutions with the same $\psi^*$. Note that this
bifurcation is only physically relevant if $\eta^*>0$, i.e.,
$H'(\psi^*-\omega^*\eta^*-\Omega\tau^*)<0$. Thus, from Remark~\ref{remark1}, the
associated solutions will be unstable. In a similar manner one can show that
condition (3) corresponds to $(\psi^*,\omega^*)$ at $\tau=\tau^*$ being a point
of tangency of the curves defined by equations \eqref{sol_sys}. Thus we expect
it to correspond to a saddle-node bifurcation involving two out-of-phase solutions
with different $\psi^*$. The stability of these
solutions will depend on which case of Proposition~\ref{prop_zero_root} applies.
Finally, we consider phase-locked solutions near $\psi=0$. Expanding
equations \eqref{eq_H} and the first of \eqref{sol_sys} in $\psi$ and keeping the
two lowest order terms we have
\begin{eqnarray}
0&=&2H'(-\omega\eta-\Omega\tau)\psi+\frac{1}{3}H'''(-\omega\eta-\Omega\tau)\psi^3,\\
\omega&=&\frac{1}{\Omega}\left(H(-\omega\eta-\Omega\tau)+H'(-\omega\eta-\Omega\tau)\psi\right).
\end{eqnarray}
Thus we see that $\psi^*=0$, \review{$\omega^*=H(-\omega^*\eta-\Omega\tau)/\Omega$,} is always a solution
of this system and if there is $\tau^*$ such that condition (1) is satisfied
and $H'''(-\omega^*\eta^*-\Omega\tau^*)\ne 0$
then this will be a triple root of the system.
Thus we expect that condition
(1) with $\psi^*=0$ corresponds to a pitchfork bifurcation where two out-of-phase solutions
are created near $0$. Similarly condition (1) with $\psi^*=\pi$ should correspond
to a pitchfork bifurcation where two out-of-phase solutions are created near $\pi$.
\review{Note that the phase interaction function $H$ can be represented by its Fourier series expansion
\[H(\phi) = {a_0} + \sum\limits_{k = 1}^\infty {\left[ {{a_k}\cos (k\phi) + {b_k}\sin (k\phi)} \right]}. \]
When the interaction function $H$ is represented by the first set of Fourier modes
\begin{equation}\label{Eq:Fourier_0}
H(\phi)=a_0+a_1\cos(\phi)+b_1\sin(\phi),
\end{equation}
the authors in \cite{campbell2012phase} show that the out-of-phase solutions and pitchfork bifurcation cannot occur in the phase model \eqref{phase_Mod} with small time delay.
However, it may occur when the time delay is large.
Indeed, when $H$ has the form in \eqref{Eq:Fourier_0}, then it follows from \eqref{sol_sys} and \eqref{eq_H} that
\begin{align}
\Omega {\omega ^*} &= {a_0} + A({\omega ^*})\sin ({\psi ^*}) + B({\omega ^*})\cos ({\psi ^*}),\label{PB_FFM_1}\\
0 &= 2A({\omega ^*})\sin ({\psi ^*})\label{PB_FFM_2}
\end{align}
respectively, where
\begin{align*}
A({\omega ^*}) &= {b_1}\cos ({\omega ^*}\eta + \Omega \tau ) + {a_1}\sin ({\omega ^*}\eta + \Omega \tau ),\\
B({\omega ^*}) &= {a_1}\cos ({\omega ^*}\eta + \Omega \tau ) - {b_1}\sin ({\omega ^*}\eta + \Omega \tau ).
\end{align*}
Thus, from $\sin ({\psi ^*})=0$, we have that $\psi ^*=0,\pi$ with the corresponding $\omega^*$ determined by
\begin{equation}\label{PB_FFM_3}
\Omega {\omega ^*} - {a_0} = \pm B({\omega ^*}),
\end{equation}
where the $+$ corresponds to $\psi^*=0$
and the $-$ to $\psi^*=\pi$. Also, from $A({\omega ^*})=0$ we determine $\omega ^*$ and the corresponding $\psi^*$ is obtained from
\begin{equation}\label{PB_FFM_4}
\cos ({\psi ^*}) = \frac{{\Omega {\omega ^*} - {a_0}}}{{B({\omega ^*})}}.
\end{equation}
Consequently, we have the following cases
\begin{itemize}
\item if $\left| {\Omega {\omega ^*} - {a_0}} \right| < \left| {B({\omega ^*})} \right|$, then two out-of-phase solutions $\psi^*$ and $2\pi-\psi^*$ exist,
\item if $\left| {\Omega {\omega ^*} - {a_0}} \right| = \left| {B({\omega ^*})} \right|$, then one solution exists ($\psi^*=0$ or $\psi^*=\pi$),
\item if $\left| {\Omega {\omega ^*} - {a_0}} \right| > \left| {B({\omega ^*})} \right|$, then no solution satisfying \eqref{PB_FFM_4} exists.
\end{itemize}
Note that $H'({-\omega ^*}\eta - \Omega \tau )=A(\omega ^*)$ and $H'({\pi-\omega ^*}\eta - \Omega \tau )=-A(\omega ^*)$.
Thus, the solutions $0$ and $\pi$ change stability when $A(\omega ^*)=0$ where $\omega^*$ satisfies \eqref{PB_FFM_3}.
As $\tau$ varies, out-of-phase solutions will disappear if $\left|\frac{{\Omega {\omega ^*} - {a_0}}}{{B({\omega ^*})}}\right|-1$ changes its sign from negative to positive.
When $ \frac{{\Omega {\omega ^*} - {a_0}}}{{B({\omega ^*})}}=1$, then $\psi^*=0$. Hence, a pitchfork bifurcation occurs at $\psi^*=0$. Similarly when $ \frac{{\Omega {\omega ^*} - {a_0}}}{{B({\omega ^*})}}=-1$ a pitchfork bifurcation occurs at
$\psi^*=\pi$.}
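The trigonometric identities underlying \eqref{PB_FFM_1}--\eqref{PB_FFM_4} are easy to check numerically. In the Python fragment below the Fourier coefficients and parameter values are arbitrary illustrative assumptions.

```python
import math

a0, a1, b1 = 0.2, 0.7, -0.4        # illustrative Fourier coefficients
Omega, eta, tau = 1.0, 0.1, 2.3    # illustrative parameters
w = 0.35                           # a trial value of omega*

H  = lambda p: a0 + a1*math.cos(p) + b1*math.sin(p)
Hp = lambda p: -a1*math.sin(p) + b1*math.cos(p)   # H'

c = w*eta + Omega*tau
A = b1*math.cos(c) + a1*math.sin(c)               # A(omega*)
B = a1*math.cos(c) - b1*math.sin(c)               # B(omega*)

# H(psi - c) = a0 + A*sin(psi) + B*cos(psi)
for psi in (0.0, 0.9, 2.5, math.pi):
    assert abs(H(psi - c) - (a0 + A*math.sin(psi) + B*math.cos(psi))) < 1e-12

# H'(-c) = A(omega*) and H'(pi - c) = -A(omega*)
assert abs(Hp(-c) - A) < 1e-12
assert abs(Hp(math.pi - c) + A) < 1e-12
```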
\subsection{The full model with small delay}
\label{Sec2_small_Delay}
When the time delay, $\tau$, in (\ref{Full_Mod}) is relatively small, in the sense that $\Omega \tau=\mathcal{O}(1)$, it follows from the theory of averaging that the time delay $\tau$
enters the interaction function $H$ in (\ref{phase_Mod}) as a
phase shift \cite{ermentrout2009delays,hoppensteadt2012weakly,izhikevich1998phase,campbell2012phase}.
In \cite{campbell2012phase}, the authors considered this case; consequently, the time delay $\eta$ in the phase model (\ref{phase_Mod}) was neglected, and it becomes
\begin{equation} \begin{aligned}
\label{phase_Mod22}
\frac{d\varphi_1}{d{t}}&=\frac{1}{\Omega} H(\varphi_2(t)-\varphi_1(t)-\Omega\tau),\\
\frac{d\varphi_2}{d{t}}&=\frac{1}{\Omega} H(\varphi_1(t)-\varphi_2(t)-\Omega\tau).
\end{aligned}\end{equation}
Therefore,
they were able to reduce (\ref{phase_Mod}) into a one dimensional ordinary differential equation
\begin{equation}
\label{eq_small_delay_1}
\frac{d \phi}{d t}=-2 \epsilon[H(\phi-\Omega \tau)-H(-\phi-\Omega \tau)],
\end{equation}
where $\phi=\varphi_2-\varphi_1$.
The existence of phase-locked solutions of (\ref{eq_small_delay_1}) was discussed in \cite{campbell2012phase} without introducing the frequency deviation $\omega$. Hence, the in-phase and anti-phase solutions were unique. Moreover, the stability of the phase-locked solution $\phi^*$ in (\ref{eq_small_delay_1}) was determined by the sign of
\begin{equation}
\label{eq_small_delay_2}
{\widehat H}'(\phi^*):=\overline{a}+\overline{b}
\end{equation}
where $\overline{a}=H^{\prime}(\phi^*-\Omega \tau)$ and $\overline{b}=H^{\prime}(-\phi^*-\Omega \tau)$. If ${\widehat H}'(\phi^*)>0$ then $\phi^*$ is asymptotically stable and if ${\widehat H}'(\phi^*)<0$ it is unstable. When ${\widehat H}'(\phi^*)=0$ the stability is not determined by the linearization.
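As an illustration of this criterion, take the hypothetical interaction function $H(x)=\sin x$ (not the Morris-Lecar $H$ of Section \ref{Sec3}). Then $H(\phi-\Omega\tau)-H(-\phi-\Omega\tau)=2\cos(\Omega\tau)\sin\phi$, so $\phi^*=0,\pi$ are phase-locked solutions and ${\widehat H}'(\phi^*)=2\cos\phi^*\cos(\Omega\tau)$. A minimal sketch evaluating the criterion numerically:

```python
import math

def H(x):
    # hypothetical interaction function, for illustration only
    return math.sin(x)

def H_hat_prime(phi_star, Omega_tau, h=1e-6):
    """Central-difference evaluation of
    H'(phi* - Omega*tau) + H'(-phi* - Omega*tau),
    whose sign decides stability of the phase-locked solution phi*."""
    def Hp(x):
        return (H(x + h) - H(x - h)) / (2 * h)
    return Hp(phi_star - Omega_tau) + Hp(-phi_star - Omega_tau)
```

For $\Omega\tau=0.5$ this gives ${\widehat H}'(0)=2\cos(0.5)>0$ (in-phase stable) and ${\widehat H}'(\pi)=-2\cos(0.5)<0$ (anti-phase unstable), consistent with the criterion above.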
\begin{remark}
\review{In \cite{campbell2012phase},
due to the reduction of the two dimensional system
(\ref{phase_Mod22}) into a single equation (\ref{eq_small_delay_1}), the zero root was omitted from the characteristic equation.} Indeed,
the characteristic equation of (\ref{eq_small_delay_1}) is $\lambda+{\widehat H}'(\phi^*)=0$
while the characteristic equation of (\ref{phase_Mod22}) is
\begin{equation}
\lambda({\lambda} + {\widehat H}'(\phi^*)) =0.
\end{equation}
It is clear that the latter characteristic equation always has a zero root.
\end{remark}
\review{Now we compare these results with what happens when $\tau$ is small, i.e.,
$\Omega \tau=\mathcal{O}(1)$, in our model \eqref{phase_Mod}}. Recall that
$\eta=\epsilon\Omega\tau$ thus the assumption on $\tau$
implies that $\eta=\mathcal{O}(\epsilon)$. Also, note that the phase difference
$\phi^*$ of the phase-locked solutions for the model \eqref{eq_small_delay_1} is
the same as the phase deviation difference $\psi^*$ for our model.
First consider the existence of phase-locked solutions. For our model we must
solve the equations \eqref{eq_H} and one of \eqref{sol_sys} simultaneously for $\psi$ and
$\omega$. When $\eta=\mathcal{O}(\epsilon)$, however, to first order in $\epsilon$
the $H$ function no longer depends on $\omega$. Thus phase-locked solutions
are determined by $\psi^*$ satisfying $H_{\tau}(\psi^*)=0$, with
$\omega^*=\frac{1}{\Omega}H(\psi^*-\Omega\tau)$. \review{This equation for $\psi^*$ is the
same as in \cite{campbell2012phase}}. There, the authors did not solve for $\omega^*$ as it was not needed
to determine the phase-locked solutions or their stability. It remains to consider the
uniqueness of the in-phase and anti-phase solutions.
From equations (\ref{omega0}) and (\ref{omegapi}), these solutions correspond to frequency
deviations $\omega^*$ satisfying $F(\omega^*,\psi^*)=0$
with $\psi^*=0,\pi$, respectively.
Since $H$ and $H'$ are continuous and $2\pi$ periodic they are bounded. Thus we see that
$\lim_{\omega \rightarrow \pm\infty}F(\omega,\psi^*)=\pm\infty$. Further, recalling
\eqref{Fp0def}, since $\eta=\mathcal{O}(\epsilon)$, $F_\omega(\omega^*,\psi^*)>0$.
Thus for any $\tau$ sufficiently small, there
will be a unique frequency deviation $\omega^*$ for $\psi^*=0$ and for $\psi^*=\pi$.
This is consistent with the results in \cite{campbell2012phase} which have only
one in-phase and anti-phase solution for each value of $\tau$.
Now consider the stability of the phase-locked solutions. Recall that the stability
for our model is summarized in Figure~\ref{Fig2}. When $\eta=\mathcal{O}(\epsilon)$,
${\rm sgn}(a+b+ab\eta)\approx {\rm sgn}(a+b)$, so the conditions for stability/instability of
phase-locked solutions of our model reduce to stability if $a+b>0$ and
\review{instability} if $a+b<0$. Further, $a\approx \overline{a}$ and $b\approx\overline{b}$, thus
the stability results of our model reduce to those of \cite{campbell2012phase}
when $\Omega\tau=\mathcal{O}(1)$. The key point is that, regardless of the size
of $\tau$, the countable infinity of complex roots of the characteristic equation
\eqref{chactEq} all have negative real part. Thus the stability
of the phase-locked solutions is determined by finitely many real roots,
and it is possible for an ordinary differential equation to accurately reflect
this stability.
In Section \ref{Sec3_3}, we will show numerically that our model with
$\Omega\tau=\mathcal{O}(1)$ fully recovers \cite[Figure 4b]{campbell2012phase} and
\cite[Figure 5b]{campbell2012phase}.
\section[Application to Morris-Lecar model]{Application to Morris-Lecar oscillators with diffusive\\ coupling}
\label{Sec3}
In this section we apply the results from the previous sections to a network of dimensionless Morris-Lecar oscillators with time delayed diffusive coupling, see e.g., \cite{prasad2008universal,buric2003dynamics}. This model is given by
\begin{equation} \begin{aligned}
\label{MLmodel}
{v'_i} &= {I_{app}} - {g_{Ca}}{m_\infty }({v_i})({v_i} - {v_{Ca}}) - {g_K}{w_i}({v_i} - {v_K}) - {g_L}({v_i} - {v_L}) - \epsilon ({v_j}(t - \tau ) - {v_i}(t)),\\
{w'_i} &= \varphi \lambda ({v_i})({w_\infty }({v_i}) - {w_i}),
\end{aligned}\end{equation}
for $i,j=1,2$ such that $i\neq j$, where
\begin{equation*} \begin{aligned}
m_{\infty}(v) &=\frac{1}{2}\left(1+\tanh \left(\left(v-\nu_{1}\right) / \nu_{2}\right)\right), \\
w_{\infty}(v) &=\frac{1}{2}\left(1+\tanh \left(\left(v-\nu_{3}\right) / \nu_{4}\right)\right), \\
\lambda(v) &=\cosh \left(\left(v-\nu_{3}\right) /\left(2 \nu_{4}\right)\right).
\end{aligned}\end{equation*}
Using the parameter set I$\backslash$II from \cite[Table 1]{campbell2012phase}, when there is no
coupling in the network each oscillator has a unique exponentially
asymptotically stable limit cycle with period $T=23.87\backslash13.81$ corresponding to frequency $\Omega=0.2632\backslash 0.455$.
The normalized system, such that the frequency is $1$, corresponding to (\ref{MLmodel}) is
\begin{equation} \begin{aligned}
\label{MLmodelNor}
{v'_i} &= \frac{1}{\Omega}({I_{app}} - {g_{Ca}}{m_\infty }({v_i})({v_i} - {v_{Ca}}) - {g_K}{w_i}({v_i} - {v_K}) - {g_L}({v_i} - {v_L})) - \frac{\epsilon}{\Omega} ({v_j}(t - \Omega \tau ) - {v_i}(t)),\\
{w'_i} &= \frac{1}{\Omega}(\varphi \lambda ({v_i})({w_\infty }({v_i}) - {w_i})),
\end{aligned}\end{equation}
$i=1,2$.
\review{Note that this is in the form (\ref{Full_Mod}) with ${\bf{X}}_i(t)=(v_{i}(t), w_{i}(t))^T$ and the function $\mathbf{G}:\mathbb{R}^2\times\mathbb{R}^2\to \mathbb{R}^2$ is given by ${\bf{G}}=(G_1,G_2)$ where ${{G}}_1({\bf{X}}_1(t), {\bf{X}}_2(t)) = \frac{1}{\Omega}(v_2(t-\Omega \tau)-v_1(t))$ and ${{G}}_2({\bf{X}}_1(t), {\bf{X}}_2(t)) =0$.
Then, the phase model interaction function $H$ is given by \eqref{New:funH}.}
For each parameter set, the authors in
\cite{campbell2012phase} solved \eqref{New:funH} numerically and
calculated the approximation of the phase model interaction function $H$ by the first five terms of its Fourier series. These are given by
\begin{equation} \begin{aligned}
\label{H_I_Num}
H_I(\phi)&=2.915252-2.684797 \cos (\phi)-0.3278022 \cos (2 \phi)\\
&~ ~~+0.05596774 \cos (3 \phi)+0.0351635 \cos (4 \phi)+ 4.908449 \sin (\phi) \\
&~ ~~-0.7020183 \sin (2 \phi)-0.09934668 \sin (3 \phi)-0.01104474 \sin (4 \phi),\\ H_{II}(\phi)&=0.6271561-0.5209326 \cos (\phi)-0.08538575 \cos (2 \phi)\\
&~ ~~ -0.005648281\cos (3 \phi) -0.0002642404 \cos (4 \phi)+ 1.595618\sin (\phi)\\
&~ ~~-0.04727176\sin (2 \phi)-0.00301241 \sin (3 \phi)-0.002760313 \sin (4 \phi) \end{aligned}\end{equation}
corresponding to the parameter sets I and II, respectively, see \cite[Table 2]{campbell2012phase}.
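The truncated Fourier series in \eqref{H_I_Num} are straightforward to evaluate numerically. The following sketch simply transcribes them into Python, with the coefficients copied verbatim from \eqref{H_I_Num}, for use in root-finding computations like those below:

```python
import math

# Fourier coefficients (a0, [a1..a4], [b1..b4]) of H_I and H_II,
# transcribed from the truncated series in the text.
_COEFFS = {
    "I": (2.915252,
          [-2.684797, -0.3278022, 0.05596774, 0.0351635],
          [4.908449, -0.7020183, -0.09934668, -0.01104474]),
    "II": (0.6271561,
           [-0.5209326, -0.08538575, -0.005648281, -0.0002642404],
           [1.595618, -0.04727176, -0.00301241, -0.002760313]),
}

def H(phi, param_set="I"):
    """Evaluate the truncated Fourier series of the interaction function."""
    a0, a, b = _COEFFS[param_set]
    return a0 + sum(a[k - 1] * math.cos(k * phi) + b[k - 1] * math.sin(k * phi)
                    for k in range(1, 5))
```

By construction both functions are $2\pi$-periodic, and $|H|$ is bounded by the sum of the absolute values of the coefficients, a fact used below to bracket all roots.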
\review{Note that the two parameter sets represent limit cycles which are created
by different bifurcations as the input current $I_{app}$ is
varied. For parameter set I the limit cycle is created in a
saddle-node on an invariant circle bifurcation, while
for parameter set II the limit cycle is created in a supercritical
Hopf bifurcation. The chosen parameter values have $I_{app}$
slightly larger than the bifurcation values.}
In \cite{campbell2012phase} the authors studied how
small $\epsilon$ needed to be for the phase model to
faithfully represent the behaviour of the full system
\eqref{MLmodelNor}, in the case of small delay. They found that for
parameter set I $\epsilon$ could be as large as $0.05$, while
for parameter set II $\epsilon$ should not exceed $0.001$.
Therefore, in the rest of this section, we take $\epsilon=0.05$ with parameter set I and $\epsilon=0.001$ when we use parameter set II.
Consequently, we choose $\tau\ge 75.988$ for parameter set I and $\tau\ge 2197.8$ for parameter set II so that $\epsilon \Omega \tau=\mathcal{O}(1)$. Moreover, we
compare our results with the results in \cite {campbell2012phase} when the time delay, $\tau$, in (\ref{Full_Mod}) is relatively small.
\subsection{In-phase and anti-phase solutions}
\label{Sec3_1}
To find $\omega^*$ corresponding to the in-phase and anti-phase solutions, $\psi^*=0,\pi$, we solve (\ref{omega0}) and (\ref{omegapi}) \review{with $H$ given by either $H_{I}$ or $H_{II}$
from \eqref{H_I_Num}. Note that these equations
can only be solved numerically due to the complicated
form of $H_I$ and $H_{II}$. For particular values of $\tau$, we
represent these solutions graphically in Figure \ref{fig0} as the
intersection points of the line $y=\omega$ and the curve
$y=H(-\omega \eta -\Omega \tau)/\Omega$.}
In (\ref{omega0}), the slope of the right hand side at any $\omega$ is $\ell_{\tau}=-\epsilon\tau H'(-\omega\eta-\Omega\tau)$.
Then, by applying the stability condition in Remark \ref{remark1}, we see that the
in-phase solution is stable when
the line $y=\omega$ intersects the curve of the function $H(-\omega\eta-\Omega\tau)/\Omega$ at a point where it has negative slope, while it is unstable when the intersection is at a point with positive slope, see Figure \ref{fig0}.
When the line $y=\omega$ alternates from intersecting the curve of $H(-\omega\eta-\Omega\tau)/\Omega$ at a point with positive slope to intersecting it at a point with negative slope, the solutions $\omega^*$ alternate between stable and unstable, see Figure \ref{fig0}.
\review{For fixed $\Omega$ and $\epsilon$, as $\tau$ increases the curve
$H(-\eta \omega-\Omega\tau)=H(-\Omega\tau(1+\epsilon\omega))$ compresses horizontally,
causing the creation and destruction of intersection points.
For specific values
$\tau=\tau^*_1>0$, an intersection point occurs where the curve $y=H(-\omega\eta-\Omega\tau)/\Omega$ has
slope one, i.e., the curve is tangent to the line $y=\omega$ at these
values of $\tau$, see Figure \ref{fig000_B}.
Near such points, i.e., for $\tau$ slightly bigger or smaller,
there exist two consecutive intersection points both of which are unstable, see
Figure \ref{fig000_C}.
Then, as $\tau$ changes further to $\tau_2^*$, one unstable point
quickly passes through the point where $H$ has zero slope and becomes stable, see
Figure \ref{fig0_B}.
The values $\tau^*_1$ correspond to the saddle-node
bifurcations of in-phase and anti-phase solutions discussed in Section \ref{bif:sec}.}
We will discuss the points $\tau_2^*$ later. In Figure \ref{fig_SI_tau_omega_psi},
we plot $\omega^*$ corresponding to $\psi^*=0,\pi$ for various values of the time
delay $\tau$, showing the many co-existing solutions which can occur and the
transitions of the solutions as $\tau$ varies.
These solutions were found by
implementing the algorithm from \cite{rahimian2011new} in \textit{Wolfram Mathematica} to find all the solutions of \eqref{omega0} or \eqref{omegapi}.
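A simple alternative to the algorithm of \cite{rahimian2011new} is a scan-and-bisect root search: since $H$ is bounded, every solution of \eqref{omega0} satisfies $|\omega|\le \max|H|/\Omega$, so sampling $F(\omega)=\omega-H(-\omega\eta-\Omega\tau)/\Omega$ on a fine grid and bisecting each sign change recovers the roots. The sketch below (parameter set I, $\tau=90$, with $H_I$ re-transcribed from \eqref{H_I_Num}) is illustrative; the grid resolution is an assumption and may need refining near tangencies.

```python
import math

# Fourier coefficients of H_I, copied from the text
A = [-2.684797, -0.3278022, 0.05596774, 0.0351635]
B = [4.908449, -0.7020183, -0.09934668, -0.01104474]

def H_I(phi):
    return 2.915252 + sum(A[k - 1] * math.cos(k * phi) + B[k - 1] * math.sin(k * phi)
                          for k in range(1, 5))

def in_phase_roots(Omega=0.2632, eps=0.05, tau=90.0, n=40001, tol=1e-12):
    """All solutions omega of omega = H_I(-omega*eta - Omega*tau)/Omega."""
    eta = eps * Omega * tau
    F = lambda w: w - H_I(-w * eta - Omega * tau) / Omega
    # |H_I| <= sum of |coefficients|, so all roots lie inside this bound
    bound = (2.915252 + sum(abs(c) for c in A + B)) / Omega + 1.0
    grid = [-bound + 2 * bound * i / (n - 1) for i in range(n)]
    roots = []
    for x0, x1 in zip(grid, grid[1:]):
        f0, f1 = F(x0), F(x1)
        if f0 == 0.0:
            roots.append(x0)
        elif f0 * f1 < 0:          # sign change: bisect the bracket
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

At $\tau=90$ this search returns several coexisting values of $\omega^*$, in line with the multiple intersections visible in Figure \ref{fig0}.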
\begin{figure}[hbt!]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{omegaVSh_psi_0_SI_tau_90}
\caption{$ H_I(-\omega\eta-\Omega\tau)/\Omega$ and $y=\omega$. $\tau=90$.}
\label{fig0_A}
\end{subfigure}%
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1\linewidth]{omegaVSh_psi_0_SII_tau_2236}
\caption{$ H_{II}(-\omega\eta-\Omega\tau)/\Omega$ and $y=\omega$. $\tau=2236$.}
\label{fig0_B}
\end{subfigure}
\caption{Graphical representation of the solutions to (\ref{omega0}) with fixed $\tau$. The circles \textcolor{green}{
$\CIRCLE$}/\textcolor{red}{ $\Circle$} represent stable/unstable solutions.}
\label{fig0}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[height=3.8cm,width=5.5cm]{omegaVSh_psi_0_SII_tau_2234_4}
\caption{ $\tau=2234.4$.}
\label{fig000_A}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[height=3.8cm,width=5.5cm]{omegaVSh_psi_0_SII_tau_2234_7778}
\caption{$\tau=2234.78$.}
\label{fig000_B}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[height=3.8cm,width=5.5cm]{omegaVSh_psi_0_SII_tau_2235}
\caption{$\tau=2235$.}
\label{fig000_C}
\end{subfigure}
\caption{Graphical representation of the solutions to (\ref{omega0}) as intersections of $y=H_{II}(-\omega\eta-\Omega\tau)/\Omega$ and $y=\omega$ with fixed $\tau$. The circles \textcolor{green}{
$\CIRCLE$}/\textcolor{red}{$\Circle$} represent stable/unstable solutions.}
\label{fig000}
\end{figure}
To compare prediction of the phase model (\ref{phase_Mod}) and solutions of the full model (\ref{MLmodel}), we
solve (\ref{MLmodelNor})
numerically with parameter sets I and II with various values of $\tau$ and different initial conditions.
The
initial conditions are of the form
\begin{equation} \begin{aligned}
\label{InCoform}
\left(v_{1}(t), w_{1}(t), v_{2}(t), w_{2}(t)\right)^T=\left(v_{10}, w_{10}, v_{20}, w_{20}\right)^T \quad t\in[-\tau\Omega,0].
\end{aligned}\end{equation}
Figure \ref{Fig_V_vs_t} shows time series of $v_i$ in (\ref{MLmodelNor}) with different initial conditions. We notice the coexistence of in-phase
solutions with different
frequencies when $\tau=110$ with parameter set I.
The numerical solutions are obtained with the \textsf{NDSolve} command in \textit{Wolfram Mathematica}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.7\textwidth]{V_vs_t.pdf}
\caption{The coexistence of in-phase
solutions of (\ref{MLmodelNor}) with different
frequencies when $\tau=110$ with parameter set I. We take different initial conditions: $(1.53422,-4.42364,1.58103,-4.12258)^T$ for the red/orange curves and $(-2.22807,3.626,-2.28885,-0.972632)^T$ for the blue/green curves.}
\label{Fig_V_vs_t}
\end{figure}
When $\epsilon=0$, each uncoupled equation in (\ref{MLmodelNor}) has a $2\pi-$periodic solution, that is, the frequency of each oscillator is unity.
Consequently, when $\epsilon\ne 0$ and equation (\ref{MLmodelNor}) has a phase-locked solution, the phase of the first oscillator is $\theta_1(t)= t+\omega^*\epsilon t$ and that of the second oscillator is $\theta_2(t)= t+\omega^*\epsilon t+\psi^*$ where $\omega^*$ is the frequency deviation and $\psi^*$ is the phase shift. Thus, the frequency of each oscillator is $1+\omega^*\epsilon$, and the period $\mathcal{T}$ is approximately
$$\mathcal{T}=\frac{2\pi}{1+\omega^*\epsilon}.$$
From the numerical solution of (\ref{MLmodelNor}) for a stable phase-locked solution,
we can calculate the period $\mathcal{T}$ of the oscillators and determine the
approximate frequency deviation from
\begin{equation} \begin{aligned}
\omega^*\approx \frac{1}{\epsilon}\left( \frac{2\pi}{\mathcal{T}}-1 \right).
\label{appo}
\end{aligned}\end{equation}
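The estimate \eqref{appo} is simply the inversion of the period formula above; a quick round-trip check (with an arbitrary illustrative value of $\omega^*$, not one from the tables) confirms the two formulas are consistent:

```python
import math

def period_from_omega(omega_star, eps):
    # period of a phase-locked solution of the normalized system
    return 2 * math.pi / (1 + eps * omega_star)

def omega_from_period(T, eps):
    # frequency-deviation estimate (appo) recovered from a measured period
    return (2 * math.pi / T - 1) / eps
```

In practice $\mathcal{T}$ is measured from the numerically computed solution of \eqref{MLmodelNor}, so the recovered $\omega^*$ carries the error of both the period measurement and the phase-model approximation.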
Figure \ref{fig_SI_tau_omega_psi} shows the coexistence of stable in-phase and anti-phase periodic solutions and demonstrates that the approximation of $\omega^*$ from (\ref{appo}) is close to a stable solution of the phase model.
\review{The values of $\omega^*$ with
the normalized error
\begin{equation}\label{Nerror}
{{\rm E_N}}=\frac{(\omega^* {\rm{~in~the~phase~model}})-(\omega^* {\rm{~in~the~full~model}})}{\omega^* {\rm{~in~the~full~model}}}
\end{equation}
are shown in Tables \ref{table_psi_0} and \ref{table_psi_0_S2}. Note that the quantity ${{\rm E_N}}$ is the normalized error with respect to the size of $\omega^*$ in the full model.}
Except for a few cases, the phase model gives a very accurate prediction of the values
of $\omega^*$. The phase model predicted stable phase-locked solutions that we did not find numerically; however, it is possible that further exploration with different initial conditions might find them.
\begin{figure}[hbt!]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{Set_I_tau_omega_psi_0.pdf}
\caption{Parameter set I. $\psi^*=0$.}
\label{fig_SI_tau_omega_psi_1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{Set_I_tau_omega_psi_Pi.pdf}
\caption{Parameter set I. $\psi^*=\pi$}
\label{fig_SI_tau_omega_psi_2}
\end{subfigure}
~\bigskip
\begin{subfigure}{.5\textwidth}
\centering
\centering
\includegraphics[width=1\linewidth]{Set_II_tau_omega_psi_0.pdf}
\caption{Parameter set II. $\psi^*=0$}
\label{fig_SII_tau_omega_psi_1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{Set_II_tau_omega_psi_Pi.pdf}
\caption{Parameter set II. $\psi^*=\pi$}
\label{fig_SII_tau_omega_psi_2}
\end{subfigure}
\caption{ The circles \textcolor{green}{$\CIRCLE$}/\textcolor{red}{$\Circle$} represent stable/unstable solutions to the phase model (\ref{phase_Mod}) corresponding to (\ref{MLmodelNor}) and $\boldsymbol{\times}$ represents the calculated $\omega^*$ for each stable phase-locked periodic solution found by numerical integration of the full model (\ref{MLmodelNor}) with parameter sets I and II. The insets are in-phase and anti-phase periodic solutions of (\ref{MLmodelNor}).
The initial conditions for all simulations were of the form (\ref{InCoform}). For
the insets the values of $\left(v_{10}, w_{10}, v_{20}, w_{20}\right)^T$ are as follows. (a) $(1.53422,-4.42364,1.58103,-4.12258)^T$ (b) $(1.57882,4.1827,2.78262,0.358165)^T$ (c) $(1.892, -0.296437, -1.05518, 1.09985)^T$ (d) $(-1.72448,-1.46442,4.4848,1.31822)^T$}
\label{fig_SI_tau_omega_psi}
\end{figure}
\begin{table}[ht]
\centering
\scalebox{0.90}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{12}{|c|}{\cellcolor[HTML]{C0C0C0}{$\psi^*=0$}}\\ \cline{1-12}
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=90$} & & \multicolumn{3}{c|}{$\tau=110$} & \multirow{4}{*}{} & \multicolumn{3}{c|}{$\tau=130$} \\ \cline{2-4}\cline{6-8} \cline{10-12}
& {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} \\ \cline{1-4}\cline{6-8} \cline{10-12}
\multirow{2}{*}{$\omega^*$} & \small $1.1446$ & \small $0.988971$ & \small $0.1574$ & & \small $1.61521$ & \small $1.44374$ & \small $0.1188$ & & \small $-1.55286$ & \small $-1.31666$ & \small $-0.1794$ \\ \cline{2-4} \cline{6-8}\cline{10-12}
& \small $6.17015$ & \small $5.68396$ & \small $0.0855$ & & \small $9.95867$ & \small $9.67259$ & \small $0.0296$ & & \small $5.48587$ & \small $5.16611$ & \small $0.0619$ \\ \hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=150$} & & \multicolumn{3}{c|}{$\tau=170$} & \multirow{4}{*}{} & \multicolumn{3}{c|}{$\tau=190$} \\ \cline{2-4}\cline{6-8} \cline{10-12}
& {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} \\ \cline{1-4}\cline{6-8} \cline{10-12}
\multirow{2}{*}{$\omega^*$} & \small $-0.860951$ & \small $-0.766851$ & \small $-0.1227$ & & \small $2.38636$ & \small $2.23525$ & \small $0.0676$ & & \small $0.101849$ & \small $0.092197$ & \small $0.1047$ \\ \cline{2-4}\cline{6-8} \cline{10-12}
& \small $2.19521$ & \small $2.03459$ & \small $0.0789$ & & \small$5.11589$ & \small$4.87837$ & \small $0.0487$ & & \small$7.44601$ & \small $7.16109$ & \small $0.0398$ \\ \hline
\multicolumn{12}{|c|}{\cellcolor[HTML]{C0C0C0}{$\psi^*=\pi$}}\\ \cline{1-12}
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=90$} & & \multicolumn{3}{c|}{$\tau=110$} & \multirow{4}{*}{} & \multicolumn{3}{c|}{$\tau=130$} \\ \cline{2-4}\cline{6-8} \cline{10-12}
& {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} \\ \cline{1-4}\cline{6-8} \cline{10-12}
\multirow{2}{*}{$\omega^*$} & \small $-1.32864$ & \small $-1.08976$ & \small $-0.2192$ & & \small $3.68511$ & \small $3.39257$ & \small $0.0862$ & & \small $0.193408$ & \small $0.169843$ & \small $0.1387$ \\ \cline{2-4} \cline{6-8}\cline{10-12}
& \small $8.70987$ & \small $8.244$ & \small $0.0565$ & & \small $7.86$ & \small $7.36178$ & \small $0.0677$ & & \small $7.26467$ & \small $6.86785$ & \small $0.0578$ \\ \hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=150$} & & \multicolumn{3}{c|}{$\tau=170$} & \multirow{4}{*}{} & \multicolumn{3}{c|}{$\tau=190$} \\ \cline{2-4}\cline{6-8} \cline{10-12}
& {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} & & {\scriptsize Phase Model} & {\scriptsize Full Model} & {\scriptsize ${\rm E_N}$} \\ \cline{1-4}\cline{6-8} \cline{10-12}
\multirow{2}{*}{$\omega^*$} & \small $0.663926$ & \small $0.600602$ & \small $0.1054$ & & \small $1.02812$ & \small $0.946137$ & \small $0.0867$ & & \small $3.76194$ & \small $3.58466$ & \small $0.0495$ \\ \cline{2-4}\cline{6-8} \cline{10-12}
& \small $9.92786$ & \small $9.41424$ & \small $0.0546$ & & \small $3.74932$ & \small $3.55154$ & \small $0.0557$ & & \small $8.67729$ & \small $8.34016$ & \small $0.0404$ \\ \hline
\end{tabular}}
\caption{\sloppy Comparison of $\omega^*$ between the phase model prediction and the full model (\ref{MLmodelNor}) when $\psi^*=0,\pi$ with parameter set I. The quantity ${\rm E_N}$ is defined in \eqref{Nerror}.
}
\label{table_psi_0}
\end{table}
\begin{table}[ht]
\centering
\scalebox{1}{
\begin{tabular}{|c|c|c|clc|c|c|}
\hline
\multicolumn{8}{|c|}{\cellcolor[HTML]{C0C0C0}{$\psi^*=0$}}\\ \cline{1-8}
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=2200$} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{$\tau=2500$} \\ \cline{2-4} \cline{6-8}
&{\scriptsize Phase Model} & {\scriptsize Full Model}& \multicolumn{1}{c|}{\scriptsize ${\rm E_N}$} & \multicolumn{1}{l|}{} &{\scriptsize Phase Model} & {\scriptsize Full Model}& {\scriptsize ${\rm E_N}$} \\ \cline{1-4} \cline{6-8}
$\omega^*$ & \small $-1.418638$ & \small $-1.10245$ & \multicolumn{1}{c|}{\small $-0.2868$} & \multicolumn{1}{l|}{} & \small $-0.159684$ & \small $-0.223359$ & \small $0.2851$ \\ \hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=2800$} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{$\tau=3100$} \\ \cline{2-4} \cline{6-8}
&{\scriptsize Phase Model} & {\scriptsize Full Model}& \multicolumn{1}{c|}{{\scriptsize ${\rm E_N}$}} & \multicolumn{1}{l|}{} &{\scriptsize Phase Model} & {\scriptsize Full Model}& {\scriptsize ${\rm E_N}$} \\ \cline{1-4} \cline{6-8}
$\omega^*$ & \small $0.9584587$ & \small $0.720141$ & \multicolumn{1}{c|}{\small $0.3309$} & \multicolumn{1}{l|}{} & \small $1.914776$ & \small $1.880915$ & \small $0.018$ \\ \hline
\multicolumn{8}{|c|}{\cellcolor[HTML]{C0C0C0}{$\psi^*=\pi$}}\\ \cline{1-8}
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=2200$} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{$\tau=2500$} \\ \cline{2-4} \cline{6-8}
&{\scriptsize Phase Model} & {\scriptsize Full Model}& \multicolumn{1}{c|}{\scriptsize ${\rm E_N}$} & \multicolumn{1}{l|}{} &{\scriptsize Phase Model} & {\scriptsize Full Model}& {\scriptsize ${\rm E_N}$} \\ \cline{1-4} \cline{6-8}
$\omega^*$ & \small $0.913646$ & \small $0.647528$ & \multicolumn{1}{c|}{\small $0.411$} & \multicolumn{1}{l|}{} & \small $2.06122$ & \small $1.58187$ & \small $0.303$ \\ \hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{$\tau=2800$} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{$\tau=3100$} \\ \cline{2-4} \cline{6-8}
&{\scriptsize Phase Model} & {\scriptsize Full Model}& \multicolumn{1}{c|}{{\scriptsize ${\rm E_N}$}} & \multicolumn{1}{l|}{} &{\scriptsize Phase Model} & {\scriptsize Full Model}& {\scriptsize ${\rm E_N}$} \\ \cline{1-4} \cline{6-8}
$\omega^*$ & \small $-1.011297$ & \small $-0.880915$ & \multicolumn{1}{c|}{\small $-0.148$} & \multicolumn{1}{l|}{} & \small $0.0496789$ & \small $-0.03801$ & \small $-2.307$ \\ \hline
\end{tabular}}
\caption{\sloppy Comparison of $\omega^*$ between the phase model prediction and the full model (\ref{MLmodelNor}) when $\psi^*=0,\pi$ with parameter set II. The quantity ${\rm E_N}$ is defined in \eqref{Nerror}.}
\label{table_psi_0_S2}
\end{table}
\subsection{Out-of-phase solutions}
\label{Sec3_2}
\review{To find phase-locked solutions other than the in-phase and anti-phase solutions, we fix $\tau$ and solve
\begin{equation} \begin{aligned}
\omega^*&=\frac{1}{\Omega} H_{II}(\psi^*-\omega^*\eta-\Omega\tau),\\
\omega^*&=\frac{1}{\Omega} H_{II}(-\psi^*-\omega^*\eta-\Omega\tau)
\label{sysA1}
\end{aligned}\end{equation}
for $\omega^*$ and $\psi^*$.
Figure \ref{Contour} shows all solutions to (\ref{sysA1}) when $\tau=100\backslash2205$ with the parameter set I$\backslash$II.
As seen for the existence of in-phase and anti-phase solutions in Section \ref{Sec3_1}, the number of phase-locked solutions with parameter set I is larger than with parameter set II. For the purpose of clarity in the bifurcation figures, we consider parameter set II in this section.}
In Figure \ref{Fig_new}, we observe that there are four non-trivial phase-locked solutions:
$\psi^*_1=1.85996$
and $\psi^*_2=2.13981$ in $(0,\pi)$; and $\psi^*_3=2\pi-\psi^*_1=4.42323$ and $\psi^*_4=2\pi-\psi^*_2=4.14338$ in $(\pi,2\pi)$. Moreover, we have $\omega^*_1=\omega^*_3=0.14125$ and $\omega^*_2=\omega^*_4=0.14125$ where $\omega^*_i$ is the frequency deviation corresponding to $\psi^*_i$, $i=1,2,3,4$.
This agrees with Proposition \ref{prop_1}.
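The system \eqref{sysA1} can be solved with a standard Newton-type iteration. The sketch below is illustrative: it re-transcribes $H_{II}$ from \eqref{H_I_Num}, uses a hand-rolled Newton step with a finite-difference Jacobian, and starts from a guess near the reported solution $(\omega^*_1,\psi^*_1)$. It also exploits the symmetry of Proposition \ref{prop_1}: since $H_{II}$ is $2\pi$-periodic, if $(\omega^*,\psi^*)$ solves \eqref{sysA1}, so does $(\omega^*,2\pi-\psi^*)$.

```python
import math

def H_II(phi):
    # coefficients copied from the truncated Fourier series in the text
    a = [-0.5209326, -0.08538575, -0.005648281, -0.0002642404]
    b = [1.595618, -0.04727176, -0.00301241, -0.002760313]
    return 0.6271561 + sum(a[k - 1] * math.cos(k * phi) + b[k - 1] * math.sin(k * phi)
                           for k in range(1, 5))

def residual(omega, psi, Omega=0.455, eps=0.001, tau=2205.0):
    """Residual of system (sysA1) for parameter set II."""
    eta = eps * Omega * tau
    r1 = omega - H_II(psi - omega * eta - Omega * tau) / Omega
    r2 = omega - H_II(-psi - omega * eta - Omega * tau) / Omega
    return r1, r2

def newton_2d(omega, psi, steps=50, h=1e-7):
    """Newton iteration with a forward-difference Jacobian."""
    for _ in range(steps):
        f1, f2 = residual(omega, psi)
        g1, g2 = residual(omega + h, psi)
        j11, j21 = (g1 - f1) / h, (g2 - f2) / h
        g1, g2 = residual(omega, psi + h)
        j12, j22 = (g1 - f1) / h, (g2 - f2) / h
        det = j11 * j22 - j12 * j21
        # solve J * d = f and update x <- x - d
        omega -= (j22 * f1 - j12 * f2) / det
        psi -= (-j21 * f1 + j11 * f2) / det
    return omega, psi
```

Starting from $(\omega,\psi)\approx(0.14,1.86)$ the iteration converges to a phase-locked solution near the values quoted above, and the partner $(\omega^*,2\pi-\psi^*)$ satisfies \eqref{sysA1} to the same accuracy.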
\begin{figure}[hbt!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.67\textwidth]{New_fig_1.pdf}
\caption{Parameter set I with $\tau=100$.}
\label{Fig_new_1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{New_fig.pdf}
\caption{Parameter set II with $\tau=2205$.}
\label{Fig_new}
\end{subfigure}
\caption{Contour plots of the equations in (\ref{sysA1}), showing the solutions of (\ref{sysA1}) graphically for fixed $\tau$.}
\label{Contour}
\end{figure}
\review{In Figure \ref{New_Fig9}a, we plot all solutions of system (\ref{sysA1}) in $\tau\psi-$plane and mark the stability using the criteria in Section \ref{sec_stability}. Note that since this representation suppresses $\omega^*$, the multiple in-phase or anti-phase solutions which occur for
particular values of $\tau$ in Figure \ref{fig_SI_tau_omega_psi} are superimposed.
As $\tau$ varies, we observe that a stable solution corresponding to $\psi^*=0,\pi$ always exists with the appearance of an unstable solution in disjoint intervals of $\tau$, while all the out-of-phase solutions are unstable. More precisely, for the in-phase solution, as $\tau$ increases, we notice that an unstable solution
disappears at $\tau\approx 2203$, exists between $\tau\approx2207.5$ and
$\tau\approx2217$, and reappears at $\tau\approx 2221$. The same behaviour occurs
for the anti-phase solution at different values of $\tau$. Near the appearance and
disappearance of these unstable solutions the unstable out-of-phase solutions appear
and disappear.
As we observe in Figure \ref{Contour}, there are multiple solutions $(\omega^*,\psi^*)$ of (\ref{sysA1}) when $\tau$ is fixed.
To study the creation and destruction of solutions further,
we take particular values for $\tau$ and show all solutions in the blue rectangles from Figure \ref{New_Fig9}a in the $\omega\psi-$plane, see Figures \ref{New_Fig9}b$-$\ref{New_Fig9}i.
We now see that there are pitchfork bifurcations where
a stable in-phase or
anti-phase solution becomes unstable as two unstable out-of-phase solutions merge
together,
see Figures \ref{New_Fig9}b$-$\ref{New_Fig9}c and \ref{New_Fig9}f$-$\ref{New_Fig9}g.
This corresponds
to the values $\tau_2^*$ discussed above.
Moreover, there are saddle-node bifurcations where two unstable in-phase or
anti-phase solutions collide then vanish, see Figures \ref{New_Fig9}d$-$\ref{New_Fig9}e and \ref{New_Fig9}h$-$\ref{New_Fig9}i.
This corresponds to the value $\tau_1^*$ discussed above.
For other parameter values, we observe the opposite sequence of bifurcations: two unstable in-phase or anti-phase solutions are created by a saddle-node bifurcation after which one gets stabilized by a pitchfork bifurcation involving two unstable out-of-phase solutions.
All the bifurcations are as predicted for the general model in Section \ref{bif:sec}.
We did not observe any saddle-node bifurcations of out-of-phase solutions for this
parameter set.}
\review{To help understand these bifurcations, we plot solutions in the $\tau\omega-$plane and the solutions near $\psi=\pi$ in the $\tau\omega\psi-$space in Figures \ref{fig1_B}$-$\ref{fig1_C}, respectively.
Considering the case $\psi^*=\pi$, we observe that:}
\begin{itemize}
\item the pitchfork bifurcation occurs when two unstable out-of-phase solutions merge together with one stable anti-phase solution \textcolor{green}{$\CIRCLE$} to produce one unstable anti-phase solution \textcolor{red}{$\blacksquare$},
\item the saddle-node bifurcation occurs when the created unstable anti-phase solution \textcolor{red}{$\blacksquare$} in the above collides with another unstable anti-phase \textcolor{red}{$\blacksquare$} and both vanish.
\end{itemize}
\begin{figure}[hbt!]
\centering
\includegraphics[width=1\textwidth]{New_Fig9.pdf}
\caption{The solutions of phase model (\ref{phase_Mod}) corresponding to (\ref{MLmodelNor})
in the blue rectangles in Figure \ref{fig1} in $\omega\psi-$plane.
The circles \textcolor{green}{$\CIRCLE$}/\textcolor{red}{$\Circle$} represent stable/unstable solutions of (\ref{sysA1}).}
\label{New_Fig9}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}{.54\textwidth}
\centering
\includegraphics[width=1\textwidth]{tauVSomega_2200_2225_A}
\caption{$\omega$ vs $\tau$}
\label{fig1_B}
\end{subfigure}
\begin{subfigure}{0.37\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{3D}
\caption{$\tau$, $\omega$ and $\psi$}
\label{fig1_C}
\end{subfigure}
\caption{Numerical bifurcation diagram with respect to $\tau\in(2200,2225)$ for
the solutions of the phase model (\ref{phase_Mod}) corresponding to the Morris-Lecar model (\ref{MLmodelNor}) with parameter set II. The circles \textcolor{gray}{$\CIRCLE$}/\textcolor{pink}{$\textrm{\ding{98}}$} represent stable/unstable in-phase solutions, \textcolor{green}{$\blacktriangle$}/\textcolor{red}{$\blacksquare$} represents stable/unstable anti-phase solutions, and \textcolor{blue}{$\times$} represents unstable out-of-phase solutions of (\ref{sysA1}). }
\label{fig1}
\end{figure}
\subsection{Small delay}
\label{Sec3_3}
In this subsection, we consider small time delay, in the sense that $\Omega \tau=\mathcal{O}(1)$ with respect to the small parameter $\epsilon$, and compare the results with \cite{campbell2012phase}, where the authors studied this case using parameter set II.
In \cite{campbell2012phase}, the authors studied the dynamics of the phase model corresponding to the full model (\ref{Full_Mod}) without introducing the frequency deviation in their analysis because
the time delay $\eta$ was neglected in the phase model when $\Omega \tau=\mathcal{O}(1)$. We have stated some results from \cite{campbell2012phase} in Section \ref{Sec2_small_Delay}.
As in the previous section we solve (\ref{omega0}) and (\ref{omegapi}) to find
$\omega^*$ for the in-phase and anti-phase solutions and \eqref{sysA1} to find
$(\psi^*,\omega^*)$ for the out-of-phase solutions. We choose $\tau\in (0,15)$,
which is similar to the range chosen by \cite{campbell2012phase}.
In contrast with the results of the last section, here we observe that for
$\psi^*=0,\pi$ there is a {\em unique} solution $\omega^*$
for each $\tau$ in the range we considered.
This agrees with the prediction of the phase model in Section \ref{Sec2_small_Delay}.
We describe our results in more detail below.
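As a purely illustrative numerical sketch (not taken from the paper's computations): suppose the in-phase condition \eqref{omega0} can be written in the fixed-point form $\omega = H(-\omega\epsilon\Omega\tau - \Omega\tau)$, whose argument mirrors the stability condition recalled in the conclusions, and take a hypothetical single-mode interaction function $H(x)=\sin x$. A sign-change scan with bisection refinement then exhibits the dichotomy discussed here: a unique frequency deviation for small delay and multiple coexisting ones for large delay.

```python
import math

def phase_locked_frequencies(H, eps, Omega, tau, w_max=1.5, n_grid=6001):
    """Roots of g(w) = w - H(-w*eps*Omega*tau - Omega*tau), found by a
    sign-change scan over [-w_max, w_max] followed by bisection."""
    g = lambda w: w - H(-w * eps * Omega * tau - Omega * tau)
    ws = [-w_max + 2 * w_max * i / (n_grid - 1) for i in range(n_grid)]
    roots = []
    for a, b in zip(ws[:-1], ws[1:]):
        fa = g(a)
        if fa == 0.0:
            roots.append(a)
        elif fa * g(b) < 0:
            for _ in range(80):          # bisection refinement
                m = 0.5 * (a + b)
                if fa * g(m) <= 0:
                    b = m
                else:
                    a, fa = m, g(m)
            roots.append(0.5 * (a + b))
    return roots

H = math.sin  # hypothetical single Fourier mode
small_delay = phase_locked_frequencies(H, eps=0.01, Omega=1.0, tau=1.0)
large_delay = phase_locked_frequencies(H, eps=0.01, Omega=1.0, tau=2000.0)
```

With these made-up parameter values the small-delay scan returns a single root, while the large-delay scan returns many, matching the qualitative picture described above.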
\begin{figure}[hbt!]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=0.93\textwidth]{small_tauA}
\caption{$\tau\in(0,15)$}
\label{fig2_A}
\end{subfigure}\hspace{-0.5cm}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{small_tau_smooth}
\caption{$\tau\in(2.7,2.84)$}
\label{fig2_B}
\end{subfigure}\hspace{-0.5cm}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{small_tau_smoothA}
\caption{$\tau\in(9.59,9.73)$}
\label{fig2_C}
\end{subfigure}
\caption{ Numerical bifurcation diagram with respect to $\tau\in(0,15)$ for
the phase model (\ref{phase_Mod}) corresponding to the Morris-Lecar model (\ref{MLmodelNor}) with parameter set II. The circles \textcolor{green}{$\CIRCLE$}/\textcolor{red}{$\Circle$} represent stable/unstable solutions in the phase model (\ref{sysA1}).} \label{fig2}
\end{figure}
In Figure \ref{fig2}, we plot the in-phase and anti-phase solutions as $\tau$ varies in $(0,15)$ in the $\tau\psi-$plane. We note that there is similar behaviour in Figure \ref{fig2_A} and \cite[Figure 4b]{campbell2012phase}. The in-phase and anti-phase solutions change stability as $\tau$ increases and their stabilities appear to
be the opposite of each other. To examine the behaviour near changes of stability,
in Figures \ref{fig2_B}$-$\ref{fig2_C} we show the bifurcation diagrams zoomed close
to the two switching points. We see that the transition from
stable in-phase solution to stable anti-phase solution involves
two pitchfork bifurcations and one saddle-node bifurcation of out-of-phase
solutions, which agrees with \cite{campbell2012phase}.
Figure \ref{fig21} shows this behaviour when the solutions are plotted in the
$\tau\omega-$plane.
\review{Furthermore, we observe in Figures \ref{fig2_B}$-$\ref{fig2_C} that there are small intervals of $\tau$ where bistability occurs.
Figure \ref{Fig_v1v2_small_delay} shows the coexistence of stable anti-phase and out-of-phase solutions.}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.7\textwidth]{NewV1V2_Small_delayV2.pdf}
\caption{The coexistence of stable anti-phase (red) and out-of-phase (blue) solutions of (\ref{MLmodelNor}) when $\tau=9.661$ with parameter set II. We take different initial conditions: $(0.664192, 0.204054, 5.58914, 0.762568)^T$ for the red curve and $(-0.883364, -0.200879, -0.686477, -0.989329)^T$ for the blue curve.}
\label{Fig_v1v2_small_delay}
\end{figure}
\begin{remark}
The results in this section are consistent with the results in
\cite{izhikevich1998phase}, which indicate that a phase model
where the time delay enters as a phase shift is accurate
when $\tau$ is small in the full model (\ref{Full_Mod}) in
the sense that \review{$\Omega\tau=\mathcal{O}(1)$ with respect to $\epsilon$ for $0<\epsilon\ll 1$.}
\end{remark}
\begin{figure}[hbt!]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.93\textwidth]{small_tau_VS_omega_A}
\caption{$\tau\in(0,15)$}
\label{fig2_A0}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{small_tau_smooth_Vs_Omega.pdf}
\caption{$\tau\in(2.7,2.84)$}
\label{fig2_B0}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{small_tau_smoothA_VS_Omega.pdf}
\caption{$\tau\in(9.59,9.73)$}
\label{fig2_C0}
\end{subfigure}
\caption{Numerical bifurcation diagram with respect to $\tau$ for the
phase model \eqref{phase_Mod} corresponding to the Morris-Lecar model (\ref{MLmodelNor}) with parameter set II. The circles \textcolor{green}{$\CIRCLE$}/\textcolor{red}{$\Circle$} represent stable/unstable solutions in the phase model (\ref{sysA1}).} \label{fig21}
\end{figure}
\section{Conclusions}
\label{sec_conc}
In this paper, we studied the phase-locking dynamics of a system of two weakly connected oscillators with time-delayed interaction.
By applying the theory of weakly coupled oscillators, we transformed the system into a phase model with an explicit delay in the argument of the phases. We showed that the system always
has phase-locked solutions corresponding to in-phase (synchronous, $0$ phase difference)
and anti-phase (phase difference of half the period) solutions. Further, we showed for
small delay (\review{$\Omega\tau=\mathcal{O}(1)$}) the in-phase and anti-phase solutions
are unique, but for large delay multiple solutions of each type may exist, corresponding to different frequencies. Finally, we showed that phase-locked solutions with any
other phase differences (out-of-phase solutions) are also possible.
Since the phase model is an infinite-dimensional system of delay differential equations, the linearized system about the phase-locked solutions has a countable infinity of eigenvalues.
Through the stability analysis for our model, we discussed the distribution of the
eigenvalues on the complex plane to
provide stability conditions for the in-phase, anti-phase and out-of-phase solutions.
We found that a zero eigenvalue, corresponding to the motion along the phase-locked solutions, always exists for any choice of parameters and functions. We showed that the only
way in which bifurcations can occur is through the existence of (additional) zero
eigenvalues and argued
that the following bifurcations may occur: saddle-node bifurcations of two in-phase solutions
with different frequencies, saddle-node bifurcations of two anti-phase solutions
with different frequencies, saddle-node bifurcations of two different out-of-phase solutions,
pitchfork bifurcations where two out-of-phase solutions arise from an in-phase
or anti-phase solution. We showed that the saddle-node bifurcations of in-phase and
anti-phase solutions only involve unstable solutions.
Our results on in-phase and anti-phase solutions agree with those in
\cite{Schuster1989Mutual,ermentrout2009delays}, which study the phase model
(\ref{Kuramoto_model_Exp_Delay}), with $n=2$ and $H(\cdot)=\sin(\cdot)$.
We note that they emphasized the need for large coupling-strength
for multiple in-phase/anti-phase solutions to exist, however, we show that it is possible
with weak coupling and sufficiently large delays. They do not study out-of-phase
solutions as these are not possible in their model due to the restriction on $H$.
As can be seen in the literature
\cite{crook1997role,park2016weakly,wall2013synchronization,zhang2015robust}, in order for
phase models derived from biophysical oscillator models to adequately capture the
dynamics of the full model, the function $H$ generally must include multiple Fourier modes.
\review{In \cite{campbell2012phase} it was shown that out-of-phase solutions and pitchfork
bifurcations cannot occur in a phase model with
small delay if only the first Fourier modes are included
in $H$. However, when the time delay is large, we showed that both out-of-phase solutions and pitchfork
bifurcations can occur in the phase model with only the first Fourier modes of $H$.
In general, in the case of large time delay, the bifurcation structure may change if some modes are dropped.
If the coefficients of the dropped modes are small, the bifurcation structure would not change significantly; the bifurcation points may merely shift. If the coefficients of the dropped modes are sufficiently large, however, there could be substantial changes in the bifurcation structure.}
When the delay is small (\review{$\Omega\tau=\mathcal{O}(1)$}), Campbell and Kobelevskiy studied the system
\begin{equation} \begin{aligned}
\label{Campbell&Kobelevskiy}
\frac{d\theta_1}{d{t}}&=\Omega+\epsilon H(\theta_2(t)-\theta_1(t)-\Omega\tau),\\
\frac{d\theta_2}{d{t}}&=\Omega+\epsilon H(\theta_1(t)-\theta_2(t)-\Omega\tau),
\end{aligned}\end{equation}
and proved that in-phase and anti-phase solutions are stable when $H'(\phi^*-\Omega \tau)>0$, $\phi^*\in\{0,\pi\}$ in \cite{campbell2012phase}.
On the other hand, when the time delay is large \review{$\epsilon\Omega\tau=\mathcal{O}(1)$}, we proved that these solutions are stable whenever $H'(\phi^*-\omega^*\epsilon\Omega\tau-\Omega \tau)>0$ where $\omega^*$ is the corresponding frequency deviation.
It is clear that the stability condition in the first case is independent of the coupling strength parameter and the frequency deviation. Indeed, under the assumption $\theta_1(t)=\Omega t+\omega t$ and $\theta_2(t)=\Omega t+\omega t+\phi^*$ (see (\ref{phases})), the terms involving the frequency deviation $\omega$ cancel out inside the function $H$ in (\ref{Campbell&Kobelevskiy}).
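The cancellation can be made explicit in one line: writing $\theta_1(t)=\Omega t+\omega t$ and $\theta_2(t)=\theta_1(t)+\phi^*$ gives
\begin{equation*}
\theta_2(t)-\theta_1(t)-\Omega\tau
=\bigl[\theta_1(t)+\phi^*\bigr]-\theta_1(t)-\Omega\tau
=\phi^*-\Omega\tau,
\end{equation*}
so the argument of $H$ in (\ref{Campbell&Kobelevskiy}) does not involve $\omega$. With an explicit delay one instead has $\theta_2(t-\tau)-\theta_1(t)=\phi^*-(\Omega+\omega)\tau$, which retains the frequency deviation.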
In fact, in \cite{campbell2012phase}, the authors reduce (\ref{Campbell&Kobelevskiy}) into a single ordinary differential equation and study the dynamics of the model without introducing the frequency deviation.
Due to the explicit delay in the phase model, we could not reduce the model to a single equation.
For the out-of-phase solutions $\phi^*\notin \{0,\pi\}$, the stability condition $H'(\phi^*-\Omega \tau)>0$ is still valid when the delay is small.
While for the large delay the stability becomes more complicated since the explicit delay
is an additional parameter that needs to be considered in the phase model.
As an example we considered two Morris-Lecar oscillators with delayed, diffusive coupling.
We adopted the parameter values from \cite{campbell2012phase} to compare the results when
the time delay is small. We studied the existence and stability of the phase-locked
solutions, and explored the bifurcations in the phase model by using a four-mode truncation
of the Fourier series for the interaction function, and compared these results with
numerical simulations of the full model.
When the time delay $\tau$ is large, we found:
\begin{itemize}
\item There exists more than one frequency deviation $\omega$ corresponding to the in-phase and anti-phase solutions, i.e., co-existence of multiple stable and unstable solutions;
\item All out-of-phase solutions are unstable;
\item Both the pitchfork and saddle-node bifurcations of in-phase and anti-phase solutions
occur.
\end{itemize}
When the time delay is small, we observed:
\begin{itemize}
\item \review{A unique solution exists in each phase-locked solution category (in-phase, anti-phase and out-of-phase).}
\item The occurrence of saddle-node bifurcations of out-of-phase solutions and
pitchfork bifurcations of in-phase and anti-phase solutions.
\end{itemize}
Our results agree with \cite{campbell2012phase} when the time delay is small and are consistent with the results in \cite{izhikevich1998phase}, that the explicit time delay can be neglected in the phase model when $\tau$ is small.
A special type of phase-locked solutions, so-called \textit{symmetric cluster solutions}, can appear in a network of $n$ identical oscillators, see e.g., \cite{campbell2018phase,okuda1993variety},
\begin{equation}
\label{Campbell&Wang1}
\frac{d \mathbf{X}_{i}}{d t}={\mathbf{F}}\left(\mathbf{X}_{i}(t)\right)+\epsilon \sum_{j=1}^{n} a_{i j} {\mathbf{G}}\left(\mathbf{X}_{i}(t), \mathbf{X}_{j}\left(t-\tau\right)\right), \quad i=1, \ldots, n, \quad \mathbf{X}_i\in\mathbb{R}^m.
\end{equation}
In these solutions, also called travelling wave solutions, oscillators in
the same cluster are synchronized while
those in different clusters have non-zero phase-difference.
In \cite{campbell2018phase}, Campbell and Wang determined conditions for existence and stability of symmetric cluster solutions in (\ref{Campbell&Wang1}) when $\tau$ is small and
the coupling matrix is circulant. Stability conditions for cluster solutions in networks
with small distance dependent delays and random, nearest neighbour coupling have
been formulated by several authors (see \cite{ermentrout2009delays,ko2004wave} and references therein).
When the time delay is large, Earl and Strogatz provided the stability condition
for the in-phase solution ($\theta_i(t)=\Omega t$, i.e., one cluster solution), see \cite{earl2003synchronization}.
For future research, it would be interesting to study the existence and stability of
symmetric cluster solutions in (\ref{Campbell&Wang1}) with large time delay.
\section*{Acknowledgments}
The authors would like to thank the anonymous referees for their careful
reading and helpful suggestions.
\section{Introduction}
The presence of a strongly triaxial object, a bar, at the center of some $30\%$ of disk galaxies provides an opportunity to probe the distribution of dark matter in the inner few disk scale lengths
and to test the maximum disk hypothesis.
The bar pattern speed, $\Omega_p$, which plays an important role in these dynamics, can be parametrized by the ratio $s = D_L/a_B$.
Here, $D_L$ is the distance from the center to the Lagrange point along the bar major axis (loosely known as corotation) and $a_B$ is the bar semi-major axis.
Contopoulos (1980) argued that self-consistent bars can extend no further than co-rotation, i.e. $s \ge 1$.
For our purposes, we will consider bars to be fast when $1.0 \leq s \leq 1.4$.
The prevailing theory of bar formation requires a resonant cavity of spiral density waves (Toomre 1981).
The cavity extends from the center to corotation, where spiral waves are reflected.
Strong amplification at corotation leads to an instability that gives rise to a strong bar when it saturates.
Numerical simulations (e.g. Sellwood 1981; Combes and Sanders 1981) of unstable disks have revealed $s \simeq 1$ for bars formed in this way and those formed in our simulations also start out with a value not much larger than unity.
On the observational side, the evidence is meager but seems to indicate that bars are fast.
Tremaine and Weinberg (1984) devised a direct method for measuring $\Omega_p$ which requires a tracer population that satisfies the continuity equation.
It can therefore be used only for stellar populations free of obscuration, such as in SB0 galaxies.
Kent (1987) and, more reliably, Merrifield and Kuijken (1995) have applied this method to NGC 936, finding $s = 1.4 \pm 0.3$, consistent with a fast bar.
All other determinations of bar pattern speeds are indirect.
In particular, Athanassoula (1992) has modelled gas dynamics in barred galaxies.
Identifying the bar dust lanes with gas shocks, she searched for a
value of $s$ which yielded shock patterns that match the observed dust
lane morphology, and concluded that $\Omega_p$ must be such that $s = 1.2 \pm 0.2$.
Similar studies have been carried out by others, most notably using
potentials obtained from IR photometry (e.g. Lindblad et al. 1996, Weiner et al. 1996). This evidence, which spans Hubble types SB0 to SBc, strongly suggests that all bars are fast in the sense defined above.
However, the fact that all bars appear to rotate rapidly suggests that dynamical friction is weak.
Chandrasekhar (1943) showed that a massive object moving through a uniform background of particles experiences a drag from the trailing wake that it induces.
This retarding force is called dynamical friction.
A similar process takes place when a bar rotates inside a halo: the bar excites a trailing, bisymmetric wake in the halo, which exerts a retarding torque on the bar.
This torque transfers angular momentum from the bar to the halo, slowing the bar down.
Linear perturbative calculations by Weinberg (1985) indicate that bars are slowed down in only a few rotations for Weinberg's massive halos.
One then expects old bars inside massive halos to be slow.
\section{N-body Experiments}
Weinberg's calculations suffered from a number of shortcomings.
In particular, Weinberg modelled his bars with fixed analytic density distributions.
Thus his treatment was not self-consistent and precluded any possibility of a back reaction on the bar (apart from slow-down).
For example, Kormendy's (1979) hypothesis that a bar would dissolve could not be tested in Weinberg's work.
For this reason, we found it desirable to carry out {\it fully self-consistent} N-body simulations to test Weinberg's prediction.
The results of these simulations are presented below.
Athanassoula (1996) is carrying out similar experiments.
\subsection{Setup}
We have sought to generate galaxy models in which the halo is
initially in equilibrium with the disk. We have accomplished this by
integrating iteratively the distribution function of the halo in the
potential of the halo and fixed disk until the halo density converges.
We used either Kuz'min-Toomre or exponential disks, which were thickened vertically by a Gaussian factor.
The vertical velocity dispersion in the disk was chosen to maintain this disk thickening.
The disk velocity distribution in the radial direction was chosen to give a constant Toomre $Q$ at all radii, and we have experimented with different values.
Our halo distribution functions were lowered polytropes with different values of $m$:
\begin{equation}
f = f[(-E)^m] - f[(-E_0)^m]
\end{equation}
where $E_0$ is the energy at some truncation radius.
Quiet starts (e.g. Sellwood 1983) were used to suppress all components of the total linear momentum, and the disk-plane components of the total angular momentum.
The system generated in this way is initially in equilibrium, but the
disk is not stable and a bar rapidly forms.
The simulations were run on 3D cartesian and polar grids; the grid codes used are described in Sellwood \& Merritt (1994) for the cartesian code and Sellwood \& Valluri (1996) for the polar code.
We use units $G = M_d(\infty)/f = a = 1$, where $M_d(\infty)$ is the mass of the analytic disk model out to infinity, $f$ is the fraction of mass in the disk, and $a$ is the length scale of the disk.
The unit of time is the dynamical time $t_{dyn} = \sqrt{a^3/GM}$.
We parametrize the ratio of the halo mass to disk mass in the inner regions by the ratio $ \eta \equiv ( v_{c,disk}/v_{c,halo} )^2$ at the radius of the maximum total circular velocity in the plane of the disk.
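As a hedged sketch of this bookkeeping (the halo profile and all parameter values below are hypothetical, chosen only to mimic a disk-plus-halo rotation curve; the in-plane circular speed of a Kuz'min disk, $v_c^2 = GMR^2/(R^2+a^2)^{3/2}$, is standard): locate the radius of the peak total circular velocity and evaluate $\eta$ there.

```python
def vc2_disk_kuzmin(R, GM=1.0, a=1.0):
    # In-plane circular speed squared of a Kuz'min disk.
    return GM * R**2 / (R**2 + a**2) ** 1.5

def vc2_halo(R, v_inf=0.7, r_c=2.0):
    # Hypothetical halo with an asymptotically flat rotation curve.
    return v_inf**2 * R**2 / (R**2 + r_c**2)

# Radius of the peak total circular velocity, then eta evaluated there.
Rs = [0.01 + 20.0 * i / 3999 for i in range(4000)]
R_peak = max(Rs, key=lambda R: vc2_disk_kuzmin(R) + vc2_halo(R))
eta = vc2_disk_kuzmin(R_peak) / vc2_halo(R_peak)
```

For these illustrative choices $\eta$ comes out of order unity, i.e., neither component strongly dominates at the peak.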
\subsection{Massive halo model}
The massive halo (MH) model had a Kuz'min-Toomre disk with $Q = 0.05$ initially and a halo polytrope index $m = {{3}\over{2}}$.
The rotation curve for the MH model, shown in Figure \ref{vc_r_mh}, peaks at $R = 2.7$ with $\eta = 1.1$.
Thus the dynamics of the disk are strongly influenced by the halo.
By $t = 100$, the MH model forms a fast bar (in these units, the orbital period at $R = 2$ is $31$).
This bar has been compared to that in NGC 936 (kinematic data of Kormendy, 1983).
We found our bar was a little more than twice as strong as the bar in NGC 936.
With the formation of a bar, angular momentum starts to be transferred from the disk to the halo (Figure \ref{jz_om_s_t_mh}(a)).
During this time, a trailing, bisymmetric wake can be identified in
the halo. The trail angle is initially $\sim 45^\circ$, decreasing towards $0 ^\circ$ by $t = 1600$ when the net torque on the bar becomes negligible.
The loss of angular momentum from the bar results in a significant drop in $\Omega_p$ (Figure \ref{jz_om_s_t_mh}(b)).
Although the bar starts out with $ s \simeq 1.0 $, by simulation's end $ s $ has risen to $\sim 2.6$.
This evolution, shown in Figure \ref{jz_om_s_t_mh}(c), occurs despite the continued growth of the bar throughout the simulation.
Other simulations with initial $Q = 1.0$ and $Q = 1.5$ gave similar results.
We have also checked that halo rotation (both direct and retrograde) does not change our basic result, which is that a massive halo model cannot support a fast bar for a Hubble time.
\subsection{``Near maximum disk'' model}
We now report another model in which the disk dominated the inner scale lengths.
The halo polytrope index was set to $m = {{1}\over{2}}$, the lowest value possible for stability against axisymmetric and radial instabilities.
As can be seen from Figure \ref{vc_r_md}, at $R = 3.6$ where $v_c$ peaks, $\eta = 1.93$.
It is in this sense that we refer to this as a ``near maximum disk'' (NMD) model: for a polytropic distribution function, the disk cannot be made any more dominant without making the halo unstable or unphysically hollow.
The initial disk of this simulation was an exponential disk with $Q = 1.0$, resulting in a rotation curve which is more or less flat out to $R \simeq 15$.
The disk formed a fast bar by $t = 150$ (in these units, the orbital period at $R = 2$ is $55$).
Figure \ref{jz_om_s_t_md} shows the evolution of this NMD model.
Although angular momentum is still being transferred from the disk to the halo, it is clear that the torque is much weaker in this case than in the massive halo model.
The effect of this reduced torque is that the bar remains rapidly rotating for a Hubble time.
At simulation's end ($t \simeq 850 $), $s$ has reached a value of $1.4$ and seems to be holding steady at that level.
It therefore seems likely that dynamical friction is not excessively strong in NMD models.
Our result is still rather preliminary however, since so far we have only a single simulation in this regime.
\section{Discussion}
Assuming that all bars are fast, we have shown that barred
galaxies cannot be halo dominated within the inner few scale lengths.
De Blok et al. (these proceedings) find that low surface brightness (LSB)
galaxies are halo dominated at all radii. The detection of slow bars in LSB
galaxies would then provide independent confirmation of this, while
presenting us with examples of previously unobserved slow bars.
We have also shown that near maximum disk models are free of the
dynamical friction problem, although this is still a preliminary
result to be explored further in future work. If this result
turns out to hold in general, then continuity of bar strength from SB
to SAB to SA galaxies could perhaps be used as an argument in favor of near
maximum disks for all bright galaxies.
\section{Introduction.}\label{s:intro}
Diffusion in comb-like structures arises in several applications, such as linear porous media, microscopically disordered fluids, and transport in dendrites and tissues (see for instance~\cites{Young88,ArbogastDouglasEA90,ShowalterWalkington91,BressloffEarnshaw07,DagdugBerezhkovskiiEA07}
and references therein).
Our aim in this paper is to study idealized, periodic, comb-shaped domains in $\R^2$ under scaling regimes where an anomalous diffusive behavior is observed.
We also study scaling limits of a skew Brownian motion on an infinite comb-shaped graph.
In both scenarios we show that under a certain scaling the limiting process is a Brownian motion time-changed by the local time of an independent sticky reflected Brownian motion.
We describe each of these scenarios separately in Sections~\ref{s:ifatcomb} and~\ref{s:ithincomb} below.
\subsection{Anomalous Diffusion in Comb-Shaped Domains.}\label{s:ifatcomb}
Let $h_0 \in (0, \infty]$, and $\alpha, \epsilon > 0$, and let $\Omega_\epsilon \subset \R^2$ be the fattened comb-shaped domain defined by
\begin{equation}\label{e:OmegaEp}
\Omega_\epsilon = \set{
(x, y) \in \R^2 \; \st -\epsilon < y < h_0 \one_{B(\epsilon \Z, \alpha \epsilon^2/2)}(x)
}\,,
\end{equation}
where $B(\epsilon \Z, \alpha \epsilon^2/2) \subseteq \R$ denotes the $\alpha \epsilon^2/2$ neighborhood of $\epsilon \Z$, and $\one$ denotes the indicator function.
Figure~\ref{f:fatcomb} shows a picture of the domain~$\Omega_\epsilon$.
We refer to the region where $-\epsilon < y < 0$ as the spine; $\Omega_\epsilon$ also has teeth of height $h_0$ and width~$\alpha \epsilon^2$, which are spaced $\epsilon$ apart.
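The set definition \eqref{e:OmegaEp} translates directly into a membership test; the following sketch (with arbitrary illustrative parameter values) is only meant to make the geometry concrete.

```python
def in_omega_eps(x, y, eps, alpha, h0=float("inf")):
    """Is (x, y) in the comb domain -eps < y < h0 * 1_{B(eps*Z, alpha*eps^2/2)}(x)?"""
    dist_to_grid = abs(x - eps * round(x / eps))   # distance from x to eps*Z
    upper = h0 if dist_to_grid < alpha * eps**2 / 2 else 0.0
    return -eps < y < upper

eps, alpha, h0 = 0.1, 1.0, 1.0   # illustrative values only
```

A point above the spine lies in $\Omega_\epsilon$ only if it sits inside one of the narrow teeth, while any point with $-\epsilon < y < 0$ belongs to the spine.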
\begin{figure}[hbt]
\begin{center}
\begin{tikzpicture}
\fill[spinefill] (-3.9,0) rectangle (3.9,1);
\draw (-3.9,0) -- (3.9,0);
\foreach \x [count=\n]in {-3.1,-2.1,-1.1,-.1,.9,1.9,2.9}{
\fill[spinefill] (\x,1) rectangle ++(.2,3);
\draw (\x,1) -- ++(-.8,0);
\draw (\x,4) -- ++(.2,0);
}
\foreach \x [count=\n]in {-3.1,-2.9,-2.1,-1.9,-1.1,-.9,-.1,.1,.9,1.1,1.9,2.1,2.9,3.1}{
\draw (\x,1) -- ++(0,3);
}
\draw (3.1,1) -- ++(.8,0);
\draw[<->] (-.1,4.2) -- (.1,4.2);
\draw node at (0,4.5) {$\alpha \epsilon^2$};
\coordinate (A) at (-.9,1.5);
\coordinate (B) at ( 0.1,1.5);
\draw[<->] (A) -- (B) node[midway,fill=white] {$\epsilon$};
\coordinate (A) at (-3.5,1);
\coordinate (B) at ( -3.5,4);
\draw[<->] (A) -- (B) node[midway,fill=white] {$h_0$};
\coordinate (A) at (-3.4,1);
\coordinate (B) at ( -3.4,0);
\draw[<->] (A) -- (B) node[midway,fill=spinefill] {$\epsilon$};
\end{tikzpicture}
\caption{Image of the comb-shaped domain~$\Omega_\epsilon$.
The teeth have width $\alpha \epsilon^2$ and height $h_0$.
The spine has width $\epsilon$, and the teeth are spaced a distance of~$\epsilon$ apart.}
\label{f:fatcomb}
\end{center}
\end{figure}
Let $Z^\epsilon = (X^\epsilon, Y^\epsilon)$ be a Brownian motion in $\Omega_\epsilon$ that is reflected normally on the boundary $\partial \Omega_\epsilon$.
Our aim is to study the limiting behavior of~$Z^\epsilon$ as~$\epsilon \to 0$.
This is an idealized, two dimensional, version of the arterial flow models considered by Young~\cite{Young88}.
Note that the process $Z^\epsilon$ may travel large horizontal distances when it is in the spine, but travels only negligible horizontal distances when it is ``trapped'' inside the teeth.
From the shape of~$\Omega_\epsilon$, one expects that the chance that $Z^\epsilon$ wanders into the teeth from the spine is of order $\alpha \epsilon$.
Since the teeth are spaced $\epsilon$ apart, the process $Z^\epsilon$ encounters $O(1/\epsilon)$ teeth after traveling an~$O(1)$ distance horizontally.
These balance, and after large horizontal distances, the process $Z^\epsilon$ spends comparable amounts of time in the spine and in the teeth.
This leads us to expect that the limiting horizontal behavior of~$Z^\epsilon$ should be described by a Brownian motion that is time-changed so that it only moves when the process is in the spine -- this is our main result.
To state the result, we let $\Omega_0 \stackrel{\Delta}{=} {\mathbb R} \times [0,h_0]$, and let $\pi_\epsilon\colon \Omega_\epsilon \to \Omega_0$ be defined by $\pi_\epsilon(x, y) = (x, y^+)$, where $y^+ = \max\set{y, 0}$ denotes the positive part of~$y$.
Given a probability measure~$\mu^\epsilon$ on~$\Omega_\epsilon$, let~$\pi_\epsilon^*(\mu^\epsilon)$ denote the push forward of~$\mu^\epsilon$, under the map $\pi_\epsilon$, to a probability measure on $\Omega_0$.
We can now state the main result.
\begin{theorem}\label{t:zlimfat}
Let $Z^\epsilon = (X^\epsilon,Y^\epsilon)$ be a normally reflected Brownian motion in $\Omega_\epsilon$ with initial distribution $\mu^\epsilon$.
If the sequence of measures $(\pi_\epsilon^* (\mu^\epsilon) )$ converges weakly to a probability measure $\mu$ on~$\Omega_0$, then the sequence of processes $Z^{\epsilon,+} \stackrel{\Delta}{=} \pi_\epsilon(Z^\epsilon)$ converges weakly as $\epsilon \to 0$.
The limiting process, denoted by $Z = (X,Y)$, can be described as follows.
The initial distribution of~$Z$ is~$\mu$.
The process $Y$ is a Brownian motion on $(0, h_0)$, which is normally reflected at $h_0$ if $h_0 < \infty$, and is stickily reflected (with parameter $1/\alpha$) at $0$.
The process $X$ is a time-changed Brownian motion given by
\begin{equation}\label{e:Xtc}
X_t = \bar W_{ \frac{2}{\alpha}L^{Y}_t(0) }\,,
\end{equation}
where $\bar W$ is a Brownian motion on $\R$ that is independent of $Y$, and $L^Y(0)$ is the local time of~$Y$ at $0$.
\end{theorem}
To clarify notation, we follow the normalization convention of~\cite{KaratzasShreve91}, and define local time of $Y$ at $0$ by
\[
L_t^Y(0)
= \lim_{\delta \to 0} \frac{1}{2 \delta} \int_0^t \one_{\{0 \leq Y_s \leq \delta \}} \,d \qv{Y}_s
= \lim_{\delta \to 0} \frac{1}{2 \delta} \int_0^t \one_{\{0 < Y_s \leq \delta \}} \,ds \,.
\]
In the second equality above we note that the strict inequality $0 < Y_s$ in the integrand is crucial, as the process $Y$ spends a non-negligible time at $0$.
Indeed, recall that the sticky reflection of the process $Y$ at $0$ is characterized by the local time relation
\begin{equation*}
2\, dL^Y_t(0) = \alpha \one_{\set{Y_t = 0}} \, dt\,.
\end{equation*}
Such a process can be constructed explicitly by time changing a reflected Brownian motion, or by using the Hille-Yosida theorem.
We elaborate on this in Section~\ref{s:limitprocess}, below.
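The time-change construction can be mimicked numerically; the discretization below is our own illustrative sketch (grid sizes, seed, the occupation-time approximation of local time, and the parameter values are all arbitrary choices). It time changes a discretized reflected Brownian motion by $\Gamma_t = t + \tfrac{2}{\alpha}L^B_t(0)$, so that the inverse clock is flat precisely when local time accrues and the resulting path lingers at $0$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized reflected Brownian motion B = |W| on [0, T].
T, n = 10.0, 200_000
dt = T / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
B = np.abs(W)

# Occupation-time approximation of the local time of B at 0.
alpha, delta = 1.0, 0.05
dL = (dt / (2 * delta)) * (B[:-1] < delta)

# Time change Gamma_t = t + (2/alpha) * L_t, then Y_s = B_{Gamma^{-1}(s)}.
Gamma = np.concatenate([[0.0], np.cumsum(dt + (2 / alpha) * dL)])
s_grid = np.linspace(0.0, Gamma[-1], n + 1)
Y = B[np.searchsorted(Gamma, s_grid, side="right") - 1]

# The time-changed path spends a much larger fraction of time near 0.
frac_reflected = float(np.mean(B < delta))
frac_sticky = float(np.mean(Y < delta))
```

By construction the steps on which $B$ is near $0$ are stretched in the new clock, so `frac_sticky` exceeds `frac_reflected`.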
We remark that while the statement of Theorem~\ref{t:zlimfat} is intuitive, the proof is not as simple.
The broad outline of the proof follows techniques introduced by Freidlin and Wentzell (see for instance Theorem 8.2.2 in~\cite{FreidlinWentzell12}) and the structure in~\cites{HairerKoralovEA16,HairerIyerEA18}.
However, the key step in establishing the required estimates requires balancing the time spent by~$Z^\epsilon$ in the spine with the local time at the interface between the teeth and spine.
In order to prove this, we require an oscillation estimate on the solution to a certain Neumann problem (Proposition~\ref{p:uosc1}, below).
To the best of our knowledge, the oscillation estimate we require can not be obtained by standard techniques for the following reasons:
First, for the problem at hand energy methods only provide estimates with domain dependent constants.
Since~$\Omega_\epsilon$ varies with~$\epsilon$ these constants may degenerate as~$\epsilon \to 0$.
Second, since we impose Neumann boundary conditions on the entire boundary we may not easily use techniques based on the comparison principle.
We prove the oscillation estimate here directly by using a probabilistic argument, and this comprises the bulk of the proof of Theorem~\ref{t:zlimfat}.
\medskip
Notice that Theorem~\ref{t:zlimfat} immediately yields the behavior of the variance of the horizontal displacement.
This question has been studied by various authors (see for instance~\cite{BerezhkovskiiDagdugEA14} and references therein), and is of interest as it is an easily computable benchmark indicating anomalous diffusion.
\begin{corollary}\label{c:var}
If~$h_0 < \infty$ then
\begin{subequations}
\begin{gather}
\label{e:varShort}
\lim_{t \to 0}
\lim_{\epsilon \to 0}
\frac{1}{t} \E^{(x,0)} \abs{X^\epsilon_t - x}^2 = 1\,,
\\
\label{e:varLong}
\lim_{t \to \infty} \lim_{\epsilon \to 0}
\frac{1}{t} \E^{(x,0)} \abs{X^\epsilon_t - x}^2 = \frac{1}{\alpha h_0 + 1}\,.
\end{gather}
\end{subequations}
If~$h_0 = \infty$, then~\eqref{e:varShort} still holds.
However, instead of~\eqref{e:varLong} we have
\begin{equation}\label{e:varLongInf}
\lim_{t \to \infty} \lim_{\epsilon \to 0} \frac{1}{\sqrt{t}}
\E^{(x,0)} \abs{X^\epsilon_t - x}^2 = \frac{1}{\alpha} \paren[\Big]{\frac{8}{\pi}}^{1/2}\,.
\end{equation}
\end{corollary}
Here we clarify that the notation $\E^{(x, 0)}$ refers to the expectation under the probability measure~$\P^{(x,0)}$ under which $(X^\epsilon_0, Y^\epsilon_0) = (x, 0)$ almost surely.
Note that when $h_0 < \infty$, the variance is asymptotically linear with slope~$1$ at short time, and asymptotically linear at long time with slope strictly smaller than~$1$.
On the other hand, when $h_0 = \infty$ the variance is asymptotically linear for short time, and asymptotically $O(\sqrt{t})$ for long time, indicating an anomalous sub-diffusive behavior on long time scales.
This was also previously observed by Young~\cite{Young88}.
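The constant in~\eqref{e:varLongInf} can be read off from the time-change construction in Section~\ref{s:timechange}: since $X_t = \bar W_{\frac{2}{\alpha} L^Y_t(0)}$, we have $\E |X_t - x|^2 = \frac{2}{\alpha} \E L^Y_t(0)$, and for $h_0 = \infty$ the process $Y$ is reflected Brownian motion on $[0,\infty)$, whose expected boundary local time (with the usual Skorokhod normalization) equals $\E \abs{W_t} = (2t/\pi)^{1/2}$ by L\'evy's theorem. The following Python sketch is an illustrative Monte Carlo check of this identity; it plays no role in the proofs.

```python
import numpy as np

rng = np.random.default_rng(0)

# For h0 = infinity, Y is reflected Brownian motion on [0, infinity) and, with
# the usual Skorokhod normalization, E L^Y_t(0) = E|W_t| = sqrt(2 t / pi).
def mean_abs_bm(t, n_samples=200_000):
    """Monte Carlo estimate of E|W_t| for a standard Brownian motion W."""
    return np.mean(np.abs(rng.normal(0.0, np.sqrt(t), size=n_samples)))

t, alpha = 4.0, 1.0
estimate = mean_abs_bm(t)
exact = np.sqrt(2 * t / np.pi)
# E|X_t - x|^2 = (2/alpha) E L^Y_t(0), which equals (1/alpha) (8/pi)^{1/2} sqrt(t)
variance_long = (2 / alpha) * estimate
print(estimate, exact, variance_long)
```

The Monte Carlo mean matches $(2t/\pi)^{1/2}$, and multiplying by $2/\alpha$ reproduces the right-hand side of~\eqref{e:varLongInf}.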
In addition to the variance, another quantity of interest is the limiting behavior of the probability density function.
This is essentially a PDE homogenization result that also follows quickly from Theorem~\ref{t:zlimfat}.
Explicitly, let~$u^\epsilon$ represent the concentration density of a scalar diffusing in the region $\Omega_\epsilon$.
When the diffusivity is normalized to be~$1/2$ and the boundaries are impermeable, the time evolution of~$u^\epsilon$ is governed by the heat equation with Neumann boundary conditions:
\begin{subequations}
\begin{alignat}{2}
\label{e:heat1}
\span
\partial_t u^\epsilon - \frac{1}{2} \lap u^\epsilon = 0
&\qquad&
\text{in } \Omega_\epsilon
\\
\label{e:heat2}
\span
\partial_\nu u^\epsilon = 0
&&
\text{on } \partial \Omega_\epsilon\,.
\end{alignat}
\end{subequations}
Using Theorem~\ref{t:zlimfat} we can show that $u^\epsilon$ converges as $\epsilon \to 0$, and obtain effective equations for the limit.
The same equations were also obtained heuristically by Young~\cite{Young88}.
\begin{corollary}\label{c:pde}
Let $u_0 \colon \Omega_0 \to \R$ be a bounded continuous function, and let $u^\epsilon$ be the solution to~\eqref{e:heat1}--\eqref{e:heat2} with initial data~$u_0 \circ \pi_\epsilon$.
Let $(\mu^\epsilon)$ be a family of probability measures on $\Omega_\epsilon$ such that $(\pi_\epsilon^*(\mu^\epsilon))$ converges weakly to a probability measure~$\mu$ on~$\Omega_0$.
Then for any $t > 0$ we have
\begin{equation}\label{e:uepConv}
\lim_{\epsilon \to 0} \int_{\Omega_\epsilon} u^\epsilon(z, t) \, d\mu^\epsilon(z)
= \int_{\Omega_0} u(z, t) \, d\mu(z)\,,
\end{equation}
where $u \colon \Omega_0 \times [0, \infty) \to \R$ is the unique solution of the system
\begin{subequations}
\begin{alignat}{2}
\label{e:rho1}
\span
\partial_t u - \frac{1}{2} \partial_y^2 u = 0\,,
&\qquad&
\text{for } t > 0,\ y \in (0, h_0)\,,
\\
\label{e:rho2}
\span
\alpha \partial_y u + \partial_x^2 u = \partial_y^2 u\,,
&&
\text{when }y = 0\,,
\\
\label{e:rho3}
\span
\partial_y u = 0
&&
\text{when } y = h_0\,,
\\
\label{e:rho4}
\span
u = u_0
&&
\text{when } t = 0\,.
\end{alignat}
\end{subequations}
\end{corollary}
Since large scale transport only occurs in the $x$-direction, one is often only interested in the limiting behavior in this direction.
This can be obtained by taking the slice of $u$ at $y = 0$, leading to a self-contained time-fractional equation, similar to the Basset equation~\cite{Basset86}.
We remark that such time-fractional PDEs associated with time-changed diffusions have been studied in more generality in~\cites{BaeumerMeerschaertEA09} (see also~\cites{Cohn18,MagdziarzSchilling15}), and we refer the reader to these papers for the details.
\begin{proposition}\label{p:ftime}
Let $v(x, t) = u( x, 0, t)$, where $u$ is the solution of~\eqref{e:rho1}--\eqref{e:rho4}.
Then~$v$ satisfies
\begin{equation}\label{e:ev}
\partial_t v + \frac{\alpha}{2} \partial_t^w v - \frac{1}{2} \partial_x^2 v = \frac{\alpha}{2} f\,,
\end{equation}
with initial data $v(x, 0) = u_0(x, 0)$.
The operator $\partial_t^w$ appearing above is a \emph{generalized Caputo derivative}
defined by
\begin{equation*}
\partial_t^w v(x, t) \stackrel{\Delta}{=} \int_0^t w(t-s) \partial_t v(x, s) \, ds\,,
\end{equation*}
where $w$ is defined by
\begin{equation*}
w(t) \stackrel{\Delta}{=}
\frac{2}{h_0} \sum_{k=0}^\infty \exp\paren[\Big]{
-\frac{(2k + 1)^2 \pi^2 t}{8 h_0^2}
}\,.
\end{equation*}
The function $f$ appearing on the right of~\eqref{e:ev} can be explicitly determined in terms of $u_0$ by the identity $f = f(x, t) = \partial_y g(x, 0, t)$, where $g = g(x, y, t)$ solves
\begin{alignat*}{2}
\span
\partial_t g - \frac{1}{2} \partial_y^2 g = 0
&\qquad&
\text{for } t > 0,\ y \in (0, h_0)\,,
\\
\span
g(x, 0, t) = g(x, h_0, t) = 0
&&
\text{for } t > 0\,,
\\
\span
g(x, y, 0) = u_0(x, y) - u_0(x, 0)
&&
\text{for } y \in (0, h_0),\ t = 0\,.
\end{alignat*}
\end{proposition}
\begin{remark*}
As we will see later, the Laplace transform of~$w$ is given by
\begin{equation}\label{e:LW}
\mathcal L w(s)
= \int_0^\infty e^{-s t} w(t) \, dt
= \frac{2\tanh \paren{ h_0 \sqrt{2 s} }}{\sqrt{2 s}} \,.
\end{equation}
For $h_0 = \infty$,
\begin{equation*}
w(t) = \paren[\Big]{ \frac{2}{\pi t} }^{1/2}\,,
\qquad\text{and}\qquad
\mathcal Lw(s)
= \paren[\Big]{\frac{2}{s}}^{1/2}\,.
\end{equation*}
In this case, $\partial_t^w$ is precisely $\sqrt{2} \partial_t^{1/2}$, the standard Caputo derivative of order $1/2$ (see for instance~\cite{Diethelm10}), and equation~\eqref{e:ev} becomes the Basset differential equation~\cite{Basset86}.
\end{remark*}
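Transforming the series defining~$w$ termwise gives $\mathcal L w(s) = \frac{2}{h_0} \sum_{k \geq 0} \bigl( s + (2k+1)^2 \pi^2 / (8 h_0^2) \bigr)^{-1}$, and \eqref{e:LW} then follows from the partial fraction expansion of $\tanh$. The following Python sketch is an illustrative numerical check of this identity; the truncation level is chosen so that the tail of the series is negligible.

```python
import math

# Termwise Laplace transform of the series defining w:
#   w(t) = (2/h0) * sum_{k >= 0} exp(-(2k+1)^2 pi^2 t / (8 h0^2)),
# so L w(s) = (2/h0) * sum_{k >= 0} 1 / (s + (2k+1)^2 pi^2 / (8 h0^2)).
def laplace_w_series(s, h0, n_terms=200_000):
    total = 0.0
    for k in range(n_terms):
        lam = (2 * k + 1) ** 2 * math.pi ** 2 / (8 * h0 ** 2)
        total += 1.0 / (s + lam)
    return 2.0 / h0 * total

def laplace_w_closed(s, h0):
    """Closed form from (e:LW): 2 tanh(h0 sqrt(2s)) / sqrt(2s)."""
    root = math.sqrt(2.0 * s)
    return 2.0 * math.tanh(h0 * root) / root

s, h0 = 1.0, 1.0
series_val = laplace_w_series(s, h0)
closed_val = laplace_w_closed(s, h0)
print(series_val, closed_val)
```

The two values agree to the accuracy of the truncation, for any choice of $s > 0$ and $h_0 > 0$.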
Finally we conclude this section with two remarks on generalizations of Theorem~\ref{t:zlimfat}.
\begin{remark}[Other scalings]
The widths of the spine and teeth may be scaled in different ways while still obtaining the same limiting process as in Theorem~\ref{t:zlimfat}.
Explicitly, let
\begin{equation*}\label{scale2}
\tilde \Omega_\epsilon = \set{
(x, y) \in \R^2 \; \st -w_S(\epsilon) < y < h_0 \one_{B(\epsilon \Z, w_T(\epsilon)/2)}(x)
}\,,
\end{equation*}
where $w_S(\epsilon)$ and $w_T(\epsilon)$ denote the width of the spine and teeth respectively.
We claim that Theorem~\ref{t:zlimfat} still holds (with the same limiting process), provided
\begin{equation}
\lim_{\epsilon \to 0} \frac{w_T(\epsilon)}{\epsilon w_S(\epsilon)} = \alpha \in (0, \infty)\,,
\qquad\text{and}\qquad
\lim_{\epsilon \to 0} w_S(\epsilon) = 0\,. \label{genscale}
\end{equation}
The proof of Theorem~\ref{t:zlimfat} needs to be modified slightly to account for this more general statement.
These modifications are described in Section~\ref{sec:otherscaling}, below.
In the degenerate case when $\alpha = 0$, the process $Z^\epsilon$ rarely enters the teeth and the limiting behavior is simply that of a horizontal Brownian motion.
On the other hand, if $\alpha = \infty$, then the process $Z^\epsilon$ enters the teeth too often, and the limiting behavior is simply that of a vertical, doubly reflected, Brownian motion.
\end{remark}
\begin{remark}[Higher dimensional models]
Theorem \ref{t:zlimfat} can also be extended to analogous higher-dimensional models.
For example, let $\Omega_\epsilon' \subseteq \R^3$ be a three dimensional ``brush'', defined by
\[
\Omega_\epsilon' \stackrel{\Delta}{=} \bigcup_{k \in \Z} (Q_k \cup T_k)\,.
\]
Here $Q_k$ and~$T_k$ are defined by
\begin{align*}
Q_k &\stackrel{\Delta}{=} \paren[\big]{\epsilon k-\frac{\epsilon}{2}, \epsilon k + \frac{\epsilon}{2}} \times \paren[\big]{-\frac{\epsilon}{2}, \frac{\epsilon}{2}} \times [-\epsilon,0), \\
T_k &\stackrel{\Delta}{=} \set[\big]{ (x_1,x_2,x_3) \in \R^3 \st \paren{(x_1 - \epsilon k)^2 + x_2^2}^{1/2} \leq r \epsilon^{3/2}, \quad x_3 \in [0,h_0) }.
\end{align*}
In this case, the spine is the set $\cup_{k} \overline{Q_k}$, an infinite rectangular cylinder; the cylindrical teeth $T_k$ are spaced $O(\epsilon)$ apart and have radius $r\epsilon^{3/2} > 0$. If $Z^\epsilon$ is a Brownian motion in this domain with normal reflection at the boundary, then one obtains an analogous scaling limit as $\epsilon \to 0$. The $O(\epsilon^{3/2})$ scaling of the radius of the teeth is chosen so that the ratio
\[
\frac{2 \text{Vol}(Q_k)}{\text{Area}( \overline{Q_k} \cap \overline{T_k} )} = \frac{2}{\pi r^2}
\]
is independent of $\epsilon$ -- this constant ratio plays the same role as the constant $\sfrac{2}{\alpha}$ in the comb-shaped domain~$\Omega_\epsilon$.
While our proof of Theorem \ref{t:zlimfat} extends to this higher-dimensional version in a straightforward way, the added modifications are technical.
Thus, for simplicity and clarity of presentation, we focus only on the comb-shaped domain as defined above for Theorem~\ref{t:zlimfat}.
\end{remark}
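To see that the ratio above is independent of $\epsilon$, note that $2\text{Vol}(Q_k) = 2\epsilon^3$ while $\text{Area}(\overline{Q_k} \cap \overline{T_k}) = \pi (r \epsilon^{3/2})^2 = \pi r^2 \epsilon^3$, so the factors of $\epsilon^3$ cancel. The following short Python check (illustrative only) confirms the cancellation numerically.

```python
import math

def brush_ratio(eps, r):
    """2 Vol(Q_k) / Area(closure(Q_k) cap closure(T_k)) for the 3d brush."""
    vol_q = eps ** 3                        # Q_k is an eps x eps x eps box
    area = math.pi * (r * eps ** 1.5) ** 2  # tooth cross-section at x3 = 0
    return 2.0 * vol_q / area

r = 0.3
vals = [brush_ratio(eps, r) for eps in (1.0, 0.1, 0.01)]
print(vals, 2.0 / (math.pi * r ** 2))
```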
\subsection{Anomalous Diffusion in Comb-Shaped Graphs.}\label{s:ithincomb}
We now turn our attention to comb-shaped graphs, with the intention of studying a simpler version of the model in Section~\ref{s:ifatcomb} and of relating it to other work on trapped random walks.
Related random walk models on comb-shaped discrete graphs have been studied by several authors, including~\cites{BZ03,Ber06,CCFR09,CsakiCsorgoEA11}.
In each of these works, a limit process is obtained which involves a Brownian motion time-changed by the local time of an independent Brownian motion.
One difference between these works and Theorem~\ref{t:zlim} below is that the limiting process in our result involves Brownian motion with sticky reflection, a consequence of the gluing condition described below.

More closely related to our model are the works~\cites{BenArousCerny07,BenArousCabezasEA15}, especially Section~3.2 of~\cite{BenArousCabezasEA15}, where the trapping and drift of the random walk play a role that is similar to our gluing condition.
In Section~\ref{s:tbm} below, we will use the framework in~\cite{BenArousCabezasEA15} for an alternate proof of our result in this simpler setting, illuminating the relationship between these models.
Nevertheless, the analyses in these other works do not apply to the comb-shaped domains considered in Section~\ref{s:ifatcomb}, where the boundary local time of the (pre-limit) diffusion process plays an essential role.
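The anomalous scaling behind these trapped random walk models is already visible in the simplest discrete analogue: a nearest-neighbour walk on a comb lattice with an infinite tooth at every integer. The Python sketch below is our own illustration (with one simple choice of jump probabilities; it is not the process~$Z^\epsilon$ defined below), and it exhibits the sub-diffusive growth $\E X_n^2 = O(\sqrt n)$, consistent with the $\sqrt t$ scaling in~\eqref{e:varLongInf}.

```python
import numpy as np

rng = np.random.default_rng(1)

def comb_walk_x_var(n_steps, n_walkers=20_000):
    """Nearest-neighbour walk on the comb lattice Z x Z_{>=0} with an infinite
    tooth at every integer.  On the spine (y == 0) the walker steps left, right,
    or up with probability 1/3 each; inside a tooth (y > 0) it steps down or up
    with probability 1/2 each.  Returns the empirical variance of the horizontal
    coordinate after n_steps."""
    x = np.zeros(n_walkers, dtype=np.int64)
    y = np.zeros(n_walkers, dtype=np.int64)
    for _ in range(n_steps):
        on_spine = y == 0
        u = rng.random(n_walkers)
        x += np.where(on_spine & (u < 1 / 3), -1, 0)
        x += np.where(on_spine & (u >= 1 / 3) & (u < 2 / 3), 1, 0)
        y += np.where(on_spine & (u >= 2 / 3), 1, 0)
        y += np.where(~on_spine, np.where(u < 0.5, -1, 1), 0)
    return x.var()

v1 = comb_walk_x_var(500)
v4 = comb_walk_x_var(2000)
# For E X_n^2 ~ c sqrt(n), quadrupling n should roughly double the variance.
print(v1, v4, v4 / v1)
```

A diffusive walk would show the variance quadrupling when the number of steps quadruples; here the ratio stays well below $4$, reflecting the square-root growth caused by trapping in the teeth.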
We consider the infinite connected comb-shaped graph $\mathcal C_\epsilon \subset \R^2$ defined by
\begin{equation}\label{e:CepDef}
\mathcal C_\epsilon = \paren[\big]{\R \times \set{0}} \cup \paren[\big]{ \epsilon \Z \times [0, h_0) }\,.
\end{equation}
We think of $\R \times \set{0}$ as the \emph{spine} of~$\mathcal C_\epsilon$, and $\epsilon \Z \times [0, h_0)$ as the infinite collection of teeth.
The teeth meet the spine at the junction points $J_\epsilon \subseteq \mathcal C_\epsilon$ defined by
\begin{equation}\label{e:JepDef}
J_\epsilon \stackrel{\Delta}{=} (\epsilon \Z) \times \set{0}\,.
\end{equation}
The graph~$\mathcal C_\epsilon$ is depicted in Figure~\ref{f:thincomb}.
\begin{figure}[hbt]
\begin{center}
\begin{tikzpicture}
\foreach \x [count=\n]in {-4,-3.5,...,4}{
\node at (\x,0) [circle,fill=black,inner sep=0pt,minimum size=4pt] {};
\draw [thick] (\x,0) -- ++(0,3);
};
\draw [thick] (-4.25,0) -- (4.25,0);
\draw[<->] (-4.75,0) -- ++(0,3) node[midway,fill=white] {$h_0$};
\draw[decorate,decoration={brace,mirror}] (0,-.2) -- ++(.5,0)
node[pos=.5, anchor=north, yshift=-2pt] {$\epsilon$};
\end{tikzpicture}
\caption{Image of the comb-shaped graph~$\mathcal C_\epsilon$.
The teeth are spaced $\epsilon$ apart and have height $h_0$.}\label{f:thincomb}
\end{center}
\end{figure}
Let~$Z^\epsilon = (X^\epsilon, Y^\epsilon)$ be a diffusion on~$\mathcal C_\epsilon$ such that away from the junction points~$J_\epsilon$, the process~$Z^\epsilon$ is a standard Brownian motion.
If $h_0 < \infty$, we reflect~$Z^\epsilon$ at the ends of the teeth.
At the junction points, we specify a ``gluing condition'' which dictates that~$Z^\epsilon$ enters the teeth with probability~$\alpha \epsilon / (2 + \alpha \epsilon)$, and stays in the spine with probability~$2 / (2 + \alpha \epsilon)$.
One can formulate this precisely by requiring the local time balance
\begin{equation*}
L^{X^\epsilon}_t(J_\epsilon) = \frac{2}{2 + \alpha \epsilon} L^{Z^\epsilon}_t(J_\epsilon)\,,
\qquad
L^{Y^\epsilon}_t(J_\epsilon) = \frac{\epsilon}{2 + \alpha \epsilon} L^{Z^\epsilon}_t(J_\epsilon)\,,
\end{equation*}
at the junction points, and we describe this further in Section~\ref{s:thincomb}.
Alternately, one can make the gluing condition precise by using the excursion decomposition of~$Z^\epsilon$, and we do this in Section~\ref{s:excursion}.
The diffusion on the comb-shaped graph~$\mathcal C_\epsilon$ described above is clearly a simplified model of the diffusion on the comb-shaped domain~$\Omega_\epsilon$.
Our main result in this section shows convergence of~$Z^\epsilon$ to the same limit process as that in Theorem~\ref{t:zlimfat}.
\begin{theorem} \label{t:zlim}
Let $(\mu^\epsilon)$ be a sequence of probability measures on~$\mathcal C_\epsilon$ which converge weakly to a probability measure $\mu$ on $\Omega_0 \stackrel{\Delta}{=} \R \times [0,h_0]$.
Let $Z^\epsilon$ be the above graph diffusion with initial distribution~$\mu^\epsilon$.
Then, as $\epsilon \to 0$, the processes~$Z^\epsilon$ converge weakly to the same limit process~$Z = (X, Y)$ defined in Theorem~\ref{t:zlimfat}.
\end{theorem}
The proof of Theorem~\ref{t:zlim} is technically and conceptually much simpler than that of Theorem~\ref{t:zlimfat}, and is presented in Section~\ref{s:thincomb}.
Moreover, the excursion decomposition of~$Z^\epsilon$ on the comb-shaped graph~$\mathcal C_\epsilon$ allows for an elegant proof using time changes and the trapped Brownian motion framework in~\cites{BenArousCabezasEA15}.
We present this approach in Section~\ref{s:excursion}.
The process~$Z^\epsilon$ on the comb-shaped graph~$\mathcal C_\epsilon$ is closely related to a model of fluid flow in fissured media, where trapping in microscopic regions of low permeability yields a macroscopic anomalous diffusive effect. Explicitly, consider a medium composed of two materials: a set of \emph{blocks}, where the permeability is relatively low, and~\emph{fissures}, where the permeability is relatively high (see for instance~\cites{ArbogastDouglasEA90,ShowalterWalkington91,BourgeatLuckhausEA96}).
Assuming that the region occupied by the fissures is connected and that the blocks are arranged periodically, the fluid flow in this situation is modeled by the equation
\begin{equation*}
\partial_t u^\epsilon - \dv \paren[\big]{ a^\epsilon \grad u^\epsilon } = f\,,
\qquad
a^\epsilon(x)
= \one_F\paren[\Big]{\frac{x}{\epsilon}} a\paren[\Big]{\frac{x}{\epsilon}}
+ \epsilon^2 \one_B\paren[\Big]{\frac{x}{\epsilon}} A\paren[\Big]{\frac{x}{\epsilon}}\,.
\end{equation*}
Here $a$ and $A$ are uniformly elliptic matrices representing the permeability in the fissures and blocks respectively, and $F$ and $B$ denote the regions occupied by the fissures and blocks respectively. For this linear model, Clark~\cite{Clark98} proved that as $\epsilon \to 0$, the functions $u^\epsilon$ two-scale converge to a function $U = U(x, y, t)$ that satisfies a coupled system, called the double-porosity model, in which the fluid in the fissures is driven in a non-local manner by the fluid in the blocks.
To understand this model probabilistically, one could study a diffusion~$\tilde Z^\epsilon$ whose generator is~$\dv a^\epsilon \grad$.
Inside the fissures, the process~$\tilde Z^\epsilon$ diffuses freely until it hits the boundary of a block.
Upon hitting a block boundary, the contrast between the block and fissure permeabilities dictates that~$\tilde Z^\epsilon$ enters the blocks with probability~$O(\epsilon)$, and remains in the fissures with probability $1 - O(\epsilon)$.
Since the blocks have diameter $O(\epsilon)$, and the permeability there is $O(\epsilon^2)$, the excursions of $\tilde Z^\epsilon$ into the blocks take an~$O(1)$ amount of time. These characteristic features are exactly captured by the above comb model: the spine plays the role of the fissures and the teeth play the role of the blocks (rescaled to have size~$1$), and our gluing condition dictates that~$Z^\epsilon$ enters the teeth with probability~$O(\epsilon)$.
\subsection*{Plan of this paper}
The rest of the paper is organized as follows.
We begin by describing the limit process~$Z$, and study its basic properties in Section~\ref{s:limitprocess}.
Next, in Section~\ref{s:fatcomb} we prove Theorem~\ref{t:zlimfat} and all the required lemmas.
In Section~\ref{s:thincomb} we prove Theorem~\ref{t:zlim} on the comb-shaped graph~$\mathcal C_\epsilon$.
The proof is similar to that of Theorem~\ref{t:zlimfat}, but the technicalities are much simpler.
Finally, in Section~\ref{s:excursion} we provide an alternate proof of Theorem~\ref{t:zlim} using the trapped Brownian motion framework in~\cite{BenArousCabezasEA15}.
\section{The Limit Process.}\label{s:limitprocess}
Before proving our main results in this paper, we give a more thorough description of the limit process $Z = (X,Y)$.
There are two canonical constructions of this process.
The first, relatively well-known construction involves directly writing $Y$ as a time-changed Brownian motion, and this is presented in Section~\ref{s:timechange}.
The second construction characterizes the process through its generator.
While this second approach is technically more involved, it relates directly to the PDE analogue and immediately yields Corollary~\ref{c:pde}.
\begin{remark}\label{r:h0eq1}
The process~$Z$ depends on the parameters~$\alpha > 0$, and $h_0 \in (0, \infty]$.
To simplify the presentation, we will subsequently assume~$h_0 = 1$.
The case $h_0 = \infty$ may be handled by replacing the normal reflection at $1$ with a diffusion on the semi-infinite interval $(0, \infty)$.
\end{remark}
\subsection{Construction via Time Changes.}\label{s:timechange}
We begin by constructing the limit process~$Z$ using a time-changed Brownian motion.
To construct the process~$Y$, let $\bar B_t$ be a standard doubly reflected Brownian motion on the interval $[0, 1]$.
(Recall that in Remark~\ref{r:h0eq1} we assumed~$h_0 = 1$ for simplicity.)
Let $L^{\bar{B}}_s(0)$ be the local time of $\bar B$ at $0$, and define
\begin{equation*}
\varphi(s) \stackrel{\Delta}{=} s + \frac{2}{\alpha}L^{\bar{B}}_s(0), \quad s \geq 0 \,.
\end{equation*}
Let $T$, defined by
\begin{equation}\label{e:Tdef}
T(t) = T_t \stackrel{\Delta}{=} \varphi^{-1}(t) = \inf \set{ s \geq 0 \st \varphi(s) \geq t }\,,
\end{equation}
denote the inverse of $\varphi$.
Since $\varphi$ is strictly increasing, note that $T$ is continuous.
Thus the process~$Y$, defined by
\begin{subequations}
\begin{equation}\label{e:limitdef1}
Y_t \stackrel{\Delta}{=} \bar B_{T_t}\,,
\end{equation}
is a continuous process on $[0, 1]$.
Clearly, on any interval of time where $Y$ remains inside the interval $(0, 1]$, trajectories of $Y$ and $\bar B$ are identical.
When $Y$ hits $0$, however, the trajectories are slowed down on account of the time change~$T$.
The behavior at~$0$ is known as a \emph{sticky reflection with parameter~$1/\alpha$} at~$0$, and we refer the reader to~\cite[14, \S5.7]{ItoMcKean74} or the original papers of Feller~\cites{Feller52,Feller54} for more details.
Clearly once the process~$Y$ is known, the process~$X$ can be recovered using~\eqref{e:Xtc}, reproduced here for convenience:
\begin{equation}\label{e:limitdef2}
X_t \stackrel{\Delta}{=} \bar W_{ \frac{2}{\alpha}L^{Y}_t(0) }\,.
\end{equation}
\end{subequations}
Here~$\bar W$ is standard one dimensional Brownian motion that is independent of $\bar B$.
Intuitively, we think of $\R \times \{0\}$ as the spine of the limiting comb, and ${\mathbb R} \times (0,h_0]$ as the continuum of teeth.
The process $T_t$ may be interpreted as the time accumulated in the teeth, and $ \frac{2}{\alpha}L^Y_t(0)$ is the time accumulated in the spine.
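For intuition, the time-change construction above is easy to sketch numerically. The following Python fragment is an editorial illustration and not part of the paper: all function names are ours, the scheme is a crude Euler discretization, and the local time $L^{\bar B}_s(0)$ is approximated by the normalized occupation of a small window $[0, \delta)$, so it illustrates the mechanism rather than giving a quantitative approximation.

```python
import math
import random

def sticky_bm_path(alpha, t_max, dt, seed=0):
    """Sketch of the time-change construction: simulate a doubly reflected
    Brownian motion bar(B) on [0, 1] by folding, approximate its local time
    at 0 by occupation of the window [0, delta), form
    phi(s) = s + (2/alpha) L_s, and read off Y_t = bar(B)_{T_t} with
    T = phi^{-1}.  (Window width and normalization are heuristic choices.)"""
    rng = random.Random(seed)
    delta = 2.0 * math.sqrt(dt)      # local-time smoothing window (assumption)
    n = int(t_max / dt) + 1
    b = [0.0] * n                    # reflected path bar(B), started at 0
    local = [0.0] * n                # approximate L^{bar B}_s(0)
    phi = [0.0] * n                  # phi(s) = s + (2/alpha) L_s
    for k in range(1, n):
        x = b[k - 1] + rng.gauss(0.0, math.sqrt(dt))
        x = x % 2.0                  # fold the line into [0, 2) ...
        b[k] = x if x <= 1.0 else 2.0 - x   # ... then into [0, 1]
        occ = dt / (2.0 * delta) if b[k] < delta else 0.0
        local[k] = local[k - 1] + occ
        phi[k] = k * dt + (2.0 / alpha) * local[k]
    # T_t = inf{ s : phi(s) >= t }, evaluated on the grid by a forward scan
    y, j, t = [], 0, 0.0
    while t <= t_max:
        while j < n - 1 and phi[j] < t:
            j += 1
        y.append(b[j])               # Y_t = bar(B)_{T_t}
        t += dt
    return y
```

On any stretch where the path stays away from $0$, `phi` advances at unit speed and `y` just follows `b`; near $0$ the local-time term inflates `phi`, so the inverse clock `T` stalls and the sampled path lingers at $0$, which is exactly the sticky reflection described above.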
\subsection{The SDE Description.}
We now describe the process $Z = (X, Y)$ via a system of SDEs.
Let $W$ and $B$ be two independent standard one dimensional Brownian motions.
We claim that the process $Z$ can be characterized as the solution of the system of SDEs
\begin{subequations}
\begin{gather}
\label{e:sdeX}
dX_t = \one_{\set{Y_t = 0}} \, d W_t \,,\\
\label{e:sdeY}
dY_t = \one_{\set{Y_t \not= 0}} \, d B_t
- dL^Y_t(1)
+ dL^Y_t(0)\,,\\
\label{e:localtime}
\alpha \one_{\set{Y_t = 0}} \, dt = 2\, dL^Y_t(0)\,,
\end{gather}
\end{subequations}
with initial distribution~$\mu$.
Existence of a process $Z$ satisfying~\eqref{e:sdeX}--\eqref{e:localtime} can be shown abstractly using the Hille-Yosida theorem, and we refer the reader to~\cite{Cohn18} for the details.
Instead, we will show existence by showing that the process~$Z$ constructed in the previous section is a solution to~\eqref{e:sdeX}--\eqref{e:localtime}.
\begin{lemma}\label{l:sdeZ}
The process~$Z = (X, Y)$ defined by~\eqref{e:limitdef1}--\eqref{e:limitdef2} is a weak solution to the system~\eqref{e:sdeX}--\eqref{e:localtime}.
\end{lemma}
The proof of Lemma~\ref{l:sdeZ} boils down to an SDE characterization of sticky Brownian motion that was recently shown by Engelbert and Peskir~\cite{EngelbertPeskir14}.
We remark that in~\cite{EngelbertPeskir14} the authors also show weak uniqueness of the appropriate SDE.
While we present the proof of existence below, we refer the reader to~\cite{EngelbertPeskir14} for the proof of uniqueness.
\begin{proof}
By the Tanaka formula we have
\begin{equation}\label{e:Tanaka1}
\bar B_t = \tilde{B}_t + L_t^{\bar B}(0) - L_t^{\bar B}(1) \,,
\end{equation}
where $\tilde B$ is a Brownian motion. Since $T_t$ is a continuous and increasing time change, $\tilde B_{T_t}$ is still a continuous martingale, $L^Y_t(0) = L^{\bar B}_{T_t}(0)$ and $L^Y_t(1) = L^{\bar B}_{T_t}(1)$.
Note first
\begin{equation}
\alpha\int_0^t \one_{\set{Y_s = 0}} \, ds = \alpha\int_0^t \one_{\set{\bar B_{T_s} = 0}} \, d\varphi(T_s) = \alpha\int_0^{T_t}\one_{\set{\bar B_s = 0}} \, d\varphi(s).
\end{equation}
Then, since $\set{t \st \bar B_t = 0}$ has Lebesgue measure zero and $L_t^{\bar B}(0)$ increases only on this set, we decompose $\alpha\varphi(s) = \alpha s + 2L^{\bar B}_s(0)$ to obtain
\begin{equation}
\alpha\int_0^{T_t}\one_{\set{\bar B_s = 0}} \, d\varphi(s) = 2\int_0^{T_t}\one_{\set{\bar B_s = 0}} dL^{\bar B}_s(0) = 2L^{\bar B}_{T_t}(0) = 2 L^Y_t(0) \,,
\end{equation}
which implies \eqref{e:localtime}. Notice that since $(2/\alpha)L^Y_t(0)$ is independent of $\bar W$, $X_t$ is a martingale with quadratic variation
\begin{equation}\label{e:qvX}
\qv{X}_t = \frac{2}{\alpha}L_t^Y(0) \,.
\end{equation}
In addition we have
\begin{equation*}
\qv{\tilde B_T}_t = T_t\,.
\end{equation*}
Thus, for the process~$B$ defined by
\begin{equation}\label{e:Bdef}
B_t \stackrel{\Delta}{=} \tilde B_{T_t} + \bar W_{\frac{2}{\alpha}L^Y_t(0)}\,,
\end{equation}
we have $\qv{B}_t = t$.
For the filtration, we let
\begin{equation*}
\mathcal G_t
= \sigma\paren[\Big]{ \mathcal N \cup \mathcal F^{\bar B}_{T_t} \cup \mathcal F^{X}_t }
\end{equation*}
where $\mathcal N$ denotes the collection of all $\mathcal F^{(\bar B, \bar W)}_\infty$-null sets. Since $\bar B$ and~$\bar W$ are independent, it is easy to see that for all $t \geq s \geq 0$, $X_t - X_s$ is independent of $\mathcal G_s$, and both $\tilde B_{T_t}$ and $X_t$ are $\mathcal G$-martingales.
Thus, $B$ is also a $\mathcal G$-martingale, and by L\'evy's criterion must be a Brownian motion.
Now~\eqref{e:sdeX}--\eqref{e:sdeY} follow from~\eqref{e:localtime}, \eqref{e:Bdef} and the fact that
\begin{equation*}
\int_0^t \one_{\set{Y_s = 0}} \, d\tilde B_{T_s} = 0 \quad \text{ and } \quad \int_0^t \one_{\set{Y_s \not= 0}} \, dX_s = 0\,.
\qedhere
\end{equation*}
\end{proof}
\subsection{Computing the Generator (Lemma~\ref{l:Zgen}).}\label{s:zgen}
We now compute the generator of~$Z$.
In the teeth (when $y > 0$) this is a standard calculation with It\^o's formula.
In the spine (when $y = 0$), however, one needs to estimate the time spent in the spine.
We state this precisely and carry out the details here.
\begin{lemma}\label{l:Zgen}
Let $\Omega_0 = \R \times [0, 1)$, and define the operator $A$ by
\begin{equation} \label{Adef}
A \stackrel{\Delta}{=} \frac{1}{2} \partial_y^2\,.
\end{equation}
Define the domain of $A$, denoted by $\mathcal{D}(A)$, to be the set of all functions $g \in C_0(\Omega_0) \cap C^2_b(\Omega_0)$ such that
\begin{equation}\label{e:DAflux}
\partial_y g(x,1) = 0 \,,
\qquad\text{and}\qquad
\partial_x^2 g(x,0) + \alpha \partial_y g(x,0)
= \partial_y^2 g(x,0)\,.
\end{equation}
The generator of the process~$Z$ (defined by~\eqref{e:limitdef1}--\eqref{e:limitdef2}) is the operator~$A$ with domain $\mathcal D(A)$.
\end{lemma}
\begin{proof}
Choose $g \in \mathcal{D}(A)$ and apply It\^o's formula to obtain
\begin{align*}g(X_t,Y_t) = g(X_0,Y_0) &+ \int_0^t \partial_xg(X_s,Y_s)dX_s + \int_0^t \partial_y g(X_s,Y_s) \, dY_s\\
&+ \frac{1}{\alpha}\int_0^t \partial^2_x g(X_s,Y_s) \, dL_s^Y(0) + \frac{1}{2}\int_0^t \partial_y^2 g(X_s, Y_s) \, dT_s \,.
\end{align*}
Taking expectations gives
\begin{multline}\label{e:tmpg2}
\E^{(x,y)}\brak[\Big]{g(X_t,Y_t) - g(x,y)} = \E^{(x,y)}\brak[\Big]{\int_0^t\partial_y g(X_s,Y_s) \, dY_s}\\
+\E^{(x,y)}\brak[\Big]{ \frac{1}{\alpha}\int_0^t\partial^2_x g(X_s,Y_s) \, dL_s^Y(0) + \frac{1}{2}\int_0^t \partial_y^2 g(X_s,Y_s) \, dT_s}
\,.
\end{multline}
Now for $y \in (0, 1)$ we know $Y$ is a Brownian motion before it first hits $0$ or $1$, and hence $\lim_{t\to 0}\P^y(L_t^Y(0) \neq 0 ) = 0$.
Moreover, by the definition of $T$, we have $T_t = t$ on the event $\set{L^Y_t(0) = 0}$.
Consequently
\begin{equation*}
\lim_{t\to 0}\E^{(x,y)} \brak[\Big]{\frac{g(X_t,Y_t) - g(x,y)}{t}} = \frac{1}{2}\partial_y^2g(x,y) \,.
\end{equation*}
For $y = 1$ we note
\begin{multline}\label{e:tmpg1}
\lim_{t\to 0}\E^{(x,1)} \brak[\Big]{\frac{g(X_t,Y_t) - g(x,1)}{t}}
\\
= \frac{1}{2}\partial_y^2g(x,1) + \lim_{t\to 0}\E^{(x,1)}\brak[\Big]{\frac{1}{t}\int_0^t \partial_y g(X_s,Y_s) \, dY_s } \,.
\end{multline}
By~\eqref{e:Tanaka1} we know $\E^{(x,1)} L_t^{Y}(1) = O(\sqrt{t})$, and hence the right hand side of~\eqref{e:tmpg1} is finite if and only if $\partial_y g(x,1) = 0$.
Finally, we compute the generator on the spine $y = 0$.
First we show that if $Y$ starts at $0$, then for short times it spends ``most'' of its time at $0$. More precisely, we claim
\begin{equation}
\lim_{t\to 0}\E^0\brak[\Big]{\frac{T_t}{t}} = 0\,. \label{e:T_estimate}
\end{equation}
Here we clarify that the~$0$ superscript on~$\E$ refers to the initial distribution of the process~$\bar B$, whereas the double superscript~$\E^{(x,y)}$, or the measure superscript~$\E^\mu$ used earlier, refers to the initial distribution of the joint process~$Z = (X, Y)$.
Let $M_t$ be the running maximum of $\tilde{B}$. Note that since $L^{\bar B} = L^{\tilde{B}}$ on $\{M_t < 1\}$, we have
\begin{multline*}
\P^0\paren[\Big]{L^{\bar B}_t(0) \leq r} \leq \P^0\paren[\Big]{L^{\tilde B}_t(0) \leq r} + \P^0\paren[\Big]{M_t > 1}\\
= 1 - 2 \P^0\paren[\Big]{r < \tilde B_t < 1} \leq \sqrt{\frac{2}{\pi}}\paren[\Big]{\frac{r}{\sqrt{t}} +\sqrt{t}e^{-\frac{1}{2t}}} \,.
\end{multline*}
Thus,
\begin{align*}
\E^0\brak[\Big]{\frac{T_t}{t}}
&= \int_0^1 \P^0 \paren[\Big]{T_t > st} \, ds
= \int_0^1 \P^0 \paren[\Big]{st + \frac{2}{\alpha} L^{\bar B}_{st}(0) \leq t} \, ds
\\
&= \int_0^1 \P^0 \paren[\Big]{L^{\bar B}_{st}(0) \leq \frac{\alpha(1-s)t}{2} } \, ds
\leq \int_0^1\sqrt{\frac{2}{\pi}}\paren[\Big]{\frac{\alpha(1-s)}{2\sqrt{s}}\sqrt{t} + \sqrt{st} \, e^{-\sfrac{1}{2st}}}\, ds
\\
&\leq C\sqrt{t} \,.
\end{align*}
With this estimate, we can now compute the generator on the spine.
Using equation \eqref{e:T_estimate} we see
\begin{equation}\label{e:tmpLtYbyt}
\E^{0}\left[\frac{L^Y_t(0)}{t}\right] = \E^{0}\left[\frac{L^{\bar B}_{T_t}(0)}{t}\right] = \frac{\alpha}{2}\E^{0}\left[\frac{t - T_t}{t}\right]\xrightarrow{t\rightarrow 0}\frac{\alpha}{2} \,.
\end{equation}
Using \eqref{e:Tanaka1} we have,
\begin{equation*}
\E^{0}\left[\frac{Y_t}{t}\right] = \E^{0}\left[\frac{\bar B_{T_t}}{t}\right] = \E^{0}\left[\frac{\tilde{B}_{T_t} + L^{\bar B}_{T_t}(0) - L^{\bar B}_{T_t}(1) }{t}\right] \,.
\end{equation*}
Since $T_t \leq t$, the third term tends to 0 and using the modulus of continuity for Brownian motion the first term does as well.
Therefore we also have
\begin{equation}\label{e:tmpYtByt}
\E^{0}\left[\frac{Y_t}{t}\right] \xrightarrow{t\rightarrow 0} \frac{\alpha}{2} \, .
\end{equation}
Thus using~\eqref{e:T_estimate}, \eqref{e:tmpLtYbyt} and~\eqref{e:tmpYtByt} in equation~\eqref{e:tmpg2} gives
\begin{equation*}
\lim_{t\to 0} \frac{1}{t} \E^{(x,0)}\brak[\Big]{g(X_t,Y_t) - g(x,0)}
= \frac{\alpha}{2} \partial_y g(x, 0) + \frac{1}{2} \partial_x^2 g(x, 0)\,,
\end{equation*}
which equals $\frac{1}{2} \partial_y^2 g(x, 0) = A g(x, 0)$ by the flux condition~\eqref{e:DAflux}, finishing the proof.
\end{proof}
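The final calculus step in the proof above, bounding an integral of the form $\int_0^1 \sqrt{2/\pi}\,( c_1(1-s)s^{-1/2}\sqrt t + \sqrt{st}\,e^{-1/(2st)})\,ds$ by $C\sqrt t$, can be sanity-checked by quadrature. The sketch below is ours; the prefactor of the first term is left as a free constant `c1`, since only the $\sqrt t$ scaling matters for the estimate.

```python
import math

def bound_integral(t, c1=1.0, m=100000):
    """Midpoint quadrature over s in (0, 1) of
    sqrt(2/pi) * ( c1 (1-s)/sqrt(s) * sqrt(t) + sqrt(s t) exp(-1/(2 s t)) ),
    the integrand shape appearing in the proof of the T_t estimate.
    The 1/sqrt(s) singularity is integrable, so midpoint sampling suffices."""
    c = math.sqrt(2.0 / math.pi)
    total = 0.0
    for k in range(m):
        s = (k + 0.5) / m            # midpoint of the k-th subinterval
        total += c * (c1 * (1.0 - s) / math.sqrt(s) * math.sqrt(t)
                      + math.sqrt(s * t) * math.exp(-1.0 / (2.0 * s * t)))
    return total / m
```

Since $\int_0^1 (1-s)s^{-1/2}\,ds = 4/3$, the ratio `bound_integral(t) / sqrt(t)` tends to $\sqrt{2/\pi}\,(4/3)\,c_1 \approx 1.064\,c_1$ as $t \to 0$ (the exponential term is negligible there), confirming the $C\sqrt t$ behavior.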
\subsection{PDE Homogenization (Corollaries~\ref{c:var}, \ref{c:pde}, and Proposition~\ref{p:ftime}).}
Once the generator of~$Z$ is known, the behavior of the variance (Corollary~\ref{c:var}) and PDE homogenization result (Corollary~\ref{c:pde}) can be deduced quickly.
\begin{proof}[Proof of Corollary~\ref{c:var}]
We first assume~$h_0 = 1$ as in Remark~\ref{r:h0eq1}.
Using Theorem~\ref{t:zlimfat} and~\eqref{e:qvX} we see
\begin{equation}\label{e:varX}
\lim_{\epsilon \to 0}
\E^{(x,0)} \abs{X^\epsilon_t - x}^2
= \E^{(x,0)} \abs{X_t - x}^2
= \frac{2}{\alpha} \E^{0} L^Y_t(0)\,.
\end{equation}
Now equation~\eqref{e:varShort} follows from~\eqref{e:tmpLtYbyt}.
For the long time limit (when~$h_0 = 1$) we note that, by ergodicity of~$\bar B$, we have $\E^0 \abs{ L_t^{\bar B}(0) / t - 1/2} \to 0$ as $t \to \infty$.
Thus using~\eqref{e:Tdef} we must have
\begin{equation*}
\lim_{t \to \infty} \E^0 \abs[\Big]{\frac{T(t)}{t} - \frac{\alpha}{\alpha + 1} } = 0 \,.
\end{equation*}
Consequently,
\begin{equation*}
\E^{0}\paren[\Big]{\frac{L^Y_t(0)}{t}}
= \E^{0}\paren[\Big]{\frac{L^{\bar B}_{T_t}(0)}{t}}
= \frac{\alpha}{2}\E^{0}\paren[\Big]{\frac{t - T_t}{t}}
\xrightarrow{t\rightarrow \infty}\frac{\alpha}{2(\alpha + 1)} \,,
\end{equation*}
and together with~\eqref{e:varX} this implies~\eqref{e:varLong}.
This finishes the proof of~\eqref{e:varShort} and~\eqref{e:varLong} when $h_0 = 1$.
The case for arbitrary finite $h_0$ is similar.
When $h_0 = \infty$, the process $Y$ is a sticky Brownian motion on the half line, and the distribution of $L^Y_t(0)$ can be computed explicitly.
Namely (see for instance~\cite{Howitt07}) we have
\begin{align}
\label{e:sbmOT}
\frac{2}{\alpha} L^Y_t(0)
= \int_0^t \one_{\set{Y_s = 0}} \, ds
&\sim \frac{2\abs{N}}{\alpha} \paren[\Big]{ t + \frac{N^2}{\alpha^2}}^{1/2} - \frac{2 N^2}{\alpha^2}\,,
\end{align}
where~$N$ is a standard normal random variable.
Taking expectations and using~\eqref{e:varX} immediately yields~\eqref{e:varShort} and~\eqref{e:varLongInf}, finishing the proof.
\end{proof}
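The two asymptotic regimes in this corollary can be sanity-checked directly from the occupation-time law quoted above for the half-line case $h_0 = \infty$, by Monte Carlo over the single normal variable $N$. The sketch below is ours (names illustrative); it only exercises the quoted law, not the full comb process.

```python
import math
import random

def occupation_mean(alpha, t, n_samples=200000, seed=0):
    """Monte Carlo estimate of
        E[ (2|N|/alpha) sqrt(t + N^2/alpha^2) - 2 N^2/alpha^2 ],
    for N standard normal -- the occupation-time law quoted in the proof
    for sticky Brownian motion on the half line (h0 = infinity)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        nv = rng.gauss(0.0, 1.0)
        acc += (2.0 * abs(nv) / alpha) * math.sqrt(t + nv * nv / alpha ** 2) \
               - 2.0 * nv * nv / alpha ** 2
    return acc / n_samples
```

Expanding the law for small $t$ gives a mean close to $t$ (the short-time regime), while for large $t$ the leading term is $(2|N|/\alpha)\sqrt t$, whose mean is $(2/\alpha)\sqrt{2t/\pi}$ (the long-time regime); both are visible numerically with modest sample sizes.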
\begin{proof}[Proof of Corollary~\ref{c:pde}]
By the Kolmogorov backward equation~\cite[\S5.6]{Friedman75} we know that the function~$u^\epsilon$ (defined by~\eqref{e:heat1}--\eqref{e:heat2}) satisfies
\begin{equation*}
u^\epsilon(z, t) = \E^z u_0(Z^\epsilon_t)\,.
\end{equation*}
Consequently
\begin{equation*}
\int_{\Omega_\epsilon} u^\epsilon(z, t) \, d\mu^\epsilon(z)
= \E^{\mu^\epsilon} u_0( Z^\epsilon_t )
\xrightarrow{\epsilon \to 0}
\E^{\mu} u_0( Z_t )\,,
\end{equation*}
by Theorem~\ref{t:zlimfat}.
Thus, if we set
\begin{equation}\label{e:udef}
u(z, t) = \E^z u_0(Z_t)\,,
\end{equation}
we see that~\eqref{e:uepConv} holds.
It only remains to verify that~$u$ satisfies~\eqref{e:rho1}--\eqref{e:rho4}.
To see this, recall that the function~$u$ defined by~\eqref{e:udef} belongs to $C(0, \infty; \mathcal D(A))$ and satisfies the Kolmogorov equations
\begin{alignat*}{2}
\span
\partial_t u - A u = 0
&\qquad& t > 0\,,
\\
\span
u(\cdot, t) = u_0
&& \text{when } t = 0\,.
\end{alignat*}
The first equation above implies~\eqref{e:rho1} by definition of~$A$ (equation~\eqref{Adef}).
Equations~\eqref{e:rho2} and~\eqref{e:rho3} follow from the fact that $u(\cdot, t) \in \mathcal D(A)$ for all $t > 0$, and equation~\eqref{e:rho4} follows from the second equation above.
\end{proof}
We now obtain evolution equations for the slice of $u$ at $y = 0$, as stated in Proposition~\ref{p:ftime}.
\begin{proof}[Proof of Proposition~\ref{p:ftime}]
Let $u_1 = u - g$, and observe that $u_1$ satisfies~\eqref{e:rho1} with initial data $u_1(x, y, 0) = u_0(x, 0) = v_0(x)$, and boundary conditions
\begin{equation}\label{e:u1bc}
u_1(x, 0, t) = u(x, 0, t) = v(x, t)
\qquad\text{and}\qquad
\partial_y u_1(x, 1, t) = 0\,.
\end{equation}
(Recall that in Remark~\ref{r:h0eq1} we have already set $h_0 = 1$ for simplicity.)
We now treat~$x$ as a parameter, and solve~\eqref{e:rho1} using separation of variables (in $y$, $t$) with boundary conditions~\eqref{e:u1bc}.
A direct calculation shows
\begin{equation}\label{e:pyu1}
\partial_y u_1(x, 0, t) = - \partial_t^w v(x, t)\,,
\end{equation}
and hence
\begin{equation}\label{e:pyu}
\partial_y u(x, 0, t) = -\partial_t^w v(x, t) + \partial_y g(x, 0, t)\,.
\end{equation}
Now for $t > 0$, using equations~\eqref{e:rho1} and~\eqref{e:rho2} and continuity of the second derivatives of $u$ up to $y = 0$, we see
\begin{equation}\label{e:ev2}
\partial_t v(x, t)
= \frac{\alpha}{2} \partial_y u(x, 0, t) + \frac{1}{2} \partial_x^2 v(x, t)\,.
\end{equation}
Using~\eqref{e:pyu} and~\eqref{e:ev2} yields~\eqref{e:ev} as claimed.
\end{proof}
\begin{remark}
For brevity, we have suppressed the explicit separation of variables calculation deriving~\eqref{e:pyu1}.
One can avoid this calculation by using the Laplace transform as follows.
Following standard convention, we will denote the Laplace transform of a function by the corresponding upper case letter, written in the variable $s$ instead of $t$.
Explicitly, given a function $f$, we define its Laplace transform, denoted by $F$ or $\mathcal Lf$, by
\begin{equation*}
F(s) \stackrel{\Delta}{=} \mathcal Lf(s) = \int_0^\infty e^{-s t} f(t) \, dt\,.
\end{equation*}
For functions that depend on both space and time variables, the Laplace transform will only be with respect to the time variable.
Taking the Laplace transform of~$u_1$ yields the ODE in the variable $y$
\begin{equation*}
s U_1 - v_0 - \frac{1}{2} \partial_y^2 U_1 = 0\,,
\end{equation*}
with boundary conditions $U_1(x, 0, s) = V(x, s)$, and $\partial_y U_1(x, 1, s) = 0$.
Solving this ODE yields
\begin{equation*}
U_1(x, y, s)
= \frac{v_0}{s} +
\paren[\Big]{\frac{1}{1 + e^{2 \sqrt{2s}} }}
\paren[\Big]{V - \frac{v_0}{s}}
\brak[\Big]{ e^{y \sqrt{2s} } + e^{\sqrt{2s}(2 - y)} }\,,
\end{equation*}
and hence
\begin{equation*}
\partial_y U_1 (x, 0, s )
= -\sqrt{2s}
\paren[\Big]{V - \frac{v_0}{s}}
\tanh \sqrt{2s}
= -\frac{2 \tanh \sqrt{2s} }{\sqrt{2s}} \paren[\Big]{ s V - v_0} \,.
\end{equation*}
Choosing~$w$ to be a function with Laplace transform~\eqref{e:LW}, implies~\eqref{e:pyu1} as claimed.
\end{remark}
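The closed-form Laplace-domain solution and the boundary-derivative identity in this remark can be verified by finite differences. The following sketch is ours (names and the particular choice of $s$, $V$, $v_0$ are arbitrary): it checks the ODE residual at interior points, the Neumann condition at $y = 1$, the Dirichlet value at $y = 0$, and the $\tanh$ formula for $\partial_y U_1(x, 0, s)$.

```python
import math

def U1(y, s, V, v0):
    """Closed-form Laplace-transform solution from the remark:
    U1 = v0/s + A (e^{y r} + e^{r(2-y)}) with r = sqrt(2s) and
    A = (V - v0/s) / (1 + e^{2r})."""
    r = math.sqrt(2.0 * s)
    A = (V - v0 / s) / (1.0 + math.exp(2.0 * r))
    return v0 / s + A * (math.exp(y * r) + math.exp(r * (2.0 - y)))

def check(s=0.7, V=1.3, v0=0.4, h=1e-5):
    # Dirichlet value at y = 0: U1(0) should equal V
    assert abs(U1(0.0, s, V, v0) - V) < 1e-12
    # ODE residual  s U1 - v0 - (1/2) U1''  at interior points (central diff.)
    for y in (0.25, 0.5, 0.75):
        upp = (U1(y + h, s, V, v0) - 2.0 * U1(y, s, V, v0)
               + U1(y - h, s, V, v0)) / h ** 2
        assert abs(s * U1(y, s, V, v0) - v0 - 0.5 * upp) < 1e-4
    # Neumann condition at y = 1 (one-sided difference)
    up1 = (U1(1.0, s, V, v0) - U1(1.0 - h, s, V, v0)) / h
    assert abs(up1) < 1e-3
    # boundary derivative at y = 0 matches -sqrt(2s) (V - v0/s) tanh(sqrt(2s))
    up0 = (U1(h, s, V, v0) - U1(0.0, s, V, v0)) / h
    r = math.sqrt(2.0 * s)
    assert abs(up0 - (-r * (V - v0 / s) * math.tanh(r))) < 1e-3
    return True
```

This is a pointwise consistency check of the algebra in the remark, not a substitute for the derivation; any $s > 0$ and boundary data could be used in place of the sample values.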
\begin{comment}
The direct calculation using the Laplace transform above obtains an expression for the Laplace transform of $\partial_y U_1$ at $y = 0$.
To express $\partial_y u_1$ at $y = 0$ as the time convolution operator $\partial_t w$, one needs to prove the existence of a non-negative function~$w$ whose Laplace transform is given by the identity~\eqref{e:LW}.
The standard way to do this is to use Bernstein's theorem~\cite[\S XIII.4]{Feller71} and check that~$\mathcal Lw$ is a completely monotone function.
Unfortunately, in our case, this condition is not easy to check.
Instead, we use the heat kernel on the interval $(0, 1)$ to obtain an explicit formula for the function~$w$ directly.
Indeed, a direct calculation shows that
\begin{equation*}
w(t) = K'(0, 0, t)\,,
\end{equation*}
where $K'$ is the heat kernel on $(0, 1)$ with Neumann boundary conditions at $y = 0$ and Dirichlet boundary conditions at $y = 1$.
\end{comment}
\section{Comb-Shaped Domains (Theorem \ref{t:zlimfat}).}\label{s:fatcomb}
We now turn to the proof of Theorem~\ref{t:zlimfat}. Recall that $Z_t^{\epsilon,+} = \pi_\epsilon(Z_t^\epsilon) = (X_t^\epsilon, \max (Y_t^\epsilon,0))$. The main ingredients in the proof are the following lemmas.
\begin{lemma}\label{l:FCtightness}
Let $Z^\epsilon = (X^\epsilon, Y^\epsilon)$ be the reflected Brownian motion on the comb-shaped domain~$\Omega_\epsilon$, as described in Theorem~\ref{t:zlimfat}.
Then, for any $T > 0$, the family of processes $Z^\epsilon$ is tight in $C([0,T]; \R^2)$.
\end{lemma}
\begin{lemma}\label{l:MPuniqness}
Let $A$ be the generator defined in~\eqref{Adef}, with domain $\mathcal D(A)$. Weak uniqueness holds for the martingale problem for $A$.
\end{lemma}
\begin{lemma}\label{l:FCgenerator}
If $f \in \mathcal D(A)$, and $K \subset \Omega_0$ is compact, then
\begin{equation}\label{e:fconv}
\lim_{\epsilon \to 0} \sup_{z \in K \cap \Omega_\epsilon} \E^z\paren[\Big]{
f(Z^{\epsilon,+}_t)
- f(Z^{\epsilon,+}_0)
- \int_0^t Af(Z^{\epsilon,+}_s) \, ds
} = 0\,.
\end{equation}%
\end{lemma}
Momentarily postponing the proof of these lemmas, we prove Theorem~\ref{t:zlimfat}.
\begin{proof}[Proof of Theorem~\ref{t:zlimfat}]
Suppose first $Z^{\epsilon,+} \to Z'$ weakly along some subsequence.
We claim $Z'$ should be a solution of the martingale problem for $A$ with initial distribution~$\mu$.
To see this set
\begin{equation*}
M^\epsilon_t = f( Z^{\epsilon,+}_t) - f( Z^{\epsilon,+}_0 )
- \int_0^t A f( Z^{\epsilon,+}_r ) \, dr
\end{equation*}
and observe
\begin{equation*}
\E^{\mu^\epsilon} \paren[\big]{
M^\epsilon_t
\given \mathcal F_s
}
= M^\epsilon_s
+ \E^{Z^\epsilon_s} \paren{M^\epsilon_{t-s}}\,,
\end{equation*}
by the Markov property.
Using Lemma~\ref{l:FCgenerator}, and taking limits along this subsequence, the last term on the right vanishes.
Since this holds for all $f \in \mathcal D(A)$ and $\mathcal D(A)$ is dense in $C_0(\Omega_0)$, $Z'$ must be a solution of the martingale problem for~$A$.
Since $Z^{\epsilon,+} \to Z'$ weakly and $\pi_\epsilon^*(\mu^\epsilon) \to \mu$ weakly by assumption, we have $Z'(0) \sim \mu$.
By uniqueness of solutions to the martingale problem for~$A$ (Lemma~\ref{l:MPuniqness}), the above argument shows uniqueness of subsequential limits of~$Z^{\epsilon,+}$.
Combined with tightness (Lemma~\ref{l:FCtightness}), and the fact that $Z$ is a solution to the martingale problem for~$A$ (Lemma~\ref{l:Zgen}), this gives weak convergence as desired.
\end{proof}
It remains to prove Lemmas~\ref{l:FCtightness}--\ref{l:FCgenerator}.
We do this in Sections~\ref{s:FCtightness}, \ref{s:MPuniqness} and~\ref{s:FCgenerator}, below.
\subsection{Proof of Tightness (Lemma~\ref{l:FCtightness}).}\label{s:FCtightness}
To prove tightness, we need an auxiliary lemma comparing the oscillation of trajectories in the spine to that of Brownian motion.
This will also be used in the proof of Lemma~\ref{l:FCgenerator}.
\begin{lemma}\label{lem:BrownianComp}
Let $W'$ be a standard Brownian motion on $\R$ with $W'(0) = 0$. For any $T > 0$, $\epsilon \in (0,1/2]$, $z \in \Omega_\epsilon$, and any $a, \delta > 0$, we have
\begin{equation} \label{XWcomp}
\P^z \paren[\Big]{ \sup_{ \substack{r,t \in [0,T] \\ |t - r| \leq \delta}} |X^\epsilon(t) - X^\epsilon(r)| \geq a } \leq \P \paren[\Big]{ \sup_{ \substack{r,t \in [0,T] \\ |t - r| \leq \delta}}4 |W'(t) - W'(r)| \geq a - 2\epsilon }\,.
\end{equation}
\end{lemma}
\begin{proof}
Let
\begin{equation*}
\tau_0 = \inf \set[\big]{ t \geq 0 \st X^\epsilon(t) \in \epsilon \paren[\big]{\Z + \frac{1}{2}}}\,,
\end{equation*}
and inductively define
\begin{equation*}
\tau_{k+1} = \inf \set[\big]{ t \geq \tau_k \st \abs{X^\epsilon(t) - X^\epsilon(\tau_k)} = \epsilon }\,,
\end{equation*}
for $k \geq 0$.
By symmetry of the domain, observe that $k \mapsto X^\epsilon(\tau_k)$ defines a simple random walk on the discrete points $\epsilon(\Z + \sfrac{1}{2})$. Next, define
\[
\tau_k' = \inf \set { t \geq \tau_k \st |X^\epsilon(t) - X^\epsilon(\tau_k)| = \epsilon/4 }, \quad k \geq 0.
\]
In particular, $\tau_k < \tau_k' < \tau_{k+1}$. At time $\tau_k$, $X^\epsilon(\tau_k)$ is in the spine, at the midpoint between two adjacent teeth. For $t \in [\tau_k, \tau_k']$, $X^\epsilon(t)$ is in the spine and cannot enter the teeth, because $|X^\epsilon(t) - x| \leq \epsilon/4$ where $x = X^\epsilon(\tau_k) \in \epsilon ( \Z + \frac{1}{2})$.
Define the increments $\Delta X^\epsilon_k = X^\epsilon(\tau_{k+1}) - X^\epsilon(\tau_k) \in \{ - \epsilon, + \epsilon \}$.
By the strong Markov property and symmetry of the domain, the random variables $\{ (\tau_k' - \tau_k) \}_k \cup \{ \Delta X^\epsilon_k \}_k$ are independent.
Now, suppose that $W'(t)$ is an independent Brownian motion on $\R$, with $W'(0) = 0$. Define another set of stopping times inductively by $\sigma_0 = 0$ and
\begin{align*}
\sigma_{k+1} & = \inf \set { t \geq \sigma_k \st |W'(t) - W'(\sigma_k)| = \epsilon/4 }, \quad k \geq 0.
\end{align*}
Let $\Delta \sigma_k = \sigma_{k+1} - \sigma_k$, and $\Delta W'_k = W'(\sigma_{k+1}) - W'(\sigma_k) \in \{ -\epsilon/4, \epsilon/4\}$. Observe that the family of random variables
\[
\{(\sigma_{k+1} - \sigma_k) , 4 \Delta W'_k \}_{k \geq 0}
\]
has the same law as the family
\[
\{ (\tau_k' - \tau_k) , \Delta X^\epsilon_k \}_{ k \geq 0}.
\]
Next, define
\[
K(t) = \max \set{ k \geq 0 \st \tau_k \leq t},
\]
and observe that if $|t - r| \leq \delta$ and $0 \leq r \leq t \leq T$, then we must have $\tau_{K(t)} - \tau_{K(r) + 1} \leq \delta$ and thus
\[
\sum_{j = K(r) + 1}^{K(t) - 1} (\tau_j' - \tau_j) \leq \delta, \quad \quad \text{and} \quad \quad \sum_{j = 0}^{K(t) - 1} (\tau_j' - \tau_j) \leq T.
\]
In this case,
\begin{align*}
\MoveEqLeft[4] \abs{X^\epsilon(t) - X^\epsilon(r)} \leq 2 \epsilon + \abs{X^\epsilon(\tau_{K(t)}) - X^\epsilon(\tau_{K(r)+ 1})} \nonumber \\
& = 2 \epsilon + \abs[\Big]{ \sum_{j= K(r) + 1}^{K(t) - 1} \Delta X^\epsilon_j} \\
& \leq 2 \epsilon + \sup_{0 \leq \ell \leq m} \abs[\Big]{ \sum_{j= \ell+1}^{m- 1} \Delta X^\epsilon_j} \one_{\set[\big]{ \sum_{j = \ell+1}^{m-1} (\tau_j' - \tau_j) \leq \delta }} \one_{\set[\big]{ \sum_{j = 0}^{m-1} (\tau_j' - \tau_j) \leq T }} \,.
\end{align*}
This last supremum has the same law as
\begin{multline*}
\sup_{0 \leq \ell \leq m} \abs[\Big]{ \sum_{j= \ell+1}^{m-1} 4 \Delta W'_j}
\one_{\set[\big]{ \sum_{j = \ell+1}^{m-1} (\sigma_{j+1} - \sigma_j) \leq \delta }} \one_{\set[\big]{\sum_{j = 0}^{m-1} (\sigma_{j+1} - \sigma_j) \leq T }}
\\
= \sup_{0 \leq \ell \leq m} 4 \abs{ W'(\sigma_m) - W'(\sigma_{\ell+1})} \, \one_{\set{ \sigma_{m} - \sigma_{\ell+1} \leq \delta }} \, \one_{ \set{ \sigma_m - \sigma_0 \leq T }} \,.
\end{multline*}
Since the right hand side of the above is bounded by
\begin{equation*}
\sup_{ \substack{r,t \in [0,T] \\ \abs{t - r} \leq \delta}}4 \abs{W'(t) - W'(r)} \,,
\end{equation*}
we obtain \eqref{XWcomp}.
\end{proof}
We now prove Lemma~\ref{l:FCtightness}.
\begin{proof}[Proof of Lemma~\ref{l:FCtightness}]
Note first that Lemma~\ref{lem:BrownianComp} immediately implies that the processes~$X^\epsilon$ are tight.
Indeed, by~\eqref{XWcomp} we see
\begin{equation}
\lim_{\delta \to 0} \limsup_{\epsilon \to 0} \P^{\mu^\epsilon} \paren[\Big]{ \sup_{ \substack{r,t \in [0,T] \\ |t - r| \leq \delta}} |X^\epsilon(t) - X^\epsilon(r)| \geq a } = 0 \label{tightX1}\,.
\end{equation}
Moreover, since $\mu^\epsilon$ converge weakly to the probability measure~$\mu$, the distributions of $X^\epsilon_0$ are tight.
This implies tightness of the processes $X^\epsilon$.
For tightness of~$Y^\epsilon$, we note as above that the distributions of $Y^\epsilon_0$ are already tight.
In order to control the time oscillations, fix $T > 0$, and let
\begin{equation*}
d Z^\epsilon_t = d B_t + d L^{\partial \Omega_\epsilon}_t\,,
\end{equation*}
be the semi-martingale decomposition of~$Z^\epsilon$ (see for instance~\cite{StroockVaradhan71}).
Here $B = (B_1, B_2)$ is a standard Brownian motion and $L^{\partial \Omega_\epsilon}$ is the local time of $Z^\epsilon$ on $\partial \Omega_\epsilon$.
Let $\omega(\delta) = \omega_T(\delta)$, defined by
\begin{equation*}
\omega(\delta) = \sup_{ \substack{s,t \in [0,T] \\ |t - s| \leq \delta}} |B_2(t) - B_2(s)| \,,
\end{equation*}
be the modulus of continuity for $B_2$ over $[0, T]$.
Let $[s,t] \subset [0,T]$ with $|t - s| \leq \delta$.
If $0 < Y^\epsilon_r < 1$ for all $r \in (s,t)$, then we must have
\[
|Y^\epsilon(t) - Y^\epsilon(s)| = |B_2(t) - B_2(s)| \leq \omega(\delta) \,.
\]
Otherwise, for some $r \in (s,t)$ either $Y^\epsilon_r = 0$ or $Y^\epsilon_r = 1$.
Let $G_\delta$ be the event that $\omega(\delta) < 1/2$; on this event $Y^\epsilon$ cannot hit both $0$ and $1$ on the interval $[s,t]$.
Define
\begin{equation*}
\eta_- = \inf \set{ r > s \st Y^\epsilon_r \in \{0,1\} }\,,
\quad\text{and}\quad
\eta_+ = \sup \set{ r < t \st Y^\epsilon_r \in \{0,1\} }\,.
\end{equation*}
In this case we have
\begin{align*}
|Y^\epsilon_t - Y^\epsilon_s| & \leq \max \paren{ |Y^\epsilon(\eta_-) - Y^\epsilon(s)|\;, \; |Y^\epsilon(t) - Y^\epsilon(\eta_+)| } + \one_{G_\delta^c} + \epsilon^2 \\
& = \max \paren{ |B({\eta_-}) - B(s)|\;, \; |B(t) - B(\eta_+)| } + \one_{G_\delta^c} + \epsilon^2 \leq \omega(\delta) + \one_{G_\delta^c} + \epsilon^2.
\end{align*}
Combining the two cases, we see that for any $z \in \Omega_\epsilon$,
\begin{equation*}
\P^z \paren[\Big]{\sup_{ \substack{s,t \in [0,T] \\ |t - s| \leq \delta}} |Y^\epsilon(t) - Y^\epsilon(s)| > a } \leq \P(\omega(\delta) > a-\epsilon^2) + \P(G^c_\delta).
\end{equation*}
Since the right hand side is independent of $z$, integrating over $z$ with respect to $\mu^\epsilon$ implies
\begin{equation*}
\lim_{\delta \to 0} \limsup_{\epsilon \to 0} \P^{\mu^\epsilon} \paren[\Big]{\sup_{ \substack{s,t \in [0,T] \\ |t - s| \leq \delta}} |Y^\epsilon(t) - Y^\epsilon(s)| > a } = 0
\end{equation*}
holds for any $a > 0$.
This shows tightness of $Y^\epsilon$ in $C([0,T])$, finishing the proof of Lemma~\ref{l:FCtightness}.
\end{proof}
\subsection{Uniqueness for the Martingale Problem (Lemma~\ref{l:MPuniqness})}\label{s:MPuniqness}
The proof of Lemma~\ref{l:MPuniqness} relies on the existence of regular solutions to the corresponding parabolic equation.
We state this result next.
\begin{lemma}\label{l:PDEexistence}
For all $f \in \mathcal D(A)$, there exists a solution to
\begin{equation}\label{e:dtuEqAu}
\partial_t u - A u = 0\,,\qquad
u(\cdot, 0) = f\,,\qquad
\text{with }
u(\cdot, t) \in \mathcal D(A)\,.
\end{equation}
\end{lemma}
Given Lemma~\ref{l:PDEexistence}, the proof of Lemma~\ref{l:MPuniqness} is standard (see for instance~\cite{RogersWilliams00a,EthierKurtz86}).
For the reader's convenience, we describe it briefly here.
\begin{proof}[Proof of Lemma~\ref{l:MPuniqness}]
Suppose $Z, Z'$ are two processes satisfying the martingale problem for $A$.
Let $f \in \mathcal D(A)$ be any test function, and $u$ be the solution in $\mathcal D(A)$ of $\partial_t u - Au = 0$ with initial data $f$.
Then for any $z \in \Omega_0$, and fixed $T > 0$, the processes $u(Z_t, T-t)$ and $u(Z'_t, T-t)$ are both martingales under the measure $\P^z$.
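To see the martingale property, note that (formally, assuming the martingale problem extends to time-dependent test functions, as in~\cite{EthierKurtz86}) the process
\begin{equation*}
M_t \stackrel{\Delta}{=} u(Z_t, T-t) - u(Z_0, T) - \int_0^t \paren[\big]{ A u - \partial_t u } (Z_r, T-r) \, dr
\end{equation*}
is a martingale, and the time integral vanishes identically since $\partial_t u = A u$ by~\eqref{e:dtuEqAu}.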
Hence
\begin{align*}
\E^\mu f(Z_T)
&= \int_{\Omega_0}\E^z f(Z_T) \, \mu(dz)
= \int_{\Omega_0}\E^z u(Z_t, T-t) \, \mu(dz)
= \int_{\Omega_0} u(z, T) \, \mu(dz)
\\
&= \int_{\Omega_0}\E^z u(Z'_t, T-t) \, \mu(dz)
= \int_{\Omega_0}\E^z f(Z'_T) \, \mu(dz)
= \E^\mu f(Z'_T)\,.
\end{align*}
Since $\mathcal D(A)$ is dense in $C_0(\Omega_0)$ this implies $Z$ and $Z'$ have the same one dimensional distributions.
By the Markov property, this in turn implies that the laws of $Z$ and $Z'$ are the same.
\end{proof}
It remains to prove Lemma~\ref{l:PDEexistence}.
\begin{proof}[Proof of Lemma~\ref{l:PDEexistence}]
Let $v(x, t) = u(x, 0, t)$.
Since~\eqref{e:dtuEqAu} is equivalent to~\eqref{e:rho1}--\eqref{e:rho3}, Proposition~\ref{p:ftime}%
\footnote{
We remark that the proof of Proposition~\ref{p:ftime} is self-contained, and does not rely on Theorem~\ref{t:zlimfat}.
Thus its use here is valid and does not lead to a circular argument.%
}
implies that~$v$ satisfies the Basset type equation~\eqref{e:ev}.
\begin{comment}
we must have $\partial_t u = \frac{1}{2} \partial_y^2 u$ for $y \in (0, 1)$.
Consequently we have the identity
\begin{equation}\label{e:HLheatIdentities}
\partial_y u(x, 0, t)
= - \int_0^t K'_{t -s}( 0, 0) \partial_t u(x, 0, s) \, ds
+ \int_0^1 \partial_y K_{t}(0, z) f(x, z) \, dz\,.
\end{equation}
Here $K'$ is the heat kernel on $(0, 1)$ with Neumann boundary conditions at $y = 0$ and Dirichlet boundary conditions at $y = 1$, and $K$ is the heat kernel on $(0, 1)$ with Neumann boundary conditions at both $y = 0$ and $y = 1$.
Thus if we set $w(t) = K'_t(0, 0)$ and $v(x, t) = u(x, 0, t)$, then the flux condition~\eqref{e:DAflux} guarantees
\begin{equation}\label{e:tfracV}
\partial_t v + \frac{\alpha}{2} \partial_t^w v - \frac{1}{2} \partial_x^2 v
= g
\end{equation}
where
\begin{equation*}
g(x, t) \stackrel{\Delta}{=} \alpha\int_0^1 \partial_y K_{t}(0, z) f(x, z) \, dz \,,
\quad\text{and}\quad
\partial_t^w v(x, t) \stackrel{\Delta}{=} \int_0^t w(t-s) \partial_t v(x, s) \, ds\,.
\end{equation*}
The operator $\partial_t^w$ above is a \emph{generalized Caputo derivative}.
\end{comment}
For the homogeneous equation associated with~\eqref{e:ev}, existence and uniqueness is proved in~\cite{Chen17}.
The inhomogeneous equation can be solved using an analog of Duhamel's principle~\cites{Umarov12,UmarovSaydamatov06}.
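For orientation, we recall the classical Duhamel principle that this construction mimics: to solve $\partial_t v = \mathcal L v + g$ with $v(\cdot, 0) = 0$ for a time-independent operator $\mathcal L$, one sets
\begin{equation*}
v(t) = \int_0^t \tilde v_s(t) \, ds\,,
\qquad\text{where}\quad
\partial_t \tilde v_s = \mathcal L \tilde v_s \text{ for } t > s\,,
\quad\text{and}\quad
\tilde v_s(s) = g(s)\,,
\end{equation*}
and checks directly that $\partial_t v(t) = \tilde v_t(t) + \int_0^t \mathcal L \tilde v_s(t) \, ds = g(t) + \mathcal L v(t)$.
The memory term $\partial_t^w$ in~\eqref{e:ev} produces additional cross terms when differentiating under the integral, and the initial data~\eqref{e:tvsId} below is chosen precisely to compensate for them.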
Explicitly, for $s \geq 0$, let $\tilde v_s$ be a solution to the equation
\begin{subequations}
\begin{align}
\label{e:tvsEvol}
\span
\partial_t \tilde v_s(x,t) + \frac{\alpha}{2} \partial_t^w \tilde v_s(x,t)
- \frac{1}{2}\partial_x^2 \tilde v_s(x,t) = 0 \,,
&& \text{for } t > s\,,
\\
\label{e:tvsId}
\span
\tilde v_s(x,s) = \paren[\Big]{I + \frac{\alpha}{2} \mathcal I^{w}_s}^{-1} \frac{\alpha f(x,\cdot)}{2} \,.
\end{align}
\end{subequations}
Here $\mathcal I^w_\cdot$ is the integral operator with kernel $w$ defined by
\begin{equation*}
\mathcal I^w_t h = \int_0^t w( t - s ) h(s) \, ds\,,
\end{equation*}
for any function~$h \colon (0, \infty) \to \R$.
Since $\mathcal I^w_s$ is a compact Volterra operator, its spectrum is $\set{0}$, and hence the operator $I + (\alpha/2) \mathcal I^w_s$ is invertible, ensuring the initial condition~\eqref{e:tvsId} can be satisfied.
For convenience, define $\tilde v_s( x, r ) = \tilde v_s(x, s )$ when $r < s$.
Now, one can directly check that the function $v$ defined by
\begin{equation*}
v(x, t) \stackrel{\Delta}{=} \int_0^t \tilde v_s( x, t ) \, ds \,,
\end{equation*}
is a strong solution to the inhomogeneous equation~\eqref{e:ev}.
Since $u$ satisfies the heat equation for $y \in (0, 1)$ we can write $u$ in terms of $v$ and $f$ using the heat kernel.
Explicitly, we have
\begin{equation*}
u(x, y, t)
= \frac{\alpha}{2} \int_0^1 K_t''(y, z) f(x, z) \, dz
+ \kappa \int_0^t \partial_z K_{t-s}''(y, 0) v(x, s) \, ds\,,
\end{equation*}
where $K''$ is the heat kernel on $(0, 1)$ with Dirichlet boundary conditions at $y = 0$ and Neumann boundary conditions at $y = 1$.
Since $v$ is $C^{2,1}$ this immediately implies $u \in C^{2, 1}$.
Thus to show $u(\cdot, t) \in \mathcal D(A)$ we only need to verify the flux condition~\eqref{e:DAflux}.
This, however, follows immediately from the fact that $\partial_y^2 u(x, 0, t) = 2 \partial_t u(x, 0, t) = 2 \partial_t v(x, t)$ and equation~\eqref{e:ev2}.
\end{proof}
\subsection{Generator Estimate (Lemma~\ref{l:FCgenerator}).}\label{s:FCgenerator}
The main idea behind the proof of Lemma~\ref{l:FCgenerator} is to balance the local time $Z^\epsilon$ spends at the ``gate'' between the spine and the teeth against the time it spends in the spine.
Explicitly, let $S \stackrel{\Delta}{=} \R \times (-\epsilon,0)$ denote the spine of~$\Omega_\epsilon$, and $T$, defined by
\[
T \stackrel{\Delta}{=} \bigcup_{k \in \epsilon \Z} \set[\big]{ (x,y) \st |x - \epsilon k| < \frac{\alpha\epsilon^2}{2}, \ y \in (0,1) }\,,
\]
denote the collection of the teeth (see~\eqref{e:OmegaEp} and Figure~\ref{f:fatcomb}).
Let the ``gate'' $G$, defined by
\[
G \stackrel{\Delta}{=} \partial T \cap \partial S = \bigcup_{k \in \epsilon \Z} \set[\big]{ (x,0) \st |x - \epsilon k| \leq \frac{\alpha\epsilon^2}{2} } \,,
\]
denote the union of short segments connecting the spine and teeth. Let $L^G_t$ denote the local time of $Z^\epsilon_t$ at the set $G$. Now the required local time balance can be stated as follows.
\begin{lemma}\label{l:FCLocalTimeG}
For every $g \in C^1_b(\R)$ and $K \subseteq \Omega_0$ compact we have
\begin{equation}\label{claim2G}
\lim_{\epsilon \to 0} \sup_{z \in K \cap \Omega_\epsilon} \E^z\paren[\Big]{ \alpha\int_0^t g(X_s^\epsilon) \one_{\{Y_s^\epsilon < 0\}} \,ds - 2 \int_0^t g(X_s^\epsilon) dL^{G}_s} = 0 \,.
\end{equation}
\end{lemma}
Next, we will also need to show that the local times on the left and right edges of the teeth balance.
Explicitly, let $\partial T^-$ and $\partial T^+$, defined by
\begin{gather*}
\partial T^- \stackrel{\Delta}{=} \set[\big]{ (x, y) \in \partial \Omega_\epsilon \st x \in \epsilon \Z - \frac{\alpha\epsilon^2}{2},\ y > 0 }\,,
\\
\llap{\text{and}\qquad} \partial T^+ \stackrel{\Delta}{=} \set[\big]{ (x, y) \in \partial \Omega_\epsilon \st x \in \epsilon \Z + \frac{\alpha\epsilon^2}{2},\ y > 0 }\,,
\end{gather*}
denote the left and right edges of the teeth respectively.
Let $L^-$ and $L^+$ be the local times of $Z^\epsilon$ on $\partial T^-$ and $\partial T^+$ respectively, and let $L^{\pm}$ denote the difference
\begin{equation*}
L^\pm = L^- - L^+\,.
\end{equation*}
The balance on the teeth boundaries we require is as follows.
\begin{lemma}\label{l:FCLocalTimeT}
For every $f \in \mathcal D(A)$ and $K \subseteq \Omega_0$ compact, we have
\begin{equation}\label{claim1x}
\lim_{\epsilon \to 0} \sup_{z \in K \cap \Omega_\epsilon} \E^z \paren[\Big]{ \int_0^t \frac{1}{2} \partial_x^2 f(Z^{\epsilon,+}_s) \one_{\{Y_s^\epsilon > 0\}}\,ds + \int_0^t \partial_x f(Z^{\epsilon,+}_s) \, d L^{\pm}_s } = 0\,.
\end{equation}
\end{lemma}
Momentarily postponing the proofs of Lemmas~\ref{l:FCLocalTimeG} and~\ref{l:FCLocalTimeT}, we prove Lemma~\ref{l:FCgenerator}.
\begin{proof}[Proof of Lemma~\ref{l:FCgenerator}]
Given $f \in \mathcal D(A)$, we define $f^\epsilon \colon \Omega_\epsilon \to \R$ by
\[
f^\epsilon(x,y) \stackrel{\Delta}{=} f(x, y^+)\,,
\qquad\text{where } y^+ \stackrel{\Delta}{=} \max(y, 0)\,.
\]
Thus, $f(Z^{\epsilon,+}_t) = f^\epsilon(Z^\epsilon_t)$, and~\eqref{e:fconv} reduces to showing
\begin{equation*}
\lim_{\epsilon \to 0} \sup_{z \in K \cap \Omega_\epsilon} \E^z\paren[\Big]{
f^\epsilon(Z^\epsilon_t)
- f^\epsilon(Z^\epsilon_0)
- \int_0^t \frac{1}{2} \partial_y^2 f^\epsilon (Z_s^\epsilon) \, ds
} = 0 \,.
\end{equation*}
Since $f \in \mathcal{D}(A)$, we have $\partial_x^2 f(x,0) + \alpha\partial_y f(x,0) = \partial_y^2 f(x,0)$ and $\partial_y f(x,1) = 0$. Therefore, the extension $f^\epsilon$ satisfies $\partial_x^2 f^\epsilon(x,y) = \partial_y^2 f^\epsilon(x,0^+) - \alpha\partial_y f^\epsilon(x,0^+)$ for $(x,y) \in S$, as well as $\partial_y f^\epsilon = 0$ for $(x,y) \in S$. Notice that $\partial_{y} f^\epsilon$ may be discontinuous across $G$. Using these facts and It\^o's formula, we compute
\begin{align*}
\E^z \paren[\Big]{ f^\epsilon(Z^\epsilon_t) - f^\epsilon(Z^\epsilon_s)} & = \E^z \paren[\Big]{ \int_0^t \frac{1}{2} \left( \partial_y^2 f(Z^{\epsilon,+}_s) + \partial_x^2 f(Z^{\epsilon,+}_s) \right) \one_{\{Y_s^\epsilon > 0\}} \, ds} \\
& \quad + \E^z \paren[\Big]{ \int_0^t \frac{1}{2} \partial_x^2 f(X_s^\epsilon,0^+) \one_{\{Y_s^\epsilon < 0\}} \, ds} \\
& \quad + \E^z \paren[\Big]{ \int_0^t \partial_y f(X_s^\epsilon,0^+) \, dL^{G}_s + \int_0^t \partial_x f(Z^{\epsilon,+}_s) \, d L^{\pm}_s } \\
& = \E^z \paren[\Big]{ \int_0^t \frac{1}{2} \left( \partial_y^2 f(Z^{\epsilon,+}_s) + \partial_x^2 f(Z^{\epsilon,+}_s)\right) \one_{\{Y_s^\epsilon > 0\}} \, ds } \\
& \quad + \E^z \paren[\Big]{ \frac{1}{2} \int_0^t \left( \partial_y^2 f(X_s^\epsilon,0^+) - \alpha\partial_y f(X_s^\epsilon,0^+) \right) \one_{\{Y_s^\epsilon < 0\}} \, ds} \\
& \quad + \E^z\paren[\Big]{ \int_0^t \partial_y f(X_s^\epsilon,0^+) \, dL^{G}_s + \int_0^t \partial_x f(Z^{\epsilon,+}_s) \, d L^{\pm}_s }\,,
\end{align*}
and hence
\begin{align*}
& \E^z \paren[\Big]{ f^\epsilon(Z^\epsilon_t) - f^\epsilon(Z^\epsilon_0) - \int_0^t \frac{1}{2} \partial_y^2 f^\epsilon (Z_s^\epsilon) \, ds}\\
& \quad \quad \quad \quad = \E^z \paren[\Big]{ \int_0^t \frac{1}{2} \partial_x^2 f(Z^{\epsilon,+}_s) \one_{\{Y_s^\epsilon > 0\}}\,ds + \int_0^t \partial_x f(Z^{\epsilon,+}_s) \, d L^{\pm}_s }\\
& \quad \quad \quad \quad \quad - \frac{1}{2} \E^z \paren[\Big]{ \int_0^t \alpha\partial_y f(X_s^\epsilon,0^+) \one_{\{Y_s^\epsilon < 0\}} \,ds - 2 \int_0^t \partial_y f(X_s^\epsilon,0^+) dL^{G}_s}\,.
\end{align*}
Using Lemmas~\ref{l:FCLocalTimeG} and~\ref{l:FCLocalTimeT} we see that the supremum over $z \in \Omega_\epsilon \cap K$ of the right hand side of the above vanishes as $\epsilon \to 0$.
This proves Lemma~\ref{l:FCgenerator}.
\end{proof}
It remains to prove Lemmas~\ref{l:FCLocalTimeG} and~\ref{l:FCLocalTimeT}, and we do this in Sections~\ref{s:FCLocalTimeG} and~\ref{s:FCLocalTimeT} respectively.
\subsection{Local Time at the Gate (Lemma~\ref{l:FCLocalTimeG}).}\label{s:FCLocalTimeG}
The crux in the proof of Lemma~\ref{l:FCLocalTimeG} is an oscillation estimate on the solution to a specific Poisson equation with Neumann boundary conditions (Proposition~\ref{p:uosc1}, below).
We state this when it is first encountered, and prove it in the next subsection.
\begin{proof}[Proof of Lemma~\ref{l:FCLocalTimeG}]
The expectation in~\eqref{claim2G} can be written as
\begin{multline} \label{claim2Gsum}
\E^z \paren[\Big]{ \int_0^t \alpha g(X_s^\epsilon) \one_{\{Y_s^\epsilon < 0\}} \,ds - 2 \int_0^t g(X_s^\epsilon) \, dL^{G}_s} \\
= \sum_{k \in \Z} g(\epsilon k) \E^z \paren[\Big]{ \alpha\int_0^t \one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} \,ds - 2 \int_0^t \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} \, dL^{G}_s} \\
+ R^\epsilon
\end{multline}
where the remainder term $R^\epsilon$ is given by
\begin{multline*}
R^\epsilon \stackrel{\Delta}{=}
\alpha\sum_{k \in \Z}\E^z \paren[\Big]{ \int_0^t (g(X^\epsilon_s) - g(\epsilon k))\one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X^\epsilon_s - \epsilon k| < \epsilon /2 \}} \,ds}\\
-2\sum_{k \in \Z}\E^z \paren[\Big]{\int_0^t (g(X^\epsilon_s) - g(\epsilon k)) \one_{\{ |X^\epsilon_s - \epsilon k| < \epsilon /2 \}} dL^{G}_s} \stackrel{\Delta}{=} R^\epsilon_1 + R^\epsilon_2\,.
\end{multline*}
To estimate $R^\epsilon$, for any $\delta > 0$ we choose sufficiently large $M > 0$ such that
\begin{equation}\label{R-largek}
\sup_{(x,y) \in K}\E \paren[\Big]{\int_0^t\one_{\set{|x| + 4 |W_s| + 2 \geq M}} \, ds} < \frac{\delta}{\norm{g}_\infty} \,,
\end{equation}
where $W$ is a standard Brownian motion in $\R$. Here we write $\P$ and $\E$ (without superscripts) to denote the probability measure and expected value for a standard Brownian motion. By Lemma~\ref{lem:BrownianComp}, we have
\begin{equation*}
\P^z(|X^\epsilon_s| + 1 \geq M) \leq \P(|x| + 4 |W_s| + 2 \geq M ) \,,
\end{equation*}
where $z = (x,y)$ and so the above estimate can be applied for $X^\epsilon$ independent of $\epsilon \in (0,1/2]$. Since $g$ is continuous and hence uniformly continuous on $[-M,M]$, for any $\delta > 0$ we can choose $\epsilon > 0$ such that if $x_1,x_2 \in [-M,M]$ with $\abs{x_1 - x_2} < \epsilon$ then $\abs{g(x_1) - g(x_2)} < \delta$. For such $\epsilon$ and for integers $k \in \epsilon^{-1}[-M,M]$ we have
\begin{multline}\label{R-smallk}
\E^{(x,y)}
\int_0^t\abs{g(\epsilon k) - g(X_s^\epsilon)}
\one_{\{Y_s^\epsilon < 0,\; \abs{X^\epsilon_s - \epsilon k} < \epsilon/2\}}\, ds\\
\leq \delta \int_0^t\P^z\paren[\Big]{\abs{X^\epsilon_s - \epsilon k} < \epsilon/2}\, ds\, .
\end{multline}
Combining the above with \eqref{R-largek} gives the following estimate of $R^\epsilon_1$:
\begin{align*}
\abs{R^\epsilon_1} &\leq \alpha\Biggl(\delta \sum_{\substack{k \in \mathbb{Z} \\ \epsilon k \in [-M,M]}}\int_0^t\P^z\paren[\Big]{\abs{X^\epsilon_s - \epsilon k} < \frac{\epsilon}{2}}\, ds\\
&\qquad\qquad+ 2\norm{g}_\infty\sum_{|\epsilon k| > M}
\int_0^t\P^z\paren[\Big]{\abs{X^\epsilon_s - \epsilon k} < \frac{\epsilon}{2}}\, ds\Biggr)
\\
&\leq \alpha(t + 2)\delta \,.
\end{align*}
Since $\delta > 0$ was arbitrary this proves $R^\epsilon_1\to 0$ as $\epsilon \to 0$. An estimate for $R^\epsilon_2$ can be obtained in the same manner.
Namely,
\begin{align*}
\abs{R^\epsilon_2} &\leq 2\paren[\bigg]{\delta\E^z\paren[\big]{L^G_t} +2\norm{g}_\infty\sum_{\substack{k \in \mathbb{Z} \\ |\epsilon k| \geq M}}
\E^z\paren[\Big]{\int_0^t\one_{\set{\abs{X^\epsilon_s - \epsilon k} < \epsilon/2}}\, dL^G_s}}\\
&\leq c(t)\delta + 2\norm{g}_\infty
\E^z\paren[\Big]{\int_0^t\one_{\set{|X^\epsilon_s|+1 \geq M}}\, dL^G_s}\,.
\end{align*}
Let $\tau = \inf\set{t \st \abs{X_t^\epsilon} + 1 \geq M}$ and note that by the Markov property
\begin{align*}
\nonumber
\E^z\paren[\Big]{\int_0^t\one_{\set{|X^\epsilon_s| +1 \geq M}}\, dL^G_s}
&\leq \E^z\paren[\Big]{\E^{X^\epsilon_\tau}\paren[\Big]{L^G_{t - t\wedge\tau}}}
\\
&\leq \paren[\Big]{\sup_{z'}\E^{z'}\paren[\big]{L^G_t}} \P^z(\tau < t)\,.
\end{align*}
Applying It\^o's formula to $w(Z^\epsilon)$, where
\begin{equation*}
w(x,y) \stackrel{\Delta}{=}
\begin{dcases}
\frac{1}{2}(1 - y)^2\,, & y \in [0,1]\,,\\% \ \remove{x \in [-\epsilon^2/2, \epsilon^2/2]\,,}\\
0, & \text{otherwise,}
\end{dcases}
\end{equation*}
shows
\begin{equation}\label{e:ELG}
\sup_{z \in \Omega_\epsilon} \E^z(L^G_t) = O(1)
\quad\text{as } t \to 0\,,
\end{equation}
uniformly in $\epsilon \in (0,1/2]$.
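The heuristic behind this bound is the following formal computation (ignoring the exact normalization of the local time, and the kink of $w$ across $G$): since $\Delta w = 1$ on the teeth, $\partial_\nu w = 0$ on $\partial \Omega_\epsilon$, and $\partial_y w(x, 0^+) = -1$ on $G$, It\^o's formula gives
\begin{equation*}
\E^z w(Z^\epsilon_t) - w(z)
= \frac{1}{2} \E^z \int_0^t \one_{\{Y^\epsilon_s > 0\}} \, ds
- c \, \E^z \paren[\big]{ L^G_t }\,,
\end{equation*}
for some constant $c > 0$.
Since $0 \leq w \leq 1/2$, this yields $\E^z(L^G_t) \leq (1+t)/(2c)$, uniformly in $z$ and $\epsilon$.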
By choosing $M$ larger, if necessary, we have
\begin{equation*}
\sup_{z \in K} \P^z(\tau<t)<\delta
\end{equation*}
for all $\epsilon \in (0,1/2]$. Since $\delta > 0$ is arbitrary, this shows that $R^\epsilon_2\to 0$ as $\epsilon \to 0$.
Next, we need a PDE estimate to control the expression
\begin{align*}
\E^z\paren[\Big]{ \alpha\int_0^t\one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} \,ds - 2 \int_0^t \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} dL^{G}_s}
\end{align*}
from~\eqref{claim2Gsum}. To this end, let $Q$ be a region of width~$\epsilon$ directly below the tooth at~$x = 0$, and let $G_0$ be the component of $G$ contained in $[-\epsilon/2,\epsilon/2] \times \R$.
Explicitly, let
\begin{equation}\label{e:QG0}
Q \stackrel{\Delta}{=}
\brak[\Big]{ -\frac{\epsilon}{2}, \frac{\epsilon}{2}}
\times \brak[\big]{-\epsilon,0}
\qquad\text{and}\qquad
G_0 = \set[\Big]{ (x,0) \st
- \frac{\alpha\epsilon^2}{2} < x < \frac{\alpha\epsilon^2}{2} }\,.
\end{equation}
Let $\mu^\epsilon$ denote the one dimensional Hausdorff measure supported on~$G_0$ (i.e.\ the arc length measure on $G_0$).
\begin{proposition}\label{p:uosc1}
Let the function $u^\epsilon\colon \Omega_\epsilon \to \R$ be the solution of
\begin{alignat}{2}
\label{uoscpde}
\span
- \Delta u^\epsilon = \alpha\one_{Q} - \mu^\epsilon
&\qquad& \text{in } \Omega_\epsilon
\\
\label{e:uoscBC}
\span
\partial_\nu u^\epsilon = 0
&& \text{on } \partial \Omega_\epsilon \,,
\end{alignat}
with the normalization condition
\begin{equation} \label{uepsnorm1}
\inf_{\Omega_\epsilon} u^\epsilon = 0 \,.
\end{equation}
Then there exists a constant $C > 0$, independent of~$\epsilon$ such that
\begin{equation}\label{uosc1}
\sup_{\Omega_\epsilon} u^\epsilon(z)
\leq C \epsilon^{2} \abs{\ln \epsilon}\,.
\end{equation}
\end{proposition}
\begin{remark*}
Existence of a solution to~\eqref{uoscpde}--\eqref{e:uoscBC} can be proved by
using~\cite[Thm.\ 2.2]{Droniou00} and a standard approximation argument to deal with the unbounded domain. See also \cite[Thm.\ 2.2.1.3]{Grisvard85}.
\end{remark*}
Throughout the remainder of this proof and this section, we will use the convention that $C > 0$ is a constant independent of~$\epsilon$, whose value may change from line to line. We apply It\^o's formula to the function $u^\epsilon$ defined in Proposition \ref{p:uosc1} to obtain
\begin{align*}
2 \E^z \paren{ u^\epsilon(Z^\epsilon_t) - u^\epsilon(Z^\epsilon_0) }
& = - \E^z \paren[\Big]{ \alpha\int_0^t \one_{Q}(Z_s^\epsilon) \, ds - 2 L^{G_0}_t } \\
& = - \E^z\paren[\Big]{ \alpha\int_0^t \one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X_s^\epsilon| < \epsilon /2 \}} \,ds - 2 \int_0^t \one_{\{ |X_s^\epsilon| < \epsilon /2 \}} dL^{G}_s}\,.
\end{align*}
The oscillation bound \eqref{uosc1} now implies that
\[
\left| \E^z\paren[\Big]{ \alpha\int_0^t \one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} \,ds - 2 \int_0^t \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} dL^{G}_s} \right| \leq C \epsilon^2 |\log \epsilon|
\]
holds for all $k \in \Z$ and all $z \in \Omega_\epsilon$, by the $\epsilon$-periodicity of $\Omega_\epsilon$ in the $x$ direction. Because of \eqref{R-largek}, we can restrict the sum in \eqref{claim2Gsum} to $k \in \Z$ for which $\epsilon |k| \leq M$ (i.e.\ only $O(\epsilon^{-1})$ terms in the sum). Therefore,
\begin{multline*}
\abs[\bigg]{ \sum_{\substack{k \in \Z \\\epsilon |k| \leq M} } \E^z\paren[\Big]{ \alpha\int_0^t \one_{\{Y_s^\epsilon < 0\}} \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} \,ds - 2 \int_0^t \one_{\{ |X_s^\epsilon - \epsilon k| < \epsilon /2 \}} dL^{G}_s} } \\
= O(\epsilon \abs{\log \epsilon})\,.
\end{multline*}
Combining this with the above estimates, we conclude that \eqref{claim2G} holds.
\end{proof}
To complete the proof of Lemma~\ref{l:FCLocalTimeG}, it remains to prove Proposition~\ref{p:uosc1}.
We do this in the next subsection.
\subsection{An Oscillation Estimate for the Neumann Problem (Proposition~\ref{p:uosc1}).}\label{s:uosc1}
The proof of Proposition~\ref{p:uosc1} involves a ``geometric series'' argument using the probabilistic representation.
Explicitly, we obtain the desired oscillation estimate by estimating the probabilities of successive visits of $Z^\epsilon$ between two segments.
The key step in the proof involves the so-called narrow escape problem (see for instance~\cite{HolcmanSchuss14}), which guarantees that the probability that a Brownian motion exits through a given interval on the boundary of a domain vanishes logarithmically with the interval size.
In our specific scenario, however, we cannot directly use the results of~\cite{HolcmanSchuss14}, and we prove the required estimates here.
\begin{proof}[Proof of Proposition \ref{p:uosc1}]
Note first that
\begin{equation*}
\int_{\Omega_\epsilon} \paren[\big]{ \alpha\one_Q - \mu^\epsilon } \, dz = 0\,,
\end{equation*}
and hence a bounded solution to~\eqref{uoscpde}--\eqref{e:uoscBC} exists.
Moreover, because the measure $\alpha\one_{Q}(z) \, dz - \mu^\epsilon$ is supported in $\bar{Q}$, the function $u^{\epsilon}$ is harmonic in
$
\Omega_{\epsilon}
\setminus \bar{Q}
$.
Thus, by the maximum principle,
\begin{equation*}
\sup_{\Omega_\epsilon} u^\epsilon \leq \sup_{Q} u^\epsilon\,.
\end{equation*}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\fill[spinefill] (-5,0) rectangle (5,3);
\fill[spinefill] (-.5,3) rectangle (.5,5);
\fill [blue!20!white] (-2,0) rectangle (2,3);
\draw node at (0,1.5) {Q};
\draw (-5,0) -- (5,0);
\draw (-3.5,3) -- (-.5,3);
\draw (.5,3) -- (3.5,3);
\draw (-.5,3) -- (-.5,5);
\draw (.5,3) -- (.5,5);
\draw[dashed](-.5,3) -- (.5,3);
\draw[dashed](-.5,4) -- (.5,4);
\draw node at (0,3.3) {$G_0$};
\draw node at (0,4.3) {$A'$};
\draw[dashed](-3,0) -- (-3,3);
\draw[dashed](3,0) -- (3,3);
\draw node at (-3.2,1.5) {$D'$};
\draw node at (3.2,1.5) {$D'$};
\draw[dashed](-2,0) -- (-2,3);
\draw[dashed](2,0) -- (2,3);
\draw node at (-2.2,1.5) {$D$};
\draw node at (2.2,1.5) {$D$};
\fill[spinefill] (-4.5,3) rectangle ++(1,2);
\draw (-3.5,3) -- (-3.5,5);
\draw (-4.5,3) -- (-4.5,5);
\fill[spinefill] (3.5,3) rectangle ++(1,2);
\draw (3.5,3) -- (3.5,5);
\draw (4.5,3) -- (4.5,5);
\draw (-5,3) -- (-4.5,3);
\draw (5,3) -- (4.5,3);
\end{tikzpicture}
\caption{One period of $\Omega_\epsilon$, showing the regions $Q$, $G_0$, $A'$, $D$ and $D'$.}
\end{center}
\end{figure}
Define $Q' \supseteq Q$ to be the region that enlarges~$Q$ by $\epsilon^2$ on the top, and $\epsilon/4$ on the sides.
Precisely, let
\begin{equation*}
Q' \stackrel{\Delta}{=}
\Omega_{\epsilon} \bigcap \paren[\Big]{ \brak[\big]{-\frac{3\epsilon}{4}, \frac{3\epsilon}{4}} \times \brak[\big]{-\epsilon,\epsilon^2} }\,.
\end{equation*}
The first step is to estimate the oscillation of $u^\epsilon$ on the top and side portion of $Q'$.
Let~$A'$ and $D'$, defined by
\begin{equation}\label{e:ad}
A' \stackrel{\Delta}{=}
\brak[\Big]{-\frac{\alpha\epsilon^2}{2}, \frac{\alpha\epsilon^2}{2}}
\times \set[\big]{ \alpha \epsilon^2 }
\qquad\text{and}\qquad
D' \stackrel{\Delta}{=} \set[\Big]{\pm \frac{3\epsilon}{4}}
\times \brak[\big]{-\epsilon,0}
\end{equation}
denote the top and sides of~$Q'$ respectively.
We aim to show
\begin{align}\label{uosctopbottom}
\sup_{a,d \in A' \cup D'} \abs{u^\epsilon(a) - u^\epsilon(d)} \leq C \epsilon^2 \abs{\ln \epsilon} \,.
\end{align}
Let $\tau_0$ be the first time at which the process $Z^\epsilon_t$ hits the gate~$G_0$ (defined in~\eqref{e:QG0}).
The stopping time $\tau_0$ is finite almost surely, but has infinite expectation.
We claim that the distribution of~$Z^\epsilon_{\tau_0}$ on~$G_0$ is bounded below by a constant multiple of the Hausdorff measure, uniformly over all initial points in $A' \cup D'$.
\begin{lemma}\label{lem:rholower}
For any~$z \in A' \cup D'$, let $\rho(z, \cdot)$, defined by
\begin{equation*}
\rho(z, r) \, dr = \P^z (Z^\epsilon_{\tau_0} \in dr )\,,
\end{equation*}
denote the density of the random variable $Z^\epsilon_{\tau_0}$ on~$G_0$.
Then, there exists $\delta > 0$ such that
\begin{equation}\label{rholowerS}
\rho(z,r) \geq \frac{\delta}{ \alpha \epsilon^2}\,,
\end{equation}
for all $z \in A'\cup D'$ and $r \in G_0$.
\end{lemma}
Momentarily postponing the proof of this lemma, we note that for any $a, d \in A' \cup D'$, we have
\begin{align*}
\MoveEqLeft
\E^{a} u^\epsilon(Z^\epsilon_{\tau_0}) - \E^{d} u^\epsilon(Z^\epsilon_{\tau_0}) = \int_{G_0} \rho(a,r) u^\epsilon(r) \,dr - \int_{G_0} \rho(d,r) u^\epsilon(r) \,dr \\
& = \int_{G_0} \paren[\Big]{ \rho(a,r) - \frac{\delta}{\alpha \epsilon^{2}} } u^\epsilon(r) \,dr
- \int_{G_0} \paren[\Big]{ \rho(d,r) - \frac{\delta}{\alpha \epsilon^{2}} } u^\epsilon(r) \,dr \\
& \leq \paren{1 - \delta} \paren[\Big]{\sup_{G_0} u^\epsilon - \inf_{G_0} u^\epsilon }
\leq \paren{1 - \delta} \paren[\Big]{\sup_{r_1, r_2 \in G_0} \abs{u^\epsilon(r_1) - u^\epsilon(r_2)} }\,.
\end{align*}
To obtain the first inequality above we used the fact that
\begin{equation*}
\rho(z,r) - \frac{\delta}{ \alpha \epsilon^{2}} \geq 0\,,
\end{equation*}
which is guaranteed by Lemma~\ref{lem:rholower}.
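Note also that the factor $(1 - \delta)$ arises because $\rho(z, \cdot)$ is a probability density on $G_0$ and $G_0$ has length $\alpha \epsilon^2$, so that
\begin{equation*}
\int_{G_0} \paren[\Big]{ \rho(z, r) - \frac{\delta}{\alpha \epsilon^{2}} } \, dr = 1 - \delta
\qquad\text{for every } z \in A' \cup D'\,.
\end{equation*}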
Now by It\^o's formula,
\begin{align}\label{uaubbound}
\nonumber
u^\epsilon(a) - u^\epsilon(d) & = \E^{a} u^\epsilon(Z^\epsilon_{\tau_0}) - \E^{d} u^\epsilon(Z^\epsilon_{\tau_0}) - \frac{1}{2} \E^a \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
- \alpha\int_0^{\tau_0} \one_{Q}(Z^\epsilon_s) \, ds } \\
\nonumber
& \qquad + \frac{1}{2} \E^d \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
- \alpha\int_0^{\tau_0} \one_{Q}(Z^\epsilon_s) \, ds } \\
\nonumber
\leq ( 1 - \delta)& \sup_{r_1,r_2 \in G_0} |u^\epsilon(r_1) - u^\epsilon(r_2)| - \frac{1}{2} \E^a \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
-\alpha \int_0^{\tau_0} \one_{Q}(Z^\epsilon_s) \, ds } \\
& \qquad + \frac{1}{2} \E^d \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
- \alpha\int_0^{\tau_0} \one_{Q}(Z^\epsilon_s) \, ds }
\end{align}
Note that by definition of~$\tau_0$ we have $L_{\tau_0}^{G_0^+} = 0$ for all $a,d \in A' \cup D'$.
Also, if $a \in A'$, then $Y^\epsilon_s > 0$ for all $s \in [0,\tau_0]$ with probability one.
Hence
\begin{multline}\label{e:uaubbound2}
\sup_{a, d \in A' \cup D'} \Bigl\lvert
- \frac{1}{2} \E^a \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
- \alpha\int_0^{\tau_0} \one_{Q}(Z^\epsilon_s) \, ds
}
\\
+ \frac{1}{2} \E^d \paren[\Big]{
2 L^{G_0^+}_{\tau_0}
- \alpha\int_0^{\tau_0} \one_{ Q }(Z^\epsilon_s) \, ds
}
\Bigr\rvert
\leq \alpha\sup_{d \in D'} \E^d \int_0^{\tau_0} \one_Q(Z^\epsilon_s) \,ds\,.
\end{multline}
We claim that the term on the right is bounded by~$C \epsilon^2 \abs{\ln \epsilon}$.
To avoid distracting from the main proof, we single this out as a lemma and postpone the proof.
\begin{lemma}\label{lem:ABtoG}
With the above notation,
\begin{equation*}
\sup_{d \in D'} \E^d \int_0^{\tau_{0}} \one_{Q}(Z^\epsilon_s) \, ds \leq C \epsilon^2 \abs{\ln \epsilon} \,.
\end{equation*}
\end{lemma}
Using Lemma~\ref{lem:ABtoG} and~\eqref{e:uaubbound2} in~\eqref{uaubbound} we conclude
\begin{align} \label{uaubbound2}
\sup_{a,d \in A' \cup D'} \abs{ u^\epsilon(a) - u^\epsilon(d)} \leq ( 1 - \delta) \sup_{r_1,r_2 \in G_0} \abs{u^\epsilon(r_1) - u^\epsilon(r_2)} + C \epsilon^2 \abs{\ln \epsilon} \,.
\end{align}
To finish proving~\eqref{uosctopbottom}, we now need to control the oscillation of~$u^\epsilon$ on $G_0$ in terms of the oscillation of~$u^\epsilon$ on $A' \cup D'$.
For this, given $Z^\epsilon_0 \in G_0$, let $\tau'_0$ be the first time that $Z^\epsilon_t$ hits $A' \cup D'$.
By It\^o's formula again, we have for all $r_1,r_2 \in G_0$:
\begin{multline} \label{uxuySbound}
u^\epsilon(r_1) - u^\epsilon(r_2)
\leq \sup_{a',d' \in A' \cup D'} (u^\epsilon(a') - u^\epsilon(d'))
\\
-\frac{1}{2} \E^{r_1} \paren[\Big]{
2 L^{G_0}_{\tau'_0}
- \alpha\int_0^{\tau'_0} \one_{Q} \, ds }
+ \frac{1}{2} \E^{r_2} \paren[\Big]{
2 L^{G_0}_{\tau'_0}
- \alpha\int_0^{\tau'_0} \one_{Q } \, ds }\,.
\end{multline}
We claim that the last two terms above are $O(\epsilon^2)$.
For clarity of presentation we single this out as a Lemma and postpone the proof.
\begin{lemma}\label{lem:GtoAB}
With the above notation
\begin{equation*}
\sup_{r \in G_0} \abs[\Big]{
\E^r \paren[\Big]{
2 L^{G_0}_{\tau'_0} - \alpha\int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds }
}
\leq C \epsilon^2\,.
\end{equation*}
\end{lemma}
Using~\eqref{uxuySbound} and Lemma \ref{lem:GtoAB}, we see
\begin{align*}
\sup_{r_1,r_2 \in G_0} \abs{u^\epsilon(r_1) - u^\epsilon(r_2)}
\leq \sup_{a,d \in A' \cup D'} \abs{u^\epsilon(a) - u^\epsilon(d)}
+ C \epsilon^2\,.
\end{align*}
Combining this with \eqref{uaubbound2}, we obtain
\begin{equation*}
\sup_{a,d \in A' \cup D'} \abs{u^\epsilon(a) - u^\epsilon(d)}
\leq (1 - \delta) \paren[\Big]{ \sup_{a,d \in A' \cup D'} \abs{u^\epsilon(a) - u^\epsilon(d)} + C \epsilon^2 \abs{\ln \epsilon}} + C \epsilon^2\,,
\end{equation*}
and hence
\begin{align}
\label{oscboundary}
\sup_{a,d \in A' \cup D'}
\abs{u^\epsilon(a) - u^\epsilon(d)}
\leq C \paren[\Big]{ \frac{1 - \delta}{\delta} } \epsilon^2 \abs{\ln \epsilon} + \frac{C}{\delta} \epsilon^2\,.
\end{align}
This proves~\eqref{uosctopbottom} as desired.
Now we turn this into an oscillation bound on $u^\epsilon$ over the interior.
Observe that for any $z \in \Omega_\epsilon$,
\begin{align*}
u^\epsilon(z) = \E^z[ u^\epsilon(Z^\epsilon_{\tau'_0})] + \frac{1}{2} \E^z \paren[\Big]{
2 L^{Y^\epsilon}_{\tau'_0} (0^+)
- \alpha\int_0^{\tau'_0} \one_{\set{Y^\epsilon_s \leq 0} } \, ds }\,.
\end{align*}
These last terms can be estimated with the same argument used in Lemma~\ref{lem:GtoAB}, leading to
\[
\sup_{z \in \Omega_\epsilon} \abs{u^\epsilon(z) - \E^z u^\epsilon(Z^\epsilon_{\tau'_0})} \leq C \epsilon^2 \,.
\]
The combination of this and \eqref{oscboundary} implies that
\begin{equation*}
\sup_{z_1,z_2 \in \Omega_\epsilon} \abs{u^\epsilon(z_1) - u^\epsilon(z_2)}
\leq \sup_{z_1,z_2 \in \Omega_\epsilon}
\abs{\E^{z_1} u^\epsilon(Z^\epsilon_{\tau'_0})
- \E^{z_2} u^\epsilon(Z^\epsilon_{\tau'_0})}
+ C \epsilon^2
\leq C \epsilon^2 (\abs{\ln \epsilon} + 1)\,.
\end{equation*}
This implies \eqref{uosc1}, concluding the proof.
\end{proof}
For the proof of Lemma~\ref{lem:rholower} we will use a standard large deviation estimate for Brownian motion.
We state the result we need below.
\begin{lemma}\label{lem:steering}
Let $W_t$ be a standard Brownian motion in $\R^d$. Let $\gamma \in C([0,T];\R^d)$ be absolutely continuous with $S(\gamma) = \int_0^T |\gamma'(s)|^2 \,ds < \infty$. Then
\[
\P\left( \sup_{t \in [0,T]} |W(t) - \gamma(t)| \leq \delta \right) \geq \frac{\P(K)}{2} e^{- \frac{1}{2} S(\gamma) - \sqrt{2S(\gamma)/\P(K)} }
\]
where $K$ is the event $\{ \sup_{t \in [0,T]} |W(t)| \leq \delta \}$.
\end{lemma}
The proof of Lemma~\ref{lem:steering} is standard -- it follows from a change of measure, as in the proof of Theorem 3.2.1 of \cite{FreidlinWentzell12}, for example. For convenience we provide a proof at the end of this section, and prove Lemmas~\ref{lem:rholower}, \ref{lem:ABtoG} and~\ref{lem:GtoAB} next.
\begin{proof}[Proof of Lemma \ref{lem:rholower}]
We need to show that for an interval $[r_1,r_2] \subset [-\alpha\epsilon^2/2, \alpha\epsilon^2/2]$,
\[
\inf_{z \in A' \cup D'} \P^z\left( Z^\epsilon_{\tau_0} \in [r_1,r_2] \times \{0\} \right) \geq C \frac{|r_2 - r_1|}{\alpha \epsilon^2}\,.
\]
Suppose $z \in D'$ (the case $z \in A'$ is similar, and simpler given the domain geometry). In order to hit $G_0$, the process must first hit $\partial B(0,\alpha\epsilon^2)$, the boundary of the ball of radius $\alpha\epsilon^2$ centered at the origin, since $G_0 \subset B(0,\alpha\epsilon^2)$. So, by the strong Markov property, it suffices to show that
\[
\inf_{z \in \partial B(0,\alpha\epsilon^2) } \P^z\left( Z^\epsilon_{\tau_0} \in [r_1,r_2] \times \{0\} \right) \geq C \frac{|r_2 - r_1|}{\alpha \epsilon^2}.
\]
\begin{figure}[hbt]
\begin{center}
\begin{tikzpicture}[scale=.9]
\fill[spinefill]
plot [smooth, tension=.5]
coordinates { (4.83, 2.71) (4, 2) (2.5,1.5) (-1,1.5) (-3.5,1.5) }
-- (-3.5,-.5)
-- plot [smooth, tension=.5]
coordinates {(-1,-.5) (2.5,-.5) (4, 0) (4.83, .71) }
-- cycle;
\draw plot [smooth, tension=.5] coordinates { (4.83, 1.71) (4, 1) (2.5,.5) (-1,.5) (-3.5,.5)};
\draw[dotted] plot [smooth, tension=.5] coordinates { (4.83, 2.71) (4, 2) (2.5,1.5) (-1,1.5) (-3.5,1.5)};
\draw[dotted] plot [smooth, tension=.5] coordinates { (4.83, .71) (4, 0) (2.5,-.5) (-1,-.5) (-3.5,-.5)};
\draw[dashed, green] (-1, .5) circle (2.692582);
\node[draw,circle,inner sep=2pt,fill] at (-3.5, .5) {};
\draw node at (-3.5, .8) {$\gamma(T)$};
\draw node at (2.5,.8) {$\gamma$};
\draw[ultra thick, red, |-|](-2.5,3) -- (2.5,3);
\draw[ultra thick, green, |-|](-2,3) -- ++(2,0);
\draw (-5,3) -- (-2.5,3);
\draw (2.5,3) -- (5,3);
\draw node at (1.8,2.6) {$\textcolor{red}{G_0}$};
\draw (-5,3) arc (180:360:5cm);
\node[draw,circle,inner sep=2pt,fill] at (-1, 3) {};
\draw node at (-1, 3.5) {$(r_0,0)$};
\draw[decorate,decoration={brace,mirror}] (-1, 2.8) -- ++(1,0)
node[pos=.5, anchor=north, yshift=-2pt] {$\kappa$};
\draw [dashed](-1,3) -- (-1,-2);
\draw node at (-1.3,1.5) {$\ell$};
\node[draw,circle,inner sep=2pt,fill] at (4.83, 1.71) {};
\draw node at (5.13,1.71) {$z$};
\coordinate (A) at (1,1.5);
\coordinate (B) at ( 1,.5);
\draw[<->] (A) -- (B) node[midway,fill=spinefill] {$\delta$};
\end{tikzpicture}
\caption{The curve $\gamma$ starts on $\partial B(0,\epsilon^2)$, goes through the line $\ell$ while keeping a distance $\delta$ from the gate $G_0$. }
\label{tikz:slit}
\end{center}
\end{figure}
Suppose that $[r_1, r_2] = [r_0 - \kappa, r_0 + \kappa]$.
Let $\ell = \{r_0\} \times [-\epsilon^2,0)$ be the vertical line segment of length~$\epsilon^2$ below the desired exit interval.
Let $T = \epsilon^4$, $\delta = \epsilon^2/4$, and let $\gamma \in C([0,T];\R^2)$ be a curve such that $\gamma(0) = z$ and the event $\sup_{t \in [0,T]} |Z^\epsilon(t) - \gamma(t)| \leq \delta$ implies that $Z^\epsilon$ hits~$\ell$ before $G_0$ (one example of such a curve is shown in Figure~\ref{tikz:slit}). Since such a curve only needs to travel a distance of order $\epsilon^2$ in time $T = \epsilon^4$, we can choose $\gamma$ with $|\gamma'| = O(\epsilon^{-2})$, so that the quantity $S(\gamma) = \int_0^T |\gamma'(s)|^2 \, ds = O(\epsilon^4 \cdot \epsilon^{-4}) = O(1)$ in Lemma~\ref{lem:steering} is bounded independently of $\epsilon$ and of $z = \gamma(0) \in B(0,\epsilon^2)$. Notice also that the set $K$ from Lemma~\ref{lem:steering} satisfies
\begin{equation*}
\P(K) = \P\paren[\big]{ \sup_{t \in [0,T]} |W(t)| \leq \delta} = \P\paren[\Big]{ \sup_{t \in [0,1]} |W(t)| \leq \frac{\delta}{\sqrt{T}}}
\end{equation*}
by Brownian scaling. Since $\delta/\sqrt{T} = 1/4$ is constant, this probability is bounded below by a positive constant, and hence Lemma~\ref{lem:steering} implies that the probability that $Z^\epsilon$ hits~$\ell$ before $G_0$ is bounded below, away from zero, independently of $\epsilon$.
By the strong Markov property it now suffices to finish the proof assuming $Z^\epsilon_0 = z_0 \in \ell$. Consider the unique circle centered at $z_0 \in \ell$ that passes through the points $(r_0 - \kappa, 0)$ and $(r_0 + \kappa, 0)$. By the rotational symmetry of Brownian motion, the exit distribution on this circle is uniform. The probability that $Z^\epsilon_{\tau_0} \in [r_0 - \kappa, r_0 + \kappa]$ is at least the probability of exiting this circle along the arc above $G_0$, which is the ratio of that arc length to the circumference. This ratio is bounded below by $2 \kappa /(\alpha \epsilon^{2}) \gtrsim |r_2 - r_1|/(\alpha \epsilon^{2})$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:ABtoG}]
By the Markov property, Lemma~\ref{lem:ABtoG} will follow from the estimate
\begin{equation}\label{e:exit1}
\sup_{z\in Q}\E^z\int_0^{\tau_0} \one_Q(Z^\epsilon_s) \,ds \leq C \epsilon^2 \abs{\ln \epsilon}\,.
\end{equation}
Let $D = \set{\pm \epsilon/2}\times[-\epsilon,0]$ be the sides of~$Q$, and recall $D'$ (defined in~\eqref{e:ad}) denotes the sides of~$Q'$.
We consider two sequences of stopping times, $\zeta_i$ and $\eta_i$, denoting successive visits of~$Z^\epsilon$ to $G_0 \cup D'$ and to $D$ respectively.
Precisely, let $\eta_0 = 0$, inductively define
\begin{align*}
\zeta_i &= \inf\set{s > \eta_{i-1} \st Z^\epsilon_s \in G_0 \cup D'}\\
\eta_i &= \inf\set{s > \zeta_i \st Z^\epsilon_s \in D}\,,
\shortintertext{for $i \in \set{1, 2, \dots}$, and let}
M &= \min\set{n\in \mathbb{N} \st Z^\epsilon_{\zeta_n}\in G_0}\,.
\end{align*}
Notice that $\zeta_M = \tau_0$.
Using the strong Markov property, and the fact that $Z^\epsilon_s\notin Q$ for $s\in (\zeta_i,\eta_i)$ for all $i < M$, we obtain
\begin{align}\label{e:stopseq1}
\nonumber
\E^z\int_0^{\tau_0}\one_Q(Z^\epsilon_s)\,ds
&= \E^z\sum_{i=1}^M\int_{\eta_{i-1}}^{\zeta_i}\one_Q(Z^\epsilon_s)\,ds
= \E^z\sum_{i=1}^M \E^{Z^\epsilon_{\eta_{i-1}}}\int_{0}^{\zeta_1}\one_Q(Z^\epsilon_s)\,ds
\\
&\leq (\E^z M) \paren[\Big]{
\sup_{d \in D} \E^d \int_0^{\zeta_1} \one_Q (Z^\epsilon_s) \, ds
}\,.
\end{align}
Since $\zeta_1$ is bounded by the exit time of a one dimensional Brownian motion (the first coordinate of $Z^\epsilon$) from an interval of length $3\epsilon/2$, we know
\begin{equation*}
\sup_{d \in D} \E^d \zeta_1 \leq C \epsilon^2\,.
\end{equation*}
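This is the standard exit time identity for one dimensional Brownian motion: if $W$ is started at $x \in (a,b)$ and $\tau_{(a,b)}$ denotes its exit time from $(a,b)$, then
\[
\E^x \tau_{(a,b)} = (x-a)(b-x) \leq \frac{(b-a)^2}{4}\,,
\]
which for an interval of length $3\epsilon/2$ gives the above bound with $C = 9/16$.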
Using this in~\eqref{e:stopseq1} shows
\begin{equation}\label{e:exit2}
\E^z\int_0^{\tau_0}\one_Q(Z^\epsilon_s)\,ds
\leq C \epsilon^2 \E^z M\,.
\end{equation}
We now estimate $\E^z M$. Notice that
\begin{align*}
\P^z( M \geq n )
&= \P^z( Z^\epsilon_{\zeta_1} \not\in G_0,\ Z^\epsilon_{\zeta_2} \not\in G_0,\ \dots,\ Z^\epsilon_{\zeta_n} \not\in G_0)\\
&= \E^z \paren[\Big]{
\one_{\set{Z^\epsilon_{\zeta_1} \not\in G_0,\ Z^\epsilon_{\zeta_2} \not\in G_0,\ \dots,\ Z^\epsilon_{\zeta_{n-1}} \not\in G_0} }
\P^{Z^\epsilon_{\eta_{n-1}}} (Z^\epsilon_{\zeta_{1}} \not\in G_0)
}
\\
&\leq \P^z \paren{
Z^\epsilon_{\zeta_1} \not\in G_0,\ Z^\epsilon_{\zeta_2} \not\in G_0,\ \dots,\ Z^\epsilon_{\zeta_{n-1}} \not\in G_0}
\paren[\Big]{ \sup_{d \in D} \P^{d} (Z^\epsilon_{\zeta_{1}} \not\in G_0) }
\\
&= \P^z( M \geq n-1 )
\paren[\Big]{ \sup_{d \in D} \P^{d} (Z^\epsilon_{\zeta_{1}} \not\in G_0) }\,.
\end{align*}
Thus, by induction
\begin{equation*}
\P^z ( M \geq n ) \leq
\paren[\Big]{ \sup_{d \in D} \P^{d} (Z^\epsilon_{\zeta_{1}} \not\in G_0) }^{n-1}\,.
\end{equation*}
Now we claim that there exists a constant $c_0>0$, independent of $\epsilon$, such that
\begin{equation}\label{e:exitp1}
\sup_{d\in D} \P^d\paren[\big]{ Z^\epsilon_{\zeta_1} \not\in G_0} < 1 - \frac{c_0}{\abs{\ln \epsilon}}\,.
\end{equation}
This is the key step in the proof.
Once established, it implies
\begin{equation*}
\E^z M = \sum_{n = 1}^\infty \P^z( M \geq n )
\leq \sum_{n = 1}^\infty \paren[\Big]{1 - \frac{c_0}{\abs{\ln \epsilon }}}^{n-1} = \frac{\abs{\ln \epsilon}}{c_0}\,,
\end{equation*}
which when combined with~\eqref{e:exit2} yields
\begin{equation}\label{e:exit3}
\sup_{z\in Q}\E^z\int_0^{\tau_0}\one_Q(Z^\epsilon_s)\,ds \leq \frac{C \epsilon^2 \abs{\ln \epsilon}}{c_0}\,.
\end{equation}
This proves~\eqref{e:exit1} and finishes the proof of Lemma~\ref{lem:ABtoG}.
\medskip
Thus it only remains to prove~\eqref{e:exitp1}.
We will prove it by showing
\begin{equation}\label{e:exitp0}
\inf_{z\in D} \P^z\paren[\big]{ Z^\epsilon_{\zeta_1} \in G_0} > \frac{c_0}{\abs{\ln \epsilon}}\,.
\end{equation}
We will prove this in three stages.
First, by scaling, the process $Z^\epsilon$ started from any point of~$D$ hits $B(0, \epsilon / 4)$ before $D'$ with probability at least a constant $p_1 > 0$.
Next, using the explicit Green's function of an annulus, we show that started from $\partial B(0, \epsilon/4)$, the process $Z^\epsilon$ hits $B(0, \alpha\epsilon^2)$ before exiting $B(0, \epsilon/2)$ with probability at least $c / \abs{\ln \epsilon}$.
Finally, by scaling again, started from $\partial B(0, \alpha\epsilon^2)$, the process $Z^\epsilon$ hits $G_0$ before exiting $B(0, 2\alpha\epsilon^2)$ with probability at least a constant $p_2 > 0$.
For the first stage, consider the stopping times
\begin{align*}
\sigma_{\epsilon/4} & = \inf \set[\Big]{ t > 0 \st Z^\epsilon_t \in B\paren[\Big]{0, \frac{\epsilon}{4}} } \,,\\
\sigma_{D'} & = \inf \set{ t > 0 \st Z^\epsilon_t \in D' } \,.
\end{align*}
By rescaling, it immediately follows that
\begin{equation}\label{p0lower}
\inf_{z \in D} \P( \sigma_{\epsilon/4} < \sigma_{D'} \st Z^\epsilon_0 = z) \geq p_1\,,
\end{equation}
for some $p_1 > 0$, independent of $\epsilon$.
For the second stage, suppose that $Z^\epsilon_0 \in \partial B(0,\epsilon/4)$.
Consider the stopping times~$\sigma_{\alpha\epsilon^2}$ and $\sigma_{\epsilon/2}$ defined by
\begin{align*}
\sigma_{\alpha\epsilon^2} & = \inf \{ t > 0 \st Z^\epsilon_t \in \partial B(0,\alpha\epsilon^2) \}\,,\\
\sigma_{\epsilon/2} & = \inf \{ t > 0 \st Z^\epsilon_t \in \partial B(0,\epsilon/2) \}\,.
\end{align*}
The function
\[
f(z) = \frac{\ln (2|z|/\epsilon)}{\ln(2 \alpha\epsilon)}
\]
is harmonic in $B(0,\epsilon/2) \setminus B(0,\alpha\epsilon^2)$ and satisfies $f = 1$ on $\partial B(0,\alpha\epsilon^2)$, and $f = 0$ on $\partial B(0,\epsilon/2)$. This implies that for all $z \in \partial B(0, \epsilon/4)$ we have
\begin{equation}\label{loglower}
\P^z( \sigma_{\alpha\epsilon^2} < \sigma_{\epsilon/2} )
= f(z)
= \frac{\ln (1/2)}{\ln(2 \alpha\epsilon)} \,.
\end{equation}
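Note that the right-hand side of~\eqref{loglower} is indeed of order $1/\abs{\ln \epsilon}$: for any constant $c_1 > 0$ and all sufficiently small $\epsilon$ we have $\abs{\ln(c_1 \epsilon)} \leq 2 \abs{\ln \epsilon}$, and hence
\[
\frac{\ln(1/2)}{\ln(c_1 \epsilon)}
= \frac{\ln 2}{\abs{\ln(c_1 \epsilon)}}
\geq \frac{\ln 2}{2 \abs{\ln \epsilon}}\,,
\]
which is the source of the factor $c_0/\abs{\ln \epsilon}$ in~\eqref{e:exitp0}.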
Finally, for the last stage, let $\sigma_{2\alpha\epsilon^2}$ be the first time $Z^\epsilon$ exits $B(0, 2\alpha\epsilon^2)$.
By scaling, it immediately follows that for all $z \in \partial B(0, \alpha\epsilon^2)$
\begin{equation}\label{peps2G0}
\P^z( \tau_0 < \sigma_{2\alpha\epsilon^2} ) \geq p_2\,,
\end{equation}
for some constant $p_2 > 0$, independent of~$\epsilon$.
The strong Markov property and~\eqref{p0lower}, \eqref{loglower}, and~\eqref{peps2G0} imply
\[
\inf_{z\in D} \P(Z^\epsilon_{\zeta_1} \in G_0\st Z^\epsilon_{0}= z) \geq p_1 \cdot \frac{\ln (1/2)}{\ln(2 \alpha\epsilon)} \cdot p_2 \,.
\]
By the time-homogeneity of the Markov process $Z^\epsilon$, this establishes \eqref{e:exitp0}, finishing the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:GtoAB}]
To estimate the local time term, consider the function
\begin{equation*}
w(x,y) =
\begin{dcases}
\alpha \epsilon^2 - y\,, & y \in [0,\alpha \epsilon^2]\,,\\
\alpha \epsilon^2, & \text{otherwise,}
\end{dcases}
\end{equation*}
which satisfies $\partial_y w(x,0^+) - \partial_y w(x,0^-) = - 1$ for $x \in [-\alpha\epsilon^2/2, \alpha\epsilon^2/2]$. Let $\tau_{A'}$ be the first hitting time to the set $A'$, where we know $w = 0$.
Using It\^o's formula we obtain
\[
\E^z L^{G_0}_{\tau_{A'}} = w(z), \quad z \in G_0.
\]
Clearly $\tau_{A'} \geq \tau'_0$, and so
\[
\sup_{z \in G_0} \E^z L^{G_0}_{\tau'_0} \leq \sup_{z \in G_0} \E^z L^{G_0}_{\tau_{A'}} = \alpha \epsilon^2 \,.
\]
Next, we estimate the term
\begin{equation}\label{e:intZtau0prime}
\sup_{z \in G_0} \E^z \int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds \,.
\end{equation}
Let $\tau_{D'} = \inf \{ t > 0 \st Z^\epsilon_t \in D'\}$, so that $\tau_{D'} \geq \tau'_0$. Let $H = \{ (x,y) \in \R^2 \st y = -\epsilon \}$ denote the bottom boundary of $\Omega_0$, and let $H' = [-3\epsilon/4,3\epsilon/4] \times \{ -\epsilon \} = \bar{Q'} \cap H$.
We now consider repeated visits to $H'$ before hitting $D'$.
For this, define the stopping times $\{\zeta_k \}_{k=0}^\infty$ inductively by
\begin{align*}
\zeta_0 & = \inf \set{ t > 0 \st Z^\epsilon_t \in H }\,,\\
\zeta_k & = \inf \set{ t \geq \zeta_{k-1} + \epsilon^2 \st Z^\epsilon_t \in H }\,,
\quad \text{for } k = 1,2,3,\dots\,,\\
\shortintertext{and define}
M &= \min \{ k \in \mathbb{N} \st Z^\epsilon_{\zeta_k} \in H \setminus H' \}\,.
\end{align*}
Observe that if $Z^\epsilon_0 \in G_0$, then $\tau_{D'} \leq \zeta_M$.
Indeed, since $Z^\epsilon_{\zeta_M} \in H \setminus H'$ and the trajectories of $Z^\epsilon$ are continuous, the process must have passed through the set $D'$ at some time before~$\zeta_M$.
Now, to bound~\eqref{e:intZtau0prime} we observe
\begin{align}
\int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds \leq \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \sum_{k=1}^M \int_{\zeta_{k-1}}^{\zeta_k} \one_{Q}(Z^\epsilon_s) \, ds. \label{tausum}
\end{align}
On the event $\{M > k-1\}$ we must have $Z^\epsilon_{\zeta_{k-1}} \in H'$. Using this observation, the strong Markov property, and the time-homogeneity of the process, we see that for any $z \in G_0$ we have
\begin{align}
\E^z \int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds & \leq \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \E^z \sum_{k=1}^M \int_{\zeta_{k-1}}^{\zeta_k} \one_{Q}(Z^\epsilon_s) \, ds \nonumber \\
& = \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \E^z \sum_{k=1}^M \E^{Z^\epsilon_{\zeta_{k-1}}} \int_{\zeta_{0}}^{\zeta_1} \one_{Q}(Z^\epsilon_s) \, ds \nonumber \\
& \leq \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \E^z \sum_{k=1}^M \sup_{z' \in H'} \E^{z'} \int_{\zeta_{0}}^{\zeta_1} \one_{Q}(Z^\epsilon_s) \, ds \nonumber \\
& = \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + (\E^z M) \sup_{z' \in H'} \E^{z'} \int_{\zeta_{0}}^{\zeta_1} \one_{Q}(Z^\epsilon_s) \, ds\,. \label{tausum2}
\end{align}
We now bound the right hand side of~\eqref{tausum2}.
Note
\begin{equation}\label{e:EM1}
\E^z M
= \sum_{j = 1}^\infty \P^z(M \geq j)
= \sum_{j = 1}^\infty \P^z( Z^\epsilon_{\zeta_{0}} \in H',\ Z^\epsilon_{\zeta_1} \in H',\ \dots,\ Z^\epsilon_{\zeta_{j-1}} \in H' )\,.
\end{equation}
By the Markov property
\begin{align}
\nonumber
\P^z\paren[\big]{ Z^\epsilon_{\zeta_{i+1}} \in H',\ Z^\epsilon_{\zeta_i} \in H'} &= \E^z \paren[\big]{ \one_{Z^\epsilon_{\zeta_i} \in H'} \P^{Z^\epsilon_{\zeta_i}} ( Z^\epsilon_{\zeta_1} \in H' ) }
\\
\label{e:zetai1}
&\leq \paren[\Big]{ \sup_{z' \in H'} \P^{z'}( Z^\epsilon_{\zeta_1} \in H' ) } \P^z( Z^\epsilon_{\zeta_i} \in H' )
\end{align}
Now using Lemma~\ref{lem:steering} and the fact that $\zeta_1 \geq \epsilon^2$, one can show that
\begin{equation*}
\sup_{z' \in H'} \P^{z'}( Z^\epsilon_{\zeta_1} \in H' ) \leq 1 - c_0\,,
\end{equation*}
for some constant $c_0 > 0$, independent of~$\epsilon$.
Combining this with~\eqref{e:zetai1} and using induction we obtain
\begin{equation*}
\sum_{j = 1}^\infty \P^z( Z^\epsilon_{\zeta_{0}} \in H',\ Z^\epsilon_{\zeta_1} \in H',\ \dots,\ Z^\epsilon_{\zeta_{j-1}} \in H' )
\leq \sum_{j=1}^\infty (1 - c_0)^{j-1}\,.
\end{equation*}
Thus, using~\eqref{e:EM1} we see
\begin{equation*}
\E^z M \leq \frac{1}{c_0}\,.
\end{equation*}
Using this in~\eqref{tausum2} we have
\begin{align}
\E^z \int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds & \leq \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \frac{1}{c_0} \sup_{z' \in H'} \E^{z'} \int_{\zeta_{0}}^{\zeta_1} \one_{Q}(Z^\epsilon_s) \, ds \nonumber \\
\label{Qtime1}
& \leq \E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds + \frac{1}{c_0} \paren[\Big]{ \epsilon^2 + \sup_{z' \in \Omega} \E^{z'} \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds }\,.
\end{align}
To bound this, consider the function
\begin{equation*}
v(x,y)
= \begin{dcases}
\tfrac{1}{2} \paren{\epsilon^2 - y^2} \,, & y \in [-\epsilon,0]\,, \\
\tfrac{1}{2} \epsilon^2 \,, & y > 0\,.
\end{dcases}
\end{equation*}
and observe that $v$ is continuously differentiable across $G$, satisfies $\tfrac{1}{2}\Delta v = -\tfrac{1}{2}$ in the strip $\set{-\epsilon < y < 0}$, is constant in the teeth, and vanishes on $H$. Hence, by It\^o's formula, for any $z \in \Omega_\epsilon$,
\begin{equation*}
\E^z \int_0^{\zeta_0} \one_{Q}(Z^\epsilon_s) \, ds
\leq \E^z \int_0^{\zeta_0} \one_{\set{Y^\epsilon_s \in [-\epsilon, 0]}} \, ds
= 2 v(z) \leq \epsilon^2 \,.
\end{equation*}
Substituting this in~\eqref{Qtime1} shows
\[
\E^z \int_0^{\tau'_0} \one_{Q}(Z^\epsilon_s) \, ds \leq \paren[\Big]{1 + \frac{2}{c_0} } \epsilon^2\,,
\]
completing the proof.
\end{proof}
Finally, for completeness we prove Lemma~\ref{lem:steering}.
The proof is a standard argument using the Girsanov theorem, and can for instance be found in~\cite{FreidlinWentzell12} (see Theorem 3.2.1, therein).
\begin{proof}[Proof of Lemma~\ref{lem:steering}]
Define $Y(t) = W(t) - \gamma(t)$. Let $B(t)$ be a standard Brownian motion in $\R^d$, independent of $W$, under the measure $\P$. Define a new measure $\bm{Q}$ by
\[
\frac{d \bm{Q}}{d\P} = e^{- \int_0^T \gamma'(s) \,dB(s) - \frac{1}{2} \int_0^T |\gamma'(s)|^2 \,ds}\,.
\]
Let $\tilde K$ be the event $\tilde K = \tilde K_{T,\delta} = \{ \sup_{t \in [0,T]} |B(t)| \leq \delta \}$, and recall $S(\gamma) = \int_0^T |\gamma'(s)|^2 \,ds$. By the Girsanov theorem,
\begin{align*}
\P( \sup_{ t \in [0,T]} |Y(t)| \leq \delta) & = \bm{Q}( \tilde K ) \nonumber \\
& = \E_{\P}\left[ \one_{\tilde K} e^{- \int_0^T \gamma'(s) \,dB(s) - \frac{1}{2} \int_0^T |\gamma'(s)|^2 \,ds} \right]\\
& = e^{- \frac{1}{2} S(\gamma)} \E_{\P}\left[ \one_{\tilde K} e^{- \int_0^T \gamma'(s) \,dB(s)} \right]
\end{align*}
Now, by Chebyshev's inequality and the It\^o isometry,
\[
\P\left( \int_0^T \gamma'(s) \,dB(s) \geq \alpha \sqrt{S(\gamma)} \right) \leq \frac{1}{\alpha^2}\,.
\]
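Here we used that $\int_0^T \gamma'(s)\,dB(s)$ has mean zero and, by the It\^o isometry, variance
\[
\E \paren[\Big]{ \int_0^T \gamma'(s) \,dB(s) }^2
= \int_0^T \abs{\gamma'(s)}^2 \,ds
= S(\gamma)\,,
\]
so the stated bound is Chebyshev's inequality.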
So, on the event $\tilde K \cap \set[\big]{ \int_0^T \gamma'(s)\,dB(s) < \alpha \sqrt{S(\gamma)} }$, which has probability at least $\P(\tilde K) - \alpha^{-2}$, the integrand in the last expectation is at least $e^{-\alpha \sqrt{S(\gamma)}}$. Hence, if $\frac{1}{\alpha^2} \leq \frac{1}{2} \P(\tilde K)$, we have
\[
\P \left( \sup_{ t \in [0,T]} |Y(t)| \leq \delta\right) \geq e^{- \frac{1}{2} S(\gamma) - \alpha \sqrt{S(\gamma)} } \frac{1}{2} \P(\tilde K)\,.
\]
In particular, by choosing $\alpha = \sqrt{2/\P(\tilde K)} > 0$, we have
\[
\P \left( \sup_{ t \in [0,T]} |Y(t)| \leq \delta\right) \geq e^{- \frac{1}{2} S(\gamma) - \sqrt{2S(\gamma)/\P(\tilde K)} } \frac{1}{2} \P(\tilde K)\,.
\]
Note that $\P(\tilde K) = \P(K)$ since $B$ and $W$ have the same law under $\P$.
\end{proof}
\subsection{Local Time on Teeth Boundaries (Lemma~\ref{l:FCLocalTimeT}).}\label{s:FCLocalTimeT}
The last remaining lemma to prove is Lemma~\ref{l:FCLocalTimeT}, which concerns the local time balance within the teeth. We again use the symmetry and geometric series arguments as in the proof of Proposition~\ref{p:uosc1}.
\begin{proof}[Proof of Lemma~\ref{l:FCLocalTimeT}]
As with \eqref{claim2G}, we will estimate
\begin{multline} \label{claim1xk}
I_k \stackrel{\Delta}{=} \E^z \Bigl( \int_0^t \frac{1}{2} \partial_x^2 f(Z^\epsilon_s) \one_{\{Y_s^\epsilon > 0\}} \one_{\{|X_s^\epsilon - \epsilon k| < \epsilon/2\}}\,ds
\\
+ \int_0^t \partial_x f(Z^\epsilon_s) \one_{\{|X_s^\epsilon - \epsilon k| < \epsilon/2\}} \, d L^{\pm}_s \Bigr)
\end{multline}
for any $z \in K \cap \Omega_\epsilon$.
As before, Lemma~\ref{l:FCLocalTimeT} will follow if we can show that for any finite $M$, $\sum_{\epsilon \abs{k} < M} I_k$ vanishes as $\epsilon \to 0$.
Since there are $O(1 / \epsilon)$ terms in the sum, it suffices to bound each $I_k$ by $o(\epsilon)$.
Without loss of generality, assume $k = 0$ and let $T_0 = [-\alpha\epsilon^2/2,\alpha\epsilon^2/2] \times [0,1]$ denote the tooth centered at $k = 0$.
Define the function~$\tilde f \colon T_0 \to \R$ by
\begin{equation*}
\tilde f(x,y) \stackrel{\Delta}{=} f(x,y) - f(0,y) - x \partial_x f(0,y) \,.
\end{equation*}
Note that for all $(x,y) \in T_0$ we have
\begin{equation*}
\tilde f(0,y) = 0\,,
\qquad
\partial_x \tilde f(0,y) = 0\,,
\qquad\text{and}\qquad \partial_x^2\tilde f(x,y) = \partial_x^2 f(x,y)\,,
\end{equation*}
and hence $\norm{\tilde f}_\infty = O(\epsilon^4)$.
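Indeed, by Taylor's theorem, for every $(x,y) \in T_0$ there exists $\xi$ between $0$ and $x$ such that
\[
\tilde f(x,y) = \frac{x^2}{2} \partial_x^2 f(\xi, y)\,,
\qquad\text{and hence}\qquad
\abs{\tilde f(x,y)} \leq \frac{\alpha^2 \epsilon^4}{8} \norm{\partial_x^2 f}_\infty\,,
\]
since $\abs{x} \leq \alpha\epsilon^2/2$ in $T_0$.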
Moreover,
\[
\partial_y^2\tilde f(x,y) = \partial_y^2 f(x,y) - \partial_y^2 f(0,y) - x \partial_x \partial_y^2 f(0,y) = O(\epsilon^4 ),
\]
assuming $\partial_y^2 f \in C^1$, and $\partial_y \tilde f(x,0) = O(\epsilon^4)$ for $x \in [-\alpha\epsilon^2/2,\alpha\epsilon^2/2]$.
We now extend the definition of $\tilde f$ continuously outside of $T_0$ (into the spine) to a $O(\epsilon^2)$ neighborhood of $G$ as follows.
Let $\eta(x,y)$ be a smooth, radially-symmetric cutoff function, vanishing outside of $B_{2}(0,0)$ and such that $\eta(z) = 1$ for $\abs{z} \leq 1$.
Then, for $y \leq 0$ (i.e.\ outside the tooth $T_0$), define
\begin{equation*}
\tilde f(x,y) \stackrel{\Delta}{=}
\eta\paren[\Big]{ \frac{x}{\alpha\epsilon^2}, \frac{y}{\alpha\epsilon^2}}
\paren[\Big]{ f(x,0) - f(0,0) - x \partial_x f(0,0) }\,.
\end{equation*}
In this way, $\tilde f$ has the additional properties that
\begin{enumerate}
\item $\tilde f$ vanishes outside of $T_0 \cup B_{2 \alpha\epsilon^2}(0,0)$,
\item $\partial_y \tilde f = 0$ on $(\partial Q) \setminus G$,
\item The jump in $\partial_y \tilde f$ across $G$ is $O(\epsilon^4)$,
\item $\Delta \tilde f= O(1)$ in the region $B^-_{2\alpha \epsilon^2} = \{y \leq 0\} \cap B_{2 \alpha\epsilon^2}(0,0)$.
\end{enumerate}
This last point stems from the fact that $|f(x,0) - f(0,0) - x \partial_x f(0,0) | = O(\epsilon^4)$. In view of this construction, we see that
\begin{align*}
I_0 = \E^z \Bigl( \int_0^t \frac{1}{2} (\partial_x^2\tilde f &+ \partial_y^2\tilde f) (Z^\epsilon_s) \one_{\{Z_s^\epsilon \in T_0 \}} \,ds - \int_0^t \partial_x \tilde f(Z^\epsilon_s) \one_{\{Z^\epsilon_s \in T_0\}}d L^{+}_s \Bigr) \\
&\qquad\qquad + \E^z \paren[\Big]{ \int_0^t \partial_x f(0,Y_s^\epsilon)d(L^-_s - L^+_s) } + O(\epsilon^2) t \\
& = R_1 + R_2 + O(\epsilon^2) t\,.
\end{align*}
Notice how we have introduced the $\partial_y^2\tilde f$ term for the price of $O(\epsilon^2)t$. We also still have $\partial_y \tilde f(x,1) = 0$ on the top boundary of the tooth. By It\^o's formula applied to $\tilde f$, we have
\begin{align*}
R_1 & = \E^z [\tilde f(Z^\epsilon_t) - \tilde f(Z^\epsilon_0)] + \E^z \paren[\Big]{ \int_0^t \partial_y \tilde f(X^\epsilon_s,0) \,dL^G_s } + \E^z \paren[\Big]{\int_0^t O(1)\one_{B^-_{2\alpha\epsilon^2}}(Z_s) \,ds} \\
& = O(\epsilon^4) + O(\epsilon^2) \E^z \paren[\Big]{ L^G_t } + O(1) \E^z \paren[\Big]{\int_0^t \one_{B^-_{2 \alpha\epsilon^2}}(Z_s) \,ds} \\
& = O(\epsilon^4) + O(\epsilon^2) + O(1) R_3,
\end{align*}
since $\E^z L^G_t = O(1)$ by~\eqref{e:ELG}.
We now estimate the term $R_2$.
By symmetry with respect to reflection in the $y$ coordinate, we note that
\[
\E^{z'} \paren[\Big]{ \int_0^t \partial_x f(0, Y_s^\epsilon)d(L^-_s - L^+_s) } = 0
\]
for any $z' = (0,y)$ on the axis of the tooth $T_0$.
Thus by symmetry and the Markov property, it suffices to estimate
\begin{equation*}
\E^z \paren[\Big]{ \int_0^\tau \partial_x f(0, Y_s^\epsilon) \, dL^+_s }\,,
\end{equation*}
where $\tau = \inf\set{t \st X^\epsilon_t = 0}$ is the first time that $Z^\epsilon_t$ reaches the axis $\{0\} \times \R$ of the tooth, and $z$ is to the right of this axis.
Clearly this is bounded by $\norm{\partial_x f}_\infty \E^z L^+_\tau$.
Moreover, using $x \varmin \alpha \epsilon^2 / 2$ as a test function, we immediately see $\E^z L^+_\tau \leq \alpha \epsilon^2 / 2$.
This shows $R_2 = O(\epsilon^2)$ as desired.
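The test function argument in the last step can be sketched as follows (here we take $g'$ to be the left derivative at the kink). The function $g(x) = x \varmin \alpha \epsilon^2 / 2$ is concave, with $g(0) = 0$ and $g' \in \{0,1\}$. Applying the It\^o--Tanaka formula to $g(X^\epsilon_t)$ up to time $\tau$, the martingale part has mean zero, the second order term is nonpositive by concavity of $g$, and the reflection on the wall $\{x = \alpha\epsilon^2/2\}$ contributes $-\E^z L^+_\tau$; the reflections on the walls of the other teeth do not contribute, since $g' = 0$ there. Since $g(X^\epsilon_\tau) = g(0) = 0$, this yields
\[
0 \leq g(X^\epsilon_0) - \E^z L^+_\tau\,,
\qquad\text{and so}\qquad
\E^z L^+_\tau \leq g(X^\epsilon_0) \leq \frac{\alpha \epsilon^2}{2}\,.
\]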
Finally, we estimate the term
\[
R_3 = \E^z \paren[\Big]{\int_0^t \one_{B^-_{2 \alpha\epsilon^2}}(Z_s) \,ds},
\]
where $B^-_{2 \alpha\epsilon^2} = \{y \leq 0\} \cap B_{2 \alpha\epsilon^2}(0,0)$.
The geometry of the domain $\Omega_\epsilon$ makes this estimate a little tedious.
Since the proof is very similar to the arguments used in the proof of Proposition~\ref{p:uosc1}, we do not spell out all the details here.
We will show that $R_3 \leq O(\epsilon^3 |\log(\epsilon)|)$.
For this, we first claim
\[
\sup_{z \in \Omega_\epsilon \cap K} \E^z \paren[\Big]{\int_0^{\tau_{4\alpha\epsilon^2}} \one_{B^-_{2\alpha \epsilon^2}}(Z_s) \,ds} \leq O(\epsilon^4)
\]
where $\tau_{4 \alpha\epsilon^2} = \inf \{ t \st Z^\epsilon_t \in D^-_{4 \alpha\epsilon^2} \}$, and $D^-_{4\alpha \epsilon^2} = \{ y \leq 0\} \cap \partial B_{4 \alpha\epsilon^2}(0,0)$.
This follows by directly applying It\^o's formula with a function $f$ satisfying $\Delta f \leq 0$ in $\{ y \leq 0\} \cap B_{4 \alpha\epsilon^2}(0,0)$, with $\Delta f \leq - c < 0$ in $B^-_{2\alpha \epsilon^2}$.
Next, we claim that there is $C > 0$ such that
\[
\inf_{z \in D^-_{4 \alpha\epsilon^2} } \P^z \paren[\Big]{ \sigma_{\epsilon/2} \leq \tau_{2\alpha \epsilon^2} } \geq \frac{C}{\abs{\log(\epsilon)}} \,,
\]
where $\sigma_{\epsilon/2} = \inf \{ t \st |X^\epsilon_t| = \epsilon/2 \}$ and $\tau_{2 \alpha\epsilon^2} = \inf \{ t \st Z^\epsilon_t \in B^-_{2 \alpha\epsilon^2} \}$.
This is the narrow escape asymptotics~\cite{HolcmanSchuss14}, and follows from a direct calculation with the Green's function in a manner similar to the proof of~\eqref{e:exitp1}.
Finally, we claim that for any $t > 0$, there is $C > 0$ such that
\[
\inf_{\{ |x| = \epsilon/2 \}} \P^z(\tau_{2 \alpha\epsilon^2} \geq t) \geq C \epsilon\,.
\]
This follows from comparison between $X^\epsilon_t$ and a standard Brownian motion on $\R$, via Lemma \ref{lem:BrownianComp}. Thus, starting from $z \in D^-_{4 \alpha\epsilon^2}$, with probability at least $C \epsilon/|\log(\epsilon)|$ the process $Z_t$ will make a long excursion such that it doesn't return to $B^-_{2\alpha \epsilon^2}$ before time $t$. Using the same geometric series argument as in the proof of Lemma~\ref{lem:ABtoG}, we have
\[
R_3 \leq C \frac{\abs{\log \epsilon}}{\epsilon} \sup_{z} \E^z \paren[\Big]{\int_0^{\tau_{4\alpha\epsilon^2}} \one_{B^-_{2\alpha \epsilon^2}}(Z_s) \,ds} = O(\epsilon^3 |\log \epsilon|) \,,
\]
as claimed.
Finally, combining all these estimates we conclude that for any $k$, $I_k$ (defined in~\eqref{claim1xk}) is at most $O(\epsilon^2)$.
Consequently $\sum_{\epsilon \abs{k} < M} I_k \to 0$ as $\epsilon \to 0$, concluding the proof.
\end{proof}
\subsection{Remarks About Other Scalings.}\label{sec:otherscaling}
Consider a comb-shaped domain with the general scaling described in Remark \ref{scale2}.
For clarity, let us suppose that
\[
w_S(\epsilon) = \epsilon^{\sigma} \,,
\qquad\text{and}\qquad
w_T(\epsilon) = \frac{\alpha \epsilon^{1 + \sigma}}{2}\,,
\]
for some~$\sigma > 0$. Theorem \ref{t:zlimfat}, which we have proved already, pertains to the case $\sigma = 1$. In the cases $\sigma <1$ and $\sigma > 1$, the same arguments may be applied, showing that the limit process is the same as with $\sigma = 1$.
Only a minor modification of Proposition~\ref{p:uosc1} and its supporting lemmas are required, and we sketch those modifications here.
Analogous to the previous definition \eqref{e:QG0}, we define the sets
\begin{equation}
Q = \brak[\big]{ -\frac{\epsilon}{2}, \frac{\epsilon}{2}} \times \brak[\big]{ -\epsilon^\sigma,0} \quad \quad \text{and}\quad \quad G_0 = \set[\Big]{ (x,0) \st - \alpha \frac{ \epsilon^{1 + \sigma}}{2} < x < \alpha \frac{ \epsilon^{1 + \sigma}}{2} }. \label{Qnew}
\end{equation}
Notice that $Q$ is no longer a square if $\sigma \neq 1$. In the case $\sigma > 1$, the bound $0 \leq u^\epsilon \leq C \epsilon^{2} |\ln \epsilon|$ in Proposition \ref{p:uosc1} remains unchanged. The proofs of Lemmas~\ref{lem:rholower}, \ref{lem:ABtoG}, and~\ref{lem:GtoAB} extend in a straightforward way. In particular, the lower bound in Lemma \ref{lem:rholower} becomes $\rho(z,r) \geq \delta / (\alpha \epsilon^{1 + \sigma})$. In the proof of \eqref{e:exitp0} within Lemma \ref{lem:ABtoG}, the balls $B(0,\epsilon^\sigma/4)$ and $B(0,\alpha \epsilon^{1 + \sigma})$ fill the roles of $B(0,\epsilon/4)$ and $B(0,\alpha \epsilon^2)$ in the previous proof.
In the case $\sigma \in (0, 1)$, the bound on $u^\epsilon$ in Proposition \ref{p:uosc1} becomes $0 \leq u^\epsilon \leq C \epsilon^{1 +\sigma} |\ln \epsilon|$. Nevertheless, this bound is still $o(\epsilon)$, so that the rest of the argument for the proof of Lemma~\ref{l:FCgenerator} proceeds as before. To prove this modification of Proposition \ref{p:uosc1}, we can modify Lemma \ref{lem:rholower}, Lemma \ref{lem:ABtoG}, and Lemma \ref{lem:GtoAB}, as follows. First, $A'$ and $D'$ are defined to be the sets
\[
A' \stackrel{\Delta}{=} \brak[\Big]{ - \alpha \frac{\epsilon^{1 + \sigma}}{2} , \alpha \frac{\epsilon^{1 + \sigma}}{2} } \times \{ \alpha \epsilon^{1 + \sigma} \} \quad \quad \text{and} \quad \quad D' \stackrel{\Delta}{=} \{ \pm \epsilon^{\sigma} \} \times [-\epsilon^\sigma,0].
\]
With these definitions, the lower bound of Lemma \ref{lem:rholower} becomes $\rho(z,r) \geq \frac{\delta}{\alpha \epsilon^{1 + \sigma}}$. In Lemma \ref{lem:ABtoG}, the analogous bound becomes $O(\epsilon^{1 + \sigma} \abs{\ln \epsilon})$. Here, the logarithmic factor arises in the same way as before. The $\epsilon^{1 + \sigma}$ factor comes from the fact that for a Brownian motion on ${\mathbb R}$, the expected time spent in $[-\epsilon,\epsilon]$ before hitting $\pm \epsilon^\sigma$ is $O(\epsilon^{1 + \sigma})$. Similarly, the bound in Lemma \ref{lem:GtoAB} is $O(\epsilon^{1 + \sigma})$. Together these imply the $O(\epsilon^{1 +\sigma} |\ln \epsilon|)$ upper bound in Proposition \ref{p:uosc1}.
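The one dimensional occupation time estimate used above can be sketched with the Green's function $G$ of $\tfrac{1}{2}\partial_x^2$ on $(-\epsilon^\sigma, \epsilon^\sigma)$ with Dirichlet boundary conditions, which satisfies $0 \leq G \leq \epsilon^\sigma$. For a Brownian motion started at $x_0 \in (-\epsilon^\sigma, \epsilon^\sigma)$, the expected time spent in $[-\epsilon, \epsilon]$ before hitting $\pm \epsilon^\sigma$ is
\[
\int_{-\epsilon}^{\epsilon} G(x_0, y) \,dy
\leq 2 \epsilon \sup_{y} G(x_0, y)
\leq 2 \epsilon^{1+\sigma}\,.
\]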
\section{Comb-Shaped Graphs (Theorem~\ref{t:zlim}).}\label{s:thincomb}
\subsection{An SDE Description of \texorpdfstring{$Z^\epsilon$}{Z-epsilon}.}
We begin by constructing the graph diffusion $Z^\epsilon$ on the comb $\mathcal C_\epsilon$.
Following the approach of Freidlin and Sheu~\cite{FreidlinSheu00}, let $\mathcal{L}^\varepsilon$ be the linear operator defined by
\begin{equation}\label{e:Lep}
\mathcal{L}^\varepsilon f =
\begin{dcases}
\frac{1}{2} \partial_y^2f & \text{ if } (x,y) \in \varepsilon\Z \times(0,1) \,,\\
\frac{1}{2} \partial_x^2f & \text{ if } (x,y) \in \R \times \{0\} \,.
\end{dcases}
\end{equation}
Let the domain, denoted by $\mathcal{D}(\mathcal{L}^\varepsilon)$, be the set of all functions
\begin{equation*}
f\in C_0(\mathcal C_\varepsilon) \cap C^2_b(\mathcal C_\varepsilon - J_\epsilon)
\end{equation*}
such that $\mathcal{L}^\varepsilon f \in C_0(\mathcal C_\varepsilon)$ and
\begin{subequations}
\begin{alignat}{2}
\label{e:flux1}
\span \alpha \varepsilon \partial_y f(x,0)
+ \partial_x^+f(x,0) -\partial_x^-f(x,0) = 0
&\quad&\text{for } x \in \varepsilon\Z \,,
\\
\label{e:flux2}
\span\partial_y f(x,1) = 0
&&\text{for } x \in \varepsilon\Z \,.
\end{alignat}
\end{subequations}
The general theory in~\cite[\S4.1--4.2]{EthierKurtz86} (see also~\cite[Theorem~3.1]{FreidlinWentzell93}) can be used to show the existence of a continuous Feller Markov process $Z^\epsilon = (X^\epsilon, Y^\epsilon)$ with generator $\mathcal{L}^\epsilon$.
In the teeth, and in between the nodes, it is clear that $Z^\epsilon$ is simply a Brownian motion.
The flux conditions~\eqref{e:flux1}--\eqref{e:flux2} introduce local time terms at junction points and ends of the teeth.
This can be stated precisely in terms of an It\^o formula as in the following Lemma.
\begin{lemma}\label{l:graphIto}
Let $F$ be the set of all functions $f \in C(\mathcal C_\epsilon)$ such that $f$ is smooth on $\mathcal C_\epsilon - J_\epsilon$ and all one sided derivatives exist at the junction points $J_\epsilon$.
There is a Brownian motion~$W$ such that for any $f \in F$ we have
\begin{align*}
df(Z^\epsilon_t)
&= \one_{\set{Y^\epsilon_t = 0}} \partial_x f(Z^\epsilon_t) \, dW_t
+ \frac{1}{2} \one_{\set{Y^\epsilon_t = 0}} \partial_x^2 f(Z^\epsilon_t) \, dt
\\
&\quad
+ \one_{\set{Y^\epsilon_t > 0}} \partial_y f(Z^\epsilon_t) \, dW_t
+ \frac{1}{2} \one_{\set{Y^\epsilon_t > 0}} \partial_y^2 f(Z^\epsilon_t) \, dt
\\
&\quad
+ \frac{1}{2 + \alpha \epsilon}
\paren[\Big]{
\partial_x^+ f(Z^\epsilon_t)
- \partial_x^- f(Z^\epsilon_t)
+ \alpha \epsilon \partial_y f(Z^\epsilon_t)
} \, d\ell_t\,.
\end{align*}
Here $\ell$, defined by
\begin{equation}\label{e:ellDef}
\ell_t = L_t^{Z^\epsilon}(J_\epsilon)
\end{equation}
is the local time of the joint process $Z^\epsilon_t = (X^\epsilon_t, Y^\epsilon_t)$ about the junction points $\epsilon \Z \times \set{0}$.
\end{lemma}
\begin{remark}
The coefficients of each of~$\partial_x^-$, $\partial_x^+$ and $\partial_y$ in the local time term above can heuristically be interpreted as the chances that~$Z^\epsilon$ continues to the left, continues to the right, or enters the tooth, respectively.
\end{remark}
\begin{proof}
We refer the reader to Section 2 (and specifically Lemma 2.3) in Freidlin and Sheu~\cite{FreidlinSheu00} where stochastic calculus for graph diffusions is developed in a general setting.
\end{proof}
Notice that choosing~$f(x,y) =x$ and~$f(x,y) = y$ in Lemma~\ref{l:graphIto} yields the following SDEs:
\begin{subequations}
\begin{align}
\label{e:sdeXep}
dX^\varepsilon_t &= \one_{\{Y^\varepsilon_t = 0\}} \, dW_t\,,
\\
\label{e:sdeYep}
dY^\varepsilon_t &= \one_{\{Y^\varepsilon_t > 0\}} \, dW_t
+ \frac{\alpha\epsilon}{2+ \alpha \epsilon}\, d\ell_t - dL^{Y^\epsilon}_t(1)\,.
\end{align}
\end{subequations}
Here $L^{Y^\epsilon}_t(1)$ denotes the local time of the process $Y^\epsilon$ at~$1$. Note that~\eqref{e:sdeXep} and~\eqref{e:sdeYep} are coupled through the local time term $d\ell$, which is the local time of the joint process $Z^\epsilon = (X^\epsilon, Y^\epsilon)$ at the junction points~$J_\epsilon$.
We claim that with the additional assumption that the process spends~$0$ time in junctions, weak uniqueness holds for~\eqref{e:sdeXep}--\eqref{e:sdeYep}, and thus this system can in fact be used to characterize the process $Z^\epsilon$.
Since this will not be used in this paper, we refer the reader to Engelbert and Peskir~\cite{EngelbertPeskir14} for the proof of similar results.
\subsection{Proof of Convergence (Theorem \ref{t:zlim}).}\label{sec:thinconv}
We now prove~Theorem~\ref{t:zlim}.
As with the proof of Theorem~\ref{t:zlimfat}, we need to prove tightness and a ``generator estimate''.
We state the results we require as the following two lemmas.
\begin{lemma}\label{l:tightness}
Let $Z^\epsilon = (X^\epsilon, Y^\epsilon)$ be the process on the comb-shaped graph~$\mathcal C_\epsilon$, as defined above. Then for any $T > 0$, the family of processes $Z^\epsilon$ is tight on $C([0,T]; \R^2)$.
\end{lemma}
\begin{lemma}\label{l:freidlin}
Let $A$ be the generator \eqref{Adef}. If $f \in \mathcal D(A)$, and $K \subseteq \Omega_0$ is compact as a subset of ${\mathbb R}^2$, then
\begin{equation*}
\lim_{\epsilon\to 0} \sup_{z \in K \cap \mathcal C_\epsilon} \E^{z} \paren[\Big]{
f(Z^\epsilon_t) - f(Z^\epsilon_0) - \int_0^t Af(Z^\epsilon_s) \, ds
}
= 0 \,.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{t:zlim}]
Using Lemmas~\ref{l:tightness} and~\ref{l:freidlin} as replacements for Lemmas~\ref{l:FCtightness} and~\ref{l:FCgenerator} respectively, the proof of Theorem~\ref{t:zlim} is identical to that of Theorem~\ref{t:zlimfat}.
\end{proof}
The remainder of this section is devoted to proving Lemmas~\ref{l:tightness} and~\ref{l:freidlin}.
\begin{proof}[Proof of Lemma~\ref{l:tightness}]
We write both $X^\epsilon$ and $Y^\epsilon$ as time-changed Brownian motions as follows. Let $S(t) = \int_0^t \one_{\set{Y^\epsilon_s = 0}} \,ds$. Then letting $S^{-1}(t)$ be the right-continuous inverse, by the Dambis-Dubins-Schwartz time change theorem (see for instance~\cite[Section 3.4.B]{KaratzasShreve91}), $\bar W_t = X^\epsilon_{S^{-1}(t)}$ is a Brownian motion and $X^\epsilon_t = \bar W_{S(t)}$.
Similarly we can time change $Y^\epsilon$ using $R(t) = \int_0^t \one_{\set{Y_s^\epsilon > 0}} \,ds$. Equation~\eqref{e:sdeYep} tells us that $\bar B_t = Y^\epsilon_{R^{-1}(t)}$ satisfies
\begin{equation*}
d\bar B_t = d \tilde B_t + dL^{\bar B}_t(0) - dL^{\bar B}_t(1) \,,
\end{equation*}
where $\tilde B_t$ is a Brownian motion and hence $\bar B_t$ is a doubly-reflected Brownian motion on $[0,1]$ such that $Y^\epsilon_t = \bar B_{R(t)}$.
Since $S(t) - S(s) \leq t - s$ and $R(t) - R(s) \leq t - s$ holds with probability one, the moduli of continuity of $X^\epsilon$ and $Y^\epsilon$ over $[0,T]$ are no more than those of $\bar W$ and $\bar B$ over $[0,T]$, respectively.
This implies tightness.
\end{proof}
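The time-change bookkeeping in the proof above can be sketched on a grid. This is only a discrete illustration of the Dambis--Dubins--Schwartz argument; the function names are ours.

```python
def running_clock(indicators, dt):
    """Discrete analogue of S(t) = int_0^t 1_{Y_s = 0} ds on a time
    grid of spacing dt; `indicators` lists the indicator per step."""
    out = [0.0]
    for ind in indicators:
        out.append(out[-1] + dt * ind)
    return out

def right_continuous_inverse(clock, level):
    """Discrete analogue of S^{-1}(level) = inf{t : S(t) > level}:
    returns the first grid index at which the clock exceeds `level`."""
    for i, value in enumerate(clock):
        if value > level:
            return i
    return len(clock)
```

For the indicator pattern $1,1,0,0,1$ with $dt = 1$ the clock reads $0,1,2,2,2,3$; the inverse jumps across the flat stretch, exactly as $S^{-1}$ skips the time the joint process spends in the teeth. Since the clock grows at most at unit speed, the time-changed path inherits the modulus of continuity of the underlying path, which is the tightness mechanism used above.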
\begin{proof}[Proof of Lemma~\ref{l:freidlin}]
We claim for any $k \in \N$ we have
\begin{equation*}
L^{Z^\epsilon}_t(\epsilon k, 0) =
L^{X^\epsilon}_t(\epsilon k,0) + L^{Y^\epsilon}_t(\epsilon k, 0) \,,
\quad\text{and}\quad
L^{Y^\epsilon}_t(\epsilon k, 0) = \frac{\alpha\epsilon}{2} L^{X^\epsilon}_t(\epsilon k, 0) \,.
\end{equation*}
The first equality is immediate from the definition, and the second equality is proved in~\cite{FreidlinSheu00}.
(The second equality can also be deduced from the independent excursion construction in Section~\ref{s:excursion}, below.)
Consequently
\begin{equation}\label{e:LtZ}
L^{Z^\epsilon}_t(\epsilon k, 0)
= \frac{2 + \alpha\epsilon}{2} L^{X^\epsilon}_t(\epsilon k, 0)
= \frac{2 + \alpha\epsilon}{\alpha\epsilon} L^{Y^\epsilon}_t(\epsilon k, 0) \,.
\end{equation}
For any $f\in \mathcal{D}(A)$, Lemma~\ref{l:graphIto} gives
\begin{multline}\label{e:fZt1}
f(Z^\epsilon_t) - f(Z^\epsilon_0) = \int_0^t \partial_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} \, dY^\epsilon_s
+ \int_0^t \partial_xf(Z^\epsilon_s)\one_{\{Y_s^\epsilon = 0\}} \, dX^\epsilon_s \\+ \int_0^t \frac{1}{2}\partial^2_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} + \frac{1}{2}\partial^2_xf(Z^\epsilon_s)\one_{\{Y_s^\epsilon = 0\}}\, ds \\
+ \sum_{k \in \Z} \paren[\Big]{
\frac{\alpha\epsilon}{2 + \alpha\epsilon}\partial_yf(\epsilon k,0)
+ \frac{1}{2 + \alpha\epsilon} \paren[\big]{
\partial_x^+f(\epsilon k, 0) - \partial_x^-f(\epsilon k, 0)
}}
L^{Z^\epsilon}_t(\epsilon k, 0)\,.
\end{multline}
The first integral on the right of equation~\eqref{e:fZt1} can be rewritten as
\begin{equation*}
\begin{split}
\int_0^t \partial_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} \, dY^\epsilon_s
&=
\int_0^t \partial_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} \, dW_s
- \int_0^t \partial_yf(X^\epsilon_s,1) \, dL^{Y^\epsilon}_s(1)
\\
&= \int_0^t\partial_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} \, dW_s \,.
\end{split}
\end{equation*}
Here we used the fact that $\partial_y f(x, 1) = 0$ for any $f \in \mathcal{D}(A)$.
Returning to~\eqref{e:fZt1}, we note that $f\in C^2(\R \times\{0\})$ implies $\partial_x^+f(\epsilon k, 0)=\partial_x^-f(\epsilon k, 0)$.
Thus for $(x,y) \in K\cap \mathcal C_\epsilon $, taking expectations on both sides and using~\eqref{e:LtZ} gives
\begin{align*}
\MoveEqLeft
\E^{(x,y)} \paren[\Big]{ f(Z^\epsilon_t) - f(Z^\epsilon_0) -\int_0^t Af(Z^\epsilon_s) \, ds }\\
&=\frac{1}{2} \E^{(x,y)} \Bigl( \int_0^t \partial^2_yf(Z^\epsilon_s)\one_{\{Y_s^\epsilon >0\}} + \partial^2_xf(Z^\epsilon_s)\one_{\{Y_s^\epsilon = 0\}} - \partial^2_yf(Z^\epsilon_s)\, ds\\
&\qquad\qquad+ \alpha \epsilon \sum_{k \in \Z} \partial_yf(\epsilon k,0) L^{X^\epsilon}_t(\epsilon k, 0) \Bigr)\\
&=\frac{\alpha}{2} \E^{(x,y)} \paren[\Big]{-\int_0^t \partial_yf(X_s^\epsilon,0)\one_{\{Y^\epsilon_s = 0\}} \, ds + \epsilon \sum_{k \in \Z} \partial_yf(\epsilon k,0) L^{X^\epsilon}_t(\epsilon k, 0)}
\\
&= I + \mathit{II}\,,
\end{align*}
where
\begin{gather*}
I \stackrel{\Delta}{=} \frac{\alpha}{2} \sum_{k \in \Z}\E^{(x,y)}
\int_0^t\paren[\big]{
\partial_yf(\epsilon k, 0)
- \partial_yf(X_s^\epsilon, 0)
}
\one_{\{Y_s^\epsilon = 0,\; \abs{X^\epsilon_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds \,,
\\
\mathit{II} \stackrel{\Delta}{=} \frac{\alpha}{2} \sum_{k \in \Z}
\partial_yf(\epsilon k, 0)
\E^{(x,y)}\paren[\Big]{
\epsilon L^{X^\epsilon}_t(\epsilon k, 0)
- \int_0^t\one_{\{Y_s^\epsilon = 0,\;\abs{X^\epsilon_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds
}\,.
\end{gather*}
Note that there exists a Brownian motion $W$ such that $X^\epsilon_t = W_{S(t)}$
where $S(t)$, defined by
\begin{equation*}
S(t)
\stackrel{\Delta}{=} \int_0^t \one_{\set{Y^\epsilon(s) = 0}} \, ds\,,
\end{equation*}
is the amount of time the joint process spends on the spine of the comb up to time~$t$.
To estimate $I$, for any $\delta > 0$ we choose sufficiently large compact set $C \subset \mathbb{R}$ such that
\begin{equation*}
\sup_{(x,y) \in K}\E^x\paren[\Big]{\int_0^t\one_{\set{W_s \notin C}} \, ds} < \frac{\delta}{\norm{\partial_y f}_\infty} \, .
\end{equation*}
Then since $S(s) \leq s$, it follows that
\begin{equation*}
\P^x(X^\epsilon_s \notin C) \leq \P^x(W_s \notin C)
\end{equation*}
and so the above estimate can be applied for $X^\epsilon$ independent of $\epsilon$.
Uniform continuity of $\partial_y f$ on $C$, together with the above estimate, then shows that $I \to 0$ as $\epsilon \to 0$, uniformly over $(x,y) \in K$.
In order to estimate $\mathit{II}$, we again use the above representation to see
\begin{multline}\label{e:II2}
\E^{(x,y)}\abs[\Big]{
\epsilon L^{X^\epsilon}_t(\epsilon k, 0) - \int_0^t\one_{\{Y_s^\epsilon = 0,\;\abs{X^\epsilon_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds
}
\\
=\E^{x} \abs[\Big]{
\epsilon L^{W}_{S(t)}(\epsilon k) - \int_0^{S(t)}\one_{\{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds
}
\,.
\end{multline}
Thus to show $\mathit{II} \to 0$, it suffices to estimate the right hand side of~\eqref{e:II2} as $\epsilon \to 0$. Also, by shifting the indices of the sum to compensate, we can assume that $x = 0$.
To this end, let $f_\epsilon$ be defined by
\begin{equation*}
f_\epsilon(x) \stackrel{\Delta}{=}
\begin{dcases}
\epsilon(\epsilon k - x) -\frac{\epsilon^2}{4} & \text{ if } x < \epsilon k - \frac{\epsilon}{2} \;,\\
(x - \epsilon k)^2 & \text{ if } \epsilon k - \frac{\epsilon}{2}\leq x \leq \epsilon k + \frac{\epsilon}{2} \;,\\
\epsilon(x -\epsilon k ) -\frac{\epsilon^2}{4} & \text{ if } x > \epsilon k + \frac{\epsilon}{2} \;.
\end{dcases}
\end{equation*}
By It\^o's formula we have
\begin{multline*}
f_\epsilon(W_t) - \epsilon\abs{W_t - \epsilon k} - (f_\epsilon(W_0) - \epsilon\abs{W_0 - \epsilon k}) \\
= \int_0^t (f_\epsilon'(W_s) -\epsilon\, \text{sign}(W_s - \epsilon k)) \, dW_s + \int_0^t\one_{\{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds - \epsilon L^{W}_t(\epsilon k) \;.
\end{multline*}
Using the It\^o isometry and the inequalities
\begin{gather*}
\abs[\big]{f_\epsilon(x) - \epsilon \abs{x - \epsilon k}}
\leq \frac{\epsilon^2}{4}\,,
\\
\abs{f'_\epsilon(x) - \epsilon\, \text{sign}(x - \epsilon k)} \leq \epsilon \one_{[\epsilon k -\frac{\epsilon}{2},\epsilon k + \frac{\epsilon}{2}]} \,,
\end{gather*}
we obtain
\begin{multline*}
\E^{0}\abs[\Big]{\epsilon L^{W}_t(\epsilon k) - \int_0^t\one_{\{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}\}}\, ds }
\leq \frac{\epsilon^2}{4} + \epsilon\left(\E^0 \int_0^t\one_{\{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}\}} \,ds\right)^{\frac{1}{2}}
\\
\leq c(t)\epsilon^{\frac{3}{2}} \,,
\end{multline*}
since
\begin{equation*}
\E^0 \int_0^t\one_{\{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}\}} \,ds = \int_0^t \P^0 \paren[\Big]{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}} \, ds \leq c\int_0^t\frac{\epsilon}{\sqrt{s}} \, ds = 2c \epsilon \sqrt{t}\,.
\end{equation*}
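The constant $c$ here can be made explicit: the estimate is just the trivial bound on the Gaussian density,

```latex
\P^0 \paren[\Big]{\abs{W_s - \epsilon k} < \frac{\epsilon}{2}}
= \int_{\epsilon k - \epsilon/2}^{\epsilon k + \epsilon/2}
  \frac{1}{\sqrt{2\pi s}} \, e^{-x^2/(2s)} \, dx
\leq \frac{\epsilon}{\sqrt{2\pi s}} \,,
```

so one may take $c = 1/\sqrt{2\pi}$.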
We break up the sum in $\mathit{II}$ and estimate as follows,
\begin{equation*}
\mathit{II} \leq \norm{\partial_y f}_\infty\paren[\Big]{\sum_{\abs{k} > N / \epsilon}\E^0[\epsilon L_t^{X^\epsilon}(\epsilon k,0)] + \int_0^t \P^0\paren[\big]{\abs{X^\epsilon_s} > N - \frac{\epsilon}{2}} \,ds + \frac{2 N }{\epsilon}c(t)\epsilon^{\frac{3}{2}}} \,.
\end{equation*}
We can again use that $X^\epsilon$ has the same distribution as a Brownian motion with a time change $S(t) \leq t$ to replace $X^\epsilon$ with $W$, i.e.
\begin{equation*}
\mathit{II} \leq \norm{\partial_y f}_\infty\paren[\Big]{\sum_{\abs{k} > N / \epsilon}\E^0[\epsilon L_t^{W}(\epsilon k)] + \int_0^t \P^0\paren[\big]{\abs{W_s} > N - \frac{\epsilon}{2}} \,ds + Nc(t)\epsilon^{\frac{1}{2}}} \,.
\end{equation*}
Choosing $N$ sufficiently large and then sending $\epsilon \to 0$ shows that $\mathit{II}\to 0$.
This completes the proof.
\end{proof}
\section{Excursion Description on the Comb Graph.}\label{s:excursion}
In this section we describe how the diffusion~$Z^\epsilon$ on the comb-shaped graph~$\mathcal C_\epsilon$ (defined in Section~\ref{s:ithincomb}) can be constructed from the point of view of It\^o's excursion theory (cf.~\cite{Ito72, PitmanYor07}).
We identify the components of~$Z^\epsilon$ as trapped Brownian motions in the framework of Ben Arous \etal~\cite{BenArousCabezasEA15}, and use this to provide an alternate description of the limiting behavior as~$\epsilon \to 0$.
\subsection{The Excursion Decomposition of~\texorpdfstring{$Z^\epsilon$}{Z-epsilon}.}
The trajectories of $Z^\epsilon$ can be decomposed as a sequence of excursions where each excursion starts and ends at the junction points $J_\epsilon = \epsilon \Z \times \{0\}$, and travels entirely in the teeth, or entirely in the spine.
The excursions into the teeth of the comb (excursions of $Y^\epsilon$ into $(0,1]$ while $X^\epsilon \in \epsilon \Z$) should be those of a reflected Brownian motion on $[0,1]$.
The excursions into the spine (excursions of $X^\epsilon$ into ${\mathbb R} \setminus \epsilon \Z$ with $Y^\epsilon = 0$) should be those of a standard Brownian motion on ${\mathbb R}$ between the points $\epsilon \Z$.
Thus one expects that by starting with a standard Brownian motion $\bar X$ on ${\mathbb R}$ and an independent reflected Brownian motion $\bar Y$ on $[0,1]$, we can glue excursions of $\bar X$ and $\bar Y$ appropriately and obtain the diffusion $Z^\epsilon$ on the comb-shaped graph~$\mathcal C_\epsilon$.
We describe this precisely as follows.
Let $\bar X$ be a standard Brownian motion on ${\mathbb R}$ and let $L^{\bar X}_t(x)$ denote its local time at~$x \in {\mathbb R}$.
Let $L_t^{\bar X}(\epsilon \Z)$, defined by
\begin{equation*}
L^{\bar X}_t(\epsilon \Z) \stackrel{\Delta}{=} \sum_{k \in \Z} L^{\bar X}_t(\epsilon k) =
\lim_{\delta \to 0} \frac{1}{2 \delta} \int_0^t \sum_{k \in \Z} \one_{(\epsilon k-\delta, \epsilon k+\delta)}(\bar X_s) \,ds \,,
\end{equation*}
denote the local time of $\bar X$ at the junction points $\epsilon \Z$.
Let $\tau^{\bar X, \epsilon}$ be the right-continuous inverse of $L^{\bar X}_t(\epsilon \Z)$ defined by
\begin{equation*}
\tau^{\bar X,\epsilon}(\ell) = \inf \set[\big]{ t > 0 \st L^{\bar X}_t(\epsilon \Z) > \ell }, \quad \ell \geq 0.
\end{equation*}
Notice that the functions $t \mapsto L^{\bar X}_t(\epsilon \Z)$ and $\ell \mapsto \tau^{\bar X, \epsilon}(\ell)$ are both non-decreasing.
Let $\bar Y$ be a reflected Brownian motion on $[0,1]$ which is independent of $\bar X$. As above, let $L^{\bar Y}(0)$ be the local time of $\bar Y$ about $0$, and let $\tau^{\bar Y}$, defined by
\[
\tau^{\bar Y}(\ell) = \inf \set[\big]{ t > 0 \st L^{\bar Y}_t(0) > \ell }\,,
\]
be its right-continuous inverse.
Given $\alpha \in (0,1)$, we define the random time-changes $\psi^{\bar X, \epsilon}$ and $\psi^{\bar Y, \epsilon}$ by
\begin{equation}\label{e:psiXdef}
\psi^{\bar X,\epsilon}(t)
= \inf \set[\big]{
s > 0 \st
s + \tau^{\bar Y}\paren[\Big]{
\frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon \Z)
}
> t
}\,,
\end{equation}
and
\begin{equation}\label{e:epsiYdef}
\psi^{\bar Y, \epsilon}(t)
= \inf \set[\big]{
s > 0 \st
s + \tau^{\bar X,\epsilon}\paren[\Big]{
\frac{2}{\alpha \epsilon} L^{\bar Y}_s(0)
}
> t
} \,.
\end{equation}
Note that both $\psi^{\bar X, \epsilon}$ and~$\psi^{\bar Y, \epsilon}$ are continuous and non-decreasing functions of time.
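The infima in~\eqref{e:psiXdef} and~\eqref{e:epsiYdef} are straightforward to evaluate numerically. The following sketch (our own illustrative code) computes $\psi(t) = \inf\set{s > 0 \st s + g(s) > t}$ on a grid for a nondecreasing clock $g$; the single-jump clock used below is a stand-in for the subordinator term, not the actual $\tau^{\bar Y}$.

```python
def time_change(t, g, ds=1e-3, s_max=100.0):
    """Grid approximation of psi(t) = inf{ s > 0 : s + g(s) > t }
    for a nondecreasing clock g."""
    n = int(s_max / ds)
    for i in range(n + 1):
        s = i * ds
        if s + g(s) > t:
            return s
    raise ValueError("increase s_max")

# A clock with a single unit jump once s reaches 1: one excursion of
# length 1 is grafted in at that moment.
g = lambda s: 1.0 if s >= 1.0 else 0.0
```

Here $\psi(t) \approx t$ before the jump, sits frozen at $1$ while the grafted excursion runs, and equals $t-1$ afterwards: the time change inserts the teeth excursions into the spine clock.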
\begin{proposition}\label{p:Ztimechange}
The time-changed process $Z^\epsilon$ defined by
\[
Z^\epsilon(t) \stackrel{\Delta}{=} \paren[\big]{ \bar X(\psi^{\bar X,\epsilon}(t)), \bar Y( \psi^{\bar Y,\epsilon}(t)) }
\]
is the same as the process~$Z^\epsilon$ in Theorem~\ref{t:zlim}.
Namely it is a Markov process with generator $\mathcal{L}^\epsilon$ (defined in equation~\eqref{e:Lep}), and is a weak solution of the system~\eqref{e:sdeXep}--\eqref{e:sdeYep}.
\end{proposition}
This gives an alternate and natural representation of $Z^\epsilon = (X^\epsilon,Y^\epsilon)$. One can view this time-change representation as the pre-limit analogue of the representation~\eqref{e:limitdef1} for the limit system~\eqref{e:sdeX}--\eqref{e:localtime}.
For clarity of presentation, we postpone the proof of Proposition~\ref{p:Ztimechange} to Section~\ref{s:Ztimechange}.
\begin{remark}
For simplicity, throughout this section we assume the initial distribution of~$Z^\epsilon$ is~$\delta_{(0,0)}$, and denote expectations using the symbol $\E$ without any superscript.
The main results here (in particular Theorem~\ref{t:XYconv}, below) can directly be adapted to the situation for more general initial distributions as in Theorem~\ref{t:zlim}.
\end{remark}
\subsection{Description as a Trapped Brownian Motion.}\label{s:tbm}
We now show how this representation can be explained in the framework of trapped Brownian motions as defined by Ben Arous \etal~\cite{BenArousCabezasEA15} (see Definition 4.11 therein). Recall that a trapped Brownian motion, denoted by $B[\mu]$, is a process of the form $B(\psi(t))$ where $B(t)$ is a standard Brownian motion and the time-change $\psi$ has the form
\[
\psi(t) = \inf \set[\big]{ s > 0 \st \phi[\mu,B]_s > t }\,,
\]
where
\[
\phi[\mu,B]_s = \mu \left(\{ (x,\ell) \in {\mathbb R} \times [0,\infty) \;|\; L^B(x,s) \geq \ell \} \right) \,,
\]
and $\mu$ is a (random) measure on ${\mathbb R} \times [0,\infty)$ called the trap measure.
For example, when $\mu$ is the Lebesgue measure on ${\mathbb R} \times [0,\infty)$, then $\phi[\mu,B]_t = t$, and $\psi(t) = t$.
Alternately, if $\mu$ has an atom at $(x,\ell)$ of mass $r > 0$, then $B(\psi(t))$ is trapped at $x$ for a time $r$ at the moment its local time at $x$ exceeds $\ell$.
To use this framework in our scenario, we need to identify a trap measure under which $X^\epsilon$ is a trapped Brownian motion.
We do this as follows.
First note that the process $\tau^{\bar Y}_\ell$, appearing in the time change~\eqref{e:psiXdef}, is a L\'evy subordinator.
Thus,
there exists a function $\eta^{\bar Y}(s):(0,\infty) \to (0,\infty)$, and a Poisson random measure $N^{\bar Y}$ on $[0,\infty) \times [0,\infty)$ with intensity measure $d\ell \times \eta^{\bar Y}(s) \, ds$,
such that
\begin{align}\label{tauPoisson}
\tau^{\bar Y}_\ell = \int_{[0,\ell]} \int_{[0,\infty)} s \, N^{\bar Y} (d\ell' \times ds)\,.
\end{align}
In the definition of $\psi^{\bar X,\epsilon}(t)$ above, we have
\[
\tau^{\bar Y}\paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_s( \epsilon \Z) } = \tau^{\bar Y}\paren[\Big]{ \sum_{k \in \Z} \frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon k) }.
\]
Because $\tau_\ell^{\bar Y}$ has stationary, independent increments, this is equal in law to
\begin{align*}
\tau^{\bar Y}\paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_s( \epsilon \Z) } \stackrel{d}{=} \sum_{k \in \Z} \tau^{\bar Y_k}\paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon k) },
\end{align*}
where $\{ \bar Y_k\}_{k \in \Z}$ is a family of independent reflected Brownian motions on $[0,1]$. That is, the time change $\psi^{\bar X,\epsilon}(t)$ has the same law as
\begin{align}\label{psihatdef}
\tilde \psi^{\bar X,\epsilon}(t) = \inf \set[\big]{ s > 0 \st s + \sum_{k \in \Z} \tau^{\bar Y_k}\paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon k) } > t }\,.
\end{align}
Each of the processes $\tau^{\bar Y_k}$ can be represented as in~\eqref{tauPoisson} with independent Poisson random measures $N^{\bar Y_k}$:
\begin{align}
\tau^{\bar Y_k}_\ell = \int_{[0,\ell]} \int_{[0,\infty)} s \, N^{\bar Y_k} (d\ell' \times ds). \label{tauPoisson2}
\end{align}
Since each of the random measures $N^{\bar Y_k}$ is atomic, we may define $\{(\ell_{j,k}, s_{j,k})\}_{j=1}^\infty$ to be the random atoms of $N^{\bar Y_k}$ by
\begin{align}
N^{\bar Y_k} = \sum_{j = 1}^\infty \delta_{(\ell_{j,k}, s_{j,k})}. \label{tauPoisson3}
\end{align}
Then define a random measure on ${\mathbb R} \times [0,\infty)$:
\begin{align}
\mu^{\bar X,\epsilon} = dx \times d\ell + \sum_{k \in \Z} \sum_{j = 1}^\infty s_{j,k} \delta_{( \epsilon k, (2/ (\alpha \epsilon)) \ell_{j,k})} \,. \label{mutrapx}
\end{align}
Returning to~\eqref{psihatdef}, we now have the representation
\[
s + \sum_{k \in \Z} \tau^{\bar Y_k}\left( \frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon k) \right) = \mu^{\bar X,\epsilon} \left( \{ (x,\ell) \in {\mathbb R} \times [0,\infty) \;|\; \ell \leq L^{\bar X}_s(x) \}\right).
\]
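To verify this identity, note that the Lebesgue part of $\mu^{\bar X,\epsilon}$ contributes exactly $s$ by the occupation times formula, and an atom at $(\epsilon k, (2/(\alpha\epsilon))\ell_{j,k})$ of mass $s_{j,k}$ is counted precisely when $\ell_{j,k} \leq \frac{\alpha\epsilon}{2} L^{\bar X}_s(\epsilon k)$:

```latex
\int_{\mathbb R} L^{\bar X}_s(x) \, dx = s\,,
\qquad
\sum_{j \colon \ell_{j,k} \leq \frac{\alpha\epsilon}{2} L^{\bar X}_s(\epsilon k)} s_{j,k}
= \tau^{\bar Y_k}\paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_s(\epsilon k) }\,,
```

where the second identity follows from~\eqref{tauPoisson2} and~\eqref{tauPoisson3}.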
It is easy to check that $\mu^{\bar X,\epsilon}$ defines a L\'evy trap measure, in the sense of \cite[Definition 4.10]{BenArousCabezasEA15}. This proves the following:
\begin{proposition}
Let $\bar X$ be a standard Brownian motion on ${\mathbb R}$ and let $\bar X[\mu^{\bar X,\epsilon}]$ be the trapped Brownian motion (see Definition 4.11 of \cite{BenArousCabezasEA15}) with trap measure $\mu^{\bar X,\epsilon}$ defined by~\eqref{mutrapx}. Then the law of $X^\epsilon$ coincides with the law of $\bar X[\mu^{\bar X,\epsilon}]$.
\end{proposition}
The process $Y^{\epsilon}$ admits a similar representation as a trapped (reflected) Brownian motion. To this end, we first note that $\tau^{\bar X,\epsilon}_\ell$ is also a L\'evy subordinator and can be written as
\begin{align}
\tau^{\bar X,\epsilon}_\ell = \int_{[0,\ell]} \int_{[0,\infty)} s \, N^{\bar X,\epsilon} (d\ell' \times ds), \label{tauPoissonX}
\end{align}
where $N^{\bar X,\epsilon}$ is a Poisson random measure on $[0,\infty) \times [0,\infty)$ with intensity measure $d\ell \times \eta^{\bar X,\epsilon}(s)ds$.
\begin{lemma}\label{l:X-scaling}
The excursion length measure $\eta^{\bar X,\epsilon}$ satisfies the scaling relation,
\begin{equation*}
\eta^{\bar X,\epsilon}(s) = \epsilon^{-3}\eta^{\bar X,1}(\epsilon^{-2} s), \quad s > 0.
\end{equation*}
\end{lemma}
\begin{proof}
This follows directly from the standard scaling properties of Brownian motion and its local time, and we omit the details.
\end{proof}
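For the reader's convenience, the computation behind the scaling is the following: Brownian scaling $\bar X_t \overset{d}{=} \epsilon \bar X_{t/\epsilon^2}$ gives $L^{\bar X}_t(\epsilon \Z) \overset{d}{=} \epsilon L^{\bar X}_{t/\epsilon^2}(\Z)$ for the local times, and hence

```latex
\tau^{\bar X,\epsilon}_\ell \overset{d}{=} \epsilon^2 \, \tau^{\bar X,1}_{\ell/\epsilon}\,,
\qquad\text{so}\qquad
\eta^{\bar X,\epsilon}(s)\, ds
= \frac{1}{\epsilon} \, \eta^{\bar X,1}\paren[\Big]{\frac{s}{\epsilon^2}} \, \frac{ds}{\epsilon^2}
= \epsilon^{-3} \eta^{\bar X,1}(\epsilon^{-2} s)\, ds\,,
```

since a jump of $\tau^{\bar X,\epsilon}$ of size $s$ at local time $\ell$ corresponds to a jump of $\tau^{\bar X,1}$ of size $s/\epsilon^2$ at local time $\ell/\epsilon$.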
Letting $\{ (s_j,\ell_j) \}_{j=1}^\infty$ denote the atoms of $N^{\bar X,\epsilon}$ we then define a random measure on $[0,1] \times [0,\infty)$ by
\begin{align}\label{mutrapy}
\mu^{\bar Y,\epsilon} = dy \times d\ell + \sum_{j=1}^\infty s_j \delta_{(0,(\alpha \epsilon/2) \ell_j)} \,.
\end{align}
This also is a L\'evy trap measure in the sense of \cite{BenArousCabezasEA15} (replacing ${\mathbb R}$ by $[0,1]$), and one can easily see that the associated trapped Brownian motion is precisely the process~$Y^\epsilon$.
\begin{proposition}
Let $\bar Y$ be a reflected Brownian motion on $[0,1]$, and let $\bar Y[\mu^{\bar Y,\epsilon}]$ be the trapped Brownian motion with trap measure $\mu^{\bar Y,\epsilon}$ defined by~\eqref{mutrapy}.
Then the law of $Y^\epsilon$ coincides with the law of $\bar Y[\mu^{\bar Y,\epsilon}]$.
\end{proposition}
\subsection{Convergence as \texorpdfstring{$\epsilon \to 0$}{epsilon to 0}.}
We now use Theorem 6.2 of \cite{BenArousCabezasEA15} to study convergence of $X^\epsilon$ and $Y^\epsilon$ as $\epsilon \to 0$.
The key step is to establish convergence of the trap measures, as in the following lemma.
\begin{lemma}\label{lem:trapmeasurelimits}
Let $N_*^{\bar Y}$ be a Poisson random measure on ${\mathbb R} \times [0,\infty) \times [0,\infty)$ with intensity measure $dx \times d\ell \times \frac{\alpha}{2} \eta^{\bar Y}(s) \, ds$.
As $\epsilon \to 0$, the random measures $\mu^{\bar X,\epsilon}$ on ${\mathbb R} \times [0,\infty)$, defined in~\eqref{mutrapx}, converge vaguely in distribution to the random measure $\mu^{X}_*$ defined by
\begin{equation*}
\mu^{X}_*(A) = \int_{\mathbb R} \!\int_0^\infty \one_A(x,\ell) \, dx \, d\ell + \int_{{\mathbb R}} \int_0^\infty \int_0^\infty \one_A(x,\ell) \, s \, N_*^{\bar Y}\left(dx \times d\ell \times ds \right) \,,
\end{equation*}
for all $A \in \mathcal{B}({\mathbb R} \times [0,\infty))$. The random measures $\mu^{\bar Y,\epsilon}$ on $[0,1] \times [0,\infty)$, defined in~\eqref{mutrapy}, converge vaguely in distribution to the measure $\mu^{Y}_*$ defined by
\begin{equation*}
\mu^{Y}_*(A) = \int_0^1 \! \int_0^\infty \one_A(y,\ell) dy \, d\ell + \frac{2}{\alpha} \int_0^\infty \one_{A}(0,\ell) \,d\ell \quad \quad A \in \mathcal{B}([0,1] \times [0,\infty)) \,.
\end{equation*}
\end{lemma}
Momentarily postponing the proof of Lemma~\ref{lem:trapmeasurelimits}, we state the main convergence result in this section.
\begin{theorem}\label{t:XYconv}
Let $R(t)$ be a Brownian motion on $[0,1]$ reflected at both endpoints $x = 0,1$, and $B$ be a standard Brownian motion on ${\mathbb R}$.
\begin{enumerate}
\item
As $\epsilon \to 0$, we have $Y^\epsilon \to Y$ vaguely in distribution on $D([0,\infty))$.
Here $Y = R[\mu_*^{\bar Y}]$ is a reflected Brownian motion that is sticky at $0$.
\item
As $\epsilon \to 0$, we have $X^\epsilon \to B[\mu_*^{\bar X}]$ vaguely in distribution on $D([0,\infty))$.
The limit process here may also be written as $B((2/\alpha) L^Y_t(0))$.
\end{enumerate}
\end{theorem}
\begin{remark}
Using the SDE methods in Section~\ref{s:thincomb} we are able to obtain joint convergence of the pair $(X^\epsilon, Y^\epsilon)$ (Theorem~\ref{t:zlim}).
The trapped Brownian motion framework here, however, only provides convergence of the processes $X^\epsilon$ and $Y^\epsilon$ individually.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{t:XYconv}]
The convergence of $Y^\epsilon$ to $R[\mu_*^{\bar Y}]$ is an immediate consequence of Theorem 6.2 of \cite{BenArousCabezasEA15}, Lemma \ref{lem:trapmeasurelimits} above, and the properties of Poisson random measures.
To identify the limiting process $R[\mu_*^{\bar Y}]$ as a sticky Brownian motion, observe that the time change has the form
\begin{equation*}
\mu_*^{\bar Y} \paren[\big]{ \set[\big]{ (y,\ell) \in [0,1] \times [0,\infty) \st L^{R}(y,s) \geq \ell } } = s + \frac{2}{\alpha} L^{R}(0,s)\,.
\end{equation*}
Thus, the limit process is $Y(t) = R(\psi(t))$ where
\begin{equation*}
\psi(t) = \inf \{ s > 0 \;|\; s + \frac{2}{\alpha} L^{R}(0,s) > t \}\,.
\end{equation*}
This is precisely a sticky Brownian motion (see Lemma~\ref{l:sdeZ}).
For the second assertion of the Theorem, the convergence of $X^\epsilon$ to $B[\mu_*^{\bar X}]$ is again an immediate consequence of Theorem 6.2 of \cite{BenArousCabezasEA15} and Lemma \ref{lem:trapmeasurelimits} above.
Thus we only need to show that the trapped Brownian motion $B[\mu_*^{\bar X}]$ has the same law as the process $X_t$ from Theorem \ref{t:zlim}.
To compare the two processes, we first write them in a similar form.
Let $L^{\bar B}_t(0)$ denote the local time of $\bar B$ at $0$, and let $\tau^{\bar B}_\ell$ be the right-continuous inverse
\[
\tau^{\bar B}_\ell = \inf \{ t > 0 \;|\; L^{\bar B}_t(0) > \ell \}.
\]
Then, we have
\[
X_t = \bar W_{\frac{2}{\alpha} L^{\bar B}_{T(t)}} = \bar W(h^{-1}(t))
\]
where
\[
h^{-1}(t) = \inf \{ r > 0 \;|\; r + \tau^{\bar B}_{r\alpha/2} > t \}\,.
\]
The fact that $(2/\alpha) L^{\bar B}_{T(t)} = h^{-1}(t)$ follows from the definition of $T(t)$, which implies $(2/\alpha) L^{\bar B}_{T(t)} + T(t) = t$.
Therefore, the two processes are
\[
B[\mu_*^{\bar X}] = B (\phi^{-1}(t)) \quad \quad \quad X_t = \bar W(h^{-1}(t))
\]
where
\[
\phi(r) = \phi[\mu_*^{\bar X},B]_r = \mu_*^{\bar X} \left( \{ (x,\ell) \in {\mathbb R} \times [0,\infty) \;|\; L^B(x,r) \geq \ell \} \right)\,.
\]
If $A_r^B = \{ (x,\ell) \in {\mathbb R} \times [0,\infty) \;|\; L^B(x,r) \geq \ell \}$, then by definition of the trap measure $\mu_*^{\bar X}$,
\begin{align}
\phi(r) = r + \int_{A_r^B \times [0,\infty)} s \, N_*^{\bar Y}\left(dx \times d\ell \times ds \right) \label{timechangecomp}
\end{align}
The last integral has the same law as $\tau^{\bar B}_{r\alpha/2}$. Hence, $h$ and $\phi$ have the same law.
Notice that $h$ is independent of $\bar W$.
We claim that $\phi$ is also independent of $B$.
To see this observe that the distribution of~$\phi(r)$ only depends on~$B$ through the volume of~$A_r^B$, which equals $r$ almost surely.
This shows~$\phi$ is independent of~$B$, and thus $B(\phi^{-1}(t))$ and $\bar W(h^{-1}(t))$ have the same law.
\end{proof}
It remains to prove Lemma~\ref{lem:trapmeasurelimits}.
\begin{proof}[Proof of Lemma \ref{lem:trapmeasurelimits}]
It suffices to show for rectangles $A = [x_0, x_1]\times[\ell_0,\ell_1]$ that
\begin{equation*}
\mu^{\bar X,\epsilon}(A) \to \mu^{X}_*(A)
\end{equation*}
in distribution. We calculate the characteristic function using \cite[Thm~2.7]{Kyprianou06},
\begin{align*}
\E[e^{i\beta\mu^{\bar X,\epsilon}(A)}] &= \exp\paren[\Big]{i\beta\abs{A} - \sum_{\epsilon k \in [x_0,x_1]}\int_{\frac{\epsilon}{2}\ell_0}^{\frac{\epsilon}{2}\ell_1}\int_0^\infty (1 - e^{i\beta s}) \eta^{\bar Y}(s)\, ds \, d\ell }\\
&= \exp\paren[\Big]{i\beta\abs{A} - \paren[\Big]{\floor[\Big]{\frac{x_1}{\epsilon}} - \ceil[\Big]{\frac{x_0}{\epsilon}}}\frac{\epsilon(\ell_1 - \ell_0)}{2}\int_0^\infty (1 - e^{i\beta s}) \eta^{\bar Y}(s)\, ds }\\
&\to \exp\paren[\Big]{i\beta\abs{A} - \frac{\abs{A}}{2}\int_0^\infty (1 - e^{i\beta s}) \eta^{\bar Y}(s)\, ds }\,
\end{align*}
as $\epsilon \to 0$. We note that this last formula is the characteristic function of $\mu^X_*(A)$. The calculation for $\mu^{\bar Y,\epsilon}(A)$ uses Lemma~\ref{l:X-scaling} and a change of variables as follows
\begin{align*}
\E[e^{i\beta\mu^{\bar Y,\epsilon}(A)}] &= \exp\paren[\Big]{i\beta\abs{A} - \one_{[y_0,y_1]}(0)\int_{\frac{2}{\epsilon}\ell_0}^{\frac{2}{\epsilon}\ell_1}\int_0^\infty (1 - e^{i\beta s}) \eta^{\bar X, \epsilon}(s)\, ds \, d\ell }\\
&= \exp\paren[\Big]{i\beta\abs{A} - \one_{[y_0,y_1]}(0)\frac{2(\ell_1 - \ell_0)}{\epsilon^4}\int_0^\infty (1 - e^{i\epsilon^2\beta s}) \eta^{\bar X,1}(\epsilon^{-2} s)\, ds }\\
&= \exp\paren[\Big]{i\beta\abs{A} - \one_{[y_0,y_1]}(0)\frac{2(\ell_1 - \ell_0)}{\epsilon^2}\int_0^\infty (1 - e^{i\epsilon^2\beta s}) \eta^{\bar X, 1}(s)\, ds }\,.
\end{align*}
Notice that by switching the integrals, we find
\begin{align*}
\frac{1}{\epsilon^2}\int_0^\infty(1-e^{i\beta\epsilon^2 s})\eta^{\bar X, 1}(s)\, ds &= \frac{1}{\epsilon^2}\int_0^\infty(-\beta i\epsilon^2\int_0^s e^{i\beta\epsilon^2 r}\, dr)\eta^{\bar X, 1}(s)\, ds\\
&= -i\beta\int_0^\infty e^{i\beta\epsilon^2 r} \int_r^\infty \eta^{\bar X, 1}(s)\, ds \, dr \, .
\end{align*}
Since $\eta^{\bar X, 1}$ has exponential tails, we can send $\epsilon \to 0$, use dominated convergence and switch the integrals again to find
\begin{align*}
\lim_{\epsilon\to 0}\frac{1}{\epsilon^2}\int_0^\infty(1-e^{i\beta\epsilon^2 s})\eta^{\bar X, 1}(s)\, ds = -i\beta\int_0^\infty s \eta^{\bar X, 1}(s)\, ds = -i\beta \,
\end{align*}
and hence
\begin{equation*}
\E[e^{i\beta\mu^{\bar Y,\epsilon}(A)}] \to \E[e^{i\beta\mu^{Y}_*(A)}] \, .
\qedhere
\end{equation*}
\end{proof}
\subsection{Proof of the Excursion Decomposition (Proposition~\ref{p:Ztimechange}).}\label{s:Ztimechange}
To abbreviate the notation, we will now write $L^{\bar X}_t$ and $L^{\bar Y}_t$ for $L^{\bar X}_t(\epsilon \Z)$ and $L^{\bar Y}_t(0)$, respectively.
Notice that $L^{\bar X}_t$ depends on $\epsilon$ while $L^{\bar Y}_t$ does not.
Let $X^\epsilon(t) = \bar X(\psi^{\bar X,\epsilon}(t))$ and $Y^{\epsilon}(t) = \bar Y( \psi^{\bar Y,\epsilon}(t))$.
The proof of Proposition~\ref{p:Ztimechange} follows quickly from It\^o's formula, and the following two lemmas:
\begin{lemma}\label{l:claimLtratio}
For every $t\geq 0$, we have
\begin{equation}\label{e:claimLtratio}
L^{X^\epsilon}_t = \frac{2}{\alpha \epsilon} L^{Y^\epsilon}_t \,.
\end{equation}
\end{lemma}
\begin{lemma}\label{l:jqvXY}
The joint quadratic variation of $X^\epsilon$ and $Y^\epsilon$ is $0$.
\end{lemma}
Momentarily postponing the proof of these lemmas, we prove Proposition~\ref{p:Ztimechange}.
\begin{proof}[Proof of Proposition~\ref{p:Ztimechange}]
For any $f \in \mathcal D(\mathcal L^\epsilon)$, It\^o's formula gives
\begin{align*}
\MoveEqLeft
\E f(Z^\epsilon_t) - f(Z^\epsilon_0)
= \frac{1}{2}\E \int_0^{\psi^{\bar X, \epsilon}(t)}
\partial_x^2 f( \bar X_s, \bar Y_s)
\one_{\bar X_s \not\in \epsilon \Z} \, ds
\\
&+
\frac{1}{2} \E \int_0^t
\paren[\Big]{
\partial_x f( (X^\epsilon_s)^+, Y^\epsilon_s )
- \partial_x f( (X^\epsilon_s)^-, Y^\epsilon_s )
}
\, dL^{X^\epsilon}_s(\epsilon \Z)
\\
&+ \frac{1}{2}\E \int_0^{\psi^{\bar Y, \epsilon}(t)}
\partial_y^2 f( \bar X_s, \bar Y_s)
\one_{\bar Y_s \in (0, 1)} \, ds
+ \E \int_0^t
\partial_y f( X^\epsilon_s, (Y^\epsilon_s)^+ )
\, dL^{Y^\epsilon}_s(0)\,.
\end{align*}
Here we used the fact that $\qv{X^\epsilon, Y^\epsilon} = 0$ (Lemma~\ref{l:jqvXY}) and $\partial_y f(x, 1) = 0$ (which is guaranteed by the assumption $f \in \mathcal D( \mathcal L^\epsilon)$).
Using~\eqref{e:claimLtratio} this simplifies to
\begin{multline*}
\E f(Z^\epsilon_t) - f(Z^\epsilon_0)
= \frac{1}{2}\E \int_0^{\psi^{\bar X, \epsilon}(t)}
\partial_x^2 f( \bar X_s, \bar Y_s)
\one_{\bar X_s \not\in \epsilon \Z} \, ds
\\
+ \frac{1}{2}\E \int_0^{\psi^{\bar Y, \epsilon}(t)}
\partial_y^2 f( \bar X_s, \bar Y_s)
\one_{\bar Y_s \in (0, 1)} \, ds
\\
+ \frac{1}{2}\E \int_0^t
\paren[\Big]{
\partial_x f( (X^\epsilon_s)^+, Y^\epsilon_s )
- \partial_x f( (X^\epsilon_s)^-, Y^\epsilon_s )
+ \alpha \epsilon
\partial_y f( X^\epsilon_s, (Y^\epsilon_s)^+ )
}
\, dL^{X^\epsilon}_s(\epsilon \Z) \,.
\end{multline*}
Since $f \in \mathcal D( \mathcal L^\epsilon)$ and $L^{X^\epsilon}$ only increases when $Y^\epsilon = 0$ and $X^\epsilon \in \epsilon \Z$, the last integral above vanishes.
Consequently,
\begin{equation*}
\lim_{t \to 0} \frac{1}{t} \E \paren[\big]{ f(Z^\epsilon_t) - f(Z^\epsilon_0) } = \mathcal L^\epsilon f(0, 0)\,
\end{equation*}
showing that the generator of~$Z^\epsilon$ is $\mathcal L^\epsilon$ as claimed.
The fact that $Z^\epsilon$ satisfies~\eqref{e:sdeXep} and~\eqref{e:sdeYep} follows immediately by choosing $f(x, y) = x$ and $f(x, y) = y$ respectively.
\end{proof}
It remains to prove Lemmas~\ref{l:claimLtratio} and~\ref{l:jqvXY}.
\begin{proof}[Proof of Lemma~\ref{l:claimLtratio}]
We first claim that for any $t \geq 0$, we have
\begin{equation}\label{e:psixPlusPsiy}
\psi^{\bar X, \epsilon}(t) + \psi^{\bar Y, \epsilon}(t) = t\,.
\end{equation}
To see this, define the non-decreasing, right continuous function
\begin{equation*}
H(t) \stackrel{\Delta}{=} \tau^{\bar Y} \paren[\Big]{ \frac{\alpha \epsilon}{2} L^{\bar X}_t( \epsilon \Z ) }\,.
\end{equation*}
Using the properties of $\tau^{\bar Y}$, $L^{\bar X}$, $\tau^{\bar X,\epsilon}$, and $L^{\bar Y}$, it is easy to check that the right continuous inverse of $H$ is
\[
H^{-1}(t) = \inf \{ s > 0 \;|\;\; H(s) > t \} = \tau^{\bar X,\epsilon} \left( \frac{2}{\alpha \epsilon} L^{ \bar Y}_t(0) \right).
\]
Therefore, $\psi^{\bar X, \epsilon}$ and $\psi^{\bar Y, \epsilon}$ are the right continuous inverse functions of $t \mapsto t + H(t)$ and $t \mapsto t + H^{-1}(t)$, respectively, meaning that
\begin{align}
\psi^{\bar X, \epsilon}(t) & = \inf \left \{ s \;|\;\; s + H(s) > t \right\}, \nonumber \\
\psi^{\bar Y, \epsilon}(t) & = \inf \left \{ r \;|\;\; r + H^{-1}(r) > t \right\}. \nonumber
\end{align}
In general, $H(H^{-1}(r)) \geq r$ and $H^{-1}(H(s)) \geq s$ must hold, but equality may not hold due to possible discontinuities in $H$ and $H^{-1}$.
Fix $t > 0$, and let $[t_0,t_1]$ be the maximal interval such that $t \in [t_0,t_1]$ and $\psi^{\bar X, \epsilon}$ is constant on the interval $[t_0,t_1]$. Possibly $t_0 = t_1 = t$, but let us first suppose that the interval has non-empty interior, $t_0 < t_1$. This implies that $H(s)$ has a jump discontinuity at a point $s = \psi^{\bar X, \epsilon}(t_1)$ such that $s + H(s^-) = t_0$ and $s + H(s^+) = s + H(s) = t_1$. Also, $H^{-1}(H(s)) = s$ must hold for such a value of $s$. So, for $\ell = H(s) = H(\psi^{\bar X, \epsilon}(t_1))$ we have
\[
\ell + H^{-1}(\ell) = H(s) + s = t_1.
\]
Therefore, $\psi^{\bar Y, \epsilon}(t_1) = \ell$, since
\[
\psi^{\bar Y, \epsilon}(t_1) = \inf \left \{ r \;|\;\; r + H^{-1}(r) > t_1 \right\}.
\]
This means that $\psi^{\bar Y, \epsilon}(t_1) = H(s)$. Therefore,
\[
\psi^{\bar Y, \epsilon}(t_1) + \psi^{\bar X, \epsilon}(t_1) = H(s) + s = t_1
\]
must hold. Let us now extend the equality to the rest of the interval $[t_0,t_1]$. By assumption, $\psi^{\bar X, \epsilon}(t) = \psi^{\bar X, \epsilon}(t_1)$ for all $t \in [t_0,t_1]$. Since $H$ has a jump discontinuity at $s$, this means $H^{-1}(r)$ is constant on the interval $[H(s^-),H(s)]$. Hence, the function $r + H^{-1}(r)$ is affine with slope 1 on the interval $[H(s^-),H(s)] = [\psi^{\bar Y, \epsilon}(t_1) - (t_1 - t_0), \psi^{\bar Y, \epsilon}(t_1)]$. Therefore, for all $t \in [t_0,t_1]$, we must have
\[
\psi^{\bar Y, \epsilon}(t) = \psi^{\bar Y, \epsilon}(t_1) + t - t_1.
\]
This shows that for all $t \in [t_0,t_1]$, we have
\begin{align*}
\psi^{\bar X, \epsilon}(t) + \psi^{\bar Y, \epsilon}(t) = \psi^{\bar X, \epsilon}(t_1) + \psi^{\bar Y, \epsilon}(t_1) + t - t_1 = t.
\end{align*}
Applying the same argument with the roles of $\psi^{\bar X, \epsilon}$, $\psi^{\bar Y, \epsilon}$, $H$ and $H^{-1}$ reversed, we conclude that $\psi^{\bar X, \epsilon}(t) + \psi^{\bar Y, \epsilon}(t) = t$ must hold if either $\psi^{\bar X, \epsilon}$ or $\psi^{\bar Y, \epsilon}$ is constant on an interval containing $t$ with non-empty interior. The only other possibility is that both $\psi^{\bar X, \epsilon}$ and $\psi^{\bar Y, \epsilon}$ are strictly increasing through $t$. In this case, $H$ must be continuous at $\psi^{\bar X, \epsilon}(t)$ and $H^{-1}$ must be continuous at $\psi^{\bar Y, \epsilon}(t)$. Thus, $H^{-1}(H(\psi^{\bar X, \epsilon}(t)))= \psi^{\bar X, \epsilon}(t)$ and $H(H^{-1}(\psi^{\bar Y, \epsilon}(t))) = \psi^{\bar Y, \epsilon}(t)$ hold. The rest of the argument is the same as in the previous case.
This proves~\eqref{e:psixPlusPsiy}.
Now, since $X^\epsilon$ and $Y^\epsilon$ are time changes of $\bar X$ and $\bar Y$ respectively, we know that the local times are given by
\begin{equation*}
L^{X^\epsilon}_t
\stackrel{\Delta}{=} L^{X^\epsilon}_t( \epsilon \Z)
= L^{\bar X}_{\psi^{\bar X,\epsilon}(t)}\,,
\quad\text{and}\quad
L^{Y^\epsilon}_t
\stackrel{\Delta}{=} L^{Y^\epsilon}_t(0)
= L^{\bar Y}_{\psi^{\bar Y,\epsilon}(t)} \,.
\end{equation*}
By definition of $\psi^{\bar X, \epsilon}$, we know
\begin{equation*}
t = \psi^{\bar X, \epsilon}(t)
+ \tau^{\bar Y} \paren[\Big]{
\frac{\alpha \epsilon}{2}
L^{\bar X}_{\psi^{\bar X, \epsilon}(t)}
}\,.
\end{equation*}
Using~\eqref{e:psixPlusPsiy} this gives
\begin{equation*}
\psi^{\bar Y, \epsilon}(t)
= \tau^{\bar Y} \paren[\Big]{
\frac{\alpha \epsilon}{2}
L^{\bar X}_{\psi^{\bar X, \epsilon}(t)}
}\,,
\end{equation*}
and using the fact that $\tau^{\bar Y}$ is the inverse of $L^{\bar Y}$, we get~\eqref{e:claimLtratio} as desired.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l:jqvXY}]
Fix $\delta > 0$, and define a sequence of stopping times $0 = \sigma_0 < \theta_1 < \sigma_1 < \theta_2 < \sigma_2 < \dots$ inductively, by
\begin{align}
\sigma_0 & = 0 \nonumber \\
\theta_{k+1} & = \inf \left \{ t > \sigma_k \;|\; \text{either $Y^\epsilon_t = \delta$ or $d(X^\epsilon_t,\epsilon \mathbb{Z}) = \delta$} \right\}, \quad k = 0,1,2,3,\dots \nonumber \\
\sigma_{k+1} & = \inf \left \{ t > \theta_{k+1} \;|\; \text{$Y^\epsilon_t = 0$ and $X^\epsilon_t \in \epsilon \mathbb{Z}$} \right\}, \quad k = 0,1,2,3,\dots \nonumber
\end{align}
Then for $T > 0$, we decompose the joint quadratic variation over $[0,T]$ as
\[
\qv{X^\epsilon,Y^\epsilon}_{[0,T]} = \sum_{k \geq 0} \qv{X^\epsilon,Y^\epsilon}_{[\sigma_k \wedge T, \theta_{k+1} \wedge T]} + \qv{X^\epsilon,Y^\epsilon}_{[\theta_{k+1} \wedge T,\sigma_{k+1} \wedge T]}.
\]
We claim that for all $k$,
\begin{equation}
\qv{X^\epsilon,Y^\epsilon}_{[\theta_{k+1} \wedge T,\sigma_{k+1} \wedge T]} = 0 \label{qvarzero}
\end{equation}
holds with probability one. Hence,
\begin{align}
\left|\qv{X^\epsilon,Y^\epsilon}_{[0,T]} \right| & \leq \sum_{k \geq 0} \left| \qv{X^\epsilon,Y^\epsilon}_{[\sigma_k \wedge T, \theta_{k+1} \wedge T]} \right| \nonumber \\
& \leq \sum_{k \geq 0} \frac{1}{2} \qv{X^\epsilon,X^\epsilon}_{[\sigma_k \wedge T, \theta_{k+1} \wedge T]} + \frac{1}{2} \qv{Y^\epsilon,Y^\epsilon}_{[\sigma_k \wedge T, \theta_{k+1} \wedge T]} \nonumber \\
& \leq \sum_{k \geq 0}|( \theta_{k+1} \wedge T )- (\sigma_k \wedge T)| \nonumber \\
& \leq \left| \left\{ t \in [0,T] \;|\; Y^\epsilon_t \leq \delta \;\;\text{and} \;\; d(X^\epsilon_t, \epsilon \mathbb{Z}) \leq \delta \right\} \right|.
\end{align}
As $\delta \to 0$, the latter converges to $0$ almost surely, which proves that $\qv{X^\epsilon,Y^\epsilon} = 0$.
To establish the claim \eqref{qvarzero}, we may assume $\theta_{k+1} < T$, for otherwise the statement is trivial. At time $\theta_{k+1}$, we have either $X^\epsilon_{\theta_{k+1}} \notin \epsilon \mathbb{Z}$ or $Y^\epsilon_{\theta_{k+1}} = \delta$. In the former case, we must have $X^\epsilon_t \notin \epsilon \mathbb{Z}$ for all $t \in [\theta_{k+1},\sigma_{k+1})$. Hence, $\psi^{\bar Y,\epsilon}(t)$ and $Y^\epsilon_t$ are constant for all $t \in [\theta_{k+1},\sigma_{k+1})$. In the other case, $Y^\epsilon_t > 0$ for all $t \in [\theta_{k+1},\sigma_{k+1})$ while $X^\epsilon_t$ is constant on $[\theta_{k+1},\sigma_{k+1}]$. In either case, this implies that $\qv{X^\epsilon,Y^\epsilon}_{[\theta_{k+1} \wedge T, \sigma_{k+1} \wedge T]} = 0$ holds with probability one.
\end{proof}
\bibliographystyle{halpha-abbrv}
\section{Introduction}
\noindent In recent years there has been a boom in quantum computing. Different hardware models have been proposed, with the most dominant being based on superconducting circuits or trapped ions (cf. IBM, Rigetti, IonQ). All these architectures use discrete values (qubits) as the basis for computation. However, this is not the only possible approach --- one might also create a quantum computer based on qudits or even continuous variables \cite{gaussian_quantum_information}. The latter is the approach that the Canadian company Xanadu is pursuing, building a photonic quantum computer utilizing the continuous-variable paradigm. They have built the open-source libraries Strawberry Fields and PennyLane, which provide tools for simulating photonic circuits and performing machine learning experiments.
\medskip
\noindent Our goal in this paper is to take a hands-on approach to optimization problems on continuous-variable quantum computers. This type of problem has already been studied both for discrete quantum computing and quantum annealing in \cite{qaoa}, \cite{crooks}, and \cite{maxcut_d_wave}.
\noindent In particular, we consider a solution to the Max-Cut problem on weighted graphs. We embed a graph into a quantum state, and then we optimize the parametrizable part of the circuit using a well-chosen cost function. For readers new to continuous-variable quantum computing, the appendices can serve as a quick introduction to the terminology and foundations of the field.
\subsection{Related work}
In recent years, researchers have started to use a new approach for creating quantum algorithms, namely variational circuits \cite{var_alg}. Algorithms like the Variational Quantum Eigensolver (VQE) \cite{vqe} or the Quantum Approximate Optimization Algorithm (QAOA) \cite{qaoa} have been successfully applied to graph problems like the Max-Cut \cite{qaoa} and the traveling salesman problems \cite{hadfield}. However, these algorithms use the discrete-variable quantum computing paradigm, and not much work has been done to solve this type of problem with continuous-variable quantum computing, with the notable recent exception of \cite{cv-qaoa}.
Among all the graph problems, the Max-Cut problem has thus far received much more attention than the others, as can be seen in \cite{qaoa}, \cite{crooks}, and \cite{zhou}. Since many researchers use it as the first problem for testing and benchmarking their variational optimization algorithms, we decided to follow this trend.
\medskip
\noindent We have based our work on the methods described in \cite{gbs2} and \cite{cvqnn} and developed them further to solve the Max-Cut problem on a simulator of a photonic quantum computer.
\subsection{Acknowledgements}
We would like to thank Nathan Killoran and Josh Izaac for their help and guidance, Maria Schuld for help with the QMLT package and Nicol\'as Quesada for help with understanding the Takagi decomposition. We would also like to acknowledge Wayne Nixalo and Witold Kowalczyk for their contribution at the initial stage of this research.
\subsection{Organization}
This paper is organized as follows: in section \ref{sec:theoretical_framework} we present the theoretical framework we used in this research. We introduce two ways of representing a graph as a quantum state using either a Gaussian covariance matrix or the Takagi decomposition.
In section \ref{sec:experimental_results}, we present results of the simulation made on graphs of different sizes and offer preliminary results for extending the research to a machine learning model.
In section \ref{sec:conclusions}, we conclude the paper and give directions for further research.
Appendices \ref{sec:appendix_A}, \ref{sec:appendix_B}, \ref{sec:appendix_C} and \ref{sec:appendix_D} introduce the reader to the continuous-variable quantum computing paradigm.
\section{Theoretical framework}
\label{sec:theoretical_framework}
\subsection{Representation of a graph as a quantum state}
\noindent Let $(G,w)$ be a weighted graph where $G = (V,E)$ is a graph with vertices $V$ and edges $E$, and $w$ is a weight function which attaches to each edge between vertices $i$ and $j$ a real number $w(i,j)$. We set $w(i,j) = 0$ if there is no edge connecting $i$ and $j$.
\medskip
\noindent We write $A = (a_{ij})$ for the weighted adjacency matrix, that is we set $a_{ij} = w(i,j)$ for each $i,j \in V$. Inspired by \cite{gbs} and \cite{gbs2}, we match $A$ with a Gaussian covariance matrix (i.e. a matrix describing a state created by a Gaussian quantum circuit). We can either calculate it directly or perform the Takagi decomposition, which allows us to omit direct calculations.
We describe both methods, but we use only the second one for the numerical simulations.
\subsubsection{Gaussian covariance matrix}
\noindent We assume that $G$ has $n$ vertices.
Let $\mathbb{I}_n$ be the identity matrix. We define
$$\mathbb{X} = \begin{bmatrix}0 & \mathbb{I}_n \\ \mathbb{I}_n & 0 \end{bmatrix}$$
\noindent The Gaussian covariance matrix associated with $A$ is defined to be
$$\sigma _{A} = (\mathbb{I}_{2n} - \mathbb{X}A)^{-1} - \mathbb{I}_{2n}/2$$
Let $c$ be an auxiliary real number, a parameter to be determined for each graph separately. We choose $c$ such that $\sigma _{c\mathbb{I}_{2n}+A}$ is symplectic and positive definite, if that is possible. The reason we need to scale $A$ is that $\sigma_A$ does not always give a proper Gaussian covariance matrix\footnote{That is, it cannot always be represented by a Gaussian quantum state} (cf. \cite{gbs2}, especially Appendix A). For arbitrary $A$, the above might not work. That is why we introduce $d$, another auxiliary positive real parameter. Adopting the method in \cite{gbs2} we define
$$A' = \begin{bmatrix}A & 0 \\ 0 & A \end{bmatrix}$$
and set
$$\sigma ' _{c,d,A} = (\mathbb{I}_{4n} - \mathbb{X}d\cdot(c\mathbb{I}_{4n}+A'))^{-1} - \mathbb{I}_{4n}/2$$
where $\mathbb{X}$ here is defined with $\mathbb{I}_{2n}$. Pursuant to Appendix A in \cite{gbs2}, for any $A$ there always exist $c,d>0$ such that $\sigma ' _{c,d,A}$ is symplectic and positive definite.
\medskip
\noindent Now, using Strawberry Fields we are able to associate a quantum circuit with $\sigma _{d(c\mathbb{I}_{2n}+A)}$ if that is possible, or with $\sigma ' _{c,d,A'}$ if not. Note that we try to avoid using $\sigma ' _{c,d,A'}$ as it doubles the dimensions and the number of qumodes we need to use.
\noindent The result is a quantum circuit for which the output probability distribution depends on matrix $A$.
\noindent This method introduces additional parameters and requires choosing them in such a way that all the matrices meet the required conditions. However, the same result can be achieved by using the Takagi decomposition, as described in the section below. In this research we have used the latter approach.
\subsubsection{Takagi decomposition}
\label{sec:takagi}
Let us take a set of $N$ squeezed states, with squeezing parameters $r_i$, followed by an interferometer described by a matrix $U$ (see fig. \ref{fig:circuit_takagi}). If the matrices satisfy the condition:
\begin{equation}
\label{eq:takagi}
B = U D U^T
\end{equation}
\noindent where $D$ is a diagonal matrix with elements $d_i$ (which are the eigenvalues of $B$) on the diagonal and $r_i=\arctanh(d_i)$, then the probability distribution of such a state depends on matrix $B$ \cite{gbs}.
\medskip
\noindent Therefore, if we want to embed a weighted graph described by a distance matrix $B$ in a circuit, we do not need to calculate the covariance matrix; it is enough to perform the Takagi decomposition given by equation \ref{eq:takagi} and to set the parameters of the gates accordingly.
\medskip
\noindent There are two restrictions on matrix $B$: it has to be symmetric and its eigenvalues must lie in the open interval $(-1, 1)$ so that the $\arctanh$ function is defined.
Matrix $A$ is always symmetric, but it can have arbitrary eigenvalues. Therefore, in order to embed it, we need to rescale it by multiplying it by a constant, so that it meets the second condition.
\medskip
\noindent This method has several advantages over the previous one: it does not require calculating the covariance matrix explicitly, it does not introduce any parameters and it is much simpler. Those reasons make it an attractive method for state preparation as we do in this paper.
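\medskip
\noindent For a real symmetric matrix, the Takagi decomposition can be obtained from an ordinary eigendecomposition. The following NumPy sketch (the function name and the rescaling constant are our own illustrative choices, not part of Strawberry Fields) shows the embedding computation, assuming no eigenvalue is exactly zero:

```python
import numpy as np

def takagi_embedding(A, scale=0.95):
    """Rescale a real symmetric matrix A so its eigenvalues lie in
    (-1, 1), then compute a Takagi decomposition A' = U D U^T.
    Returns the squeezing parameters r_i = arctanh(d_i), the
    interferometer matrix U, and the rescaled matrix A'."""
    # Rescale so every eigenvalue lies strictly inside (-1, 1).
    spectral_radius = max(np.max(np.abs(np.linalg.eigvalsh(A))), 1e-12)
    A_scaled = scale * A / spectral_radius
    # For real symmetric A' = Q diag(lam) Q^T, a Takagi pair is
    # D = |lam| and U = Q diag(sqrt(sign(lam))), where sqrt(-1) = i.
    # (This assumes no eigenvalue is exactly zero.)
    lam, Q = np.linalg.eigh(A_scaled)
    U = Q * np.sqrt(np.sign(lam).astype(complex))
    r = np.arctanh(np.abs(lam))
    return r, U, A_scaled
```

The returned $r_i$ set the initial squeeze gates and $U$ the interferometer, as in fig. \ref{fig:circuit_takagi}.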
\begin{figure}
\begin{center}
\begin{minipage}{.7\textwidth}
\Qcircuit @C=1em @R=.9em {
& \gate{S(r_0)} & \multigate{3}{U} \qw & \qw\\
& \gate{S(r_1)} & \ghost{U} \qw & \qw\\
& \gate{S(r_2)} & \ghost{U} \qw & \qw\\
& \gate{S(r_3)} & \ghost{U} \qw & \qw\\
}
\end{minipage}
\end{center}
\caption{Circuit used for the preparation of the initial state. The parameters of the squeeze gates, as well as the exact form of the interferometer matrix come from the Takagi decomposition (see sec \ref{sec:takagi}).}
\label{fig:circuit_takagi}
\end{figure}
\subsection{Max-Cut problem}
Let $(G,w)$ be a weighted graph. A cut is a partition $(S,V \backslash S)$ of the vertex set $V$ into sets $S$ and $V \backslash S$. The weight $w(S, V \backslash S)$ of a cut is given by
\begin{equation}
\label{eq:1}
w(S, V \backslash S) = \sum _{i \in S, j \in V \backslash S} w(i,j)
\end{equation}
\noindent The maximum cut is the cut of maximum weight and its weight is denoted by $mc(G,w)$, i.e.
$$ mc(G,w) = max _{S \subset V} w(S, V \backslash S)$$
\medskip
\noindent We can represent the set $S$ as a list $s_N, s_{N-1}, \dots, s_1$, where $N = |V|$ and $s_{i} \in \{0, 1\}$, with $s_i = 1$ if and only if $i \in S$. With this representation we can express the weight of the cut $S$ as:
\begin{equation}
\label{eq:max_cut}
w(S) = \sum_{(i,j) \in E} w_{i,j} (s_i - s_j)^2,
\end{equation}
\noindent where $E$ is the set of edges of the graph $(G,w)$, $w_{i,j}$ is the weight of the edge between nodes $i$ and $j$, and $(s_i - s_j)^2$ equals $1$ exactly when $i$ and $j$ lie on opposite sides of the cut.
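\medskip
\noindent As a classical sanity check of the cut weight, it can be evaluated directly from a bit assignment, and for the small graphs we consider the maximum cut can be found by brute force. The following sketch uses illustrative function names:

```python
import itertools

def cut_weight(weights, bits):
    """Weight of the cut induced by bits[i] in {0, 1}: sum the
    weights of all edges whose endpoints lie on opposite sides."""
    n = len(bits)
    return sum(weights[i][j]
               for i in range(n) for j in range(i + 1, n)
               if bits[i] != bits[j])

def max_cut_brute_force(weights):
    """Enumerate all 2^n partitions and return the best (weight, bits)."""
    n = len(weights)
    return max((cut_weight(weights, bits), bits)
               for bits in itertools.product([0, 1], repeat=n))
```

For a 4-node cycle with unit weights, the brute-force search recovers the alternating partition with cut weight 4.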
\medskip
\subsection{Circuit design}
\label{sec:circuit_design}
For simulating the circuits we used the Strawberry Fields library \cite{sf} and for training the parameters of those circuits we used the Quantum Machine Learning Toolbox (QMLT) \cite{qmlt}. Our circuit consists of two parts. The first one is associated with embedding the graph in the circuit, the second one is used for finding the solution for a given graph (see fig. \ref{fig:full_circuit}).
\medskip
\noindent We perform the embedding using the following procedure, according to sec. \ref{sec:takagi}:
\begin{enumerate}
\item Create a distance matrix $A$ of the given graph.
\item Rescale the matrix so that all its eigenvalues lie strictly between -1 and 1. After this procedure we get matrix $A'$.
\item Perform the Takagi decomposition of the matrix $A'= U D U^T$.
\item Take diagonal elements $d_i$ of the matrix $D$. The values $r_i = \arctanh(d_i)$ correspond to the initial squeezing of each mode.
\item The matrix $U$ corresponds to the matrix describing an interferometer applied to the squeezed modes.
\item The probability distribution of this state corresponds to the matrix $A$.
\end{enumerate}
\medskip
\noindent The second part of the circuit is based on the architecture proposed in \cite{cvqnn}. It consists of an interferometer, a layer of squeeze gates, a second interferometer, a layer of displacement gates and a layer of non-Gaussian gates --- either Kerr or cubic phase gates.
\medskip
\noindent Squeezing, displacement and non-Gaussian gates have been parametrized and the parameters of the interferometers have been fixed. We have done this in order to limit the number of parameters that need to be optimized and keep our analysis simple.
\medskip
\noindent All the parameters were initialized with random numbers. For both the squeezing and displacement gates, the magnitude was drawn from the uniform distribution over $[-0.5, 0.5]$ and the phase from the uniform distribution over $[0, 2\pi]$. In the case of the non-Gaussian gates (which have only one parameter), it was also drawn from the uniform distribution over $[-0.5, 0.5]$.
\medskip
\noindent Using a wider support for the uniform distribution has also been tested, but it did not yield any benefit in the results or the training process and increased the risk of the simulation becoming numerically unstable.
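\medskip
\noindent A minimal sketch of this initialization (the function name and the fixed seed are illustrative, not the exact code we used):

```python
import numpy as np

def init_params(n_modes, rng=None):
    """Draw initial gate parameters: magnitudes uniform on [-0.5, 0.5],
    phases uniform on [0, 2*pi], one scalar per non-Gaussian gate."""
    rng = rng or np.random.default_rng(0)
    return {
        "squeeze_mag": rng.uniform(-0.5, 0.5, n_modes),
        "squeeze_phase": rng.uniform(0.0, 2 * np.pi, n_modes),
        "disp_mag": rng.uniform(-0.5, 0.5, n_modes),
        "disp_phase": rng.uniform(0.0, 2 * np.pi, n_modes),
        "non_gaussian": rng.uniform(-0.5, 0.5, n_modes),
    }
```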
\begin{figure}
\begin{center}
\begin{minipage}{.7\textwidth}
\Qcircuit @C=1em @R=.7em {
& \gate{S(r_0)} & \multigate{3}{U} & \multigate{3}{U} & \gate{S} & \multigate{3}{U} & \gate{D} & \gate{NG} & \qw \\
& \gate{S(r_1)} & \ghost{U} & \ghost{U} & \gate{S} & \ghost{U} & \gate{D} & \gate{NG} & \qw \\
& \gate{S(r_2)} & \ghost{U} & \ghost{U} & \gate{S} & \ghost{U} & \gate{D} & \gate{NG} & \qw \\
& \gate{S(r_3)} & \ghost{U} & \ghost{U} & \gate{S} & \ghost{U} & \gate{D} & \gate{NG} & \qw \\
}
\end{minipage}
\end{center}
\caption{The circuit used for performing the optimization. The initial squeeze gates and the interferometer create the quantum state associated with the graph. Subsequent operations are parametrized (with the exception of the interferometers) and optimized (see sec. \ref{sec:circuit_design}).}
\label{fig:full_circuit}
\end{figure}
\subsection{Solution encoding}
We decided to use photon counts as the output of the circuit, hence the output of each qumode could in principle be any non-negative integer. However, in the simulation we are limited by the cutoff dimension, so the values for each qumode were capped at the cutoff. In most cases the cutoff was equal to 17, but for some simulations we needed to lower it to 9 due to memory constraints of the machines used.
\medskip
\noindent In our case, binary encoding is sufficient since we divide the nodes into two groups. This is why we decided to treat an output of 0 as 0 and all non-zero outputs as 1. This means that the output [0, 5, 1, 3] encodes the solution [0, 1, 1, 1]. Other encodings are also possible, for example using homodyne measurements and encoding 0 and 1 in negative and positive values of position or momentum.
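\medskip
\noindent The thresholding described above amounts to the following mapping (an illustrative helper, not library code):

```python
def counts_to_bits(photon_counts):
    """Map per-mode photon counts to a binary partition:
    zero photons -> group 0, any photons -> group 1."""
    return [int(c > 0) for c in photon_counts]
```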
\subsection{Training algorithm}
\label{sec:training}
For training the circuit we used the QMLT framework.
It has three modes: "optimization", "supervised" and "unsupervised". We have used "optimization", with stochastic gradient descent as the optimizer and L2 regularization. These are all standard choices --- what deserves more attention is the loss function we have used.
\medskip
\noindent For a classical, non-probabilistic algorithm, the most obvious choice of loss function would be to take the output of the algorithm and calculate the cost from equation \ref{eq:1}. However, in the case of a quantum circuit the output is probabilistic, so we need to take multiple results and calculate the loss function as an average over all the samples. A single result is a sample from a probability distribution, hence with a growing number of samples we can reproduce the distribution more accurately.
\medskip
\noindent In our case, we simulate the algorithm exactly, which enables us to use the probability distribution itself, without relying on sampling. On a real device this could be approximated by increasing the number of samples. The main problem with this approach is that in order to calculate the cost for the whole distribution, we also need to evaluate all the possible solutions classically. One might be inclined to ask what the point of running the optimization procedure is if we have to evaluate all the possible solutions classically anyway.
\medskip
\noindent There are two reasons why we do not think this is a significant issue in this case. Firstly, the problem we are dealing with is a toy problem and the research is preliminary. Hence the results we present can be treated as an upper bound on what could be done using this algorithm on a real machine instead of a simulator. Secondly, in the real scenario we will not be able to use the full distribution and we will need to rely on sampling. This will naturally limit the number of unique solutions we will need to evaluate.
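\medskip
\noindent Concretely, the distribution-based loss is the expected (negated) cut weight over the measurement outcomes. A minimal sketch, assuming access to the full outcome distribution (all names are illustrative):

```python
def expected_loss(distribution, weights):
    """Expected negative cut weight over outcome probabilities.
    `distribution` maps bitstrings (tuples of 0/1) to probabilities;
    `weights` is the graph's weighted adjacency matrix."""
    def cut_weight(bits):
        n = len(bits)
        return sum(weights[i][j]
                   for i in range(n) for j in range(i + 1, n)
                   if bits[i] != bits[j])
    # Minimizing this loss maximizes the expected cut weight.
    return -sum(p * cut_weight(bits) for bits, p in distribution.items())
```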
\section{Experimental results}
\label{sec:experimental_results}
We have performed several tests to evaluate how the circuit works in different setups. In the spirit of other projects done by Xanadu, we used the codename "Yellow Submarine". The source code of the simulation can be found at: \url{https://github.com/BOHRTECHNOLOGY/yellow_submarine} and the code from the different experiments we performed at \url{https://github.com/BOHRTECHNOLOGY/public_research/tree/master/Experiments/Yellow_submarine}.
\subsection{Training parameters}
\label{sec:training_params}
We have used the QMLT framework with an initial learning rate of $0.25$ and a regularization strength of $10^{-3}$.
The values of the regularization strength and the learning rate were chosen experimentally.
We checked different values for these parameters, and $10^{-3}$ was the highest value of the regularization strength that did not force the parameters to vanish over time.
Learning rates above $0.5$ resulted in instability during the learning process.
\subsection{Influence of the non-Gaussian gates}
We ran the simulation using both weighted and unweighted graphs with 4, 5 and 6 nodes. Of particular interest to us in the training is how the parameters of the different gates evolve, and more specifically, how non-Gaussian gates influence the simulation and its overall results. We investigate this for the displacement gate, the squeeze gate, the Kerr gate and the cubic phase gate.
The plots we present here show the results for a single 4-node graph, though they are representative of the other graphs.
\subsubsection{Loss function}
We used the loss function described in sec. \ref{sec:training}. In the case presented here, it could achieve a minimum value of -1, which would mean that there is a 100\% probability of getting a correct solution from the circuit. The second best solution had a value of -0.75.
\medskip
\noindent As can be seen in fig. \ref{fig:loss_function}, our circuits converged to a value of around -0.9, which indicates that the correct solution was the most frequent one. Also, the results are similar regardless of the non-Gaussian gates used.
\medskip
\noindent Solutions of the Max-Cut problem have symmetry: [1, 0, 0, 1] has the same cost as [0, 1, 1, 0]. It is worth noting that training always converged to returning only the single best solution, not a superposition of all the best solutions.
\medskip
\subsubsection{Displacement gate parameters}
The displacement gate has two parameters: magnitude and phase. The values of the magnitude always converged towards one of two or three values (see fig. \ref{fig:d_gate} A). This suggests that during the learning process the magnitude parameter of the displacement gate contributes significantly to the end result --- this has also been confirmed by a simulation with the displacement gates removed from the circuit.
\medskip
\noindent We also note that the phase parameter usually does not change much from its initial value (see fig. \ref{fig:d_gate} B), and its rate of change is much smaller than that of the magnitude. This suggests that the phase parameter plays a less significant role than the magnitude.
\medskip
\subsubsection{Squeeze gate parameters}
The magnitude parameter of the squeeze gate does vary during the simulation but does not converge towards any specific value (see fig. \ref{fig:s_gate}). The phase parameter varies only slightly throughout the simulations. Therefore, we conclude that, as with the displacement gate, the phase parameter is less important than the magnitude.
\medskip
\subsubsection{Kerr gate}
The Kerr gate parameter remains constant throughout training in all simulations. The small change in the value of this parameter that can be seen in fig. \ref{fig:kerr_gate} comes from regularization.
This, in conjunction with the fact that the results with and without the Kerr gate are very similar, seems to indicate that the Kerr gate does not participate at all in the computation of the final answer.
\medskip
\subsubsection{Cubic phase gate}
The cubic phase gate parameter does change during the training, but its behavior is much less consistent: most often it simply drifts, but sometimes it converges towards specific values (see fig. \ref{fig:cubic_phase_gate}). It neither speeds up the convergence nor helps to lower the final value of the cost function.
Additionally, the presence of the cubic phase gate sometimes induced spikes in the loss function, which shows the instability it introduces into the training process. This needs to be compensated for with different hyper-parameters and might be the subject of a more comprehensive study in the future.
\subsection{Influence of the embedding}
In the setup that we have proposed, we can omit the embedding part of the circuit and use only the variational part. This means that the variational part will act on a vacuum state instead of a state corresponding to a graph. Since information about the graph structure is encoded in the cost function, the optimization process will still drive the solution toward some local minimum. We have checked whether the presence of the graph embedding improves the results.
\medskip
\noindent Depending on the graph we tried to solve, the presence of the embedding had negligible to slightly negative influence on the results and training process.
The effect was strongest for the graph with 4 nodes, where the final value of the cost function was on average 10\% higher and the convergence was up to two times slower in some cases. However, this effect was much weaker for graphs with 5 and 6 nodes, sometimes even unobservable.
\medskip
\noindent We have checked if circuits containing some non-Gaussian gates are influenced more than the others, but no such correlation has been found.
\subsection{A quantum circuit as a machine learning model}
We also wanted to check if our circuit can be treated as a machine learning model, i.e. if it can be trained using one set of graphs and then generalize to solve graphs that had not been presented to it.
\medskip
\noindent We used the following procedure to achieve this:
\medskip
\noindent Given a training set $X$ of $n$ matrices $x_i$, at each optimization step we embedded every matrix $x_i$ once and calculated the loss function. We then took the sum of the loss functions over the whole set $X$ and used it to update the values of the parameters (in order to achieve this, we needed to slightly modify the source code of the QMLT library; the source code is available in the repository).
\medskip
\noindent In our case the set $X$ consisted of four $4 \times 4$ matrices. Each matrix represented a graph with star topology --- there was a central node and all the other nodes were connected only to it. In each matrix of the set, a different node was the central one. This type of training set has important properties:
\begin{itemize}
\item A star topology guarantees that we will always have only one optimal solution, namely that the central node is in one group and all the other nodes are in the other.
\item Since in each case the central node is different, all the graphs in the set $X$ have different optimal solutions.
\item All the possible solutions occur in equal proportion.
\item It is therefore easy to tell if a given circuit has learned to solve the problem for a given graph, or if it simply converged to one of the local minima.
\end{itemize}
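These properties of the training set can be verified by brute force over all $2^4$ assignments. The following sketch is illustrative Python (the helper names are ours, not taken from the QMLT-based code):

```python
from itertools import product

def star_adjacency(n, center):
    """Adjacency matrix of an n-node star with the given central node."""
    A = [[0] * n for _ in range(n)]
    for j in range(n):
        if j != center:
            A[center][j] = A[j][center] = 1
    return A

def cut_value(A, bits):
    """Number of edges cut by the given group assignment."""
    n = len(bits)
    return sum(A[i][j] for i in range(n) for j in range(i + 1, n)
               if bits[i] != bits[j])

def best_cuts(A):
    """All assignments achieving the maximum cut (complements included)."""
    n = len(A)
    assignments = list(product([0, 1], repeat=n))
    best = max(cut_value(A, b) for b in assignments)
    return {b for b in assignments if cut_value(A, b) == best}

training_set = [star_adjacency(4, c) for c in range(4)]
for c, A in enumerate(training_set):
    # Exactly one optimum up to the complement symmetry: center vs. the rest.
    print(c, sorted(best_cuts(A)))
```

Each star graph has exactly two optimal assignments (a solution and its complement), and the optima of the four graphs are pairwise distinct, so together they cover all possible single-node-versus-rest partitions.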
\medskip
\begin{figure}[ht]
\centering
\includegraphics[width=50mm]{star_topology.png}
\caption{The star graph used in the ``machine learning model'' approach. The numbers indicate assignment to one of the groups. The assignment shown in the figure is the optimal solution of the Max-Cut problem for this graph. For each graph in the training set $X$, the central nodes had different indices.}
\label{fig:star_topology}
\end{figure}
\medskip
\noindent Apart from the original architecture, we have also tested another setup, where the parametrized part was duplicated, so that we had two layers of gates. In both cases the circuits failed to solve the problem correctly. In the end they learned to return the same output regardless of which graph had been embedded in the circuit. The outputs varied between different runs, but they always converged to a configuration where two bits were on and two were off. There was no difference in the quality of the output between the different non-Gaussian gates used. However, the training process without the non-Gaussian gates was much smoother, while adding non-Gaussian gates introduced oscillations, with amplitude varying between different runs (see fig. \ref{fig:ml_results}).
\section{Conclusions}
\label{sec:conclusions}
\noindent In this study, we have created a framework for solving the Max-Cut problem using photonic quantum circuits. We have checked its performance for graphs of up to 6 nodes and examined how using different gates affects the training process of parametric circuits. Importantly, we compared the performance of this optimization method in a scenario where only Gaussian gates are used versus one where non-Gaussian gates are added.
\medskip
\noindent Based on the results of the numerical simulations we can say that:
\begin{itemize}
\item the setup that we proposed allows one to solve the Max-Cut problem.
\item the presence of non-Gaussian gates does not yield any improvement and might even result in instabilities in the optimization process.
\item starting from a state described in sec. \ref{sec:takagi} might have a detrimental effect on the results.
\end{itemize}
\noindent Since the work we have done was mostly experimental, we think that these conclusions are not definitive, but they might be useful for other researchers implementing a variational algorithm in a continuous-variable quantum computing model. Experimenting with the machine learning approach seems especially promising, since we have only touched upon this topic.
\noindent Also, the fact that non-Gaussian gates are not needed to solve the Max-Cut problem seems interesting. This suggests that a Gaussian Boson Sampler, which is a device simpler to build than a full continuous-variable quantum computer, might be useful for solving graph problems. On the other hand, it is unclear whether this approach gives any advantage in scaling or performance over classical methods --- it would require further investigation.
\medskip
\noindent We invite other researchers to use our code. Links to the code have been provided in section \ref{sec:experimental_results}. The natural next step is to look at other combinatorial optimization problems like the Traveling Salesman Problem. We are aware that during the work on this project new tools have been released, like the PennyLane library or a new version of Strawberry Fields, but we nevertheless think that having access to the source code might be helpful.
\newpage
\section{Introduction}
The structure of a quantum computer \cite{lul1} happens to have much in common with the Schur-Weyl duality \cite{lul2,lul3,lul4,TiagoCruz} between the unitary group $U(n)$ and the symmetric group $\Sigma_N$, acting on the $N$-th tensor power space $h^{\otimes N}$ of the defining space $h$ of $U(n)$. Explicitly, the space $h$ is identified with the elementary memory unit, referred to as a \textit{qunit} (a qubit for the case $n=2$), and then $h^{\otimes N}$ becomes the memory of a computer composed of $N$ such qunits, possibly subdivided between several parties, like Alice, the King, etc. \cite{lul5}. The memory space $h^{\otimes N}$, with dimension dim$\,h^{\otimes N}=n^N$, provides a variety of orthonormal bases, more or less adapted to several specific purposes of quantum information processing.
The calculational basis in the space $h^{\otimes N}$ is \textit{local}, i.e. each of its elements carries exact information on the position of each particular qunit. Each such element is therefore a fully separable state, whereas any transmission of information requires some entanglement. Processing with entangled states is more efficient in some non-local bases of $h^{\otimes N}$, as it allows for an easy scan of information spread over a variety of qunits, in accordance with the quantum superposition principle. The best known, and perhaps the most radical, way to display such non-local variables is \textit{the Fourier transform} over the set of qunits, which corresponds to replacing positions by momenta. These two sets of discrete variables form the so-called mutually unbiased bases \cite{lul6,lul7,lul8,lul9}, such that full knowledge of the quantum numbers specifying one basis exactly wipes out any information on the other. Bacon et al. \cite{lul10} have argued that the irreducible basis of the Schur-Weyl duality also provides convenient access to non-local variables. They pointed out the importance of this basis in such prominent subjects of contemporary quantum information processing as universal quantum source coding \cite{lul10a}, communication without a common reference frame \cite{lul11}, and others in which the use of this basis is optimal. They also proposed \cite{lul12} and demonstrated explicitly \cite{lul13} a method for determination of the Schur-Weyl basis in terms of a quantum circuit, obtained in a polynomial number of steps with respect to $N$ and $n$ ($n$ and $d$, respectively, in their notation).
The aim of the present paper is to propose a new method of construction of the Schur-Weyl states. The main advantage of our approach is the size of the required calculations, which is linear with respect to both $N$ and $n$. Moreover, we claim that our method is more transparent from the point of view of the combinatorics associated with the Schur-Weyl duality and the relevant representation theory \cite{lul16,lul17}.
We provide this transparency by a clear motivation of the combinatoric entities at each step of the method. The main simplification of the procedure consists in the replacement of standard and semistandard Young tableaux (responsible for the irreducible bases of the symmetric and unitary group, respectively) by double Gelfand patterns \cite{lul18,lul19}.
These two sets carry the same combinatoric information; the former is concise (and also closer to expressing the essence of the Schur-Weyl duality at the level of bases), whereas the latter, being more extended, is flexible enough to indicate clearly the famous Robinson-Schensted-Knuth combinatoric algorithm \cite{robinson,schensted,knuth} as a path on the Gelfand pattern, resulting from the ramification rules (betweenness conditions) in a transparent geometric way. In particular, each Schensted insertion of a letter into an intermediate pattern is represented by a step (to the left or to the right) on this path.
To construct the Schur-Weyl state amplitudes we exploited the fundamental tensor operators and the combinatorial bijection between semistandard Weyl tableaux and Gelfand-Tsetlin triangles \cite{lul18,gelfand,gelfand1}. We also developed a method of constructing a directed graph with the vertices labelled by Gelfand-Tsetlin triangles and the edges by single-node states. This graph describes different scenarios of the ladder construction of the spin system ``node by node'', leading to the formula for the Schur-Weyl state amplitudes.
The paper is organised as follows. We start in Section 2 with a brief description of the $U(n)$-invariant physical model and the representation of the Schur-Weyl duality, and we introduce the Schur-Weyl states.
In Section 3 we present the Robinson-Schensted-Knuth algorithm in the language of Gelfand-Tsetlin patterns.
Section 4 is the main section, where we present the algorithm of construction of the Schur-Weyl state probability amplitudes, together with a simple example of the calculation.
We end with concluding remarks in Section 5.
\section{Schur-Weyl duality representation in one-dimensional spin system}
The memory space $h^{\otimes N}$ of a quantum computer is a scene of linear actions of two groups: the symmetric group $\Sigma_N$, defined on the set
\be
\tilde{N}=\{j = 1,2,...,N\},
\ee
and the unitary group $U(n)$, defined on the qunit $h\cong\mathbb{C}^n$. The latter definition involves the set
\begin{equation}
\tilde{n}=\{i = 1,2,...,n\}
\end{equation}
of labels of unitary basis elements in $h$. $\Sigma_N$ and $U(n)$ are referred to as \textit{dual groups}, and the corresponding \textit{dual sets}, $\tilde N$ and $\tilde n$, are usually referred to as the \textit{alphabets} of \textit{nodes} and \textit{spins}, respectively. These two alphabets define a basis
\begin{equation}\label{baza_konfiguracji}
\tilde n^{\tilde N} = \{ f: \tilde N \rightarrow \tilde n \}
\end{equation}
in the memory space $h^{\otimes N}$, such that each mapping $f: \tilde N \rightarrow \tilde n$, referred to as a \textit{configuration of spins}, labels the pure state $|f\rangle \in h^{\otimes N}$ of the form
\begin{equation}\label{konfiguracja}
|f\rangle = | i_1 \rangle \otimes | i_2 \rangle \otimes \ldots \otimes | i_N \rangle \quad (i_j \in \tilde n,\mbox{ for } j \in \tilde N).
\end{equation}
The set $ \tilde n^{\tilde N} $ is referred to as the \textit{initial}, or \textit{calculational} basis in $h^{\otimes N}$. Eq. (\ref{konfiguracja}) implies that any basis state $|f\rangle\in h^{\otimes N}$ is separable and, moreover, each qunit $h_j$, $j \in \tilde N$, is in a definite pure state $|i_j\rangle, \; i_j \in \tilde n$.
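A configuration state (\ref{konfiguracja}) is straightforward to build numerically as a Kronecker product of single-qunit basis vectors. The following is an illustrative NumPy sketch (not part of the construction developed in this paper):

```python
import numpy as np

def basis_state(f, n):
    """|f> = |i_1> (x) ... (x) |i_N> as a vector in C^{n^N}; letters i_j are 1-based."""
    e = np.eye(n)                       # rows are the single-qunit basis vectors
    psi = np.array([1.0])
    for i in f:
        psi = np.kron(psi, e[i - 1])
    return psi

# Configuration f = (1, 2, 1) of N = 3 qubits (n = 2).
psi = basis_state((1, 2, 1), 2)
print(psi.nonzero()[0])                 # the single occupied index
```

The resulting vector has exactly one nonzero entry, reflecting the full separability of calculational basis states.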
The Schur-Weyl duality \cite{lul2,schur_1927,weyl_1946} is presented in terms of actions,
$A:\Sigma_N \times h^{\otimes N} \rightarrow h^{\otimes N}$ and $B:U(n) \times h^{\otimes N} \rightarrow h^{\otimes N}$, of two dual groups on the memory space $h^{\otimes N}$. We specify these actions in the calculational basis $ \tilde n^{\tilde N} $. The action $A$ of $\Sigma_N$ permutes the qunits according to the formula
\begin{equation}\label{dzialanie_A}
A(\sigma) = {f \choose f \circ \sigma^{-1}}, \quad f \in \tilde n^{\tilde N}, \quad \sigma \in \Sigma_N,
\end{equation}
whereas the action $B$ of $U(n)$ transforms uniformly the entry of each qunit, which results in a multilinear transformation
\begin{equation}\label{dzialanie_B}
\begin{array}{l}
B(a)|f\rangle = \left(a|i_1\rangle\right)\otimes (a|i_2\rangle)\otimes \ldots \otimes (a|i_N\rangle) =
\\
~~~~~~~~~~~~=\sum_{f'\in \tilde n^{\tilde N}} a_{i_1^{'}\, i_1} \ldots a_{i_N^{'}\, i_N} |f'\rangle, \quad f\in \tilde{n}^{\tilde{N}}, \quad a\in U(n)\\
\end{array}
\end{equation}
where
$a= (a_{i^{'}\, i} | i^{'}, i\in \tilde n) \in U(n)$, and $f'=(i_1^{'}, \ldots, i_N^{'}) \in \tilde n^{\tilde N} $. These two actions mutually centralize, i.e.
\begin{equation}
A(\sigma) B(a) |f\rangle = B(a) A(\sigma) |f\rangle, \quad \sigma \in \Sigma_N,\quad a\in U(n),\quad f\in \tilde n^{\tilde N},
\end{equation}
which leads to a unique decomposition of the memory space
\begin{equation}
h^{\otimes N}= \bigoplus_{\lambda \in D(N,n)} \mathcal{H}^\lambda
\end{equation}
into sectors $\mathcal{H}^\lambda$, labelled by partitions $\lambda \in D(N,n)$. The set $D(N,n)$ denotes all partitions $\lambda$ of the integer $N$ into no more than $n$ parts, i.e.
$\lambda=(\lambda_1, \ldots, \lambda_n), \; \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n \geq 0, \; \lambda_1+\lambda_2+ \ldots + \lambda_n = N$. Each sector $\mathcal{H}^\lambda$ is irreducible under the action of the direct product group $U(n)\times \Sigma_N$, which leads to the decomposition
\begin{equation}\label{rozkladH}
\mathcal{H}^\lambda = V^\lambda \otimes W^\lambda,
\end{equation}
where $V^\lambda$ and $W^\lambda$ are the carrier spaces of the irreps $D^\lambda$ of $U(n)$ and $\Delta^\lambda$ of $\Sigma_N$, respectively. At the level of bases, it reads
\begin{equation}
D^\lambda(a) |V^\lambda\, t\rangle = \sum_{t' \in WT(\lambda,n)} D_{t' t}^\lambda (a) |V^\lambda\, t'\rangle, \quad t \in WT(\lambda,n),
\end{equation}
and
\begin{equation}
\Delta^\lambda(\sigma) |W^\lambda\, y\rangle = \sum_{y' \in SYT(\lambda)} \Delta_{y' y}^\lambda (\sigma) |W^\lambda\, y'\rangle, \quad y \in SYT(\lambda),
\end{equation}
for the irrep $D^\lambda$ and $\Delta^\lambda$, respectively, with the sets $WT(\lambda,n)$ and $SYT(\lambda)$ labelling the corresponding irreducible basis vectors. Thus
$t \in WT(\lambda,n)$ is a Weyl tableau, or a semistandard Young tableau of the shape $\lambda$ in the alphabet $\tilde n$ of spins, and $y \in SYT(\lambda)$ is a standard Young tableau of the same shape $\lambda$ in the alphabet $\tilde N$ of nodes, whereas $D_{t' t}^\lambda (a)$ and $\Delta_{y' y}^\lambda (\sigma)$ denote the relevant matrix elements. In this way, the duality (\ref{dzialanie_A})-(\ref{rozkladH}) imposes in the memory space $h^{\otimes N}$ the basis
\begin{equation}\label{baza_SW}
\begin{array}{l}
b_{SW} = \left\{ |\lambda\, t\, y\rangle = |V^\lambda\, t\rangle \otimes |W^\lambda\, y\rangle \;\right\}
\\[5pt] \lambda \in D(N,n), \; t \in WT(\lambda,n), \; y \in SYT(\lambda),
\end{array}
\end{equation}
referred hereafter to as \textit{the Schur-Weyl basis}.
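As a numerical sanity check, the mutual centralization of the actions $A$ and $B$ can be verified on a small random instance. The sketch below is illustrative NumPy code (the axis-permutation convention is chosen for simplicity and may differ from Eq. (\ref{dzialanie_A}) by an inverse of $\sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 3                      # qubits, three nodes

# Random state in h^{(x)N}, stored as an (n,)*N tensor.
psi = rng.standard_normal((n,) * N) + 1j * rng.standard_normal((n,) * N)

# Random unitary a in U(n), obtained from a QR decomposition.
a, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

def A(sigma, psi):
    """Permutation action: permute the tensor factors."""
    return np.transpose(psi, axes=sigma)

def B(a, psi):
    """Diagonal U(n) action: apply the same a to every factor."""
    for axis in range(psi.ndim):
        psi = np.tensordot(a, psi, axes=([1], [axis]))
        psi = np.moveaxis(psi, 0, axis)
    return psi

sigma = (2, 0, 1)                # a 3-cycle on the nodes
lhs = A(sigma, B(a, psi))
rhs = B(a, A(sigma, psi))
print(np.allclose(lhs, rhs))
```

Since $B(a)$ applies the same one-site unitary to every tensor factor, it commutes with any permutation of the factors, which is exactly the centralization property underlying the decomposition above.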
It is worthwhile to compare these two bases, the initial $ \tilde n^{\tilde N} $ and that of Schur-Weyl $b_{SW}$, in the memory space $h^{\otimes N}$. To this purpose, one distinguishes two kinds of variables, internal and positional. The former are associated with the unitary degrees of freedom within a qunit $h$, whereas the latter relate to the labels of qunits within the memory space, and thus to the alphabet $\tilde N$ of nodes. The calculational basis is \textit{local}: for any configuration $f \in \tilde n^{\tilde N} $, an individual qunit $j \in \tilde N$ is in a definite state $|i_j\rangle \in h$, $i_j \in \tilde n$. The Schur-Weyl basis is \textit{separable} with respect to these two kinds of variables: each Weyl tableau $t \in WT(\lambda,n), \; \lambda \in D(N,n)$, is associated with a \textit{collective} internal variable, composed as an $N$-th rank tensor along the distribution of the letters of the alphabet $\tilde n$ of spins in the Weyl tableau $t$, whereas each standard Young tableau $y \in SYT(\lambda), \; \lambda \in D(N,n)$, involves a symmetrized combination of positional labels of qunits along the prescription coded in the standard Young tableau $y$. Clearly, the symmetrization procedure associated with the symmetric group $\Sigma_N$ implies that the Schur-Weyl basis is \textit{nonlocal}. It is worth mentioning, however, that some information on the localization of qunits still remains, in the form of ``symmetrized localization'' of each letter $j \in \tilde N$ of the alphabet of nodes, seen as a box in the standard Young tableau $y$. This information on the symmetrized localization of qunits is fully reflected in the spectrum of the Jucys-Murphy operators \cite{lul20,lul22,lul23,lul24} in the group algebra of $\Sigma_N$. A brief comparison of the two bases in the memory space $h^{\otimes N}$ is presented in Table \ref{tabela1}.
\begin{table}[h]
\caption{A comparison between the calculational and Schur-Weyl bases in the memory space $h^{\otimes N}$.}
\centering
\begin{tabular}{l|l|l}
\hline
~~~~~~~~~~~$\setminus$ Basis & calculational & Schur-Weyl \\
Variable ~$\setminus$ & $ \tilde n^{\tilde N} =\{ f:\tilde N \rightarrow \tilde n \} $ & $ b_{SW} =\{ |\lambda\, t\, y \rangle \} $\\
\hline
internal & individual & collective (tensorial) \\
positional & localised & symmetrized \\
\hline
\end{tabular}
\label{tabela1}
\end{table}
\subsection{The Schur-Weyl states}
According to quantum mechanics, it is clear that the elements of the Schur-Weyl basis (\ref{baza_SW}) can be presented as linear combinations of the calculational basis elements (\ref{baza_konfiguracji}) (cf. also Tab. \ref{tabela1})
\begin{equation}\label{stany_SW}
|\lambda \, t \, y \rangle \; = \; \sum_{f \in \tilde n^{\tilde N}}
\langle f | \lambda t y \rangle \; |f \rangle.
\end{equation}
From this point of view, the Schur-Weyl basis elements $|\lambda\, t\, y\rangle$ can be seen as states with amplitudes $\langle f | \lambda t y \rangle$ and symmetry described by the Weyl tableau $t$ and the Young tableau $y$.
To work with these states in an optimal way, it is important to find their irreducible representation. To do so, let us consider the action $A$ of the symmetric group in a purely combinatorial manner. This action decomposes the set of all magnetic configurations $\tilde n^{\tilde N}$ into orbits of the symmetric group
\begin{equation}\label{mh11}
\mathcal{O}_\mu = \{ f \circ \sigma^{-1} | \, \sigma \in \Sigma_N \}, \;\;\; \mu \vDash N
\end{equation}
labelled by \emph{compositions} $\mu$
\footnote{A composition $\mu$ of a number $N$, $\mu \vDash N$ in short, is defined by a sequence of non-negative integers $(\mu_1, \mu_2, \ldots, \mu_n), \; \mu_i \in \mathbb{N}_{\geq 0}$ fulfilling the condition
$
\sum_{i\in \tilde n} \mu_i = N.
$
}
of the number $N$.
The parts $\mu_i$ of the composition $\mu$ are defined by
\begin{equation}\label{mh12}
\mu_i = |\{ i_j = i \, | \, j \in \tilde N \}|, \,\,\, i \in \tilde n
\end{equation}
and correspond to the number of nodes occupied by the state $|i\rangle, \; i \in \tilde n$, for each $f \in \mathcal{O}_\mu$.
Restriction of the action $A$ to the orbit $\mathcal{O}_\mu$ gives \emph{the transitive representation} of the group $\Sigma_N$
\begin{equation}\label{mh14}
\underbrace{A \big |_{\mathcal{O}_\mu}}_{\mbox{\scriptsize spanned on magnetic configurations}} \equiv \underbrace{R^{\Sigma_N:\Sigma^\mu}}_{\mbox{\scriptsize spanned on left cosets of } \Sigma_N}
\end{equation}
with the stabiliser
$
\Sigma^\mu = \Sigma_{\mu_1} \times \Sigma_{\mu_2} \times \ldots \times \Sigma_{\mu_n}
$
being a Young's subgroup \cite{lul16,sagan}. This implies that $R^{\Sigma_N : \Sigma^\mu}$ can be spanned on the set of left cosets of the symmetric group $\Sigma_N$ with respect to the subgroup $\Sigma^\mu$, or on the set of configurations of the orbit $\mathcal{O}_\mu$. Thus one can write
\begin{equation}\label{mh16}
|\mathcal{O}_\mu| = \frac{|\Sigma_N|}{|\Sigma^\mu|} = \frac{N!}{\prod_{i \in \tilde n} \mu_i !},
\end{equation}
i.e. the number of elements of the orbit $\mathcal{O}_\mu$ is equal to the number of cosets of the group $\Sigma_N$ with respect to the Young's subgroup $\Sigma^\mu$.
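The weight (\ref{mh12}) and the orbit size (\ref{mh16}) are easy to check by direct enumeration for a small system. The following is an illustrative Python sketch (the helper names are ours):

```python
from collections import Counter
from itertools import permutations
from math import factorial, prod

def weight(f, n):
    """Composition mu of N: mu_i = number of nodes in state |i>, i = 1..n."""
    c = Counter(f)
    return tuple(c.get(i, 0) for i in range(1, n + 1))

def orbit_size(mu):
    """|O_mu| = N! / (mu_1! ... mu_n!)."""
    return factorial(sum(mu)) // prod(factorial(m) for m in mu)

# Configuration f = (1, 1, 2, 3) of N = 4 nodes with n = 3 spin letters.
f = (1, 1, 2, 3)
mu = weight(f, 3)                     # its weight, a composition of 4
orbit = set(permutations(f))          # the orbit O_mu, by direct enumeration
print(mu, orbit_size(mu), len(orbit))
```

For this configuration the formula gives $4!/(2!\,1!\,1!) = 12$, in agreement with the number of distinct rearrangements of $f$.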
On the other hand, the representation theory of the unitary groups uses the Kostka numbers $K_{\lambda \mu}$ ($\lambda,\; \mu$ are partitions) \cite{mcdonald, fulton} as the dimension of the carrier space of the representation $D^\lambda$ of $U(n)$, spanned on all Weyl tableaux of weight $\mu$. Thus, the transitive representation can be decomposed into irreps of the symmetric group
\begin{equation}\label{roz_kostka}
R^{\Sigma_N : \Sigma^\mu} \cong \sum_{ \lambda \unrhd \mu} K_{\lambda \, \mu} \,\, \Delta^{\lambda},
\end{equation}
where the sum runs over the partitions $\lambda$ greater than or equal to $\mu$ with respect to the dominance order
\footnote{
The partition $\lambda$ is greater than or equal to $\mu$ with respect to the order of domination, when
$
\lambda \unrhd \mu \Longleftrightarrow \sum_{i'=1}^{i} \lambda_{i'} \geq \sum_{i'=1}^{i} \mu_{i'}, \,\,\, i = 1, 2, \ldots ,
$
i.e. the sum of first $i$ parts of partition $\lambda$ is greater or equal than the respective sum of first $i$ parts of partition $\mu$ for every value of $i$.
},
and $\Delta^{\lambda}$ stands for the irrep of $\Sigma_N$ labelled by the partition $\lambda$.
The Kostka numbers $K_{\lambda \mu}$ in (\ref{roz_kostka}) are related to the (non-empty) intersection
\begin{equation}\label{k2}
lc_{\mathbb C} \mathcal O_\mu \cap \mathcal H^\lambda, \,\,\, \lambda \unrhd \mu
\end{equation}
of the transitive representation space $R^{\Sigma_N : \Sigma^\mu}$, spanned on the orbit $\mathcal{O}_\mu$, with a sector $\mathcal H^\lambda$ of the space with permutational symmetry $\lambda$.
In other words, they represent the number of different copies of $V^{\lambda}$ inside the space spanned on the orbit $\mathcal{O}_\mu$, resulting in the decomposition (\ref{rozkladH}).
The equation (\ref{roz_kostka}), written at the level of representations, can now be specified at the level of bases in the form
\begin{equation}\label{rozKostki}
|\mu \, \lambda \, t \, y \rangle \hspace{-3pt} = \hspace{-5pt} \sum_{f \in \mathcal{O}_\mu}
\langle \mu f | \lambda t y \rangle \;
|\mu f \rangle,
\end{equation}
where the probability amplitudes $\langle \mu f | \lambda t y \rangle$ provide an irreducible representation of the Schur-Weyl state $|\mu \, \lambda \, t \, y \rangle$ in terms of magnetic configurations.
We refer to Eq. (\ref{rozKostki}) as the irreducible Schur-Weyl states, because it converts the initial basis of configurations $\mathcal O_\mu$ into the irreducible basis
$
\{ |\mu \, \lambda \, t \, y \rangle \}
$
of the Schur-Weyl duality with the appropriate cross-section (\ref{k2}).
The index $\mu$ in Eq. (\ref{rozKostki}) indicates that we restrict ourselves to the orbit $\mathcal O_\mu$ of the symmetric group.
The standard method of determination of the Schur-Weyl state amplitudes is based on the definition (see for example \cite{bohr_mottelson})
\begin{equation}\label{spr20}
|\lambda \, t \, y \rangle =\mbox{const} \sum_{\sigma \in \Sigma_N} \Delta_{y, y_t}^\lambda (\sigma) A(\sigma) |f_0\rangle,
\end{equation}
where $A(\sigma) |f_0\rangle = |f_0 \circ \sigma^{-1}\rangle$ and $\Delta_{y, y_t}^\lambda(\sigma)$ is the appropriate matrix element of the irrep $\Delta^\lambda$ for $\sigma \in \Sigma_N$. As seen in Eq. (\ref{spr20}), this definition requires roughly $N!$ operations (the sum runs over all elements of the symmetric group), so in practice this method can be applied only to small systems consisting of at most a few dozen atoms. The method of Schur-Weyl state construction which we propose in the following parts of the paper is independent of the size of the system.
\section{Robinson-Schensted-Knuth algorithm for Gelfand-Tsetlin pattern}
The Robinson-Schensted-Knuth ({\rm RSK}) algorithm \cite{robinson,schensted} and its generalization \cite{knuth,knuth1998} have many applications; see for example its utility in
representation theory \cite{bjorner,kazhdan}, algebra \cite{mcdonald,sagan},
combinatorics \cite{fulton,stanley}
and physics \cite{dorotaPawel2015,dorotaPawel2018}.
Originally, it establishes a bijective correspondence between the symmetric group elements and pairs of Weyl and Young tableaux of equal shape.
In the spin system representation it provides a bijection
\begin{equation}
RSK : \tilde n^{\tilde N} \rightarrow b_{SW}
\end{equation}
between the initial basis $\tilde n^{\tilde N}$ (\ref{baza_konfiguracji}) of magnetic configurations and the irreducible basis $b_{SW}$ (\ref{baza_SW}) of the Schur-Weyl duality.
In this section we present a version of the RSK algorithm in the language of Gelfand-Tsetlin (GT) patterns \cite{gelfand,gelfand1,louck2,louck3}.
Generally speaking, we substitute tableau pairs by double Gelfand patterns \cite{lul18}.
These new irreducible basis elements carry the same combinatoric information, but are better adjusted to the Schur-Weyl duality approach and reflect the physics of spin systems very well \cite{lul18}.
\subsection{Gelfand-Tsetlin patterns}
It is known that the irreducible representations $D^\lambda$ of the unitary group $\mathrm{U}(n)$ are classified by partitions $\lambda \in D_W(N,n) $. For consistency of further notation, such a partition is denoted as
\begin{equation}\label{gu1}
\lambda \equiv [m]_n=[m_{1n} \ldots m_{nn}].
\end{equation}
The standard basis of the carrier space $V^{[m]_n}$ of the irreducible representation $D^{[m]_n}$ is denoted by $\mathrm{GT}([m]_n, \tilde n)$, so that
\begin{equation}\label{gu11a}
V^{[m]_n}=lc_{\mathbb{C}} \; \mathrm{GT}([m]_n, \tilde n).
\end{equation}
Gelfand-Tsetlin patterns are adapted to the chain of unitary subgroups
\begin{equation}\label{gu2}
\mathrm{U}(1) \subset \mathrm{U}(2) \subset \cdots \subset \mathrm{U}(n-1) \subset \mathrm{U}(n),
\end{equation}
defined along the letters of the alphabet $\tilde n$ of spins. Consecutive restrictions of the irrep $D^{[m]_n}$ to the subgroups $\mathrm{U}(i)$ along the chain (\ref{gu2}) (taken from right to left, i.e. $i=n,n-1, \ldots, 2,1$) are associated with partitions $[m_i]_i = (m_{1i} \ldots m_{ii})$, each corresponding to an irreducible representation $D^{[m_i]_i}$ of the intermediate subgroup $\mathrm{U}(i)$. The partitions $[m_i]_i$ can be arranged in a graphic way as
\begin{equation}\label{gu5}
\begin{array}{@{}llllllll|l@{}}
m_{1n} & & m_{2n} & & \cdots & m_{n-1 n} & & m_{nn} & \mathrm{U}(n) \\
& m_{1n-1} & & m_{2 n-1} & & \cdots & m_{n-1 n-1} &&\mathrm{U}(n\!-\!1) \\
& & \ddots & & \vdots & & \iddots && \vdots \\
& & m_{1 3} & & m_{2 3} & & m_{33} &&\mathrm{U}(3) \\
& & & m_{1 2} & & m_{2 2} & &&\mathrm{U}(2) \\
& & & & m_{1 1} && &&\mathrm{U}(1) \\
\end{array}
\end{equation}
which is known as the Gelfand-Tsetlin pattern (or triangle). According to the Weyl ramification rule, each GT pattern $(m)$ has non-negative entries $m_{i,j}$ satisfying \emph{the betweenness conditions}
\begin{equation}\label{gu1a}
m_{i,j} \geq m_{i,j-1} \geq m_{i+1,j}, \quad 1 \leq i < j \leq n.
\end{equation}
Each GT pattern $(m)$ whose first row coincides with $[m]_n$ corresponds to a unique ray in $V^{[m]_n}$, so that the set of all such GT patterns, $\mathrm{GT}([m]_n, \tilde n)$, yields an orthonormal basis for the irrep $D^{[m]_n}$ of $\mathrm{U}(n)$.
We choose the GT patterns $(m)$ instead of semistandard Weyl tableaux $t$ or standard Young tableaux $y$, since the triangular shape (\ref{gu5}) of $(m)$ admits a transparent presentation of the selection rules resulting from the Weyl ramification. Namely, each row $i \in \tilde n$ of $(m)$ labels an irrep of $U(i)$, which is indicated on the right side of the triangle (\ref{gu5}), and the entries of consecutive rows satisfy the betweenness conditions (\ref{gu1a}).
Moreover, we write the basis state corresponding to $(m)$ in the form
\begin{equation}\label{gu7}
|(m)\rangle
=
\Bigg|
\left (
\begin{array}{@{}c@{}}
[m]_n \\
(m)_{n-1} \\
\end{array}
\right )
\Bigg \rangle,
\end{equation}
to make transparent the distinction between the label $[m]_n$ (square brackets) of the irrep of $U(n)$ (the first row of the triangle $(m)$) and its basis function $(m)_{n-1}$ (parentheses), i.e. the remaining $n-1$ rows of $(m)$. In this notation, the
orthogonality condition for the basis $\mathrm{GT}([m]_n, \tilde n)$ reads
\begin{equation}\label{gu6}
\Bigg\langle {[m]_n \choose (m')_{n-1}} \Bigg| {[m]_n \choose (m)_{n-1}} \Bigg \rangle = \delta_{(m')_{n-1} (m)_{n-1}}
\end{equation}
and the dimension formula for $D^{[m]_n}$ as
\begin{equation}\label{gu10}
\mbox{dim} D^{[m]_n} = \frac{ \prod_{1 \leq i < j \leq n} (p_{i n}-p_{j n})}{1! 2! \ldots (n-1)!},
\end{equation}
where $p_{ij}= m_{ij}+j -i$ is known as the \emph{partial hook} corresponding to the $(i,j)$ entry of the GT pattern $(m)$.
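Both the betweenness conditions (\ref{gu1a}) and the dimension formula (\ref{gu10}) can be cross-checked by brute force for small patterns. The sketch below is illustrative Python (not part of the method developed in this paper); for $[m]_3 = [2,1,0]$ the formula gives 8, the dimension of the adjoint representation of $U(3)$:

```python
from itertools import product
from math import factorial, prod

def dim_irrep(m):
    """Weyl dimension formula via partial hooks p_i = m_i + n - i."""
    n = len(m)
    p = [m[i] + n - (i + 1) for i in range(n)]        # 1-based i in the text
    num = prod(p[i] - p[j] for i in range(n) for j in range(i + 1, n))
    return num // prod(factorial(k) for k in range(n))

def gt_patterns(top):
    """All GT patterns with the given first row, built row by row downwards."""
    if len(top) == 1:
        return [[list(top)]]
    patterns = []
    # Entries of the next row interlace the current one (betweenness).
    ranges = [range(top[i + 1], top[i] + 1) for i in range(len(top) - 1)]
    for row in product(*ranges):
        for rest in gt_patterns(row):
            patterns.append([list(top)] + rest)
    return patterns

print(dim_irrep([2, 1, 0]), len(gt_patterns((2, 1, 0))))
```

The count of patterns produced by the betweenness conditions matches the dimension formula, as it must, since the GT patterns form an orthonormal basis of the irrep.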
We recall that the quantum state corresponding to the GT pattern $(m)$ can be presented, in a combinatorially equivalent way, by a semistandard Weyl tableau $t$ in the alphabet $\tilde n$ of spins, as follows. Let the $i$-th row of the tableau $t$ have the form
$$
\underbrace{i \ldots i}_{\tau_{ii}}
\underbrace{i+1 \ldots i+1}_{\tau_{i,i+1}}
i+2 \ldots n-1
\underbrace{n \ldots n}_{\tau_{in}},
$$
so that $\tau_{ik}$, $1 \leq i \leq k \leq n$, is the occupation number of the letter $k \in \tilde n$ in the $i$-th row of $t$ ($\tau_{ik}=0$ for $i > k$ by virtue of the semistandardness of $t$).
Then clearly
\begin{equation}\label{r27}
\sum_{i \in \tilde n} \tau_{ik} = \mu_k, \quad k \in \tilde n
\end{equation}
and
\begin{equation}\label{r28}
\sum_{k \in \tilde n} \tau_{ik} = \lambda_i, \quad i \in \tilde n
\end{equation}
determine the weight $\mu=(\mu_1, \ldots, \mu_n)$ and the shape $\lambda=(\lambda_1, \ldots, \lambda_n)$ of the tableau $t$. The equivalence between the tableau $t$ and the pattern $(m)$ is given by
\begin{equation}\label{r29}
\tau_{i,k} =
\left\{
\begin{array}{l}
m_{ik}-m_{i,k-1} \mbox{ for } 1 \leq i < k, \\
m_{ii} \mbox{ for } i = k,\\
0 \mbox{ for } i > k,\\
\end{array}
\right.
\end{equation}
together with the inverse transformation
\begin{equation}\label{r30}
m_{ik} = \sum_{1 \leq k' \leq k} \tau_{ik'}.
\end{equation}
\subsection{The Schensted insertion for Gelfand-Tsetlin patterns}
It is known that each semistandard Weyl tableau $t$ can be constructed recursively with respect to the consecutive letters $j=1,2, \ldots, N$ of the alphabet $\tilde N$ of nodes, in accordance with the RSK algorithm. At the $j$-th step of this recursion, $j \in \tilde N$, one applies the Schensted insertion, i.e. inserts a given letter $k \in \tilde n$ of the alphabet $\tilde n$ of spins into the intermediate tableau $t^{(j-1)}$, whose shape $\lambda^{(j-1)} = \mbox{shape}\,( t^{(j-1)})$ is a partition of $j-1$, along the well-known rules of the RSK algorithm. We adapt these rules here for application directly to the GT pattern $(m)$ equivalent to the tableau $t$ (cf. (\ref{r30})). In order to insert the number $k$ into $(m)$ we follow the algorithm:
\begin{enumerate}
\item
mark the element $m_{1k}$ (the first element in the $k$-th row of the GT pattern) and increase it by one, i.e.
$
m_{1k} := m_{1k}+1;
$
\item
then mark a subtriangle
$$
\begin{array}{ccc}
m_{1 k+1} & & m_{2 k+1} \\
& m_{1k} & \\
\end{array}
$$
where $m_{1k}$ is the element which has been increased in the first step;
\item
next check: if $m_{1k} > m_{1 k+1}$, then $m_{1 k+1} := m_{1 k+1} + 1$; otherwise $m_{2 k+1} := m_{2 k+1} + 1$;
\item
the element which has been increased becomes the starting point of a new subtriangle (like in step 2)
$$
\begin{array}{ccc}
m_{i j+1} & & m_{i+1 j+1} \\
& m_{ij} & \\
\end{array}
$$
where $m_{ij}$ is the element which has been increased by one in the previous step;
\item
check again:
if $m_{ij} > m_{i j+1}$, then $m_{i j+1} := m_{i j+1} + 1$; otherwise $m_{i+1 j+1} := m_{i+1 j+1} + 1$;
once more, the element which has been increased becomes the starting point of a new subtriangle;
\item
we repeat the procedure until we reach the $n$-th row of the GT pattern.
\end{enumerate}
The described procedure resembles a kind of \emph{bubbling}: we obtain a travel path through the entries $m_{ij}$ of the GT pattern, from row $k$ to row $n$, along which the respective $m_{ij}$ are increased by one.
\\~~\\
\noindent \textbf{Example}\\
Below we present an example of the Schensted insertion for a Gelfand-Tsetlin pattern. Suppose we have the triangle
$$
{\footnotesize
\begin{array}{@{}lllllllll@{}}
7 & & 4 & & 2 & & 1 & & 0 \\
& 5 & & 2 & & 2 & & 0 \\
& & 5 & & 2 & & 0 & & \\
& & & 2 & & 0 & & & \\
& & & & 2 & & & & \\
\end{array}
}
$$
and want to insert the letter $k=2$ into it.
First we mark the element $m_{12}$, increase its value by one, and compare it with $m_{13}$ and $m_{23}$.
Since $m_{12} \leq m_{13}$ ($3 \leq 5$), we increase $m_{23}$ by one and compare it with $m_{24}$ and $m_{34}$.
Since $m_{23} > m_{24}$ ($3 > 2$), we increase $m_{24}$ by one and compare it with $m_{25}$ and $m_{35}$. Since $m_{24} \leq m_{25}$ ($3 \leq 4$), we increase $m_{35}$ by one and reach the top of the GT pattern. Finally, we obtain the new GT pattern
$$
{\footnotesize
\begin{array}{@{}lllllllll@{}}
7 & & 4 & & \boxed{3} & & 1 & & 0 \\
& 5 & & \boxed{3} & & 2 & & 0 \\
& & 5 & & \boxed{3} & & 0 & & \\
& & & \boxed{3} & & 0 & & & \\
& & & & 2 & & & & \\
\end{array}
}
$$
where the boxes mark the bubbling path. The shape of the bubbling path is strictly determined by the standardness (betweenness) conditions of the triangle.
The resulting triangle represents a basis element of the irrep $D^{(74210) + e_5(3)}$ of $U(5)$, where $e_5(3)=(0,0,1,0,0)$.
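The insertion procedure can be transcribed directly into Python (a sketch with our own function name; the pattern is stored as rows of increasing length, so that `gt[j-1]` holds row $j$), and it reproduces the worked example above:

```python
def gt_insert(gt, k):
    """Schensted insertion of the letter k into a GT pattern, following
    the bubbling procedure: increase m_{1k}, then propagate upwards."""
    m = [row[:] for row in gt]
    n = len(m)
    i = 1                      # 1-based position of the last increased entry
    m[k - 1][0] += 1           # step 1: m_{1k} := m_{1k} + 1
    for j in range(k, n):      # compare row j with row j+1
        if m[j - 1][i - 1] > m[j][i - 1]:
            m[j][i - 1] += 1   # increase m_{i, j+1}
        else:
            m[j][i] += 1       # increase m_{i+1, j+1}
            i += 1
    return m

gt = [[2], [2, 0], [5, 2, 0], [5, 2, 2, 0], [7, 4, 2, 1, 0]]
assert gt_insert(gt, 2) == [[2], [3, 0], [5, 3, 0], [5, 3, 2, 0], [7, 4, 3, 1, 0]]
```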
\subsection{Robinson-Schensted-Knuth algorithm}\label{rskalg}
In order to show explicitly that the above insertion algorithm is bijective, one has to construct the reverse procedure. To achieve that, the number of the row gaining the new cell has to be recorded at each step. This is resolved by adding an additional GT triangle, which plays the role of the Young tableau in the classical RSK algorithm.
It leads to a double triangle of the form
\begin{equation}\label{rsg2}
\left (\begin{array}{c}
(y)_{n-1} \\
\mbox{[$m$]}_n\\
(t)_{n-1} \\
\end{array}
\right )
\end{equation}
which consists of two GT patterns with the same partition $\mbox{[$m$]}_n$, the triangle $(y)_{n-1}$ being reflected in a horizontal plane. The symbol $(m)_{j}$ marks rows 1 to $j$ of the GT pattern $(m)$, and $[m]_j$ denotes the $j$-th row of $(m)$. In other words, the lower triangle corresponds to the semistandard Weyl tableau and the top triangle to the standard Young tableau of the classical RSK algorithm.
The RSK algorithm in terms of GT patterns for the spin configuration $f$ can be defined as follows:
\begin{enumerate}
\item[a)] write down the configuration $f$ in a two-row notation
\begin{equation}\label{krs5}
f={1\; 2\; \ldots N\; \choose i_1 i_2 \ldots i_N}
\end{equation}
where the top row contains the consecutive node numbers (alphabet of nodes) and the lower one the respective single-node states (alphabet of spins);
\item[b)]
draw the zero double GT pattern
\begin{equation}\label{rsg3}
\left (\begin{array}{c}
(0)_{n-1} \\
{[0]}_n\\
(0)_{n-1} \\
\end{array}
\right ),
\end{equation}
where the symbol $[0]_n$ denotes row $n$ of the triangle, consisting of $n$ zeros, i.e. $[\underbrace{0,0, \ldots, 0}_{n \mbox{{ \scriptsize times}}}]$;
\item[c)]
Then carry out the insertion of the successive elements (using the insertion procedure described above) from the lower row of the configuration $f$ into the lower GT pattern, and analogously from the top row of the configuration $f$ into the top GT pattern.
It must be stressed, however, that the insertion into the upper triangle starts by increasing the element $m_{i_1 i_2}$, where $i_1$ is the index of the part of the partition $[m]_n$ which has been increased after the insertion of the lower letter, whereas $i_2$ is the letter being inserted (from the top row of the configuration (\ref{krs5})).
\end{enumerate}
\noindent
{\bf Example}
\\
Let us consider the spin configuration of the form
$$
|f\rangle = |31232\rangle = { 12345 \choose 31232}.
$$
We are looking for the double GT pattern which is in one-to-one correspondence with $f$ according to the RSK algorithm.
First we prepare an empty configuration $|\emptyset \rangle$ and the corresponding zero double GT pattern; next we insert the letters of the configuration $|f\rangle$ consecutively, in the following way:
\\\\
\footnotesize
$
\begin{array}{@{}lllllllll@{}}
& & & & 0 & & & & \\
& & & 0 & & 0 & & & \\
& & 0 & & 0 & & 0 & & \\
& 0 & & 0 & & 0 & & 0 \\
0 & & 0 & & 0 & & 0 & & 0 \\
& 0 & & 0 & & 0 & & 0 \\
& & 0 & & 0 & & 0 & & \\
& & & 0 & & 0 & & & \\
& & & & 0 & & & & \\
\end{array}
$
$
\begin{array}{c}
{1 \choose 3}\\
\rightarrow\\
\end{array}
$
$
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 0 & & & \\
& & 1 & & 0 & & 0 & & \\
& 1 & & 0 & & 0 & & 0 \\
1 & & 0 & & 0 & & 0 & & 0 \\
& 1 & & 0 & & 0 & & 0 \\
& & 1 & & 0 & & 0 & & \\
& & & 0 & & 0 & & & \\
& & & & 0 & & & & \\
\end{array}
$
$
\begin{array}{c}
{2 \choose 1}\\
\rightarrow\\
\end{array}
$
\\[8pt]
$
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 1 & & & \\
& & 1 & & 1 & & 0 & & \\
& 1 & & 1 & & 0 & & 0 \\
1 & & 1 & & 0 & & 0 & & 0 \\
& 1 & & 1 & & 0 & & 0 \\
& & 1 & & 1 & & 0 & & \\
& & & 1 & & 0 & & & \\
& & & & 1 & & & & \\
\end{array}
$
$
\begin{array}{c}
{3 \choose 2}\\
\rightarrow\\
\end{array}
$
$
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 1 & & & \\
& & 2 & & 1 & & 0 & & \\
& 2 & & 1 & & 0 & & 0 \\
2 & & 1 & & 0 & & 0 & & 0 \\
& 2 & & 1 & & 0 & & 0 \\
& & 2 & & 1 & & 0 & & \\
& & & 2 & & 0 & & & \\
& & & & 1 & & & & \\
\end{array}
$
$
\begin{array}{c}
{4 \choose 3}\\
\rightarrow\\
\end{array}
$
\\[8pt]
$
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 1 & & & \\
& & 2 & & 1 & & 0 & & \\
& 3 & & 1 & & 0 & & 0 \\
3 & & 1 & & 0 & & 0 & & 0 \\
& 3 & & 1 & & 0 & & 0 \\
& & 3 & & 1 & & 0 & & \\
& & & 2 & & 0 & & & \\
& & & & 1 & & & & \\
\end{array}
$
$
\begin{array}{c}
{5 \choose 2}\\
\rightarrow\\
\end{array}
$
$
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 1 & & & \\
& & 2 & & 1 & & 0 & & \\
& 3 & & 1 & & 0 & & 0 \\
3 & & 2 & & 0 & & 0 & & 0 \\
& 3 & & 2 & & 0 & & 0 \\
& & 3 & & 2 & & 0 & & \\
& & & 3 & & 0 & & & \\
& & & & 1 & & & & \\
\end{array}
$
~~\\\\[12 pt]
And finally
\\
$$
{12345 \choose 31232} \longleftrightarrow
\begin{array}{@{}lllllllll@{}}
& & & & 1 & & & & \\
& & & 1 & & 1 & & & \\
& & 2 & & 1 & & 0 & & \\
& 3 & & 1 & & 0 & & 0 \\
3 & & 2 & & 0 & & 0 & & 0 \\
& 3 & & 2 & & 0 & & 0 \\
& & 3 & & 2 & & 0 & & \\
& & & 3 & & 0 & & & \\
& & & & 1 & & & & \\
\end{array}
$$
\normalsize
\\
According to (\ref{r29}), the obtained double GT pattern is mapped to the pair of tableaux ${\scriptsize (\Yvcentermath1 \young(122,33), \young(134,25))}$.
One can easily check that the classical RSK algorithm applied to $|f\rangle = |31232\rangle$ leads to the same result.
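As a cross-check, a short Python implementation of the classical RSK row insertion (the textbook version, not the GT form) reproduces this pair of tableaux for $f = 31232$:

```python
def rsk(word):
    """Classical RSK: returns the insertion tableau P and the recording
    tableau Q as lists of rows."""
    P, Q = [], []
    for pos, x in enumerate(word, start=1):
        row = 0
        while True:
            if row == len(P):               # new row at the bottom
                P.append([x]); Q.append([pos])
                break
            r = P[row]
            # leftmost entry strictly greater than x, if any
            idx = next((t for t, v in enumerate(r) if v > x), None)
            if idx is None:                 # x appends at the end of the row
                r.append(x); Q[row].append(pos)
                break
            r[idx], x = x, r[idx]           # bump and continue below
            row += 1
    return P, Q

P, Q = rsk([3, 1, 2, 3, 2])
assert P == [[1, 2, 2], [3, 3]]
assert Q == [[1, 3, 4], [2, 5]]
```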
We have shown that the RSK algorithm can be fully realized in the GT basis, which is better adapted to the symmetry of the spin systems.
This basis expresses all the selection rules imposed by both dual groups $\Sigma_N$ and $\mathrm{U}(n)$ through simple geometrical restrictions on the entries of the GT pattern (compare the betweenness conditions (\ref{gu1a})).
The double GT patterns can completely substitute for the pair $(t,y)$ of Weyl and Young tableaux.
The proposed approach, along with the ladder construction of spin nodes, is a key element in the construction of the Schur-Weyl state amplitudes.
\section{Construction of the Schur-Weyl states amplitudes}
The amplitudes $\langle f | \lambda t y \rangle$ of the state (\ref{rozKostki}) can be calculated using the ladder construction (see Fig. \ref{skladanie})
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig1.eps}
\end{center}
\caption{Scheme of the ladder coupling of consecutive nodes of a spin chain. $\lambda_{1..j}=\mbox{shape}\, t_{1..j}$ stands for the shape of the tableau $t_{1..j}$ at the $j$-th step of the coupling, whereas $(1)$ denotes the cell of a tableau (single-node state).} \label{skladanie}
\end{figure}
of the single-node spin states $1,2, \ldots, j, j+1, \ldots, N$ of the system. To maintain the symmetry, these couplings should be adjusted to the combinatorial growth of the Weyl tableau $t$ according to the RSK algorithm.
The addition of one node to the existing system prepared in a state $|\lambda_{1\ldots j-1}\, t_{1\ldots j-1} \rangle$ can be described in terms of the Wigner-Clebsch-Gordan coefficient of the form
\begin{equation}\label{wspWCG}
{\scriptsize
\left [
\begin{array}{ccc}
\lambda_{1\ldots j-1} & (1) & \lambda_{1\ldots j}\\
t_{1\ldots j-1} & f(j) & t_{1\ldots j}
\end{array}
\right ].
}
\end{equation}
This coefficient describes the addition of the $j$-th node (in the single-node state $f(j)$, represented by the second column) to an existing system of $j-1$ nodes (in the state $t_{1\ldots j-1}$, represented by the first column); the resulting system consists of $j$ nodes (in the state $t_{1\ldots j}$, represented by the third column).
Since the addition of nodes is a one-by-one process, one can replace the coefficient (\ref{wspWCG}) by matrix elements of a fundamental tensor operator \cite{lul18,louck1,louck4,louck4a} using the relation
\begin{equation}\label{fop}
\left [
\begin{array}{ccc}
\lambda_{1\ldots j-1} & (1) & \lambda_{1\ldots j}\\
t_{1\ldots j-1} & f(j) & t_{1\ldots j}
\end{array}
\right ] = \langle t_{1..j} | \hat F_{f(j), row(\lambda_{1..j} \setminus \lambda_{1..j-1})} | t_{1..j-1} \rangle
\end{equation}
where $row(\lambda_{1..j} \setminus \lambda_{1..j-1})$ denotes the row number of the cell which remains after deleting the cells of $\lambda_{1..j-1}$ from the shape $\lambda_{1..j}$, and $\hat F_{pq}$ is a fundamental tensor operator of the unitary group $U(n)$.
In general, the construction of the amplitude $\langle f | \lambda t y \rangle$ is governed by the step-by-step reduction of the Weyl tableau $t$ during the reverse RSK algorithm applied to the state $| \lambda t y \rangle$.
Each single step of this process, from $\lambda_{1\ldots j}$ to $\lambda_{1\ldots j-1}$, obeys all the selection rules involved in the reversed RSK algorithm, and thus contributes additively to the total value of the amplitude.
This process can be described in terms of a graph $\Gamma$, which gives a systematic description of all the possible ways of growth of the Weyl tableau $t$ out of the magnetic configuration $f$.
Such a graph is simple and directed, with the minimal (initial) vertex equal to the zero Gelfand-Tsetlin pattern (i.e. a triangle of the shape $\lambda$, filled with zeros), and the maximal (final) vertex equal to the pattern which corresponds (bijectively) to the Weyl tableau $t$.
Formally, graph $\Gamma$ consists of a set $GT$ of Gelfand-Tsetlin patterns as vertices and the set $\{f(i): i =1 \ldots N\}$ of single-node states which labels the edges (or arcs), such that $\Gamma = (GT, \{f(1), f(2), \ldots, f(N)\})$.
An edge $f(j)=(t_{12..j-1}, t_{12..j})$, with initial vertex $t_{12..j-1}$ and terminal vertex $t_{12..j}$, is constructed by inserting the single-node state (the letter) $f(j)$ into the initial vertex $t_{12..j-1}$ in such a way that we obtain the state $t_{12..j}$.
The construction process of the graph $\Gamma$ can be split into two stages.
First, as mentioned above, we read off the sequence of partitions $\lambda_{RS} = (\lambda = \lambda_{12\ldots N}=[m]_n, \lambda_{12\ldots N-1}=[m]_{N-1}, \ldots, \lambda_{12}=[m]_2, \lambda_{1}=[m]_1)$ from the inverse RSK algorithm applied to the state $|\lambda t y\rangle$, where $[m]_j$ is the $j$-th row of the Gelfand-Tsetlin pattern (see Sect. \ref{rskalg}), and $\lambda_{12\ldots j}$ is the shape of the Weyl (or Young) tableau at the $j$-th step of the reverse RSK algorithm.
Second, we construct the graph by \emph{inserting}, one by one, the consecutive letters $f(j)$ of the configuration $f=f(1) f(2) \ldots f(N)$ into the Gelfand-Tsetlin patterns, starting from the triangle consisting of zeros only.
The insertion of one letter $f(j)$ into the Gelfand-Tsetlin pattern $t_{1..j-1}$ increases by one the appropriate elements located in the rows $j$, $f(j)~\leq~j~\leq~n$, i.e.
\begin{equation}\label{k8}
\left (
\begin{array}{c}
[m]_n + e_n(\tau_n) \\
\mbox{[$m$]}_{n-1} + e_{n-1}(\tau_{n-1}) \\
\vdots \\
\mbox{[$m$]}_{f(j)} + e_{f(j)}(\tau_{f(j)})\\
(m)_{{f(j)}-1}\\
\end{array}
\right )
\end{equation}
where $\tau_j \in \{ 1,2, \ldots, j \}$, $[m]_j + e_j(\tau_j)$ denotes the $j$-th row of the Gelfand-Tsetlin pattern, and $e_j(\tau_j)$ is the vector of length $j$ with $1$ at position $\tau_j$ and zeros elsewhere.
To calculate the probability amplitude of adding the node $j$, prepared in the state $f(j)$, to the system of $j-1$ nodes prepared in the state $t_{1..j-1}$, we use fundamental tensor operators. These operators, in the Gelfand-Tsetlin basis representation, can be calculated using a technique called \emph{pattern calculus}, which determines matrix elements of tensor operators of the unitary groups with the help of symbolic diagrams and appropriate processing rules. The pattern calculus approach converts many complicated dependencies between the arguments of a state vector into \emph{obvious} geometrical limitations (betweenness conditions). Louck \cite{lul18} has shown that this kind of fundamental tensor operator can be calculated using the formula
\begin{equation}\label{fso}
\begin{array}{l}
\left.
\begin{picture}(1,1)
\raisebox{2pt}{ \put(0,0){\line(1,3){13}} \put(0,0){\line(1,-3){13}} }
\end{picture}
\;\;\;
\begin{array}{c}
\mbox{[$m$]}_n + e_n(\tau_n) \\
\mbox{[$m$]}_{n-1} + e_{n-1}(\tau_{n-1}) \\
\vdots \\
\mbox{[$m$]}_k + e_k(\tau_k)\\
(m)_{k-1}\\
\end{array}
\right |
\hat F_{k, \, \tau_n}
\left |
\begin{array}{l@{}}
[m]_n \\
\mbox{[$m$]}_{n-1} \\
\vdots \\
\mbox{[$m$]}_k \\
(m)_{k-1}\\
\end{array}
\right.\;\;\;\;\,
\begin{picture}(1,1)
\raisebox{2pt}{ \put(0,0){\line(-1,3){13}} \put(0,0){\line(-1,-3){13}} }
\end{picture}
=
\prod_{j=k+1}^{n} \mbox{sgn}(\tau_{j-1} - \tau_j)
\\ \\
\sqrt{
\left |
\frac
{\prod_{i=1, i \neq \tau_{j-1}}^{j-1} (p_{\tau_j,j} - p_{i,j-1})
\prod_{i=1, i \neq \tau_{j}}^{j} (p_{\tau_{j-1},j-1} - p_{i,j}+1)}
{\prod_{i=1, i \neq \tau_{j}}^{j} (p_{\tau_j,j} - p_{i,j})
\prod_{i=1, i \neq \tau_{j-1}}^{j-1} (p_{\tau_{j-1},j-1} - p_{i,j-1}+1)}
\right |
}
\cdot
\\
\sqrt{
\left |
\frac
{\prod_{i=1}^{k-1} (p_{\tau_k,k} - p_{i,k-1})}
{\prod_{i=1, i \neq \tau_{k}}^{k} (p_{\tau_k,k} - p_{i,k})}
\right |
}
\end{array}
\end{equation}
for $k \in \{2,3, \ldots, n-1 \}$, with the sign convention $\mbox{sgn}(x)=+1$ for $x \geq 0$ and $-1$ for $x<0$.
If $k=n$ the first factor of the rhs of Eq. (\ref{fso})
is equal to 1; while for $k=1$ the second factor of the rhs of Eq. (\ref{fso})
is equal to 1.
Here $k=f(j)$ denotes the added node.
The partial hook $p_{ij} = m_{ij}+j-i$, $e_i(j)$ is the unit vector of the length $i$ with $1$ on the position $j$, $[m]_i$ represents $i$-th row of Gelfand-Tsetlin pattern $(m)$, whereas $(m)_i$ denotes rows from 1 to $i$ of pattern $(m)$.
Equation (\ref{fso}) allows one to express any matrix element of a fundamental tensor operator in the basis of Gelfand-Tsetlin patterns.
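The matrix element (\ref{fso}) is straightforward to implement from the partial hooks. The Python sketch below (function names are ours; we adopt $\mbox{sgn}(0)=+1$, which is required to reproduce the signs) evaluates it and reproduces all the matrix elements of the worked example below, together with the final amplitude $\sqrt{3}/6$:

```python
from math import prod, sqrt, isclose

def sgn(x):                       # convention: sgn(0) = +1
    return 1 if x >= 0 else -1

def F_element(gt, k, tau):
    """Matrix element of F_{k, tau[n]} between the ket pattern gt (rows
    gt[0]..gt[n-1], gt[j-1] of length j) and the pattern obtained by
    adding e_j(tau[j]) to rows j = k..n."""
    n = len(gt)
    p = lambda i, j: gt[j - 1][i - 1] + j - i        # partial hooks of the ket
    val = 1.0
    for j in range(k + 1, n + 1):                    # first factor of (fso)
        tj, tj1 = tau[j], tau[j - 1]
        num = prod(p(tj, j) - p(i, j - 1) for i in range(1, j) if i != tj1) \
            * prod(p(tj1, j - 1) - p(i, j) + 1 for i in range(1, j + 1) if i != tj)
        den = prod(p(tj, j) - p(i, j) for i in range(1, j + 1) if i != tj) \
            * prod(p(tj1, j - 1) - p(i, j - 1) + 1 for i in range(1, j) if i != tj1)
        val *= sgn(tj1 - tj) * sqrt(abs(num / den))
    if k > 1:                                        # second factor of (fso)
        tk = tau[k]
        num = prod(p(tk, k) - p(i, k - 1) for i in range(1, k))
        den = prod(p(tk, k) - p(i, k) for i in range(1, k + 1) if i != tk)
        val *= sqrt(abs(num / den))
    return val

# the matrix elements of the two paths of the worked example (state |1,3,2,2>):
a1 = F_element([[1], [1, 0], [1, 0, 0]], 3, {3: 2})
a2 = F_element([[1], [1, 0], [1, 1, 0]], 2, {2: 1, 3: 1})
a3 = F_element([[1], [2, 0], [2, 1, 0]], 2, {2: 2, 3: 1})
b2 = F_element([[1], [1, 0], [1, 1, 0]], 2, {2: 2, 3: 1})
b3 = F_element([[1], [1, 1], [2, 1, 0]], 2, {2: 1, 3: 1})
assert isclose(a1, 1 / sqrt(2)) and isclose(a2, 1 / sqrt(2))
assert isclose(a3, sqrt(3) / 12)
assert isclose(b2, 1 / sqrt(6)) and isclose(b3, 3 / 4)
# sum over the two paths of the graph gives the amplitude sqrt(3)/6:
assert isclose(a1 * a2 * a3 + a1 * b2 * b3, sqrt(3) / 6)
```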
Operation (\ref{k8}) can clearly lead to a collection of patterns (because $\tau_j \in \{ 1,2, \ldots, j \}$), but we choose only those for which the $n$-th row $[m]_n + e_n(\tau_n)$ equals a partition $\lambda_{12..j}\in \lambda_{RS}$ (in other words, the shape of the resulting Gelfand triangle coincides with the intermediate partition $\lambda_{12..j}$ of the RSK algorithm) and the standardness of the Gelfand-Tsetlin pattern is preserved (the betweenness conditions are satisfied).
In the language of graphs this means that the out-degree of the vertex $t_{1..j-1}$ (i.e. the number of edges leaving the vertex), which we denote $\deg^+(t_{1..j-1})$, can be greater than or equal to one.
Using the above insertion procedure, we insert the first letter $f(1)$ into the zero Gelfand triangle $t_0$ (i.e. the triangle of shape $\lambda$ filled with zeros, which is the minimal vertex of our graph), thus reaching the vertex $t_1$. This leads to a directed graph consisting of two vertices $(t_0, t_1)$ joined by the edge $f(1)$
$$
\begin{array}{ccc}
&(t_0)&\\
&\downarrow &{\tiny f(1)}.\\
&(t_1)&\\
\end{array}
$$
Next, one inserts the letter $f(2)$ into the triangle $t_1$, leading to a set of vertices $t_{12}=\{t_{12}^i \; : \; i=1,2,...\}$. Geometrically this represents a graph exhibiting branches, with $\deg^+(t_1) \geq 1$:
$$
\begin{array}{ccccc}
&&(t_0)&&\\
&&\;\;\;\;\;\;\;\; \downarrow {\tiny f(1)}&&\\
&&(t_1)&&\\
\;\;\;\;\;\;\;\;\;\;\;\; \swarrow f(2)&\ldots &\;\;\;\;\;\;\;\;\downarrow f(2)& \ldots &\\
(t_{12}^1)&\ldots &(t_{12}^k)& \ldots & \ldots \\
\end{array}
$$
This is followed by the insertion of the letter $f(3)$ into each vertex $t_{12}^i$ of the set $t_{12}$ using the same rules, which produces the set $t_{123}$ of vertices composed of three letters. The same routine is followed for all remaining letters of the configuration $f$.
One can observe that the out-degree $\deg^+(t_{12..j})\geq 1$, and the insertion rules themselves suggest a quick, tree-like growth of the graph. Nevertheless, the symmetry constraints imposed by the physical system guarantee that the final graph takes the shape of a rhomb (see the example below), with the maximal (final) vertex (resulting from the insertion of the last letter $f(N)$ of the configuration $f$) equal to $\lambda t$.
Our approach to the calculation of amplitudes from the graph $\Gamma$ resembles the $n$-slit interference experiment with electrons. In this experiment the probability amplitude for the transition of an electron from the source $s$, through a sequence of walls with slits in them, to the detector $x$ is given by the formula
\be\label{nslitExper}
\langle x|s\rangle = \sum_{\mbox{{\scriptsize
\begin{tabular}{c}
\mbox{all paths} \\
\mbox{from $s$ to $x$}\\
\end{tabular}
}}} \;\; \prod_
{\scriptsize
\begin{tabular}{c}
\mbox{all parts (edges)} \\
\mbox{of a path}\\
\end{tabular}
}
A_{\scriptsize \mbox{a part of a path}}
\ee
where $A_{\scriptsize \mbox{a part of a path}}$ denotes the probability amplitude of transition through a part of a given path.
To take into account the indistinguishability of the different ways of creating the system, described by the graph $\Gamma$, we adapt the quantum interference formula (\ref{nslitExper}) to our situation.
We calculate the probability amplitude as the sum, over all distinct paths of the graph from the minimal to the maximal vertex, of the products, over all edges of a given path, of the appropriate fundamental tensor operator matrix elements.
More precisely, it can be written as\\
\begin{equation}\label{wsp}
\langle f | \lambda t y \rangle =
\hspace{-20pt}
\sum_
{\tiny
\begin{tabular}{c}
\mbox{all different} \\
\mbox{paths from minimal}\\
\mbox{to maximal vertex}\\
\mbox{of the graph}
\end{tabular}
}
\prod_{
\tiny
\begin{tabular}{c}
\mbox{all edges} \\
\mbox{of the one path}\\
\mbox{of the graph}
\end{tabular}
}
\left.
\begin{picture}(1,1)
\raisebox{2pt}{ \put(0,0){\line(1,3){13}} \put(0,0){\line(1,-3){13}} }
\end{picture}
\;
{\small
\begin{array}{c}
\mbox{[$m$]}_n + e_n(\tau_n) \\
\mbox{[$m$]}_{n-1} + e_{n-1}(\tau_{n-1}) \\
\vdots \\
\mbox{[$m$]}_k + e_k(\tau_k)\\
(m)_{k-1}\\
\end{array}
}
\right |
\hat F_{k, \tau_n}
\left |
{\small
\begin{array}{l@{}}
[m]_n \\
\mbox{[$m$]}_{n-1} \\
\vdots \\
\mbox{[$m$]}_k \\
(m)_{k-1}\\
\end{array}
}
\right. \;\;\,
\begin{picture}(1,1)
\raisebox{2pt}{ \put(0,0){\line(-1,3){13}} \put(0,0){\line(-1,-3){13}} }
\end{picture}
\end{equation}
where $\hat F_{k, \tau_n}$ is the fundamental tensor operator (\ref{fso}), $k=f(j)$ and
$\tau_n=row(\lambda_{1..j}\setminus \lambda_{1..j-1})$.
Equation (\ref{wsp}) allows one to calculate the probability amplitudes of the Schur-Weyl state $| \lambda t y \rangle$ in the magnetic configuration representation.
~~\\~~
\noindent
\textbf{Example}
Let us consider a system consisting of $N=4$ nodes with single-node spin $s=1$ ($n=3$), prepared in the Schur-Weyl state:
\begin{equation}\label{stateExample}
\begin{array}{l}
\Big | \; \lambda =(3,1), t~=~{\scriptsize \Yvcentermath1 \young(123,2)}, y~=~{\scriptsize \Yvcentermath1 \young(134,2)} \Big\rangle =
{\scriptsize \mathbf{\frac{\sqrt{3}}{6}} \; |1,3,2,2\rangle} +
{\scriptsize \frac{\sqrt{3}}{4}\; |1,2,2,3\rangle} +
\vspace{4pt}
\\
{\scriptsize \frac{\sqrt{3}}{4}\; |1,2,3,2\rangle} -
{\scriptsize \frac{\sqrt{3}}{4}\; |2,1,2,3\rangle} -
{\scriptsize \frac{\sqrt{3}}{6}\; |3,1,2,2\rangle} -
{\scriptsize \frac{\sqrt{3}}{4}\; |2,1,3,2\rangle} -
\vspace{4pt}
\\
{\scriptsize \frac{\sqrt{3}}{12}\; |2,3,1,2\rangle} +
{\scriptsize \frac{\sqrt{3}}{12}\; |3,2,1,2\rangle} -
{\scriptsize \frac{\sqrt{3}}{12}\; |2,3,2,1\rangle} +
{\scriptsize \frac{\sqrt{3}}{12}\; |3,2,2,1\rangle}.
\end{array}
\end{equation}
As an example, we show how to calculate the first probability amplitude (marked as bold) of the state (\ref{stateExample}), i.e.
$$
\Big\langle 1,3,2,2 \; \Big | \; (3,1), {\scriptsize \Yvcentermath1 \young(123,2)}, {\scriptsize \Yvcentermath1 \young(134,2)} \Big\rangle.
$$
According to the presented algorithm, we read off the sequence\\ $\lambda_{RS} = (\lambda_{1234}=(3,1), \lambda_{123}=(2,1), \lambda_{12}=(1,1), \lambda_{1}=(1))$ from the reversed RSK algorithm.
Next, we construct the graph $\Gamma$
$$
\begin{array}{ccccc}
&
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
0 & & 0 & & 0 \\
& 0 & & 0 & \\
& & 0 & & \\
\end{array}
\right )}
&&&\\
\vspace{-3pt}
&&\\
\vspace{-3pt}
&\downarrow {\tiny 1}&\\
&&\\
&
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 0 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right )}
&&&\lambda_1 =(1,0,0) \\
&&\\
&\downarrow {\tiny 3}&\\
&&\\
&
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 1 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right )}
&&&\lambda_{12} =(1,1,0) \\
&&\\
\swarrow {\tiny 2}&&\searrow {\tiny 2}\\
&&\\
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 2 & & 0 & \\
& & 1 & & \\
\end{array}
\right )}
&&
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 1 & & 1 & \\
& & 1 & & \\
\end{array}
\right )}
&&
\lambda_{123} =(2,1,0) \\
&&\\
\searrow {\tiny 2}&&\swarrow {\tiny 2}\\
&&\\
&
{\tiny
\left (
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
3 & & 1 & & 0 \\
& 2 & & 1 & \\
& & 1 & & \\
\end{array}
\right )}
&&&\;\;\;\;\;\; \lambda_{1234} = \lambda =(3,1,0). \\
\end{array}
$$
~~\\~~\\
For the graph presented above, we can read off the amplitude as a ``sum over two paths''
$$
\Big \langle {\scriptsize (1,3,2,2)} \Big | {\scriptsize (3,1)}, {\scriptsize \Yvcentermath1 \young(123,2)}, \; {\scriptsize \Yvcentermath1 \young(134,2)} \Big\rangle =
$$
$$
{\tiny
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 1 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{3,2}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 0 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right >
\;
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 2 & & 0 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{2,1}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 1 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right >
\;
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
3 & & 1 & & 0 \\
& 2 & & 1 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{2,1}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 2 & & 0 & \\
& & 1 & & \\
\end{array}
\right >
}
\; +
$$
$$
{\tiny
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 1 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{3,2}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 0 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right >
\;
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 1 & & 1 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{2,1}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
1 & & 1 & & 0 \\
& 1 & & 0 & \\
& & 1 & & \\
\end{array}
\right >
\;
\left <
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
3 & & 1 & & 0 \\
& 2 & & 1 & \\
& & 1 & & \\
\end{array}
\right |
\hat F_{2,1}
\left |
\begin{array}{@{}c@{}c@{}c@{}c@{}c@{}}
2 & & 1 & & 0 \\
& 1 & & 1 & \\
& & 1 & & \\
\end{array}
\right >
}
\; =
$$
$$
{\tiny
\left(\frac{1}{\sqrt{2}}\right)\left(\frac{1}{\sqrt{2}}\right)\left(\frac{\sqrt{3}}{12}\right) + \left(\frac{1}{\sqrt{2}}\right)\left(\frac{1}{\sqrt{6}}\right)\left(\frac{3}{4}\right)= \; \frac{\sqrt{3}}{6}
}
$$
One can check that the standard method (\ref{spr20}) gives the same result.
\section{Concluding remarks}
We have presented a new method of generation of the Schur-Weyl states for quantum spin systems.
Using this method we can calculate probability amplitudes in a very efficient way, using only simple combinatorial operations.
In contrast to the standard method \cite{bohr_mottelson}, its formulation does not depend on the size of the physical system, and it is well adapted to computer implementation.
This method is addressed to researchers who investigate the symmetry of quantum systems consisting of many identical subsystems and want to reduce the size of an eigenproblem or, more generally, to diminish the representation matrix of any physical quantity represented in the symmetric or unitary group algebra.
Clearly, the algorithm can be used for appropriate subsets of irreducible bases, in particular for inhomogeneous models, with the weight $\mu$ being a partition of non-rectangular shape.
Another property of the Schur-Weyl states is that representing physical systems in the Hilbert space spanned by such states enables one to extract information hidden in nonlocal degrees of freedom. This feature can be very useful in a broad range of problems in quantum computation, especially in the construction of quantum algorithms.
The novelty of the proposed algorithm, and the main idea behind it, is the fact that while constructing the quantum state by adding consecutive nodes, we are dealing with a ``combinatorial quantum interference'' of all the possible ways of adding the new node to the already existing system. This observation leads to formula (\ref{wsp}), which significantly simplifies the calculation of the probability amplitudes of the Schur-Weyl states.
All operations which constitute the algorithm reduce to simple arithmetic operations such as multiplication and summation; hence the algorithm is polynomial in time with respect to $N$ and $n$, in contrast to the standard method \cite{bohr_mottelson}, which uses a summation over the symmetric group and thus grows exponentially with $N$.
\section{Introduction}
Let $M$ be a factor of type $II_1$ with a normalized trace
$\tau$. Murray and von Neumann introduced
the fundamental group ${\mathcal F}(M)$ of $M$ in \cite{MN}.
The fundamental group ${\mathcal F}(M)$ of $M$ is a
subgroup of the multiplicative group
$\mathbb{R}_+^{\times}$ of positive real numbers.
They showed that if $M$ is hyperfinite, then
${\mathcal F}(M) = {\mathbb R_+^{\times}}$.
In our previous paper \cite{NW},
we introduced the fundamental group $\mathcal{F}(A)$
of a simple unital $C^*$-algebra $A$ with a unique normalized trace $\tau$
based on the computation of Picard groups by
Kodaka \cite{kod1}, \cite{kod2}, \cite{kod3}.
We computed the fundamental groups of several nuclear and nonnuclear
$C^*$-algebras there;
$K$-theoretical obstructions enable one to compute the fundamental
group easily.
There have been many works on the computation of the fundamental groups of factors
of type $II_1$.
Voiculescu \cite{Vo} showed that the fundamental group
${\mathcal F}(L(\mathbb{F}_{\infty}))$
of the group factor $L(\mathbb{F}_{\infty})$
of the free group $\mathbb{F}_{\infty}$ contains the positive rationals and
Radulescu proved that
${\mathcal F}(L(\mathbb{F}_{\infty})) = {\mathbb R}_+^{\times}$ in
\cite{Ra}. Connes \cite{Co} showed that if $G$ is a countable
ICC group with property (T), then ${\mathcal F}(L(G))$ is a countable group.
Recently, Popa
showed that any countable subgroup of $\mathbb R_+^{\times}$
can be realized as the fundamental group of some
factor of type $II_1$ with separable predual in \cite{Po1}.
Furthermore Popa and Vaes \cite{PV} exhibited a large family $\mathcal{S}$
of subgroups of $\mathbb{R}_{+}^\times$, containing $\mathbb{R}_{+}^\times$
itself, all of its countable subgroups, as well as uncountable subgroups with
any Hausdorff dimension in $(0,1)$, such that for each $G\in\mathcal{S}$
there exist many free ergodic measure preserving actions of $\mathbb{F}_{\infty}$
for which the associated $II_1$ factor $M$ has fundamental group equal to $G$.
In this paper we show that any countable subgroup of $\mathbb{R}_+^{\times}$
can be realized as the fundamental group of a separable simple unital
$C^*$-algebra with unique trace.
Furthermore for any fixed countable subgroup $G$ of $\mathbb{R}_+^{\times}$,
there exist uncountably many mutually nonisomorphic such algebras $A$ with
$G = \mathcal{F}(A)$.
We apply a method of Blackadar \cite{Bla} and Phillips
\cite{Phi} to the type $II_1$ factors of Popa \cite{Po1}.
Our new examples are nonnuclear.
On the other hand, for an additive subgroup $E$ of
$\mathbb{R}$ containing 1,
we define the positive inner multiplier group $IM_+(E)$ of $E$ by
$$
IM_+(E) = \{t \in {\mathbb R}_+^{\times} \ | \ t \in E,\ t^{-1} \in E, \text{ and }
tE = E \}.
$$
Then we have $\mathcal{F}(A) \subset IM_+(\tau_*(K_0(A)))$.
Almost all examples provided in \cite{NW}
satisfy $\mathcal{F}(A)=IM_+(\tau_*(K_0(A)))$.
We should note that not all countable subgroups of $\mathbb{R}_{+}^{\times}$
arise as $IM_+(E)$. For example,
$\{9^n \in \mathbb{R}_{+}^{\times} \ |
n \in {\mathbb Z} \}$ does not arise as $IM_+(E)$
for any additive subgroup $E$ of $\mathbb{R}$ containing 1. Therefore
if the fundamental group of a $C^*$-algebra $A$ is equal to
$\{9^n \in \mathbb{R}_{+}^{\times} \ |
n \in {\mathbb Z} \}$ and $A$ is in a classifiable class by
the Elliott invariant, then
$\tau_* : K_0(A) \rightarrow \tau_*(K_0(A))$ cannot be an
order isomorphism. Matui informed us that there exists such an AF-algebra.
\section{Hilbert $C^*$-modules and Picard groups}
We recall some definitions and notations in \cite{NW}.
Let $A$ be a simple unital $C^*$-algebra with a unique normalized trace $\tau$
and
$\mathcal{X}$ a right Hilbert $A$-module.
(See \cite{Lan}, \cite{MT} for the basic facts on Hilbert modules.)
We denote by $L_A(\mathcal{X})$
the algebra of the adjointable operators on $\mathcal{X}$.
For $\xi,\eta \in \mathcal{X}$, a ``rank one operator'' $\Theta_{\xi,\eta}$
is defined by $\Theta_{\xi,\eta}(\zeta)
= \xi \langle\eta,\zeta\rangle_A$ for $\zeta \in \mathcal{X}$.
We denote by $K_A(\mathcal{X})$ the closure
of the linear span of the ``rank one operators'' $\Theta_{\xi,\eta}$.
We call a finite
set $\{\xi_i\}_{i=1}^n\subseteq \mathcal{X}$ a {\it finite basis} of $\mathcal{X}$ if
$\eta =\sum_{i=1}^n\xi_i\langle\xi_i,\eta\rangle_A$ for any $\eta\in\mathcal{X}$, see \cite{KW}, \cite{W}.
It is also called a frame as in \cite{FL}.
If $A$ is unital and there exists a finite basis for $\mathcal{X}$, then
$L_A(\mathcal{X})=K_A(\mathcal{X})$.
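Indeed, the last statement follows from a one-line computation: the finite basis makes the identity operator a finite-rank operator, so every adjointable operator lies in $K_A(\mathcal{X})$.
$$
T = T\circ\mathrm{id}_{\mathcal{X}}
  = T\circ\sum_{i=1}^n \Theta_{\xi_i,\xi_i}
  = \sum_{i=1}^n \Theta_{T\xi_i,\xi_i}
  \in K_A(\mathcal{X})
  \qquad\text{for every } T\in L_A(\mathcal{X}).
$$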
Let $\mathcal{H}(A)$ denote the
set of isomorphism classes $[\mathcal{X}]$ of
right Hilbert $A$-modules $\mathcal{X}$ with a finite basis.
Let $B$ be a $C^*$-algebra.
An $A$-$B$-equivalence bimodule is an $A$-$B$-bimodule $\mathcal{F}$ which is simultaneously a
full left Hilbert $A$-module under a left $A$-valued inner product $_A\langle\cdot ,\cdot\rangle$
and a full right Hilbert $B$-module under a right $B$-valued inner product $\langle\cdot ,\cdot\rangle_B$,
satisfying $_A\langle\xi ,\eta\rangle\zeta =\xi\langle\eta ,\zeta\rangle_B$ for any
$\xi, \eta, \zeta \in \mathcal{F}$. We say that $A$ is {\it Morita equivalent} to $B$
if there exists an $A$-$B$-equivalence bimodule.
The dual module $\mathcal{F}^*$ of an $A$-$B$-equivalence bimodule $\mathcal{F}$ is a set
$\{\xi^* ;\xi\in\mathcal{F} \}$ with the operations such that $\xi^* +\eta^*=(\xi +\eta )^*$,
$\lambda\xi ^*=(\overline{\lambda}\xi)^*$, $b\xi^* a=(a^*\xi b^*)^*$,
$_B\langle\xi^*,\eta^*\rangle =\langle\eta ,\xi\rangle_B$ and
$\langle \xi^*,\eta^*\rangle_A =\;_A\langle\eta ,\xi\rangle$.
Then $\mathcal{F}^*$ is a $B$-$A$-equivalence bimodule.
We refer the reader to \cite{RW},\cite{R2} for the basic facts on
equivalence bimodules and Morita equivalence.
We review elementary facts on the Picard groups of
$C^*$-algebras introduced by Brown, Green and Rieffel
in \cite{BGR}.
For $A$-$A$-equivalence bimodules
$\mathcal{E}_1$ and
$\mathcal{E}_2$, we say that $\mathcal{E}_1$ is isomorphic to $\mathcal{E}_2$ as an equivalence
bimodule if there exists a $\mathbb{C}$-linear one-to-one map $\Phi$ of $\mathcal{E}_1$ onto
$\mathcal{E}_2$ such that $\Phi (a\xi b)=a\Phi (\xi )b$,
$_A\langle \Phi (\xi ) ,\Phi(\eta )\rangle =\;_A\langle \xi ,\eta\rangle$ and
$\langle \Phi (\xi ) ,\Phi(\eta )\rangle_A =\langle\xi,\eta\rangle_A$ for $a,b\in A$,
$\xi ,\eta\in\mathcal{E}_1$.
The set of isomorphism classes $[\mathcal{E}]$ of the $A$-$A$-equivalence
bimodules $\mathcal{E}$ forms a group under the product defined by
$[\mathcal{E}_1][\mathcal{E}_2] = [\mathcal{E}_1 \otimes_A \mathcal{E}_2]$.
We call it the {\it Picard group} of $A$ and denote it by $\mathrm{Pic}(A)$.
The identity of $\mathrm{Pic}(A)$ is given by
the $A$-$A$-bimodule $\mathcal{E}:= A$ with
$\; _A\langle a_1 ,a_2 \rangle = a_1a_2^*$ and $\langle a_1 ,a_2\rangle_A = a_1^*a_2$ for
$a_1,a_2 \in A$. The inverse element of $[\mathcal{E}]$ in the Picard group of $A$
is the dual module $[\mathcal{E}^*]$.
Let $\alpha$ be an automorphism of $A$, and let
$\mathcal{E}_{\alpha}^A=A$ with the obvious left $A$-action and the obvious $A$-valued inner product.
We define the right $A$-action on $\mathcal{E}_\alpha^A$ by
$\xi\cdot a=\xi\alpha(a)$ for
any $\xi\in\mathcal{E}_\alpha^A$ and $a\in A$, and the right $A$-valued inner product by
$\langle\xi ,\eta\rangle_A=\alpha^{-1} (\xi^*\eta)$ for any $\xi ,\eta\in\mathcal{E}_\alpha^A$.
Then $\mathcal{E}_{\alpha}^A$ is an $A$-$A$-equivalence bimodule. For $\alpha, \beta\in\mathrm{Aut}(A)$,
$\mathcal{E}_\alpha^A$ is isomorphic to $\mathcal{E}_\beta^A$ if and only if
there exists a unitary $u \in A$ such that
$\alpha = \mathrm{Ad}\,u \circ \beta$. Moreover, ${\mathcal E}_\alpha^A \otimes
{\mathcal E}_\beta^A$ is
isomorphic to $\mathcal{E}_{\alpha\circ\beta}^A$. Hence we obtain a homomorphism $\rho_A$
of $\mathrm{Out}(A)$ to $\mathrm{Pic}(A)$.
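The isomorphism $\mathcal{E}_\alpha^A\otimes\mathcal{E}_\beta^A \cong \mathcal{E}_{\alpha\circ\beta}^A$ can be realized explicitly; the following map is the standard choice (a routine verification):
$$
\Phi:\ \mathcal{E}_\alpha^A \otimes_A \mathcal{E}_\beta^A
  \longrightarrow \mathcal{E}_{\alpha\circ\beta}^A\,,\qquad
\Phi(\xi\otimes\eta)=\xi\,\alpha(\eta)\,,
$$
which preserves the bimodule actions, e.g.
$\Phi((\xi\otimes\eta)\cdot a)=\xi\,\alpha(\eta\beta(a))
=\Phi(\xi\otimes\eta)\,(\alpha\circ\beta)(a)$.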
An $A$-$B$-equivalence bimodule $\mathcal{F}$ induces an isomorphism $\Psi$
of $\mathrm{Pic}(A)$ to $\mathrm{Pic}(B)$ by
$\Psi ([\mathcal{E}])=[\mathcal{F}^*\otimes\mathcal{E}\otimes\mathcal{F}]$
for $[\mathcal{E}]\in\mathrm{Pic}(A)$.
Therefore if $A$ is Morita equivalent to $B$, then $\mathrm{Pic}(A)$ is isomorphic to
$\mathrm{Pic}(B)$.
Since $A$ is unital, any
$A$-$A$-equivalence bimodule $\mathcal{E}$ is a finitely generated projective $A$-module as a right
module, with a finite basis $\{\xi_i\}_{i=1}^n$.
Put $p=(\langle\xi_i,\xi_j\rangle_A)_{ij} \in M_n(A)$.
Then $p$ is a projection, and $\mathcal{E}$ is isomorphic to
$pA^n$ as a right Hilbert $A$-module,
with an isomorphism of $A$ to $pM_n(A)p$ as a $C^*$-algebra.
Define a map $\hat{T}_A : \mathcal{H}(A) \rightarrow
\mathbb{R}_{+}$ by
$\hat{T}_A([\mathcal{X}])=\sum_{i=1}^n\tau (\langle\xi_i,\xi_i\rangle_A)$,
where $\{\xi_i\}_{i=1}^n$ is a finite basis of $\mathcal{X}$.
Then $\hat{T}_A([\mathcal{X}])$
does not depend on the choice of basis and $\hat{T}_A$ is well-defined.
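As an illustration, take $\mathcal{X}=pA^n$ for a projection $p=(p_{ij})\in M_n(A)$; the following computation uses only the definitions above.
$$
% with the finite basis \xi_i = p e_i, where e_i denotes the i-th
% standard basis vector of A^n, one has <\xi_i,\xi_j>_A = p_{ij}, so
\hat{T}_A([pA^n])
  = \sum_{i=1}^n \tau(\langle\xi_i,\xi_i\rangle_A)
  = \sum_{i=1}^n \tau(p_{ii})
  = (\tau\otimes Tr)(p)\,.
$$
This is consistent with the description of $\mathcal{F}(A)$ in terms of traces of projections given below.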
We can define a map $T_A$ of $\mathrm{Pic}(A)$ to $\mathbb{R}_{+}$
in the same way as $\hat{T}_A$.
We showed that $T_A$ is a multiplicative map
and $T_A(\mathcal{E}_{id}^A) = 1$ in \cite{NW}.
Moreover, we can show the following proposition by an argument similar to
the proof of Proposition 2.1 in \cite{NW}.
\begin{pro}\label{pro:multiplicative}
Let $A$ and $B$ be simple unital $C^*$-algebras with unique trace.
Assume that $\mathcal{F}$ is an $A$-$B$-equivalence bimodule and
$\mathcal{X}$ is a right Hilbert $A$-module. Then
\[\hat{T}_B([\mathcal{X}\otimes\mathcal{F}])=
\hat{T}_A([\mathcal{X}])\hat{T}_B([\mathcal{F}]).\]
\end{pro}
We denote by $Tr$ the usual unnormalized trace on $M_n(\mathbb{C})$.
Put
$$
\mathcal{F}(A) :=\{ \tau\otimes Tr(p) \in \mathbb{R}^{\times}_{+}\ | \
p \text{ is a projection in } M_n(A) \text{ such that } pM_n(A)p \cong A \}.
$$
Then $\mathcal{F}(A)$ is equal to
the image of $T_A$ and is
a multiplicative subgroup of $\mathbb{R}^{\times}_{+}$ by Theorem 3.1 in \cite{NW}.
We call
$\mathcal{F}(A)$ the {\it fundamental group} of $A$.
If $A$ is separable, then $\mathcal{F}(A)$ is countable.
We shall show that the fundamental group is a Morita equivalence invariant for
simple unital $C^*$-algebras with unique trace.
\begin{pro}
Let $A$ and $B$ be simple unital $C^*$-algebras with unique trace.
If $A$ is Morita equivalent to $B$, then $\mathcal{F}(A)=\mathcal{F}(B)$.
\end{pro}
\begin{proof}
By assumption, there exists an $A$-$B$-equivalence bimodule $\mathcal{F}$,
and $\mathcal{F}$ induces an isomorphism
$\Psi$ of $\mathrm{Pic}(A)$ to $\mathrm{Pic}(B)$ such that
$\Psi ([\mathcal{E}])=[\mathcal{F}^*\otimes\mathcal{E}\otimes\mathcal{F}]$ for
$[\mathcal{E}]\in\mathrm{Pic}(A)$.
Since $\mathcal{F}^*\otimes\mathcal{F}$ is isomorphic to $\mathcal{E}_{id}^B$,
Proposition \ref{pro:multiplicative} implies
\[\hat{T}_A([\mathcal{F}^*])\hat{T}_B([\mathcal{F}])=
T_B([\mathcal{F}^*\otimes\mathcal{F}])=1.\]
For $[\mathcal{E}]\in \mathrm{Pic}(A)$,
\[T_B([\mathcal{F}^*\otimes\mathcal{E}\otimes\mathcal{F}])=
\hat{T}_A([\mathcal{F}^*])\hat{T}_B([\mathcal{E}\otimes\mathcal{F}])
=\hat{T}_A([\mathcal{F}^*])T_A([\mathcal{E}])\hat{T}_B([\mathcal{F}])\]
by Proposition \ref{pro:multiplicative}.
Therefore $T_B(\Psi ([\mathcal{E}]))=T_A([\mathcal{E}])$ and
$\mathcal{F}(A)=\mathcal{F}(B)$.
\end{proof}
\section{New examples}
An idea of our construction comes from the following results of
Blackadar, Proposition 2.2 of \cite{Bla} and
Phillips, Lemma 2.2 of \cite{Phi}.
\begin{lem}[Blackadar \cite{Bla}]
Let $M$ be a simple $C^*$-algebra, and let $A\subset M$ be a separable $C^*$-subalgebra.
Then there exists a simple separable $C^*$-subalgebra $B$ with $A\subset B\subset M$.
\end{lem}
\begin{lem}[Phillips \cite{Phi}]
Let $M$ be a unital $C^*$-algebra, and let $A\subset M$ be a separable $C^*$-subalgebra.
Then there exists a separable $C^*$-subalgebra $B$ with $A\subset B\subset M$ such that
every tracial state on $B$ is the restriction of a tracial state on $M$.
\end{lem}
The following lemma is just a combination of the two results above.
\begin{lem}\label{lem:key}
Let $M$ be a simple $C^*$-algebra with unique trace $\hat{\tau}$, and let $A\subset M$
be a separable $C^*$-subalgebra.
Then there exists a simple separable $C^*$-subalgebra $B$ with $A\subset B\subset M$
such that $B$ has a unique trace $\tau$ that is a restriction of $\hat{\tau}$.
\end{lem}
\begin{thm}\label{thm:main}
Let $G$ be a countable subgroup of $\mathbb{R}_{+}^{\times}$.
Then there exist uncountably many mutually nonisomorphic separable
simple nonnuclear unital $C^*$-algebras $A$
with unique trace such that the fundamental group $\mathcal{F}(A)=G$.
\end{thm}
\begin{proof}
First we shall show that there exists a separable
simple unital $C^*$-algebra $A$
with unique trace such that $\mathcal{F}(A)=G$.
There exists a type $II_1$ factor $M$ with separable predual such that
$\mathcal{F}(M)=G$, which is constructed by Popa \cite{Po1}.
Let $S_1\subset M$ be a countable subset that is weak operator dense in $M$.
We denote by $\hat{\tau}$ the unique trace of $M$.
We enumerate the countable semigroup $G\cap (0,1]$
by $\{t_m:m\in\mathbb{N}\}$.
Since $\mathcal{F}(M)=G$ and $M$ is a factor of type $II_1$,
for any $m\in\mathbb{N}$
there exist a projection $p_m$
in $M$ such that $\hat{\tau}(p_m)=t_m$ and an isomorphism $\phi_m$ of $M$ onto $p_mMp_m$.
Define $B_0\subset M$ to be the unital $C^*$-subalgebra
of $M$ generated by $S_1$ and $\{p_m:m\in\mathbb{N}\}$.
By Lemma \ref{lem:key},
there exists a separable simple unital
$C^*$-algebra $A_0$ with a unique trace $\tau_0$
such that $B_0\subset A_0\subset M$.
Let $B_1\subset M$ be the $C^*$-subalgebra
of $M$ generated by $A_0$, $\cup_{m\in\mathbb{N}}\phi_m(A_0)$ and
$\cup_{m\in\mathbb{N}}\phi_m^{-1}(p_mA_0p_m)$. In the same way,
there exists a separable simple unital $C^*$-algebra $A_1$
with a unique trace $\tau_1$
such that $B_1\subset A_1\subset M$.
We construct inductively $C^*$-algebras $B_n \subset A_n \subset M$
as follows:
Let $B_n\subset M$ be the $C^*$-subalgebra
of $M$ generated by $A_{n-1}$, $\cup_{m\in\mathbb{N}}\phi_m(A_{n-1})$ and
$\cup_{m\in\mathbb{N}}\phi_m^{-1}(p_mA_{n-1}p_m)$. By Lemma \ref{lem:key},
there exists a separable simple unital $C^*$-algebra $A_n$
with a unique trace $\tau_n$
such that $B_n\subset A_n\subset M$.
Then we have
$$
B_0 \subset A_0\subset B_1\subset A_1\subset \dots \subset
B_n \subset A_n \subset \dots \subset M,
$$
and
$\phi_m(A_{n-1})\subset p_mA_np_m$ and $\phi_m^{-1}(p_mA_{n-1}p_m)\subset A_n
\text { for any } m \in\mathbb{N}$.
Set $A=\overline{\cup_{n=0}^{\infty}A_n}$. Then $A$ is a separable
simple unital $C^*$-algebra
with a unique trace $\tau$.
By the construction,
$\phi_m(A)=p_mAp_m$ for any $m\in\mathbb{N}$. Hence
$G \subset \mathcal{F}(A)$.
Since $\pi_\tau (A)''$ is isomorphic to $M$,
$$
\mathcal{F}(A) \subset \mathcal{F}(\pi_\tau (A)'')
= \mathcal{F}(M) = G
$$
by Proposition 3.29 of \cite{NW}.
Thus $\mathcal{F}(A) = G$. Moreover $A$ is not nuclear, because
$A$ is weak operator dense in a factor $M$ that is not hyperfinite.
Next we shall show that there exist uncountably many mutually nonisomorphic
such examples.
Let $E$ be a countable additive subgroup of $\mathbb{R}$.
We enumerate by $\{r_m : m\in\mathbb{N}\}$ the positive elements of $E$.
Since $M$ is a factor of type $II_1$, for any
$m\in\mathbb{N}$ there exist a natural number $k$ and
a projection $q_m\in M_k(M)$ such that $\hat{\tau}\otimes Tr(q_m)=r_m$.
Define $S_2 \subset M $ to be the union of
the matrix elements of $q_m$ for $m$ running over $\mathbb{N}$.
Let $C_0$ be the $C^*$-subalgebra of $M$
generated by $S_2$ and $A$.
By an argument similar to that in the first paragraph,
we can construct a separable
simple unital $C^*$-algebra $C$ with unique trace such that
$\mathcal{F}(C)=G$ and $C_0\subset C \subset M$.
Then it is clear that $E$ is contained in $\tau_*(K_0(C))$.
Since no countable union of
countable subgroups of $\mathbb{R}$ can contain all countable subgroups of $\mathbb{R}$,
we can construct uncountably many mutually nonisomorphic examples by the choice of $E$.
\end{proof}
\begin{rem}
In fact, the proof above shows that there exist uncountably many Morita inequivalent
separable simple nonnuclear unital $C^*$-algebras $A$ with unique trace
such that the fundamental group $\mathcal{F}(A)=G$.
\end{rem}
\begin{rem}\label{rem:main}
We can choose a $C^*$-algebra $A$ in the theorem above
so that $A$ has stable rank one and
real rank zero and
$\tau_* : K_0(A) \rightarrow \tau_*(K_0(A))$ is an order isomorphism
by using Lemma 2.3, Lemma 2.4 and Lemma 2.5 of \cite{Phi}.
Then we have the following exact sequence
by Proposition 3.26 of \cite{NW}:
\[\begin{CD}
{1} @>>> \mathrm{Out}(A) @>\rho_A>> \mathrm{Pic}(A) @>T>> \mathcal{F}(A)
@>>> {1} \end{CD}. \]
\end{rem}
\begin{rem}
We do not know whether any countable subgroup
of $\mathbb{R}_{+}^{\times}$ can be realized as the fundamental group
of a separable unital simple
{\it nuclear} $C^*$-algebra
with unique trace.
\end{rem}
\begin{lem}\label{lem:K0}
Let $M_1$ and $M_2$ be factors of type $II_1$, and let $A_0\subset M_1$ and $B_0\subset M_2$
be separable $C^*$-subalgebras.
Then there exist separable simple unital $C^*$-algebras $A$ and $B$ with the unique
traces $\tau_A$ and $\tau_B$ such that
$A_0\subset A\subset M_1$, $B_0\subset B\subset M_2$ and $(\tau_{A})_*(K_0(A))=(\tau_{B})_*(K_0(B))$.
\end{lem}
\begin{proof}
Let $\tau_1$ be the unique trace on $M_1$ and $\tau_2$ the unique trace on $M_2$.
Since $A_0$ and $B_0$ are separable $C^*$-algebras,
$(\tau_{1}|_{A_0})_{*}(K_0(A_0))$ and $(\tau_{2}|_{B_0})_{*}(K_0(B_0))$ are
countable groups.
We enumerate the positive elements of $(\tau_{1}|_{A_0})_{*}(K_0(A_0))$ by
$\{t_{m}:m\in\mathbb{N}\}$ and the positive elements of $(\tau_{2}|_{B_0})_{*}(K_0(B_0))$ by
$\{r_{m}:m\in\mathbb{N}\}$.
Since $M_1$ and $M_2$ are factors of type $II_1$, for any
$m\in\mathbb{N}$
there exist a natural number $k$ and
projections $p_{m}\in M_k (M_1)$ and $q_{m}\in M_k(M_2)$ such that
$\tau_{1}\otimes Tr(p_{m})=r_{m}$ and $\tau_{2}\otimes Tr(q_{m})=t_{m}$.
Define $S_1 \subset M_1 $ (resp. $S_2 \subset M_2$) to be the union of
the matrix elements of $p_m$ (resp. $q_m$) for $m$ running over $\mathbb{N}$.
Define $C_1\subset M_1$ (resp. $D_1\subset M_2$) to be the unital $C^*$-subalgebra
of $M_1$ (resp. $M_2$) generated by $A_0$ and $S_1$
(resp. $B_0$ and $S_2$).
By Lemma \ref{lem:key},
there exist separable simple unital
$C^*$-algebras $A_1$ and $B_1$ with a unique trace such that
$C_1\subset A_1\subset M_1$ and $D_1\subset B_1\subset M_2$.
Then we have
$(\tau_{1}|_{A_0})_{*}(K_0(A_0))\subset (\tau_{2}|_{B_1})_{*}(K_0(B_1))$ and
$(\tau_{2}|_{B_0})_{*}(K_0(B_0))\subset (\tau_{1}|_{A_1})_{*}(K_0(A_1))$.
In a similar way, we construct inductively simple separable unital $C^*$-algebras
$A_n \subset M_1$ and $B_n \subset M_2$ with unique trace such that
$(\tau_{1}|_{A_{n-1}})_{*}(K_0(A_{n-1}))\subset (\tau_{2}|_{B_n})_{*}(K_0(B_n))$ and
$(\tau_{2}|_{B_{n-1}})_{*}(K_0(B_{n-1}))\subset (\tau_{1}|_{A_n})_{*}(K_0(A_n))$.
Set $A=\overline{\cup_{n=1}^{\infty}A_n}$ and $B=\overline{\cup_{n=1}^{\infty}B_n}$.
Then $A$ and $B$ are separable simple unital $C^*$-algebras with unique trace.
We denote by $\tau_A$ the unique trace on $A$ and by $\tau_B$ the unique trace on $B$.
By the construction,
$(\tau_A)_{*}(K_0(A))=(\tau_B)_{*}(K_0(B))$.
\end{proof}
We denote by $\mathrm{Ell}(A)$ the Elliott invariant $(K_0(A),K_0(A)_+,[1]_0,K_1(A))$.
\begin{cor}
For any countable subgroups $G_1$ and $G_2$ of $\mathbb{R}_{+}^{\times}$,
there exist separable simple nonnuclear unital $C^*$-algebras $A$ and $B$
with unique trace such that
$\mathrm{Ell}(A)\cong\mathrm{Ell}(B)$,
$\mathcal{F}(A)=G_1$ and $\mathcal{F}(B)=G_2$.
\end{cor}
\begin{proof}
The proof of Theorem \ref{thm:main}, Lemma \ref{lem:K0} and Lemma 2.5 of \cite{Phi}
implies that there exist separable simple nonnuclear unital $C^*$-algebras
$A$ and $B$ with the unique traces $\tau_A$ and $\tau_B$ such that
$(\tau_A)_* : K_0(A) \rightarrow (\tau_A)_*(K_0(A))$ and
$(\tau_B)_* : K_0(B) \rightarrow (\tau_B)_*(K_0(B))$ are order isomorphisms,
$\mathcal{F}(A)=G_1$, $\mathcal{F}(B)=G_2$, $K_1(A)=K_1(B)=0$ and $
(\tau_A)_{*}(K_0(A))=(\tau_B)_{*}(K_0(B))$.
Since $(\tau_A)_*$ and $(\tau_B)_*$ are order isomorphisms and
$(\tau_A)_{*}(K_0(A))=(\tau_B)_{*}(K_0(B))$, we see that $\mathrm{Ell}(A)\cong\mathrm{Ell}(B)$.
\end{proof}
For a positive number $\lambda$, let
$G_{\lambda} = \{ {\lambda}^n \in \mathbb{R}_{+}^{\times} \ |
\ n \in \mathbb{Z} \}$ be the multiplicative subgroup of
$\mathbb{R}_{+}^{\times}$ generated by $\lambda$.
Below we consider whether $G_{\lambda}$
can be realized as the fundamental group of a nuclear $C^*$-algebra.
\begin{pro}
Let $\lambda$ be a prime number or a positive transcendental number.
Then there exists a simple $AF$-algebra $A$ with unique trace such that
$\mathcal{F}(A) = G_{\lambda}$.
\end{pro}
\begin{proof}
Let $\lambda$ be a prime number. Consider a UHF-algebra
$A = M_{{\lambda}^\infty}$. Then $\mathcal{F}(A) = G_{\lambda}$ as in
Example 3.11 of \cite{NW}. Next we assume that $\lambda$ is a
positive transcendental number. Let $R_{\lambda}$
be the unital subring of $\mathbb{R}$
generated by $\lambda$ and $\lambda^{-1}$. Then the set $(R_{\lambda})^\times_{+}$ of positive
invertible elements in $R_{\lambda}$ is equal to $G_{\lambda}$. The proof of
Theorem 3.14 of \cite{NW} shows that there exists a simple unital $AF$-algebra $A$
with unique trace such that $\mathcal{F}(A) = G_{\lambda}$.
\end{proof}
Let $\mathcal{O}$ be an order of a real quadratic field or
a real cubic field with one real embedding.
Then $\mathcal{O}^{\times}_{+}=G_{\lambda}$ is singly
generated and the generator $\lambda >1$ is called
the fundamental unit of $\mathcal{O}$ by Dirichlet's unit theorem.
We refer the reader to \cite{Neu} for details.
The proof of Theorem 3.14 of \cite{NW} implies the following proposition.
\begin{pro}
Let $\lambda$ be a fundamental unit of an order of a real quadratic field
or a cubic field with one real embedding.
Then there exists a simple $AF$-algebra $A$ with unique trace such that
$\mathcal{F}(A) = G_{\lambda}$.
\end{pro}
Note that if $p$ is a prime number and $n \geq 2$, then
the subgroup $G_{\lambda}$ of $\mathbb{R}_+^{\times}$
generated by ${\lambda} = p^n$ cannot be
the positive inner multiplier group $IM_+(E)$ for any
additive subgroup $E$ of
$\mathbb{R}$ containing 1. Indeed, suppose to the contrary that
$G_{\lambda} = IM_+(E)$ for some $E$. Then there exists a unital subring
$R$ of $\mathbb{R}$ such that $G_{\lambda}= R_+^{\times}$ by
Lemma 3.6 of \cite{NW}. Then
$\frac{1}{p} = \frac{1}{\lambda} + \dots + \frac{1}{\lambda}$
(with $p^{n-1}$ summands) lies in $R_+^{\times}$, which contradicts
$\frac{1}{p} \notin G_{\lambda}$.
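The smallest instance of this obstruction, with $R$ as in the argument above:
$$
\lambda = 3^2 = 9:\qquad
\frac{1}{3}=\frac{1}{9}+\frac{1}{9}+\frac{1}{9}\in R_+^{\times}\,,
\qquad\text{whereas}\qquad
\frac{1}{3}\notin G_9=\{9^n \ | \ n\in\mathbb{Z}\}\,.
$$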
However, we have another construction.
\begin{ex}\label{ex:matui} For $\lambda = 3^2 = 9$,
Matui showed us the following example:
Let $A$ be an $AF$-algebra such that
$$
K_0(A)=\{(\frac{b}{9^a},c) \in \mathbb{R} \times \mathbb{Z} \ |
\ a,b,c\in\mathbb{Z},b\equiv c\; \mathrm{mod}\; 8\},
$$
$$
K_0(A)_{+}=\{(\frac{b}{9^a},c)\in K_0(A):\frac{b}{9^a}>0\}\cup \{(0,0)\}
\ \ \text{and} \ \ [1_A]_0=(1,1).
$$
Then
$$
\mathcal{F}(A) = G_9:=
\{ 9^n \in \mathbb{R}_{+}^{\times} \ |
\ n \in \mathbb{Z} \}.
$$
Moreover $\tau_* : K_0(A) \rightarrow \tau_*(K_0(A))$ is not an
order isomorphism and
$\mathcal{F}(A) \not= IM_+(\tau_*(K_0(A)))$.
Furthermore, Katsura suggested the following examples to us:
Let $\lambda = p^n$ for a prime number $p$ and a natural number $n \geq 2$.
Then there exists a simple $AF$-algebra $A$ with unique trace such that
$\mathcal{F}(A) = G_{\lambda}$.
First consider the case that $\lambda \geq 8$.
Define
$$
E=\{(\frac{b}{p^{na}},c) \in \mathbb{R} \times \mathbb{Z}
\ | \ a,b,c\in\mathbb{Z},b\equiv c\; \mathrm{mod}\; (p^n-1) \}
$$
$$
E_+=\{(\frac{b}{p^{na}},c)\in E:\frac{b}{p^{na}}>0\}\cup \{(0,0)\}
\ \ \text{and} \ \ [u]_0=(1,1).
$$
Then there exists a simple $AF$-algebra $A$
such that
$(K_0(A),K_0(A)_+,[1_A]_0)=(E,E_+,u)$ by \cite{EHS}.
The classification theorem of \cite{E} and some computation yield that
$\mathcal{F}(A) = G_{\lambda}$.
Next consider the case that $\lambda = 2^2=4$.
Let
$$
E=\{(\frac{b}{16^{a}},c) \in \mathbb{R} \times \mathbb{Z} \ | \
a,b,c\in\mathbb{Z},b\equiv c\; \mathrm{mod}\; 5 \}
$$
$$
E_+=\{(\frac{b}{16^a},c)\in E:\frac{b}{16^{a}}>0\}\cup \{(0,0)\}
\ \ \text{and} \ \ [u]_0=(1,1).
$$
Consider a simple $AF$-algebra $A$ such that
$(K_0(A),K_0(A)_+,[1_A]_0)=(E,E_+,u)$. Then $\mathcal{F}(A) = G_{4}$.
\end{ex}
\section{\large Introduction}
It is shown in \cite{Nekrasov-Okounkov} that
the Seiberg-Witten solutions \cite{Seiberg-Witten}
of $4d$ $\mathcal{N}=2$ supersymmetric gauge theories
emerge through \textit{random partition},
where
Nekrasov's formulas \cite{Nekrasov-Okounkov,Nekrasov}
for these gauge theories are
understood as the partition functions of random partition.
The integrable structure of random partition
is elucidated in \cite{Marshakov-Nekrasov}, and
thereby the integrability of correlation functions
among single-trace chiral observables is explained.
Such an extension of the Seiberg-Witten geometries
has also become attractive for understanding $4d$ $\mathcal{N}=1$
supersymmetric gauge theories, by providing a powerful tool
\cite{Itoyama}.
The integrable structure of the melting crystal model
with external potential is clarified
in \cite{Nakatsu-Takasaki}.
The melting crystal model,
also known as
\textit{random plane partition},
has a significant relation with
$5d$ $\mathcal{N}=1$ supersymmetric gauge theories.
Nekrasov's formula
for these gauge theories
can be retrieved from the partition function
of melting crystal model \cite{MNTT1},
where the model is interpreted as a $q$-deformed random partition.
A relation between
loop operators of $5d$ $\mathcal{N}=1$
supersymmetric Yang-Mills (SYM) theory
and external potentials of the melting crystal model
is argued in \cite{Nakatsu-Takasaki}.
We start Section 2 with
a brief review of $5d$ $\mathcal{N}=1$ SYM
in the $\Omega$ background \cite{Losev-Marshakov-Nekrasov}.
We introduce loop operators of this theory.
Computation of correlation functions
among these operators is discussed.
Generating function of the correlation functions of $U(1)$ theory
reproduces the partition function of the aforementioned
melting crystal model.
In Section 3 we discuss a common integrable structure
of $5d$ $\mathcal{N}=1$ SYM in $\Omega$ background
and melting crystal model for the case of the $U(1)$ theory.
In Section 4 we present an extension of
the Seiberg-Witten geometry of the $U(1)$ theory
by using the loop operators.
\section{\large
Loop operators of $5d$ $\mathcal{N}=1$ SYM in $\Omega$ background}
We first consider
an ordinary $5d$ $\mathcal{N}=1$ SYM on $\mathbb{R}^4\times S^1$.
Let $E$ be the $SU(N)$-bundle on $\mathbb{R}^4$ with $c_2(E)=n \geq 0$.
The gauge bundle of this theory is the $SU(N)$-bundle $\pi^*E$ on
$\mathbb{R}^4\times S^1$ pulled back from $\mathbb{R}^4$,
where $\pi$ is the projection from
$\mathbb{R}^4\times S^1$ to $\mathbb{R}^4$.
All the fields in the vector multiplet
are set to be periodic along $S^1$.
The bosonic ingredients are a $5d$ gauge potential $A_M(x,t)dx^M$
and a scalar field $\varphi(x,t)$
taking the value in $su(N)$.
These describe a $5d$ Yang-Mills-Higgs system.
The gauge potential can be separated into two parts
$A_{\mu}(x,t)dx^{\mu}$ and $A_t(x,t)dt$,
respectively the components
of the $\mathbb{R}^4$- and the $S^1$-directions.
Let $\mathcal{A}_E$ be the infinite dimensional affine space
consisting of all the gauge potentials on $E$.
$A_{\mu}(x,t)dx^{\mu}$ describes a loop $A(t)$ in $\mathcal{A}_E$,
where the loop is parametrized by the fifth-dimensional circle.
As for $A_t(x,t)$, together with $\varphi(x,t)$,
the combination $A_t+i\varphi$ describes a loop $\phi(t)$
in $\Omega^0(\mathbb{R}^4,\mbox{ad}E \otimes \mathbb{C})$,
the space of all the sections of
$\mbox{ad}E \otimes \mathbb{C}$,
where ad$E$ is the adjoint bundle on $\mathbb{R}^4$ with fibre $su(N)$.
Taking account of the periodicity,
the same argument is also applicable to the gauginos.
The vector multiplet thereby describes
a loop in the configuration space of the $4d$ theory.
In the case of the Yang-Mills-Higgs system,
the loop $A(t)$ gives
a family of covariant differentials on $E$ as $d_{A(t)}=d+A(t)$.
For the loop $\phi(t)$,
since it involves $A_t(x,t)$,
it becomes convenient to introduce the differential operator
\begin{eqnarray}
\mathcal{H}(t)\equiv
\frac{d}{dt}+\phi(t)\,.
\label{H(t)}
\end{eqnarray}
\subsection{ \normalsize
$5d$ $\mathcal{N}=1$ SYM in $\Omega$ background}
Via the standard dimensional reductions,
$6d$ $\mathcal{N}=1$ SYM gives
lower dimensional Yang-Mills theories with $8$ supercharges,
including the above theory.
Furthermore,
the dimensional reductions
in the $\Omega$ background
provide powerful tools
to understand these theories
\cite{Losev-Marshakov-Nekrasov}.
The $\Omega$ background is
a $6d$ gravitational background
on $\mathbb{R}^4 \times T^2$
described by a metric of the form:
$
ds^2=
\sum_{\mu=1}^4
(dx^{\mu}-\sum_{a=5,6}V_a^{\mu}dx^a)^2+\sum_{a=5,6}(dx^a)^2\,,
$
where two vectors $V_5^{\mu},V_6^{\mu}$
generate rotations on two-planes $(x^1,x^2)$ and $(x^3,x^4)$
in $\mathbb{R}^4$.
By letting
$V_1=x^2\frac{\partial}{\partial x^1}-x^1\frac{\partial}{\partial x^2}$
and
$V_2=x^4\frac{\partial}{\partial x^3}-x^3\frac{\partial}{\partial x^4}$,
they are respectively the real part and the imaginary part of
the combination
\begin{eqnarray}
V_{\epsilon_1,\epsilon_2}
&\equiv& \epsilon_1V_1+\epsilon_2V_2\,,
\hspace{6mm}\epsilon_1,\epsilon_2 \in \mathbb{C}.
\label{V_epsilon}
\end{eqnarray}
The above combination is expressed in component as
$V_{\epsilon_1,\epsilon_2}
=\Omega^\mu_{~\nu} x^\nu\frac{\partial}{\partial x^\mu}$.
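Explicitly, with the choice of $V_1$ and $V_2$ above, $\Omega$ is the antisymmetric matrix (an elementary rewriting of the definitions):
$$
\Omega^{\mu}_{~\nu}=
\left(\begin{array}{cccc}
0 & \epsilon_1 & 0 & 0\\
-\epsilon_1 & 0 & 0 & 0\\
0 & 0 & 0 & \epsilon_2\\
0 & 0 & -\epsilon_2 & 0
\end{array}\right).
$$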
To see the dimensional reduction in the $\Omega$-background,
we first consider the bosonic part of the $5d$ SYM.
The corresponding Yang-Mills-Higgs system is modified
from the previous one.
However,
the system is eventually controlled by
replacing $\mathcal{H}(t)$ with
\begin{eqnarray}
\mathcal{H}_{\epsilon_1,\epsilon_2}(t)
\equiv
\mathcal{H}(t)+\mathcal{K}_{\epsilon_1,\epsilon_2}(t)\,.
\label{H_epsilon}
\end{eqnarray}
Here $\mathcal{K}_{\epsilon_1,\epsilon_2}(t)$
is another differential operator, of the form
\cite{student}
\begin{eqnarray}
\mathcal{K}_{\epsilon_1,\epsilon_2}(t)
\equiv
V_{\epsilon_1,\epsilon_2}^\mu \partial_{A(t)\,\mu}
+
\frac{1}{2}\Omega^{\mu \nu}\mathcal{J}_{\mu \nu}\,,
\label{K_epsilon}
\end{eqnarray}
where
$\mathcal{J}_{\mu \nu}$ denote
the $SO(4)$ Lorentz generators of the system.
This operator generates a $T^2$-action
by taking the commutators with $d_{A(t)}$ and $\mathcal{H}(t)$.
For instance, we have
\begin{eqnarray}
[d_{A(t)}, \mathcal{K}_{\epsilon_1,\epsilon_2}(t)]
=-\iota_{V_{\epsilon_1,\epsilon_2}}F_{A(t)}\,.
\label{torus action on A}
\end{eqnarray}
The right hand side is precisely
the transformation brought about on $\mathcal{A}_E$
by the infinitesimal rotation
$\delta x^\mu=-V^{\mu}_{\epsilon_1,\epsilon_2}$.
The supercharges $Q_{\alpha a}$ and $\bar{Q}^{\dot{\alpha}}_{a}$
are realized in a way different from the case
of $\epsilon_1=\epsilon_2=0$.
Note that we use the $4d$ notation
such that $\alpha,\dot{\alpha}$ and $a$
denote the indices of the Lorentz group $SU(2)_L\times SU(2)_R$
and the R-symmetry $SU(2)_I$.
By the standard argument,
we may interpret the $5d$ SYM as a topological field theory.
Actually,
by regarding the diagonal $SU(2)$ of $SU(2)_R\times SU(2)_I$ as
a new $SU(2)_R$, we can extract a supercharge that behaves as
a scalar under the new Lorentz symmetry.
We write the scalar supercharge as $Q_{\epsilon_1,\epsilon_2}$.
The gaugino acquires a natural interpretation
as differential forms,
$\eta(x,t),\psi_{\mu}(x,t)$ and $\xi_{\mu \nu}(x,t)$.
These give
fermionic loops, $\eta(t), \psi(t)$ and $\xi(t)$.
The main part of the $Q$-transformation takes the forms
\begin{eqnarray}
&&
Q_{\epsilon_1,\epsilon_2}A(t)=\psi(t)\,,
\hspace{7mm}
Q_{\epsilon_1,\epsilon_2}\psi(t)=[d_{A(t)},
\mathcal{H}_{\epsilon_1,\epsilon_2}(t)]\,,
\label{Q transform (A, psi)}
\\
&&
Q_{\epsilon_1,\epsilon_2}
\mathcal{H}_{\epsilon_1,\epsilon_2}(t)=0\,,
\label{Q transform H}
\end{eqnarray}
where $\psi(t)$ is a fermionic loop in
$\Omega^1(\mathbb{R}^4,\mbox{ad}E)$.
\subsection{\normalsize
Loop operators and their correlation functions}
Taking account of the relation $\phi(x,t)=A_t(x,t)+i\varphi(x,t)$,
the following path-ordered integral
provides an analogue of a holonomy of the gauge potential.
\begin{eqnarray}
W^{(0)}(x;t_1,t_2)=\mbox{P}e^{-\int_{t_2}^{t_1}dt\phi(x,t)}\,,
\label{W_(0)}
\end{eqnarray}
where the symbol $\mbox{P}$ denotes the path-ordered integration;
more precisely,
$W^{(0)}$ is defined by the differential equation
\begin{eqnarray}
(\frac{d}{dt_1}+\phi(x,t_1))W^{(0)}(x;t_1,t_2)=0\,,
\hspace{5mm}
W^{(0)}(x;t_2,t_2)=1\,.
\label{def O_(0)}
\end{eqnarray}
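Solving (\ref{def O_(0)}) iteratively yields the familiar expansion in nested integrals:
\begin{eqnarray*}
W^{(0)}(x;t_1,t_2)
 =\sum_{k=0}^{\infty}(-1)^k
  \int_{t_2\le s_k\le\cdots\le s_1\le t_1}\!ds_1\cdots ds_k\,
  \phi(x,s_1)\,\phi(x,s_2)\cdots\phi(x,s_k)\,.
\end{eqnarray*}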
The trace of the holonomy along the circle
defines a loop operator as
\begin{eqnarray}
\mathcal{O}^{(0)}(x)=
\mbox{Tr}\, W^{(0)}(x; R,0)\,,
\label{O_(0)}
\end{eqnarray}
where $R$ is the circumference of $S^1$.
The above operator is an analogue of
the Wilson loop along the circle.
Unlike the case of $\epsilon_1=\epsilon_2=0$,
it is not $Q$-closed except at $x=0$.
To see this, note that the $Q$-transformations
(\ref{Q transform (A, psi)}) and (\ref{Q transform H})
imply
$Q_{\epsilon_1,\epsilon_2}\phi(t)=
-\iota_{V_{\epsilon_1,\epsilon_2}}\psi(t)$.
By using this, we find
\begin{eqnarray}
Q_{\epsilon_1,\epsilon_2}\mathcal{O}^{(0)}(x)
=
\int_0^{R}dt_1
\mbox{Tr}\Bigl\{\,W^{(0)}(x;\,R,t_1)\,
\iota_{V_{\epsilon_1,\epsilon_2}}\psi(x,t_1)\,
W^{(0)}(x;\,t_1,0)\Bigr\}\,.
\label{QO_(0)}
\end{eqnarray}
Since the right-hand side of the above formula vanishes only at $x=0$,
$\mathcal{O}^{(0)}(x)$ is $Q$-closed only at $x=0$.
The above property may be explained
in terms of the equivariant de Rham theory.
To see this,
let us first generalize the path-ordered integral (\ref{W_(0)})
by exponentiating the combination
$F_{A(t)}-\psi(t)+\phi(t)$ in place of $\phi(t)$ as
\begin{eqnarray}
W(x;\,t_1,t_2)
=\mbox{P}e^{-\int_{t_2}^{t_1}dt \big(F_{A(t)}-\psi(t)+\phi(t)\big)(x)}\,,
\label{W}
\end{eqnarray}
where the right hand side is given by a differential equation
similar to (\ref{def O_(0)}).
This means that $W$ has the components,
according to degrees of differential forms on $\mathbb{R}^4$,
as $W=W^{(0)}+W^{(1)}+\cdots+W^{(4)}$,
where the indices denote the degrees.
We generalize the loop operator (\ref{O_(0)}) as
\begin{eqnarray}
\mathcal{O}(x)=
\mbox{Tr}\, W(x; R,0)\,.
\label{O}
\end{eqnarray}
This also has components as $\mathcal{O}=\mathcal{O}^{(0)}
+\mathcal{O}^{(1)}+\cdots+\mathcal{O}^{(4)}$.
Eq. (\ref{QO_(0)}) can now be expressed as
$Q_{\epsilon_1,\epsilon_2}\mathcal{O}^{(0)}
=\iota_{V_{\epsilon_1,\epsilon_2}}\mathcal{O}^{(1)}$.
This is actually the first equation
among a series of the equations that $\mathcal{O}^{(i)}$ obey.
Such equations eventually show up
by expanding the identity \cite{Nakatsu-Noma-Takasaki}
\begin{eqnarray}
(d_{\epsilon_1,\epsilon_2}+Q_{\epsilon_1,\epsilon_2})\mathcal{O}(x)=0\,,
\label{formula of O}
\end{eqnarray}
where $d_{\epsilon_1,\epsilon_2}\equiv
d-\iota_{V_{\epsilon_1,\epsilon_2}}$
is the $T^2$-equivariant differential on $\mathbb{R}^4$.
We can also consider the loop operators
encircling the circle many times.
Correspondingly we introduce
\begin{eqnarray}
\mathcal{O}_k(x)=
\mbox{Tr}\,W(x;kR,0)\,,
\hspace{6mm}
k=1,2,\cdots
\label{O_k}
\end{eqnarray}
These satisfy
\begin{eqnarray}
(d_{\epsilon_1,\epsilon_2}+
Q_{\epsilon_1,\epsilon_2})\mathcal{O}_k(x)=0\,.
\label{formula of O_k}
\end{eqnarray}
Let us examine the correlation functions
$\langle\,
\prod_{a}\int_{\mathbb{R}^4}\mathcal{O}_{k_a}\,
\rangle^{\epsilon_1,\epsilon_2}$.
Since the integral
$\int_{\mathbb{R}^4}\mathcal{O}_k=\int_{\mathbb{R}^4}\mathcal{O}_k^{(4)}$
is $Q$-closed by virtue of the formula (\ref{formula of O_k}),
these can be computed by a supersymmetric quantum mechanics (SQM)
which is substantially equivalent to
the $5d$ SYM as the topological field theory.
Such a SQM turns out to
be the $T^2$-equivariant SQM on $\tilde{\mathcal{M}}_n$
\cite{Nekrasov},
where $\tilde{\mathcal{M}}_n$
is the moduli space of the framed $n$ instantons.
The $Q$-transformation
(\ref{Q transform (A, psi)})
is converted to the supersymmetry of the quantum mechanics
\begin{eqnarray}
Q_{\epsilon_1,\epsilon_2}m(t)=\chi(t)\,,
\hspace{7mm}
Q_{\epsilon_1,\epsilon_2}\chi(t)=
-\frac{dm(t)}{dt}+\mathcal{V}_{\epsilon_1,\epsilon_2}(m(t))\,,
\label{Q transform (m,chi)}
\end{eqnarray}
where $\mathcal{V}_{\epsilon_1,\epsilon_2}$ is
the Killing vector
induced by the variation
$\delta A=\iota_{V_{\epsilon_1,\epsilon_2}}F_A$ on $\mathcal{A}_E$.
The combination $F_{A(t)}-\psi(t)+\phi(t)$ can be
identified with a loop space analogue of
the $T^2$-equivariant curvature $\mathcal{F}_{\epsilon_1,\epsilon_2}$
of the universal connection,
where the universal bundle becomes equivariant by
the $T^2$-action on $\mathcal{A}_E \times \mathbb{R}^4$.
In the computation of the correlation function,
by virtue of the supersymmetry (\ref{Q transform (m,chi)}),
only the constant modes $m_0,\chi_0$ contribute to
the observable,
and the above combination precisely becomes
$\mathcal{F}_{\epsilon_1,\epsilon_2}$
\cite{Losev-Marshakov-Nekrasov}.
This means that $\mathcal{O}_k(x)$ truncates
to the equivariant Chern character
$\mbox{Tr} \, e^{-kR\, \mathcal{F}_{\epsilon_1,\epsilon_2}}$.
Thus we obtain the finite dimensional integral representation
\begin{eqnarray}
\Big\langle
\,\prod_{a}\int_{\mathbb{R}^4}\mathcal{O}_{k_a}\,
\Big \rangle_{n-instanton}^{\epsilon_1,\epsilon_2}
=
\frac{1}{(2\pi i R)^{ \frac{\dim \tilde{\mathcal{M}}_n}{2} }}
\int_{\tilde{\mathcal{M}}_n}
\hat{A}_{T^2}(R\,{\bf t}_{\epsilon_1,\epsilon_2},
\,\tilde{\mathcal{M}}_n)\,
\prod_{a}
\int_{\mathbb{R}^4}
\mbox{Tr} \,
e^{-k_aR \, \mathcal{F}_{\epsilon_1,\epsilon_2}}\,,
\label{correlator of O_a}
\end{eqnarray}
where $\hat{A}_{T^2}(\cdot\,,\tilde{\mathcal{M}}_n)$ is
the $T^2$-equivariant $\hat{A}$-genus of the tangent bundle
of $\tilde{\mathcal{M}}_n$,
and
$\bf{t}_{\epsilon_1,\epsilon_2}$
is a generator of $T^2$ that gives the Killing vector
$\mathcal{V}_{\epsilon_1,\epsilon_2}$.
Introducing the coupling constants $t=(t_1,t_2,\cdots)$,
the generating function of the correlation functions
is given by
$\mathcal{Z}_{\epsilon_1,\epsilon_2}(t)
=\left \langle
e^{\sum_{k}t_k \int_{\mathbb{R}^4}\mathcal{O}_k}
\right \rangle^{\epsilon_1,\epsilon_2}$.
Since the $n$-instanton sector
contributes with the weight $(R\Lambda)^{2nN}$,
where $\Lambda$ is the dynamical scale,
letting $Q=(R\Lambda)^2$,
we can express the generating function as
\begin{eqnarray}
\mathcal{Z}_{\epsilon_1,\epsilon_2}(t)
=
\sum_{n=0}^{\infty}
Q^{nN}
\left \langle
e^{\sum_{k}t_k \int_{\mathbb{R}^4}\mathcal{O}_k}
\right \rangle_{n-instanton}^{\epsilon_1,\epsilon_2} \,.
\label{generating function SU(N)}
\end{eqnarray}
\subsection{\normalsize
Application of localization technique}
The right hand side of
the formula (\ref{correlator of O_a})
is eventually replaced with a statistical sum over partitions.
To see their appearance,
note that the integration
localizes to the fixed points of the $T^2$-action.
However,
the fixed points in $\tilde{\mathcal{M}}_n$
are small instanton singularities
since the variation
$\delta A= -\iota_{V_{\epsilon_1,\epsilon_2}}F_A$
vanishes there.
These can be resolved by instantons
on a non-commutative $\mathbb{R}^4$.
Applying such a regularization via the non-commutativity,
the fixed points get isolated,
so that they are eventually labelled by using partitions
\cite{Nakajima}.
A partition $\lambda=(\lambda_1,\lambda_2,\cdots)$ is
a sequence of nonnegative integers
satisfying $\lambda_i \geq \lambda_{i+1}$
for all $i \geq 1$.
Partitions are identified with the Young diagrams
in the standard manner.
The size is defined by $|\lambda|=\sum_{i \geq 1}\lambda_i$,
which is the total number of boxes of the diagram.
Let us describe the formula (\ref{correlator of O_a})
for the $U(1)$ theory.
The relevant computation of the localization
can be found in \cite{Nakajima,Nakajima-Yoshioka-lec}.
We specialize $\epsilon_{1,2}$ as
$-\epsilon_{1}=\epsilon_2=i\hbar$,
where $\hbar$ is a positive real parameter.
Consequently,
the formula becomes a $q$-series,
where $q=e^{-R\hbar}$.
The fixed points in $\tilde{\mathcal{M}}_n$
are labelled by partitions of $n$.
The equivariant $\hat{A}$-genus takes the following form
at the partition $\lambda$ of $n$:
\begin{eqnarray}
(2\pi iR)^{-2n}
\left.
\hat{A}_{T^2}(R\,{\bf t}_{-i\hbar,i\hbar},
\,\tilde{\mathcal{M}}_n)
\right |_{\lambda}
=
(-)^{n}
\Bigl(\frac{\hbar}{2\pi}\Bigr)^{2n}
\Bigl(\prod_{s \in \lambda}h(s)\Bigr)^2
q^{\frac{\kappa(\lambda)}{2}}
s_{\lambda}(q^{\rho})^2\,,
\label{U(1) Dirac_index}
\end{eqnarray}
where $h(s)$ denotes the hook length of the box $s$ of
the Young diagram $\lambda$,
and
$s_{\lambda}(q^{\rho})$ is the Schur function
$s_{\lambda}(x_1,x_2,\cdots)$ specialized to $x_i=q^{i-\frac{1}{2}}$.
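As a concrete aid for the combinatorial ingredients entering (\ref{U(1) Dirac_index}), the short Python sketch below computes the hook lengths $h(s)$, the conjugate partition, and $\kappa(\lambda)=2\sum_{(i,j)\in\lambda}(j-i)$. The convention for $\kappa$ is the standard one, stated here since $\kappa$ is used but not defined in the text; the code is purely illustrative and not part of the localization computation.

```python
def conjugate(lam):
    """Transpose of the Young diagram: lambda'_j = #{i : lambda_i >= j}."""
    width = lam[0] if lam else 0
    return [sum(1 for part in lam if part >= j) for j in range(1, width + 1)]

def hook_lengths(lam):
    """Hook length h(s) = arm + leg + 1 for every box s of lambda (0-indexed boxes)."""
    conj = conjugate(lam)
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def kappa(lam):
    """kappa(lambda) = 2 * sum over boxes (i, j) of (j - i); standard convention."""
    return 2 * sum(j - i for i in range(len(lam)) for j in range(lam[i]))
```

For instance, $\lambda=(2,1)$ has hook lengths $\{3,1,1\}$ and $\kappa=0$, while $\lambda=(2)$ has $\kappa=2$.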
Similarly,
the fixed points in $\tilde{\mathcal{M}}_n \times \mathbb{R}^4$
are labelled by partitions of $n$.
Denoting them as $(\lambda,0)$,
the equivariant Chern character takes the form
$
\left.
\mbox{Tr}\,e^{-kR \mathcal{F}_{-i\hbar,i\hbar}}
\right|_{(\lambda,0)}
=\mathcal{O}_k(\lambda)
$,
where $\mathcal{O}_k(\lambda)$ is given by
\begin{eqnarray}
\mathcal{O}_{k}(\lambda)
=
(1-q^{-k})
\sum_{i=1}^{\infty}
\Bigl\{
q^{k(\lambda_i-i+1)}-q^{k(-i+1)}
\Bigr\}
+1\,.
\label{O_k(lambda)}
\end{eqnarray}
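Note that the infinite sum in (\ref{O_k(lambda)}) is effectively finite: for $i>\ell(\lambda)$ one has $\lambda_i=0$ and the summand vanishes. A minimal Python evaluation, useful for numerical checks and offered only as an illustration, is:

```python
def O_k(lam, k, q):
    """Evaluate the right-hand side of the O_k(lambda) formula. The infinite
    sum truncates at i = len(lam), since lambda_i = 0 beyond that point and
    the summand vanishes. Indices here are 0-based (i -> i+1 in the text)."""
    s = sum(q ** (k * (lam[i] - i)) - q ** (-k * i) for i in range(len(lam)))
    return (1 - q ** (-k)) * s + 1.0
```

Appending zero parts to $\lambda$ leaves the value unchanged, which confirms the truncation.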
The above functions have been exploited in
\cite{Marshakov-Nekrasov, Kanno-Moriyama}
from the $4d$ gauge theory viewpoint.
By taking account of
(\ref{U(1) Dirac_index}) and (\ref{O_k(lambda)}),
the formula (\ref{correlator of O_a})
becomes eventually as
\begin{eqnarray}
\Big\langle
\,\prod_{a}\int_{\mathbb{R}^4}\mathcal{O}_{k_a}\,
\Big \rangle_{n-instanton}^{-i\hbar,i\hbar}
=
(-)^n
\sum_{|\lambda|=n}
q^{\frac{\kappa(\lambda)}{2}}
s_{\lambda}(q^{\rho})^2
\prod_{a}
\hbar^{-2}
\mathcal{O}_{k_a}(\lambda)\,.
\label{correlator of O_a_U(1)}
\end{eqnarray}
Although we have not taken it into account so far,
a Chern-Simons term can be added
to a $5d$ gauge theory,
with the coupling constant being quantized;
in particular,
for the $U(1)$ theory,
$m=0,\pm 1$.
It modifies
the right hand side of (\ref{correlator of O_a_U(1)})
by giving a contribution of the form
$(-)^{m|\lambda|}q^{-\frac{m\kappa(\lambda)}{2}}$,
for each $\lambda$ \cite{MNTT1}.
Hereafter,
we consider the case of
the $U(1)$ theory having
the Chern-Simons coupling $m=1$.
The corresponding generating function becomes
\begin{eqnarray}
\mathcal{Z}^{U(1)}_{-i\hbar,i\hbar}(t)
=
\sum_{\lambda}
Q^{|\lambda|}
s_{\lambda}(q^{\rho})^2
e^{\hbar^{-2}\sum_{k=1}^{\infty}t_k \mathcal{O}_k(\lambda)}\,.
\label{Z_U(1)}
\end{eqnarray}
\section{\large
Integrability of $5d$ $\mathcal{N}=1$ SYM in $\Omega$ background}
We can view the generating function (\ref{Z_U(1)})
as a $q$-deformed random partition.
To see this,
note that in the $4d$ limit $R \rightarrow 0$ we have
$q=e^{-R\hbar} \rightarrow 1$,
and in this limit the Boltzmann weight takes the form
$(\Lambda/\hbar)^{2|\lambda|}
\bigl(\prod_{s \in \lambda}h(s)\bigr)^{-2}$,
which is the standard weight of a random partition.
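That this is indeed the standard random-partition weight can be checked numerically: by the hook length formula and the RSK identity $\sum_{|\lambda|=n}(f^\lambda)^2=n!$, one has $\sum_\lambda x^{|\lambda|}\prod_{s\in\lambda}h(s)^{-2}=e^x$. The stdlib-only Python sketch below verifies this partition-sum identity on partial sums; it is an illustration, not part of the main argument.

```python
import math

def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing lists."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def hook_product(lam):
    """Product of the hook lengths of the Young diagram of lam."""
    width = lam[0] if lam else 0
    conj = [sum(1 for part in lam if part >= j) for j in range(1, width + 1)]
    prod = 1
    for i in range(len(lam)):
        for j in range(lam[i]):
            prod *= lam[i] - j + conj[j] - i - 1
    return prod

# Partial sums of  sum_lambda x^{|lambda|} / (prod_s h(s))^2  converge to e^x.
x = 0.3
total = sum(x ** n * sum(1.0 / hook_product(lam) ** 2 for lam in partitions(n))
            for n in range(12))
```

Truncating at $|\lambda|\leq 11$ already reproduces $e^{0.3}$ to machine precision for this value of $x$.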
It can be also viewed as a melting crystal model,
known as random plane partition.
The corresponding model
is studied in \cite{Nakatsu-Takasaki}
as a melting crystal model with external potential,
where the Chern characters $\mathcal{O}_k$ correspond
precisely to the external potentials.
\subsection{\normalsize
Melting crystal model}
A plane partition $\pi$ is an array of
non-negative integers
\begin{eqnarray}
\begin{array}{cccc}
\pi_{11} & \pi_{12} & \pi_{13} & \cdots \\
\pi_{21} & \pi_{22} & \pi_{23} & \cdots \\
\pi_{31} & \pi_{32} & \pi_{33} & \cdots \\
\vdots & \vdots & \vdots & ~
\end{array}
\label{pi}
\end{eqnarray}
satisfying
$\pi_{ij}\geq \pi_{i+1 j}$ and $\pi_{ij}\geq \pi_{i j+1}$
for all $i,j \geq 1$.
Plane partitions are identified
with the $3d$ Young diagrams.
The $3d$ diagram $\pi$
is a set of unit cubes such that $\pi_{ij}$ cubes
are stacked vertically on each $(i,j)$-element of $\pi$.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.43]{planepartition.eps}
\caption{\textit{The $3d$ Young diagram (a)
and the corresponding sequence of partitions
(b).}}
\label{three-dimensional Young diagram}
\end{center}
\end{figure}
Diagonal slices of $\pi$ become partitions,
as depicted in Fig.~1.
Denote by $\pi(m)$ the partition along the $m$-th diagonal slice,
where $m \in \mathbb{Z}$.
In particular,
$\pi(0)=(\pi_{11},\pi_{22},\cdots)$
is the main diagonal one.
This series of partitions satisfies the condition
\begin{eqnarray}
\cdots \prec \pi(-2) \prec \pi(-1) \prec
\pi(0) \succ \pi(1) \succ \pi(2) \succ \cdots,
\label{interlace relations}
\end{eqnarray}
where
$\mu \succ \nu$ means the interlace relation;
$\mu \succ \nu$
$\Longleftrightarrow$
$\mu_1 \geq \nu_1 \geq \mu_2 \geq \nu_2
\geq \mu_3 \geq \cdots$.
The Hamiltonian picture emerges from the above relations,
by viewing a plane partition as evolutions of partitions
by the discrete time $m$.
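The diagonal slicing and the interlacing constraints (\ref{interlace relations}) can be checked directly on a small example. The following Python sketch, offered only as an illustration and assuming a list-of-rows encoding of $\pi$, extracts the slices $\pi(m)$ and tests the relation $\mu\succ\nu$:

```python
def diagonal_slice(pi, m):
    """pi(m): the partition read along the m-th diagonal of the plane
    partition pi, given as a list of rows padded with zeros."""
    out, i = [], 0
    while True:
        r, c = (i, i + m) if m >= 0 else (i - m, i)
        if r >= len(pi) or c >= len(pi[r]):
            break
        out.append(pi[r][c])
        i += 1
    return [x for x in out if x > 0]   # drop trailing zero parts

def interlaces(mu, nu):
    """mu > nu (interlacing): mu_1 >= nu_1 >= mu_2 >= nu_2 >= ..."""
    n = max(len(mu), len(nu)) + 1
    mu = mu + [0] * (n - len(mu))
    nu = nu + [0] * (n - len(nu))
    return all(mu[i] >= nu[i] >= mu[i + 1] for i in range(n - 1))
```

For $\pi$ with rows $(3,2,1),(2,1,0),(1,0,0)$ one finds $\pi(0)=(3,1)$, $\pi(\pm1)=(2)$, $\pi(\pm2)=(1)$, and every adjacent pair interlaces.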
Eventually it is described \cite{Ok-Res}
by using $2d$ free complex fermions $\psi,\psi^*$.
We may separate the relations (\ref{interlace relations})
into two parts, each describing
the evolutions for $m \leq 0$ and $m \geq 0$.
These two types of the evolutions are
realized in the $2d$ CFT
by using operators $G_{\pm}$ of the forms \cite{Ok-Res}
\begin{eqnarray}
G_{\pm}=
e^{
\sum_{k=1}^{\infty}
\frac{q^{\frac{k}{2}}}{k(1-q^k)}
J_{\pm k}},
\label{G_pm}
\end{eqnarray}
where $J_{\pm k}=\sum_{n=-\infty}^{\infty}:\psi_{\pm k-n}\psi^*_n:$
are the modes of the $U(1)$ current.
Using the free fermion description,
one can express the generating function as
\begin{eqnarray}
\mathcal{Z}^{U(1)}_{-i\hbar,i\hbar}(t)
=
\langle 0 |
G_+
Q^{L_0} e^{\frac{1}{\hbar^2}\sum_{k}t_k \hat{\mathcal{O}}_k}
G_-
|0\rangle\,,
\label{fermionic representation}
\end{eqnarray}
where $L_0=\sum_{n=-\infty}^{\infty}n:\psi_{-n}\psi_n^*:$ is
an element of the Virasoro algebra.
The loop operators $\mathcal{O}_k$
are converted to operators $\hat{\mathcal{O}}_k$ in the above representation.
They are fermion bilinears given by
\begin{eqnarray}
\hat{\mathcal{O}}_k=
(1-q^{-k})\sum_{n=-\infty}^{+\infty}
q^{kn}:\psi_{-n}\psi^*_{n}:
+1\,.
\label{hat O_k}
\end{eqnarray}
\subsection{\normalsize
The integrable structure}
The fermion bilinears $\hat{\mathcal{O}}_k$ can be regarded as
a commutative sub-algebra of
the quantum torus Lie algebra
realized by the free fermions \cite{Nakatsu-Takasaki}.
The adjoint actions of $G_{\pm}$
on the Lie algebra generate automorphisms of the algebra.
Among them,
taking advantage of
the shift symmetry,
the representation (\ref{fermionic representation})
can be eventually reformulated \cite{Nakatsu-Takasaki} to
\begin{eqnarray}
\mathcal{Z}^{U(1)}_{-i\hbar,i\hbar}(t)
\,=\,
\langle 0 |\,
e^{\frac{1}{2\hbar^2}\sum_{k=1}^{\infty}(-)^k(1-q^{-k})t_kJ_k}\,\,
{\bf g}_{\star}^{5d\,U(1)}\,\,
e^{\frac{1}{2\hbar^2}\sum_{k=1}^{\infty}(-)^k(1-q^{-k})t_kJ_{-k}}\,
| 0 \rangle\,.
\label{toda tau}
\end{eqnarray}
In the above formula,
${\bf g}_{\star}^{5d\,U(1)}$ is an element of $GL(\infty)$
of the form
\begin{eqnarray}
{\bf g}_{\star}^{5d\,U(1)}=
q^{\frac{W}{2}}G_-G_+Q^{L_0}G_-G_+q^{\frac{W}{2}}\,,
\label{g_U(1)}
\end{eqnarray}
where $W=W_0^{(3)}=\sum_{n=-\infty}^{\infty}n^2:\psi_{-n}\psi_n^*:$
is a special element of $W_{\infty}$ algebra.
The loop operators $\mathcal{O}_k$ are converted
to $J_k$ or $J_{-k}$ in (\ref{toda tau}).
These two are actually equivalent in the formula,
since
${\bf g}_{\star}^{5d\,U(1)}$
satisfies \cite{Nakatsu-Takasaki}
\begin{eqnarray}
J_k\,{\bf g}_{\star}^{5d\,U(1)}={\bf g}_{\star}^{5d\,U(1)}J_{-k}\,,
\hspace{6mm}
\mbox{for}~ k \geq 0.
\label{reduction to 1-toda}
\end{eqnarray}
Viewing the coupling constants $t$
as a series of time variables,
the right hand side of (\ref{toda tau})
is the standard form
of a tau function of $2$-Toda hierarchy
\cite{Ueno-Takasaki}.
However,
by virtue of (\ref{reduction to 1-toda}),
the two-sided time evolutions of $2$-Toda hierarchy
degenerate to one-sided time evolutions.
This precisely gives the reduction to $1$-Toda hierarchy.
Thus
the generating function becomes a tau function of
$1$-Toda hierarchy.
\section{\large
Extended Seiberg-Witten geometry of $5d$ theory}
We consider the field theory limit of the $U(1)$ theory,
which is achieved by letting $\hbar \rightarrow 0$
and amounts to the thermodynamic limit of the melting crystal model.
The system is described by the prepotential
$\mathcal{F}^{(0)}(t;\Lambda,R)$.
From the integrable system viewpoint,
$\mathcal{F}^{(0)}$ may be interpreted as a dispersionless tau function,
since the generating function is substantially a tau function of $1$-Toda
hierarchy and $\mathcal{F}^{(0)}$ gives
the leading order part of the $\hbar$ expansion of
$\log \mathcal{Z}^{U(1)}_{-i\hbar,i\hbar}(t)$.
To obtain the semi-classical solution,
one actually needs to solve the related variational problem,
which is reformulated as a Riemann-Hilbert problem.
This issue is treated in \cite{Nakatsu-Noma-Takasaki}.
\subsection{\normalsize
Seiberg-Witten curve of $U(1)$ theory}
Let us present the Seiberg-Witten curve for the $U(1)$ theory.
We first employ the following curve \cite{Maeda-Nakatsu, MNTT2}:
\begin{eqnarray}
\mathcal{C}_{\beta} :\hspace{8mm}
y+y^{-1}=
\frac{1}{R\Lambda}
(e^{-Rz}-\beta)\,,
\hspace{5mm}
z \in \mathbb{C}\,,
\label{C_beta}
\end{eqnarray}
where $\beta$ is a real parameter.
$\mathcal{C}_{\beta}$ is a double cover of the cylinder
$\mathbb{C}^*=\mathbb{C}/\frac{2\pi i}{R}\mathbb{Z}$,
with a cut $I$ along the real axis on the Riemann sheet.
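For orientation, the double-cover structure of $\mathcal{C}_\beta$ and the endpoints of the cut $I$ can be made explicit numerically. In the sketch below the parameter values for $R$, $\Lambda$, $\beta$ are hypothetical sample choices (picked so that $\beta\pm2R\Lambda>0$ and both branch points lie on the real $z$-axis), not values taken from the text; the branch points sit where $y+y^{-1}=\pm2$, i.e. $e^{-Rz}=\beta\pm2R\Lambda$.

```python
import cmath

# Hypothetical sample parameters (not from the text), chosen so that
# beta - 2*R*Lam > 0 and both branch points are real.
R, Lam, beta = 1.0, 0.25, 1.0

def w(z):
    """Right-hand side of the curve: y + 1/y = (e^{-Rz} - beta) / (R*Lam)."""
    return (cmath.exp(-R * z) - beta) / (R * Lam)

def y_branches(z):
    """The two sheets of the double cover over the base point z."""
    root = cmath.sqrt(w(z) ** 2 - 4)
    return (w(z) + root) / 2, (w(z) - root) / 2

# Branch points: y + 1/y = +-2, i.e. e^{-Rz} = beta +- 2*R*Lam.
z_plus = -cmath.log(beta + 2 * R * Lam).real / R    # one endpoint of the cut I
z_minus = -cmath.log(beta - 2 * R * Lam).real / R   # the other endpoint
```

The two sheets satisfy $y_+y_-=1$ and $y_++y_-=w(z)$, which is the defining relation of the curve.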
The coupling constants $t$ determine $\beta$ as
$\beta=\beta(t)$.
To see this,
let us introduce a meromorphic
differential of the form
\begin{eqnarray}
d\Psi=
\Big\{
1-\frac{1}{2}R^2\sum_{k=1}^{\infty}k^3t_kM_k(z)
\Big\}d\log y\,,
\end{eqnarray}
where
$M_k(z)=\sum_{n=0}^{k}d_{k-n}(\beta)e^{-nRz}$.
The coefficients $d_n(\beta)$ are given in
the asymptotic expansion
\begin{eqnarray}
\sqrt{(1-\beta e^{Rz})^2-(2R\Lambda e^{Rz})^2}
=\sum _{n=0}^{\infty}d_n(\beta)e^{nRz}\,,
\hspace{6mm}
\Re z \rightarrow -\infty.
\end{eqnarray}
Finally,
solving the Riemann-Hilbert problem,
$\beta$ is determined
by the condition \cite{Nakatsu-Noma-Takasaki}
\begin{eqnarray}
\oint_{C}z d\Psi=0
\hspace{8mm}
(\mbox{$C$: a contour encircling $I$ anticlockwise}).
\label{RH condition}
\end{eqnarray}
\subsection{\normalsize
Vevs of the loop operators}
The vevs of the loop operators $\mathcal{O}_k$ can be represented
by using an analogue of the Seiberg-Witten differential.
Eventually, the vev can be organized to the contour integral
\begin{eqnarray}
\frac{\partial \mathcal{F}^{(0)}(t;\Lambda,R)}{\partial t_k}
=
\lim_{\hbar \rightarrow 0}
\langle \mathcal{O}_k \rangle
=
\frac{-kR}{2\pi i}
\oint_C
e^{-kRz}dS\,,
\label{vev O_k}
\end{eqnarray}
where $dS=S'(z)dz$ is an analogue of the Seiberg-Witten differential.
$S'(z)$ is given by the indefinite integral
\begin{eqnarray}
S'(z)=
\int^z d\Psi\,.
\label{SW differential}
\end{eqnarray}
The contour integral in the right hand side of (\ref{vev O_k})
can be converted to a residue integral.
Actually,
by using coordinate $Z=e^{-Rz}$, we obtain
\begin{eqnarray}
\frac{\partial \mathcal{F}^{(0)}(t;\Lambda,R)}{\partial t_k}
=
\lim_{\hbar \rightarrow 0}
\langle \mathcal{O}_k \rangle
=
kR\, \mbox{Res}_{Z=\infty}
\Big(
Z^k dS
\Big)\,.
\label{vev O_k residue}
\end{eqnarray}
\subsubsection*{\underline{Acknowledgements}}
This article is based on a talk presented
at the international workshop
{\it ``Progress of String Theory and Quantum Field Theory''}
(Osaka City University, December 7-10, 2007).
We would like to thank the organizers of the conference for
arranging such a wonderful conference.
K.T. is supported in part by Grant-in-Aid for Scientific Research
No. 18340061 and No. 19540179.
\section{Introduction}
Topological data analysis (TDA) has emerged as powerful machinery in machine learning (ML), allowing us to extract complementary information on the observed objects, especially, from graph-structured data. In particular, TDA has become quite popular in various ML tasks, ranging from bioinformatics~\cite{kovacev2016using, nielson2015topological},
finance~\cite{leibon2008topological, akcora2019bitcoinheist}, material science~\cite{ichinomiya2017persistent},
biosurveillance~\cite{segovia2021tlife, chen2022tamp}, network analysis~\cite{sizemore2017classification,
carstens2013persistent}, as well as insurance and agriculture~\cite{yuvaraj2021topological, jiang2022learning} (see the literature overviews~\cite{amezquita2020shape, chazal2021introduction} and the TDA applications library~\cite{giunti22}).
Recently there has emerged a highly active research area that combines persistent homology (PH) machinery with geometric deep learning (GDL) methods~\cite{hofer2017deep, zhao2019learning, horn2021topological}.
Persistent homology (PH) is a key approach in TDA, allowing us to extract the evolution of subtler patterns in the data shape dynamics at multiple resolution scales, which are not accessible to more conventional, non-topological methods~\cite{carlsson2009topology}. The main idea is to construct a nested sequence of topological spaces (filtration) induced from the data, and record the evolution of topological features in this sequence. In other words, the extracted patterns, or homological features, along with how long such features persist throughout the considered filtration of a scale parameter, convey a critical insight into salient graph characteristics and hidden mechanisms behind system organization.
PH has been very effective in many graph machine learning tasks, such as graph and node classification~\cite{rieck2019persistent, cai2020understanding, zhao2020persistence, hofer2020graph}, link prediction~\cite{benson2018simplicial, yan2021link} and anomaly detection~\cite{bruillard2016anomaly, ofori2021topological}.
Nevertheless, while PH has shown promise in various graph learning applications, prohibitive computational costs of PH constrain its wider usage. Indeed, most PH studies are limited to small graphs with a few thousand vertices at most. The problem is that the complexity of the standard PH algorithm is cubic in the number of simplices~\cite{otter2017roadmap}, so one needs to limit homology computations to the $0$th and $1$st levels only. Computation of higher-level persistence for relatively large graphs can take days or weeks.
In this paper, we aim to address this fundamental bottleneck in the application of TDA to large networks by introducing two new efficient algorithms which significantly reduce the cost of computing persistence diagrams (PD) for large real-world networks: {\it CoralTDA} and {\it PrunIT}.
\vskip3pt
\noindent {\bf CoralTDA Algorithm:} Based on our observation that many vertices in large real-world networks have low degrees and do not contribute to PDs in higher dimensions, we develop the CoralTDA algorithm, where we prove that the $(k+1)$-core $\mathcal{G}^{k+1}$ of a graph $\mathcal{G}$ suffices to compute the $k^{th}$ PD of the graph, i.e., $PD_k(\mathcal{G})=PD_k(\mathcal{G}^{k+1})$ (Theorem \ref{thm:kcores}).
Using this property, we compute the exact higher persistence diagram $PD_k(\mathcal{G})$ on the much smaller core graph $\mathcal{G}^{k+1}$, losing no information. Our experiments show that even for lower dimensional topological features, such as $k=1$, we reduce the graph order by up to 73\% for some datasets (See Figure~\ref{fig:vertex}). Our findings show that many real-life data sets exhibit nontrivial second and third persistence diagrams, facilitating various classification problems. On the other hand, our reduction reaches 100\% for the third or higher dimensions in several networks, implying that higher PDs are trivial for these datasets.
As a result, our reduction approach improves our understanding of the existence of higher-dimensional holes and their role in the organization of complex networks.
\vskip3pt
\noindent {\bf PrunIT Algorithm:} We further develop a topologically simple but highly efficient algorithm to facilitate computations of PDs of graphs for any dimension. In particular, for a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ and filtration by clique complexes, we show that removing (pruning) a dominated vertex from the graph does not change PDs at any level, provided that the dominated vertex enters the filtration after the dominating vertex (\Cref{thm:reduction}).
Our experiments indicate that the new algorithm is highly efficient in PD computations of a broad category of large graphs from 100K to 1M vertices, and it can reach 95\% vertex reduction (see Table~\ref{tab:prunitresults}).
Further, when we combine CoralTDA and PrunIT algorithms, we can significantly reduce the graph sizes for the computation of PDs (Figure \ref{fig:combinedresults}).
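To illustrate the pruning step, the following stdlib-only Python sketch removes dominated vertices one at a time, ignoring the filtration-order condition of \Cref{thm:reduction} (equivalently, assuming a constant filtering function). It illustrates the domination test only and is not the implementation used in our experiments.

```python
from collections import defaultdict

def prune_dominated(edges):
    """Iteratively remove dominated vertices (PrunIT-style sketch):
    u is dominated by a neighbor v if N[u] is contained in N[v],
    where N[.] denotes the closed neighborhood."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    adj = dict(adj)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            closed_u = adj[u] | {u}
            for v in adj[u]:
                if closed_u <= adj[v] | {v}:     # u is dominated by v
                    for w in adj.pop(u):
                        adj[w].discard(u)
                    changed = True
                    break
            if changed:
                break   # re-scan dominations on the reduced graph
    return adj
```

A triangle with a pendant vertex prunes all the way down to a single vertex (a contractible graph), whereas a 4-cycle, which carries a one-dimensional hole, has no dominated vertices and is left untouched.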
\medskip
{\bf We summarize the key novelty of our contributions as follows:}
\begin{itemize}
\item We show that a graph's $k$th and higher persistence diagrams depend only on its $(k+1)$-core.
\item We introduce a highly effective pruning algorithm that significantly reduces the graph size without changing any persistence diagram of the original graph.
\item Our experiments in large datasets and large graphs show up to 95\% reduction in the graph size for the computation of persistence diagrams.
\item With our reduction algorithms, highly successful TDA methods can be applied to very large graphs and large datasets, where their use was previously constrained by prohibitive computational costs.
\end{itemize}
\section{Related Work}
\label{sec:related}
There are mainly two settings in practice where we use PH to obtain a topological fingerprint of a dataset. The first one is the \textit{point cloud setting}, where the dataset comes as a point cloud in an ambient space $\mathbb{R}^n$. Then, we define PH by constructing a sequence of simplicial complexes induced by the pair-wise distances of data points (Vietoris-Rips filtration) and keeping track of the topological changes in this sequence~\cite{zomorodian2005computing, edelsbrunner2010computational}. The second one is \textit{the network setting} where the typical PH construction uses a filtering function on the network. By construction, while the principal identifier to define PH in the point cloud setting is the pair-wise distances of points, the principal identifier in the network setting is the filtering function. Because of this, PH machinery works differently in a network setting, as explained in Section \ref{sec:background}.
There are several works in the point cloud setting to reduce the computational costs and run-time of the persistence diagrams. Malott and Wilsey used the idea of data reduction and data partitioning~\cite{malott2019fast}. Mischaikow and Nanda brought the discrete Morse Theory of geometric topology to the combinatorial setting~\cite{mischaikow2013morse}. In \cite{obayashi2018volume,vcufar2021fast, dey2019persistent, escolar2016optimal}, the authors studied the same problem with different approaches in the point cloud setting.
While several works improve the run-time of PH in the point cloud setting, only a few of them could reduce the computational costs of persistent homology in the network setting. An idea is to use discrete Morse Theory to capture the topological features occurring during the process~\cite{kannan2019persistent} by applying the techniques developed in~\cite{mischaikow2013morse} to the network setting.
While the computational complexity of $k^{th}$ persistence diagram (PD) is $\mathcal{O}(n^3)$ where $n$ is the number of $k$-simplices~\cite{otter2017roadmap}, \cite{mischaikow2013morse} achieves $\mathcal{O}(m^2\times n\log{n})$ where $m$ is the number of critical $k$-simplices. With the additional time to find the critical $k$-simplices in each filtration step, the computational complexity $\mathcal{O}(m^2\times n\log{n})$ is not scalable for very large networks.
\section{Persistent Homology}
\label{sec:background}
This part provides a background on the theory of persistent homology. Homology $H_k(X)$ is an essential invariant in algebraic topology, which captures the information of the $k$-dimensional holes (connected components, loops, cavities) in a topological space $X$. For example, a connected component in a graph is a zero-dimensional hole, whereas a graph loop is a 1-dimensional hole. Persistent homology is a way to use this invariant to keep track of the changes in a controlled topological space sequence induced by the original space $X$. For basic background on persistent homology, see \cite{edelsbrunner2010computational, dey2022computational}.
There are several ways to use PH in a network setting, such as power filtration or using different complexes (e.g., Vietoris-Rips, \v{C}ech complexes) to construct the filtration for a given filtering function \cite{aktas2019persistence}. We focus on the most common methods to define PH for graphs: sub/superlevel filtrations obtained by a filtering function and the clique (flag) complexes. Sub/superlevel filtrations are the most common methods because one can inject domain information into the PH process if the chosen filtering function comes from the network domain (e.g., atomic number in protein networks, transaction amount for blockchain networks). Note that our results can be generalized to the persistent homology defined with a filtering function for different complexes.
Throughout the paper, we use the terms \textit{graph} and \textit{network} interchangeably. Let $\mathcal{G}$ be a graph with vertex set $\mathcal{V}=\{v_r\}$ and edge set $\mathcal{E}=\{e_{rs}\}$, i.e. $e_{rs}\in \mathcal{E}$ if there is an edge between the vertex $v_r$ and $v_s$ in $\mathcal{G}$. Let $f:\mathcal{V}\to \mathbb{R}$ be a filtering function defined on the vertices of $\mathcal{G}$. Let $\mathcal{I}=\{\alpha_i\}$ be a threshold set with $\alpha_0=\min_{v_r \in \mathcal{V}} f(v_r)<\alpha_1<...<\alpha_m=\max_{v_r \in \mathcal{V}} f(v_r)$. For $\alpha_i\in \mathcal{I}$, let $\mathcal{V}_i=\{v_r\in\mathcal{V}\mid f(v_r)\leq \alpha_i\}$. Let $\mathcal{G}_i$ be the induced subgraph of $\mathcal{G}$ by $\mathcal{V}_i$, i.e. $\mathcal{G}_i=(\mathcal{V}_i,\mathcal{E}_i)$ where $\mathcal{E}_i=\{e_{rs}\in \mathcal{E}\mid v_r,v_s\in\mathcal{V}_i\}$. Let $\wh{\mathcal{G}}_i$ be the clique complex of $\mathcal{G}_i$. A clique complex is obtained by filling in all the $(k+1)$-complete subgraphs with $k$-simplices. In other words, if the vertices $\{v_{r_0},v_{r_1},...,v_{r_k}\}\subset \mathcal{G}_i$ are pairwise connected by an edge in $\mathcal{G}$, then the clique complex $\wh{\mathcal{G}}_i$ contains a $k$-simplex $\sigma=[v_{r_0},v_{r_1},...,v_{r_k}]$. This simplicial complex $\wh{\mathcal{G}}_i$ obtained by filling in all complete subgraphs is called \textit{the clique complex} of $\mathcal{G}_i$. This construction induces a nested sequence of high dimensional simplicial complexes:
$$\wh{\mathcal{G}}_0\subset \wh{\mathcal{G}}_1\subset \wh{\mathcal{G}}_2\subset ...\subset \wh{\mathcal{G}}_m.$$
This sequence of simplicial complexes is called \textit{the sublevel filtration} for $\mathcal{G}$. Superlevel filtrations can be defined similarly by considering the generating sets $\{f(v_r)\geq \alpha_i\}$ instead of $\{f(v_r)\leq \alpha_i\}$ above. Here, $\wh{\mathcal{G}}_i$ can be taken as the different simplicial complexes induced by $\mathcal{G}_i$ which gives different types of filtrations~\cite{aktas2019persistence}. After obtaining the filtration, one considers the homology groups $H_k(\wh{\mathcal{G}}_i)$ of each simplicial complex $\wh{\mathcal{G}}_i$. The homology group $H_k(X)$ keeps the information of $k$-dimensional topological features in the simplicial complex $X$.
Persistent homology keeps track of the topological changes in the sequence $\{\wh{\mathcal{G}}_i\}$ by using the homology groups $\{H_k(\wh{\mathcal{G}}_i)\}$. When a $k$-dimensional hole $\sigma$ (a connected component, loop or cavity) appears in $H_k(\wh{\mathcal{G}}_i)$, we mark $b_\sigma=\alpha_i$ as its birth time. The feature $\sigma$ can disappear at a later time in $H_k(\wh{\mathcal{G}}_j)$ by merging with another feature or by being filled in. Then, we mark $d_\sigma=\alpha_j$ as its death time. Hence, we say that $\sigma$ persists along the interval $[b_\sigma,d_\sigma)$, i.e. $[\alpha_i,\alpha_j)$. The longer the interval ($d_\sigma- b_\sigma$), the more persistent the feature $\sigma$.
The multi-set $PD_k(\mathcal{G},f)=\{(b_\sigma, d_\sigma) \mid \sigma\in H_k(\wh{\mathcal{G}}_i) \mbox{ for } b_\sigma\leq i<d_\sigma\}$ is called the {\em $k^{th}$ persistence diagram} of $(\mathcal{G},f)$ which is the collection of $2$-tuples marking the birth and death times of $k$-dimensional holes $\{\sigma\}$ in $\{\wh{\mathcal{G}}_i\}$. In particular, $PD_k(\mathcal{G},f)$ represents the $k^{th}$ PD of the sublevel filtration, induced by the filtering function $f:\mathcal{V}\to\mathbb{R}$. For brevity, we suppress $f$ and use $PD_k(\mathcal{G})$ throughout the text.
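For dimension $0$, the sublevel persistence just described can be computed directly with a union-find structure and the elder rule (when two components merge, the younger one dies). The following stdlib-only Python sketch is an illustration of this construction, not the implementation used in our experiments:

```python
from collections import defaultdict

def sublevel_pd0(f, edges):
    """0-dimensional sublevel persistence of (G, f): f maps vertices to
    filtration values; edges enter at the max of their endpoint values."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    parent, birth = {}, {}

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []
    for v in sorted(f, key=f.get):     # vertices enter in increasing f
        parent[v], birth[v] = v, f[v]
        for u in adj[v]:
            if u not in parent:        # neighbor not yet in the filtration
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            # elder rule: the younger component (larger birth) dies now
            elder, younger = (ru, rv) if birth[ru] <= birth[rv] else (rv, ru)
            if birth[younger] < f[v]:  # skip zero-length (diagonal) bars
                bars.append((birth[younger], f[v]))
            parent[younger] = elder
    bars += [(birth[v], float("inf")) for v in parent if find(v) == v]
    return sorted(bars)
```

On the path $a\!-\!b\!-\!c$ with $f(a)=0$, $f(b)=2$, $f(c)=1$, the component born at $1$ dies at $2$ and the oldest component persists forever, giving $PD_0=\{(0,\infty),(1,2)\}$.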
\section{CoralTDA Reduction and Higher Persistence Diagrams}
\label{sec:coresanddiagrams}
A \textit{$k$-core} $\mathcal{G}^k$ of a graph $\mathcal{G}$ is the subgraph of $\mathcal{G}$ obtained by iteratively deleting all vertices (and the edges incident to them) with degree less than $k$~\cite{seidman1983network}. In other words, $\mathcal{G}^k$ is the largest subgraph of $\mathcal{G}$ where all the vertices have a degree of at least $k$.
\begin{wrapfigure}{r}{2in}
\vspace{-.3cm}
\begin{center}
\includegraphics[width=1.9in]{figs/kcore.png}
\caption{\footnotesize K-core decomposition of a graph of 10 vertices. Vertex 1 has no edges and belongs to the $0$th core. A one-dimensional hole of vertices 4, 6, 8, and 9 is shown with a red circle.}
\label{fig:core}
\end{center}
\vspace{-.3cm}
\end{wrapfigure}
Figure~\ref{fig:core} shows a graph with its core structure. Here, vertex $1$ belongs to 0-core as it is disconnected from the graph. Vertex colors indicate shared coreness. When we use vertex degree as the filtering function and allow graph cliques of size three at most, the only one-dimensional hole (shown with the red circle) appears at degree 4 for vertices 4, 6, 8, and 9. Vertices 3, 5, and 7 can only contribute to 0-dimensional holes because their degree is 1. Similarly, 8 can only contribute to 0 and 1-dimensional holes because its degree is 2.
The $k$-core decomposition is a fundamental operation in many areas such as graph similarity matching \cite{nikolentzos2018degeneracy}, graph clustering~\cite{giatsidis2014corecluster}, network visualization~\cite{giatsidis2011evaluating}, anomaly detection~\cite{shanahan2013large} and robustness analysis~\cite{burleson2020k}.
A naïve implementation of the $k$-core decomposition iteratively deletes vertices whose degree falls below the current $k$, increasing $k$ until all vertices are deleted. This implementation has a computational complexity of $\mathcal{O}(m\log{}n)$, where $m$ and $n$ are the number of edges and vertices in the network, respectively. Batagelj and Zaversnik reduce the complexity to $\mathcal{O}(m+n)$ \textquote{by keeping an in-memory array of all possible degree values and keeping track of bin boundaries}~\cite{batagelj2003m}.
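As a minimal sketch of the peeling idea (not the optimized bin-based algorithm of Batagelj and Zaversnik), the following Python function computes the vertex set of the $k$-core for a single fixed $k$. The adjacency-dict representation and the toy graph below are our own illustrative assumptions.

```python
from collections import deque

def k_core_vertices(adj, k):
    """Naive k-core peeling: iteratively delete vertices of degree < k.

    adj maps each vertex to the set of its neighbours (undirected graph).
    Returns the vertex set of the k-core, which may be empty.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    queue = deque(v for v in adj if len(adj[v]) < k)
    while queue:
        v = queue.popleft()
        if v not in adj:
            continue  # already deleted
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) < k:
                queue.append(u)  # a deletion may push a neighbour below k
        del adj[v]
    return set(adj)
```

For example, a triangle with one pendant vertex has a nonempty 2-core (the triangle) but an empty 3-core, since every triangle vertex has degree at most 3 before peeling and degree 2 inside the core.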
\subsection{Relation between \texorpdfstring{$\wh{\mathcal{G}}_i$}{Gi} and \texorpdfstring{$\wh{\mathcal{G}}^k_i$}{Gki}}
Our main idea is to compute high-dimensional persistence features on the associated graph cores. Note that a $k$-clique (a complete subgraph of order $k$) in $\mathcal{G}$ induces a $(k-1)$-simplex in the clique complex $\wh{\mathcal{G}}$.
The clique complex $\wh{\mathcal{G}}$ is a simplicial complex of dimension $K-1$ where $K$ denotes the degeneracy of $\mathcal{G}$, i.e. $K=\max\{k\mid \mathcal{G}^k\neq \emptyset\}$. That is, $\wh{\mathcal{G}}$ contains a $(k-1)$-simplex if and only if its $k$-core $\mathcal{G}^k$ is not empty.
For any $i,k$, we have the $k$-core of $\mathcal{G}_i$ contained in $\mathcal{G}_i$ by construction, i.e. $\mathcal{G}^k_i\subset \mathcal{G}_i$. This implies that the same holds for their clique complexes, i.e. $\wh{\mathcal{G}}^k_i\subset \wh{\mathcal{G}}_i$. On the other hand, if one restricts the original filtering function $f:\mathcal{V}\to \mathbb{R}$ to the vertices $\mathcal{V}^k$ of the $k$-core of $\mathcal{G}$, we have $f:\mathcal{V}^k\to\mathbb{R}$. By using the same thresholds for $f:\mathcal{V}^k\to\mathbb{R}$, we obtain the filtration $\wh{\mathcal{G}}^k_0\subset \wh{\mathcal{G}}^k_1\subset \wh{\mathcal{G}}^k_2\subset ...\subset \wh{\mathcal{G}}^k_m$. This will induce the persistence diagram $PD_r(\mathcal{G}^k)$ for any dimension $r$.
Since for any $i,k$, $\wh{\mathcal{G}}^k_i\subset \wh{\mathcal{G}}_i$, we have the following diagram.
\begin{equation}\label{eqn1}
\begin{array}{ccccccc}
\wh{\mathcal{G}}^k_0 & \subset & \wh{\mathcal{G}}^k_1 & \subset & ... & \subset & \wh{\mathcal{G}}^k_m \\
\cap & \ & \cap & \ & \ & \ & \cap \\
\wh{\mathcal{G}}_0 & \subset & \wh{\mathcal{G}}_1 & \subset & ...& \subset & \wh{\mathcal{G}}_m
\end{array}
\end{equation}
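The nested subgraphs entering this diagram can be sketched directly. The following Python function (our own illustration, assuming an explicit vertex set, edge set, and a filtering dictionary) builds the sublevel subgraphs $\mathcal{G}_0\subset\dots\subset\mathcal{G}_m$: a vertex enters at its filtering value, and an edge enters once both endpoints are present.

```python
def sublevel_filtration(vertices, edges, f):
    """Nested subgraphs G_0 ⊆ G_1 ⊆ ... ⊆ G_m of the sublevel filtration.

    vertices: iterable of vertices; edges: set of (u, w) pairs;
    f: dict mapping each vertex to its filtering value.
    An edge appears as soon as both of its endpoints are present
    (the induced-subgraph convention used for clique complexes).
    """
    thresholds = sorted(set(f.values()))
    steps = []
    for a in thresholds:
        V = {v for v in vertices if f[v] <= a}
        E = {e for e in edges if e[0] in V and e[1] in V}
        steps.append((V, E))
    return steps
```

On a path graph $1-2-3$ with the degree function, the endpoints (degree 1) enter first with no edges, and the full path appears at threshold 2.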
Notice that for any $j\geq k-1$, if there is a $j$-cycle $\sigma$ living in $C_j(\wh{\mathcal{G}}_i^k)$, then we have $\sigma\in C_j(\wh{\mathcal{G}}_i)$ as $\wh{\mathcal{G}}^k_i\subset \wh{\mathcal{G}}_i$. In the following, we show that for these cycles the converse is also true, and we establish the equivalence at the homology level.
\vspace{.2cm}
\begin{remark} \label{remark:restrict_f} \normalfont [Restriction of $f$ to $\mathcal{V}^k$] Notice that the filtering function $f:\mathcal{V}^k\to \mathbb{R}$ on $\mathcal{V}^k$, the vertices of $\mathcal{G}^k$, is defined directly by restricting values of $f:\mathcal{V}\to\mathbb{R}$ to the subset $\mathcal{V}^k\subset \mathcal{V}$. In particular, if $f:\mathcal{V}\to\mathbb{R}$ is a function coming from the graph attributes (such as vertex degree), then $f:\mathcal{V}^k\to\mathbb{R}$ may not be the same function coming from the graph attributes induced by the graph $\mathcal{G}^k$. For example, let $f$ be the degree function on $\mathcal{V}$, the vertices of $\mathcal{G}$. Then, for any $w\in \mathcal{V}^k$, $f(w)$ is the degree of $w$ in $\mathcal{G}$, \textit{not its degree in $\mathcal{G}^k$}. While the $k$-core graph $\mathcal{G}^k$ changes, we \textit{do not update} the values of $f$ on $\mathcal{V}^k$ according to its attribute definition in $\mathcal{G}^k$, but we keep the same values in the original function $f:\mathcal{V}\to\mathbb{R}$ for the remaining vertices in $\mathcal{V}^k\subset \mathcal{V}$. In graph terms, this corresponds to computing vertex filtering (activation) values on the original graph but using the edges of the reduced graph to extract simplices.
\end{remark}
\subsection{CoralTDA Reduction}
Our CoralTDA technique shows that lower degree vertices do not affect higher persistence diagrams, i.e., CoralTDA yields exact results. Note that in the following result, even though the graph size changes, we keep the same filtering function $f:\mathcal{V}\to\mathbb{R}$ with the original values; see Remark~\ref{remark:restrict_f} for further details. We give the proof of the following theorem in the Appendix.
\begin{theorem} \label{thm:kcores} Let $\mathcal{G}$ be an unweighted connected graph. Let $f:\mathcal{V}\to\mathbb{R}$ be a filtering function on $\mathcal{G}$. Let $PD_k(\mathcal{G},f)$ represent the $k^{th}$ persistence diagram for the sublevel filtration of the clique complexes. Let $\mathcal{G}^k$ be the $k$-core of $\mathcal{G}$. Then, for any $j\geq k$
$$PD_j(\mathcal{G},f)=PD_j(\mathcal{G}^{k+1},f).$$
\end{theorem}
{\em Outline of the proof:} We show that for any nontrivial $k$-homology class $\sigma$ in the original clique complex $\wh{\mathcal{G}}$, a generating $k$-cycle $S$ in this homology class also lives in a much smaller subcomplex: the clique complex $\wh{\mathcal{G}}^{k+1}$ of the $(k+1)$-core. That is, we prove that any vertex in the $k$-cycle $S$ must have a degree of at least $k+1$, where this degree count comes only from the $k$-simplices of $S$, and removing the lower degree vertices from $\mathcal{G}$ has no effect on the existence of such an $S$.
The above result indicates that the $k^{th}$ persistence diagram can be obtained by considering only the $(k+1)$-core of a graph. CoralTDA is thus an effective tool for reducing the computational cost of higher persistence diagrams. See \Cref{fig:vertex} for reduction results on various datasets.
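As a small sanity check of Theorem~\ref{thm:kcores} at $k=1$ (our own illustration, not the paper's code): for a graph whose clique complex has no triangles, the first Betti number is $b_1=|E|-|V|+c$ with $c$ the number of connected components, and the theorem says it is carried entirely by the 2-core. The toy graph below, a 4-cycle with a pendant and an isolated vertex, is an assumption for illustration.

```python
def two_core(adj):
    """2-core: repeatedly strip vertices of degree < 2 (naive peeling)."""
    adj = {v: set(n) for v, n in adj.items()}
    low = [v for v in adj if len(adj[v]) < 2]
    while low:
        v = low.pop()
        if v not in adj:
            continue
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) < 2:
                low.append(u)
        del adj[v]
    return adj

def betti_1(adj):
    """b1 = |E| - |V| + #components for a graph whose clique complex is
    triangle-free (so no 1-dimensional hole gets filled by a 2-simplex)."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    seen, comps = set(), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1
        stack = [s]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v])
    return m - n + comps

# 4-cycle a-b-c-d with a pendant e and an isolated vertex f (triangle-free)
G = {"a": {"b", "d", "e"}, "b": {"a", "c"}, "c": {"b", "d"},
     "d": {"a", "c"}, "e": {"a"}, "f": set()}
```

Stripping the pendant and the isolated vertex leaves the 4-cycle, and the single one-dimensional hole survives unchanged.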
\begin{figure*}[ht]
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/clusCoralSocial2.png} \end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/clusCoralSocial3.png}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/clusCoralSocial4.png} \end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/clusCoralSocial5.png}
\end{subfigure}
\caption{\footnotesize Clustering coefficients vs. number of topological features in Facebook and Twitter datasets. Each data point is a graph instance. We observe hundreds of higher topological features in these datasets which can be highly useful for various graph learning tasks.}
\label{fig:clussocial}
\end{figure*}
\begin{remark} \label{remark:kahle} \normalfont [Higher PDs in Random Networks vs. Real-Life Networks] Note that by Kahle's seminal result~\cite{kahle2009topology}, to observe nontrivial Betti numbers for higher dimensions in Erd\H{o}s--R\'enyi graphs $G(n,p)$, the average degree must be very high. In particular, for a graph $G(n,p)$, in order to have nontrivial $k^{th}$-homology in its clique complex, Kahle proved that for $p=n^\alpha$, $\alpha$ should be between $-1/k$ and $-1/(k+1)$. In terms of average degree $n\times p$, this means the average degree should be between $n^{(k-1)/k}$ and $n^{k/(k+1)}$. For instance, for dimension $k=2$, the average degree should be between $\sqrt{n}$ and $\sqrt[3]{n^2}$. For a graph order of $n=1000$, this implies that the average degree should be between $31$ and $100$ to have a nontrivial second homology in random networks. However, in real-life networks, our results show that higher Betti numbers are prevalent in much sparser graphs (see \cref{fig:clussocial} and appendix \cref{fig:cluskernel}). These findings can be further used to derive error bounds and the associated loss of topological information when $G(n,p)$ is employed to approximate real-world network phenomena, for instance, in the case of synthetic power grid networks and other cyber-physical systems.
\end{remark}
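The average-degree window in the remark above follows by simple arithmetic from Kahle's exponent bounds; the helper below (our own restatement) makes the $n=1000$, $k=2$ numbers explicit.

```python
def kahle_degree_window(n, k):
    """Average-degree range [n^{(k-1)/k}, n^{k/(k+1)}] within which an
    Erdős–Rényi graph G(n, p) can have nontrivial k-th homology in its
    clique complex, restated from the exponent bounds
    -1/k < alpha < -1/(k+1) for p = n^alpha (average degree = n * p)."""
    return n ** ((k - 1) / k), n ** (k / (k + 1))
```

For $n=1000$ and $k=2$ this reproduces the interval quoted in the remark: roughly $\sqrt{1000}\approx 31$ up to $1000^{2/3}=100$.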
In the following, we give another effective method to reduce the size of a graph $\mathcal{G}$ without affecting the persistence diagrams $PD_r(\mathcal{G})$ \textit{for any dimension} $r\geq 0$.
\section{PrunIT Algorithm}
\label{sec:trim}
This section introduces another effective reduction technique for computing persistence diagrams of graphs induced by a filtering function. In particular, we show that for a graph $\mathcal{G}$, and filtering function $f:\mathcal{V}\to \mathbb{R}$, removing (pruning) specific vertices from the graph does not change the persistent homology at any level. The result is valuable because the algorithm may reduce the vertex set considerably (Table \ref{tab:prunitresults}). Furthermore, as our experiments show, the reduced vertex set can significantly lower the simplex count, leading to much shorter computational times for persistent homology (see Figure \ref{fig:vertex} and appendix Figure \ref{fig:complex}).
In algebraic topology, homotopy is a very effective tool to compute topological invariants like homology and the fundamental group \cite{hatcher2002algebraic}. These topological invariants are homotopy invariant, meaning that if two spaces are homotopy equivalent, then their corresponding topological invariants are the same, e.g., $X\sim Y \Rightarrow H_i(X)=H_i(Y)$. In the following, we give a very natural homotopy construction to simplify a graph.
For a given filtering function $f:\mathcal{V}\to \mathbb{R}$, let $\wh{\mathcal{G}}_i$ be the clique complex of $\mathcal{G}_i$ which induces the sublevel filtration $\wh{\mathcal{G}}_0\subset \wh{\mathcal{G}}_1\subset \wh{\mathcal{G}}_2\subset ...\subset \wh{\mathcal{G}}_m$. Let $PD_k(\mathcal{G},f)$ represent the $k^{th}$ persistence diagram for the sublevel filtration $\{\wh{\mathcal{G}}_i\}$ as described above.
Now, we define \textit{dominated vertices} in $\mathcal{G}$. Define the closed neighborhood of $u_0$ as $N(u_0)=\{u_0\}\cup\{v\in \mathcal{V}\mid e_{u_0v}\in \mathcal{E}\}$. In particular, $N(u_0)\subset \mathcal{V}$ is the set consisting of all vertices adjacent to $u_0$, together with $u_0$ itself.
\begin{definition} A vertex $u$ is \textit{dominated by} the vertex $v$ in $\mathcal{G}$ if $N(u)\subset N(v)$. If there is such a vertex $v$, we call $u$ a \textit{dominated} vertex of $\mathcal{G}$ (see Figure~\ref{fig:toydomination}).
\end{definition}
\begin{wrapfigure}{r}{0.28\textwidth}
\vspace{-.2cm}
\begin{center}
\includegraphics[width=0.24\textwidth]{figs/domination.png}
\caption{\footnotesize Vertex 3 dominates vertices 1 and 2 because all neighbors of 1 or 2 are neighbors of 3. There are no other dominated vertices.}
\label{fig:toydomination}
\end{center}
\vspace{-.2cm}
\end{wrapfigure}
Removing a vertex $u$ from a graph $\mathcal{G}$ creates the natural subgraph of $\mathcal{G}$ obtained by removing the vertex $u$ and all adjacent edges from $\mathcal{G}$, i.e. $\mathcal{G}-\{u\}=\mathcal{G}'=(\mathcal{V}',\mathcal{E}')$ where $\mathcal{V}'=\mathcal{V}-\{u\}$, and $\mathcal{E}'=\mathcal{E}-\{e_{uw}\in \mathcal{E}\}$ for any $w$.
We can alternatively express these via the star notion. The \textit{star} $\mathbf{St}(u)$ of a vertex $u$ is the union of all simplices which contains $u$. Then, $u$ is dominated by $v$ if $\mathbf{St}(u)\subset \mathbf{St}(v)$. Similarly, removing a vertex $u$ from $\mathcal{G}$ corresponds to removing $\mathbf{St}(u)$ from the clique complex $\wh{\mathcal{G}}$, i.e. $\wh{\mathcal{G}}-\mathbf{St}(u)=\wh{\mathcal{G}}'$. A useful result is that removing a dominated vertex does not affect the homotopy type of the corresponding clique complexes.
\vspace{.2cm}
\begin{lemma} \label{lem:folding} Let $u$ be a dominated vertex in $\mathcal{G}$. Let $\mathcal{G}'=\mathcal{G}-\{u\}$. Then the clique complexes $\wh{\mathcal{G}}$ and $\wh{\mathcal{G}}'$ are homotopy equivalent, i.e. $\wh{\mathcal{G}}\sim \wh{\mathcal{G}}'$.
\end{lemma}
\begin{proof} Notice that $\mathcal{G}'$ is a subgraph of $\mathcal{G}$, and hence $\wh{\mathcal{G}'}$ is a subcomplex in $\wh{\mathcal{G}}$.
Let $u$ be dominated by $v$ in $\mathcal{G}$. Then, we can define a deformation retraction from $\wh{\mathcal{G}}$ to $\wh{\mathcal{G}}'$ by pushing $u$ toward $v$ along the edge $e_{uv}$. In other words, by using the simplicial coordinates, one can define a homotopy $F:\wh{\mathcal{G}}\times I\to \wh{\mathcal{G}}$ which is the identity on $\wh{\mathcal{G}}'$ and pushes all the faces in $\wh{\mathcal{G}}-\wh{\mathcal{G}}'$ to the corresponding faces in $\wh{\mathcal{G}}'$. This gives a homotopy equivalence $\wh{\mathcal{G}}\sim \wh{\mathcal{G}}'$. To visualize, in Figure~\ref{fig:toydomination}, one can push vertex $1$ in the clique complex $\wh{\mathcal{G}}$ towards vertex $3$ along the edge between them. After the push, the $2$-simplices $[1,2,3]$ and $[1,3,4]$ are pushed to the edges $[2,3]$ and $[3,4]$, respectively. See \cite{adamaszek2013clique, boissonnat2018computing} and \cite[Lemma 2.2]{boulet2010simplicial} for details.
\end{proof}
\begin{remark} \label{remark:collapsing} \normalfont [Collapsing] Note that this collapsing operation is an adaptation of a well-known notion from algebraic topology, the \textit{deformation retract}~\cite{hatcher2002algebraic}, to the simplicial complex setting. This operation keeps the homotopy type the same, and hence the homology does not change with this reduction. In \cite{adamaszek2013clique, boissonnat2018computing, boulet2010simplicial}, this is called \textit{folding} ($\mathcal{G}$ folds onto $\mathcal{G}-\{u\}$) or a \textit{strong collapse}. In these papers, the algorithm reduces the simplicial complexes in the filtration one by one so that their associated clique complexes keep the same homotopy type. Our contribution here is to adapt this operation to graph filtrations and define a smaller subgraph before the filtration step so that the induced simplicial complexes are homotopy equivalent. Since we prune the graph at the beginning of the process, our algorithm significantly reduces the computational costs for the induced persistence diagrams.
\end{remark}
In the following, we introduce the \textit{PrunIT Algorithm} by showing that removing a dominated vertex does not change the persistence diagrams of the graph. We give the proof in Appendix \ref{sec:proofs}.
\begin{theorem} \label{thm:reduction} Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be an unweighted graph, and $f:\mathcal{V}\to\mathbb{R}$ be a filtering function. Let $u\in\mathcal{V}$ be dominated by $v\in \mathcal{V}$ and $f(u)\geq f(v)$. Then, removing $u$ from $\mathcal{G}$ does not change the persistence diagrams for sublevel filtration, i.e. for any $k\geq 0$ $$PD_k(\mathcal{G},f)=PD_k(\mathcal{G}-\{u\},f).$$
\end{theorem}
\noindent {\em Outline of the proof:} The main idea is to employ the collapsing idea in the simplicial complexes of the filtration $\wh{\mathcal{G}}_0\subset \wh{\mathcal{G}}_1\subset \wh{\mathcal{G}}_2\subset \dots\subset \wh{\mathcal{G}}_m$ in a suitable way. In particular, the lemma above shows that if a vertex $u$ is dominated by a vertex $v$ in $\wh{\mathcal{G}}_i$, then removing $\mathbf{St}(u)$ from $\wh{\mathcal{G}}_i$ does not change the homotopy type. Hence, if we ensure that when $u$ first appears in the filtration $\{\wh{\mathcal{G}}_i\}$, the dominating vertex $v$ is already there, then $u$ can be removed from all the simplicial complexes in the filtration; removing $u$ from the original graph before building the simplicial complexes does not affect the homotopy type of the complexes in the filtration. The condition $f(u)\geq f(v)$ makes sure that whenever $u$ exists in $\{\wh{\mathcal{G}}_i\}$, the dominating vertex is already there, so $u$ can be removed from all simplicial complexes, and hence from the graph $\mathcal{G}$.
Notice that the primary condition for removing dominated vertices from the graph ensures that the dominated vertex enters the filtration after its dominating counterpart. With the PrunIT algorithm, we show that removing the dominated vertex does not change the homotopy type of the simplicial complexes in the filtration. As homotopy equivalence implies the equivalence of homology groups at all levels, the reduction with this algorithm works in all dimensions. Furthermore, while the coral reduction works only from the corresponding dimension on ($j\geq k$), the PrunIT algorithm works in every dimension.
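A minimal Python sketch of the resulting pruning loop (our own illustration, assuming an adjacency-set representation; the actual implementation details of PrunIT may differ) repeatedly removes any vertex $u$ dominated by a neighbor $v$ with $f(u)\geq f(v)$, which by Theorem~\ref{thm:reduction} preserves every persistence diagram.

```python
def prunit(adj, f):
    """PrunIT sketch for the sublevel filtration: repeatedly delete any
    vertex u dominated by a neighbour v (closed neighbourhoods,
    N(u) ⊆ N(v)) with f(u) >= f(v).  By the theorem above, this
    preserves PD_k for every k >= 0."""
    adj = {v: set(n) for v, n in adj.items()}
    removed = True
    while removed:
        removed = False
        closed = {v: adj[v] | {v} for v in adj}
        for u in list(adj):
            for v in adj[u]:
                if closed[u] <= closed[v] and f[u] >= f[v]:
                    for w in adj[u]:
                        adj[w].discard(u)
                    del adj[u]
                    removed = True
                    break
            if removed:
                break  # neighbourhoods changed; recompute before continuing
    return adj
```

On a contractible example, a triangle with one pendant vertex and a constant filtering function, the loop collapses the whole graph down to a single vertex, consistent with a persistence diagram holding one 0-dimensional feature.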
\begin{remark} \label{remark:superlevel} \normalfont [Superlevel Filtration] \normalfont The same proof applies to the superlevel filtration by changing the condition $f(u)\geq f(v)$ to $f(u)\leq f(v)$ in the theorem. In particular, if $PD_k^\mathrm{v}(\mathcal{G},f)$ represents the $k^{th}$ PD for superlevel filtration, then with the condition $f(u)\leq f(v)$, we would have $PD_k^\mathrm{v}(\mathcal{G},f)=PD_k^\mathrm{v}(\mathcal{G}-\{u\},f)$ for any $k\geq 0$.
Notice that if one takes $f$ to be the degree function and uses the superlevel filtration, then the theorem automatically holds for any dominated vertex, as $\deg(u)\leq \deg(v)$ whenever $u$ is dominated by $v$.
\end{remark}
\begin{remark} \label{remark:detecting_dominated} \normalfont [Detecting Dominating Vertices]
The dominating vertices can be computed by using the following approach (the algorithm is given in Appendix Section~\ref{sec:algorithms}). Let $\mathcal{A}=(a_{ij})$ be the adjacency matrix of a graph $\mathcal{G}$. Given $v_{i_0}\in\mathcal{V}$, consider all $j$'s with $a_{i_0j}=1$. Check if $v_{i_0}$ is dominated by $v_j$ by comparing the rows $R_{i_0}$ and $R_j$, i.e., for any $k\neq j$ with $a_{i_0k}=1$, check whether $a_{jk}=1$. If this holds, $v_j$ dominates $v_{i_0}$. Removing the $i_0^{th}$ row $R_{i_0}$ and the $i_0^{th}$ column $C_{i_0}$ from $\mathcal{A}$ corresponds to removing $v_{i_0}$ from $\mathcal{G}$. Essentially, vertex $v_{i_0}$ is compared to each neighbor $v_j$ by checking whether each neighbor of $v_{i_0}$ is also a neighbor of $v_j$. These checks require iterating over each vertex, searching vertex neighbors in the graph, and getting the neighbors of each neighbor. The computational complexity is therefore $\mathcal{O}(|\mathcal{V}|\times d^2)$, where $d$ is the average degree in the graph.
\end{remark}
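The row-comparison check above can be expressed equivalently on adjacency sets; the sketch below (our own illustration, on a hypothetical toy graph) returns, for each dominated vertex, one vertex dominating it.

```python
def dominating_map(adj):
    """For each vertex u, find a neighbour v with N(u) ⊆ N(v) (closed
    neighbourhoods), i.e. a vertex dominating u.  This mirrors the
    row-comparison on the adjacency matrix described in the remark,
    at cost O(|V| * d^2) for average degree d."""
    closed = {v: adj[v] | {v} for v in adj}
    result = {}
    for u in adj:
        for v in adj[u]:
            if closed[u] <= closed[v]:
                result[u] = v
                break
    return result
```

On a triangle $\{1,2,3\}$ with a pendant vertex $4$ attached to $3$, every vertex except $3$ is dominated, and the pendant is dominated by its unique neighbor.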
While our main focus is the most common method in applications of PH to graphs, sublevel/superlevel filtration, our PrunIT algorithm also works with another common method, the power filtration~\cite{aktas2019persistence}: removing a dominated vertex does not change the persistence diagrams.
\begin{theorem} \label{thm:Prunit_power} [PrunIT for Power Filtration] Let $\mathcal{G}=(\mathcal{V},\mathcal{E})$ be an unweighted connected graph. Let $\wh{PD}_k(\mathcal{G})$ represent the $k^{th}$ persistence diagram of $\mathcal{G}$ with the power filtration. Let $u\in\mathcal{V}$ be dominated by any other vertex in $\mathcal{V}$. Then, for any $k\geq 1$, $$\wh{PD}_k(\mathcal{G})=\wh{PD}_k(\mathcal{G}-\{u\}).$$
\end{theorem}
The proof of this result is given in appendix Section~\ref{sec:proof_Prunit}.
\subsection*{Combining the CoralTDA and PrunIT Algorithms}
Even though both algorithms are quite effective by themselves, we significantly reduce the computational costs (\Cref{fig:combinedresults}) by combining them as follows. For a given graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ and filtering function $f:\mathcal{V}\to \mathbb{R}$, one can start by trimming all dominated vertices with respect to $f$, and get a smaller graph $\mathcal{G}'$. We have already proven that $PD_k(\mathcal{G})=PD_k(\mathcal{G}')$. Then, one can take the $(k+1)$-core of this smaller graph $\mathcal{G}'$ to compute the higher persistence diagrams of the original graph $\mathcal{G}$ as before. In particular, by applying both reduction algorithms, for any $k\geq 0$, we obtain $$PD_k(\mathcal{G})=PD_k(\mathcal{G}')=PD_k((\mathcal{G}')^{k+1}).$$
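The combined pipeline can be sketched end to end in a few lines of Python (our own self-contained illustration, assuming an adjacency-set representation and a toy graph): first prune dominated vertices, then peel down to the $(k+1)$-core.

```python
def prune_then_core(adj, f, k):
    """Combined reduction sketch: first the PrunIT step (remove vertices
    dominated under f), then the (k+1)-core of what is left.  By the
    results above, PD_k of the original graph can be computed on the
    output.  Returns the surviving vertex set."""
    adj = {v: set(n) for v, n in adj.items()}
    # --- PrunIT step ---
    removed = True
    while removed:
        removed = False
        closed = {v: adj[v] | {v} for v in adj}
        for u in list(adj):
            if any(closed[u] <= closed[v] and f[u] >= f[v] for v in adj[u]):
                for w in adj[u]:
                    adj[w].discard(u)
                del adj[u]
                removed = True
                break  # recompute closed neighbourhoods after each removal
    # --- (k+1)-core step ---
    low = [v for v in adj if len(adj[v]) < k + 1]
    while low:
        v = low.pop()
        if v not in adj:
            continue
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) < k + 1:
                low.append(u)
        del adj[v]
    return set(adj)
```

On a 4-cycle with a pendant vertex and a constant filtering function, only the pendant is dominated, and the 2-core keeps the cycle carrying the 1-dimensional feature.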
\section{Experiments and Discussion}
\label{sec:experiments}
We apply our new approaches to three types of datasets. The details of datasets are provided in Table~\ref{tab:prunitresults} and appendix Table~\ref{tab:dataset}.
\noindent {\em Graph classification datasets} consist of biological kernel graphs~\cite{KKMMN2016} and ego networks from \texttt{TWITTER} and \texttt{FACEBOOK}~\cite{mcauley2012learning}.
\noindent {\em Node classification datasets} include \texttt{CITESEER} and \texttt{CORA}~\cite{nr} and the Open Graph Benchmark citation (paper-cites-paper) networks \texttt{OGB-ARXIV} and \texttt{OGB-MAG}~\cite{hu2020open}. \noindent {\em Large networks dataset} contains 11 large networks of 100K-1M vertices from the Stanford Repository~\cite{snapnets}.
We used an AMD Ryzen 5 (2100 MHz, 4 cores) computer in our R, Python, and Java experiments.
We evaluate both algorithms by comparing vertex and edge sets and the total run time for the reduced graph with respect to the original graph. In the rest of this manuscript, we compute the vertex set reduction as $100\times( \left |\mathcal{V}\right |-\left |\mathcal{V}^\prime\right |) / \left |\mathcal{V}\right |$ where $\mathcal{V}^\prime$ is the vertex count in the reduced graph. Edge and time reductions are computed similarly.
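The reduction metric above is a one-line computation; we restate it as a small helper for concreteness.

```python
def reduction_pct(before, after):
    """Percentage reduction 100 * (|V| - |V'|) / |V|; the same formula is
    used for edge counts and run times."""
    return 100.0 * (before - after) / before
```

For example, shrinking a vertex set from 200 to 50 vertices is a 75% reduction.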
\subsection{Reduction on Graph Classification Datasets}
In this task, our goal is to evaluate the reduction of computational costs when we use the CoralTDA and PrunIT algorithms on datasets chosen from different graph classification tasks. In these experiments, we used one of the most common filtering functions: the degree function with sublevel filtration.
\begin{figure*}[!ht]
\centering
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{figs/vCoral1.png}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{figs/vCoral2.png}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{figs/vCoral3.png}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{figs/vCoral4.png}
\end{subfigure}
\caption{\footnotesize CoralTDA vertex reduction in graph and node classification datasets (higher is better). Reduction values are averages from graph instances of the datasets (\texttt{CORA} and \texttt{CITESEER} node classification datasets contain a single graph instance only). \texttt{FACEBOOK} and \texttt{TWITTER} datasets are reduced by 20\% for $k>4$, whereas in other datasets graphs are reduced to empty sets.}
\label{fig:vertex}
\end{figure*}
In Figure~\ref{fig:vertex}, we show the vertex reduction when using CoralTDA for computations of $PD_k(\mathcal{G})$ for dimensions (Betti) $k=1$ to $k=5$. At dimension $k=4$ and $k=5$, CoralTDA reduces 10 datasets by 100\%, i.e., these datasets have trivial $PD_k(\mathcal{G})$ for $k\geq 4$. Even at smaller dimensions, CoralTDA can reduce the vertex set by 25\%-75\%.
Figure~\ref{fig:reductiondomination} shows reduction percentages by the PrunIT algorithm.
\texttt{FIRSTMM} and \texttt{SYNNEW} datasets are reduced by less than 10\%; however, the other 11 datasets are reduced by at least 35\%. The lower reduction on \texttt{FIRSTMM} and \texttt{SYNNEW} is due to stronger cores in these networks. \texttt{SYNNEW} is synthetically created, but \texttt{FIRSTMM} is created from 3D point-cloud data and categories of various household objects. We believe that the physical proximity of similar objects (e.g., chairs are close to each other) in a household creates a denser community structure in the \texttt{FIRSTMM} dataset, which in turn results in strong cores.
We further report reductions in computational time (Figure~\ref{fig:time}), edge set (Figure~\ref{fig:edge}), and simplex count (Figure~\ref{fig:complex}) in the Appendix.
\subsection{Reduction on Node Classification Datasets}
In this task, our goal is to compute the reduction of computational costs by using the CoralTDA and PrunIT algorithms on datasets chosen from node classification tasks. The CoralTDA results are computed over the \texttt{CITESEER} and \texttt{CORA} networks and shown in \Cref{fig:vertex}, with more than 20\% reduction for first- and higher-dimensional persistence.
\begin{figure*}[t]
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[height=.65\linewidth]{figs/dominatingCoral.jpg}
\caption{\footnotesize Vertex reduction by PrunIT algorithm in the superlevel filtration. Results are averages of graph instances from the datasets. }
\label{fig:reductiondomination}
\end{subfigure}
~
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[height=.65\linewidth]{figs/dominatingOGB.png}
\caption{\footnotesize PrunIT reduction in OGB node classification dataset. Each data point is an ego network. Even for large networks, time reduction rates can reach 75\%.}
\label{fig:dominatedOGB}
\end{subfigure}
\caption{\footnotesize PrunIt vertex and time reduction in graph datasets.}
\label{fig:new1}
\end{figure*}
In node classification, we can also analyze the $k$-hop ($k\ge 1$) neighborhood of a vertex with topological features (such as Betti-0) and use the computed persistence diagram to classify the vertex. Such an approach has yielded SOTA results with significant improvement in accuracy by using 0-dimensional persistence~\cite{chen2021topological}. However, the computational costs of persistent homology are non-negligible in large graphs, even for 0-dimensional features. For example, in the Open Graph Benchmark datasets~\cite{hu2020open}, one must compute persistence diagrams for each vertex in graphs with 100K to 111M vertices.
We apply the PrunIT algorithm to two graphs, Arxiv and MAG, from the Open Graph Benchmark to compute time reduction in persistence diagram computations. We follow the approach in~\cite{chen2021topological} and extract the 1-hop neighborhood of each ego vertex.
We use the degree function as the filtering function, as before. In \Cref{fig:dominatedOGB}, we show the reduction in computational time for 0-dimensional persistence. We compute the time costs of PrunIT by considering all the algorithm steps: finding and removing the dominated vertices, creating an induced graph with the remaining vertices, and running 0-dimensional persistent homology on the graph by using vertex degrees as the filtering function. As \Cref{fig:dominatedOGB} shows, we see more than 25\% reduction in computation time in most graphs. Specifically, on average, computation times of 0-dimensional persistence on \texttt{OGB-ARXIV} networks are reduced by 37\%, and those of \texttt{OGB-MAG} networks are reduced by 23\%. The results show that we can mitigate the computational costs of persistent homology by using the PrunIT algorithm.
\begin{table}[h]
\centering
\scriptsize
\caption{\footnotesize PrunIt reductions in the number of vertices and edges. }
\label{tab:prunitresults}
\begin{tabular}{l r c r c}
\toprule
Dataset&$|V|$&$|V|$ Reduction $(\uparrow)$&$|E|$ & $|E|$ Reduction $(\uparrow)$\\
\midrule
com-youtube&1134890&59\%&2987624&25\%\\
com-amazon&334863&37\%&925872&40\%\\
com-dblp&317080&72\%&1049866&65\%\\
web-Stanford&281903&67\%&1992636&76\%\\
emailEuAll&265214&95\%&364481&94\%\\
soc-Epinions1&75879&57\%&405740&14\%\\
p2pGnutella31&62586&46\%&147892&20\%\\
Brightkite\_edges&58228&48\%&214078&21\%\\
Email-Enron&36692&76\%&183831&38\%\\
CA-CondMat&23133&69\%&93439&65\%\\
oregon1\_010526&11174&62\%&23409&48\%\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Reduction on Large Networks}
\label{sec:combined}
Our goal is to combine PrunIt and CoralTDA algorithms to achieve the maximum vertex and edge reduction in large networks in these experiments.
Table~\ref{tab:prunitresults} shows that on the biggest network, \texttt{com-youtube}, we eliminate 59\% of the vertices when we apply PrunIT alone (62\% on average over all datasets). The reduction is as high as 95\% (in \texttt{emailEuAll}). Similarly, PrunIT yields a significant edge reduction: 40\% of all edges are removed on average. \Cref{fig:combinedresults} shows the reduction when we apply both CoralTDA and PrunIT on large networks. Even for low cores of 2 and 3, the combined algorithms reach a vertex reduction rate of 78\%. These results show that our algorithms can effectively reduce large networks to more manageable sizes.
\begin{figure}[h]
\centering
\includegraphics[width=.7\linewidth]{figs/prunitCoralReduction2.png}
\caption{\footnotesize Vertex reduction results for 11 large datasets after the application of PrunIt and CoralTDA algorithms. \texttt{emailEuAll} is the outlier for the 2nd and 3rd cores (shown with a crossed square).}
\label{fig:combinedresults}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have proposed two new highly effective algorithms to significantly reduce the computational costs of TDA methods on graphs. While the coral reduction is very effective for higher persistence diagrams, PrunIT is highly efficient in general. Our experiments have shown that even for lower dimensional topological features, such as $k=1$, our methods can reduce the graph order by up to 95\% on some datasets, which alleviates computational costs substantially. Furthermore, in most graph datasets we reduce graph sizes by 100\% for the third or higher dimensions. Our methods provide a novel solution for the efficient application of powerful TDA methods on large networks and build a bridge between graph theory and TDA, opening a pathway for broader applicability of topological graph learning in practice.
\section{Acknowledgments}
This material is based upon work sponsored by the Canadian NSERC Discovery Grant {RGPIN-2020-05665}, NSF of USA under award number ECCS 2039701,
OAC-1828467, DMS-1925346, CNS-2029661, OAC-2115094, ARO award W911NF-17-1-0356, and Simons Collaboration Grant \# 579977.
Part of this material is also based upon work supported by (while serving at) the National Science Foundation.
Any opinions, findings, and
conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views
of the National Science Foundation.
\bibliographystyle{plain}
\section{Introduction}
Taking a higher dimensional theory as a starting point, more than one path can lead to a lower dimensional theory. The conventional and most physical connection is obtained by Kaluza-Klein reduction: the starting point is chosen from a specific class of solutions (vacua) of the higher dimensional theory (essentially solutions that factorize between the dimensions one wants to keep and those one would like to discard). The lower dimensional theory is obtained by expansion around one such solution and describes light fluctuations around it. A different requirement one can impose on the lower dimensional theory, which sometimes goes under the name of non-linear reduction, is that its solutions lift to solutions of the higher dimensional theory. The reduction of 11d SUGRA on topological $S^7$ to ${\cal N}=8$ SUGRA in 4d \cite{dWN} is a prominent example of such a relation between a higher and lower dimensional theory. Note that it is not guaranteed that such an ansatz captures all light degrees of freedom around each of the incorporated higher dimensional solutions.
Where is the familiar Calabi-Yau reduction of type II theories \cite{CHSW,BCF} situated with regard to these two possibilities? The reduction can be performed by choosing a Ricci flat metric ${\bf g_0}$ on the Calabi-Yau $X$, and expanding the fields in terms of ${\bf g_0}$-harmonic forms $\omega_i$. We will refer to this in the following as a base point dependent reduction, since we are expanding around a solution ${\bf g_0}$, the hallmark of a Kaluza-Klein reduction. However, we generically have a continuous family of solutions ${\bf g}(t)$, and we can free our ansatz from the base point dependence on ${\bf g_0}$ by expanding in ${\mathbf g}(t)$-harmonics $\omega_i(t)$ instead. The $t$ are metric moduli, and one might hence expect the reduction of the metric sector of the theory to be significantly modified by this step $\omega_i \rightarrow \omega_i(t)$. This does not happen, as we will review below, as the 4d theory ends up depending only on the cohomology classes of the forms $\omega_i(t)$ \cite{Strominger, CdlO, s2}, which of course do not vary with $t$.
Mainly due to this latter fact, the reduction can be performed without having an explicit expression for the expansion forms (a lucky circumstance, since no Ricci flat metrics on compact Calabi-Yau manifolds are explicitly known, let alone explicit expressions for harmonic forms). The 4d theory is expressed in terms of some topological and holomorphic data of the Calabi-Yau (the triple intersection number of the 2nd cohomology and the period matrices of the complex structure). This data is precisely what is needed to specify an (ungauged) ${\cal N}=2$ supergravity action in 4d, and organizes itself appropriately upon performing the reduction.
Above, the requirement we imposed on a non-linear reduction was that the solutions of the lower dimensional theory lift to solutions of the higher dimensional theory. How does the Calabi-Yau reduction fare on this account? Since the ungauged 4 dimensional ${\cal N}=2$ action does not exhibit a potential term, all constant values for the scalar fields are a solution to the 4d equation of motion, and by construction lift to solutions of the higher dimensional theory. While no proof of this lifting property for arbitrary solutions exists to our knowledge, it does hold for certain other prominent solutions such as ${\cal N}=2$ black holes \cite{HMT}.
Flux compactifications establish a connection between string theory and {\it gauged} ${\cal N}=2$ supergravity. Indeed, as first shown in \cite{PS}, nonvanishing expectation
values of the internal fluxes are described in the 4d effective theory by the scalars in the hypermultiplets picking up charges under the gauge fields in the vector multiplets. The fluxes contribute to the potential of the 10d theory, and this energy is reproduced correctly by the potential term in gauged supergravity. The reduction in the presence of fluxes
is still performed on a Calabi-Yau manifold \cite{M,CKLT,DA,LoMi,KK}, and the resulting theory has the same spectrum as its flux-less relative. In particular, it is based on expanding fields in the harmonic forms on the internal Calabi-Yau. The justification for this procedure is still not established (but see \cite{vafa, LMc}). Note that a Kaluza-Klein reduction would take the backreacted geometry as a starting point and would yield a 4d effective theory, generically non-supersymmetric, valid around a given VEV of the 4d scalar fields. The hope is that the procedure described above yields an effective theory encompassing multiple solutions of the 10d theory at different minima of its potential.\footnote{In fact, merely turning on fluxes can never result in a potential with minima at finite radius, as the contribution of fluxes to the potential energy is minimized when the fluxes are `diluted' in the decompactification limit. See \cite{KK} for one possibility to avoid this runaway behavior in the effective ${\cal N}=2$ context.}
Ignoring for the moment the various conceptual challenges posed by effective ${\cal N}=2$ descriptions of flux compactifications, one can consider gauged ${\cal N}=2$ theories in light of the swampland program \cite{VafaSL}: having obtained an ${\cal N}=2$ theory from compactification, can all of its possible gaugings be realized within string theory? Flux compactifications do not exhaust all possible gaugings. Recently, various authors \cite{GLMW, AFTV, GLW, HP,LM} have suggested that gauged ${\cal N}=2$ supergravity can also be obtained by compactifying on $SU(3)$ structure manifolds. These manifolds admit almost complex and symplectic structures and hence possess invariant forms $J$ and $\Omega$ (and nowhere vanishing spinors) just as Calabi-Yau manifolds do, but these structures are no longer required to be integrable. When considered as deformations of Calabi-Yau geometries (see e.g. \cite{T}), these ans\"atze supply the missing gaugings \cite{AFTV}. Once these reductions are better understood, however, they should be able to stand on their own feet (the swampland question then of course would arise in the opposite direction, possibly indicating that these manifolds can {\it always} be understood as deformations of Calabi-Yau manifolds).
The reduction proceeds by mimicking the ansatz for Calabi-Yau reductions. In the latter, the expansion forms $\omega_i(t)$ are specified geometrically (as harmonic forms) and their relation to the moduli space of Calabi-Yau metrics is known. That these forms satisfy all the properties needed for the reduction to go through and for the 4d action to assemble itself into ${\cal N}=2$ supergravity hence follows by consistency. By contrast, the space of metrics that should be considered in the more general $SU(3)$ structure case is not well understood. The procedure in the literature, which we shall follow and review in much greater detail below, has therefore been the following: to allow for manifolds with merely $SU(3)$ structure rather than $SU(3)$ holonomy, we must allow for some expansion forms to be non-closed. We then attempt to impose the minimal number of requirements on such a system of forms for the resulting four-dimensional theory to have the structure required by ${\cal N}=2$ supersymmetry.
The starting point for the analysis in this note is the above observation that in the case of CY reductions, the step from base dependent to base independent reduction, $\omega_i \rightarrow \omega_i(t)$, is unproblematic due to the reduction depending only on the cohomology classes of the forms $\omega_i(t)$. In the modified setup, such considerations do not apply (the expansion forms are not closed). As explained above, problems are expected to arise in the metric sector, and we hence expose the reduction of this sector to more scrutiny than has been hitherto done. We have essentially two results to report: for the base point dependent reduction to go through, certain differential conditions must be satisfied by the forms, but we demonstrate that these are equivalent to conditions that have been assumed to hold already. We find one additional constraint which is new and must be imposed. The step to a base point independent reduction requires imposing additional constraints, which we discuss. The constraints appear very restrictive.
Throughout this paper, we present our results in the framework of type IIA.
We begin in section \ref{s:theforms} by reviewing and completing the conditions that have appeared in the literature on the system of forms the reduction is to be based on, and listing the additional conditions needed for a base point independent reduction. In section \ref{s:rms}, we analyse the reduction of the metric sector of the theory. We derive the conditions for the base point dependent reduction to go through and see that these follow from the conditions imposed in section \ref{s:theforms}. We also demonstrate how the conditions for the base point independent reduction arise. In section \ref{s:eeL}, we clarify the relation of our ansatz to one based on expanding in eigenforms of the Laplacian. We construct a system of forms satisfying the na\"{\i}ve conditions required for the reduction to go through, and discuss its shortcomings.
\section{Conditions on the expansion forms} \label{s:theforms}
The starting point of the analysis is a reduction manifold $X$ which has $SU(3)$ structure, but is not necessarily Calabi-Yau. Such manifolds exhibit a set of $SU(3)$ invariant forms, a 2-form $J$ and a 3-form $\Omega$. As the nomenclature indicates, these will play a similar role in the reduction as the K\"ahler form and the holomorphic 3-form do in Calabi-Yau reductions. In particular, $J$ determines an $Sp(6,\mathbb{R})$ structure, and $\Omega$ an $SL(3,\mathbb{C})$ structure. As in Calabi-Yau reductions, $J$ and $\Omega$ are to be expanded in the same set of forms as the RR gauge potentials and the B-field,
\begin{eqnarray}
J = v^i \omega_i \, , \qquad \Omega= X^A \alpha_A - G_A \beta^A \,. \label{expinvf}
\end{eqnarray}
Let us recall that $J$ and $\Omega$ are no longer closed, and their failure to be such (i.e. the failure of the structure group to be the holonomy group) is characterized by components of the intrinsic torsion, which fit into SU(3) representations,
\begin{eqnarray}
dJ &=& -\frac{3}{2}\, {\rm Im}(W_1 \bar{\Omega}) + W_4 \wedge J + W_3 \, , \nonumber \\
d\Omega &=& W_1 J^2 + W_2 \wedge J + \bar{W}_5 \wedge \Omega \,.
\end{eqnarray}
It follows that the expansion forms cannot all be closed, and we must choose what conditions to impose on their differentials. The smallest deviation from the Calabi-Yau reduction, while allowing for non-closed $J$ and $\Omega$, is given by the following ansatz.
\begin{enumerate}
\item{We start with a set of 2-forms $\omega_i$.} \label{e2f}
\item{We need a set of dual 4-forms $\tilde{\omega}^i$ such that
\begin{eqnarray}
\int \omega_i \wedge \tilde{\omega}^j &=& \delta_i{}^j \,. \label{24dual}
\end{eqnarray}
For a Calabi-Yau, these exist by Poincar\'e duality. Here, we construct them by requiring the matrix
\begin{eqnarray*}
g_{ij} = \int \omega_i \wedge *\omega_j \,,
\end{eqnarray*}
to be invertible with inverse $g^{ij}$, and defining
\begin{eqnarray}
\tilde{\omega}^i = g^{ij} * \omega_j \,. \label{defining4}
\end{eqnarray}} \label{d4f}
\item{The 3-forms are to come in pairs $\alpha_A, \beta^A$ and should satisfy
\begin{eqnarray}
\int \alpha_A \wedge \beta^B &=& \delta_{A}{}^{B} \,,\nonumber\\
\int \alpha_A \wedge \alpha_B &=& \int \beta^A \wedge \beta^B =0 \,. \label{symplbasis}
\end{eqnarray}
In addition, the Hodge duals of this set of 3-forms should be expressible as linear combinations within the same set,
\begin{eqnarray}
* \alpha_A &=& A_A^B \alpha_B + B_{AB} \beta^B \,, \nonumber \\
*\beta^A &=& C^{AB} \alpha_B - A^A_B \beta^B \,, \label{hodge}
\end{eqnarray}
with constant (i.e. coordinate independent) coefficient matrices $A, B, C$.} \label{p3f}
\item{For the variation of the coefficients of the $\alpha_A$ in the expansion
\begin{eqnarray*}
\Omega= X^A \alpha_A - G_A \beta^A
\end{eqnarray*}
to correspond to variations of the $SL(3,\mathbb{C})$ structure, we must require that the forms
\begin{eqnarray}
\alpha_A - \partial_A G_B \beta^B -\kappa_A \Omega \label{bidegreeconstr}
\end{eqnarray}
be of type (2,1) away from the $X^A=0$ locus. The objects that enter in the definition of these forms are introduced in section \ref{fcs}.} \label{vcs}
\item{The most obvious differential constraints to impose are that the set of 2-, 3-, and 4-forms we expand in are closed under the action of $d$ and $d^\dagger$. This yields \cite{GLMW, AFTV, GLW}
\begin{eqnarray}
d^\dagger \omega_i &=& 0 \label{ddagom} \\
d\omega_i &=& m_i{}^A \alpha_A + e_{i A} \beta^A \label{exp2forms} \\
d \alpha_A = e_{iA} \tilde{\omega}^i &;& d\beta^A = -m_i{}^A \tilde{\omega}^i \label{exp3forms} \\
d\tilde{\omega}^i &=& 0 \,. \label{dc4f}
\end{eqnarray}
Note that under the assumption of closure under the action of $d$, $d^\dagger$, this is the most general set of conditions we can impose (the coefficients in (\ref{exp3forms}) follow from (\ref{24dual}) and (\ref{symplbasis})). For consistency ($d^2 =0$), the coefficient matrices must satisfy the following set of constraints
\begin{eqnarray}
m_i{}^A e_{jA} - e_{iA} m_j{}^A =0 \,. \label{nilpcond}
\end{eqnarray}
Upon performing the reduction with such an ansatz, the matrices $m_i{}^A$ and $e_{iA}$ descend to charge matrices for the hypermultiplets under the vectors. We hence require that they have integer entries.} \label{d23f}
\item{We next need to impose conditions on the forms
\begin{eqnarray*}
A_{iA} = \omega_i \wedge \alpha_A &;& B_i{}^A= \omega_i \wedge \beta^A \,.
\end{eqnarray*}
The need for constraints on these forms is apparent at many points in the reduction. The strongest constraints, from which all others follow, arise from our analysis in section \ref{sec:sg}, and are given by
\begin{eqnarray}
X^A A_{iA} - G_A B_i{}^A &=& 0 \,, \label{omega11}\\
v^i (mA +eB)_{(ij)} &=& 0 \,. \label{con2AB}
\end{eqnarray}
Note in particular that (\ref{omega11}) is just the condition $\omega_i \wedge \Omega =0$, hence implies that the 2-forms $\omega_i$ are of type (1,1), and the 4-forms $\tilde{\omega}^i$ consequently of type (2,2). This condition also implies compatibility of $J$ and $\Omega$,
$J \wedge \Omega = 0$.
} \label{comp}
\setcounter{n}{\theenumi}
\end{enumerate}
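As a quick consistency check (not spelled out above, but immediate from the relations just listed), applying $d$ to (\ref{exp2forms}) and inserting (\ref{exp3forms}) reproduces (\ref{nilpcond}):

```latex
\begin{eqnarray*}
0 \;=\; d^2 \omega_i &=& m_i{}^A \, d\alpha_A + e_{iA} \, d\beta^A \\
&=& \left( m_i{}^A e_{jA} - e_{iA} m_j{}^A \right) \tilde{\omega}^j \,,
\end{eqnarray*}
```

while $d^2 \alpha_A = e_{iA}\, d\tilde{\omega}^i = 0$ and $d^2 \beta^A = -m_i{}^A\, d\tilde{\omega}^i = 0$ hold automatically by (\ref{dc4f}).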
By imposing the conditions \ref{e2f} through \ref{comp} (excluding \ref{vcs}, the need for which will become apparent in the following section), it has been shown \cite{GLMW, AFTV, LM} that the reduction of the terms in the 10d action involving the RR and NSNS gauge potentials yields the expressions familiar from Calabi-Yau reductions, with the derivatives acting on the hyperscalars promoted to gauge covariant derivatives; the charges of the scalars are dictated by the integer entries of the coefficient matrices $e_{iA}$ and $m_i{}^A$. Furthermore, additional terms from these sectors, not present in conventional Calabi-Yau reductions, assemble themselves, together with the terms stemming from the reduction of $R_6$, into the potential of ${\cal N}=2$ gauged supergravity dictated by the charges of the hyperscalars. That the reduction of $R_6$ yields the correct terms has been shown \cite{GLMW, LM} under the assumptions that the components of the intrinsic torsion in the
representations $\bf 3$ and $\bf {\bar 3}$ vanish, i.e. $J\wedge dJ =0$ and $d\Omega^{(3,1)}=0$, hence $W_4=W_5=0$.\footnote{When $W_4=W_5=0$, the
internal Ricci scalar can be written as \cite{BV}
\begin{eqnarray*}
R_6 = \frac{1}{2} (15 W_1 {\bar W_1} - {W_2} \llcorner {\bar W_2} - W_3
\llcorner W_3)\,,
\end{eqnarray*}
where on forms of any degree, $W \wedge * W = (W \llcorner W) {\rm
Vol}_6$. Introducing pure spinors $\Phi_+ = \exp(-iJ)$ and $\Phi_- =
\Omega$, we observe that the structure of $R_6$ is matched by
\begin{eqnarray*}
\frac{1}{2} \left(\langle d \Phi_+ , * d {\bar \Phi}_+ \rangle + \langle d
\Phi_- , * d {\bar \Phi}_- \rangle \right) \,,
\end{eqnarray*}
where we have used the standard definition of the Mukai pairing $ \langle
\cdot, \cdot \rangle$,
see e.g. \cite{GLW}. This contribution to the
potential would nicely combine with that of the NS flux into
\begin{eqnarray*} V_{\rm NS}
= \frac{1}{2}\left( ( dJ+iH) \wedge * (dJ-iH) + d \Omega \wedge *d {\bar
\Omega}\right) \,,
\end{eqnarray*}
which has the mirror-symmetric structure advocated in \cite{GLMW, FMT, GMPT1}.}
These conditions follow from (\ref{con2AB}) and (\ref{omega11}), respectively. Condition \ref{vcs} has not been discussed in the literature previously.
In the following section, we perform the reduction of the metric sector. We will see that the conditions listed above are sufficient for the reduction to work if we assume that the expansion 2- and 3-forms do not vary with the metric moduli (by definition (\ref{defining4}), the 4-forms $\{\tilde{\omega}^i\}$ are moduli dependent even for a fixed choice of 2-forms $\{\omega_i\}$). If we instead allow such a variation (recall that in the Calabi-Yau case, we expand in harmonic forms that hence are moduli dependent), we need to impose further conditions on these variations.
Upon allowing the expansion forms to depend on the moduli, the following three conditions arise if we are to retain the form of the prepotential in the vector multiplet sector, as expressed in terms of the forms $\{\omega_i \}$, and likewise the form of the special geometry part of the quaternionic metric.
\renewcommand{\labelenumi}{$*$\arabic{enumi}.}
\begin{enumerate}
\setcounter{enumi}{\then}
\item{The 2-forms should satisfy the constraint
\begin{eqnarray*}
v^i \frac{\partial}{\partial v^j} \omega_i &=& 0 \,,
\end{eqnarray*}
with the $v^i$ metric moduli as defined in (\ref{expinvf}). We will review why this holds in the Calabi-Yau case in the next section.} \label{headache}
\item{The integral
\begin{eqnarray}
d_{ijk} &=& \int_X \omega_i \wedge \omega_j \wedge \omega_k \label{tripleintersection}
\end{eqnarray}
should be moduli independent. In the Calabi-Yau case, this is guaranteed because the derivative of the harmonic form $\omega_i$ with regard to a metric modulus is exact.\footnote{Note that we are not requiring $d_{ijk}$ to be a topological invariant. E.g., it can depend on geometric data specifying the subset of $SU(3)$ structures encompassed by our parametrization.}} \label{triple}
\item{Analogously, we demand the vanishing of the following integrals,
\begin{eqnarray}
\int \alpha_A \wedge \partial_C \alpha_B =\int \alpha_A \wedge \partial_C \beta^B= \int \beta^A \wedge \partial_C \beta^B=0 \,, \label{inlieuexact}
\end{eqnarray}
where the derivatives are taken with regard to metric moduli that will be introduced in subsection \ref{fms}. Again, the vanishing of these integrals is guaranteed in the Calabi-Yau case by the exactness of the derivatives.} \label{h3f}
\end{enumerate}
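For orientation, the Calabi-Yau argument invoked in condition $*$\ref{triple} can be spelled out (a sketch; $\lambda_{li}$ is an auxiliary 1-form introduced here for illustration, not part of the reduction data). Writing the exact moduli derivative as $\partial_l \omega_i = d\lambda_{li}$,

```latex
\begin{eqnarray*}
\partial_l \, d_{ijk} &=& \int_X d\lambda_{li} \wedge \omega_j \wedge \omega_k \;+\; (\mbox{two analogous terms}) \\
&=& \int_X d\left( \lambda_{li} \wedge \omega_j \wedge \omega_k \right) \;+\; (\mbox{two analogous terms}) \;=\; 0 \,,
\end{eqnarray*}
```

using $d\omega_j = d\omega_k = 0$ and Stokes' theorem. Both ingredients, the exactness of the moduli derivatives and the closure of the $\omega_i$, fail in the general $SU(3)$ structure setting, which is why $*$\ref{triple} and $*$\ref{h3f} must be imposed by hand.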
We have labelled these final three conditions with a $*$, as they are derived under the assumption that we retain the form of the prepotentials after allowing moduli dependence of the expansion forms. Can this assumption be weakened? It is possible that the correct reduction requires adding contributions to the prepotentials which depend on derivatives of the expansion forms and hence vanish in the case that these are constant. Though we have not been able to come up with such an ansatz, we are not claiming a no-go theorem in this direction.
Basing the reduction on constant expansion forms is the analogue of picking a base point in moduli space in the case of Calabi-Yau reductions, and expanding in forms harmonic with regard to the metric at this point. This vantage point makes do with the requirements \ref{e2f} to \ref{comp}. Such an ansatz however does not seem in keeping with the underlying philosophy of the reduction, that it be valid over all of moduli space. Removing the base point dependence necessitates imposing additional conditions on the forms. The most natural choice appears to be conditions $*$\ref{headache} to $*$\ref{h3f}.
\section{Reduction of the metric sector} \label{s:rms}
\subsection{Special geometry} \label{sec:sg}
Vector fields arise from the expansion of the RR 3-form field $C_3$ in the set of 2-forms $\{\omega_i \}$. By ${\cal N}=2$ supersymmetry, these vectors should be accompanied by complex scalars, parametrizing a scalar manifold with a special K\"ahler metric. In analogy to the Calabi-Yau case, these scalars should arise in our compactification scheme from the variation of the $Sp(6,\mathbb{R})$ structure.
Let us briefly review the Calabi-Yau case. We start by specifying a basis $\{\Gamma^i\}$ of $H_2(X,\mathbb{Z})$. Coordinates $v^i$ on the space of K\"ahler classes are then introduced via
\begin{eqnarray*}
v^i &=& \int_{\Gamma^i} J \,,
\end{eqnarray*}
for $J$ an arbitrary representative of the K\"ahler class $[J]$. By Yau's theorem, given a complex structure on $X$ and the K\"ahler class specified by the $v^i$, we can find a Ricci flat metric with associated K\"ahler form $J$ within this K\"ahler class. Hence, $v$ not only specifies a K\"ahler class but also a K\"ahler form, which we will denote by $J(v)$.
A K\"ahler form together with a complex structure on $X$ uniquely determine a metric via
\begin{eqnarray*}
ig_{a \bar{b}} = J_{a\bar{b}} \,.
\end{eqnarray*}
To consider variations of this metric with regard to the coordinates $v^i$, we introduce a basis $\{[\omega_i]\}$ of integral cohomology $H^2(X,\mathbb{Z})$ dual to the basis $\{ \Gamma^i \}$ introduced above, with the $\omega_i(v)$ representatives that are harmonic with regard to the metric determined by $J(v)$. Then,
\begin{eqnarray}
i \frac{\partial g_{a \bar{b}}}{\partial v^i} = \omega_{i\;a\bar{b}} + v^j \frac{\partial}{\partial v^i}\omega_{j\;a\bar{b}} \,. \label{metricvariation}
\end{eqnarray}
By the Lichnerowicz equation, we know that variations of a Ricci flat metric preserve Ricci flatness if and only if the associated 2-form
\begin{eqnarray*}
\frac{\partial g_{a \bar{b}}}{\partial v^i} dz^a \wedge d\bar{z}^{\bar{b}}
\end{eqnarray*}
is harmonic. Of the forms appearing on the RHS of (\ref{metricvariation}), $\omega_i$ is harmonic by definition, while $\partial_i \omega_j$ is exact, as the cohomology class $[\omega_j(v)]$ is constant. The combination $v^j \partial_i \omega_j$ is thus exact, and, being the difference of two harmonic forms, also harmonic; a form that is both exact and harmonic vanishes, hence we can conclude
\begin{eqnarray*}
v^j \frac{\partial}{\partial v^i}\omega_{j\;a\bar{b}} = 0\,.
\end{eqnarray*}
At the current level of understanding of the $SU(3)$ structure case, we must skip several of the steps above, and take as our starting point an $SU(3)$ invariant 2-form $J$ together with ad hoc coordinates $v^i$ on the appropriate subspace of $Sp(6,\mathbb{R})$ structures such that $J= v^i \omega_i(v)$.
Using the $SU(3)$ invariant form $\Omega$ to introduce, patchwise, a basis of $T^*X$ of definite type, we can then define a hermitian metric on $X$ in terms of $J$ as
\begin{eqnarray*}
ig_{a \bar{b}} = J_{a\bar{b}}
\end{eqnarray*}
and consider its variation with regard to $v^i$,
\begin{eqnarray*}
i \frac{\partial g_{a \bar{b}}}{\partial v^i} = \omega_{i\;a\bar{b}} + v^j \frac{\partial}{\partial v^i}\omega_{j\;a\bar{b}} \,.
\end{eqnarray*}
With this relation, KK reduction of the Ricci scalar $R$ yields the following metric for the $\sigma$-model describing the almost symplectic sector,
\begin{eqnarray}
{\cal V} G_{i j} (v)&\sim& (\delta_i{}^k + v^k \frac{\partial}{\partial \tilde{v}^i} ) |_{\tilde{v} = v}\, (\delta_j{}^l + v^l \frac{\partial}{\partial v'^j} ) |_{v' = v} \int_X \omega_k(\tilde{v}) \wedge * \omega_l(v') \,, \label{symplmetric}
\end{eqnarray}
where we have introduced ${\cal V} = \int J \wedge J \wedge J$. The Hodge star is taken with regard to the metric $g_{a \bar{b}}(v)$. It can be traced back to the contractions required to obtain the Ricci scalar from the Riemann tensor, as in the Calabi-Yau case \cite{BCF}.
For Calabi-Yau reductions, a crucial ingredient in obtaining special geometry from the reduction of the symplectic sector is the complexification of the $v^i$ by the scalars $b^i$ descending from the expansion of the NSNS $B$-field, $B= b^i \omega_i + \ldots$, to $t^i = b^i + i v^i$. The kinetic term for these scalars arises from the reduction of $\int_X H \wedge *H$, hence has $\sigma$-model metric
\begin{eqnarray}
G^B_{i j} (v) &\sim& \frac{1}{{\cal V}} \int_X \omega_i(v) \wedge * \omega_j(v) \,. \label{gb}
\end{eqnarray}
Clearly, $G^B$ must coincide with the metric in (\ref{symplmetric}) for this complexification to take place.\footnote{Under our general assumption that the functional form of the prepotential is not modified upon admitting moduli dependence of the expansion forms, we can argue that the complexification $t^i = b^i + i v^i$ must take place in precisely this form by considering the gauge sector.} The derivative terms in (\ref{symplmetric}) must hence vanish. By considering the diagonal contribution
\begin{eqnarray*}
v^k \frac{\partial}{\partial \tilde{v}^i} |_{\tilde{v} = v}\, v^l \frac{\partial}{\partial v'^i} |_{v' = v} \int_X \omega_k(\tilde{v}) \wedge * \omega_l(v') &=& || v^k \frac{\partial}{\partial v^i} \omega_k(v) ||^2 \,,
\end{eqnarray*}
we recognize that short of miraculous cancellations between various integrals, we must require $v^k \frac{\partial}{\partial v^i} \omega_k(v) = 0$. This is our condition $*$\ref{headache}, and with it, (\ref{symplmetric}) reduces to (\ref{gb}), and we can henceforth drop the ${}^B$ in referring to this metric.
The expression for $G$ can be considerably simplified to reveal the special geometry underlying it, {\it provided we assume the expansion forms $\omega_i$ are of type (1,1)}. This is where the need for condition (\ref{omega11}) arises. Let us begin by reexpressing $*\omega_i$. Given an almost complex structure on $X$ with regard to which $\omega_i$ is of type $(1,1)$, we consider a patch and introduce local complex coordinates $z^{\alpha}$, inducing a basis of definite type for the cotangent space. Furthermore, we can choose this basis so that {\it at a point $P_0$}, the $SU(3)$ invariant 2-form $J = \frac{i}{2}\sum dz^{\alpha} \wedge d\bar{z}^{\bar{\alpha}}$. A purely algebraic calculation now yields \cite{Strominger}, at $P_0$,
\begin{eqnarray}
* \omega_i &=& \frac{1}{2} ( \omega_i \llcorner J) J \wedge J - \omega_i \wedge J \,. \label{staromega}
\end{eqnarray}
This equality extends to the whole patch, as it is formulated intrinsically (without reference to the point $P_0$). To extend it over all of $X$, we need $J$ to be a globally defined, nowhere vanishing $(1,1)$ form which at any given point can be brought to the diagonal form above. $J$ of course enjoys these properties courtesy of the $SU(3)$ structure we take as our starting point.
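For completeness, (\ref{staromega}) can be obtained from the decomposition of the $(1,1)$ form $\omega_i$ into trace and primitive parts (a standard computation, in the normalization in which $*J = \frac{1}{2} J \wedge J$):

```latex
\begin{eqnarray*}
\omega_i &=& \frac{1}{3} \left( \omega_i \llcorner J \right) J + \omega_i^0 \,, \qquad \omega_i^0 \llcorner J = 0 \,, \\
* \omega_i &=& \frac{1}{6} \left( \omega_i \llcorner J \right) J \wedge J - \omega_i^0 \wedge J \\
&=& \frac{1}{2} \left( \omega_i \llcorner J \right) J \wedge J - \omega_i \wedge J \,,
\end{eqnarray*}
```

where we used $*J = \frac{1}{2} J \wedge J$ and $*\omega_i^0 = - \omega_i^0 \wedge J$ for a primitive $(1,1)$ form, and in the last line substituted $\omega_i^0 = \omega_i - \frac{1}{3}(\omega_i \llcorner J) J$.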
Next, we want to reexpress the contraction $\omega_i \llcorner J$. Consider
\begin{eqnarray*}
\frac{1}{2} \int_X ( \omega_i \llcorner J) J \wedge J \wedge J &=& \int_X * \omega_i \wedge J + \int_X \omega_i \wedge J \wedge J \\
&=& \int_X \omega_i \wedge *J + \int_X \omega_i \wedge J \wedge J \\
&=& \frac{3}{2}\int_X \omega_i \wedge J \wedge J \,.
\end{eqnarray*}
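Two standard facts enter in the second and third steps of this chain: the symmetry of the Hodge pairing, and once more $*J = \frac{1}{2} J \wedge J$,

```latex
\begin{eqnarray*}
\int_X * \omega_i \wedge J \;=\; \int_X J \wedge * \omega_i \;=\; \int_X \omega_i \wedge * J
\;=\; \frac{1}{2} \int_X \omega_i \wedge J \wedge J \,.
\end{eqnarray*}
```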
To pull $\omega_i \llcorner J$ out from underneath the integral, we need $d(\omega_i \llcorner J)=0$. But this is a consequence of (\ref{dc4f}), {\it assuming that $d(\omega_i \wedge J)=0$},
\begin{eqnarray*}
0 &=& d \tilde{\omega}^i \\
&=& g^{ij} d * \omega_j \\
&=& g^{ij} d (\frac{1}{2}( \omega_j \llcorner J) J \wedge J - \omega_j \wedge J) \\
&=& \frac{1}{2}g^{ij} d( \omega_j \llcorner J) J \wedge J \,.
\end{eqnarray*}
To ensure this relation, we have imposed condition (\ref{con2AB}). With this, we obtain the same expression for the contraction as in the Calabi-Yau case \cite{Strominger},
\begin{eqnarray}
\omega_i \llcorner J&=& 3 \frac{\int_X \omega_i \wedge J \wedge J}{\int_X J \wedge J \wedge J} \,. \label{contraction}
\end{eqnarray}
By plugging all this back into the expression (\ref{symplmetric}) for $G$, we see that the dependence on $\omega_i(v)$ arises in the form
\begin{eqnarray*}
d_{ijk}(v) &=& \int_X \omega_i(v) \wedge \omega_j(v) \wedge \omega_k (v) \,.
\end{eqnarray*}
To relate the metric $G$ to the K\"ahler potential $K \sim \log \int J \wedge J \wedge J$, we must require that $d_{ijk}$ be independent of $v$. This is condition $*$\ref{triple}. Reexpressing the $v^i$ in terms of the complex coordinates $t^i$, we then obtain $G$ as
\begin{eqnarray*}
G_{i \bar{\jmath}} &\sim& \partial_i \partial_{\bar{\jmath}} K \,.
\end{eqnarray*}
Special geometry now follows exactly as in the Calabi-Yau case.
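Concretely (a sketch, assuming the standard normalizations of special geometry, which the discussion above fixes only up to proportionality), the vector multiplet sector is then governed by the same cubic prepotential as in Calabi-Yau reductions,

```latex
\begin{eqnarray*}
F(X) &=& -\frac{1}{6} \, d_{ijk} \, \frac{X^i X^j X^k}{X^0} \,, \qquad t^i \;=\; \frac{X^i}{X^0} \;=\; b^i + i v^i \,, \\
e^{-K} &=& \frac{4}{3} \, d_{ijk} \, v^i v^j v^k \;\sim\; \int_X J \wedge J \wedge J \,,
\end{eqnarray*}
```

so that $G_{i \bar{\jmath}} \sim \partial_i \partial_{\bar{\jmath}} K$ is special K\"ahler precisely when $d_{ijk}$ is constant.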
\subsection{Quaternionic geometry} \label{fcs}
A set of 4d scalars arises when expanding the RR 3-form $C_3$ in the set of 3-forms $\{ \alpha_A, \beta^A \}$. In analogy with the Calabi-Yau case, these are to be augmented by scalars stemming from the variation of the $SL(3,\mathbb{C})$ structure. Together, these scalars are to parametrize a quaternionic manifold. We consider the metric and the RR scalars in turn.
\subsubsection{The metric scalars} \label{fms}
Let us first determine the relation between the variation of the $SU(3)$ invariant form $\Omega$ and the metric.
To this end, let $p$ be an element of the reduced $SU(3)$ frame bundle, and $\{e_a\}$ the standard holomorphic basis of $\mathbb{C}^3$. Then
\begin{eqnarray*}
\Omega(p(e_a), p(e_b), p(e_c)) = \Omega_{abc}
\end{eqnarray*}
is the invariant tensor. Now consider the infinitesimal deformation $\tilde{\Omega} = \Omega + \delta \Omega$, and let $\tilde{p}$ denote an element of the frame bundle defined by $\tilde{\Omega}$, with $\tilde{p}(e_a) = p(e_a) + \delta p_a^{\,\,b} p(e_b)$. Then
\begin{eqnarray*}
0 &=& (\Omega + \delta \Omega)(\tilde{p}(e_a), \tilde{p}(e_b), \tilde{p}(e_{\bar{c}})) \\
&=& \Omega_{abd} \delta p_{\bar{c}}^{\,\,d} + \delta \Omega_{ab\bar{c}} \,.
\end{eqnarray*}
Hence,
\begin{eqnarray}
\delta p_{\bar{c}}^{\,\,d} = - \frac{1}{2||\Omega||^2} \bar{\Omega}^{abd}(\delta \Omega)_{ab \bar{c}} \,, \label{deltap}
\end{eqnarray}
with $||\Omega||^2 := \frac{1}{3!} \bar{\Omega}^{abc} \Omega_{abc}$ and where we have used $\bar{\Omega}^{abc} \Omega_{abd} = \frac{1}{3} \delta^{c}{}_d \bar{\Omega}^{abe} \Omega_{abe}$.
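The contraction identity just quoted follows from the one-dimensionality of the space of $(3,0)$ forms: locally, $\Omega_{abc} = f\, \epsilon_{abc}$ for some function $f$, so that

```latex
\begin{eqnarray*}
\bar{\Omega}^{abc} \Omega_{abd} \;=\; |f|^2 \, \bar{\epsilon}^{abc} \epsilon_{abd}
\;=\; 2\, |f|^2 \, \delta^{c}{}_d
\;=\; \frac{1}{3} \, \delta^{c}{}_d \, \bar{\Omega}^{abe} \Omega_{abe} \,,
\end{eqnarray*}
```

since $\bar{\epsilon}^{abe} \epsilon_{abe} = 3! $ (indices are raised with the hermitian metric; the metric dependent factors cancel in the ratio).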
The metric $\tilde{g}$ defined by the new structure satisfies
\begin{eqnarray*}
0 &=& \tilde{g} (\tilde{p}_{\bar{a}},\tilde{p}_{\bar{b}}) \\
&=& \delta g(p_{\bar{a}},p_{\bar{b}}) + g(p_{\bar{a}},p_c) \delta p_{\bar{b}}^{\,\,c} + g(p_c,p_{\bar{b}}) \delta p_{\bar{a}}^{\,\,c} \,.
\end{eqnarray*}
We thus arrive at
\begin{eqnarray}
\delta g_{\bar{a} \bar{b}} &=& - g_{\bar{a} c} \delta p_{\bar{b}}^{\,\,c} - g_{c \bar{b}} \delta p_{\bar{a}}^{\,\,c} \nonumber\\
&=& \frac{1}{2||\Omega||^2} (\bar{\Omega}^{cd}{}_{\bar{a}}(\delta \Omega)_{cd \bar{b}}+\bar{\Omega}^{cd}{}_{\bar{b}}(\delta \Omega)_{cd \bar{a}}) \nonumber \\
&=& \frac{1}{||\Omega||^2} \bar{\Omega}^{cd}{}_{\bar{a}}(\delta \Omega)_{cd \bar{b}} \hspace{1cm}\mbox{for $\delta \Omega$ primitive} \,. \label{cvm}
\end{eqnarray}
Now assume that we have parametrized the variation of the $SL(3,\mathbb{C})$ structure in terms of parameters $z^\alpha$. Below, we will use the expansion forms $\alpha_A, \beta^A$ to define such a parametrization. Given such $z^\alpha$, we introduce 3-forms $\chi_{\alpha}$ of type (2,1) as the (2,1) part of the following derivatives,
\begin{eqnarray}
\chi_\alpha :=\left[\frac{\partial}{\partial z^\alpha} \Omega \right]_{(2,1)} \,. \label{chi}
\end{eqnarray}
Note that $\chi_\alpha \neq 0$ by the assumption that the $z^\alpha$ parametrize the $SL(3,\mathbb{C})$ structure: two complex 3-forms that are each of type (3,0) with regard to the $SL(3,\mathbb{C})$ structure defined by the respective other form define the same $SL(3,\mathbb{C})$ structure. By the compatibility condition $J \wedge \Omega =0$, the $\chi_\alpha$ are primitive.
In terms of these definitions, (\ref{cvm}) becomes
\begin{eqnarray*}
\frac{\partial}{\partial z^\alpha} g_{\bar{a} \bar{b}} &=& \frac{1}{||\Omega||^2} \bar{\Omega}^{cd}{}_{\bar{a}}(\chi_\alpha)_{cd \bar{b}} \,.
\end{eqnarray*}
Reduction of the Einstein term with this ansatz yields \cite{BCF}
\begin{eqnarray}
G_{\alpha \bar{\beta}} \sim \frac{1}{||\Omega||^2} \int_X \chi_\alpha \wedge \bar{\chi}_{\bar{\beta}} \label{ckkm}
\end{eqnarray}
for the $\sigma$-model metric of the almost complex sector. We would like to obtain this metric, as in the Calabi-Yau case, from a K\"ahler form
\begin{eqnarray*}
K \sim \log \int_X \Omega \wedge \bar{\Omega} \,.
\end{eqnarray*}
The key equality for $G_{\alpha \bar{\beta}} \sim \partial_\alpha \partial_{\bar{\beta}}K$ to hold is the relation
\begin{eqnarray}
\frac{\partial}{\partial z^\alpha} \Omega &=& \kappa_\alpha \Omega + \chi_\alpha \label{defkappa}
\end{eqnarray}
with $\kappa_\alpha$ {\it constant}. By definition of $\chi_\alpha$, $\tilde{\Omega}_\alpha=\frac{\partial}{\partial z^\alpha} \Omega - \chi_\alpha$ is a (3,0) form. The quotient $\kappa_\alpha = \frac{\tilde{\Omega}_\alpha}{\Omega}$ is hence well-defined. For a Calabi-Yau, $\kappa_\alpha$ must be a holomorphic function by the holomorphicity of $\Omega$ and the coordinate independence of the parameters $z^\alpha$. As a holomorphic function on a compact manifold, it must be a constant. In our more general setup, we derive this requirement from the condition that the matrices $A$, $B$, $C$ in (\ref{hodge}) be constant. In the next subsection, we will derive expressions for these constants in (\ref{starof3fcoeffs}) that depend on $\kappa_\alpha$, and conclude that $d \kappa_\alpha \neq 0$ is not compatible with $dA = dB = dC =0$.
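To make the role of (\ref{defkappa}) explicit, consider the following short computation (a sketch, with the conventional normalization $K = -\log i \int_X \Omega \wedge \bar{\Omega}$; the overall signs depend on this choice). Since $\int_X \chi_\alpha \wedge \bar{\Omega} = 0$ by type,

```latex
\begin{eqnarray*}
\partial_\alpha K &=& - \frac{\int_X \partial_\alpha \Omega \wedge \bar{\Omega}}{\int_X \Omega \wedge \bar{\Omega}} \;=\; - \kappa_\alpha \,, \\
\partial_\alpha \partial_{\bar{\beta}} K &=& - \frac{\int_X \chi_\alpha \wedge \bar{\chi}_{\bar{\beta}}}{\int_X \Omega \wedge \bar{\Omega}} \,,
\end{eqnarray*}
```

where in the second line the $\kappa_\alpha \bar{\kappa}_{\bar{\beta}}$ cross terms cancel, but only because the constant $\kappa_\alpha$ can be pulled out of the integrals. This reproduces (\ref{ckkm}) up to the normalization $i \int_X \Omega \wedge \bar{\Omega} \sim ||\Omega||^2$.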
Note that up to this point, the expansion of the $SU(3)$ invariant form $\Omega$ in the set $\{\alpha_A, \beta^A \}$, $\Omega = X^A \alpha_A - G_A \beta^A$, has not entered. We will need it to introduce a parametrization of $SL(3,\mathbb{C})$ structures, and argue for the metric $G_{\alpha \bar{\beta}}$ being special K\"ahler. As a first step, we want to demonstrate that the $G_A$ can be expressed as a function of the $X^A$. To this end, consider
\begin{eqnarray*}
0 &=& \int \Omega \wedge \partial_A \Omega \\
&=& \int \Omega \wedge ( \alpha_A + X^B \partial_A \alpha_B - \partial_A G_B \beta^B - G_B \partial_A \beta^B) \\
&=& G_A - X^B \partial_A G_B + X^B X^C \int \alpha_B \wedge \partial_A \alpha_C + G_B G_C \int \beta^B \wedge \partial_A \beta^C \,.
\end{eqnarray*}
In the Calabi-Yau case, the two integrals in the final line vanish because the derivatives $\partial_A \alpha_C$, $\partial_A \beta^C$ are exact (varying the complex structure does not change the cohomology classes $[\alpha_A (X)], [\beta_A(X)]$). In the current setup, we impose the vanishing of these integrals as condition \ref{h3f} on the expansion forms. The system of partial differential equations for determining $G_A$ in terms of the $X^A$, with this condition, is linear,
\begin{eqnarray}
G_A &=& X^B \partial_A G_B \,. \label{dhomd1}
\end{eqnarray}
Introducing the function $G = \frac{1}{2} G_A X^A$, such that
\begin{eqnarray*}
\partial_A G &=& G_A \,,
\end{eqnarray*}
we see that (\ref{dhomd1}) can be rewritten as
\begin{eqnarray*}
G_A &=& X^B \partial_B G_A \,.
\end{eqnarray*}
The content of (\ref{dhomd1}) is hence that the $G_A$ are homogeneous functions of degree 1. As we have seen, they can be obtained as partial derivatives of the homogeneous function of degree 2 given by $G$ as defined above.
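In more detail (a small completeness step, using only the definitions above): by (\ref{dhomd1}),
\begin{eqnarray*}
\partial_A G &=& \tfrac{1}{2}\, \partial_A \left( G_B X^B \right) \;=\; \tfrac{1}{2}\left( X^B \partial_A G_B + G_A \right) \;=\; G_A \,,
\end{eqnarray*}
so that $\partial_A G_B = \partial_A \partial_B G = \partial_B \partial_A G = \partial_B G_A$, and (\ref{dhomd1}) becomes the Euler identity $G_A = X^B \partial_B G_A$ for homogeneity of degree 1. Contracting with $X^A$ then gives $X^A \partial_A G = X^A G_A = 2G$, the Euler identity for $G$ being homogeneous of degree 2.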
Further, note that the RHS of (\ref{deltap}) is invariant under rescaling of $\Omega$. We can use this invariance to eliminate one of the variables, e.g. by setting $X^0=1$ away from the $X^0=0$ locus (a variation $\delta X^0=\delta$ of $\Omega$ is then implemented by the variation $\delta X^A = -\delta $, $\forall A \neq 0$). We can now introduce variables $z^\alpha$ parametrizing the variation of $\Omega$ explicitly via $z^\alpha = X^\alpha$ for $\alpha\neq 0$. As mentioned above, for these variables to parametrize variations of the $SL(3,\mathbb{C})$ structure, $\chi_\alpha$ introduced in (\ref{chi}) above must be non-zero. For a Calabi-Yau, this follows because the $\frac{1}{2}b^3 =b^{2,1}+1$ forms $\partial_A \Omega$ are linearly independent, hence span $H^{3,0} \oplus H^{2,1}$. For our more general case, we have imposed this as condition \ref{vcs}. In fact, condition \ref{vcs} is slightly stronger, and the need for it will arise in the next section.
Given this, the metric $G_{\alpha \bar{\beta}}$ in fact proves to be special K\"ahler, as in the Calabi-Yau case, with prepotential the function $G$ introduced above.
\subsubsection{The RR scalars}
In the reduction of the RR-sector, we must evaluate integrals of the form
\begin{eqnarray*}
\int \alpha_A \wedge * \alpha_B \,, \int \alpha_A \wedge *\beta^B\,, \int \beta^A \wedge * \beta^B\,.
\end{eqnarray*}
This is where the coefficients $A_A^B$, $B_{AB}$, $C^{AB}$ introduced in (\ref{hodge}) come into play. Following \cite{Suzuki}, we can derive expressions for these coefficients by using the two relations
\begin{eqnarray*}
*\Omega = -i \bar{\Omega} \,, \hspace{1cm} *\chi_\alpha = i \bar{\chi}_\alpha
\end{eqnarray*}
(we are using conventions in which the scalar product $(\phi, \psi) = \int \phi \wedge * \psi$ is sesquilinear).\footnote{These are the conventions used e.g. in \cite{GH}. They are different from those appearing in discussions of $G$-structures, where typically one introduces a linear, rather than a conjugate linear, Hodge star operator. Under the conventions used here, no representation of $SU(3)$ is (anti) self-dual.} This first relation holds since $\Omega$ is a (3,0) form, and the second since $\chi_\alpha$ is of type (2,1) and primitive. These relations of course hold pointwise and do not require integrability of the almost complex structure. To determine the coefficients, it is convenient to undo the gauge choice $X^0=1$ and introduce the forms
\begin{eqnarray*}
\tilde{\phi}_A &=& \frac{\partial}{\partial X^A} \Omega \\
&=& \alpha_A - \partial_A G_B \beta^B + X^B \partial_A \alpha_B - G_B \partial_A \beta^B \,.
\end{eqnarray*}
For $A\neq0$, $\tilde{\phi}_A - \kappa_A \Omega$ is of type (2,1) with the coefficients $\kappa_A$ introduced in (\ref{defkappa}), and we define $\kappa_0$ to extend this property to all indices $A$. In the Calabi-Yau case, this (2,1) form is a sum of harmonic forms ($\alpha_A$ and $\beta^A$), and exact forms ($\partial_A \alpha_B$ and $\partial_A \beta^B$). By the commutation of the projector $\Pi^{p,q}$ on forms of definite bidegree $(p,q)$ and the projector on harmonic forms ${\cal H}$, we can drop the exact terms, obtaining $\phi_A - \kappa_A \Omega$ with
\begin{eqnarray*}
\phi_A = \alpha_A - \partial_A G_B \,\beta^B \,,
\end{eqnarray*}
while maintaining the bidegree of the form. This proves crucial in deriving the precise form of the matrices $A,B,C$ needed by ${\cal N}=2$ supersymmetry. This is why, in our more general setup, we choose to require $\phi_A - \kappa_A \Omega$ being (2,1) as condition \ref{vcs} on our expansion forms. Again by condition \ref{comp}, this (2,1) form is also primitive. Given this, $\phi_A$ satisfies the property
\begin{eqnarray}
*\phi_A = i \bar{\phi}_A - 2i \bar{\kappa}_A \bar{\Omega} \,. \label{keytohodge}
\end{eqnarray}
We can plug in the expansion (\ref{hodge}) and compute the coefficients in terms of $\kappa_A$, obtaining
\begin{eqnarray}
C^{AB} &=& -({\rm Im \,} G)^{-1\, AC} (\delta^B_C - \kappa_C X^B - \bar{\kappa}_C \bar{X}^B) \,, \nonumber\\
A_A^B &=& C^{BC} ({\rm Re \,} G)_{CA} + i(\kappa_A X^B - \bar{\kappa}_A \bar{X}^B) \,, \label{starof3fcoeffs}\\
B_{AB} &=& A^C_B ({\rm Re \,} G)_{CA} - {\rm Im \,} G_{AB} -i( \kappa_A G_B - \bar{\kappa}_A \bar{G}_B ) \,. \nonumber
\end{eqnarray}
As promised, for these coefficients to be constant, we must require constancy of the $\kappa_A$. Under this condition,
\begin{eqnarray*}
\kappa_A &=& \frac{\int \phi_A \wedge \bar{\Omega}}{\int \Omega \wedge \bar{\Omega}}\\
&=& \frac{{\rm Im \,}(G_{AB}) \bar{X}^B}{X^C {\rm Im \,}(G_{CD}) \bar{X}^D} \,.
\end{eqnarray*}
Substituting this relation into the above expressions for the coefficients yields the conventional result \cite{Suzuki}, and the two set of scalars assemble themselves to parametrize the quaternionic hypermultiplet scalar manifold.
\section{Expanding in eigenforms of the Laplacian} \label{s:eeL}
Our approach to this point has been to impose those conditions on our expansion forms which seem to be required for the reduction of type IIA to yield ${\cal N}=2$ gauged supergravity -- again, we use the non-committal `seem to be required', as our approach, as we have emphasized throughout, mimics the Calabi-Yau case closely; what lies in wait when we dare to distance ourselves further from this safe haven remains to be explored. A more ambitious program would have been to justify the forms to expand in {\it ab initio}. Though concrete proposals in this direction are lacking, one natural thought is that massive eigenforms of the Laplacian should play a role in the expansion \cite{GLMW, CKT}. In the following subsection, we study the relation between our ansatz in section \ref{s:theforms} and an expansion in eigenforms of the Laplacian. In the subsequent subsection, we study how far a na\"ive approach to constructing a system satisfying the conditions of section \ref{s:theforms} based on such eigenforms takes us.
\subsection{Our conditions and the Laplacian}
On a compact manifold, the Laplacian on forms has properties close to those of a self-adjoint operator on a finite dimensional vector space. In particular (see e.g. \cite{chavel}, theorem B2),
\vspace{0.5cm}
\begin{theorem}
The completion $L^2 A^p(M)$ of $A^p(M)$ with respect to the $L^2$ norm has an orthonormal basis $\phi_{1,p}, \phi_{2,p}, \ldots$ consisting of eigenforms of $\triangle_p$. One can order the eigenforms so that the corresponding eigenvalues $\lambda_{k,p}$ satisfy
\begin{eqnarray*}
0 \le \lambda_{1,p} \le \lambda_{2,p} \le \ldots \rightarrow \infty \,,
\end{eqnarray*}
in particular, the multiplicities are finite.
\end{theorem}
\vspace{0.5cm}
The conditions we impose in section \ref{s:theforms} imply that our system of 2, 3, and 4-forms is closed under $d$, $d^\dagger$, and $*$. Together with the above theorem, this implies that our considerations take place within a finite number of eigenspaces of $\triangle_2$, $\triangle_3$, and $\triangle_4$. We hence have a finite basis available within which to expand our forms.
\subsection{A first attempt at constructing a set of expansion forms}
With the observation of the previous subsection, one can imagine setting out to construct a set of forms with the properties listed in section \ref{s:theforms}. We will proceed na\"{\i}vely in this subsection and obtain a set of forms that satisfy the conditions \ref{e2f} through \ref{p3f} and \ref{d23f}. One could imagine imposing condition \ref{comp} (compatibility), but condition \ref{vcs} is explicitly violated, and the reduction hence fails to yield gauged ${\cal N}=2$ supergravity. This subsection is intended both to clarify some of the considerations in the previous sections in a more concrete setting, and to demonstrate the necessity of condition \ref{vcs}, which has not appeared in the literature previously.
We begin with a set of linearly independent 2-forms that are massive eigenforms of the Laplacian (rather than linear combinations of such) and coclosed (\cite{CKT} considers the following setup up to the proper normalization),
\begin{eqnarray*}
\triangle_2 \omega_i = m_i^2 \omega_i \,\,,\,\,\,\, d^\dagger \omega_i =0 \,.
\end{eqnarray*}
With regard to the natural scalar product $(\phi,\chi) = \int \phi \wedge * \chi$, forms from different eigenspaces are orthogonal. On degenerate eigenspaces, an orthogonal basis can be introduced. Hence assume that the 2-forms $\omega_i$ form an orthogonal set. This restricts us to the metric $G_{i \bar{j}} \sim \delta_{i \bar{j}}$. We choose the normalization
\begin{eqnarray*}
|| \omega_i || &=& \frac{1}{m_i} \,.
\end{eqnarray*}
We introduce 4-forms according to our definition in section 2. With the normalization chosen,
\begin{eqnarray*}
\tilde{\omega}^i &=&\frac{*\omega_i}{||\omega_i||^2} \\
&=& m_i^2 * \omega_i \,.
\end{eqnarray*}
We define a set of 3-forms via
\begin{eqnarray}
d\omega_i &=& \alpha_i \,\,\,,\,\,\,\,\, \beta_i = *\alpha_i. \label{min ans}
\end{eqnarray}
We refer to a system of 2-, 3-, and 4-forms that satisfy the above relations as a minimal system (minimality referring to the choice of matrices $e, m, A, B, C$ relating the various forms and their Hodge duals).
Note that the 3-forms have eigenvalue $m_i^2$ with regard to $\triangle_3$.
Trivially, $\int \alpha_i \wedge \alpha_j = \int \beta_i \wedge \beta_j = 0$, and due to our choice of normalization of the 2-forms,
\begin{eqnarray*}
\int \alpha_i \wedge \beta_j &=& \int d\omega_i \wedge *d\omega_j \\
&=& \int \triangle_2 \omega_i \wedge * \omega_j \\
&=& \delta_{ij} \,.
\end{eqnarray*}
Finally, our choice of normalization of the 2-forms also guarantees the integrality of the differential of the 3-forms expanded in our set of 4-forms,
\begin{eqnarray*}
d \beta_i &=& d * \alpha_i \\
&=& d * d \omega_i \\
&=& m_i^2 * \omega_i \\
&=& \tilde{\omega}^i \,.
\end{eqnarray*}
This na\"{\i}ve construction hence meets the requirements \ref{e2f} through \ref{p3f} and \ref{d23f} of section \ref{s:theforms}. Condition \ref{vcs}, however, is violated. As we will now argue, this is because fixing the $*$ of the 3-forms is the moral analogue of compactifying on a Calabi-Yau with rigid complex structure. To see this, consider again
\begin{eqnarray*}
*\Omega = -i \bar{\Omega} \,.
\end{eqnarray*}
In section \ref{s:rms}, we use this condition to determine the matrices $A,B,C$. Since these matrices are fixed in the minimal setup of this section, this condition instead allows us to solve for $G_i$, and yields
\begin{eqnarray*}
G_i&=& iX^i \,,
\end{eqnarray*}
such that $\Omega = X^i (\alpha_i - i \!*\!\!\alpha_i)$. The variation of $\Omega$ with regard to $X^i$ now clearly does not contain a $(2,1)$ piece, hence does not correspond to a variation of $SL(3,\mathbb{C})$ structure. Without this condition, the reduction fails to assemble itself into a quaternionic sector.
\subsection{The scope of minimality}
We witnessed in the previous subsection that the minimal system fails to satisfy the complete set of constraints required to yield the desired reduction. In the form (\ref{min ans}), the minimal system is easy to identify. However, all ans\"atze related to (\ref{min ans}) via a symplectomorphism are equivalent and will equally fail. In view of this, we consider in this final subsection what conditions the matrices $m,e, A,B,C$ must satisfy for our system of forms to not be equivalent to the minimal system.
To transform a given system with
\begin{eqnarray}
d\omega_i &=& m_{ij} \alpha_j + e_{i j} \beta_j \label{nonmin}
\end{eqnarray}
to a minimal one, we need to find a symplectomorphism
\begin{eqnarray*}
\left(\begin{array}{c}\alpha' \\\beta'\end{array}\right) &=&
{\cal M} \left(\begin{array}{c}\alpha \\\beta\end{array}\right)
\end{eqnarray*}
such that
\begin{eqnarray}
({\cal N} \alpha')_i &=& m_{ij} \alpha_j + e_{i j} \beta_j \,, \nonumber\\
\beta'_i &=& * \alpha'_i \,, \label{condonM}
\end{eqnarray}
with ${\cal N}$ a real invertible matrix. We can then introduce a new set of two forms
\begin{eqnarray*}
\omega'_i &=& ({\cal N}^{-1} \omega)_i \,,
\end{eqnarray*}
thus reexpressing (\ref{nonmin}) in minimal form
\begin{eqnarray*}
d \omega_i' = \alpha'_i \,\,\,\,,\,\,\,\,\,\, \beta'_i = *\alpha'_i \,.
\end{eqnarray*}
When does such an ${\cal M}$ exist? By (\ref{condonM}),
\begin{eqnarray*}
{\cal M}&=& \left(\begin{array}{cc}{\cal N}^{-1} m & {\cal N}^{-1} e \\{\cal N}^{-1} (mA +eC) & {\cal N}^{-1} (mB - eA)\end{array}\right) \,,
\end{eqnarray*}
yielding the conditions
\begin{eqnarray*} \small
{\cal N}^{-1} \left(\begin{array}{cc} me^T -em^T & m B m^T - e C e^T - m A e^T - e A m^T \\ -\left[ m B m^T - e C e^T - m A e^T - e A m^T \right] & (mA + eC)(mB-eA)^T - transp. \end{array}\right)({\cal N}^{-1})^{T}\\
=\left(\begin{array}{cc}0 & 1 \\-1 & 0\end{array}\right) \,,
\end{eqnarray*}
where we have written ${\cal N}$ for ${\cal N} \otimes \tiny\left(\begin{array}{cc}1 & 0 \\0 & 1\end{array}\right)$ for notational simplicity.
The first condition (position (1,1) in the matrix) is just (\ref{nilpcond}), required by $d^2=0$. The condition
\begin{eqnarray}
-m B m^T + e C e^T + m A e^T + e A m^T = {\cal N} {\cal N}^T \label{c12}
\end{eqnarray}
can be used to determine ${\cal N}$. A solution exists, since the matrix on the LHS is symmetric, but for ${\cal N}$ to be real, the eigenvalues of this matrix must all be positive. Finally, we obtain the condition that the matrix
\begin{eqnarray}
(mB-eA) (mA + eC)^T \label{c22}
\end{eqnarray}
be symmetric.
To recapitulate, if the matrices $e,m,A,B,C$ are such that these two conditions are satisfied, our system of expansion forms (\ref{nonmin}) is equivalent to a minimal system and hence not suitable as a starting point for the reduction. Note finally that particularly in this final section, we have been treating the matrices $A,B,C$ as an input. Hopefully, a deeper understanding of the type of $SU(3)$ reduction discussed in this paper will have an intrinsic definition of the $X^A$ and $G_A$ of equation (\ref{expinvf}) as a starting point, and these matrices will then follow from (\ref{starof3fcoeffs}), as in the Calabi-Yau case.
\section*{Acknowledgements}
We would like to thank A.~Tomasiello for initial collaboration on this project and many useful discussions. We gratefully acknowledge useful conversations with Jan de Boer, Sheer El-Showk, Mariana Gra\~na, Shamit Kachru, Kostas Skenderis, Marika Taylor, and Daniel Waldram.
AK would like to thank the Aspen Institute for Physics, where part of this work was performed. RM would like to thank the Institute
for Mathematical Sciences, Imperial College for hospitality.
The work of AK was supported by Stichting FOM. RM is supported in part by an RTN contract MRTN-CT-2004-005104 and by ANR
grant BLAN06-3-137168.
\subsection{Probability Tools}
\begin{lemma}\textbf{(Kiefer--Wolfowitz \cite{kiefer1960equivalence})}
Assume that $\A \subset \mathbb{R}^{d}$ is compact and $\operatorname{span}(\A)=\mathbb{R}^{d}$. Let $\pi: \A \rightarrow[0,1]$ be a distribution on $\A$ so that $\sum_{a \in \A} \pi(a)=1$ and $\tX(\pi) \in \mathbb{R}^{d \times d}$ and $g(\pi) \in \mathbb{R}$ be given by
\begin{align*}
\tX(\pi)=\sum_{a \in \A} \pi(a) a a^{\top}, \quad g(\pi)=\max _{a \in \A}\|a\|_{\tX(\pi)^{-1}}^{2}
\end{align*}
Then the following are equivalent:
\begin{enumerate}[(a)]
\item $\pi^{*}$ is a minimizer of $g$.
\item $\pi^{*}$ is a maximizer of $f(\pi)=\log \det \tX(\pi)$.
\item $g\left(\pi^{*}\right)=d$.
\end{enumerate}
Furthermore, there exists a minimizer $\pi^{*}$ of $g$ such that $\left|\operatorname{Supp}\left(\pi^{*}\right)\right| \leq d(d+1) / 2$.
\end{lemma}
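As an illustrative numerical check (not part of the original argument; the action set below is synthetic), the Fedorov--Wynn / Frank--Wolfe iteration maximizes $\log \det \tX(\pi)$ and should drive $g(\pi)$ down to $d$, as the equivalence theorem predicts:

```python
import numpy as np

# Frank-Wolfe (Fedorov-Wynn) iteration for the D-optimal design on a random,
# spanning action set: maximizing log det M(pi) should drive
# g(pi) = max_a ||a||^2_{M(pi)^{-1}} down to d.
rng = np.random.default_rng(0)
d, A = 3, 20
actions = rng.standard_normal((A, d))      # compact action set spanning R^d

pi = np.full(A, 1.0 / A)                   # start from the uniform design
for _ in range(5000):
    M = (pi[:, None, None] * actions[:, :, None] * actions[:, None, :]).sum(axis=0)
    norms = np.einsum('ad,de,ae->a', actions, np.linalg.inv(M), actions)
    a_star = int(np.argmax(norms))
    g = norms[a_star]
    gamma = (g - d) / (d * (g - 1.0))      # exact line-search step for log det
    pi *= 1.0 - gamma
    pi[a_star] += gamma

M = (pi[:, None, None] * actions[:, :, None] * actions[:, None, :]).sum(axis=0)
g_final = np.einsum('ad,de,ae->a', actions, np.linalg.inv(M), actions).max()
print(g_final)   # should be close to d = 3
```

Note that $g(\pi) \geq d$ for every design, since $\sum_a \pi(a)\|a\|^2_{\tX(\pi)^{-1}} = \Tr(\tX(\pi)^{-1}\tX(\pi)) = d$; the iteration closes the gap from above.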
\begin{lemma}
\label{conc-lemma-sub-exp}
\textbf{(Sub-Exponential Concentration)}
Suppose that $X$ is sub-exponential with parameters $(\nu, \alpha)$. Then
\begin{align*}
\Pb[X \geq \mu+t] \leq \begin{cases}e^{-\frac{t^{2}}{2 \nu^{2}}} & \text { if } 0 \leq t \leq \frac{\nu^{2}}{\alpha} \\ e^{-\frac{t}{2 \alpha}} & \text { if } t>\frac{\nu^{2}}{\alpha}\end{cases}
\end{align*}
which can be equivalently written as follows:
\begin{align*}
\Pb[X \geq \mu+t] \leq \exp \left\{-\frac{1}{2} \min \left\{\frac{t}{\alpha}, \frac{t^{2}}{\nu^{2}}\right\}\right\}.
\end{align*}
\end{lemma}
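A quick Monte Carlo sanity check of the tail bound (illustrative only; the chi-square example and its standard sub-exponential parameters $(\nu,\alpha)=(2\sqrt{k},4)$ are assumptions, not taken from this paper):

```python
import numpy as np

# chi^2_k is sub-exponential with (nu, alpha) = (2*sqrt(k), 4) and mean mu = k,
# so P[X >= mu + t] <= exp(-0.5 * min(t/alpha, t^2/nu^2)) should hold empirically.
rng = np.random.default_rng(1)
k = 4
nu, alpha, mu = 2.0 * np.sqrt(k), 4.0, float(k)
samples = rng.chisquare(k, size=500_000)

ok = True
for t in [2.0, 4.0, 8.0, 12.0]:
    empirical = (samples >= mu + t).mean()
    bound = np.exp(-0.5 * min(t / alpha, t ** 2 / nu ** 2))
    ok = ok and (empirical <= bound)
print(ok)   # expect True
```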
\begin{lemma}
\label{lemma:least-square-conc}
\textbf{(Restatement of Theorem 2.2 in \cite{rigollet2015high})} Assume that the linear model holds where the noise $\varepsilon \sim \operatorname{subG}_{n}\left(\sigma^{2}\right)$. Then the least squares estimator $\hat{\theta}^{\mathrm{LS}}$ satisfies
\begin{align*}
\mathbb{E}\left[\operatorname{MSE}\left(\bX \hat{\theta}^{\mathrm{LS}}\right)\right]=\frac{1}{n} \mathbb{E}\left\|\bX \hat{\theta}^{\mathrm{LS}}-\bX \theta^{*}\right\|_{2}^{2} \lesssim \sigma^{2} \frac{r}{n}
\end{align*}
where $r=\operatorname{rank}\left(\bX^{\top} \bX\right)$. Moreover, for any $\delta>0$, with probability at least $1-\delta$, it holds
\begin{align*}
\operatorname{MSE}\left(\bX \hat{\theta}^{\mathrm{LS}}\right) \lesssim \sigma^{2} \frac{r+\log (1 / \delta)}{n}
\end{align*}
\end{lemma}
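A small simulation sketch of this rate (synthetic Gaussian data, assumed for illustration; under i.i.d.\ noise the expected in-sample MSE equals $\sigma^{2} r / n$ exactly):

```python
import numpy as np

# Least squares on a random design: the average in-sample MSE over many noise
# draws should match sigma^2 * r / n.
rng = np.random.default_rng(2)
n, d, sigma = 200, 5, 0.5
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
r = int(np.linalg.matrix_rank(X.T @ X))

mses = []
for _ in range(2000):
    y = X @ theta_star + sigma * rng.standard_normal(n)
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    mses.append(float(np.sum((X @ (theta_hat - theta_star)) ** 2)) / n)
avg_mse = float(np.mean(mses))
print(avg_mse, sigma ** 2 * r / n)   # the two numbers should be close
```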
\subsection{Formulation for \PE Design to Reduce MSE}
\label{app:bandit-prop}
\input{prob_form}
\subsection{Loss is convex}
\label{app:convex-loss}
\begin{customproposition}{2}
\label{prop:convex-loss}
The loss function
\begin{align*}
\L_n(\pi, \bb, \bSigma) = \frac{1}{n} \left(\sum_{a,a'}\bw(a)^\top\bA_{\bb,\bSigma}^{-1}\bw(a')\right)
\end{align*}
for any arbitrary design proportion $\bb\in\triangle(\A)$ and co-variance matrix $\bSigma$ is strictly convex.
\end{customproposition}
\begin{proof}
Let $\bb, \bb' \in \triangle(\A)$, so that $\bA_{\bb}$ and $\bA_{\bb'}$ are invertible. Recall that we have the loss for a design proportion $\bb$ as
\begin{align*}
\L_n(\pi, \bb, \bSigma) = \frac{1}{n} \left(\sum_{a,a'}\bw(a)^\top\bA_{\bb,\bSigma}^{-1}\bw(a')\right) \overset{(a)}{=} \frac{1}{n} \Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb,\bSigma}^{-1}\bw(a')\right) &= \frac{1}{n}\Tr\left(\bA_{\bb,\bSigma}^{-1}\sum_{a,a'}\bw(a)\bw(a')^\top\right) \\
&= \frac{1}{n}\Tr\left(\tX\bA_{\bb,\bSigma}^{-1}\right)
\end{align*}
where, in $(a)$ we can introduce the trace as the R.H.S. is a scalar quantity,
$\bw(a) = \pi(a)\bx(a)$ and $\tX = \sum_{a,a'}\bw(a)\bw(a')^\top$. Similarly for a $\lambda \in[0,1]$ we have
\begin{align*}
\L_n(\pi, \lambda\bb + (1-\lambda)\bb', \bSigma) = \frac{1}{n} \Tr\left(\bA^{-1}_{\bb,\bb',\bSigma}\sum_{a,a'}\bw(a)\bw(a')^\top\right) = \frac{1}{n} \Tr\left(\tX\bA^{-1}_{\bb,\bb',\bSigma}\right).
\end{align*}
Let the matrix $\bA_{\bb,\bb',\bSigma}$ be defined as
\begin{align*}
\bA_{\bb,\bb',\bSigma} \coloneqq \lambda\bA_{\bb,\bSigma} + (1-\lambda)\bA_{\bb',\bSigma}.
\end{align*}
Now observe that
\begin{align*}
\bA_{\bb,\bb',\bSigma} = \lambda\bA_{\bb,\bSigma} + (1-\lambda)\bA_{\bb',\bSigma} = \sum_{a=1}^A\left(\lambda\bb(a) + (1-\lambda)\bb'(a)\right)\tx(a)\tx(a)^\top.
\end{align*}
Also observe that this is a positive semi-definite matrix. Now using Lemma 1 from \citep{whittle1958multivariate} we can show that
\begin{align*}
\left(\lambda\bA_{\bb,\bSigma} + (1-\lambda)\bA_{\bb',\bSigma}\right)^{-1} \prec \lambda\bA_{\bb,\bSigma}^{-1} + (1-\lambda)\bA_{\bb',\bSigma}^{-1}
\end{align*}
for any positive semi-definite matrices $\bA_{\bb}, \bA_{\bb'}$, and $\lambda\in[0,1]$.
Now taking the trace on both sides we get
\begin{align*}
\Tr\left(\left(\lambda\bA_{\bb,\bSigma} + (1-\lambda)\bA_{\bb',\bSigma}\right)^{-1}\right) < \lambda\Tr\left(\bA_{\bb,\bSigma}^{-1}\right) + (1-\lambda)\Tr\left(\bA_{\bb',\bSigma}^{-1}\right).
\end{align*}
Now using Lemma 2 from \citet{whittle1958multivariate} we can show that
\begin{align*}
\Tr\left(\tX\left(\lambda\bA_{\bb,\bSigma} + (1-\lambda)\bA_{\bb',\bSigma}\right)^{-1}\right) < \lambda\Tr\left(\tX\bA_{\bb,\bSigma}^{-1}\right) + (1-\lambda)\Tr\left(\tX\bA_{\bb',\bSigma}^{-1}\right).
\end{align*}
for any positive semi-definite matrix $\tX$. This implies that
\begin{align*}
\L_n(\pi, \lambda\bb + (1-\lambda)\bb', \bSigma) < \lambda\L_n(\pi, \bb, \bSigma) + (1-\lambda)\L_n(\pi, \bb', \bSigma).
\end{align*}
Hence, the loss function is convex.
\end{proof}
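The convexity claim can be spot-checked numerically (an illustrative sketch; the vectors, variances, and rank-one $\tX$ below are synthetic stand-ins for the quantities in the proof):

```python
import numpy as np

# L(b) = Tr(V A_b^{-1}) with A_b = sum_a b(a) x(a) x(a)^T / sigma^2(a) and
# V = (sum_a w(a))(sum_a w(a))^T should satisfy
# L(lam*b + (1-lam)*b') <= lam*L(b) + (1-lam)*L(b').
rng = np.random.default_rng(3)
d, A = 4, 8
x = rng.standard_normal((A, d))
sig2 = rng.uniform(0.5, 2.0, size=A)
w = rng.standard_normal((A, d))            # stand-in for w(a) = pi(a) x(a)
V = np.outer(w.sum(axis=0), w.sum(axis=0))

def loss(b):
    Ab = sum(b[a] / sig2[a] * np.outer(x[a], x[a]) for a in range(A))
    return float(np.trace(V @ np.linalg.inv(Ab)))

b1, b2 = rng.dirichlet(np.ones(A)), rng.dirichlet(np.ones(A))
lam = 0.3
lhs = loss(lam * b1 + (1 - lam) * b2)
rhs = lam * loss(b1) + (1 - lam) * loss(b2)
print(lhs <= rhs + 1e-6)   # expect True
```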
\begin{remark}\textbf{(Bound on variance)}
\label{remark:bound-variance}
We can use singular value decomposition of $\bSigma_*$ as $\bSigma_* = \bU \bD \bP^{\top}$ with orthogonal matrices $\bU, \bP^{\top}$ and $\bD=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{d}\right)$ where $\lambda_{i}$ denotes a singular value. Then we can bound $\bx(a)^{\top} \bSigma_* \bx(a)$ as
\begin{align*}
&\left\|\bx(a)^{\top} \bSigma_* \bx(a)\right\|=\left\|\bx(a)^{\top} \bU \bD \bP^{\top} \bx(a)\right\|
\overset{(a)}{=}\left\|\bu^{\top} \bD \mathbf{p}\right\| \leq\left\|\bu^{\top}\right\| \max_{i}|\lambda_{i}| \left\|\mathbf{p} \right\|
\overset{(b)}{=} \| \bx(a)\| \max_{i} |\lambda_{i}| \left\| \bx(a)\right\|
= \max_{i}|\lambda_{i}|\left\| \bx(a) \right\|^{2}
\end{align*}
where in $(a)$ we have $\bu = \bU^{\top} \bx(a)$, $\mathbf{p} = \bP^{\top} \bx(a)$ and $(b)$ uses the fact that $\left\|\bU^{\top} \bx(a)\right\|=\|\bx(a)\|$ for any orthogonal matrix $\bU^{\top}$. Similarly we can show that $\left\|\bx(a)^{\top} \bSigma_* \bx(a)\right\|\geq \min_{i}|\lambda_{i}|\left\| \bx(a) \right\|^{2}$. Let $H_L^2\leq \|\bx(a)\|^2\leq H_U^2$ for any $a\in[A]$.
This implies that
\begin{align*}
&\underbrace{\min_{i}|\lambda_{i}|H_L^2}_{\sigma^2_{\min}} \leq \min_{i}|\lambda_{i}|\left\| \bx(a) \right\|^{2} \leq \underbrace{\bx(a)^{\top} \bSigma_* \bx(a)}_{\sigma^2(a)}
\leq \max_{i}|\lambda_{i}|\left\| \bx(a) \right\|^{2}\leq \underbrace{\max_{i}|\lambda_{i}|H^2_U}_{\sigma^2_{\max}}
\end{align*}
\end{remark}
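The sandwich bound in the remark is just the Rayleigh-quotient inequality; a quick numerical check on synthetic data (illustrative only):

```python
import numpy as np

# For a symmetric PSD covariance Sigma, every feature vector x satisfies
# min_i lambda_i * ||x||^2 <= x^T Sigma x <= max_i lambda_i * ||x||^2.
rng = np.random.default_rng(4)
d = 5
B = rng.standard_normal((d, d))
Sigma = B @ B.T                            # PSD covariance
lams = np.linalg.eigvalsh(Sigma)

ok = True
for _ in range(100):
    xa = rng.standard_normal(d)
    quad = float(xa @ Sigma @ xa)
    n2 = float(xa @ xa)
    ok = ok and (lams.min() * n2 - 1e-9 <= quad <= lams.max() * n2 + 1e-9)
print(ok)   # expect True
```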
\subsection{Loss Gradient is Bounded}
\label{app:gradient-loss}
\begin{customproposition}{3}
\label{prop:gradient-loss}
Let $\bb, \bb' \in \triangle(\A)$, so that $\bA_{\bb,\bSigma}$ and $\bA_{\bb',\bSigma}$ are invertible and define $\bV = \sum_{a,a'}\bw(a)\bw(a')^{\top}$. Then the gradient of the loss function is bounded such that
\begin{align*}
\|\nabla_{\bb(a)}\L(\pi,\bb,\bSigma) - \nabla_{\bb(a)}\L(\pi,\bb',\bSigma)\|_2 \leq C_{\kappa}
\end{align*}
where, the $C_\kappa = \frac{\lambda_d(\bV)H^2_U}{\sigma^2(a)\left(\min_{a'\in\A}\frac{\bb(a')}{\sigma(a')^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right)\right)^2}
+ \frac{\lambda_1(\bV)H^2_U}{\sigma^2(a)\left(\min_{a'\in\A}\frac{\bb'(a')}{\sigma(a')^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right)\right)^2}$.
\end{customproposition}
\begin{proof}
Let $\bb, \bb' \in \triangle(\A)$, so that $\bA_{\bb,\bSigma}$ and $\bA_{\bb',\bSigma}$ are invertible.
Observe that the gradient of the loss is given by
\begin{align*}
\nabla_{\bb(a)}\L(\pi,\bb,\bSigma) &= \nabla_{\bb(a)} \Tr\left(\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb,\bSigma}\bw(a')\right) \\
&\overset{(a)}{\leq} \lambda_1(\bV)\nabla_{\bb(a)}\Tr(\bA^{-1}_{\bb,\bSigma}) \\
&\overset{}{=} -\lambda_1(\bV)\Tr\left(\left(\dfrac{\bw(a)\bw(a)^\top}{\sigma^2(a)} \right)\bA^{-2}_{\bb,\bSigma}\right)\\
&= - \lambda_1(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb,\bSigma} \bw(a)\right\|^2_2
\end{align*}
where, in $(a)$ we denote $\bV = \sum_{a,a'}\bw(a)\bw(a')^{\top}$.
Similarly the gradient of the loss is lower bounded by
\begin{align*}
&\nabla_{\bb(a)}\L(\pi,\bb,\bSigma) \geq - \lambda_d(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb,\bSigma} \bw(a)\right\|^2_2
\end{align*}
which yields a bound on the gradient difference as
\begin{align*}
\|\nabla_{\bb(a)}\L(\pi,\bb,\bSigma) - \nabla_{\bb(a)}\L(\pi,\bb',\bSigma)\|_2
&\leq \left\|\lambda_d(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb,\bSigma} \bw(a)\right\|^2_2 - \lambda_1(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb',\bSigma} \bw(a)\right\|^2_2\right\|_2\\
&\leq \left|\lambda_d(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb,\bSigma} \bw(a)\right\|^2_2\right|+ \left|\lambda_1(\bV) \dfrac{1}{\sigma^2(a)}\left\|\bA^{-1}_{\bb',\bSigma} \bw(a)\right\|^2_2\right|
\end{align*}
So now we focus on the quantity
\begin{align*}
\left\|\bA^{-1}_{\bb,\bSigma} \bw(a)\right\|^2_2\leq \|\bA^{-1}_{\bb,\bSigma}\|^2_2\|\bw(a)\|^2_2 \leq \|\bA^{-1}_{\bb,\bSigma}\|^2_2 H^2_U
\end{align*}
Now observe that when $\bb(a)\in\Delta(\A)$ and initialized uniform randomly, then the optimization in \eqref{eq:opt-agnostic-loss} results in a non-singular $\bA^{-1}_{\bb,\bSigma}$ if each action has been sampled at least once which is satisfied by \sp. So now we need to bound the minimum eigenvalue of $\bA^{}_{\bb,\bSigma}$ denoted as $\lambda_{\min}(\bA^{}_{\bb,\bSigma})$. Using Lemma 7 of \citet{fontaine2021online} we have that for all $\bb \in \Delta(\A)$,
\begin{align*}
\min_{a \in[A]} \frac{\bb(a)}{\sigma(a)^2} \sum_{a=1}^A \bw(a) \bw(a)^{\top} \preccurlyeq \sum_{a=1}^A \frac{\bb(a)}{\sigma(a)^2} \bw(a) \bw(a)^{\top} .
\end{align*}
And finally
\begin{align*}
\min_{a \in[A]} \frac{\bb(a)}{\sigma(a)^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right) \leq \lambda_{\min}(\bA^{}_{\bb,\bSigma})
\end{align*}
This implies that
\begin{align*}
\|\bA^{-1}_{\bb,\bSigma}\|_2 = \lambda_{\max}(\bA^{-1}_{\bb,\bSigma}) = \dfrac{1}{\lambda_{\min}(\bA^{}_{\bb,\bSigma})} \leq \dfrac{1}{\min_{a \in[A]} \frac{\bb(a)}{\sigma(a)^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right)}
\end{align*}
Plugging everything back we get that
\begin{align*}
\|\nabla_{\bb(a)}\L(\pi,\bb,\bSigma) - \nabla_{\bb(a)}\L(\pi,\bb',\bSigma)\|_2 &\leq \dfrac{\lambda_d(\bV)H^2_U}{\sigma^2(a)\left(\min_{a'\in\A}\frac{\bb(a')}{\sigma(a')^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right)\right)^2} \\
&\qquad + \dfrac{\lambda_1(\bV)H^2_U}{\sigma^2(a)\left(\min_{a'\in\A}\frac{\bb'(a')}{\sigma(a')^2} \lambda_{\min }\left(\sum_{a=1}^A \bw(a) \bw(a)^{\top}\right)\right)^2}
\end{align*}
The claim of the lemma follows.
\end{proof}
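The exact gradient used above follows from the matrix identity $d\,\Tr(\bV \bA^{-1}) = -\Tr(\bV \bA^{-1}\, d\bA\, \bA^{-1})$; it can be checked against finite differences on synthetic data (an illustrative sketch, with assumed random problem instances):

```python
import numpy as np

# Finite-difference check of the gradient identity
# d/db(a) Tr(V A_b^{-1}) = -(1/sigma^2(a)) w(a)^T A_b^{-1} V A_b^{-1} w(a),
# where A_b = sum_a b(a) w(a) w(a)^T / sigma^2(a).
rng = np.random.default_rng(5)
d, A = 4, 7
w = rng.standard_normal((A, d))
sig2 = rng.uniform(0.5, 2.0, size=A)
V = np.outer(w.sum(axis=0), w.sum(axis=0))

def Ab(b):
    return sum(b[a] / sig2[a] * np.outer(w[a], w[a]) for a in range(A))

def loss(b):
    return float(np.trace(V @ np.linalg.inv(Ab(b))))

b = rng.dirichlet(np.ones(A))
Ainv = np.linalg.inv(Ab(b))
grad_analytic = np.array([-(w[a] @ Ainv @ V @ Ainv @ w[a]) / sig2[a] for a in range(A)])

eps = 1e-6
grad_fd = np.empty(A)
for a in range(A):
    bp, bm = b.copy(), b.copy()
    bp[a] += eps
    bm[a] -= eps
    grad_fd[a] = (loss(bp) - loss(bm)) / (2 * eps)
err = float(np.max(np.abs(grad_analytic - grad_fd)))
print(err)   # should be tiny relative to the gradient scale
```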
\subsection{Bounds on the loss}
\label{app:bound-loss}
\input{kiefer-wolfowitz}
\subsection{Policy Weighted Least Square Estimator that Minimizes MSE}
\label{app:policy-weighted-least-square}
\begin{customproposition}{5}
\textbf{(Policy Weighted Least Square)}
Let $\bA_{\bb} = \sum_{a}\lceil \bb(a)n\rceil\tx(a)\tx(a)^{\top}$ be the design matrix such that each action is sampled according to $\bb\in \triangle(A)$. Define the policy weighted least square estimate $\wtheta_{n}$ by \eqref{eq:weighted-least-square}.
Then the policy weighted least square in \eqref{eq:weighted-least-square} estimate minimizes $\E_{\D}\left[\left( \sum_{a=1}^A\bw(a)^{\top}(\wtheta_n-\btheta_*)\right)^{2}\right]$ by minimizing the quantity $\sum_{a,a'} \bw(a)^{\top}\bA_{\bb}^{-1}\bw(a')$.
\end{customproposition}
\begin{proof}
Recall
from \cref{sec:oracle} that the \textit{policy weighted} least square estimate is
\begin{align*}
\wtheta_{n} \overset{(a)}{=}\arg\min_{\btheta}\sum_{t=1}^{n}\frac{\pi^2(I_t)}{\sigma^{2}(I_{t})}(R(I_t)-\bv(I_t)^{\top}\btheta)^{2}
& \overset{(b)}{=}\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\bR_{n}\\
& =\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\left(\tX_{n}\btheta_{*}+\mathbf{\eta}\right)
\end{align*}
where, in $(a)$, $I_t$ is the action sampled at timestep $t$, and in $(b)$ we define the diagonal matrix $\bSigma_n\in\mathbb{R}^{n\times n}$ with $\mathbf{diag}(\bSigma_n) = [\sigma^2(I_1), \sigma^2(I_2), \ldots, \sigma^2(I_n)]$.
It follows then the value estimate
\begin{align*}
Y_n = \sum_{a=1}^A \bw(a)^{\top}\wtheta_n &= \sum_{a=1}^A \bw(a)^{\top}(\tX_n^{\top}\bSigma_n^{-1}\tX_n)^{-1}\tX_n^{\top}\bSigma_n^{-1}\bR_n = \sum_{a=1}^A \bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\left(\tX_n\btheta_* + \mathbf{\eta}\right)\\
&= \sum_{a=1}^A\bw(a)^{\top}\btheta_* + \sum_{a=1}^A\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\eta.
\end{align*}
This means that
\begin{align*}
\sum_{a=1}^A \bw(a)^{\top}(\wtheta_n - \btheta_*) &= \sum_{a=1}^A\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\eta \\
&= \sum_{a=1}^A\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1/2}_n\bSigma^{-1/2}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\eta
\end{align*}
Again, as $\eta$ is bounded, we have $\eta\sim\SG(0,\bSigma_n)$, where $\SG$ denotes a sub-Gaussian distribution. Then we can show that
\begin{align*}
\left(\sum_{a=1}^A \bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 &\sim \SE\left(0, \sum_{a=1}^A\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\E[\eta\eta^\top]\bSigma^{-1}_n\tX_n^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\sum_{a=1}^A\bw(a)\right)\\
&\overset{(a)}{\sim} \SE\left(0, \sum_{a=1}^A\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\tX_n^{\top}\bSigma^{-1}_n\bSigma^{}_n\bSigma^{-1}_n\tX_n^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\sum_{a=1}^A\bw(a)\right)\\
&\sim \SE\left(0, \sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a')\right)
\end{align*}
where, $(a)$ follows as $\eta\sim\SG(0,\bSigma_n)$ and $\SE$ denotes a sub-exponential distribution. Now using the sub-exponential concentration inequality in \Cref{conc-lemma-sub-exp}, setting
$$
\nu^2=\sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a'),
$$
and $\alpha = \nu$, we can show that
\begin{align*}
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > t \right) \leq \delta, \qquad \text{ if } t \in (0,1]\\
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > t^2 \right) \leq \delta, \qquad \text{ if } t > 1 .
\end{align*}
Combining the above two we can show that
\begin{align*}
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > \min\{t, t^2\} \right) \leq \delta, \forall t > 0
\end{align*}
Further define matrix $\bbSigma_n \in \mathbb{R}^{d\times d}$ as $\bbSigma_n^{-1} \coloneqq (\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}$.
Hence using sub-exponential concentration inequality we can show that
\begin{align*}
\Pb\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > \min\left\{\sqrt{2\sum_{a,a'}\bw(a)^{\top}\bbSigma_n^{-1}\bw(a') \log (1 / \delta)}, 2\sum_{a,a'}\bw(a)^{\top}\bbSigma_n^{-1}\bw(a') \log (1 / \delta)\right\} \right) \leq \delta.
\end{align*}
This means that we have with probability $(1-\delta)$ that
\begin{align*}
\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 &\leq \min\left\{\sqrt{2\sum_{a,a'} \bw(a)^{\top}\bbSigma_n^{-1}\bw(a') \log (1 / \delta)}, 2\sum_{a,a'}\bw(a)^{\top}\bbSigma_n^{-1}\bw(a') \log (1 / \delta)\right\}\\
&= 2\sum_{a,a'} \bw(a)^{\top}\bbSigma_n^{-1}\bw(a') \log (1 / \delta).
\end{align*}
Recall that we have sampled each action till $n$ in some proportion $\bb\in\triangle(\A)$. Then we have that
\begin{align*}
\bbSigma_n = \tX_n^{\top}\bSigma^{-1}_n\tX_n &= \sum_{a=1}^A\lceil \bb(a)n\rceil\pi^2(a)\sigma^{-2}(a)\bv(a)\bv(a)^{\top}\\
&= \sum_{a=1}^A\lceil \bb(a)n\rceil\tx(a)\tx(a)^{\top} = n\bA_{\bb,\bSigma}.
\end{align*}
It follows then that
\begin{align*}
\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 &\leq \frac{2}{n}\sum_{a,a'} \bw(a)^{\top}\bA_{\bb,\bSigma}^{-1}\bw(a') \log (1 / \delta)
\end{align*}
Hence, the policy-weighted least squares estimate
minimizes $\E_{\D}\left[\left( \sum_{a=1}^A\bw(a)^{\top}(\wtheta_n-\btheta_*)\right)^{2}\right]$ by minimizing the quantity $\sum_{a,a'} \bw(a)^{\top}\bA_{\bb,\bSigma}^{-1}\bw(a')$, where $\bA_{\bb,\bSigma} = \sum_{a}\bb(a)\tx(a)\tx(a)^{\top}$.
\end{proof}
\begin{remark}
\label{remark:unbiased-estimator}
Note that the estimator $\wtheta_n$ is an unbiased estimator of $\btheta_*$. This can be shown as follows
\begin{align*}
\E\left[\wtheta_n\right] - \btheta_* &= \E\left[\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\bR_{n}\right] - \btheta_*\\
&= \E\left[\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\left(\tX_{n}\btheta_*+\mathbf{\eta}\right)\right] - \btheta_*\\
&= \E\left[\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\btheta_*\right]+\E\left[\left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\mathbf{\eta}\right] - \btheta_*\\
&= \btheta_* + \left(\tX_{n}^{\top}\bSigma_{n}^{-1}\tX_{n}\right)^{-1}\tX_{n}^{\top}\bSigma_{n}^{-1}\E\left[\mathbf{\eta}\right] - \btheta_* \overset{(a)}{=} 0
\end{align*}
where $(a)$ follows since the noise is zero-mean.
\end{remark}
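The unbiasedness argument above can also be checked empirically. The following sketch (with hypothetical features, policy, and variances) averages the policy-weighted least squares estimate over many independent noise draws and confirms that the bias vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, A, reps, trials = 2, 3, 50, 2000
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # features x(a) (made up)
pi = np.array([0.5, 0.3, 0.2])                      # target policy pi(a)
sig2 = np.array([0.5, 1.0, 2.0])                    # variances sigma^2(a)
theta_star = np.array([1.0, -2.0])

rows = np.repeat(np.arange(A), reps)                # each action pulled `reps` times
Xt = pi[rows][:, None] * X[rows]                    # rows of tilde X_n
w = 1.0 / sig2[rows]                                # diagonal of Sigma_n^{-1}

est = np.zeros(d)
for _ in range(trials):
    eta = rng.normal(0.0, np.sqrt(sig2[rows]))      # zero-mean noise
    R = Xt @ theta_star + eta                       # rewards under the linear model
    # theta_hat = (Xt^T S^{-1} Xt)^{-1} Xt^T S^{-1} R
    G = Xt.T @ (w[:, None] * Xt)
    est += np.linalg.solve(G, Xt.T @ (w * R))

bias = est / trials - theta_star
print(np.abs(bias).max())  # close to zero
```

The residual bias is pure Monte Carlo error and shrinks at rate $1/\sqrt{\text{trials}}$, as the estimator is unbiased for any fixed design.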
\section{Bandit Regret Proofs}
\input{bandit_app1}
\section{Regret Lower Bound}
\label{app:lower-regret-bound}
\input{regret_lower_bound.tex}
\section{Additional Experiments}
\label{app:addl-expt}
\input{addl_expt}
\section{Table of Notations}
\label{table-notations}
\begin{table}[!tbh]
\centering
\begin{tabular}{|p{10em}|p{33em}|}
\hline\textbf{Notations} & \textbf{Definition} \\\hline
$\pi(a)$ & Target policy probability for action $a$ \\\hline
$\bb(a)$ & Behavior policy probability for action $a$\\\hline
$\bx(a)$ & Feature of action $a$\\\hline
$\btheta_*$ & Optimal mean parameter\\\hline
$\wtheta_n$ & Estimate of $\btheta_*$\\\hline
$\mu(a) = \bx(a)^\top\btheta_*$ & Mean of action $a$\\\hline
$\wmu_t(a) = \bx(a)^\top\wtheta_t$ & Empirical mean of action $a$ at time $t$\\\hline
$R_t(a)$ & Reward for action $a$ at time $t$\\\hline
$\bSigma_*$ & Optimal co-variance matrix\\\hline
$\wSigma_t$ & Empirical co-variance matrix at time $t$\\\hline
$\sigma^2(a) = \bx(a)^\top\bSigma_*\bx(a)$ & Variance of action $a$ \\\hline
$\wsigma_t^2(a) = \bx(a)^\top\wSigma_t\bx(a)$ & Empirical variance of action $a$ at time $t$ \\\hline
$n$ & Total budget \\\hline
$T_n(a)$ & Total Samples of action $a$ after $n$ timesteps\\\hline
\end{tabular}
\vspace{1em}
\caption{Table of Notations}
\label{tab:my_label}
\end{table}
\subsection{Loss of Bandit Oracle}
\label{app:loss-bandit-oracle}
\begin{customproposition}{6}
\textbf{(Bandit Oracle MSE)}
Let the oracle sample each action $a$ for $\lceil n \bb^*(a)\rceil$ times, where $\bb^*$ is the solution to \eqref{eq:opt-oracle-sol}. Then the MSE is given by
\begin{align*}
\L^*_n(\pi, \bb^*, \bSigma_*) \leq O\left(\frac{d\lambda_1(\bV)\log n}{n}\right) + O\left(\frac{1}{n}\right).
\end{align*}
\end{customproposition}
\begin{proof}
Recall that the matrix $\tX_n = [\pi_1\bv_1, \pi_2\bv_2, \ldots, \pi_n\bv_n]^{\top} \in \R^{n\times d}$ collects the observed features for the $n$ samples taken. Let $\bR_n = [R_1, R_2, \ldots, R_n]^{\top} \in \R^{n\times 1}$ be the $n$ observed rewards and $\mathbf{\eta}\in\R^{n\times 1}$ the bounded noise vector. Then using the policy-weighted least squares estimate we have
\begin{align*}
\wtheta_{n} \overset{(a)}{=}\arg\min_{\btheta}\sum_{t=1}^{n}\frac{\pi^2(I_t)}{\sigma^{2}(I_{t})}(R(I_t)-\bx(I_t)^{\top}\btheta)^{2}
\end{align*}
where in $(a)$, $I_t$ is the action sampled at timestep $t$.
Recall that $\mathbf{diag}(\bSigma_n) = [\sigma^2(I_1), \sigma^2(I_2), \ldots, \sigma^2(I_n)]$, where $I_1, I_2, \ldots, I_n$ are the actions pulled at times $t=1,2,\ldots,n$.
We have that:
\begin{align*}
\wtheta_n - \btheta_* = (\tX_n^{\top}\bSigma_{n}^{-1}\tX_n)^{-1}\tX_n^{\top}\bSigma_{n}^{-1}\mathbf{\eta}
\end{align*}
where the noise vector $\eta\sim\SG(0,\bSigma_n)$ where $\bSigma_n\in\mathbb{R}^{n\times n}$.
For any $\bz \in \R^d$ we have
\begin{align*}
\bz^{\top}(\wtheta_n - \btheta_*) = \bz^{\top}(\tX_n^{\top}\bSigma_{n}^{-1}\tX_n)^{-1}\tX_n^{\top}\bSigma_{n}^{-1}\mathbf{\eta}
\end{align*}
Let $\bb^{*}$ be the \PE design for $\A$ defined in \eqref{eq:opt-oracle-sol}.
Then the oracle pulls each action $a \in \A$ exactly $\left\lceil n \bb^{*}(a)\right\rceil$ times for some $n>d(d+1)/2$ and computes the least squares estimator $\wtheta_n$. Observe that
$$
\sum_{a=1}^A \bw(a)^\top (\wtheta_n-\btheta_*) \sim \SG\left(0, \sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a')\right).
$$
So $\left(\sum_{a=1}^A \bw(a)^\top (\wtheta_n-\btheta_*)\right)^2 \sim \SE\left(0,\sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a')\right)$ where $\SE$ denotes the sub-exponential distribution. Denote the quantity
\begin{align*}
t \coloneqq \sqrt{2\sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a') \log (1 / \delta)}.
\end{align*}
Now using sub-exponential concentration inequality in \Cref{conc-lemma-sub-exp}, setting
$$
\nu^2=\sum_{a,a'}\bw(a)^{\top}(\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}\bw(a'),
$$
and $\alpha = \nu$, we can show that
\begin{align*}
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > t \right) \leq \delta, \qquad \text{ if } t \in (0,1]\\
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > t^2 \right) \leq \delta, \qquad \text{ if } t > 1 .
\end{align*}
Combining the above two we can show that
\begin{align*}
\Pb&\left(\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2 > \min\{t, t^2\} \right) \leq \delta, \forall t > 0
\end{align*}
Further define matrix $\bbSigma_n \in \mathbb{R}^{d\times d}$ as $\bbSigma_n^{-1} \coloneqq (\tX_n^{\top}\bSigma^{-1}_n\tX_n)^{-1}$.
This means that we have with probability $(1-\delta)$ that
\begin{align*}
\left(\sum_{a=1}^A\bw(a)^{\top}(\wtheta_n - \btheta_*)\right)^2
&\leq \!\!\min\left\{\!\!\sqrt{2\sum_{a,a'} \bw(a)^{\top}\bbSigma_n^{-1} \bw(a') \log (1 / \delta)}, 2\sum_{a,a'} \bw(a)^{\top}\bbSigma_n^{-1} \bw(a') \log (1 / \delta) \right\}\\
&\overset{(a)}{=} \min\bigg\{\sqrt{\frac{2}{n}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*,\bSigma_*}\bw(a') \log (1 / \delta)},\;
\frac{2}{n}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*,\bSigma_*}\bw(a') \log (1 / \delta)\bigg\}\\
&\overset{(b)}{\leq} \min\left\{\sqrt{\frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}}, \frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}\right\}
\end{align*}
where we have taken at most $n$ pulls with $n > \frac{d(d+1)}{2}$. Here $(a)$ follows since $n\bA^{}_{\bb^*,\bSigma_*} = \bbSigma_n^{}$, using that the oracle has access to $\bSigma_*$ and the optimal proportion $\bb^*$. The step $(b)$ follows from \Cref{corollary:kiefer}, which gives $\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*,\bSigma_*}\bw(a') \leq d\lambda_1(\bV)$ where $\bV = \sum_{a,a'}\bw(a)\bw(a')^{\top}$.
Thus, for any $\delta \in(0,1)$ we have
\begin{align}
\mathbb{P}\left(\left\{\left(\sum_{a=1}^A \bw(a)^{\top} (\wtheta_n-\btheta_*)\right)^2> \min\left\{\sqrt{\frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}}, \frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}\right\}\right\}\right) \leq \delta. \label{eq:prob-oracle-loss}
\end{align}
Define the good event $\xi_{\delta}(n)$ as follows:
\begin{align*}
\xi_\delta(n) \coloneqq \left\{\left(\sum_{a=1}^A \bw(a)^{\top} (\wtheta_{n}-\btheta_*)\right)^2 \leq \min\left\{\sqrt{\frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}}, \frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}\right\}\right\}.
\end{align*}
Then the loss of the oracle following \PE $\bb^*$ is given by
\begin{align*}
\L^*_n(\pi, \bb^*, \bSigma_*) &= \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_n - \btheta_*\right)\right)^2\right]\\
&\leq \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_n - \btheta_*\right)\right)^2\xi_{\delta}(n)\right] + \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_n - \btheta_*\right)\right)^2\xi^c_{\delta}(n)\right]\\
&\overset{(a)}{\leq} \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_n - \btheta_*\right)\right)^2\xi_{\delta}(n)\right] + \sum_{t=1}^n AH^2_U B^2\Pb(\xi^c_\delta(n))\\
&\overset{(b)}{\leq} \min\left\{\sqrt{\frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}}, \frac{8 d\lambda_1(\bV) \log (1 / \delta)}{n}\right\} + \sum_{t=1}^n AH^2_U B^2\Pb(\xi^c_\delta(n))\\
&\overset{(c)}{\leq} \min\left\{\sqrt{\frac{16 d\lambda_1(\bV) \log n}{n}}, \frac{16 d
\lambda_1(\bV)\log n}{n}\right\} + O\left(\frac{1}{n}\right)\\
&\overset{}{\leq} \frac{48 d\lambda_1(\bV)
\log n}{n} + O\left(\frac{1}{n}\right)
\end{align*}
where $(a)$ follows since the noise satisfies $\eta_t^2\leq B^2$ and $\sum_a\|\bx(a)\|^2\leq A H^2_U$, which implies
\begin{align*}
\E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_n - \btheta_*\right)\right)^2\right] \leq n A H_U^2 B^2.
\end{align*}
The $(b)$ follows from \eqref{eq:prob-oracle-loss}, and $(c)$ follows by setting $\delta = 1/n^3$, and noting that $n > A$.
\end{proof}
\subsection{Concentration of the Variance Estimate}
\label{app:loss-bandit-tracker}
\begin{lemma}
\textbf{(Concentration Lemma)}
After $\Gamma$ samples of exploration, we can show that $\Pb\left(\xi^{var}_\delta(\Gamma)\right)\geq 1 -8\delta$, where $\xi^{var}_\delta(\Gamma)$ is the good variance event defined in \eqref{eq:good-variance-event} and $C > 0$ is a constant.
\end{lemma}
\begin{proof}
We observe $(\bx_{t},r_{t})\in\R^{d}\times\R$, $t=1,\ldots,\Gamma$, from
the model
\begin{align}
r_{t} & =\bx_{t}^{\top}\btheta_*+\eta_{t},\label{eq:linear_model}\\
\eta_{t} & \sim \SG(0,\bx_{t}^{\top}\bSigma_* \bx_{t}),\label{eq:variance_model}
\end{align}
where $\btheta_*\in\R^{d}$ and $\bSigma_*\in\R^{d\times d}$
are unknown.
Given an initial estimate $\wtheta_\Gamma$ of $\btheta_*$, we first
compute the squared residual $y_{t}:=\left(\bx_{t}^{\top}\wtheta_\Gamma-r_{t}\right)^{2}$,
and then obtain an estimate of $\bSigma_*$ via
\begin{equation}
\min_{\bS\in\R^{d\times d}}\sum_{t=1}^{\Gamma}\left(\left\langle \bx_{t}\bx_{t}^{\top},\bS\right\rangle -y_{t}\right)^{2}.\label{eq:prog}
\end{equation}
Observe that if $\wtheta_\Gamma=\btheta_*$, then the expectation of
the squared residual $y_{t}$ is
\[
\E\left[y_{t}\right]=\E\left[\left(\bx_{t}^{\top}\btheta_*-r_{t}\right)^{2}\right]=\E\left[\eta_{t}^{2}\right]=\bx_{t}^{\top}\bSigma_* \bx_{t}=\left\langle \bx_{t}\bx_{t}^{\top},\bSigma_*\right\rangle ,
\]
which is a linear function of $\bSigma_*$. The program (\ref{eq:prog})
is thus a least squares formulation for estimating $\bSigma_*$.
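To illustrate, here is a small numerical sketch of this program: squared residuals are regressed on $\mathrm{vec}(\bx_t\bx_t^{\top})$ to recover $\bSigma_*$ on the design points. The design, $\bSigma_*$, and $\btheta_*$ below are made up for illustration, and we plug in $\btheta_*$ itself in place of the estimate $\wtheta_\Gamma$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, reps = 2, 2000
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # M = d(d+1)/2 design points
Sigma_star = np.array([[1.0, 0.3], [0.3, 0.5]])
theta_star = np.array([0.5, -1.0])

xs = np.repeat(Phi, reps, axis=0)
var = np.einsum('ti,ij,tj->t', xs, Sigma_star, xs)    # x_t^T Sigma_* x_t
r = xs @ theta_star + rng.normal(0.0, np.sqrt(var))

# Squared residuals y_t, with theta_* standing in for the estimate
y = (xs @ theta_star - r) ** 2
# Least squares of y_t against vec(x_t x_t^T)
Xvec = np.einsum('ti,tj->tij', xs, xs).reshape(len(xs), d * d)
S_vec, *_ = np.linalg.lstsq(Xvec, y, rcond=None)
S_hat = S_vec.reshape(d, d)

# The recovered variances x^T S_hat x match x^T Sigma_* x on the design points
err = max(abs(p @ S_hat @ p - p @ Sigma_star @ p) for p in Phi)
print(err)
```

The recovery error on the design points is driven only by the averaging noise in $z_m$ and shrinks at rate $1/\sqrt{\text{reps}}$.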
Let $\bX_{t}:=\bx_{t}\bx_{t}^{\top}$. Below we abuse notation and view
$\bSigma_*,\wSigma_\Gamma,\bX_{t},\bS$ as vectors in $\R^{d^{2}}$ endowed
with the trace inner product $\left\langle \cdot,\cdot\right\rangle $.
Let $\bX\in\R^{\Gamma\times d^{2}}$ have rows $\left\{ \bX_{t}\right\} ,$
and $y=(y_{1},\ldots,y_{\Gamma})^{\top}\in\R^{\Gamma}$. Suppose $\bx_{t}$
can only take on $M$ possible values from $\left\{ \phi_{1},\ldots,\phi_{M}\right\} ,$
so $\bX_{t}\in\left\{ \Phi_{1},\ldots,\Phi_{M}\right\} $, where $\Phi_{m}:=\phi_{m}\phi_{m}^{\top}$.
Note that for the forced exploration setting we have $M=d < A$.
Moreover, each value appears exactly $\Gamma/M$ times. Then (\ref{eq:prog})
can be rewritten as
\begin{align*}
\min_{\bS\in\R^{d^{2}}}\sum_{m=1}^{M}\sum_{t:\bX_{t}=\Phi_{m}}\left(\left\langle \Phi_{m},\bS\right\rangle -y_{t}\right)^{2} & =\min_{\bS\in\R^{d^{2}}}\sum_{m=1}^{M}\left(\left\langle \Phi_{m},\bS\right\rangle -\frac{1}{\Gamma/M}\sum_{t:\bX_{t}=\Phi_{m}}y_{t}\right)^{2}.
\end{align*}
Let $z_{m}:=\frac{1}{\Gamma/M}\sum_{t:\bX_{t}=\Phi_{m}}y_{t}$. Then it becomes
\[
\min_{\bS\in\R^{d^{2}}}\sum_{m=1}^{M}\left(\left\langle \Phi_{m},\bS\right\rangle -z_{m}\right)^{2}=\min_{\bS\in\R^{d^{2}}}\left\Vert \Phi \bS-z\right\Vert _{2}^{2},
\]
where $\Phi\in\R^{M\times d^{2}}$ has rows $\left\{ \Phi_{m}\right\} ,$
and $z:=(z_{1},\ldots,z_{M})^{\top}\in\R^{M}$. Note that $\left\{ \Phi_{m}\right\} $
may or may not span $\R^{d^{2}}$. Let $\wSigma_\Gamma$ be an optimal
solution to the above problem. Then
\begin{align*}
\left\Vert \Phi(\wSigma_\Gamma-\bSigma_*)\right\Vert _{2}^{2}+\left\Vert \Phi\bSigma_*-z\right\Vert _{2}^{2}+2\left\langle \Phi(\wSigma_\Gamma-\bSigma_*),\Phi\bSigma_*-z\right\rangle
&=\left\Vert \Phi\wSigma_\Gamma-\Phi\bSigma_*+\Phi\bSigma_*-z\right\Vert _{2}^{2}\\
&=\left\Vert \Phi\wSigma_\Gamma-z\right\Vert _{2}^{2}\le\left\Vert \Phi\bSigma_*-z\right\Vert _{2}^{2}.
\end{align*}
Hence, we can show that
\begin{align*}
\left\Vert \Phi(\wSigma_\Gamma-\bSigma_*)\right\Vert _{2}^{2} & \leq -2\left\langle \Phi(\wSigma_\Gamma-\bSigma_*),\Phi\bSigma_*-z\right\rangle \\
& \overset{(a)}{\leq} 2\left\Vert \Phi(\wSigma_\Gamma-\bSigma_*)\right\Vert _{2}\left\Vert \Phi\bSigma_*-z\right\Vert _{2}.
\end{align*}
where $(a)$ follows from the Cauchy-Schwarz inequality. So
\begin{align*}
\left\Vert \Phi(\wSigma_\Gamma-\bSigma_*)\right\Vert _{2}\le2\left\Vert \Phi\bSigma_*-z\right\Vert _{2}.
\end{align*}
Observe that the RHS no longer contains $\wSigma_\Gamma$. Note that the $m$-th entry of $\Phi\bSigma_*-z$ is
\begin{align*}
\left\langle \Phi_{m},\bSigma_*\right\rangle -z_{m}=\phi_{m}^{\top}\bSigma_*\phi_{m}-\frac{1}{\Gamma/M}\sum_{t:\bX_{t}=\Phi_{m}}y_{t}.
\end{align*}
But let $\zeta_\Gamma \coloneqq \wtheta_\Gamma-\btheta_*$, then
\begin{align*}
y_{t} & =\left(\bx_{t}^{\top}\wtheta_\Gamma-r_{t}\right)^{2}\\
& =(\eta_{t}+\bx_{t}^{\top}\zeta_\Gamma)^{2}\\
& =\eta_{t}^{2}+2\eta_{t}\bx_{t}^{\top}\zeta_\Gamma+\left(\bx_{t}^{\top}\zeta_\Gamma\right)^{2}\\
& \overset{}{=}\bx_{t}^{\top}\bSigma_* \bx_{t}+\epsilon_{t}.
\end{align*}
Then we can show that
\begin{align*}
\epsilon_{t}:=y_{t}-\bx_{t}^{\top}\bSigma_* \bx_{t} \overset{}{=} \eta_{t}^{2}-\E\left[\eta_{t}^{2}\right]+2\eta_{t}\bx_{t}^{\top}\zeta_\Gamma+\left(\bx_{t}^{\top}\zeta_\Gamma\right)^{2} \overset{(a)}{\leq}\underbrace{2(\eta_{t}^{2}-\E\left[\eta_{t}^{2}\right])}_{\textbf{Part A}} + \underbrace{2\left(\bx_{t}^{\top}\zeta_\Gamma\right)^{2}}_{\textbf{Part B}}.
\end{align*}
where $(a)$ follows since $(a+b)^2 \leq 2a^2 + 2b^2$. So
\begin{align*}
\left\langle \Phi_{m},\bSigma_*\right\rangle -z_{m}\leq -\frac{1}{\Gamma/M}\sum_{t:\bX_{t}=\Phi_{m}}\epsilon_{t}.
\end{align*}
We can divide $\epsilon_t$ into two parts. Looking into Part A, observe that $\eta^2_t$ is a sub-exponential random variable since $\eta_t\sim\SG(0,\bx_{t}^{\top}\bSigma_* \bx_{t})$. Also let $\nu = \bx_{t}^{\top}\bSigma_* \bx_{t} = O(M^2B^2d^2) \leq c'd^2$ for some constant $c'>0$. From \Cref{conc-lemma-sub-exp}, we have that
\begin{align*}
\Pb&\left(\left\{\eta_{t}^{2} - \E\left[\eta_{t}^{2}\right] > \min\left\{\sqrt{\frac{2 \nu \log (A / \delta)}{\tau}}, \frac{2 \nu^2 \log (A / \delta)}{\tau}\right\}\right\}\right) \\
&\overset{(a)}{\leq} \Pb\left(\left\{\eta_{t}^{2} - \E\left[\eta_{t}^{2}\right] > \min\left\{\sqrt{\frac{2c' d^2 \log (A / \delta)}{\tau}}, \frac{2c'd^2 \log (A / \delta)}{\tau}\right\}\right\}\right) \\
&\leq\exp\left(-\min\left\{\sqrt{\frac{2c' d^2 \log (A / \delta)}{\tau}}, \frac{2c' d^2 \log (A / \delta)}{\tau}\right\}\right)\\
&\overset{(b)}{\leq} \exp\left(-\min\left\{\sqrt{\frac{2c' d^2 \log (A / \delta)}{2c' d^2}}, \frac{2c' d^2 \log (A / \delta)}{2c' d^2 }\right\}\right)
\leq \delta.
\end{align*}
where, $(a)$ follows for some $c'>0$ we have that $\nu = \bx^{\top}\bSigma_*\bx \leq c'd^2$, and observing that
$$
\min\left\{\sqrt{\frac{2\nu \log (A / \delta)}{\tau}}, \frac{2\nu^2 \log (A / \delta)}{\tau}\right\} > \min\left\{\sqrt{\frac{2c' d^2 \log (A / \delta)}{\tau}}, \frac{2c' d^2 \log (A / \delta)}{\tau}\right\}.
$$
and $(b)$ follows for $\tau > c'd^2(c'd^2+1)/2$.
Now for the second part B first recall that $\zeta_\Gamma \coloneqq \wtheta_\Gamma-\btheta_*$. Then using \Cref{lemma:least-square-conc} we can show that
\begin{align*}
\Pb\left((\bx^{\top}\zeta_\Gamma)^2 > \sigma^2\frac{r+\log (A / \delta)}{\Gamma}\right) &\overset{(a)}{=} \Pb\left((\bx^{\top}\zeta_\Gamma)^2 > \frac{d^2 A+d^2\log (A / \delta)}{\Gamma}\right)\\
&\overset{(b)}{\leq} \Pb\left(\left(\bx^{\top}\left(\wtheta_\Gamma - \btheta_*\right)\right)^2 > \frac{2c^{''}A d^2 \log (A / \delta)}{\Gamma}\right) \leq \delta.
\end{align*}
where in $(a)$ we use $\sigma^2 \leq \max_a\sigma^2(a) \leq \max_a\bx(a)^{\top}\bSigma_*\bx(a) \leq c_1 d^2$ for some $c_1>0$, and $r$ is the rank of $\bX^{\top}\bX$, which equals $A$. In $(b)$ we use that for some $c^{''} > 0$ we have $d^2 (r + 1) > 2c^{''}A d^2$. Hence we can show that
\begin{align*}
\Pb\left(\left(\bx^{\top}\zeta_\Gamma\right)^2 > \left( \frac{2c^{''}A d^2 \log (A / \delta)}{\Gamma}\right)\right) \leq \delta.
\end{align*}
Combining all of the steps above we can show that
\begin{align*}
\Pb&\left( \left\langle \Phi_{m},\bSigma_*\right\rangle -z_{m} > -\frac{M}{\Gamma}\sum_{t:\bX_{t}=\Phi_{m}}\left(\frac{2c^{''}A d^2 \log (A / \delta)}{\Gamma}
+ \frac{2c' d^2 \log (A / \delta)}{\Gamma}\right) \right)\\
&\overset{(a)}{\leq} \Pb\left( \left\langle \Phi_{m},\bSigma_*\right\rangle -z_{m} > -\frac{d}{\sqrt{n}}\sum_{t:\bX_{t}=\Phi_{m}}\left(\frac{2C d^2 \log (A / \delta)}{\Gamma}\right) \right)\\
&\overset{(b)}{=} \Pb\left( \left\langle \Phi_{m},\bSigma_*\right\rangle -z_{m} > -\left(\frac{2C d^2 \log (A / \delta)}{\Gamma}\right) \right)\leq 4\delta/A,
\end{align*}
where $(a)$ follows for some constant $C >0$, and $(b)$ follows by setting $\Gamma=\sqrt{n}$ and $M=d < A$ and noting that the $m$-th row consists of $\sqrt{n}/d$ entries. Hence the above implies that
\begin{align*}
\Pb\left(\bx(a)^{\top}\wSigma_\Gamma\bx(a) - \bx(a)^{\top}\bSigma_*\bx(a) \geq \frac{2C d^2 \log (A / \delta)}{\Gamma}\right)\leq 4\delta/A .
\end{align*}
Also note that $\eta_t\sim\SG(0,\bx_{t}^{\top}\bSigma_* \bx_{t})$,
so the same argument yields a two-tailed concentration inequality. It then follows that
\begin{align*}
\Pb\left(\bx(a)^{\top}\wSigma_\Gamma\bx(a) - \bx(a)^{\top}\bSigma_*\bx(a) \leq -\frac{2C d^2 \log (A / \delta)}{\Gamma}\right)\leq 4\delta/A .
\end{align*}
Hence, by a union bound over all $A > d$ actions, we can show that
\begin{align*}
\Pb\left(\exists a: \left|\bx(a)^{\top}\left(\wSigma_\Gamma - \bSigma_*\right)\bx(a)\right| \geq \frac{2C d^2 \log (A / \delta)}{\Gamma}\right)\leq 2A\dfrac{4\delta}{A} = 8\delta .
\end{align*}
\end{proof}
\input{op_conc}
\subsection{Loss of \Cref{alg:linear-bandit}}
\label{app:loss-alg-1}
\begin{customtheorem}{1}
\textbf{(Loss of \Cref{alg:linear-bandit}, formal)}
Let $\wb^{}$ be the empirical \PE design followed by \Cref{alg:linear-bandit}, which samples each action $a$ exactly $\lceil n \wb(a)\rceil$ times. Then the MSE of \Cref{alg:linear-bandit} for $n\geq \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma}$ is bounded as
\begin{align*}
\L_n(\pi,\wb,\wSigma_{\Gamma}) \leq
\underbrace{O\left(\frac{
d^3\lambda_1(\bV)\log n}{n}\right)}_{\substack{\textbf{\PE MSE}\\\textbf{and exploration error}}} + \underbrace{O\left(\frac{d^2\lambda_1(\bV)\log n}{n^{3/2}}\right)}_{\textbf{Approximation error}} + \underbrace{O\left(\frac{1}{n}\right)}_{\textbf{Failure event MSE}}.
\end{align*}
\end{customtheorem}
\begin{proof}
Recall that $\wSigma_\Gamma$ is the empirical co-variance estimate after $\Gamma$ timesteps. Then \Cref{alg:linear-bandit} pulls each action $a \in \A$ exactly $\left\lceil (n-\Gamma) \wb^{}(a)\right\rceil$ times for some $\sqrt{n}>A$ and computes the least squares estimator $\wtheta_n$. Recall that the estimate $\wtheta_n$ only uses the $(n-\Gamma)$ samples collected under $\wb$.
Recall also that we use $\wSigma_{\Gamma}$ as input to the optimization problem \eqref{eq:opt-oracle-sol}, where $\Gamma=\sqrt{n}$.
We first define the good event $\xi_{\delta}(n-\Gamma)$ as follows:
\begin{align*}
\xi_{\delta}(n-\Gamma) \coloneqq \left\{\left(\sum_{a=1}^A \bw(a)^{\top} (\wtheta_{n-\Gamma}-\btheta_*)\right)^2 \leq \min\left\{\sqrt{\frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha)\log (1 / \delta)}{n-\Gamma}}, \frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (1 / \delta)}{n-\Gamma}\right\}\right\}.
\end{align*}
where, $\alpha_0$, and $\alpha$ will be defined later.
Also, define the good variance event as follows:
\begin{align}
\xi^{var}_\delta(\Gamma) \coloneqq \left\{\forall a, \left|\bx(a)^{\top}\left(\wSigma_\Gamma - \bSigma_*\right)\bx(a)\right| < \frac{2C d^2 \log (A / \delta)}{\Gamma}\right\} \label{eq:good-variance-event}
\end{align}
Then we can bound the loss of the \sp\ as follows:
\begin{align}
&\L_n(\pi, \wb,\wSigma_\Gamma) = \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_{n-\Gamma} - \btheta_*\right)\right)^2\right] \nonumber\\
& = \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_{n-\Gamma} - \btheta_*\right)\right)^2\indic{\xi_{\delta}(n-\Gamma)}\indic{\xi^{var}_{\delta}(\Gamma)}\right] + \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_{n-\Gamma} - \btheta_*\right)\right)^2\indic{\xi^c_{\delta}(n-\Gamma)}\right] \nonumber\\
\qquad + \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_{n-\Gamma} - \btheta_*\right)\right)^2\indic{(\xi^{var}_{\delta}(\Gamma))^c}\right]
\label{eq:loss-decomp}
\end{align}
Now we bound the first term of the \eqref{eq:loss-decomp}. Note that using policy-weighted least square estimates we have
\begin{align*}
\wtheta_{n-\Gamma} \overset{(a)}{=}\arg\min_{\btheta}\sum_{t=1}^{n-\Gamma}\frac{\pi^2(I_t)}{\wsigma_\Gamma^{2}(I_{t})}(R(I_t)-\bx(I_t)^\top\btheta)^{2}
\end{align*}
where in $(a)$, $I_t$ is the action sampled at timestep $t$.
Recall that $\mathbf{diag}(\wSigma_\Gamma) = [\wsigma_\Gamma^2(I_1), \wsigma_\Gamma^2(I_2), \ldots, \wsigma_\Gamma^2(I_{n-\Gamma})]$, where $I_1, I_2, \ldots, I_{n-\Gamma}$ are the actions pulled at times $t=1,2,\ldots,n-\Gamma$.
We have that:
\begin{align*}
\wtheta_{n-\Gamma} &= (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\mathbf{R}_n = (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}(\tX_{n-\Gamma}\btheta_* + \eta)\nonumber\\
\wtheta_{n-\Gamma} - \btheta_* &= (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\mathbf{\eta}
\end{align*}
where the noise vector $\eta\sim\SG(0,\bSigma_{n-\Gamma})$ with $\mathbf{diag}(\bSigma_{n-\Gamma}) = [\sigma^2(I_1), \sigma^2(I_2), \ldots, \sigma^2(I_{n-\Gamma})]$.
For any $\bz \coloneqq \sum_a\bw(a) \in \R^d$ we have
\begin{align}
\bz^{\top}(\wtheta_{n-\Gamma} - \btheta_*) = \bz^{\top}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\mathbf{\eta}
\label{eq:thm-loss-0}
\end{align}
It implies from \eqref{eq:thm-loss-0} that
\begin{align}
\left(\bz^{\top}(\wtheta_{n-\Gamma} - \btheta_*)\right)^2 \sim \SE \left(0, \bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\E\left[\mathbf{\eta}\mathbf{\eta}^\top\right] \wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz\right)
\label{eq:loss-thm-1}
\end{align}
where $\SE$ denotes the sub-exponential distribution. Hence to bound the quantity $\left(\bz^{\top}(\wtheta_{n-\Gamma} - \btheta_*)\right)^2$ we need to bound the variance.
We first begin by rewriting the loss function for $n\geq \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma} $ as follows
\begin{align}
\E\left[\left(\bz^{\top}(\wtheta_{n-\Gamma} - \btheta_*)\right)^2\right] &= \bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\E\left[\mathbf{\eta}\mathbf{\eta}^\top\right] \wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz\nonumber\\
&\overset{(a)}{=} \bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\bSigma_n\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz\nonumber\\
&\overset{}{=} \bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-\frac{1}{2}}\wSigma_{\Gamma}^{-\frac{1}{2}}\bSigma_n\wSigma_{\Gamma}^{-\frac{1}{2}}\wSigma_{\Gamma}^{-\frac{1}{2}}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz\nonumber\\
&\overset{(b)}{=} \underbrace{\bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-\frac{1}{2}}}_{\mathbf{m}^\top\in\R^{n-\Gamma}}\wSigma_{\Gamma}^{-\frac{1}{2}}\bSigma_n\wSigma_{\Gamma}^{-\frac{1}{2}}\underbrace{\wSigma_{\Gamma}^{-\frac{1}{2}}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz}_{\mathbf{m}\in\R^{n-\Gamma}}\nonumber\\
&\overset{(c)}{\leq} \bz^\top(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1/2}\left(\left(1+2C_\Gamma(\delta)\right)\bI_n\right)\wSigma_{\Gamma}^{-1/2}\tX_{n-\Gamma}(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz\nonumber\\
&\overset{(d)}{=} \left(1+2C_\Gamma(\delta)\right)\bz^\top (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz \label{eq:upper-bound-thm-1}
\end{align}
where $(a)$ follows as $\E\left[\mathbf{\eta}\mathbf{\eta}^\top\right] = \bSigma_n$, and in $(b)$ $\mathbf{m}$ is a vector in $\R^{n-\Gamma}$. The $(c)$ follows by first observing that
\begin{align*}
\wSigma_{\Gamma}^{-\frac{1}{2}}\bSigma_n\wSigma_{\Gamma}^{-\frac{1}{2}} \overset{}{=} \wSigma_{\Gamma}^{-1}\bSigma_n \overset{}{=} \mathbf{diag}(\wSigma_{\Gamma}^{-1}\bSigma_n) = \left[\dfrac{\sigma^2(I_1)}{\wsigma^2_\Gamma(I_1)}, \dfrac{\sigma^2(I_2)}{\wsigma^2_\Gamma(I_2)}, \ldots, \dfrac{\sigma^2(I_{n-\Gamma})}{\wsigma^2_\Gamma(I_{n-\Gamma})}\right].
\end{align*}
Then note that using \Cref{corollary:multiplicative-bound} we have
$$\frac{\sigma^2(I_t)}{\wsigma^2_\Gamma(I_t)} \leq 1 + 2\cdot\underbrace{\frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma}}_{\textbf{$C_\Gamma(\delta)$}}$$
for each $t\in[n-\Gamma]$, and $(d)$ follows as $1+2C_\Gamma(\delta)$ is not a random variable.
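The multiplicative bound used in step $(c)$ is easy to confirm numerically. The sketch below draws synthetic variances and perturbations satisfying $|\wsigma^2(a) - \sigma^2(a)| \le \epsilon \le \sigma^2_{\min}/2$ and checks the ratio bound:

```python
import numpy as np

rng = np.random.default_rng(3)
sig2 = rng.uniform(0.5, 3.0, size=10_000)        # true variances sigma^2(a)
sig2_min = sig2.min()
eps = sig2_min / 2                               # plays the role of C_Gamma(delta)
sighat2 = sig2 + rng.uniform(-eps, eps, size=sig2.shape)

# sigma^2 / sighat^2 <= 1 + 2*eps / sig2_min whenever eps <= sig2_min / 2
ratio = sig2 / sighat2
print(bool(np.all(ratio <= 1 + 2 * eps / sig2_min)))  # True
```

The inequality holds deterministically: $\sigma^2/\wsigma^2 \le \sigma^2/(\sigma^2-\epsilon) \le 1 + \epsilon/(\sigma^2_{\min}-\epsilon) \le 1 + 2\epsilon/\sigma^2_{\min}$ once $\epsilon \le \sigma^2_{\min}/2$.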
Let $\wb^{*}$ be the empirical \PE design returned by the approximator after it is supplied with $\wSigma_\Gamma$.
Now observe that the Gram matrix of the samples collected (following $\wb^{*}$) after exploration satisfies:
\begin{align*}
\left(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}\right)^{-1} = \left(\sum_a\left\lceil(n-\Gamma)\wb^*(a)\wsigma^{-2}_\Gamma(a)\right\rceil\bw(a)\bw(a)^{\top}\right)^{-1}
= \dfrac{1}{n-\Gamma}\bA_{\wb^*,\wSigma_\Gamma}^{-1}.
\end{align*}
Hence we use the loss function
\begin{align*}
\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) \coloneqq
\left(1+2C_\Gamma(\delta)\right)\bz^\top (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz = \frac{\left(1+2C_\Gamma(\delta)\right)}{n - \Gamma}\sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a').
\end{align*}
Also recall that we define
\begin{align*}
\L^*_n(\pi,\bb^*,\wSigma_\Gamma) = \dfrac{1}{n}\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a').
\end{align*}
So minimizing the quantity $\E\left[\left(\sum_a\bw(a)^{\top}(\wtheta_{n-\Gamma} - \btheta_*)\right)^2\right]$ amounts to minimizing the quantity $ \frac{\left(1+2C_\Gamma(\delta)\right)}{n - \Gamma}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\wb^*, \wSigma_\Gamma}\bw(a')$.
Further recall from \Cref{assm:oracle-approx} (approximation oracle) and the Kiefer-Wolfowitz theorem in \Cref{corollary:kiefer} that for the proportion $\bb^*$ and any arbitrary positive semi-definite matrix $\wSigma_\Gamma$ the following holds
\begin{align}
\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw(a') = \Tr\left(\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw(a') \right) = \Tr\bigg(\bA^{-1}_{\bb^*, \wSigma_\Gamma}\underbrace{\sum_{a,a'}\bw(a)\bw(a')^{\top}}_{\bV} \bigg) = \Tr\left(\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bV\right) \leq d\lambda_1(\bV).\label{eq:kiefer-bound}
\end{align}
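The trace rewriting above rests on the scalar identity $\sum_{a,a'}\bw(a)^{\top}\bM\bw(a') = \Tr(\bM\bV)$ with $\bV = \sum_{a,a'}\bw(a)\bw(a')^{\top}$, which holds for any matrix $\bM$ (here $\bM = \bA^{-1}_{\bb^*,\wSigma_\Gamma}$); a quick check with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
d, A = 3, 4
W = rng.normal(size=(A, d))                  # rows play the role of w(a)
B = rng.normal(size=(d, d))
Ainv = np.linalg.inv(B @ B.T + np.eye(d))    # any symmetric PD matrix works

# Scalar double sum versus the trace form with V = sum_{a,a'} w(a) w(a')^T
lhs = sum(W[a] @ Ainv @ W[ap] for a in range(A) for ap in range(A))
V = sum(np.outer(W[a], W[ap]) for a in range(A) for ap in range(A))
rhs = np.trace(Ainv @ V)
print(np.isclose(lhs, rhs))  # True
```

Since $\bV = \bs\bs^{\top}$ with $\bs = \sum_a \bw(a)$, the identity is just $\bs^{\top}\bM\bs = \Tr(\bM\bs\bs^{\top})$.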
Then we can decompose the loss as follows:
\begin{align}
\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma)
&= \L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) + \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma)
\nonumber\\
&= \underbrace{\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma)}_{\textbf{Approximation error}} + \underbrace{\L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\wSigma_\Gamma)}_{\textbf{Comparing two diff loss}} + \L^*_n(\pi,\bb^*,\wSigma_\Gamma) \label{eq:all-parts}
\end{align}
For the approximation error we need access to an oracle (see \Cref{assm:oracle-approx}) that returns an $\epsilon$-approximate solution.
Then setting $\epsilon=\frac{1}{\sqrt{n}}$ we have that
\begin{align}
\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) &= \frac{\left(1+2C_\Gamma(\delta)\right)}{n-\Gamma}\underbrace{\Tr\left( \sum_{a,a'}\bw(a)^\top\bA_{\wb, \wSigma_\Gamma}^{-1}\bw(a') - \sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a')\right)}_{\epsilon} \nonumber\\
&\overset{(a)}{\leq}
O\left(\dfrac{d^2\log(A/\delta)}{n^{3/2}}\right). \label{eq:approx-loss}
\end{align}
where, $(a)$ follows by setting $\Gamma = \sqrt{n}$, $\epsilon = 1/\sqrt{n}$ and $C_\Gamma(\delta) = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma} = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\sqrt{n}}$.
Let us define $\bK_1 \coloneqq \Tr(\sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a'))$, and $\bK_2\coloneqq \Tr(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a'))$.
For the second part of comparing the two losses we can show that
\begin{align}
\L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_{n}(\pi,\bb^*,\wSigma_\Gamma) &= \frac{\left(1+2C_\Gamma(\delta)\right)\bK_1}{n - \Gamma} - \dfrac{\bK_2}{n} \nonumber\\
&= \dfrac{(1+2C_\Gamma(\delta))\bK_1}{n-\Gamma} - \dfrac{(1+2C_\Gamma(\delta))\bK_2}{n-\Gamma} + \dfrac{(1+2C_\Gamma(\delta)) \bK_2}{n-\Gamma}-\dfrac{1}{n}\bK_2\nonumber\\
&= \dfrac{(1+2C_\Gamma(\delta))}{n-\Gamma}\left(\bK_1 - \bK_2\right) + \dfrac{2C_\Gamma(\delta) \bK_2}{n-\Gamma} + \dfrac{1}{n-\Gamma}\bK_2 -\dfrac{1}{n}\bK_2\nonumber\\
&\overset{(a)}{=} \frac{1+2C_\Gamma(\delta)}{n - \Gamma}\underbrace{\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a') - \sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right)}_{\leq 0} \nonumber\\
&\qquad + \frac{2C_\Gamma(\delta)}{n - \Gamma}\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right) + \dfrac{\Gamma}{n(n-\Gamma)}\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right)\nonumber\\
&\overset{(b)}{\leq} O\left(\dfrac{d^3\lambda_1(\bV)\log(A/\delta)}{n^{3/2}}\right)
\label{eq:comparing-two-loss}
\end{align}
where $(a)$ follows by substituting the definitions of $\bK_1$ and $\bK_2$. The $(b)$ follows by setting $\Gamma = \sqrt{n}$, $C_\Gamma(\delta) = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma} = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\sqrt{n}}$, and $\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right) \leq d\lambda_1(\bV)$.
Now we combine all parts together in \eqref{eq:all-parts} using \eqref{eq:kiefer-bound}, \eqref{eq:approx-loss} and \eqref{eq:comparing-two-loss}. First we define the quantity
$$
\alpha \coloneqq 2C_\Gamma(\delta)\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right) + \frac{\Gamma}{n}\Tr\left(\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a')\right).
$$
It follows then that \eqref{eq:all-parts} can be written as
\begin{align}
\dfrac{1 + 2 C_\Gamma(\delta)}{n - \Gamma}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\wb, \wSigma_\Gamma}\bw(a') &\leq \underbrace{\dfrac{(1+2C_{\Gamma}(\delta))\epsilon}{(n-\Gamma)}}_{\textbf{Approximation error}} + \dfrac{\alpha}{n-\Gamma} + \dfrac{1}{n}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw(a')\nonumber\\
\implies (1 + 2 C_\Gamma(\delta))\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\wb, \wSigma_\Gamma}\bw(a')
&\leq \underbrace{(1+2C_{\Gamma}(\delta))\epsilon}_{\alpha_0} + \alpha + \dfrac{n-\Gamma}{n}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw(a')\nonumber\\
&\overset{(a)}{\leq} \alpha_0 + \alpha + d\lambda_1(\bV) \label{eq:thm-oracle-2}
\end{align}
where, $(a)$ follows from \Cref{assm:oracle-approx}, \Cref{corollary:kiefer} and \eqref{eq:kiefer-bound}. Also observe that from \eqref{eq:loss-thm-1} we have that $\big(\sum_{a=1}^A \bw(a)^\top (\wtheta_n-\btheta_*)\big)^2$ is a sub-exponential random variable.
Then using the sub-exponential concentration inequality we have with probability at least $1-\delta$
\begin{align*}
\left(\sum_{a=1}^A \bw(a)^\top (\wtheta_{n-\Gamma}-\btheta_*)\right)^2 &\leq \min\bigg\{\sqrt{(1+2C_\Gamma(\delta))\sum_{a,a'}\bw(a)^{\top}\left(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}\right)^{-1}\bw(a')2 \log (1 / \delta)},\\
&\qquad (1+2C_{\Gamma}(\delta))\sum_{a,a'}\bw(a)^{\top}\left(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}\right)^{-1}\bw(a')2 \log (1 / \delta)\bigg\}\\
&= \min\bigg\{\frac{1}{\sqrt{n-\Gamma}}\sqrt{(1+2C_\Gamma(\delta))\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb,\wSigma_\Gamma}\bw(a') 2 \log (1 / \delta)},\\
&\qquad\frac{(1+2C_\Gamma(\delta))}{n-\Gamma}\sum_{a,a'}\bw(a)^{\top}\bA^{-1}_{\bb,\wSigma_\Gamma}\bw(a') 2 \log (1 / \delta)\bigg\}\\
&\overset{(a)}{\leq} \min\left\{\sqrt{\frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha ) \log (1 / \delta)}{n-\Gamma}}, \frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (1 / \delta)}{n-\Gamma}\right\}
\end{align*}
where $(a)$ follows from \eqref{eq:thm-oracle-2},
and from the fact that at most $n-\Gamma$ pulls are taken to estimate $\wtheta_{n-\Gamma}$ after the forced exploration phase, with $\sqrt{n} > d$.
Thus, for any $\delta \in(0,1)$ we have
\begin{align}
\mathbb{P}\left(\left\{\left(\sum_{a=1}^A \bw(a)^{\top} (\wtheta_n-\btheta_*)\right)^2 > \min\left\{\sqrt{\frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (1 / \delta)}{n-\Gamma}}, \frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (1 / \delta)}{n-\Gamma}\right\}\right\}\right) \leq \delta. \label{eq:loss-bandit-explore}
\end{align}
This gives us a bound on the first term of \eqref{eq:loss-decomp}.
Combining everything in \eqref{eq:loss-decomp} we can bound the loss of the \sp\ as follows:
\begin{align*}
&\bL_n(\pi, \wb,\wSigma_\Gamma)
%
\leq \E_{\D}\left[ \left(\sum_{a=1}^A\bw(a)^{\top}\left(\wtheta_{n-\Gamma} - \btheta_*\right)\right)^2\indic{\xi_{\delta}(n-\Gamma)}\indic{\xi^{var}_{\delta}(\Gamma)}\right] + \sum_{t=1}^n A H_U^2B^2\Pb(\xi^c_\delta(n-\Gamma)) \\
& \qquad\qquad + \sum_{t=1}^n A H_U^2B^2\Pb\left(\left(\xi^{var}_\delta(\Gamma)\right)^c\right)\\
&\leq \min\left\{\frac{2C d^2 \log (A / \delta)}{\Gamma}, \sqrt{\frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (A / \delta)}{n-\Gamma}}, \frac{(8 d\lambda_1(\bV) + \alpha_0 + \alpha) \log (A / \delta)}{n-\Gamma}\right\} \\
&\qquad + \sum_{t=1}^n A H_U^2B^2\Pb(\xi^c_\delta(n-\Gamma)) + \sum_{t=1}^n A H_U^2B^2\Pb\left(\left(\xi^{var}_\delta(\Gamma)\right)^c\right)
\end{align*}
\begin{align*}
&\overset{(a)}{\leq} \min\left\{\frac{8C d^2 \log (n A)}{\sqrt{n}}, \sqrt{\frac{48 (d\lambda_1(\bV) + \alpha_0 + \alpha) \log (n A)}{n}}, \frac{48(d\lambda_1(\bV) + \alpha_0 + \alpha) \log (n A)}{n}\right\} + O\left(\frac{1}{n}\right)\\
&\overset{}{\leq} \frac{48 d^2\lambda_1(\bV)
\log (n A)}{n} + \frac{48 \alpha
\log (n A)}{n} + \frac{48\alpha_0
\log (n A)}{n} + O\left(\frac{1}{n}\right)\\
&\overset{(b)}{\leq} \frac{48 d^2\lambda_1(\bV)
\log (n A)}{n} + \frac{144 d\lambda_1(\bV) C_\Gamma(\delta)
\log (n A)}{n} + \frac{48 d\lambda_1(\bV)\Gamma
\log (n A)}{n^{3/2}} + \frac{48 \epsilon
\log (n A)}{n^{}} + O\left(\frac{1}{n}\right)
\end{align*}
where $(a)$ follows from \Cref{prop:loss-bandit-tracker} by setting $\delta=1/n^3$ and noting that $\sqrt{n}>d$, and $(b)$ follows by substituting $\alpha_0 = (1+2C_{\Gamma}(\delta))\epsilon$ and the definition of $\alpha$. Recall that for $\Gamma = \sqrt{n}$ we have that $C_\Gamma(\delta) = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma} = \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\sqrt{n}}$.
Then, setting $\epsilon=1/\sqrt{n}$, we can bound the loss of the \PE design $\wb$ as
\begin{align*}
\bL_n(\pi, \wb, \wSigma_{\Gamma})
&\overset{}{\leq} O\left(\frac{ d^3
\lambda_1(\bV)\log (n A)}{n}\right) + O\left(\frac{
d^2\lambda_1(\bV)\log (n A)}{n^{3/2}}\right) + O\left(\frac{1}{n}\right).
\end{align*}
\end{proof}
\subsection{Regret of \Cref{alg:linear-bandit}}
\label{app:regret-linear-bandit}
\input{theorem1}
\section{Kiefer-Wolfowitz for Heteroscedastic setting}
\begin{customproposition}{4}\textbf{(Kiefer-Wolfowitz for \PE)}
Define the heteroscedastic design matrix as $\bA_{\bb,\bSigma} = \sum_{a=1}^A \bb(a)\tx(a)\tx(a)^\top$.
Assume that $\mathcal{A} \subset \mathbb{R}^{d}$ is compact and $\operatorname{span}(\mathcal{A})=\mathbb{R}^{d}$. Then the following are equivalent:
\begin{enumerate}[(a)]
\item $\bb^{*}$ is a minimiser of $\tg(\bb,\bSigma) = \Tr\left( \bA_{\bb,\bSigma}^{-1}\right)$.
\item $\bb^{*}$ is a maximiser of $ f(\bb,\bSigma)=\log \operatorname{det}\left( \bA_{\bb,\bSigma} \right)$.
\item $\tg\left(\bb^{*},\bSigma\right)=d$.
\end{enumerate}
Furthermore, there exists a minimiser $\bb^{*}$ of $\tg(\bb,\bSigma)$ such that $\left|\operatorname{Supp}\left(\bb^{*}\right)\right| \leq d(d+1) / 2$.
\end{customproposition}
\begin{proof}
We follow the proof technique of \citet{lattimore2020bandit}.
Let $\bb: \mathcal{A} \rightarrow[0,1]$ be a distribution on $\mathcal{A}$ so that $\sum_{a \in \mathcal{A}} \bb(a)=1$, and let $\bA_{\bb,\bSigma} \in \mathbb{R}^{d \times d}$ be given by
\begin{align*}
\bA_{\bb,\bSigma}&=\sum_{a=1}^A \bb(a)\pi^2(a)\sigma^{-2}(a)\ \bx(a) \bx(a)^{\top} = \sum_{a=1}^A \bb(a) \dfrac{\pi(a)\bx(a)}{\sigma(a)} \left(\dfrac{\pi(a)\bx(a)}{\sigma(a)}\right)^{\top}
\end{align*}
where the second equality follows by setting $\tx(a) = \pi(a)\bx(a)/\sigma(a)$.
First recall that for a square matrix $\bA$ let adj $(\bA)$ be the transpose of the cofactor matrix of $\bA$. Use the facts that the inverse of a matrix $\bA$ is $\bA^{-1}=\operatorname{adj}(\bA)^{\top} / \operatorname{det}(\bA)$ and that if $\bA: \mathbb{R} \rightarrow \mathbb{R}^{d \times d}$, then
\begin{align*}
\frac{d}{d t} \operatorname{det}(\bA(t))=\Tr\left(\operatorname{adj}(\bA) \frac{d}{d t} \bA(t)\right).
\end{align*}
It follows then that
\begin{align*}
\nabla f(\bb,\bSigma)_{\bb(a)}&\overset{(a)}{=}\frac{\Tr\left(\operatorname{adj}(\bA_{\bb,\bSigma}) \tx(a) \tx(a)^{\top}\right)}{\operatorname{det}(\bA_{\bb,\bSigma})}\\
&=\frac{\tx(a)^{\top} \operatorname{adj}(\bA_{\bb,\bSigma}) \tx(a)}{\operatorname{det}(\bA_{\bb,\bSigma})}
\overset{(b)}{=}\tx(a)^{\top} \bA_{\bb,\bSigma}^{-1} \tx(a)
\end{align*}
where in $(a)$ we compute the component of the gradient of $f$ corresponding to differentiation w.r.t.\ $\bb(a)$, and $(b)$ follows as $\frac{\operatorname{adj}(\bA_{\bb,\bSigma})}{\operatorname{det}(\bA_{\bb,\bSigma})} = \bA_{\bb,\bSigma}^{-1}$.
Also observe that
\begin{align}
&\left(\sum_{a=1}^A \bb(a)\|\tx(a)\|^2_{\bA_{\bb,\bSigma}^{-1}}\right)=\Tr\left(\sum_{a=1}^A \bb(a) \tx(a) \tx(a)^{\top} \bA_{\bb,\bSigma}^{-1}\right) = d
\label{eq:I-D-1}
\end{align}
Hence, by \eqref{eq:I-D-1}, $\tg(\bb)$ is lower bounded by $d$ for every $\bb$, since on average $\left(\sum_{a=1}^A \bb(a)\|\tx(a)\|^2_{\bA_{\bb,\bSigma}^{-1}}\right) = d$.
(b) $\Rightarrow$ (a): Suppose that $\bb^{*}$ is a maximiser of $f$. By the first-order optimality criterion, for any $\bb$ distribution on $\mathcal{A}$,
\begin{align*}
0 & \geq\left\langle\nabla f\left(\bb^{*},\bSigma\right), \bb-\bb^{*}\right\rangle \\
&\geq\left(\sum_{a=1}^A \bb(a)\|\tx(a)\|^2_{\bA_{\bb^{*},\bSigma}^{-1}}-\sum_{a=1}^A \bb^{*}(a)\|\tx(a)\|^2_{\bA_{\bb^{*},\bSigma}^{-1}}\right) \\
&\geq\left(\sum_{a =1}^A \bb(a)\|\tx(a)\|_{\bA_{\bb^{*},\bSigma}^{-1}}^{2}-d\right) .
\end{align*}
For an arbitrary $a \in \mathcal{A}$, choosing $\bb$ to be the Dirac at $a \in \mathcal{A}$ proves that $\|\tx(a)\|^2_{\bA_{\bb^{*},\bSigma}^{-1}} \leq d$ for every $a\in\mathcal{A}$.
Since $\tg(\bb) \geq d$ for all $\bb$ by \eqref{eq:I-D-1}, it follows that $\bb^{*}$ is a minimiser of $\tg$ and that $\min_{\bb} \tg(\bb)=d$.
(c) $\Longrightarrow$ (b): Suppose that $\tg\left(\bb^{*}\right)=d$. Then, for any $\bb$,
\begin{align*}
\left\langle\nabla f\left(\bb^{*},\bSigma\right), \bb-\bb^{*}\right\rangle=\left(\sum_{a=1}^A \bb(a)\|\tx(a)\|^2_{\bA_{\bb^{*},\bSigma}^{-1}}-d\right) \leq 0 .
\end{align*}
And it follows that $\bb^{*}$ is a maximiser of $f$ by the first-order optimality conditions and the concavity of $f$. This can be shown as follows:
Let $\bb$ be a Dirac at $a$ and $\bb(t)=\bb^{*}+t\left(\bb^{*}-\bb\right)$. Since $\bb^{*}(a)>0$ it follows for sufficiently small $t>0$ that $\bb(t)$ is a distribution over $\mathcal{A}$. Because $\bb^{*}$ is a maximiser of $f$,
\begin{align*}
0 \geq\left.\frac{d}{d t} f(\bb(t),\bSigma)\right|_{t=0}=\left\langle\nabla f\left(\bb^{*},\bSigma\right), \bb^{*}-\bb\right\rangle=d-\|\tx(a)\|^2_{\bA_{\bb^{*},\bSigma}^{-1}}.
\end{align*}
We now show (a) $\Longrightarrow$ (c). To prove the second part of the theorem, let $\bb^{*}$ be a minimiser of $\tg$, which by the previous part is a maximiser of $f$. Let $S=\operatorname{Supp}\left(\bb^{*}\right)$, and suppose that $|S|>d(d+1) / 2$. Since the dimension of the subspace of $d \times d$ symmetric matrices is $d(d+1) / 2$, there must be a non-zero function $v: \mathcal{A} \rightarrow \mathbb{R}$ with $\operatorname{Supp}(v) \subseteq S$ such that
\begin{align}
\sum_{a \in S} v(a) \tx(a) \tx(a)^{\top}=\mathbf{0} \label{eq:equality-1}.
\end{align}
Notice that for any $\tx(a) \in S$, the first-order optimality conditions ensure that $\sum_{a=1}^A\|\tx(a)\|_{\bA_{\bb^{*},\bSigma}^{-1}}^{2}=d$. Hence
\begin{align*}
d \sum_{a \in S} v(a)=\sum_{a \in S} v(a)\|\tx(a)\|_{\bA_{\bb^{*},\bSigma}^{-1}}^{2}=0,
\end{align*}
where the last equality follows from \eqref{eq:equality-1}. Let $\bb(t)=\bb^{*}+t v$ and let $\tau=\max \left\{t>0: \bb(t) \in \mathcal{P}_{\mathcal{A}}\right\}$, which exists since $v \neq 0$ and $\sum_{a \in S} v(a)=0$ and $\operatorname{Supp}(v) \subseteq S$. By \eqref{eq:equality-1}, $\bA_{\bb(t),\bSigma}=\bA_{\bb^{*},\bSigma}$, and hence $ f(\bb(\tau),\bSigma)= f\left(\bb^{*},\bSigma\right)$, which means that $\bb(\tau)$ also maximises $f$. The claim follows by checking that $|\operatorname{Supp}(\bb(\tau))|<\left|\operatorname{Supp}\left(\bb^{*}\right)\right|$ and then using induction.
\end{proof}
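As a quick numerical sanity check of these optimality conditions (not part of the paper's argument), one can run the classical multiplicative update for D-optimal design on randomly generated whitened features $\tx(a)=\pi(a)\bx(a)/\sigma(a)$. The identity $\sum_a \bb(a)\|\tx(a)\|^2_{\bA_{\bb,\bSigma}^{-1}}=d$ of \eqref{eq:I-D-1} holds for every design, while $\max_a\|\tx(a)\|^2_{\bA_{\bb,\bSigma}^{-1}}$ approaches $d$ only at the optimum. The problem sizes and the update rule below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
d, A = 3, 20
X = rng.normal(size=(A, d))                  # raw features x(a)
pi = rng.dirichlet(np.ones(A))               # target policy pi(a)
sigma = rng.uniform(0.5, 2.0, size=A)        # noise std sigma(a)
V = (pi / sigma)[:, None] * X                # whitened features tx(a) = pi(a) x(a) / sigma(a)

b = np.ones(A) / A                           # start from the uniform design
for _ in range(10000):
    Ainv = np.linalg.inv(V.T @ (b[:, None] * V))      # A_{b,Sigma}^{-1}
    lev = np.einsum("ad,de,ae->a", V, Ainv, V)        # ||tx(a)||^2 in the A^{-1} norm
    b = b * lev / d                                   # multiplicative D-optimal update
    b = b / b.sum()

Ainv = np.linalg.inv(V.T @ (b[:, None] * V))
lev = np.einsum("ad,de,ae->a", V, Ainv, V)
avg_lev = float(b @ lev)                     # = Tr(A_b A_b^{-1}) = d for every design
max_lev = float(lev.max())                   # approaches d at the D-optimal design
```

The multiplicative update is one standard way to approximate the D-optimal design; any solver for the concave problem $\max_\bb \log\det \bA_{\bb,\bSigma}$ would serve the same purpose.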
\begin{customcorollary}{1}
From \Cref{prop:kiefer-wolfowitz} we know that $\bb^*$ is a minimiser of $\Tr(\bA^{-1}_{\bb,\bSigma})$ and that $\Tr(\bA^{-1}_{\bb^*,\bSigma}) = d$. This implies that the loss at $\bb^*$ is bounded as $\frac{\lambda_d(\bV) d}{n} \leq \L_n(\pi, \bb^*, \bSigma) \leq \frac{\lambda_1(\bV) d}{n}$, where $\bV = \sum_{a,a'}\bw(a)\bw(a')^\top$.
\end{customcorollary}
\begin{proof}
First recall that we can rewrite the loss for any arbitrary proportion $\bb$ and co-variance $\bSigma$ as
\begin{align*}
\L_n(\pi, \bb, \bSigma) = \frac{1}{n} \left(\sum_{a,a'}\bw(a)^\top\bA_{\bb,\bSigma}^{-1}\bw(a')\right) = \frac{1}{n} \Tr\left(\bA_{\bb,\bSigma}^{-1}\sum_{a,a'}\bw(a)\bw(a')^\top\right) = \frac{1}{n} \Tr\left(\bA_{\bb,\bSigma}^{-1}\bV\right).
\end{align*}
From \citep{fang1994inequalities} we know that for any positive semi-definite matrices $\bA^{-1}_{\bb,\bSigma}$ and $\bV$ we have that
\begin{align*}
\lambda_d(\bV) \Tr(\bA^{-1}_{\bb,\bSigma}) \leq \Tr(\bV\bA^{-1}_{\bb,\bSigma}) \leq \lambda_1(\bV) \Tr(\bA^{-1}_{\bb,\bSigma})
\end{align*}
where $\lambda_i(\bV)$ is the $i$-th largest eigenvalue of $\bV$. Now from \Cref{prop:kiefer-wolfowitz} we know that $\bb^*$ is a minimiser of $\Tr(\bA^{-1}_{\bb,\bSigma})$ and that $\Tr(\bA^{-1}_{\bb^*,\bSigma}) = d$. This implies that the loss at $\bb^*$ is bounded as
\begin{align*}
&\lambda_d(\bV) \Tr(\bA^{-1}_{\bb^*,\bSigma}) \leq \Tr(\bV\bA^{-1}_{\bb^*,\bSigma}) \leq \lambda_1(\bV) \Tr(\bA^{-1}_{\bb^*,\bSigma})\\
\implies \frac{\lambda_d(\bV) d}{n} \leq \L_n(\pi, \bb^*, \bSigma) \leq \frac{\lambda_1(\bV) d}{n}.
\end{align*}
\end{proof}
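The trace inequality from \citet{fang1994inequalities} used above is easy to check numerically. The sketch below (not part of the paper) uses a random positive-definite stand-in for $\bA^{-1}_{\bb^*,\bSigma}$ and builds $\bV=\sum_{a,a'}\bw(a)\bw(a')^\top$ from random vectors; note that this $\bV$ is rank one, so $\lambda_d(\bV)=0$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, A = 5, 12
M = rng.normal(size=(d, d))
Ainv = M @ M.T + np.eye(d)                   # positive-definite stand-in for A^{-1}
w = rng.normal(size=(A, d))                  # rows play the role of w(a)
z = w.sum(axis=0)
Vmat = np.outer(z, z)                        # V = sum_{a,a'} w(a) w(a')^T (rank one)
lam = np.linalg.eigvalsh(Vmat)               # lam[0] = lambda_d(V), lam[-1] = lambda_1(V)
mid = float(np.trace(Vmat @ Ainv))           # Tr(V A^{-1})
direct = float(z @ Ainv @ z)                 # = sum_{a,a'} w(a)^T A^{-1} w(a')
lo = lam[0] * np.trace(Ainv)                 # lambda_d(V) Tr(A^{-1})
hi = lam[-1] * np.trace(Ainv)                # lambda_1(V) Tr(A^{-1})
```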
\subsection{Bound on the $\L_n(\pi, \bb^*, \bSigma)$}
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Preliminaries}\label{sec:prelims}
\input{prelims}
\section{Related Work}\label{sec:related}
\input{related_1}
\vspace*{-0.5em}
\section{Definition of the Loss}
\vspace*{-0.3em}
\label{sec:loss-def}
\input{loss_definition}
\section{Loss of the Oracle}
\label{sec:oracle}
\input{oracle}
\vspace*{-1em}
\section{Agnostic Algorithm and Regret Bound}
\vspace*{-0.3em}
\label{sec:speed}
\input{regret}
\vspace*{-1em}
\section{Experiments}
\vspace*{-0.8em}
\label{sec:expts}
\input{expt}
\vspace*{-1.1em}
\section{Conclusions and Future Directions}
\label{sec:conclusions}
\vspace*{-0.7em}
\input{conclusions}
\subsection{Optimal Oracle Policy}
\subsection{Notation}
We define $[n]\coloneqq \{1,2,\ldots,n\}$. The setting consists of $A$ actions, indexed by $a\in[A]$, each associated with a feature vector $\bx(a)\in\R^d$, with $d^2 \ll A$.
Denote by $\Delta(\A)$ the probability simplex over the action space $\A$, and define a policy $\pi$ as a mapping $\pi: \A \rightarrow [0,1]$ such that $\sum_a \pi(a) = 1$, so that $\pi\in\Delta(\A)$. We denote the total available budget by $n$.
The value of a policy $\pi$ is defined as $v(\pi) \coloneqq \E[R_t]$, and the expectation is taken over $a_{t}\!\sim\!\pi,R_{t}\!\sim\! R(a_{t})$.
Finally, recall that in the policy evaluation problem, we are given a fixed, target policy $\pi$ and asked to estimate $v(\pi)$.
Estimating $v(\pi)$ requires a dataset of actions and their associated rewards, $\D \coloneqq \{(a_1, r_1,...,a_{n},r_{n})\}$, which is collected by executing some policy.
We refer to the policy that collects $\D$ as the \textit{behavior policy}, denoted by $\bb\in\triangle(\A)$.
We then define the value estimate of a policy $\pi$ as $Y_n$, where $n$ is the sample budget. The exact nature of the value estimate for the linear bandit setting will be made clear in \Cref{sec:oracle}. Our goal is to choose a behavior policy that minimizes the mean squared error (MSE) defined as
$\E_{\mathcal{D}}\Big[\big(Y_n - v(\pi)\big)^{2}\Big]$,
where the expectation is over the collected data set $\D$.
We study the linear bandit setting where the expected reward for each action is assumed to be a linear function~\citep{mason2021nearly, jamieson2022interactive}.
Specifically, at each round $t$, the selected action $a_t$ is associated with a feature vector $\bx(a_t) \in \R^d$, and the rewards and transitions satisfy:
$R_{t}(a_t) = \bx(a_t)^{\top}\btheta_* + \eta$,
where $\btheta_*\in\R^d$ is the \textit{unknown} reward parameter, and $\eta$ is zero-mean noise with variance $\sigma^2(a)$.
We assume that the variance $\sigma^2(a)$ has a lower-dimensional structure such that $\sigma^2(a) = \bx(a)^{\top}\bSigma_*\bx(a)$ where $\bSigma_* \in \R^{d\times d}$ is an \emph{unknown} variance parameter, and we further assume that $\eta$ is bounded between $-B$ and $B$. Observe that the variance depends on the action features; this is called the heteroscedastic noise model \citep{greene2002000, chaudhuri2017active}.
However, \citet{chaudhuri2017active} only consider the special case of our setting in which $\bSigma_*$ is the identity, so that $\sigma^2(a) = \bx(a)^{\top}\bx(a)$.
We also assume that the norms of the features are bounded such that $H^2_L \leq \|\bx(a)\|^2\leq H^2_U$ for all $a\in\A$.
In our heteroscedastic linear bandit setting selecting any action gives information about $\btheta_*$ and also gives information about the noise covariance matrix $\bSigma_*$.
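To make the objective concrete, the following sketch (not from the paper; it uses illustrative sizes, a uniform behavior policy, homoscedastic noise, and a plain OLS plug-in estimate rather than the weighted estimator studied later) Monte-Carlo-estimates the MSE of the value estimate and shows it shrinking with the budget $n$:

```python
import numpy as np

rng = np.random.default_rng(3)
d, A = 3, 8
X = rng.normal(size=(A, d))                  # feature vectors x(a)
theta = rng.normal(size=d)                   # unknown reward parameter theta_*
pi = rng.dirichlet(np.ones(A))               # target policy
b = np.ones(A) / A                           # behavior policy (uniform, for illustration)
v_pi = float(pi @ (X @ theta))               # true value v(pi)

def mse(n, trials=2000):
    """Monte-Carlo MSE of an OLS plug-in estimate of v(pi) under behavior policy b."""
    errs = np.empty(trials)
    for t in range(trials):
        a = rng.choice(A, size=n, p=b)
        y = X[a] @ theta + rng.normal(scale=0.5, size=n)   # homoscedastic noise for simplicity
        th_hat = np.linalg.lstsq(X[a], y, rcond=None)[0]
        errs[t] = (float(pi @ (X @ th_hat)) - v_pi) ** 2
    return float(errs.mean())

mse_small, mse_large = mse(50), mse(400)     # MSE shrinks roughly like 1/n
```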
We now
state the assumption on the boundedness on the variance of each action $a\in[A]$.
Let $H_L^2\leq \|\bx(a)\|^2\leq H_U^2$ for any $a\in[A]$. Let the singular value decomposition of $\bSigma_*$ be $\bU \bD \bP^{\top}$ with orthogonal matrices $\bU, \bP^{\top}$ and $\bD=\operatorname{diag}\left(\lambda_{1}, \ldots, \lambda_{d}\right)$ where $\lambda_{i}$ denotes a singular value. Then it can be shown that $\sigma^2_{\min} \leq \sigma^2(a)\leq \sigma^2_{\max}$ where $\sigma^2_{\min} = \min_{i}|\lambda_{i}|H_L^2$ and $\sigma^2_{\max} = \max_{i}|\lambda_{i}|H^2_U$ (see \Cref{remark:bound-variance}).
\begin{assumption}
\label{assm:bounded-variance}
We assume that $\bSigma_*$ has its minimum and maximum eigenvalues bounded such that for every action $a\in[A]$ the following holds $\sigma^2_{\min} \leq \sigma^2(a)\leq \sigma^2_{\max}$.
\end{assumption}
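Assuming $\bSigma_*$ is positive semi-definite (as befits a covariance parameter, so that its singular values coincide with its eigenvalues), the bounds $\sigma^2_{\min} \leq \sigma^2(a)\leq \sigma^2_{\max}$ can be checked numerically; the dimensions and norm range below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, A = 4, 100
M = rng.normal(size=(d, d))
Sigma = M @ M.T                              # PSD stand-in for the covariance Sigma_*
lam = np.linalg.eigvalsh(Sigma)
HL, HU = 1.0, 2.0                            # feature norms constrained to [H_L, H_U]
X = rng.normal(size=(A, d))
X *= (rng.uniform(HL, HU, size=A) / np.linalg.norm(X, axis=1))[:, None]
var = np.einsum("ad,de,ae->a", X, Sigma, X)  # sigma^2(a) = x(a)^T Sigma_* x(a)
sig2_min = lam[0] * HL**2                    # min_i lambda_i * H_L^2
sig2_max = lam[-1] * HU**2                   # max_i lambda_i * H_U^2
```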
\subsection{Bounds on Loss}
\subsection{Regret Bound of \sp}
The regret of the agnostic algorithm with the estimated behavior policy $\wb$ is given by
\begin{align}
\cR_n &= \bL_n(\pi,\wb,\wSigma_\Gamma) - \L_n(\pi,\bb^*,\bSigma_*). \label{eq:regret-definition}
\end{align}
where $\bL_n(\pi,\wb,\wSigma_\Gamma)$ and $\L_n(\pi,\bb^*,\bSigma_*)$ are defined in \eqref{eq:opt-agnostic-loss} and \eqref{eq:opt-oracle-loss}, respectively.
We now state the concentration inequality required to prove the regret for the agnostic \Cref{alg:linear-bandit}.
\begin{lemma}
\label{lemma:gradient-conc}
\textbf{(Loss Concentration)} Let $\wSigma_\Gamma$ be the empirical estimate of $\bSigma_*$. Define $\bV=\sum_{a,a'}\bw(a)\bw(a')^{\top}$. We have that for any arbitrary proportion $\bb$ the following
\begin{align*}
\Pb&\big(\big|\sum_{a,a'} \bw(a)^{\top}(\bA^{-1}_{\bb^*, \wSigma_\Gamma} - \bA^{-1}_{\bb^*, \bSigma_*})\bw(a')\big|\\
&\qquad\leq \tfrac{2C B^* d^3 \log (A/ \delta)}{\Gamma}\big)\geq 1-\delta
\end{align*}
where $B^*$ is a problem-dependent quantity
and $C>0$ is a universal constant.
\end{lemma}
\begin{proof}\textbf{(Overview)} We can decompose
\begin{align*}
&|\sum_{a,a'} \!\bw(a)^{\top}(\bA^{-1}_{\bb^*, \wSigma_\Gamma}\!-\!\bA^{-1}_{\bb^*, \bSigma_*})\bw(a')|
\!\leq\! \|\bu\|\!\underbrace{\left\|\bA_{\bb^*,\bSigma_*} - \bA_{\bb^*,\wSigma_\Gamma}\right\|}_{\Delta}\!\|\mathbf{v}\|
\end{align*}
where, $\|\bu\|=\|\bA^{-1}_{\bb^*, \bSigma_*} \bw\|$ and $\|\mathbf{v}\| = \|\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw\|$. First, observe that $\|\bu\|$ is a problem-dependent quantity. Then to bound $\Delta$ we bound the $\|\wSigma_\Gamma - \bSigma_*\| \leq \frac{2C d^2 \log (A / \delta)}{\Gamma}$. Finally to bound $\|\mathbf{v}\|$ we need to bound $\wsigma^2_\Gamma(a) \leq \sigma^2(a) + \frac{2C d^2 \log (A / \delta)}{\Gamma}$ where $\wsigma^2_\Gamma(a)$ is the empirical variance of $\sigma^2(a)$. Combining everything yields the desired result.
\end{proof}
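The decomposition in the proof overview is an instance of the identity $\bA_1^{-1}-\bA_2^{-1}=\bA_1^{-1}(\bA_2-\bA_1)\bA_2^{-1}$ combined with Cauchy--Schwarz, which can be checked numerically on random symmetric positive-definite matrices (the sizes and perturbation scale below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
M = rng.normal(size=(d, d))
A1 = M @ M.T + np.eye(d)                     # stand-in for A_{b*, Sigma_*}
E = rng.normal(size=(d, d))
A2 = A1 + 0.05 * (E + E.T) / 2               # small symmetric perturbation (estimated covariance)
w1, w2 = rng.normal(size=d), rng.normal(size=d)
lhs = abs(w1 @ (np.linalg.inv(A1) - np.linalg.inv(A2)) @ w2)
rhs = (np.linalg.norm(np.linalg.solve(A1, w1))     # ||u|| = ||A1^{-1} w||
       * np.linalg.norm(A1 - A2, 2)               # Delta = spectral norm of the difference
       * np.linalg.norm(np.linalg.solve(A2, w2))) # ||v|| = ||A2^{-1} w'||
```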
\begin{customtheorem}{2}
\label{thm:regret-linear-bandit} \textbf{(Regret of \Cref{alg:linear-bandit}, informal)}
The regret of \Cref{alg:linear-bandit} for $n\geq O(\tfrac{d^4\log^2 (A/\delta)}{\sigma^4_{\min}})$
running \PE design in \cref{eq:opt-agnostic-loss} is given by
$\cR_n = O\left(\frac{d^3 \log(n )}{n^{3/2}}\right)$.
\end{customtheorem}
\textbf{Proof (Overview)}
Define the regret as in \eqref{eq:regret-definition}. Then recall that $\bb^*\in\triangle(\A)$ is the optimal design in \eqref{eq:opt-oracle-sol} and $\wb^*$ is the agnostic design in \eqref{eq:opt-agnostic-sol}. Define $\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) \coloneqq \tfrac{\left(1+2C_\Gamma(\delta)\right)}{n - \Gamma}\sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a')$. Then we can show that the regret can be decomposed into three parts: $\cR_n = \underbrace{\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma)}_{\textbf{Approximation error}}$ $+ \underbrace{\L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\wSigma_\Gamma)}_{\textbf{Comparing two diff loss}}$ $+ \underbrace{\L^*_n(\pi,\bb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\bSigma_*)}_{\textbf{Estimation error of $\bSigma_*$}}$.
For the approximation error we need access to an oracle (see \Cref{assm:oracle-approx}) that gives an $\epsilon$ approximation error. Then, setting $\epsilon=\tfrac{1}{\sqrt{n}}$, we have that the approximation error is upper bounded by $n^{-3/2}$.
For the term comparing the two different losses, we can use the definition of $C_\Gamma(\delta)$ to bound it as $O(\tfrac{d^2\log(A/\delta)}{n^{3/2}})$, as shown in \eqref{eq:comparing-two-loss}.
One key technical challenge addressed by our work is to derive a variance concentration inequality that scales with $d^3$ instead of the number of actions $A$. This is shown in \Cref{lemma:gradient-conc} where we show that the following holds with high probability
$\big|\sum_{a,a'} \bw(a)^{\top}(\bA^{-1}_{\bb^*, \wSigma_\Gamma}- \bA^{-1}_{\bb^*, \bSigma_*})\bw(a')\big|
\leq (2B^* Cd^3\log (A/\delta))/\Gamma$
where $B^*$ is a problem-dependent quantity and $C>0$ is a universal constant.
Now using \Cref{lemma:gradient-conc}, setting the exploration factor $\Gamma=\sqrt{n}$, and $\delta=\frac{1}{n}$ we can show that the estimation error is upper bounded by
$\frac{B^*Cd^3\log (n )}{n^{3/2}} + \frac{d^2}{n^2}\Tr(\sum_{a,a'}\bw(a)\bw(a')^\top)$.
Combining everything, we obtain the regret bound $O(\tfrac{B^*d^3\log (n)}{n^{3/2}})$. The proof is given in \Cref{app:speed-regret}. $\blacksquare$
\Cref{thm:regret-linear-bandit} states that the regret of \Cref{alg:linear-bandit} scales as $O(d^3\log(n)/n^{3/2})$ where $d$ is the dimension of $\btheta^*$. Note that our regret bound depends on the underlying feature dimension $d$ instead of actions $A$.
In the case where $d^3 < A$, we have a tighter bound than \citet{carpentier2011finite}.
Furthermore, the result of \cite{carpentier2011finite} cannot be easily extended to take advantage of structure in the linear bandit setting.
Finally, observe that our bound scales with the dimension $d^3$, whereas for the A-optimal design studied in \citet{fontaine2021online}, their bound scales with the number of arms $A$ such that their regret scales as $O(\frac{A\log n}{n^{3/2}})$.
\Cref{thm:regret-linear-bandit} upper bounds the regret of our agnostic algorithm \sp\ compared to an oracle algorithm with knowledge of $\bSigma_*$.
To quantify the tightness of our upper bound, we now turn to whether we can lower bound the regret of \sp.
For our final theoretical result, we consider a slightly different notion of regret:
$
\cR'_n \coloneqq \L_n(\pi, \wb, \bSigma_*) - \L_n(\pi, \bb^*, \bSigma_*).
$
This notion of regret captures how sub-optimal the estimated $\wb$ is compared to $\bb^*$ and \textit{not} the additional error incurred by using an estimate of $\bSigma_*$ in the PWLS estimator.
We conjecture that $\cR'_n$ is indeed a lower bound on $\cR_n$, as we have established in \Cref{prop:linear-bandit} that the minimum variance estimator is the PWLS estimator using $\bSigma_*$.
Thus, intuitively, $\L_{n}(\pi,\wb,\bSigma_*)$ is a lower bound on $\bL_n(\pi, \wb, \wSigma_\Gamma)$, as the estimation error will likely only increase when using $\wSigma_\Gamma$ in place of $\bSigma_*$ in the PWLS estimator.
We leave proving that $\cR'_n$ is a lower bound on $\cR_n$ to future work.
\begin{customtheorem}{3}\!\!\!\textbf{(Lower Bound)}
\label{thm:minimax}
\!\!\!Let $|\bTheta| \!\!=\!\! 2^d$, $\btheta^*\!\in\!\bTheta$. Then any $\delta$-PAC policy $\bb$
satisfies $\cR'_{n} \!\!=\!\! \L_{n}(\pi,\wb,\bSigma_*) \!-\! \L_{n}(\pi,\bb^*,\bSigma_*) \!\geq\! \Omega\left(\frac{d^2\lambda_d(\bV)\log({n})}{{n}^{3/2}}\right)$ for the environment
in \eqref{eq:minimax-environment}.
\end{customtheorem}
\textbf{Proof (Overview)} The proof follows the standard change of measure argument \citep{lattimore2020bandit}. We follow the proof technique of \citet{huang2017structured, mukherjee2022chernoff}. We reduce our linear bandit problem to the hypothesis testing setting and state a worst-case environment as in \eqref{eq:minimax-environment}. We then show that the regret of any $\delta$-PAC algorithm against an oracle in this environment must scale as $\Omega(\log n/n^{3/2})$. The proof is given in \Cref{app:lower-regret-bound}. $\blacksquare$
From the above result, we see that the upper bound on the \sp\ regret $\cR_n$ matches the lower bound on the estimation regret $\cR'_n$ in its dependence on $n$.
\subsection{Regret Bound of \sp}
\label{app:speed-regret}
\begin{customtheorem}{2}\textbf{(formal)}
The regret of \Cref{alg:linear-bandit} for $n\geq 16C^2 d^4\log^2 (A/\delta)/ \sigma^4_{\min} $ running \PE design in \cref{eq:opt-agnostic-loss} is given by
\begin{align*}
\cR_n \leq \dfrac{1}{n^{3/2}} + O\left(\dfrac{d^2\log (n)}{n^{3/2}}\right)+\dfrac{2B^*Cd^3\log (n)}{n^{3/2}} + \dfrac{d^2}{n^2}\Tr\left(\sum_{a,a'}\bw(a)\bw(a')^\top\right) + \dfrac{2A H_U^2B^2}{n^2}= O\left(\dfrac{B^* d^3\log (n)}{n^{3/2}}\right).
\end{align*}
\end{customtheorem}
\begin{proof}
We follow the same steps as in \Cref{prop:loss-bandit-tracker}. Observe that $\tfrac{16C^2 d^4\log^2 (A/\delta)}{\sigma^4_{\min}} > \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma}$.
Hence, for $\bz = \sum_a\bw(a)$, we can bound the loss function for $n\geq \frac{2C d^2 \log (A / \delta)}{\sigma^2_{\min}\Gamma}$ as follows:
\begin{align*}
\bL_n(\pi,\wb,\wSigma_\Gamma) \coloneqq \E\left[\left(\bz^{\top}(\wtheta_{n-\Gamma} - \btheta_*)\right)^2\right]
&\overset{(a)}{\leq} \left(1+2C_\Gamma(\delta)\right)\bz^\top (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz.
\end{align*}
where, $(a)$ follows from \eqref{eq:upper-bound-thm-1}.
Recall that the design matrix of the samples collected (following $\wb^{*}$) after exploration satisfies:
\begin{align*}
\left(\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma}\right)^{-1} = \left(\sum_a\left\lceil(n-\Gamma)\wb^*(a)\wsigma^{-2}_\Gamma(a)\right\rceil\bw(a)\bw(a)^{\top}\right)^{-1}
= \dfrac{1}{n-\Gamma}\bA_{\wb^*,\wSigma_\Gamma}^{-1}.
\end{align*}
Hence we use the loss function
\begin{align*}
\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) \coloneqq
\left(1+2C_\Gamma(\delta)\right)\bz^\top (\tX_{n-\Gamma}^{\top}\wSigma_{\Gamma}^{-1}\tX_{n-\Gamma})^{-1}\bz = \frac{\left(1+2C_\Gamma(\delta)\right)}{n - \Gamma}\sum_{a,a'}\bw(a)^\top\bA_{\wb^*, \wSigma_\Gamma}^{-1}\bw(a').
\end{align*}
Also recall that we define
\begin{align*}
\L^*_n(\pi,\bb^*,\wSigma_\Gamma) = \dfrac{1}{n}\sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a').
\end{align*}
Then we can decompose the regret as follows:
\begin{align*}
\cR_n
&= \bL_n(\pi,\wb,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\bSigma_*)\\
&\leq \L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) + \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\bSigma_*)
\\
&= \underbrace{\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma)}_{\textbf{Approximation error}} + \underbrace{\L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\wSigma_\Gamma)}_{\textbf{Comparing two diff loss}} + \underbrace{\L^*_n(\pi,\bb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\bSigma_*)}_{\textbf{Estimation error of $\bSigma_*$}}
\end{align*}
First recall the good variance event, defined as follows:
\begin{align*}
\xi^{var}_\delta(\Gamma) \coloneqq \left\{\forall a, \left|\bx(a)^{\top}\left(\wSigma_\Gamma - \bSigma_*\right)\bx(a)\right| < \frac{2C d^2 \log (A / \delta)}{\Gamma}\right\}
\end{align*}
Under the good variance event, following the same steps as \Cref{prop:loss-bandit-tracker} we can bound the approximation error setting $\delta = 1/n^3$ as follows
\begin{align*}
\L'_{n-\Gamma}(\pi,\wb,\wSigma_\Gamma) - \L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma)
&\overset{}{\leq}
O\left(\dfrac{d^2\log(A/\delta)}{n^{3/2}}\right)\indic{ \xi^{var}_\delta(\Gamma)} + \sum_{t=1}^n A H_U^2B^2\Pb\left(\left(\xi^{var}_\delta(\Gamma)\right)^c\right)\\
&\overset{}{\leq}
O\left(\dfrac{d^2\log(A/\delta)}{n^{3/2}}\right) + \dfrac{A H_U^2B^2}{n^2}
\end{align*}
and the second part of comparing the two losses as
\begin{align*}
\L'_{n-\Gamma}(\pi,\wb^*,\wSigma_\Gamma) - \L^*_{n}(\pi,\bb^*,\wSigma_\Gamma)
&\overset{}{\leq} O\left(\dfrac{d^2\log(A/\delta)}{n^{3/2}}\right)\indic{ \xi^{var}_\delta(\Gamma)} + \sum_{t=1}^n A H_U^2B^2\Pb\left(\left(\xi^{var}_\delta(\Gamma)\right)^c\right)\\
&\leq O\left(\dfrac{d^2\log(A/\delta)}{n^{3/2}}\right) + \dfrac{A H_U^2B^2}{n^2}
\end{align*}
We define the good estimation event as follows:
\begin{align*}
\xi^{est}_\delta(\Gamma) \coloneqq \left\{\left|\sum_{a,a'} \bw(a)^{\top}\bA^{-1}_{\bb^*, \wSigma_\Gamma}\bw(a') - \sum_{a,a'} \bw(a)^{\top}\bA^{-1}_{\bb^*, \bSigma_*}\bw(a')\right| \leq \frac{2C B^* d^3 \log (9 H^2_U / \delta)}{\sigma^4_{\min}\Gamma}\right\}
\end{align*}
Under the good estimation event $\xi^{est}(\Gamma)$ and using \Cref{lemma:gradient-conc} we can show that the estimation error is given by
\begin{align*}
&\L^*_n(\pi,\bb^*,\wSigma_\Gamma) - \L^*_n(\pi,\bb^*,\bSigma_*) \leq \left(\frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a') - \frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*,\bSigma_*}^{-1}\bw(a')\right)\indic{\xi^{est}_\delta(\Gamma)} \\
&\qquad +\left(\frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a') - \frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \bSigma_*}^{-1}\bw(a')\right)\indic{\xi^{est}_\delta(\Gamma)^C}\\
&= \left(\frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*, \wSigma_\Gamma}^{-1}\bw(a') - \frac{1}{n} \sum_{a,a'}\bw(a)^\top\bA_{\bb^*,\bSigma_*}^{-1}\bw(a')\right)\indic{\xi^{est}_\delta(\Gamma)} \\
&\qquad +\frac{1}{n} \Tr\left(\left(\bA_{\bb^*, \wSigma_\Gamma}^{-1} - \bA_{\bb^*, \bSigma_*}^{-1}\right) \left(\sum_{a,a'}\bw(a)\bw(a')^\top\right)\right)\indic{\xi^{est}_\delta(\Gamma)^C}\\
&\overset{(a)}{\leq} \dfrac{1}{n}2B^*\dfrac{Cd^3\log (1/\delta)}{\Gamma} + \dfrac{1}{n}\Tr\left(\bA_{\bb^*, \bSigma_*}^{-1}\right)\Tr\left(\bA_{\bb^*, \wSigma_\Gamma}^{-1}\right)\Tr\left(\sum_{a,a'}\bw(a)\bw(a')^\top\right)\delta\\
&\overset{(b)}{\leq} \dfrac{1}{n}2B^*\dfrac{Cd^3\log (n)}{\sqrt{n}} + \dfrac{d^2}{n^2}\Tr\left(\sum_{a,a'}\bw(a)\bw(a')^\top\right)\\
&= \dfrac{2B^*Cd^3\log (n)}{n^{3/2}} + \dfrac{d^2}{n^2}\Tr\left(\sum_{a,a'}\bw(a)\bw(a')^\top\right)
\end{align*}
where $(a)$ follows from \Cref{lemma:gradient-conc}, and $(b)$ follows as $\Gamma=\sqrt{n}$ and by setting $\delta=\frac{1}{n}$. Combining everything, we obtain the regret bound
\begin{align*}
\cR_n &\leq \dfrac{1}{n^{3/2}} + O\left(\dfrac{d^2\log (n)}{n^{3/2}}\right)+\dfrac{2B^*Cd^3\log (n)}{n^{3/2}} + \dfrac{d^2}{n^2}\Tr\left(\sum_{a,a'}\bw(a)\bw(a')^\top\right) + \dfrac{2A H_U^2B^2}{n^2}= O\left(\dfrac{B^* d^3\log (n)}{n^{3/2}}\right)
\end{align*}
where, $B^* = \left(\left\|\bA^{-1}_{\bb^*, \bSigma_*} \bw\right\|^2 \left\|\sum_{a=1}^A\bb^*(a)\bw(a)\bw(a)^{\top}H^2_U\right\| \left\|\left(\sum_{a=1}^A\dfrac{\bb^*(a)\bw(a)\bw(a)^{\top}}{\sigma^2(a) + \frac{2C d^3 \log (9 H^2_U / \delta)}{\sqrt{n}} }\right)^{-1}\bw\right\|\right)$.
\end{proof}
\section{Introduction}
Polar codes are the first family of low-complexity capacity-achieving codes. Polar codes were first introduced by Ar{\i}kan for binary-input channels \cite{Arikan}. The construction of polar codes relies on a phenomenon that is called \emph{polarization}: A collection of independent copies of the channel is transformed into a collection of synthetic channels that are almost perfect or almost useless.
The transformation of Ar{\i}kan for binary-input channels uses the XOR operation. The polarization phenomenon was generalized to channels with non-binary input by replacing the XOR operation with a binary operation on the input-alphabet \cite{SasogluTelAri,SasS,ParkBarg,SahebiPradhan,RajTelA,RajErgI,RajErgII}. Note that if the input alphabet size is not prime, we may have multilevel polarization where the synthetic channels can polarize to intermediate channels that are neither almost perfect nor almost useless. In this paper, we are interested in the multilevel polarization phenomenon when an Abelian group operation is used. More precisely, we are interested in determining the polarization levels of a family of channels that we call automorphic-symmetric channels.
In Section \ref{sec2}, we introduce the preliminaries of this paper. In Section \ref{sec3} we introduce $\mathcal{H}$-polarizing and strongly-polarizing families of channels. We show that if $\mathcal{W}$ is an $\mathcal{H}$-polarizing family of channels, then the polarization levels of every channel in $\mathcal{W}$ are determined by subgroups in $\mathcal{H}$. In Section \ref{sec4} we show that the family of $q$-ary erasure channels is strongly polarizing. This implies that every $q$-ary erasure channel polarizes to almost perfect and almost useless channels. In Section \ref{sec5} we introduce $q$-symmetric channels and generalized $q$-symmetric channels. $q$-symmetric channels generalize binary symmetric channels to arbitrary input alphabets. Generalized $q$-symmetric channels are a generalization of binary-input memoryless symmetric-output (BMS) channels. In Section \ref{sec6}, we introduce the family of automorphic-symmetric channels. We show that generalized $q$-symmetric channels are automorphic-symmetric. We show that the polarization levels of an automorphic-symmetric channel are determined by characteristic subgroups. This implies that if the group that is used does not contain any non-trivial characteristic subgroup, we only have two-level polarization to almost perfect and almost useless channels.
\section{Preliminaries}
\label{sec2}
Throughout this paper, $(G,+)$ denotes a fixed finite Abelian group, and $q=|G|$ denotes its size.
Let $\mathcal{Y}$ be a finite set. We write $W:G\longrightarrow \mathcal{Y}$ to denote a discrete memoryless channel (DMC) of input alphabet $G$ and output alphabet $\mathcal{Y}$. We write $I(W)$ to denote the symmetric capacity\footnote{The symmetric capacity is the mutual information between a uniformly distributed input and its corresponding output.} of $W$.
Define the two channels $W^-:G\longrightarrow\mathcal{Y}^2$ and $W^+:G\longrightarrow\mathcal{Y}^2\times G$ as follows:
$$W^-(y_1,y_2|u_1)=\frac{1}{q}\sum_{u_2\in G}W(y_1|u_1+u_2)W(y_2|u_2),$$
and
$$W^+(y_1,y_2,u_1|u_2)=\frac{1}{q}W(y_1|u_1+u_2)W(y_2|u_2).$$
Furthermore, for every $n\geq 1$ and every $s=(s_1,\ldots,s_n)\in\{-,+\}^n$, define the channel $W^s=(\ldots(W^{s_1})^{s_2}\ldots)^{s_n}$.
Now let $H$ be a subgroup of $(G,+)$. We denote by $G/H$ the quotient group of $G$ by $H$. Define the channel $W[H]:G/H\longrightarrow \mathcal{Y}$ as follows:
$$W[H](y|A)=\frac{1}{|A|}\sum_{x\in A}W(y|x)=\frac{1}{|H|}\sum_{x\in A}W(y|x).$$
It is easy to see that if $X$ is a uniformly distributed random variable in $G$ and $Y$ is the output of $W$ when $X$ is the input, then $I(W[H])=I(X\bmod H;Y)$.
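As a quick numerical sanity check of this identity, the following sketch (with an arbitrary, purely illustrative channel on $\mathbb{Z}_4$ and $H=\{0,2\}$) computes $I(W[H])$ and $I(X\bmod H;Y)$ directly from their definitions; the two quantities agree to machine precision.

```python
import math

# Illustrative channel W : Z_4 -> {0, 1, 2}; W[x][y] = W(y|x), numbers arbitrary
W = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2],
     [0.2, 0.2, 0.6]]
cosets = [[0, 2], [1, 3]]            # G/H for H = {0, 2}

def mutual_information(joint):
    # I(A;B) = sum p(a,b) log( p(a,b) / (p(a) p(b)) ), natural log
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    return sum(p * math.log(p / (pa[i] * pb[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

# I(X mod H; Y) for X uniform on G
I_XmodH_Y = mutual_information(
    [[sum(W[x][y] for x in A) / 4 for y in range(3)] for A in cosets])

# I(W[H]) for a uniform input on G/H, where W[H](y|A) = (1/|H|) sum_{x in A} W(y|x)
WH = [[sum(W[x][y] for x in A) / 2 for y in range(3)] for A in cosets]
I_WH = mutual_information([[WH[a][y] / 2 for y in range(3)] for a in (0, 1)])
```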
\begin{definition}
Let $\delta>0$. A channel $W:G\longrightarrow\mathcal{Y}$ is said to be $\delta$-determined by a subgroup $H$ of $G$ if $\big|I(W)-\log|G/H|\big|<\delta$ and $\big|I(W[H])-\log|G/H|\big|<\delta$.
\end{definition}
The inequalities $\big|I(W)-\log|G/H|\big|<\delta$ and $\big|I(W[H])-\log|G/H|\big|<\delta$ can be interpreted as follows: Let $X$ be a uniformly distributed random variable in $G$ and let $Y$ be the output of the channel $W$ when $X$ is the input. If $\delta>0$ is small and $\big|I(W[H])-\log|G/H|\big|<\delta$, then from $Y$ we can determine $X\bmod H$ with high probability. If we also have $\big|I(W)-\log|G/H|\big|<\delta$, then $X\bmod H$ is almost the only information about $X$ which can be reliably deduced from $Y$. This is why we can say that if $W$ is $\delta$-determined by $H$ for a small $\delta$, then $W$ behaves similarly to a deterministic homomorphism channel projecting its input onto $G/H$.
It was proven in \cite{RajTelA} that as the number $n$ of polarization steps becomes large, the synthetic channels $(W^s)_{s\in\{-,+\}^n}$ polarize to deterministic homomorphism channels projecting their input onto quotient groups. More precisely, for every $\delta>0$, we have
\begin{align*}
\lim_{n\to\infty}\frac{1}{2^n}\Big|\Big\{s\in\{-,+\}^n:\;&\exists H_s\;\text{a subgroup of}\;G,\\
&W^s\;\text{is}\;\delta\text{-determined by}\;H_s\Big\}\Big|=1.
\end{align*}
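The multilevel phenomenon behind this statement can be illustrated by a toy computation. Consider the (hypothetical) channel on $\mathbb{Z}_4$ that outputs its input modulo $H=\{0,2\}$ through a binary erasure channel of parameter $\epsilon$. One can check that every synthetic channel is of the same form, with the erasure parameter following the recursion $\epsilon\mapsto 2\epsilon-\epsilon^2$ and $\epsilon\mapsto\epsilon^2$, so the symmetric capacities cluster at $0$ (i.e., $H_s=G$) and at $\log|G/H|=\log 2$ (i.e., $H_s=\{0,2\}$), but never approach $\log 4$:

```python
import math

eps0, depth = 0.4, 14                # illustrative starting parameter and depth
levels = [eps0]
for _ in range(depth):
    # "-" step erases unless both copies survive; "+" step erases only if both do
    levels = [v for e in levels for v in (2*e - e*e, e*e)]

caps = [(1 - e) * math.log(2) for e in levels]   # I(W^s) in nats
mean_eps = sum(levels) / len(levels)             # martingale: mean is preserved
frac_mid = sum(0.01 < e < 0.99 for e in levels) / len(levels)
```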
\section{$\mathcal{H}$-Polarizing Families of Channels}
\label{sec3}
\begin{definition}
Let $\mathcal{H}$ be a set of subgroups of $(G,+)$. We say that a channel $W:G\longrightarrow\mathcal{Y}$ \emph{$\mathcal{H}$-polarizes} if for every $\delta>0$, we have
\begin{align*}
\lim_{n\to\infty}\frac{1}{2^n}\Big|\Big\{s\in\{-,+\}^n:\;&\exists H_s\in\mathcal{H},\\
&W^s\;\text{is}\;\delta\text{-determined by}\;H_s\Big\}\Big|=1.
\end{align*}
If $\mathcal{H}=\{\{0\},G\}$ and $W$ $\mathcal{H}$-polarizes, we say that $W$ \emph{strongly polarizes}.
\end{definition}
If $W$ $\mathcal{H}$-polarizes, then the levels of polarization are determined by subgroups in $\mathcal{H}$. $W$ strongly polarizes if and only if its synthetic channels $(W^s)_{s\in\{-,+\}^n}$ polarize only to almost useless and almost perfect channels.

Let $W:G\longrightarrow\mathcal{Y}$ be a given channel, and assume that after simulating enough polarization steps, we are convinced that $W$ $\mathcal{H}$-polarizes for some family $\mathcal{H}$ of subgroups. How can we prove that this is indeed the case? Characterizing $\mathcal{H}$-polarizing channels seems to be very difficult. In this paper, we aim to provide sufficient conditions for $\mathcal{H}$-polarization.
Our approach to showing the $\mathcal{H}$-polarization of a channel is to show that it belongs to what we call an \emph{$\mathcal{H}$-polarizing family of channels}:
\begin{definition}
A family $\mathcal{W}$ of channels with input alphabet $G$ is said to be \emph{$\mathcal{H}$-polarizing} if it satisfies the following conditions:
\begin{itemize}
\item If $W\in\mathcal{W}$, then $W^-\in\mathcal{W}$ and $W^+\in\mathcal{W}$.\footnote{This implies that $W^s\in\mathcal{W}$ for every $n\geq 1$ and every $s\in\{-,+\}^n$.}
\item There exists $\delta_{\mathcal{W},\mathcal{H}}>0$ such that $\mathcal{W}$ does not contain any channel that is $\delta_{\mathcal{W},\mathcal{H}}$-determined by a subgroup other than those in $\mathcal{H}$.
\end{itemize}
\end{definition}
\begin{proposition}
\label{propMain}
Let $\mathcal{H}$ be a family of subgroups and let $\mathcal{W}$ be an $\mathcal{H}$-polarizing family of channels. Every channel $W\in\mathcal{W}$ $\mathcal{H}$-polarizes.
\end{proposition}
\begin{proof}
Fix $W\in\mathcal{W}$ and let $0<\delta<\delta_{\mathcal{W},\mathcal{H}}$. For every $n\geq 1$, define
\begin{align*}
A_{n,\delta}=\Big\{s\in\{-,+\}^n:\;\exists H_s\;&\text{a subgroup of}\;G,\\
&W^s\;\text{is}\;\delta\text{-determined by}\;H_s\Big\}.
\end{align*}
We have $\displaystyle\lim_{n\rightarrow\infty}\frac{1}{2^n}|A_{n,\delta}|=1$. Let $s\in A_{n,\delta}$. There exists a subgroup $H_s$ of $G$ such that $W^s$ is $\delta$-determined by $H_s$. Since $W\in\mathcal{W}$, we have $W^s\in\mathcal{W}$, so $W^s$ cannot be $\delta$-determined by a subgroup outside $\mathcal{H}$: indeed, since $\delta<\delta_{\mathcal{W},\mathcal{H}}$, a channel that is $\delta$-determined by a subgroup $H$ is also $\delta_{\mathcal{W},\mathcal{H}}$-determined by $H$. Therefore, $H_s\in\mathcal{H}$. We conclude that
\begin{align*}
\lim_{n\to\infty}\frac{1}{2^n}\Big|\Big\{s\in\{-,+\}^n:\;&\exists H_s\in\mathcal{H},\\
&W^s\;\text{is}\;\delta\text{-determined by}\;H_s\Big\}\Big|=1,
\end{align*}
which means that $W$ $\mathcal{H}$-polarizes.
\end{proof}
\section{$q$-ary Erasure Channels}
\label{sec4}
Our first example of a strongly polarizing family of channels is the family of $q$-ary erasure channels.
\begin{definition}
Let $e$ be a symbol that does not belong to $G$. We say that a channel $W:G\longrightarrow G\cup\{e\}$ is a $q$-ary erasure channel with parameter $\epsilon$ (denoted $W=qEC(\epsilon)$) if
$$W(y|x)=\begin{cases}1-\epsilon\quad&\text{if}\;y=x,\\\epsilon\quad&\text{if}\;y=e,\\0\quad&\text{otherwise}.\end{cases}$$
\end{definition}
We also call a \emph{$q$-ary erasure channel} any channel that is \emph{equivalent} to $qEC(\epsilon)$ in the following sense:
\begin{definition}
A channel $W:G\longrightarrow\mathcal{Y}$ is said to be \emph{degraded} from another channel $W':G\longrightarrow\mathcal{Y}'$ if there exists a channel $V':\mathcal{Y}'\longrightarrow\mathcal{Y}$ such that
$$W(y|x)=\sum_{y'\in\mathcal{Y}'}W'(y'|x)V'(y|y').$$
$W$ and $W'$ are said to be \emph{equivalent} if they are degraded from each other.
\end{definition}
Denote by $\mathcal{W}_{qEC}$ the family of all $q$-ary erasure channels.
\begin{lemma}
If $W\in \mathcal{W}_{qEC}$, then $W^-\in\mathcal{W}_{qEC}$ and $W^+\in\mathcal{W}_{qEC}$.
\label{lemqEC1}
\end{lemma}
\begin{proof}
It is easy to check that if $W$ is equivalent to $qEC(\epsilon)$, then $W^-$ is equivalent to $qEC(2\epsilon-\epsilon^2)$ and $W^+$ is equivalent to $qEC(\epsilon^2)$.
\end{proof}
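The equivalence in Lemma \ref{lemqEC1} can also be verified numerically by computing the symmetric capacities of $W^-$ and $W^+$ directly from their defining formulas. The sketch below does this for $G=\mathbb{Z}_2$ and an illustrative $\epsilon=0.3$; the capacities match $(1-(2\epsilon-\epsilon^2))\log 2$ and $(1-\epsilon^2)\log 2$, and their sum equals $2I(W)$ as it must.

```python
import math

G = [0, 1]                      # Z_2, so q = 2
E = 'e'                         # erasure symbol
eps = 0.3                       # illustrative erasure probability

def W(y, x):                    # qEC(eps) on Z_2
    if y == E:
        return eps
    return 1.0 - eps if y == x else 0.0

def capacity(rows):
    # symmetric capacity (uniform input) of a channel given as {x: {y: W(y|x)}}
    q = len(rows)
    outs = set().union(*(d.keys() for d in rows.values()))
    I = 0.0
    for y in outs:
        py = sum(d.get(y, 0.0) for d in rows.values()) / q
        for d in rows.values():
            w = d.get(y, 0.0)
            if w > 0.0:
                I += (w / q) * math.log(w / py)
    return I

ys = [0, 1, E]
rows_minus = {u1: {(y1, y2): sum(W(y1, (u1 + u2) % 2) * W(y2, u2)
                                 for u2 in G) / 2
                   for y1 in ys for y2 in ys} for u1 in G}
rows_plus = {u2: {(y1, y2, u1): W(y1, (u1 + u2) % 2) * W(y2, u2) / 2
                  for y1 in ys for y2 in ys for u1 in G} for u2 in G}

I_minus, I_plus = capacity(rows_minus), capacity(rows_plus)
```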
\begin{lemma}
\label{lemqEC2}
There exists $\delta_{qEC}>0$ such that there is no $q$-ary erasure channel that is $\delta_{qEC}$-determined by a non-trivial\footnote{The trivial subgroups of $(G,+)$ are $\{0\}$ and $G$.} subgroup.
\end{lemma}
\begin{proof}
Define $\displaystyle\delta_{qEC}=\frac{(\log 2)^2}{\log (2q)}$. Let $W=qEC(\epsilon)$ be a $q$-ary erasure channel and assume there exists a non-trivial subgroup $H$ of $G$ such that $\big|I(W[H])-\log|G/H|\big|<\delta_{qEC}$. It is easy to check that $W[H]$ is a $\displaystyle\frac{q}{|H|}$-ary erasure channel of input alphabet $G/H$ and of parameter $\epsilon$. Moreover, we have $I(W[H])=(\log|G/H|)(1-\epsilon)$. Now since $\big|I(W[H])-\log|G/H|\big|<\delta_{qEC}=\frac{(\log 2)^2}{\log (2q)}$, we have
$$\resizebox{0.48\textwidth}{!}{$\displaystyle\epsilon\log|G/H| <\frac{(\log 2)^2}{\log (2q)}\;\stackrel{(a)}{\Rightarrow}\; \epsilon<\frac{(\log2)^2}{\log (2q)\log|G/H|}\leq\frac{\log 2}{\log (2q)},$}$$
where (a) follows from the fact that $H$ is non-trivial (and hence $|G/H|\geq 2$). Thus,
\begin{align*}
I(W)-\log|G/H|&=(\log q)(1-\epsilon)-\log\frac{q}{|H|}\\
&= \log |H| -\epsilon\log q\\
&\stackrel{(a)}{\geq} \log 2 - \epsilon\log q\\
&> \log 2 -
\frac{(\log q)(\log2)}{\log 2 + \log q}\\
&=\frac{(\log 2)^2}{\log (2q)}=\delta_{qEC},
\end{align*}
where (a) follows from the fact that $H$ is non-trivial (and hence $|H|\geq 2$). Therefore, we cannot have $\big|I(W)-\log|G/H|\big|<\delta_{qEC}$.
We conclude that if $W$ is a channel with input alphabet $G$ such that there exists a non-trivial subgroup $H$ of $G$ satisfying $\big|I(W)-\log|G/H|\big|<\delta_{qEC}$ and $\big|I(W[H])-\log|G/H|\big|<\delta_{qEC}$, then $W\notin\mathcal{W}_{qEC}$.
\end{proof}
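The bound of Lemma \ref{lemqEC2} can be checked numerically as well: for the illustrative case $q=4$ and $|H|=2$, sweeping $\epsilon$ over a fine grid confirms that at least one of the two quantities $\big|I(W)-\log|G/H|\big|$ and $\big|I(W[H])-\log|G/H|\big|$ always exceeds $\delta_{qEC}$.

```python
import math

q, H_size = 4, 2                         # |G| = 4, |H| = 2, so |G/H| = 2
delta_qEC = math.log(2) ** 2 / math.log(2 * q)
log_GH = math.log(q / H_size)

# For W = qEC(e): I(W) = (1-e) log q and I(W[H]) = (1-e) log|G/H|
worst = min(max(abs((1 - e) * math.log(q) - log_GH),
                abs((1 - e) * log_GH - log_GH))
            for e in (i / 10000 for i in range(10001)))
```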
\begin{proposition}
\label{propqEC}
$\mathcal{W}_{qEC}$ is a strongly polarizing family of channels.
\end{proposition}
\begin{proof}
The proposition follows from Lemmas \ref{lemqEC1} and \ref{lemqEC2}.
\end{proof}
\begin{corollary}
Every $q$-ary erasure channel with input alphabet $G$ strongly polarizes.
\end{corollary}
\begin{proof}
The corollary follows from Propositions \ref{propMain} and \ref{propqEC}.
\end{proof}
\section{$q$-Symmetric Channels and Generalized $q$-Symmetric Channels}
\label{sec5}
\begin{definition}
Let $\displaystyle 0\leq\epsilon\leq\frac{1}{q-1}$. The $q$-symmetric channel of parameter $\epsilon$ (denoted $qSC(\epsilon)$) is the channel $W:G\longrightarrow G$ defined as
$$W(y|x)=\begin{cases}1-(q-1)\epsilon\quad&\text{if}\;y=x,\\\epsilon\quad&\text{otherwise.}\end{cases}$$
\end{definition}
$q$-symmetric channels generalize the binary symmetric channels to non-binary input alphabets.
We are interested in showing the strong polarization of $q$-symmetric channels. More generally, we are interested in showing the strong polarization of a more general family of channels:
\begin{definition}
\label{defGenqSym}
We say that a channel $W:G\longrightarrow\mathcal{Y}$ is a generalized $q$-symmetric channel if there exist a set $\mathcal{Y}_W$ and a bijection $\pi_W:G\times\mathcal{Y}_W\rightarrow\mathcal{Y}$ such that:
\begin{itemize}
\item There exists a mapping $p_W:\mathcal{Y}_W\rightarrow[0,1]$ such that $\displaystyle\sum_{y'\in\mathcal{Y}_W}p_W(y')=1$, i.e., $p_W$ is a probability distribution on $\mathcal{Y}_W$.
\item For every $y'\in\mathcal{Y}_W$, there exists $\displaystyle 0\leq\epsilon_{y'}\leq\frac{1}{q-1}$ such that for every $x,x'\in G$, we have:
\begin{align*}
W(\pi_W&(x',y')|x)\\
&=\begin{cases}p_W(y')\cdot(1-(q-1)\epsilon_{y'})\quad&\text{if}\;x'=x,\\p_W(y')\cdot\epsilon_{y'}\quad&\text{otherwise}.\end{cases}
\end{align*}
\end{itemize}
\end{definition}
Generalized $q$-symmetric channels generalize binary-input memoryless symmetric-output (BMS) channels.
\begin{example}
$qSC(\epsilon)$ is a generalized $q$-symmetric channel: Let $\mathcal{Y}_W=\{0\}$, define $\pi_W:G\times\mathcal{Y}_W\rightarrow G$ as $\pi_W(x,0)=x$, and define $p_W(0)=1$.
\end{example}
\begin{example}
Every $q$-ary erasure channel is equivalent to a generalized $q$-symmetric channel.
\end{example}
\begin{remark}
A generalized $q$-symmetric channel can be thought of as a combination of $q$-symmetric channels indexed by $y'\in\mathcal{Y}_W$:
\begin{itemize}
\item The channel picks $y'\in\mathcal{Y}_W$ with probability $p_W(y')$ and independently from the input.
\item The channel sends the input $x$ through a channel $qSC(\epsilon_{y'})$ and obtains $x'$.
\item The channel output is $y=\pi_W(x',y')$.
\end{itemize}
Since $\pi_W$ is a bijection, the receiver can recover $(x',y')$ from $y$. In other words, the receiver knows which $qSC$ from the collection $\{qSC(\epsilon_{y'}):\;y'\in\mathcal{Y}_W\}$ was used. Moreover, the receiver knows the $qSC$ output $x'$.
\end{remark}
The reader can check that if $W$ is a generalized $q$-symmetric channel, then $W^-$ is a generalized $q$-symmetric channel as well. Unfortunately, $W^+$ is not necessarily a generalized $q$-symmetric channel. Therefore, generalized $q$-symmetric channels do not form a strongly polarizing family of channels. In the next section, we will see that under some condition on the group $(G,+)$, generalized $q$-symmetric channels form a subfamily of a strongly polarizing family of channels.
\section{Automorphic-Symmetric Channels}
\label{sec6}
\begin{definition}
An automorphism of $G$ is an isomorphism\footnote{An isomorphism is a bijective homomorphism.} from $G$ to itself.
\end{definition}
\begin{definition}
A channel $W:G\longrightarrow\mathcal{Y}$ is said to be \emph{automorphic-symmetric with respect to $(G,+)$} if for every automorphism $f:G\rightarrow G$ there exists a bijection $\pi_f:\mathcal{Y}\rightarrow\mathcal{Y}$ such that $W(\pi_f(y)|f(x))=W(y|x)$.
\end{definition}
\begin{example}
If $G\equiv\mathbb{Z}_2$, then the identity is the only automorphism of $G$. This means that every channel is (trivially) automorphic-symmetric with respect to $\mathbb{Z}_2$.
\end{example}
Another example of automorphic-symmetric channels is generalized $q$-symmetric channels:
\begin{proposition}
\label{propSymIsAut}
Every generalized $q$-symmetric channel is automorphic-symmetric with respect to $(G,+)$.
\end{proposition}
\begin{proof}
Let $W:G\longrightarrow\mathcal{Y}$ be a generalized $q$-symmetric channel. Let $\mathcal{Y}_W$, $\pi_W$ and $p_W$ be as in Definition \ref{defGenqSym}.
Let $f:G\rightarrow G$ be an automorphism. Define $\pi_f:\mathcal{Y}\rightarrow\mathcal{Y}$ as $\pi_f=\pi_W\circ g_f\circ \pi_W^{-1}$, where $g_f:G\times\mathcal{Y}_W\rightarrow G\times\mathcal{Y}_W$ is defined as
$$g_f(x',y')=(f(x'),y').$$
Let $x\in G$ and $y\in\mathcal{Y}$. Define $(x',y')=\pi_W^{-1}(y)$. We have
\begin{align*}
W(\pi_f(y)|f(x))&=W\big((\pi_W\circ g_f\circ \pi_W^{-1})(y)\big|f(x)\big)\\
&=W(\pi_W(g_f(x',y'))|f(x))\\
&=W(\pi_W(f(x'),y')|f(x))\\
&\stackrel{(a)}{=}W(\pi_W(x',y')|x)=W(y|x),
\end{align*}
where (a) follows from the definition of generalized $q$-symmetric channels.
\end{proof}
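Proposition \ref{propSymIsAut} is easy to confirm numerically on a small example. The sketch below builds a purely illustrative generalized $3$-symmetric channel out of two component $qSC$'s and verifies $W(\pi_f(y)|f(x))=W(y|x)$ for the automorphism $f(x)=2x$ of $\mathbb{Z}_3$, with $\pi_f=\pi_W\circ g_f\circ\pi_W^{-1}$ as in the proof.

```python
q = 3                                # G = Z_3
Yw = [0, 1]                          # Y_W
p = {0: 0.3, 1: 0.7}                 # p_W(y'), illustrative numbers
eps = {0: 0.10, 1: 0.25}             # eps_{y'} <= 1/(q-1) = 1/2

def W(y, x):
    # outputs are encoded as pairs y = pi_W(x', y') = (x', y')
    xp, yp = y
    return p[yp] * (1 - (q - 1) * eps[yp]) if xp == x else p[yp] * eps[yp]

f = lambda x: (2 * x) % q            # an automorphism of Z_3
pi_f = lambda y: (f(y[0]), y[1])     # pi_f = pi_W o g_f o pi_W^{-1}

symmetric = all(abs(W(pi_f((xp, yp)), f(x)) - W((xp, yp), x)) < 1e-15
                for x in range(q) for xp in range(q) for yp in Yw)
row_sum = sum(W((xp, yp), 0) for xp in range(q) for yp in Yw)
```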
\begin{definition}
Let $H$ be a subgroup of $G$. We say that $H$ is a \emph{characteristic subgroup} of $G$ if $f(H)=H$ for every automorphism $f$ of $G$. A subgroup that is not characteristic is said to be non-characteristic.
We denote the family of characteristic subgroups of $(G,+)$ by $\mathcal{H}_{ch}(G)$.
\end{definition}
In the rest of this section, we will show that automorphic-symmetric channels form an $\mathcal{H}_{ch}(G)$-polarizing family of channels.
\begin{lemma}
\label{lemAutPol1Step}
If $W:G\longrightarrow\mathcal{Y}$ is automorphic-symmetric, then $W^-$ and $W^+$ are automorphic-symmetric as well.
\end{lemma}
\begin{proof}
Let $f:G\rightarrow G$ be an automorphism and let $\pi_f:\mathcal{Y}\rightarrow\mathcal{Y}$ be a bijection satisfying $W(\pi_f(y)|f(x))=W(y|x)$. Define $\pi_f^-:\mathcal{Y}^2\rightarrow\mathcal{Y}^2$ and $\pi_f^+:\mathcal{Y}^2\times G\rightarrow\mathcal{Y}^2\times G$ as follows:
$$\pi_f^-(y_1,y_2)=(\pi_f(y_1),\pi_f(y_2)),$$
$$\pi_f^+(y_1,y_2,u_1)=(\pi_f(y_1),\pi_f(y_2),f(u_1)).$$
Obviously, $\pi_f^-$ and $\pi_f^+$ are bijections. Moreover, we have:
\begin{align*}
W^-&(\pi_f^-(y_1,y_2)|f(u_1))\\
&=W^-(\pi_f(y_1),\pi_f(y_2)|f(u_1))\\
&=\frac{1}{q}\sum_{u_2\in G}W(\pi_f(y_1)|f(u_1)+u_2)W(\pi_f(y_2)|u_2)\\
&\stackrel{(a)}{=}\frac{1}{q}\sum_{u_2\in G}W(\pi_f(y_1)|f(u_1)+f(u_2))W(\pi_f(y_2)|f(u_2))\\
&\stackrel{(b)}{=}\frac{1}{q}\sum_{u_2\in G}W(\pi_f(y_1)|f(u_1+u_2))W(\pi_f(y_2)|f(u_2))\\
&=\frac{1}{q}\sum_{u_2\in G}W(y_1|u_1+u_2)W(y_2|u_2)\\
&=W^-(y_1,y_2|u_1),
\end{align*}
where (a) and (b) follow from the fact that $f$ is an automorphism. This shows that $W^-$ is automorphic-symmetric. On the other hand, we have
\begin{align*}
W^+(&\pi_f^+(y_1,y_2,u_1)|f(u_2))\\
&=W^+(\pi_f(y_1),\pi_f(y_2),f(u_1)|f(u_2))\\
&=\frac{1}{q}W(\pi_f(y_1)|f(u_1)+f(u_2))W(\pi_f(y_2)|f(u_2))\\
&=\frac{1}{q}W(\pi_f(y_1)|f(u_1+u_2))W(\pi_f(y_2)|f(u_2))\\
&=\frac{1}{q}W(y_1|u_1+u_2)W(y_2|u_2)\\
&=W^+(y_1,y_2,u_1|u_2).
\end{align*}
This shows that $W^+$ is automorphic-symmetric as well.
\end{proof}
\begin{lemma}
\label{lemDetAut}
Let $\delta>0$. If $W$ is an automorphic-symmetric channel which is $\delta$-determined by a subgroup $H$ of $G$, then $W$ is $\delta$-determined by $f(H)$ for every automorphism $f:G\rightarrow G$.
\end{lemma}
\begin{proof}
Let $W:G\longrightarrow\mathcal{Y}$ be an automorphic-symmetric channel, let $f:G\rightarrow G$ be an automorphism, and let $H$ be a subgroup of $G$.
For every coset $A\in G/H$, define $$f(A)=\{f(x):\;x\in A\}.$$ It is easy to see that $f(A)\in G/f(H)$. Moreover, the reader can check that the mapping $f:G/H\rightarrow G/f(H)$ is an isomorphism of groups.
Now let $X$ be a uniformly distributed random variable in $G$ and let $Y$ be the output of the channel $W$ when $X$ is the input. For every $(x,y)\in G\times\mathcal{Y}$, we have $\displaystyle\mathbb{P}_{X,Y}(x,y)=\frac{1}{q}W(y|x)$. Therefore, for every $A\in G/H$, we have
\begin{equation}
\label{eqAutSym}
\begin{aligned}
&\mathbb{P}_{f^{-1}(X\bmod f(H)),\pi_f^{-1}(Y)}(A,y)\\
&\;\;=\mathbb{P}_{X\bmod f(H),Y}(f(A),\pi_f(y))=\sum_{x\in f(A)}\mathbb{P}_{X,Y}(x,\pi_f(y))\\
&\;\;=\sum_{x\in A}\mathbb{P}_{X,Y}(f(x),\pi_f(y))=\sum_{x\in A}\frac{1}{q}W(\pi_f(y)|f(x))\\
&\;\;\stackrel{(a)}{=}\sum_{x\in A}\frac{1}{q}W(y|x)=\sum_{x\in A}\mathbb{P}_{X,Y}(x,y)=\mathbb{P}_{X\bmod H,Y}(A,y),
\end{aligned}
\end{equation}
where (a) follows from the fact that $W$ is automorphic-symmetric. We deduce that
\begin{align*}
I(W[f(H)])&=I(X\bmod f(H);Y)\\
&\stackrel{(b)}{=}I(f^{-1}(X\bmod f(H));\pi_f^{-1}(Y))\\
&\stackrel{(c)}{=}I(X\bmod H;Y)=I(W[H]),
\end{align*}
where (b) follows from the fact that $f:G/H\rightarrow G/f(H)$ and $\pi_f:\mathcal{Y}\rightarrow\mathcal{Y}$ are bijections, and (c) follows from Equation \eqref{eqAutSym}.
Since $|G/f(H)|=|G/H|$ and $I(W[f(H)])=I(W[H])$, $W$ is $\delta$-determined by $H$ if and only if $W$ is $\delta$-determined by $f(H)$.
\end{proof}
\begin{proposition}
\label{propUnique}
There exists $\delta_0>0$ such that for every channel $W$ with input alphabet $G$, if $W$ is $\delta_0$-determined by a subgroup $H$ of $G$, then $H$ is the only subgroup $\delta_0$-determining $W$ (i.e., there is no subgroup $H'$ other than $H$ such that $W$ is $\delta_0$-determined by $H'$).
\end{proposition}
\begin{proof}
Define $\displaystyle\delta_0=\frac{1}{3}\log 2$. Assume that there are two subgroups $H_1$ and $H_2$ such that $W$ is $\delta_0$-determined by both $H_1$ and $H_2$.
Let $X$ be a random variable uniformly distributed in $G$ and let $Y$ be the output when $X$ is the input. We have
\begin{align*}
I(W[H_1])&=I(X\bmod H_1;Y)\\
&=H(X\bmod H_1)-H(X\bmod H_1|Y)\\
&=\log|G/H_1|-H(X\bmod H_1|Y).
\end{align*}
Therefore,
\begin{align*}
H(X\bmod H_1|Y)=\log|G/H_1|-I(W[H_1])\stackrel{(a)}{<}\delta_0,
\end{align*}
where (a) follows from the fact that $W$ is $\delta_0$-determined by $H_1$. Similarly, we can show that $H(X\bmod H_2|Y)<\delta_0$.
Now since there is a one-to-one correspondence between $(X\bmod H_1\cap H_2)$ and $(X\bmod H_1,X\bmod H_2)$, we have
\begin{align*}
H(X&\bmod H_1\cap H_2|Y)\\
&=H(X\bmod H_1,X\bmod H_2|Y)\\
&\leq H(X\bmod H_1|Y)+H(X\bmod H_2|Y)<2\delta_0.
\end{align*}
Therefore,
\begin{equation}
\label{eqeq1}
\begin{aligned}
I(W)&=I(X;Y)\geq I(X\bmod H_1\cap H_2;Y)\\
&=H(X\bmod H_1\cap H_2)-H(X\bmod H_1\cap H_2|Y)\\
&>\log|G/(H_1\cap H_2)|-2\delta_0\\
&=\log|G| - \log|H_1\cap H_2|-2\delta_0.
\end{aligned}
\end{equation}
Now since $W$ is $\delta_0$-determined by $H_1$, we have
\begin{align*}
I(W)-\log|G/H_1|<\delta_0,
\end{align*}
hence
\begin{equation}
\label{eqeq2}
I(W)<\log|G|-\log|H_1|+\delta_0.
\end{equation}
By combining Equations \eqref{eqeq1} and \eqref{eqeq2}, we get
$$\log\frac{|H_1|}{|H_1\cap H_2|}<3\delta_0=\log 2,$$
which implies that $\displaystyle|H_1\cap H_2|>\frac{|H_1|}{2}$. On the other hand, since $H_1\cap H_2$ is a subgroup of $H_1$, we have either $H_1\cap H_2=H_1$ or $|H_1\cap H_2|\leq\frac{1}{2}|H_1|$. Therefore, $H_1=H_1\cap H_2$ and so $H_1\subset H_2$. Similarly, we can show that $H_2\subset H_1$. Hence $H_1=H_2$.
We conclude that $W$ is $\delta_0$-determined by at most one subgroup of $G$.
\end{proof}
\begin{lemma}
\label{lemSecondProp}
Let $\delta_0$ be as in Proposition \ref{propUnique}. Automorphic-symmetric channels cannot be $\delta_0$-determined by a subgroup that is non-characteristic.
\end{lemma}
\begin{proof}
Let $W$ be an automorphic-symmetric channel. Assume that $W$ is $\delta_0$-determined by a non-characteristic subgroup $H$.
Since $H$ is non-characteristic, there exists an automorphism $f$ of $G$ such that $f(H)\neq H$. Lemma \ref{lemDetAut} implies that $W$ is $\delta_0$-determined by $f(H)$. This contradicts Proposition \ref{propUnique}.
\end{proof}
\begin{theorem}
\label{theMain}
Automorphic-symmetric channels form an $\mathcal{H}_{ch}(G)$-polarizing family of channels.
\end{theorem}
\begin{proof}
The theorem follows from Lemmas \ref{lemAutPol1Step} and \ref{lemSecondProp}.
\end{proof}
\vspace*{3mm}
Theorem \ref{theMain} shows that the synthetic channels of an automorphic-symmetric channel polarize to channels that are determined by characteristic subgroups.
\begin{corollary}
\label{corMain}
If $(G,+)$ does not contain any non-trivial characteristic subgroup, then the family of automorphic-symmetric channels is strongly polarizing.
\end{corollary}
\begin{proof}
The corollary follows from Theorem \ref{theMain}.
\end{proof}
\begin{corollary}
\label{corMain2}
If $(G,+)$ does not contain any non-trivial characteristic subgroup, then all automorphic-symmetric channels strongly polarize. In particular, all generalized $q$-symmetric channels strongly polarize.
\end{corollary}
\begin{proof}
The corollary follows from Corollary \ref{corMain}, Proposition \ref{propMain} and Proposition \ref{propSymIsAut}.
\end{proof}
\begin{example}
If $G\equiv\mathbb{F}_p^r$ for a prime $p$, then every non-trivial subgroup is non-characteristic. In this case, every automorphic-symmetric channel strongly polarizes.
\end{example}
\begin{example}
If $\displaystyle G=\prod_{i=1}^n \mathbb{F}_{p_i}^{r_i}$, where $p_1,\ldots,p_n$ are distinct prime numbers, the reader can check that the characteristic subgroups of $(G,+)$ are those of the form $\displaystyle H=\prod_{i=1}^n\mathbb{F}_{p_i}^{l_i}$, where $l_i=0$ or $l_i=r_i$ for every $1\leq i\leq n$.
Therefore, if $\displaystyle G=\prod_{i=1}^n \mathbb{F}_{p_i}^{r_i}$ and $W$ is automorphic-symmetric, the polarization levels of $W$ are determined by subgroups of the form $\displaystyle H=\prod_{i=1}^n\mathbb{F}_{p_i}^{l_i}$, with $l_i=0$ or $l_i=r_i$ for every $1\leq i\leq n$.
\end{example}
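For small groups, the characteristic subgroups can be found by brute force. The sketch below enumerates all subgroups and all automorphisms of $G$ and keeps the subgroups fixed by every automorphism. It confirms that $\mathbb{F}_2^2$ has only the two trivial characteristic subgroups, while every subgroup of $\mathbb{F}_2\times\mathbb{F}_3$ is characteristic, in agreement with the product form of the example above.

```python
from itertools import combinations, permutations, product

def characteristic_subgroups(mods):
    """Brute force for G = Z_{m_1} x ... x Z_{m_k}; feasible for tiny groups only."""
    elems = list(product(*(range(m) for m in mods)))
    zero = tuple(0 for _ in mods)
    add = lambda a, b: tuple((u + v) % m for u, v, m in zip(a, b, mods))
    others = [e for e in elems if e != zero]

    # subgroups: subsets containing 0 and closed under addition
    subs = []
    for r in range(len(others) + 1):
        for c in combinations(others, r):
            s = set(c) | {zero}
            if all(add(a, b) in s for a in s for b in s):
                subs.append(frozenset(s))

    # automorphisms: addition-preserving bijections fixing 0
    autos = []
    for perm in permutations(others):
        f = dict(zip(others, perm))
        f[zero] = zero
        if all(f[add(a, b)] == add(f[a], f[b]) for a in elems for b in elems):
            autos.append(f)

    # characteristic subgroups: f(H) = H for every automorphism f
    char = [H for H in subs if all(frozenset(f[h] for h in H) == H for f in autos)]
    return len(subs), len(autos), len(char)

counts_F4 = characteristic_subgroups((2, 2))   # F_2^2 (Klein four-group)
counts_Z6 = characteristic_subgroups((2, 3))   # F_2 x F_3 (cyclic of order 6)
```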
\section{Discussion}
If $G\equiv\mathbb{Z}_q$ with composite $q$, then $G$ contains non-trivial characteristic subgroups, so we cannot apply Corollary \ref{corMain2}. Nevertheless, the simulations in \cite[Section V]{GulcuYeBarg} suggest that $q$-symmetric channels strongly polarize when the group $\mathbb{Z}_q$ is used. Proving this remains an open problem.
\section*{Acknowledgment}
I would like to thank Emre Telatar, Min Ye and Alexander Barg for helpful discussions.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
The problem of impurities in media formed by bosons is comprehensively studied in condensed matter physics. Even the properties of a single atom immersed in a weakly-interacting Bose gas change drastically \cite{Tempere2009,Vlietinck2015,Ardila2015,Grusdt2015,Vakarchuk2017}. Depending on the strength of the boson-impurity interaction, a number of physically distinct impurity phases can be realized, namely, the Bose-polaronic state \cite{Astrakharchik2004,Novikov2009,Christensen2015,Vakarchuk2018} in various spatial dimensions, which is very similar to the free-particle one but with kinematic characteristics modified by the presence of the bath; the molecular state \cite{Rath2013,Li2014}, when the impurity captures one boson with the formation of a dimer; a set of Efimov states \cite{Levinsen2014,Levinsen2015_2,Naidon2017,Sun} with the universal scaling behavior of energy levels; and higher-order conglomerates \cite{Wang,Casteels2013,Blume2014,Shi2018,Yoshida2018,Blume2019} which involve a larger number of host atoms. Remarkably, some of these phases can be observed in experiments \cite{Jorgensen,Hu}. The experimental progress in the field of ultra-cold atomic gases has recently led to the observation \cite{Yan} of Bose polarons at finite temperatures. This experiment confirmed previous theoretical predictions \cite{Levinsen_temp,Guenther2018,Pastukhov2018,Liu2019,Field2020,Pascual2021} about the breakdown of the quasi-particle description of Bose polarons in the close vicinity of the Bose-Einstein condensation (BEC) point.
Recently, the problem of two impurities immersed in dilute one- and three-dimensional Bose gases has become a subject of extensive examination. Physically, this problem differs substantially from the single Bose polaron one due to the emergence of an induced effective interaction \cite{Zinner2013,Zinner2014,Ardila2018,Guardian2018} between the impurity particles. In 1D, the character of this interaction crucially depends on the sign of the boson-impurity coupling constant \cite{Brauneis2021}: an effective attraction is found for positive couplings, while an induced repulsive potential is inherent to negative interactions. As it increases, the induced attraction between impurities leads to the formation of bipolarons \cite{Petkovich} in the continuum and on the lattice \cite{Pasek2019}, and even to the emergence of two-polaron bound states \cite{Will2021}. In one-dimensional geometries with harmonic trapping, the induced interaction causes the clustering \cite{Dehkharghani2018} of two initially non-interacting atoms and modifies their quench dynamics \cite{Mistakidis2020}. By switching the boson-impurity interaction in a 3D dilute BEC with two impurities, the transition from weakly-interacting bipolarons bound through a Yukawa potential to the Efimov trimer state was predicted in Ref.~\cite{Naidon}. Recently, properties of a single polaron in a 2D BEC have been discussed both analytically \cite{Pastukhov2018_2} and numerically \cite{Astrakharchik2D,Akaturk}. The arbitrary-$D$ one-polaron case was considered in Ref.~\cite{Khan}. To the best of our knowledge, the problem of two Bose polarons in a 2D Bose gas has never been discussed; therefore, the objective of this paper is to make a first step toward revealing the peculiarities of bipolaron physics and the boson-induced effective interaction between impurities by considering the static limit. 
The absence of impurity dynamics in this limit allows one to find the exact solution of the problem in dilute 1D Bose media in both the one- \cite{Kain2018} and two-particle \cite{Reichert2019,Reichert2019_2} cases. In 3D, only the case of the ideal Bose gas \cite{Panochko2021,Drescher2021} is exactly tractable, while the presence of a weak boson-boson interaction requires \cite{Levinsen2021} substantial numerical effort.
\section{Formulation}
\subsection{Model}
The discussed model consists of the $D$-dimensional (here we focus on the $D=2, 3$ cases) Bose gas loaded in a volume $L^D$ (with periodic boundary conditions imposed), with a weak interparticle interaction and a microscopic number $\mathcal{N}$ of heavy (infinite-mass) impurities immersed in it. The heavy particles are supposed to be randomly placed at positions $\{{\bf r}_j\}$. In the following, we adopt the imaginary-time path-integral approach with the Euclidean action
\begin{eqnarray}\label{S}
S=\int d x \psi^*(x)\left\{\partial_{\tau}-\varepsilon+\mu-\Phi({\bf r})\right\}\psi(x)\nonumber\\
-\frac{g_{B,\Lambda}}{2}\int d x|\psi(x)|^4,
\end{eqnarray}
where $x=(\tau, {\bf r})$ denotes the `position' in the $(D+1)$-dimensional space (and consequently $\int d x=\int_0^{\beta}d\tau\int_{L^D}d{\bf r}$), and the complex field $\psi(x)$ is periodic in $\tau$ with period $\beta$ (the inverse temperature of the system). We also use shorthand notations for the bosonic dispersion $\varepsilon=-\frac{\hbar^2\nabla^2}{2 m}$, for the chemical potential $\mu$ that fixes the average density $n$ of the Bose gas, and for the term
\begin{eqnarray}\label{Phi}
\Phi({\bf r})=\sum_{1\le j\le \mathcal{N}}g_{I,\Lambda}\delta_{\Lambda}({\bf r}-{\bf r}_j),
\end{eqnarray}
that describes interaction between Bose particles and impurities. The $\delta$-like two-body potential is ill-defined in the higher ($D\ge 2$) dimensions, and therefore, in order to obtain any reasonable results one should adopt some renormalization scheme. The latter is typically realized by the implication of the ultraviolet cutoff $\Lambda$ in all momentum summations and in the simultaneous rewriting of bare couplings $g_{B,\Lambda}$ and $g_{I,\Lambda}$ via the two-body vacuum binding energies $\epsilon_B$ and $\epsilon_I$
\begin{eqnarray}
g^{-1}_{B,\Lambda}=g^{-1}_B-\frac{1}{L^D}\sum_{{\bf k}}\frac{1}{2\varepsilon_k},\label{g_bare_B}\\
g^{-1}_{I,\Lambda}=g^{-1}_I-\frac{1}{L^D}\sum_{{\bf k}}\frac{1}{\varepsilon_k},\label{g_bare_I}
\end{eqnarray}
respectively (from now on we assume that all summations over the wave-vector ${\bf k}$ are restricted from the above $|{\bf k}|<\Lambda$). Such a `regularization' is already used in the definition of the point-like boson-impurity interaction potential, $\delta_{\Lambda}({\bf r})=\frac{1}{L^D}\sum_{|{\bf k}|<\Lambda}e^{{\rm i}{\bf k}{\bf r}}$, in Eq.~(\ref{Phi}). The `observable' couplings $g_B$ and $g_I$ are specified as follows
\begin{eqnarray}
g^{-1}_B=-\frac{\Gamma({{2-D}\over 2})}{(4\pi)^{D\over 2}}\left(\frac{m}{\hbar^2}\right)^{D\over 2}|\epsilon_B|^{{D\over 2}-1},\label{g_phys_B}\\
g^{-1}_I=-\frac{\Gamma({{2-D}\over 2})}{(2\pi)^{D\over 2}}\left(\frac{m}{\hbar^2}\right)^{D\over 2}|\epsilon_I|^{{D\over 2}-1},\label{g_phys_I}
\end{eqnarray}
where $\Gamma(z)$ stands for the gamma function. Note that the bound states are only possible for positive $g_B$s and $g_I$s, but it is convenient to parametrize negative couplings by the binding energies as well. By careful inspection of the $D\to 2$ limit one can conclude that Eqs.~(\ref{g_bare_B}), (\ref{g_bare_I}) and (\ref{g_phys_B}), (\ref{g_phys_I}) provide a correct description of zero-range potentials even in the two-dimensional case. Moreover, in $D=2$ the pseudo-potential always supports a bound state.
An alternative way (see, for instance, \cite{Volosniev2015}) to deal with point-like interactions is to start from some `physical' (for instance, Gaussian) potentials and then relate the appropriate coupling constants to the $s$-wave scattering lengths $a_B$ and $a_I$ in the limit where the effective ranges are the smallest length scales in the system. In the following, no restrictions are set on the magnitude of the boson-impurity interaction, while the Bose gas itself is expected to be extremely dilute.
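As a consistency check of Eq.~(\ref{g_phys_B}) in $D=3$ (units $\hbar=m=1$): assuming the standard relation $|\epsilon_B|=\hbar^2/(m a^2_B)$ between the two-boson binding energy and the $s$-wave scattering length (reduced mass $m/2$), the formula should reduce to the textbook coupling $g_B=4\pi\hbar^2 a_B/m$. The short sketch below verifies this numerically for an arbitrary illustrative $a_B$.

```python
import math

D = 3
a_B = 0.37                            # illustrative scattering length (hbar = m = 1)
eps_B = 1.0 / a_B ** 2                # |eps_B| = hbar^2 / (m a_B^2)

# Eq. (g_phys_B): 1/g_B = -Gamma((2-D)/2) / (4 pi)^(D/2) * |eps_B|^(D/2 - 1)
g_B_inv = (-math.gamma((2 - D) / 2) / (4 * math.pi) ** (D / 2)
           * eps_B ** (D / 2 - 1))
g_B = 1.0 / g_B_inv                   # should equal 4 pi a_B
```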
\subsection{Effective field theory approach}
The further analysis will be performed in the spirit of the effective field theory approach (see, for a review, \cite{Andersen2004}), which is known to be extremely convenient for many-boson systems. In particular, this formulation automatically guarantees the implementation of the Hugenholtz-Pines theorem (which is a concrete manifestation of the Goldstone theorem) in every order of the loop expansion. Moreover, the effective field theory approach provides non-perturbative predictions for the Bose gas thermodynamics. In the limit of weak boson-boson coupling, the loop expansion is identical to the perturbation theory in terms of the characteristic small parameter $a^D_Bn$. The main idea of the method relies on the separation of the `classical' dynamics during the computation of the partition function by means of the path integral
\begin{eqnarray}\label{psi}
\psi(x)=\psi_0({\bf r})+\tilde{\psi}(x),\ \
\psi^*(x)=\psi^*_0({\bf r})+\tilde{\psi}^*(x),
\end{eqnarray}
where the introduced classical fields are determined by the minimization of the action (\ref{S}): $\delta S_0=\delta S[\psi^*_0,\psi_0]=0$. Note that in general $|\psi_0({\bf r})|^2$ should not be confused with the Bose condensate density. In the absence of impurities, $\Phi({\bf r})=0$, the solution $\psi_0({\bf r})$ is real and uniform. Immersing a microscopic number of heavy particles in the Bose condensate cannot change the character of this solution in principle: $\psi_0({\bf r})$ becomes only slightly non-uniform, i.e., $\int_{L^D}d{\bf r}|\psi_0({\bf r})|^2\propto L^D$. Of course, one may argue that localized solutions $\psi_0({\bf r})$ decrease the total energy by $\propto-\mathcal{N}|\epsilon_I|$, but any non-zero repulsion between bosons immediately increases the energy of the system by $\propto N^2g_B/a^D_I$. Therefore, the collapsed BEC state \cite{Panochko2021} is not energetically preferable in the thermodynamic limit, where both the number of repulsively interacting bosons $N$ and the volume of the box $L^D$ tend to infinity.
Performing the shift (\ref{psi}), we end up with the following effective action
\begin{eqnarray}\label{S_eff}
S_{\textrm{eff}}=S_0-\frac{1}{2}\int d x\left[\tilde{\psi}^*(x),\tilde{\psi}(x)\right]\hat{K}\left[ \begin{array}{c}
\tilde{\psi}(x)\\
\tilde{\psi}^*(x)\\
\end{array} \right],
\end{eqnarray}
where only the part Gaussian in the fluctuation fields is written down explicitly. Here the $2\times 2$ matrix operator $\hat{K}$ with elements
\begin{eqnarray}\label{K}
&&\hat{K}_{11}=\varepsilon-\mu+\Phi({\bf r})+2g_{B,\Lambda}|\psi_0({\bf r})|^2-\partial_{\tau},\nonumber\\
&&\hat{K}_{12}=\hat{K}^*_{21}=g_{B,\Lambda}\psi^2_0({\bf r}),\nonumber\\
&&\hat{K}_{22}=\varepsilon-\mu+\Phi({\bf r})+2g_{B,\Lambda}|\psi_0({\bf r})|^2+\partial_{\tau}.
\end{eqnarray}
is introduced. Taking into account the equation for $\psi_0({\bf r})$
\begin{eqnarray}\label{Eq_psi_0}
\left\{\varepsilon-\mu+\Phi({\bf r})+g_{B,\Lambda}|\psi_0({\bf r})|^2\right\}\psi_0({\bf r})=0,
\end{eqnarray}
and performing the Gaussian integration in (\ref{S_eff}), we finally obtain the grand potential of the Bose system with the impurities immersed
\begin{eqnarray}\label{Omega}
\Omega = -\frac{g_{B,\Lambda}}{2}\int_{L^D}d{\bf r}|\psi_0({\bf r})|^4+\frac{1}{2\beta}\textrm{Sp}\ln\hat{K}-\textrm{const},
\end{eqnarray}
where $\textrm{Sp}$ denotes the trace in the $D+1$ space. The constant term (counterterm) in (\ref{Omega}) is most straightforwardly represented in the plane-wave basis, $\textrm{const}=\frac{1}{2}\sum_{\bf k}\langle {\bf k}|\varepsilon-\mu+2g_{B,\Lambda}|\psi_0({\bf r})|^2+\Phi({\bf r})|{\bf k}\rangle$; it cannot be obtained by the functional integration and has to be added by hand \cite{Salasnich2016} in order to implement the standard normal-ordering prescription. Consequently, the calculation of the thermodynamics of `Bose gas + static impurities' reduces to finding a solution of Eq.~(\ref{Eq_psi_0}) and then, with $\psi_0({\bf r})$ in hand, to the evaluation of the functional determinant. Note that by taking into account $S_0$ only, one reproduces the mean-field \cite{Volosniev2017,Pastukhov2019,Panochko2019,Hryhorchak2020,Hryhorchak2020_2,Massignan2021} description of the system generalized to $\mathcal{N}$ impurities in the static limit.
\subsection{Limit of dilute Bose gas}
In the general case the above program, which can be realized to the very end in 1D \cite{Reichert2019} even at finite impurity masses \cite{Volosniev2017,Panochko2019,Jager2020}, requires considerable numerical effort in higher dimensions, but the limit of weak inter-boson interaction can be handled relatively easily. Indeed, the intrinsic length scale of the dilute Bose gas is the so-called coherence length $\xi=\frac{\hbar}{mc}$ (with $c=\sqrt{ng_B/m}$ being the sound velocity), which is large in comparison to the average interparticle distance and to the $s$-wave scattering length $a_B$. The magnitude of the boson-impurity interaction, in turn, is dictated by $a_I$. If we additionally assume that $a_I\ll \xi$, the solution of Eq.~(\ref{Eq_psi_0}) is found immediately: $\psi_0({\bf r})=\sqrt{\mu/g_{B,\Lambda}}\simeq \sqrt{n}$. In all other cases, we can apply a successive expansion in the `non-uniformity' of the $\psi_0$-field
\begin{eqnarray}\label{psi_0_solution}
\psi_0({\bf r})=\sqrt{\mu/g_{B,\Lambda}}\left\{1-\bar{\psi}^{(1)}_0({\bf r})-\bar{\psi}^{(2)}_0({\bf r})-\ldots \right\},
\end{eqnarray}
where after the substitution in Eq.~(\ref{Eq_psi_0}) the dimensionless functions $\bar{\psi}^{(1)}_0({\bf r})$, $\bar{\psi}^{(2)}_0({\bf r})$ satisfy the following equations:
\begin{eqnarray}
&&\left\{\varepsilon+2\mu+\Phi({\bf r})\right\}\bar{\psi}^{(1)}_0({\bf r})=\Phi({\bf r}),\label{barpsi^1_0}\\
&&\left\{\varepsilon+2\mu+\Phi({\bf r})\right\}\bar{\psi}^{(2)}_0({\bf r})=3\mu \left(\bar{\psi}^{(1)}_0({\bf r})\right)^2.\label{barpsi^2_0}
\end{eqnarray}
Note that the above approximate procedure does not require the boson-impurity interaction to be weak. Furthermore, by naive dimensional analysis it is easy to argue that, both at weak and at strong couplings $g_I$, the contribution of the second-order correction $\bar{\psi}^{(2)}_0({\bf r})$ to the thermodynamics of the system is much smaller than the one originating from $\bar{\psi}^{(1)}_0({\bf r})$. Therefore, in what follows we focus exclusively on the first-order correction. Even this simple approximation effectively sums up an infinite set of terms of the standard perturbation theory for a model with a uniform condensate \cite{Kain2018}. Equation (\ref{barpsi^1_0}) with $\Phi({\bf r})$ given by (\ref{Phi}) can be solved for arbitrary $\mathcal{N}$ by means of the Fourier transformation
\begin{eqnarray}\label{barpsi^1_0_sol}
\bar{\psi}^{(1)}_0({\bf r})=\sum_{1\le j\le \mathcal{N}}A_j\frac{1}{L^D}\sum_{\bf k}\frac{e^{{\rm i}{\bf k}({\bf r}-{\bf r}_j)}}{\varepsilon_k+2\mu},
\end{eqnarray}
with $\varepsilon_k=\frac{\hbar^2k^2}{2m}$ and coefficients $A_j=\sum_{1\le i\le \mathcal{N}}T_{ji}(-2\mu)$, where matrix $T_{ji}(-2\mu)$ is introduced in Appendix.
We can now proceed with the calculations of the functional determinant in (\ref{Omega}). Taking into account the extreme diluteness of the Bose subsystem, it is enough to expand $\textrm{Sp}\ln\hat{K}\simeq\textrm{Sp}\ln\hat{K}^{(0)}+\textrm{Sp}\left\{[\hat{K}^{(0)}]^{-1}\Delta\hat{K}\right\}$, where $\hat{K}^{(0)}$ is given by (\ref{K}) but with $\psi_0({\bf r})\to \sqrt{\mu/g_{B,\Lambda}}$ and $\Delta\hat{K}=\hat{K}-\hat{K}^{(0)}$. Following our previous discussion, we ignore in $\Delta\hat{K}$ all higher-order corrections except $\bar{\psi}^{(1)}_0({\bf r})$. After this, the calculations are relatively simple and at absolute zero we obtain the $\Omega$-potential in the adopted approximation
\begin{eqnarray}\label{Omega_approx}
&&\Omega \simeq - L^D\frac{\mu^2}{2g_{B,\Lambda}}+\frac{\mu}{g_{B,\Lambda}}\sum_{1\le j\le \mathcal{N}}A_j\nonumber\\
&&+\frac{1}{2}\sum_{\bf k}\langle {\bf k}|\mathcal{E}-\varepsilon-\mu-\Phi({\bf r})|{\bf k}\rangle\nonumber\\
&&+\frac{1}{L^D}\sum_{\bf k}\left\{1-\frac{\varepsilon_k+\mu/2}{E_k}\right\}\sum_{1\le j\le \mathcal{N}}A_j,
\end{eqnarray}
where $\mathcal{E}=\sqrt{(\varepsilon+\Phi({\bf r}))^2+2\mu(\varepsilon+\Phi({\bf r}))}$ and $E_k=\sqrt{\varepsilon_k^2+2\mu\varepsilon_k}$ stands for the Bogoliubov spectrum of the `pure' Bose system. It should be noted that for dilute Bose systems the impact of the so-called quantum fluctuations (the terms with summations over the wave-vector) on $\Omega$ is much smaller than that of the first two terms (the mean-field contributions). The last step is to replace the bare couplings $g_{B,\Lambda}$ and $g_{I,\Lambda}$ via (\ref{g_bare_B}) and (\ref{g_bare_I}), respectively. This procedure ensures the convergence of the sums over the wave-vector in the last two terms of (\ref{Omega_approx}). The trace in the third term of (\ref{Omega_approx}) can then be computed (see Appendix for details). With the well-defined grand potential, we can relate, by using the thermodynamic identity $n=-\frac{\partial}{\partial \mu}\frac{\Omega}{L^D}$, the chemical potential of the Bose system to its equilibrium density $n$. Performing these calculations, one must keep in mind that the presence of a microscopic number of impurities cannot change the properties of the system in principle. So, if we denote by $\mu_B$ the chemical potential of the Bose gas without external particles (and by $\Omega_B$ the corresponding grand potential), the difference $\Delta\mu=\mu-\mu_B\propto \mathcal{N}/L^D$ should be small. Using this fact and $n=-\frac{\partial}{\partial \mu}\frac{\Omega_B}{L^D}-\frac{\partial}{\partial \mu}\frac{\Delta\Omega}{L^D}$, we can identify the small correction $\Delta\mu=-\frac{\partial \Delta\Omega}{\partial \mu_B}/\frac{\partial^2 \Omega_B}{\partial \mu_B^2}$. The latter formula allows one to determine the energy that the Bose system gains when $\mathcal{N}$ impurities are immersed
\begin{eqnarray}
\Delta E_{\mathcal{N}}=\left(\Omega-\Omega_B\right)|_{\mu\to \mu_B},
\end{eqnarray}
which is an explicit manifestation of the well-known theorem about small corrections to the thermodynamic potentials.
\section{Results}
Before proceeding to our main results, it is necessary to analyze the case of `pure' bosons. Setting $\Phi({\bf r})=0$ in (\ref{Omega_approx}) and calculating the integrals, we obtain for the density
\begin{eqnarray}
n=\frac{\mu_B}{g_{B}}\left\{1-\frac{\Gamma(D)}{{D\over 2}\Gamma^2({D\over 2})}\left(\frac{\mu_B}{|\epsilon_B|}\right)^{{D\over 2}-1}\right\},
\end{eqnarray}
which allows one to obtain an expression for $\mu_B$ iteratively. For weakly non-ideal three-dimensional bosons we recover the well-known formula ($|\epsilon_B|=\frac{\hbar^2}{ma_{B}^2}$ in 3D)
\begin{eqnarray}
\mu_B=\frac{4\pi\hbar^2a_{B}n}{m}\left\{1+\frac{32}{3\sqrt{\pi}}\sqrt{na_{B}^3}+\ldots\right\}.
\end{eqnarray}
Similarly, in the two-dimensional case we have the transcendental equation \cite{Mora2009}
\begin{eqnarray}
n=\frac{m\mu_B}{4\pi\hbar^2}\left\{\ln\frac{|\epsilon_B|}{\mu_B}-1\right\}.
\end{eqnarray}
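In the dimensionless variables $x=\mu_B/|\epsilon_B|$ and $\tilde{n}=4\pi\hbar^2 n/(m|\epsilon_B|)$ (our notation), the transcendental equation reads $\tilde{n}=x\left[\ln(1/x)-1\right]$ and is readily solved by fixed-point iteration. A minimal numerical sketch, outside the manuscript proper:

```python
import math

def mu_b_2d(n_tilde, tol=1e-12, max_iter=10_000):
    """Solve n_tilde = x*(ln(1/x) - 1) for x = mu_B/|eps_B| by fixed-point
    iteration x <- n_tilde/(ln(1/x) - 1); the map contracts in the dilute
    regime x << 1."""
    x = n_tilde  # initial guess, adequate when n_tilde << 1
    for _ in range(max_iter):
        x_new = n_tilde / (math.log(1.0 / x) - 1.0)
        if abs(x_new - x) < tol * x:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

x = mu_b_2d(1e-4)
assert abs(x * (math.log(1.0 / x) - 1.0) - 1e-4) < 1e-12  # satisfies the equation
assert x < 1e-4  # mu_B << |eps_B| in the dilute limit
```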
Having verified that the limit of the Bose gas without impurities is correctly reproduced by the adopted approach, we are ready to present our main results concerning the binding energy of one and two impurity atoms in dilute three- and two-dimensional Bose gases.
\subsection{3D case}
In 3D, the general structure of the two-impurity binding energy in the dilute Bose gas $(n\xi^3\ll 1)$ can be represented as
\begin{eqnarray}\label{E_2_3D}
\Delta E_2= \Delta E^{(0)}_2\left[\varepsilon_1\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)+\frac{1}{n\xi^3}\varepsilon_2\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)+\ldots\right],
\end{eqnarray}
where $\Delta E^{(0)}_2=2g_In$ is the contribution of the ideal Bose gas, $a_I$ is the $s$-wave scattering length that parametrizes the (renormalized) two-body coupling $g_I=\frac{2\pi\hbar^2a_I}{m}$ and $R$ is the distance between two static particles. The first term in (\ref{E_2_3D}) has a simple analytic form
\begin{eqnarray}
\varepsilon_1\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)=\frac{\xi/a_I}{\xi/a_I-2+e^{-2R/\xi}/(R/\xi)},
\end{eqnarray}
and originates purely from the mean-field correction to the grand potential [the second term in (\ref{Omega_approx})], while $\varepsilon_2\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$ contains both mean-field and purely quantum corrections. Note that in the formula for $\Omega$ only the one-loop corrections were taken into account, and a consistent treatment of the next-to-leading-order terms in the expansion in the small parameter $1/(n\xi^3)$ necessarily requires the calculation of the two-loop diagrams for the grand potential. By setting the distance $R$ between the heavy particles to infinity, one obtains from (\ref{E_2_3D}) the one-impurity limit. The typical behavior of the functions $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\infty\right)$ is presented in Fig.~\ref{one_particle_3D}.
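The limiting behavior of $\varepsilon_1$ can be verified directly from the formula above (a quick numerical sketch; the function name and sample values are ours):

```python
import math

def eps1(a_over_xi, r_over_xi):
    """Mean-field function eps_1(a_I/xi; R/xi) quoted above for two
    static impurities in a 3D dilute Bose gas."""
    x = 1.0 / a_over_xi  # xi/a_I
    r = r_over_xi
    return x / (x - 2.0 + math.exp(-2.0 * r) / r)

# R -> infinity: the one-impurity limit, eps_1 -> 1/(1 - 2*a_I/xi)
assert abs(eps1(0.01, 50.0) - 1.0 / (1.0 - 0.02)) < 1e-9
# a_I/xi -> 0: the ideal-gas value eps_1 -> 1
assert abs(eps1(1e-6, 1.0) - 1.0) < 1e-4
# negative a_I: the denominator changes sign, i.e. a simple pole in R
assert eps1(-0.01, 0.005) * eps1(-0.01, 0.1) < 0.0
```

The last assertion exhibits the simple-pole singularity at negative $a_I$ discussed below in connection with Fig.~\ref{two_particle_3D}.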
\begin{figure}[h!]
\includegraphics[width=0.35\textwidth,clip,angle=-0]{one_particle_3D.pdf}
\caption{Dimensionless functions $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\infty\right)$ determining the one-impurity energy in 3D dilute Bose gas.}\label{one_particle_3D}
\end{figure}
Let us recall that the problem considered here is exactly solvable when the bosons are non-interacting. Therefore, it should be clearly understood that the presented results are accurate only if the coherence length $\xi$ is the largest length scale in the system. In order to reveal the interplay between the regime of a very dilute ($a_I/\xi \to 0$) Bose gas and intermediate boson-impurity interactions, we have plotted in Fig.~\ref{two_particle_3D} the binding energy of two heavy particles for positive and negative $s$-wave scattering lengths $a_I$.
\begin{figure}[h!]
\includegraphics[width=0.35\textwidth,clip,angle=-0]{two_particle_3D_a001.pdf}
\includegraphics[width=0.35\textwidth,clip,angle=-0]{two_particle_3D_a1.pdf}
\caption{Mean-field and the first-order quantum corrections $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$ to the energy of 3D dilute Bose gas generated by two impurities for $\frac{a_I}{\xi}=\pm 0.01$ and $\frac{a_I}{\xi}=\pm 1$.}\label{two_particle_3D}
\end{figure}
Comparing these findings to the ideal Bose gas results \cite{Panochko2021}, we observe similar patterns in the behavior of the systems at weak coupling: at positive $a_I$ the binding energy is a monotonic function of $R$, while at negative boson-impurity scattering lengths both $\varepsilon_{1,2}\left(-0.01;\frac{R}{\xi}\right)$ have a simple-pole singularity. When the interaction increases (see the lower panel in Fig.~\ref{two_particle_3D}), the mean-field and quantum corrections to the ground-state energy of the 3D Bose gas possess infinite discontinuities independently of the sign of $a_I$.
\subsection{2D case}
In general, low-dimensional dilute Bose systems with static impurities are very peculiar. When the interaction between bosons is switched off, these systems are insensitive to the boson-impurity interaction in their non-collapsed ground state, and therefore a finite compressibility of the host system is required for the binding energy of the heavy particles to be non-zero. Introducing the two-body $s$-wave scattering length $a_I$ through the boson-impurity vacuum bound-state energy $|\epsilon_I|=2e^{-2\gamma}\hbar^2/(ma^2_I)$, we can write down the energy that the 2D Bose gas gains when two heavy particles are immersed in it
\begin{eqnarray}\label{E_2_2D}
\Delta E_2= 2\frac{2\pi\hbar^2n}{m}\left[\varepsilon_1\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)\right.\nonumber\\
\left.+\frac{1}{n\xi^2}\varepsilon_2\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)+\ldots\right].
\end{eqnarray}
Note that, in contrast to the 3D case, both $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$ tend to zero (at least logarithmically) in the limit of an ideal Bose gas ($\xi \to \infty$). At large distances $R$, Eq.~(\ref{E_2_2D}) gives twice the binding energy of a single impurity, which is presented in Fig.~\ref{one_particle_2D}.
\begin{figure}[h!]
\includegraphics[width=0.35\textwidth,clip,angle=-0]{one_particle_2D.pdf}
\caption{One-impurity binding energy $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\infty\right)$ (see Eq.~(\ref{E_2_2D})) in 2D case.}\label{one_particle_2D}
\end{figure}
Particularly, these calculations clearly demonstrate the weakening of the role of quantum fluctuations in the formation of polarons in two-dimensional Bose systems. Actually, this observation \cite{Jager2020} seems to be intrinsic for the low-dimensional systems in general.
The numerical computations of the two-impurity energies (see Fig.~\ref{two_particle_2D})
\begin{figure}[h!]
\includegraphics[width=0.35\textwidth,clip,angle=-0]{two_particle_2D_a001.pdf}
\includegraphics[width=0.35\textwidth,clip,angle=-0]{two_particle_2D_a1.pdf}
\caption{Two-impurity binding energies $\varepsilon_{1,2}\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$ in 2D dilute Bose gas.}\label{two_particle_2D}
\end{figure}
in the 2D Bose gas demonstrate a qualitative similarity between the two- and three-dimensional cases. At weak boson-impurity interactions $a_I/\xi \ll 1$, where our effective field-theoretical formulation is supposed to make quantitative predictions, the mean-field term $\varepsilon_{1}\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$, as well as the one that includes the quantum corrections, $\varepsilon_{2}\left(\frac{a_I}{\xi};\frac{R}{\xi}\right)$, behaves as a monotonic function of $R$. The interaction-induced effective two-body potential between static particles at large $a_I/\xi$ always contains a singularity.
\section{Conclusions}
In summary, by means of the effective field theory formulation, we have calculated the impurity-induced shifts of the ground-state energies of two- and three-dimensional dilute Bose gases. In particular, by taking into account the extreme diluteness of the host bosons, we have proposed an approximate procedure that allows one to calculate the properties of an arbitrary (microscopic) number of static impurities in terms of the characteristic small parameter $1/(n\xi^D)$ (where $n$ and $\xi$ are the density and coherence length of the bosons, respectively). The numerical calculations of the binding energies of two static impurities in dilute 2D and 3D Bose gases, performed for a wide range of boson-impurity interactions and inter-impurity distances, have revealed the peculiarities of the medium-induced (Casimir) forces: \textit{i)} the two-body effective potential always demonstrates singular behavior at distances between impurities comparable to the boson-impurity $s$-wave scattering length $a_I$; \textit{ii)} the impact of purely quantum corrections decreases with the lowering of the spatial dimensionality. Similar singularities are also intrinsic to the binding energy of a single impurity at $a_I\sim \xi$, which may signal \cite{Schmidt2021} the inapplicability of the adopted approximate treatment for the calculation of the `classical' solution $\psi_0({\bf r})$ in that region.
\section{Appendix}
For completeness, in this section we give some details of the calculations not presented in the main text. Let us first start with the equation that determines the classical field $\psi_0({\bf r})$. Explicitly writing down Eq.~(\ref{barpsi^1_0}) after the implementation of the ansatz (\ref{barpsi^1_0_sol}),
\begin{eqnarray*}
\sum_{1\le j\le \mathcal{N}}A_j\delta_{\Lambda}({\bf r}-{\bf r}_j)+\sum_{1\le j\le \mathcal{N}}g_{I,\Lambda}\delta_{\Lambda}({\bf r}-{\bf r}_j)\\
\times\sum_{1\le i\le \mathcal{N}}A_i\frac{1}{L^D}\sum_{\bf k}\frac{e^{{\rm i}{\bf k}({\bf r}-{\bf r}_i)}}{\varepsilon_k+2\mu}=\sum_{1\le j\le \mathcal{N}}g_{I,\Lambda}\delta_{\Lambda}({\bf r}-{\bf r}_j),
\end{eqnarray*}
and combining the $j=i$ terms of the double sum with the first term of the equation, we obtain
\begin{eqnarray*}
A_j\left[\frac{1}{g_{I,\Lambda}}+\frac{1}{L^D}\sum_{\bf k}\frac{1}{\varepsilon_k+2\mu}\right]+\\
\sum_{1\le i\neq j\le \mathcal{N}}\frac{1}{L^D}\sum_{\bf k}\frac{e^{{\rm i}{\bf k}({\bf r}_j-{\bf r}_i)}}{\varepsilon_k+2\mu}A_i=1.
\end{eqnarray*}
The divergent sum in the square brackets is now regularized by the renormalization of the coupling constant (\ref{g_bare_I}), so the final result contains only the observable $g_I$. One can easily recognize the square brackets as the inverse of the boson-impurity two-body $T$-matrix
\begin{eqnarray*}
t^{-1}_I(\omega)=g^{-1}_{I,\Lambda}-\frac{1}{L^D}\sum_{\bf k}\frac{1}{\omega-\varepsilon_k},
\end{eqnarray*}
and introducing auxiliary notations
\begin{eqnarray*}
\Delta_{ij}(\omega)=\frac{1}{L^D}\sum_{\bf k}\frac{e^{{\rm i}{\bf k}({\bf r}_i-{\bf r}_j)}}{\omega-\varepsilon_k},
\end{eqnarray*}
we find the result for coefficients $A_j$ announced in the main text
\begin{eqnarray*}
A_i=\sum_{1\le j\le \mathcal{N}}T_{ij}(-2\mu),\\
T^{-1}_{ij}(-2\mu)=\delta_{ij}t^{-1}_I(-2\mu)-\Delta_{ij}(-2\mu)(1-\delta_{ij}).
\end{eqnarray*}
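For the two-impurity configuration considered in the main text, the matrix inversion is elementary: denoting $\Delta_{12}=\Delta_{21}=\Delta_R(-2\mu)$, one finds
\begin{eqnarray*}
A_1=A_2=\frac{1}{t^{-1}_I(-2\mu)-\Delta_{R}(-2\mu)},
\end{eqnarray*}
while for a single impurity (or, equivalently, at $R\to\infty$) $A_j=t_I(-2\mu)$.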
For the calculation of the trace in the third term of (\ref{Omega_approx}), we have used the formal identity
\begin{eqnarray*}
\sum_{\bf k}\langle {\bf k}|\mathcal{E}-\varepsilon-\mu-\Phi({\bf r})|{\bf k}\rangle=\\
\int d\omega D(\omega)\left[\sqrt{\omega^2+2\mu \omega}-\omega-\mu\right],\\
D(\omega)=\sum_{\bf k}\langle {\bf k}|\delta(\omega-\varepsilon-\Phi({\bf r}))|{\bf k}\rangle.
\end{eqnarray*}
The density of states $D(\omega)$ is easily calculated within the Green's function method \cite{Panochko2021}
\begin{eqnarray*}
D(\omega)=\sum_{\bf k}\left[\delta(\omega-\varepsilon_k)-\frac{1}{\pi}{\rm Im} \frac{\langle {\bf k}|\mathcal{T}(\omega+{\rm i}0)|{\bf k}\rangle}{(\omega+{\rm i}0-\varepsilon_k)^2}\right],
\end{eqnarray*}
where the $T$-matrix $\mathcal{T}(\omega)$ characterizes the scattering of a single boson on $\mathcal{N}$ impurities
\begin{eqnarray*}
\langle {\bf q}|\mathcal{T}(\omega)|{\bf k}\rangle=\sum_{1\le i,j\le \mathcal{N}}e^{-{\rm i}{\bf q}{\bf r}_i}T_{ij}(\omega)e^{{\rm i}{\bf k}{\bf r}_j}.
\end{eqnarray*}
The calculation of $\langle {\bf k}|\mathcal{T}(\omega+{\rm i}0)|{\bf k}\rangle$ in the density of states requires knowledge of the explicit analytic formula for the boson-impurity two-body $T$-matrix
\begin{eqnarray*}
t^{-1}_{I}(\omega)=\frac{\Gamma({{2-D}\over 2})}{(2\pi)^{D\over 2}}\left(\frac{m}{\hbar^2}\right)^{D\over 2}\left[(-\omega)^{{D\over 2}-1}-|\epsilon_I|^{{D\over 2}-1}\right],
\end{eqnarray*}
and of the function $\Delta_{ij}(\omega)=\Delta_{R}({\omega})$ of the distance $R=|{\bf r}_i-{\bf r}_j|$ between two impurities in arbitrary $D$
\begin{eqnarray*}
\Delta_{R}(\omega)=\frac{1}{(2\pi)^{D\over 2}}\frac{2mk^{D-2}_{\omega}}{\hbar^2}\frac{K_{{D\over 2}-1}(Rk_{\omega})}{\left(Rk_{\omega}\right)^{{D\over 2}-1}},
\end{eqnarray*}
where $k_{\omega}=\sqrt{2m(-\omega)}/\hbar$, and $K_{\nu}(z)$ is the modified Bessel function of the second kind \cite{Abramowitz}.
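In three dimensions, where $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$, the above expression reduces to the Yukawa-like form
\begin{eqnarray*}
\Delta_{R}(\omega)=\frac{m}{2\pi\hbar^2}\frac{e^{-k_{\omega}R}}{R},
\end{eqnarray*}
and, since $\mu\simeq mc^2$ in the dilute limit, at $\omega=-2\mu$ one has $k_{\omega}=2/\xi$, which is precisely the origin of the $e^{-2R/\xi}$ factor in the mean-field function $\varepsilon_1$.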
\begin{center}
{\bf Acknowledgements}
\end{center}
We are indebted to Dr.~Iryna Pastukhova for comments on the manuscript.
\section{Introduction}
\label{sec:intro}
The motivation to study nuclear matter under the extreme conditions of temperature and density reached in relativistic heavy ion collisions is primarily the search for a new state of matter, called the Quark Gluon Plasma for historical reasons because it was thought to be a gas of deconfined quarks and gluons covering a volume much larger than an individual nucleon (a plasma being an ionized gas)~\cite{MJTROP}. The early signals, searched for at the Berkeley Bevalac, the Brookhaven AGS and the CERN SpS, were collective hydrodynamic flow~\cite{ROPBeV,CP99}, strangeness enhancement~\cite{RM82,E802-90}, baryon stopping~\cite{Ahle98} and $J/\Psi$ suppression~\cite{MatsuiSatz,NA50}, which was believed to be the `gold-plated' signature of deconfinement. All these signatures were found in the nucleon-nucleon c.m. energy range $2<\sqrt{s_{NN}}< 17.2$ GeV of these machines, but in my opinion the Quark Gluon Plasma was not found~\cite{MJTROP}.
Nevertheless, the prevailing opinion in the field until the end of the 20$^{\rm th}$ century was that the `Hard Probe', $J/\Psi$ suppression, was the best method to find the QGP at RHIC.
However, in the 1990's a new hard probe of the color response of the medium, `Jet Quenching'~\cite{GP90,WG92}, was proposed and given a firm basis in QCD~\cite{BDMPS} as coherent Landau-Pomeranchuk-Migdal bremsstrahlung of gluons by the outgoing hard-scattered partons traversing the medium. The discovery of a huge quenching of high-$p_T$ $\pi^0$, by a factor of $\sim 5$ in central Au+Au collisions at RHIC (Fig.~\ref{fig:QM05wow}),
\begin{figure}[!h]
\includegraphics[width=0.45\textwidth]{figs/raa-Tshirt.eps}
\caption[]{Nuclear modification factor, $R_{AA}$ for direct-$\gamma$, $\pi^0$ and $\eta$ in Au+Au central collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{YAQM05}, together with GLV theory curve~\cite{GLV}.}
\label{fig:QM05wow}
\end{figure}
coupled with the absence of such an effect at SpS energies (Fig.~\ref{fig:PXCu})
\begin{figure}[!h]
\includegraphics[width=0.90\linewidth]{figs/raa_CuCu_energy_Cc08.eps}
\caption[]{Nuclear modification factor, $R_{AA}$ for $\pi^0$ in Cu+Cu central collisions at $\sqrt{s_{NN}}=200$, 62.4 and 22.4 GeV~\cite{ppg084}, together with Vitev theory curves~\cite{Vitev2}.}
\label{fig:PXCu}
\end{figure}
has led to the opinion by some that we now really understand everything that is happening in RHI collisions at RHIC (e.g. nearly opaque matter, with partons visible only from the surface). In my opinion this
is the principal failure at RHIC. In fact, it is clear to me that we are still on a long learning curve and far from understanding in detail the many discoveries and effects observed at RHIC, the underlying fundamental physics of QCD in a color-charged medium, and the properties of the medium produced at RHIC. I will sketch a few of the successes and outline some of the many open questions below.
\section{Successes}
The real success at RHIC is the precision and accuracy of the measurements and excellent data sets at the same $\sqrt{s_{NN}}$, with absolute cross sections and semi-inclusive yield measurements in p-p, Au+Au, (see Fig.~\ref{fig:pi0cross}) d+Au and Cu+Cu, as well as measurements over a broad range of $\sqrt{s_{NN}}$ for Cu+Cu and p-p. The impressive agreement of the p-p measurements with QCD {\em predictions} (Fig.~\ref{fig:pi0cross}a) gives added confidence to both the measurements and the theory.
In Fig.~\ref{fig:pi0cross}b, both the p-p and Au+Au spectra exhibit a pure power law for $p_T>4$ GeV/c with $n=8.10\pm 0.05$ which indicates that their ratio will be constant $\sim 0.2$ over the range $4 \leq p_T\leq 12$ GeV/c.
The ratio:
\begin{equation}
R_{AA}(p_T)=\frac{d^2 N^{\pi}_{AA}/dp_T dy \big/ N^{inel}_{AA}}{ \mean{T_{AA}}\, d^2\sigma^{\pi}_{pp}/dp_T dy}
\label{eq:RAA}
\end{equation}
is used as the most convenient way to represent the physics and not because, for instance, the efficiency cancels in the ratio, which it certainly doesn't, because the mean background multiplicity under the processes of interest increases by a factor of $\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 300$ from p-p to central Au+Au collisions.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\hspace*{-0.0\linewidth}\includegraphics[width=0.70\linewidth]{figs/pi0200-cross.eps}\cr
\hspace*{-0.10\linewidth}\includegraphics[width=0.70\linewidth,height=0.60\linewidth]{figs/pi0_AuAu_pp_0_10_power.eps}
\end{tabular}
\end{center}
\caption[]
a) (top) Invariant cross section at mid-rapidity for $\pi^0$ production in p-p collisions at $\sqrt{s}=200$ GeV~\cite{PXPRD76}. b) (bottom) Log-log plot of the semi-inclusive invariant yield of $\pi^0$ in central (0-10\%) Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{ppg054} and the invariant cross section for p-p collisions~\cite{PXppPRL91} multiplied by $\mean {T_{AA} (0-10\%)}$.
\label{fig:pi0cross} }
\end{figure}
\subsection{Parton suppression at RHIC}
Fig.~\ref{fig:QM05wow} shows that at $\sqrt{s_{NN}}=200$ GeV, direct-$\gamma$, which do not interact with the medium, are not suppressed, while the $\pi^0$ and $\eta$ mesons, which are fragments of hard-scattered light quarks and gluons, are suppressed. This indicates a strong medium effect on partons, consistent with QCD LPM energy loss, as indicated by the agreement with the theory~\cite{GLV}. I actually think that the data are more consistent with a constant $R_{AA}\sim 0.2$ for $4\leq p_T\leq 20$ GeV/c (as would be given by a constant fractional energy loss and a pure power-law partonic $p_T$ spectrum~\cite{ppg054}) than with a value of $R_{AA}$ rising slowly with increasing $p_T$ as indicated by the theory, and in fact this is borne out by the best fit to the data~\cite{ppg079}. Another new PHENIX result nicely illustrates that parton suppression begins somewhere between $\sqrt{s_{NN}}=22.4$ and 62.4 GeV (Fig.~\ref{fig:PXCu})~\cite{ppg084}; while very suggestive, it does not completely rule out parton energy loss at 22.4 GeV.
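The statement that a constant fractional energy loss acting on a pure power-law spectrum yields a $p_T$-independent $R_{AA}$ can be made explicit in a few lines. The sketch below is illustrative only: the exponent $n=8.10$ is taken from Fig.~\ref{fig:pi0cross}b, while the value $S_{\rm loss}=0.2$ and the simple kinematic-shift model are assumptions for the purpose of the demonstration, not a fit.

```python
N_EXP = 8.10   # power-law exponent of the invariant yield above 4 GeV/c
S_LOSS = 0.20  # illustrative constant fractional energy loss

def invariant_yield_pp(pt, n=N_EXP):
    """Pure power law for the invariant yield, ~ pT**(-n)."""
    return pt ** (-n)

def invariant_yield_aa(pt, s=S_LOSS, n=N_EXP):
    """Particles that would appear at pT/(1-s) are shifted down to pT:
    one factor 1/(1-s) is the Jacobian of the shift, the second comes
    from the 1/pT in the invariant yield."""
    return invariant_yield_pp(pt / (1.0 - s), n) / (1.0 - s) ** 2

# R_AA = (1 - S_loss)**(n - 2), independent of pT
for pt in (4.0, 8.0, 12.0, 20.0):
    raa = invariant_yield_aa(pt) / invariant_yield_pp(pt)
    assert abs(raa - (1.0 - S_LOSS) ** (N_EXP - 2.0)) < 1e-9
```

With these illustrative numbers $R_{AA}=(0.8)^{6.1}\simeq 0.26$ at all $p_T$, close to the measured $\approx 0.2$.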
\subsection{Precision tests of models of parton suppression}
There are many different models of parton suppression, with totally different assumptions, which all agree with the PHENIX measurement $R_{AA}^{\pi^0}\approx 0.20$ for $4\leq p_T\leq 20$ GeV/c in central Au+Au collisions. In Jamie Nagle's talk at this meeting, he described how he got all the theorists to send him predictions as a function of the main single parameter characterizing the medium in their model, in order to perform precision fits to the latest PHENIX $\pi^0$ data with the correct treatment of correlated experimental systematic errors (Fig.~\ref{fig:pi0pqm})~\cite{ppg079}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/ppg079figure2a-data.eps}
\end{center}
\caption[]{PHENIX $\pi^0$ $R_{AA}(p_T)$ for Au+Au central (0-5\%) collisions at $\sqrt{s_{NN}}=200$~\cite{ppg079} compared to PQM model predictions~\cite{PQM} as a function of $\mean{\hat{q}}$. The thick red line is the best fit. Values of $\mean{\hat{q}}$ corresponding to the lines are shown on Fig.~\ref{fig:RAA20}.
\label{fig:pi0pqm} }
\end{figure}
Systematic uncertainties of the theory predictions were not considered.
The large value of the transport coefficient $\mean{\hat{q}}=\mean{\mu^2/\lambda}=13.2^{+2.1}_{-3.2}$ GeV$^2$/fm from the best fit to the PQM model~\cite{PQM} (where $\mu$ is the average 4-momentum transfer to the medium per mean free path $\lambda$) is a subject of some debate in both the more fundamental QCD community~\cite{BS06} and the more phenomenological community~\cite{fragility}. For instance, it was stated in Ref.~\cite{fragility} that ``the dependence of $R_{AA}$ on $\hat{q}$ becomes weaker as $\hat{q}$ increases'', as is clear from Fig.~\ref{fig:RAA20}a. It was also asserted that ``when the values of the time-averaged transport coefficient $\hat{q}$ exceeds 5 GeV$^2$/fm, $R_{AA}$ gradually loses its sensitivity.'' That statement also appeared reasonable. However, given the opportunity of looking at a whole range of theoretical predictions (kindly provided by the PQM authors~\cite{PQM}) rather than just the one that happens to fit the data, we experimentalists learned something about the theory that was different from what the theorists thought. By simply looking at the PQM predictions on a log-log plot (Fig.~\ref{fig:RAA20}b), it became evident that the PQM prediction can be parameterized as $R_{AA}[p_T=20~{\rm GeV/c}]=0.75/\sqrt{\hat{q}\,({\rm GeV^2/fm})}$ over the range $5<\hat{q}<100$ GeV$^2$/fm. This means that in this range the fractional sensitivity to $\hat{q}$ is simply proportional to the fractional uncertainty in $R_{AA}$ ($\Delta\hat{q}/\hat{q}=2.0\times \Delta R_{AA}/R_{AA}$), so that improving the precision of $R_{AA}$, e.g. in the range $10\leq p_T\leq 20$ GeV/c, will lead to improved precision on $\mean{\hat{q}}$. This should give the theorists some incentive to improve their (generally unstated) systematic uncertainties.
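Since the parameterization above is a simple power law, it can be inverted to propagate the measurement uncertainty directly; the inversion and the 10\% uncertainty used below are our illustration, not a published fit:

```python
def qhat_from_raa(raa):
    """Invert the empirical parameterization R_AA(20 GeV/c) ~ 0.75/sqrt(qhat),
    with qhat in GeV^2/fm; valid only for 5 < qhat < 100 GeV^2/fm."""
    return (0.75 / raa) ** 2

raa, d_raa = 0.20, 0.02            # illustrative 10% uncertainty on R_AA
q0 = qhat_from_raa(raa)
q_up = qhat_from_raa(raa - d_raa)  # smaller R_AA -> larger qhat
assert 13.0 < q0 < 15.0            # ~14 GeV^2/fm, consistent with the best fit
# fractional error propagation: Delta qhat/qhat = 2 * Delta R_AA/R_AA (to first order)
assert abs((q_up - q0) / q0 - 2.0 * d_raa / raa) < 0.05
```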
\begin{figure}[!h]
\begin{center}
\begin{tabular}{cc}
\hspace*{-0.02\linewidth}\includegraphics[width=0.51\linewidth]{figs/ppg079figure2apqmonly.eps} &
\hspace*{-0.08\linewidth}\includegraphics[width=0.53\linewidth]{figs/figure2_pqm_loglogonly.eps}
\end{tabular}
\end{center}
\caption[]
{a) (left) $R_{AA}$ at $p_T=20$ GeV/c as a function of $\mean{\hat{q}}$ in the PQM model~\cite{PQM}. b) (right) same plot on a log-log scale.
\label{fig:RAA20} }
\end{figure}
\subsection{$R_{AA}$ vs. the reaction plane}
Another good synergy between experimentalists and theorists is the study of $R_{AA}$ as a function of angle to the reaction plane and centrality in order to understand the effect of varying the initial conditions (centrality) and the path length through the medium (angle). When PHENIX presented results on $R_{AA}(p_T)$ vs. the angle $\Delta\phi$ to the reaction plane (Fig.~\ref{fig:th-angle})~\cite{ppg054}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/RAAdphicftheory.eps}
\end{center}
\caption[]{$R_{AA}^{\pi^0}$ for $5<p_T< 8$ GeV as a function of $\Delta\phi$ the angle to the reaction plane in Au+Au collisions with centrality 20--30\% at $\sqrt{s_{NN}}=200$ GeV~\cite{ppg054} (data points) compared to prediction for $10<p_T<15$ GeV/c (dashes)~\cite{08053271}.
\label{fig:th-angle} }
\end{figure}
there was a reaction from the flow community that this is nothing other than a different way to present $v_2$. This is strictly not true for two reasons: 1) $v_2$ measurements are relative while $R_{AA}(\Delta\phi, p_T)$ is an absolute measurement including efficiency, acceptance and all other such corrections; 2) if and only if the angular distribution of high $p_T$ suppression around the reaction plane were simply a second harmonic so that all the harmonics other than $v_2$ vanish (and why should that be?), then $R_{AA}(\Delta\phi, p_T)/R_{AA}(p_T)=1+2 v_2\cos 2\Delta\phi$. In nice talks at this meeting, Steffen Bass and Abhijit Majumder have attempted to put all the theoretical models of jet quenching into a common nuclear geometrical and medium evolution formalism so as to get an idea of the fundamental differences in the models ``evaluated on identical media, initial state and final fragmentation. The only difference in models will be in the Eloss kernel.''~\cite{08053271}. The different models all agreed with the measured $R_{AA}(p_T)$. The agreement with the measured $R_{AA}(\Delta\phi, p_T)$ is not so good (Fig.~\ref{fig:th-angle}), but hopefully suggests the way for improvement.
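The harmonic argument in point 2) can be made concrete with a toy calculation; all the values of $R_{AA}$, $v_2$ and $v_4$ below are invented purely for illustration:

```python
import math

def raa_dphi(dphi, raa_avg=0.2, v2=0.10, v4=0.0):
    """Toy azimuthal modulation of the suppression with a second
    and (optionally) a fourth harmonic."""
    return raa_avg * (1 + 2 * v2 * math.cos(2 * dphi)
                        + 2 * v4 * math.cos(4 * dphi))

def pure_v2_form(dphi, v2=0.10):
    """The 1 + 2 v2 cos(2 dphi) form quoted in the text."""
    return 1 + 2 * v2 * math.cos(2 * dphi)

# With only the second harmonic present, the ratio R_AA(dphi)/R_AA
# reduces exactly to the pure-v2 form at every angle:
for k in range(8):
    dphi = k * math.pi / 8
    assert abs(raa_dphi(dphi) / 0.2 - pure_v2_form(dphi)) < 1e-12

# Any admixture of a higher harmonic breaks the equivalence:
assert abs(raa_dphi(0.0, v4=0.03) / 0.2 - pure_v2_form(0.0)) > 0.05
```

The toy makes the point directly: $R_{AA}(\Delta\phi, p_T)$ carries the full harmonic content of the suppression, of which $v_2$ is only the first nontrivial moment.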
\subsection{Photons and neutral mesons}
\subsubsection{Direct photons at low $p_T$}
Internal conversion of a photon from $\pi^0$ and $\eta$ decay is well known and is called Dalitz decay~\cite{egNPS}. Perhaps less well known in the RHI community is the fact that in any reaction (e.g. $q+g\rightarrow \gamma +q$) in which a real photon can be emitted, a virtual photon (e.g. $e^+ e^-$ pair) of mass $m_{ee}\geq 2m_e$ can be emitted instead. This is called internal conversion, and the pair-mass spectrum is generally given by the Kroll-Wada formula~\cite{KW,ppg086}:
\begin{eqnarray}
{1\over N_{\gamma}} {{dN_{ee}}\over {dm_{ee}}}&=& \frac{2\alpha}{3\pi}\frac{1}{m_{ee}} (1-\frac{m^2_{ee}}{M^2})^3 \quad \times \cr & &|F(m_{ee}^2)|^2 \sqrt{1-\frac{4m_e^2}{m_{ee}^2}}\, (1+\frac{2m_e^2}{m^2_{ee}})\ ,
\label{eq:KW}
\end{eqnarray}
where $M$ is the mass of the decaying meson or the effective mass of the emitting system. The dominant terms are on the first line of Eq.~\ref{eq:KW}: the characteristic $1/m_{ee}$ dependence; and the cutoff of the spectrum for $m_{ee}\geq M$ (Fig.~\ref{fig:ppg086Fig2KWdist})~\cite{ppg086}. Since the main background for direct-single-$\gamma$ production is a photon from $\pi^0\rightarrow \gamma +\gamma$, selecting $m_{ee} \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 100$ MeV effectively reduces the background by an order of magnitude by eliminating the background from $\pi^0$ Dalitz decay, at the expense of a factor $\sim 1000$ in rate. This allows the direct photon measurements to be extended (for the first time in both p-p and Au+Au collisions) below the value of $p_T\sim 4$ GeV/c, possible with real photons, down to $p_T=1$ GeV/c (Fig.~\ref{fig:ppg086Fig4})~\cite{ppg086}, which is a real achievement.
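The mass dependence of Eq.~\ref{eq:KW} can be sketched numerically. In this toy version the form factor is set to $|F|^2=1$ and the constants are approximate; it is an illustration of the formula, not analysis code:

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant (approximate)
M_E = 0.000511        # electron mass in GeV

def kroll_wada(m_ee, M, form_factor_sq=1.0):
    """dN_ee/dm_ee per photon, Eq. (KW), for a parent of mass
    (or effective mass) M; point-like form factor by default."""
    if m_ee <= 2 * M_E or m_ee >= M:
        return 0.0
    phase = (1 - m_ee**2 / M**2) ** 3
    kin = math.sqrt(1 - 4 * M_E**2 / m_ee**2) * (1 + 2 * M_E**2 / m_ee**2)
    return (2 * ALPHA / (3 * math.pi)) * phase * kin * form_factor_sq / m_ee

# The pi0 (M = 0.135 GeV) spectrum cuts off at m_ee = M, while a
# "direct" photon source (effective M >> m_ee) keeps the 1/m_ee tail:
m_pi0 = 0.1349766
assert kroll_wada(0.100, m_pi0) < kroll_wada(0.100, 10.0)
assert kroll_wada(0.140, m_pi0) == 0.0   # above the pi0 cutoff
assert kroll_wada(0.140, 10.0) > 0.0     # direct photons survive the cut
```

This is exactly the trick exploited in the text: a cut at $m_{ee}\sim 100$--140 MeV removes the $\pi^0$ Dalitz background while the direct-photon internal-conversion spectrum persists.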
The solid lines on the p-p data are QCD calculations which work down to $p_T=2$ GeV/c. The dashed line is a fit of the p-p data to the modified power law $B (1+p_T^2/b)^{-n}$, used in the related Drell-Yan~\cite{Ito81} reaction, which flattens as $p_T\rightarrow 0$. For Au+Au, the exponential spectrum of excess photons above the $\mean{T_{AA}}$ extrapolated p-p fit are suggestive of a thermal source. This is quite distinct from the case for e.g. $\pi^0$ production, where the spectra are exponential in both p-p and Au+Au collisions as $p_T\rightarrow 0$ (Fig.~\ref{fig:pi0cross}).
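The qualitative difference between the modified power law and a thermal-like exponential at low $p_T$ can be illustrated with a toy calculation; all parameter values below are invented, not fitted:

```python
import math

def modified_power_law(pT, B=1.0, b=2.0, n=4.5):
    """Drell-Yan-style fit B (1 + pT^2/b)^(-n), which flattens
    as pT -> 0."""
    return B * (1 + pT**2 / b) ** (-n)

def exponential(pT, A=1.0, T=0.25):
    """Exponential (thermal-like) spectrum A exp(-pT/T)."""
    return A * math.exp(-pT / T)

def log_slope(f, pT, h=1e-5):
    """Local logarithmic slope d(ln f)/d(pT) by central difference."""
    return (math.log(f(pT + h)) - math.log(f(pT - h))) / (2 * h)

# Near pT = 0 the modified power law is nearly flat, while the
# exponential keeps its constant slope -1/T all the way down:
assert abs(log_slope(modified_power_law, 0.01)) < 0.1
assert abs(log_slope(exponential, 0.01) + 4.0) < 1e-3   # -1/T = -4
```

This is the shape distinction drawn in the text between the p-p direct-photon fit and the exponential excess in Au+Au.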
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/ppg086Fig2KWdist.eps}
\end{center}
\caption[]{Invariant mass ($m_{e^+e^-}$) distribution of $e^+ e^-$ pairs from Au+Au minimum bias events for $1.0< p_T<1.5$ GeV/c~\cite{ppg086}. Dashed lines are Eq.~\ref{eq:KW} for the mesons indicated. Blue solid line is $f_c(m)$, the total di-electron yield from the cocktail of meson Dalitz decays; Red solid line is $f_{dir}(m)$, the internal conversion $m_{e^+e^-}$ spectrum from a direct photon ($M\gg m_{e^+e^-}$). Black solid line is a fit of the data to the sum of cocktail plus direct contributions in the range $80< m_{e^+e^-} < 300$ MeV/c$^2$.
\label{fig:ppg086Fig2KWdist} }
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/ppg086Fig4.eps}
\end{center}
\caption[]{Invariant cross section (p-p) or invariant yield (Au+Au) of direct photons as a function of $p_T$~\cite{ppg086}. Filled points are from virtual photons, open points from real photons.
\label{fig:ppg086Fig4} }
\end{figure}
\subsubsection{Direct photons and mesons up to $p_T=20$ GeV/c}
PHENIX continues its relentless pursuit of measuring $R_{AA}$ for pseudo-scalar ($\pi^0$ and $\eta$) and vector ($\omega$, $\phi$, $J/\Psi$) mesons and direct photons over the broadest $p_T$ range (Fig.~\ref{fig:raa_mesons_AuAu})~\cite{allmQM08}. The $\pi^0$ and $\eta$ continue to track each other to the highest $p_T$. The $\phi$ and $\omega$ vector mesons appear to track each other also but with a different value of $R_{AA}(p_T)$. Intriguingly, the $J/\Psi$ seems to track the $\pi^0$ for $0\leq p_T\leq 4$ GeV/c; it will be interesting to see whether this trend continues at higher $p_T$.
The direct-$\gamma$ case is striking and possibly indicative of trouble ahead for the LHC. With admittedly large systematic errors, which should not be ignored, the direct-$\gamma$ appear to become suppressed for $p_T> 14$ GeV/c with a trend towards equality with $R_{AA}^{\pi^0}$ for $p_T\sim 20$ GeV/c. Should $R_{AA}^{\gamma}$ become equal to $R_{AA}^{\pi^0}$, it would imply that the energy loss in the final state is no longer a significant effect for $p_T\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 20$ GeV/c and that the equal suppression of direct-$\gamma$ and $\pi^0$ is due to the initial state structure functions. If this were true, it could mean that going to much higher $p_T$ would not be useful for measurements of parton suppression. In this vein, Kari Eskola gave a nice talk on the latest structure functions in nuclei~\cite{EPS08} at this meeting, but wisely declined to present a prediction for the PHENIX direct-$\gamma$ data, which he said he would show as soon as the preliminary data are published. Clearly, improved measurements of both direct-$\gamma$ and $\pi^0$ in the range $10<p_T<20$ GeV/c are of the utmost importance.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/raa_mesons_AuAu.eps}
\end{center}
\caption[]{$R_{AA}(p_T)$ for direct-$\gamma$ and the mesons indicated in Au+Au central collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{allmQM08}.
\label{fig:raa_mesons_AuAu} }
\end{figure}
\subsection{The baryon anomaly}
There is a tendency of some groups to treat non-identified charged hadrons ($h^+$, $h^-$) and correlations among them in A+A collisions as if they were dealing with identified $\pi^0$ mesons. While this might be true in p-p collisions, the situation in Au+Au collisions is quite different as illustrated by Fig.~\ref{fig:RAApi-h}~\cite{MayaQM05}, where $R_{AA}$ for $\pi^0$ and $h^+ + h^-$ are different in the range $2\leq p_T\leq 6$ GeV/c, now called ``intermediate $p_T$.'' Although the effect may appear small on Fig.~\ref{fig:RAApi-h}, when the identified $p/\pi^+$ and $\overline{p}/\pi^-$ ratios were measured in this range (Fig.~\ref{fig:ppi})~\cite{ppg015}, they were an order of magnitude larger than had ever been seen previously in either $e^+ e^-$ jet fragmentation or in the average particle composition of the bulk matter in Au+Au central collisions~\cite{ppg026}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.98\linewidth]{figs/Raa_pi0_h_compari_AUAU_200GEV_0to10cent.eps}
\end{center}
\caption[]{$R_{AA}(p_T)$ for $\pi^0$ and non-identified charged hadrons $(h^+ + h^-)/2$ for central (0-5\%) Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{MayaQM05}.
\label{fig:RAApi-h} }
\end{figure}
\begin{figure}[!h]
\begin{center}
\vspace*{+0.03\linewidth}
\includegraphics[width=0.85\linewidth]{figs/ppg015-Fig1color.eps}
\end{center}
\caption[]{$p/\pi^+$ and $\overline{p}/\pi^-$ ratios as a function of $p_T$ and centrality in Au+Au collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{ppg015} compared to other data indicated.
\label{fig:ppi} }
\end{figure}
This `baryon anomaly' was beautifully explained as due to the coalescence of an exponential (thermal) distribution of constituent quarks (a.k.a. the QGP)~\cite{GFH03}; but measurements of correlations of $h^{\pm}$ in the range $1.7\leq p_{T_a}\leq 2.5$ GeV/c to identified mesons or baryons with $2.5\leq p_{T_t}\leq 4.0$ GeV/c showed the same near side and away side peaks and yields (Fig.~\ref{fig:Sickles-corr-fig2}) characteristic of di-jet production from hard-scattering~\cite{PXPRC71}, rather than from soft coalescence, apparently ruling out this beautiful model.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.85\linewidth]{figs/Sickles-corr-fig2.eps}
\end{center}
\caption[]{Conditional yield per trigger meson (circles), baryon (squares) with $2.5< p_T < 4$ GeV/c, for associated charged hadrons with $1.7 < p_T < 2.5$ GeV/c integrated within $\Delta\phi=\pm 0.94$ radian of the trigger (Near Side) or opposite azimuthal angle, for Au+Au (full), d+Au (open) collisions at $\sqrt{s_{NN}}=200$ GeV~\cite{PXPRC71}. The red-dashed curve indicates the expected trigger-side conditional yield if all the anomalous protons in Au+Au collisions were produced by coalescence.
\label{fig:Sickles-corr-fig2} }
\end{figure}
However, at this meeting, Marco Van Leeuwen showed STAR data presented at QM2008 in which same-side correlations with $p_{T_t}>4.0$ GeV/c and $2.0<p_{T_a}<4$ GeV/c are separated into the ridge (large $\delta\eta$ from trigger) and jet region. This result seems to imply that the large $\overline{p}/\pi^-$ ratio $\sim 1$ observed for single inclusive identified particles in the range of the baryon anomaly, $2.0<p_{T}<4.5$ GeV/c, comes entirely from the underlying ridge and not from the smaller jet region! This spectacular observation, which clearly needs to be checked, opens up a whole host of questions: i) what is the $\overline{p}/\pi^-$ ratio in jets in p-p collisions? (it is $\sim 0.2$ as in jets in $e^+ e^-$ collisions); ii) what are the same- and away-side correlations in the ridge?; iii) is the ridge the region of equilibrated coalescence? If so, why is it localized in azimuth near a jet? iv) why does the azimuthal width of the ridge appear to be similar if not equal to that of a jet?
\subsection{Heavy quark suppression}
Another set of striking data at RHIC with no clear explanation at present is the measurement of direct-single $e^{\pm}$ production in p-p collisions (Fig.~\ref{fig:single-e-pp})~\cite{ppg065}, in agreement with the FONLL theoretical calculations of semi-leptonic decays of mesons containing $c$ and $b$ quarks, and the indication from the same measurement in Au+Au collisions of the apparent suppression of heavy quarks $c$ and $b$ by roughly the same amount as $\pi^0$, notably for $p_T\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 5$ GeV/c where the $m\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}} 4$ GeV $b$ quarks dominate (Fig.~\ref{fig:fig3se})~\cite{ppg066,seeSTAR}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/single-e-pp.eps}
\end{center}
\caption[]{a) (top) Invariant cross sections of electrons from heavy flavor decays~\cite{ppg065}. Curves are FONLL theoretical calculations~\cite{ppg065}. b) (bottom) Ratio of the data and the FONLL calculation. The upper (lower) curve shows the theoretical upper (lower) limit of the FONLL calculation.
\label{fig:single-e-pp} }
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/fig3se_data_only_logo.eps}
\end{center}
\caption[]{a) (top) $R_{AA}$ of heavy-flavor electrons in 0-10\% central collisions compared with $\pi^0$ data. b) Anisotropic flow harmonic $v_2^{HF}$ of heavy-flavor electrons in minimum bias collisions compared with $\pi^0$ data.~\cite{ppg066}.
\label{fig:fig3se} }
\end{figure}
This appears to strongly disfavor the hypothesis of energy loss via gluon bremsstrahlung, which was predicted to be much less for heavy quarks~\cite{deadcone} than for light quarks and gluons; but opens up a whole host of new possibilities including string theory~\cite{egsee066}, as discussed by several talks at this meeting, and even more transformational possibilities~\cite{AZINPC07}. Clearly detailed measurements of correlations of $b-\overline{b}$, $c-\overline{c}$ quarks and light quarks and gluons will be required in order to sort out this very important and very interesting issue.
\subsection{Correlations, jets and fragmentation}
The di-jet structure of hard scattering was originally discovered in p-p collisions at the CERN-ISR by measurements of two-particle correlations~\cite{egseeMJTHP04}; and, because of the huge multiplicity and the complication of the azimuthal anisotropy due to hydrodynamic flow, two-particle correlations are so far the only way that di-jets have been studied in A+A collisions at RHIC.
STAR originally claimed that the away-jet vanished in Au+Au collisions~\cite{HardkeQM02} for a trigger $h^{\pm}$ with $4 < p_{T_t} < 6$ GeV/c and associated $h^{\pm}$ with $2< p_{T_a}< p_{T_t}$, but later realized that the away jet didn't vanish: it just lost energy and appeared, for $h^{\pm}$ with $0.15< p_{T_a}< 4$ GeV/c, as a much broader away-side correlation in Au+Au than in p-p collisions~\cite{FQWang05}. The situation was further complicated by the appearance of a narrow away-side peak at still higher $p_{T_t}$ (Fig.~\ref{fig:2stars})~\cite{Magestro}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.80\linewidth]{figs/2stars.eps}
\end{center}
\caption[]{Conditional yield of away-side associated $h^{\pm}$ per $h^{\pm}$ trigger with $4< p_{T_t} < 6$ GeV/c (solid points)~\cite{FQWang05} and $8 < p_{T_t} < 15$ GeV/c (open points)~\cite{Magestro} plotted as a function of the ratio of the transverse momentum of the associated particle to the trigger particle $p_{T_a}/p_{T_t}=x_E$. Insets show the conditional probability azimuthal distributions, with flow modulated background subtracted, for both data sets as labeled.
\label{fig:2stars} }
\end{figure}
These features were confirmed by PHENIX~\cite{ppg032} with the added fillip of an apparent dip exactly opposite to the trigger particle azimuth, suggesting a two lobed distribution (Fig.~\ref{fig:JetExtract})~\cite{ppg067} or possibly a Mach cone, Cerenkov radiation or other effect resulting from the reaction of the medium to the passage of a fast parton~\cite{seeRefs3267}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/JetExtract.ps}
\end{center}
\caption[]{Conditional yield azimuthal correlation function, $C(\Delta\phi)$ (black squares), flow background (solid line) and Jet function $J(\Delta\phi)$ (red dots) after flow subtraction, per trigger $h^{\pm}$ with $2.5< p_{T_t}<4$ GeV/c for associated $h^{\pm}$ of $1.0 < p_{T_a}<2.5$ GeV/c from PHENIX~\cite{ppg067}. PHENIX discusses the half-width $D$ ($\sim 1.1$ radian) of the Jet function $J(\Delta\phi)$ as the angular distance of the apparently displaced peak of the distribution from the angle $\Delta\phi=\pi$.
\label{fig:JetExtract} }
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/ShapePar-D-only.eps}
\end{center}
\caption[]{PHENIX $D$ parameters~\cite{ppg067} (Fig.~\ref{fig:JetExtract}) as a function of centrality, represented as the number of participants $N_{\rm part}$, for the systems and c.m. energies indicated.
\label{fig:ShapePar-D-only} }
\end{figure}
One of the striking features of the wide away side correlation is that the width as nicely discussed by Anne Sickles in a talk at this meeting and illustrated by the PHENIX data in Fig.~\ref{fig:ShapePar-D-only} does not depend on centrality, angle to the reaction plane, $p_{T_t}$, $p_{T_a}$ and $\sqrt{s_{NN}}$, which seems problematic to me if the effect is due to a reaction to the medium. Another problematic issue is that all the data upon which the two-lobed correlation function is based are from non-identified $h^{\pm}-h^{\pm}$ correlations in the $p_T$ range of the baryon anomaly where the particle ratios are strongly varying and are anomalous. Another interesting issue seen so far only in a preliminary result from PHENIX (Fig.~\ref{fig:JTMqm06-Azi200Au})~\cite{JTMQM06} is that the same shape away-side correlation persists in Au+Au central collisions even for auto-correlations of particles with very low $p_T$ between 0.2 and 0.4 GeV/c where any effect of hard-scattered partons should be submerged by the predominant soft physics. Clearly, measurements of correlations with both particles identified and covering a broad range of $p_{T_t}$ and $p_{T_a}$ as a function of the reaction plane are sorely needed.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.90\linewidth]{figs/JTMqm06-Azi200Au.eps}
\end{center}
\caption[]{Low $p_T$ like-sign pair azimuthal correlation function for 0-5\% central Au+Au collisions at $\sqrt{s}=200$ GeV from charged hadrons with $0.2\leq p_{T_1},\,p_{T_2}\leq 0.4$ GeV/c~\cite{JTMQM06}.
\label{fig:JTMqm06-Azi200Au} }
\end{figure}
\subsubsection{Systematic measurements and punch-through jets}
The STAR measurement~\cite{FQWang05} (Fig.~\ref{fig:2stars}) was the first to make a systematic study in $h^{\pm} h^{\pm}$ correlations of the away-side distribution of the ratio of the away-particle to the trigger particle transverse momenta, $p_{T_a}/p_{T_t}$, called $z_T$ by STAR and $x_E$ by PHENIX, which was thought to be a determination of the fragmentation function~\cite{FFF}. It was found by PHENIX~\cite{ppg029} that this was not the case: the away-side $x_E$ distribution triggered by a fragment of a hard-scattered parton was not sensitive to the shape of the fragmentation function of the away-jet but was only sensitive to the power ($n=8.1$) of the semi-inclusive invariant parton $\hat{p}_{T_t}$ spectrum. With no assumptions other than a power law for the parton $\hat{p}_{T_t}$ distribution (${{d\sigma_{q} }/{\hat{p}_{T_t} d\hat{p}_{T_t}}}= A \hat{p}_{T_t}^{-n}$), an exponential fragmentation function ($D^{\pi}_q (z)=B e^{-bz}$), and a constant ratio of the away-parton transverse momentum to that of the trigger parton, $\hat{x}_h=\hat{p}_{T_a}/\hat{p}_{T_t}$, for fixed $p_{T_t}$ as a function of $p_{T_a}$, it was possible~\cite{ppg029} to derive the $x_E$ distribution in the collinear limit, where $p_{T_a}=x_E p_{T_t}$:
\begin{equation}
\left.{dP_{\pi} \over dx_E}\right|_{p_{T_t}}\approx {N (n-1)}{1\over\hat{x}_h} {1\over
{(1+ {x_E \over{\hat{x}_h}})^{n}}} \, \qquad ,
\label{eq:condxe2}
\end{equation}
where $N=\mean{m}$ is the multiplicity of the unbiased away-jet.
Thus, although not sensitive to the fragmentation function, the $x_E$ ($z_T$) distribution is still sensitive to the ratio of the away parton transverse momentum to the trigger parton transverse momentum, $\hat{x}_h=\hat{p}_{T_a}/\hat{p}_{T_t}$, which is a measure of the differential energy loss of the away parton relative to the trigger parton which is surface biased due to the steeply falling $\hat{p}_{T_t}$ spectrum~\cite{MagestroRef}.
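Eq.~\ref{eq:condxe2} is simple enough to check numerically. The sketch below verifies that it is normalized to $N$ and shows how a smaller $\hat{x}_h$ (more energy loss of the away parton) steepens the fall-off; the two $\hat{x}_h$ values are representative choices, not fit results from this sketch:

```python
from math import isclose

def dP_dxE(xE, n=8.1, xh=1.0, N=1.0):
    """Away-side conditional yield, Eq. (condxe2):
    dP/dxE = N (n-1) (1/xh) (1 + xE/xh)^(-n)."""
    return N * (n - 1) / xh * (1 + xE / xh) ** (-n)

# Normalization: the integral over 0 <= xE < infinity equals N.
# A left Riemann sum over 0..5 suffices, since the tail beyond
# xE = 5 is negligible for n = 8.1:
dx = 1e-4
total = sum(dP_dxE(i * dx) * dx for i in range(50000))
assert isclose(total, 1.0, rel_tol=1e-3)

# A smaller xh steepens the distribution (larger drop from
# xE = 0.2 to xE = 0.8):
steep_lossy = dP_dxE(0.8, xh=0.52) / dP_dxE(0.2, xh=0.52)
steep_pp = dP_dxE(0.8, xh=0.93) / dP_dxE(0.2, xh=0.93)
assert steep_lossy < steep_pp
```

This makes explicit why the slope of the $x_E$ ($z_T$) distribution measures the relative energy loss $\hat{x}_h$ rather than the fragmentation function.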
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.80\linewidth]{figs/pTt-44-AuAupp.ps}
\end{center}
\caption[]{Conditional yield of away side ($|\Delta\phi-\pi|<\pi/2$) $h^{\pm}$ per trigger $h^{\pm}$ with $4\leq p_{T_t}\leq 5$ GeV/c~\cite{ppg074} in p-p (circles) and Au+Au central (0-20\%) collisions plotted as $dP/dx_E$ with fits to Eq.~\ref{eq:condxe2} shown and best-fit parameters indicated.
\label{fig:pTt-44-AuAupp} }
\end{figure}
In p-p collisions, the imbalance of the away-parton and the trigger parton indicated by the fitted value of $\hat{x}_h=0.93\pm 0.03$ in Fig.~\ref{fig:pTt-44-AuAupp} is caused by $k_T$-smearing. In A+A collisions, the fitted value $\hat{x}_h=0.52\pm 0.03$ indicates that the away parton has lost energy relative to the trigger parton. The fits work well on the PHENIX data, so I looked more closely at the two STAR measurements in Fig.~\ref{fig:2stars}. The lower $p_{T_t}$ data set~\cite{FQWang05} nicely followed Eq.~\ref{eq:condxe2} with $\hat{x}_h=0.48$ (see Fig.~\ref{fig:s48cu}), but the higher $p_{T_t}$ data~\cite{Magestro} disagreed in both normalization and shape with the lower $p_{T_t}$ data, so I normalized the higher $p_{T_t}$ data to the lower $p_{T_t}$ data in the region $x_E<0.4$, where the slopes seemed to agree and where the normalization would be correct if $x_E$ scaling applied in Au+Au collisions as it does in p-p collisions. When I did this, I was struck by the dramatic break and flattening of the slope in the higher $p_{T_t}$ distribution for $x_E\geq 0.5$. This could be suggestive of a two-component distribution where some partons, which pass through the medium, lose energy, while other partons, such as those emitted tangentially, punch
through without any energy loss. However it is difficult to understand why the punch-through of tangential partons would depend on the trigger $p_{T_t}$. I suggested that the comparison of the two STAR measurements and the possibility of a dramatic break in the $x_E$ distribution would be greatly clarified if a few lower $x_E$ points could be obtained for the higher $p_{T_t}$. STAR presented such a set of preliminary results at Quark Matter 2006 (Fig.~\ref{fig:Horner-AwayPlot-AuAu})~\cite{Horner} which in my opinion show a clear break for $z_T>0.5$ in the range $6<p_{T_t}< 10$ GeV/c, which I believe could represent punch-through of partons which have not lost energy by coherent LPM gluon radiation, but only by standard Bethe-Heitler gluon radiation which presumably is a much smaller effect since it is not coherent. This is a pretty striking observation and a pretty wild guess, so I am surprised and a bit disappointed that there is very little discussion of the `break' in the community. I hope this changes by the next Hard Probes Conference.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.80\linewidth]{figs/s48cu.ps}
\end{center}
\caption[]{$x_E$ distributions from Fig.~\ref{fig:2stars}, with higher $p_{T_t}$ data normalized to agree with lower $p_{T_t}$ data for $x_E<0.4$. Dashed line is a fit of lower $p_{T_t}$ data to Eq.~\ref{eq:condxe2} as described in the text.
\label{fig:s48cu} }
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.80\linewidth]{figs/Horner-AwayPlot-AuAu.eps}
\end{center}
\caption[]{STAR $z_T$ ($x_E$) distributions in $h^{\pm}-h^{\pm}$ correlations for 4 intervals of $p_{T_t}$: $2.5<p_{T_t}< 3$ GeV/c (green circles) to $6< p_{T_t}<10$ GeV/c (blue inverted triangles)~\cite{Horner}.
\label{fig:Horner-AwayPlot-AuAu} }
\end{figure}
\subsubsection{Medium modification of jet fragmentation}
Borghini and Wiedemann (Fig.~\ref{fig:BorgWied})~\cite{BW06} proposed using the hump-backed or $\xi=\ln(1/z)$ distribution of jet fragments, which is a signature of QCD coherence for small values of the particle momentum fraction, $z=p/E_{\rm jet}$, to explore the medium modification of jets in heavy ion collisions. The use of the $\xi$ variable would emphasize the increase in the emission of fragments at small $z$ due to the medium-induced depletion of the number of fragments at large $z$. The jet energy must be known for this measurement, so it was presumed that full jet reconstruction would be required.
However, one of the original measurements of the $\xi$ distribution in $e^+ e^-$ collisions on the $Z^0$ resonance at LEP was made using the inclusive distribution of $\pi^0$, which could be plotted in either the $z$ or the $\xi$ variable since the energy of the jets for di-jet events was known (Fig.~\ref{fig:Ting})~\cite{L3}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.99\linewidth]{figs/mediummodif.eps}
\end{center}
\caption[]{Single inclusive distribution of $h^{\pm}$ as a function of $\xi$ for jets measured in $e^+ e^-$ collisions at two values of $\sqrt{s}$, together with MLLA calculations in vacuum and in medium~\cite{BW06}.
\label{fig:BorgWied} }
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.75\linewidth]{figs/L3Inclusivepi-xp..eps}\cr
\includegraphics[width=0.75\linewidth]{figs/L3Inclusivepi-xip.eps}
\end{tabular}
\end{center}
\caption[]
{L3 measurement~\cite{L3} of the inclusive $\pi^0$ spectrum on the $Z^0$ resonance presented as either (top) $x_p=2 p^{\pi^0}/\sqrt{s}$ or (bottom) $\xi_p=\ln (1/x_p)$.
\label{fig:Ting} }
\end{figure}
A similar state of affairs exists for direct-$\gamma$-hadron correlations in p-p and A+A collisions since, modulo any $k_T$ effect, the jet recoiling from a direct-$\gamma$ has equal and opposite transverse momentum to the precisely measured $\gamma$. Also since the direct-$\gamma$ is a participant in the tree-level partonic reaction $q+g\rightarrow \gamma +q$ and not a fragment, the $x_E$ or $z_T$ distribution of the away-side hadrons from a direct-$\gamma$ actually does represent the away-jet fragmentation function, as suggested by Wang, Huang and Sarcevic~\cite{WHS} so that the $\xi$ distribution can be derived.
Justin Frantz showed the preliminary PHENIX isolated-direct-$\gamma$ data from p-p collisions in his talk (see Fig.~\ref{fig:PXxi}a), so I was able to calculate the $\xi$ distribution from this data by the simple change of variables, $dN/d\xi=z\,dN/dz$ (Fig.~\ref{fig:PXxi}b). The PHENIX data nicely follow the trend of the TASSO measurements in $e^+ e^-$ collisions~\cite{TASSO}.
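The change of variables used here, $dN/d\xi = z\,dN/dz$, is illustrated below for a toy exponential fragmentation function (the parameters $B$ and $b$ are invented), showing how a monotonically falling $D(z)$ becomes hump-backed in $\xi$:

```python
import math

B, b = 20.0, 8.0   # invented parameters for a toy exponential FF

def dN_dz(z):
    """Toy fragmentation function D(z) = B exp(-b z)."""
    return B * math.exp(-b * z)

def dN_dxi(xi):
    """Same distribution in xi = ln(1/z):  dN/dxi = z dN/dz."""
    z = math.exp(-xi)
    return z * dN_dz(z)

# In z the spectrum falls monotonically, but in xi it develops a
# maximum, at xi = ln(b) for the pure exponential:
xi_peak = math.log(b)
for off in (0.3, 0.6, 1.0):
    assert dN_dxi(xi_peak) > dN_dxi(xi_peak - off)
    assert dN_dxi(xi_peak) > dN_dxi(xi_peak + off)
assert dN_dz(0.1) > dN_dz(0.5) > dN_dz(0.9)   # monotone in z
```

The transformation itself loses no information; as discussed below, it only redistributes the emphasis between small and large $z$.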
\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.75\linewidth]{figs/hp2008-prelim-isol-gam-xE.eps}\cr
\includegraphics[width=0.75\linewidth]{figs/MAN-isolated-gam-h-xi-cm.ps}
\end{tabular}
\end{center}
\caption[]
{a) (top) Preliminary PHENIX isolated-direct-$\gamma$ $x_E$ distributions for several ranges of isolated-direct-$\gamma$ $p_{T_t}$ as presented by Justin Frantz at this meeting; b) Same data (using the same symbols) plotted as a function of $\xi=\ln (1/x_E)$ compared to TASSO measurements in $e^+ e^-$ collisions at two values of $\sqrt{s}$~\cite{TASSO}.
\label{fig:PXxi} }
\end{figure}
The $\xi$ distribution clearly emphasizes the fragmentation function in the region $z<0.05$ ($\xi>3.0$) at the expense of the region $z>0.05$, while
in my opinion it is easier to understand the energy loss of partons in the medium by looking at the fragmentation functions in the standard fragmentation variable $z$, the fractional energy of the jet carried by a fragment particle, as in Fig.~\ref{fig:PXxi}a or the $x_E$ ($z_T$) variable as in Figs.~\ref{fig:2stars}, \ref{fig:pTt-44-AuAupp},~\ref{fig:s48cu},~\ref{fig:Horner-AwayPlot-AuAu}. However, just in case the $\xi$ plot made from direct-$\gamma$-hadron correlations happens to come into common usage, I `modestly' give it the name: Tannenbaum-Ting-Borghini-Wiedemann-Wang plot (in almost alphabetical order).
\section{``Nobel Dreams'' Redux}
The appearance of monojets in hard scattering at RHIC, especially in p+A or d+A collisions, has been predicted as a signature of gluon saturation at low $x$~\cite{KLM05}. Due to a famous but erroneous measurement at the SpS collider~\cite{ND}, the term and concept of monojets have a very negative connotation to people of my generation. PHENIX~\cite{PXdAu06} has seen no evidence for monojets in $h^\pm -h^\pm$ correlations for triggers with $2.5< p_{T_t} <4$ GeV/c at $\mean{\eta}$=1.7, 0, -1.7, and associated particles with $1.0< p_{T_a} < 2.5$ GeV/c at mid-rapidity ($\mean{\eta}=0$). The widths and conditional yields are the same for triggers at all three values of $\mean{\eta}$ in both p-p and d+Au collisions. On the other hand, STAR~\cite{STARdAu06}, for $\pi^0$ triggers with $1 < \mean{p_{T_t}} <1.4$ GeV/c at $\mean{\eta}=4$ and associated $h^{\pm}$ with $p_{T_a}>0.50$ GeV/c at $|\eta|<0.75$, appears to see a reduction of both the width and magnitude of the away-side correlation. This situation must be resolved by new data from both p-p and d+Au collisions. One important issue of concern to me is the background from diffraction dissociation, which may be large and even coherent in d+Au collisions in the low $p_{T_t}$ range studied. Also, other, more conventional, pQCD mechanisms with $k_T$-broadening~\cite{QiuVitev} have been proposed which would give a similar effect.
Clearly, measurements covering a wide range of $\eta_1$, $\eta_2$ and $p_{T_t}$ must be performed in order to verify such an important proposed effect, and I note in this regard that the kinematics for obtaining low $x_2$ are much more favorable with both particles at large $\eta$ since:
\begin{equation}
x_1=x_T \frac{e^{\eta_1} + e^{\eta_2}}{2} \qquad x_2=x_T \frac{e^{-\eta_1} + e^{-\eta_2}}{2} \label{eq:2kin}
\end{equation}
where $x_T=2 p_T/\sqrt{s}$.
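Eq.~\ref{eq:2kin} makes the advantage of forward pairs easy to quantify; the following short sketch uses kinematic values chosen only for illustration:

```python
import math

def parton_x(pT, sqrt_s, eta1, eta2):
    """Leading-order 2->2 kinematics, Eq. (2kin):
    x1 = xT (e^eta1 + e^eta2)/2,  x2 = xT (e^-eta1 + e^-eta2)/2."""
    xT = 2 * pT / sqrt_s
    x1 = xT * (math.exp(eta1) + math.exp(eta2)) / 2
    x2 = xT * (math.exp(-eta1) + math.exp(-eta2)) / 2
    return x1, x2

# pT = 1.5 GeV/c pairs at sqrt(s_NN) = 200 GeV:
_, x2_mid = parton_x(1.5, 200.0, 0.0, 0.0)    # both at midrapidity
_, x2_one = parton_x(1.5, 200.0, 4.0, 0.0)    # one forward, one at eta = 0
_, x2_both = parton_x(1.5, 200.0, 4.0, 4.0)   # both forward
assert x2_both < x2_one < x2_mid

# With both particles forward, x2 drops by a factor e^4, i.e. almost
# two orders of magnitude relative to the midrapidity pair:
assert x2_mid / x2_both > 50
```

With only one particle forward, the $e^{-\eta_1}$ term is negligible but the midrapidity particle still contributes $e^{0}/2$ to $x_2$, which is why both particles must be at large $\eta$ to reach the lowest $x_2$.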
\section{Some of what I think is still not understood}
To close, I make a list of some of the issues that I think are still not understood at present. With the imminent startup of the LHC, I would not like to bet whether the list gets shorter or longer in the future.
\begin{itemize}
\item Is the nuclear modification factor $R_{AA}$ for $\pi^0$ really constant at a factor of 5 suppression over the range $3< p_T< 20$ GeV/c, which would occur for a constant-fractional energy loss analogous to bremsstrahlung, or does the suppression tend to vanish at larger $p_T$? Is $dE/dx$ constant, or a constant fraction, or something else?
\item Does $R_{AA}$ for direct-$\gamma$ really approach that of $\pi^0$ at large $p_T\sim20$ GeV/c as indicated by preliminary data? If true, this would argue that the suppression due to a medium effect vanishes at large $p_T> 20$ GeV/c and that the effect observed is due to shadowing of the structure functions. If this is confirmed, it would be VERY BAD for LHC.
\item The detailed mechanism of jet suppression due to interaction with the medium is not understood. It is not known whether partons lose energy continuously or discretely; whether they stop in the medium so that the only observed jet fragments are those emitted from the surface; or whether partons merely lose energy exiting the medium such that those originating from the interior of the medium with initially higher $p_T$ are submerged (due to the steeply falling $p_T$ spectrum) under partons emitted at the surface which have not lost energy. In either case, there is a surface bias.
\item The reason why heavy quarks appear to lose the same energy as light quarks is not understood.
\item It is not known whether a parton originating at the center of the medium can exit the medium without losing any energy.
\item It is not known where the energy from the absorbed jets or the parton energy loss goes or how it is distributed.
\item The surface bias discussed above complicates the use of two-particle correlations of hard-scattered partons to probe the medium, since detecting a particle from an away-side parton changes the surface bias of the trigger parton. This means that detection of both a trigger and an away-side particle is required in order to constrain the hard-scattering kinematics and the position of the origin of the hard-scattered parton-pair within the nuclear matter. Then, the main correlation information with relatively stable kinematics and origin is obtained by studying correlations with one or two additional particles, i.e. a total of 3- or 4-particle correlations, which is much more complicated and requires much more data than the same studies in p-p collisions.
\item The baryon anomaly, the increase of the $p^{\pm}/\pi^{\pm}$ ratio in the range $2<p_T <6$ GeV/c in Au+Au collisions from the value given by parton fragmentation in this $p_T$ range in p-p collisions, is not understood. Elegant recombination models fail to explain the similar jet activity correlated to the p and $\pi$ triggers in this ``intermediate'' $p_T$ range.
\item The wide away-side non-identified hadron correlations for triggers in the intermediate range $2<p_T <6$ GeV/c in Au+Au collisions, with a possible dip at $180^\circ$ which causes apparent peaks displaced by $\sim 60^\circ$, are not understood. They could represent a Mach cone, in analogy to the sonic boom of a parton passing through the medium faster than the speed of sound, or they could indicate jets with large deflections. The effect may be related to the baryon anomaly, which occurs in this $p_T$ range; or the peaks, which are seen also for much softer trigger particles, may not be a hard-scattering effect; or they could represent something totally new.
\item The ridge is not understood. What causes it? What are its properties? How does it depend on $p_{T_t}$, angle to the reaction plane, centrality, etc? Why isn't there an away-side ridge? How can such a long range correlation $\delta\eta\sim \pm 5$ be created? Is the ridge really the region of the famous equilibrated coalescence with an anomalous $\overline{p}/\pi^-$ ratio? If so why is this region localized near a jet in azimuth and not distributed uniformly in the bulk medium?
\item Are there really mono-jets in d+Au collisions at RHIC energies as predicted by Gluon Saturation?
\item Finally, $J/\Psi$ suppression, which for more than 20 years has represented the gold-plated signature of deconfinement, is not understood.
\end{itemize}
\section{Introduction}
For over two decades, superconductivity in Sr$_2$RuO$_4$ was widely believed, both theoretically and experimentally, to be of spin-triplet pairing \cite{Maeno1994,Rice1995,Ishida1998,Mackenzie2003,Mackenzie2017}. This belief has recently been overturned by refined nuclear magnetic resonance (NMR) \cite{Pustogow2019,Ishida2020,Chronister2021} and polarized neutron scattering (PNS) \cite{Petsch2020} experiments, which detected a drop in the spin susceptibility below $T_c$.
Muon spin relaxation ($\mu$SR) and polar Kerr effect revealed time-reversal symmetry breaking (TRSB) of the superconducting order parameter \cite{Luke1998,Xia2006}.
A line-node gap was then supported by specific heat \cite{NishiZaki2000,Deguchi2004,Kittaka2018}, penetration depth \cite{Bonalde2000}, thermal conductivity \cite{Suzuki2002,Hassinger2017}, spin-lattice relaxation rate \cite{Ishida2000}, and quasiparticle interference imaging \cite{Sharma2020}. Very recently, ultrasound experiments reported a thermodynamic discontinuity in the shear elastic modulus $c_{66}$ and placed further constraints on the pairing symmetry \cite{Benhabib2021,Ghosh2021}. Candidate proposals of two-component TRSB order parameters include $d_{x^2-y^2}+ig$ \cite{Kivelson2020,Clepkens2021}, $s+id_{x^2-y^2}$ \cite{Romer2019}, $s+id_{xy}$ \cite{Romer2021}, chiral or helical or mixed $p$-wave \cite{Mackenzie2003,Roising2019,Wang2019,Ramires2019,Scaffidi2020,Gupta2020,Ikegaya2020,Chen2020,Wang2020,Huang2021}, $d_{xz}+id_{yz}$ \cite{Zhang2021}, and exotic interorbital pairings \cite{Huang2019,Kaba2019,Gingras2019,Suh2020,Kaser2021,Zhang2020}. Among them, $d_{xz}+id_{yz}$ and $d_{x^2-y^2}+ig$ can satisfy the ultrasound requirement. While $d_{xz}+id_{yz}$ also seems supported by $\mu$SR under pressure \cite{Grinenko2021a,Grinenko2021b}, its nodal structure is inconsistent with spectroscopic measurements by scanning tunneling microscopy (STM) \cite{Sharma2020}. The accidentally degenerate $d_{x^2-y^2}+ig$ state can fit most experiments including the STM data, but how the $g$-wave can arise and become degenerate with $d_{x^2-y^2}$ remains unclear. The exact pairing symmetry of Sr$_2$RuO$_4$ therefore remains undecided.
To resolve this issue, we construct here a general model Hamiltonian combining realistic band structures from angle-resolved photoemission spectroscopy (ARPES) with all symmetry-allowed multipole pairing interactions for the spin-orbit coupled Ru-$4d$ electrons. The superconducting gap structures are then evaluated systematically by solving the linearized Eliashberg equations with antiferromagnetic (AFM), ferromagnetic (FM), and electric multipole fluctuations and their mixtures. We find that the $d_{x^2-y^2}+ig$ (pseudospin) singlet pairing is the most probable candidate and can be realized by the interplay of all three types of multipole fluctuations, while $d_{xz}+id_{yz}$ is disfavored within the experimentally constrained parameter range owing to the quasi-two-dimensional Fermi surface topology. A candidate $s+id_{x^2-y^2}$ state can also be obtained with the gap structure required by STM \cite{Sharma2020}, but it fails to conform with the ultrasound experiment. Our work may help to clarify the nature of superconductivity in Sr$_2$RuO$_4$.
\section{Model}
Spin-orbit coupling (SOC) is considered important in Sr$_2$RuO$_4$ \cite{Veenstra2014,Mackenzie2017}. To capture its superconducting symmetry, we first construct a general model Hamiltonian based on multipole representations of the pairing interactions. Using the Stevens operator-equivalent technique, multipole operators $\hat Q^{jkq}$ ($k=0,1,\dots,2j$; $q=-k,-k+1,\dots,k$) for a given angular momentum $j$ can be obtained from the $(2j+1)\times(2j+1)$ tensor operators $\hat J_{kq}$ satisfying \cite{Stevens1952,Inui1993}:
\begin{equation}
\begin{aligned}
& \hat J_{kk} = (-1)^k\sqrt{\frac{(2k-1)!!}{(2k)!!}} (\hat J_+)^k,
\\
& [\hat J_{\pm},\hat J_{kq}] = \sqrt{(k\mp q)(k\pm q+1)} \hat J_{k,q\pm1}\ (q<k),
\end{aligned}
\end{equation}
where $\hat J_{\pm}$ are the raising/lowering operators within the corresponding $j$-subspace. These multipole operators are further projected onto the irreducible representations (IRs) $\Gamma$ of the $D_{4h}$ point group of Sr$_2$RuO$_4$ and denoted $\hat Q^{j\Gamma\alpha}$ for the $\alpha$-th component of $\Gamma$ \cite{Kusunose2008,Watanabe2018}. Table \ref{tab1} lists all multipole operators for the $j=3/2$ and $5/2$ manifolds of the Ru-$4d$ electrons. The electric multipoles (even rank, time-reversal symmetric) appear in the top part of the table, and the magnetic multipoles (odd rank, time-reversal antisymmetric) in the bottom part \cite{Ikeda2012,Watanabe2018}. More details on the definition of these multipole operators can be found in Appendix A.
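The recursion in Eq. (1) is straightforward to implement numerically. The sketch below (an illustrative implementation; the function names are ours, not from any library) builds the $(2j+1)\times(2j+1)$ tensor operators $\hat J_{kq}$ for a given $j$, starting from $\hat J_{kk}\propto(\hat J_+)^k$ and lowering with $[\hat J_-,\hat J_{kq}]$; the Hermitian multipoles of Table \ref{tab1} then follow as real combinations of these $\hat J_{kq}$.

```python
import numpy as np
from math import sqrt

def angular_momentum(j):
    """Matrices J_z, J_+, J_- in the basis m = j, j-1, ..., -j."""
    d = int(round(2 * j)) + 1
    m = j - np.arange(d)
    jz = np.diag(m)
    jp = np.zeros((d, d))
    for i in range(1, d):  # raise m[i] -> m[i] + 1 = m[i-1]
        jp[i - 1, i] = sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    return jz, jp, jp.T

def tensor_operators(j, k):
    """J_{kq}, q = k, ..., -k, via J_{kk} and the lowering recursion of Eq. (1)."""
    jz, jp, jm = angular_momentum(j)
    # double-factorial prefactor (2k-1)!!/(2k)!!
    num = np.prod(np.arange(2 * k - 1, 0, -2)) if k > 0 else 1.0
    den = np.prod(np.arange(2 * k, 0, -2)) if k > 0 else 1.0
    jkq = {k: (-1) ** k * sqrt(num / den) * np.linalg.matrix_power(jp, k)}
    for q in range(k, -k, -1):  # [J_-, J_{kq}] = sqrt((k+q)(k-q+1)) J_{k,q-1}
        jkq[q - 1] = (jm @ jkq[q] - jkq[q] @ jm) / sqrt((k + q) * (k - q + 1))
    return jkq
```

For instance, for any $j$ the construction returns $\hat J_{10}=\hat J_z$, and each $\hat J_{kq}$ obeys the defining property $[\hat J_z,\hat J_{kq}]=q\,\hat J_{kq}$.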
\begin{table}
\caption{\label{tab1}Multipole operators classified according to the irreducible representations $\Gamma$ of the $D_{4h}$ point group based on the operator-equivalent technique. The $j=5/2$ manifold contains all listed operators from rank 0 to rank 5 (monopole $\mathds{1}$; dipole $J$; quadrupole $O$; octupole $T$; hexadecapole $H$; dotriacontapole $D$), while the $j=3/2$ manifold only has multipoles up to rank 3 ($\mathds{1}$, $J$, $O$, $T$). For simplicity, we use the same symbols for both $j$-spaces, although they have in principle different bases and representation matrices. The subscript $g$ marks inversion-symmetric representations and the superscripts $+/-$ denote time-reversal symmetric/antisymmetric ones. The subscripts of the multipole operators are related to the tesseral harmonics in the $O_h$ group or cubic harmonics \cite{Ikeda2012,Kusunose2008,Lage1947}. More details are explained in Appendix A.}
\renewcommand\arraystretch{1.5}
\centering
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ccc}
\hline
\hline
& IR ($\Gamma$) & Multipole operator $\hat Q^{j\Gamma\alpha}$ \\
\hline
\multirow{5}{*}{Electric} & $A_{1g}^+$ & $\hat{\mathds{1}}$, $\hat O_{20}$, $\hat H_0$, $\hat H_4$ \\
& $A_{2g}^+$ & $\hat H_{za}$ \\
& $B_{1g}^+$ & $\hat O_{22}$, $\hat H_2$ \\
& $B_{2g}^+$ & $\hat O_{xy}$, $\hat H_{zb}$ \\
& $E_{g}^+$ & $(\hat O_{xz},\hat O_{yz})$, $(\hat H_{xa},\hat H_{ya})$, $(\hat H_{xb},\hat H_{yb})$ \\
\hline
\multirow{6}{*}{Magnetic} & $A_{1g}^-$ & $\hat D_4$ \\
& $A_{2g}^-$ & $\hat J_z$, $\hat T_{za}$, $\hat D_{za1}$, $\hat D_{za2}$\\
& $B_{1g}^-$ & $\hat T_{xyz}$, $\hat D_2$ \\
& $B_{2g}^-$ & $\hat T_{zb}$, $\hat D_{zb}$ \\
& \multirow{2}{*}{$E_{g}^-$} & $(\hat J_x,\hat J_y)$, $(\hat T_{xa},\hat T_{ya})$, $(\hat T_{xb},\hat T_{yb})$, \\
& & $(\hat D_{xa1},\hat D_{ya1})$, $(\hat D_{xa2},\hat D_{ya2})$, $(\hat D_{xb},\hat D_{yb})$ \\
\hline
\hline
\end{tabular}
}
\end{table}
We then write down a general interaction containing all symmetry-allowed multipole fluctuations as potential superconducting pairing glues:
\begin{equation}
\begin{aligned}
H_{\text{int}} = & - \sum_{j,\Gamma}{\sum_{\alpha \beta}}\sum_{{\bf{q}}} g^{j\Gamma}_{\alpha\beta} V^{j\Gamma}({\bf{q}}) \hat Q^{j\Gamma\alpha\,\dag}({\bf{q}}) \hat Q^{j\Gamma\beta}({\bf{q}})
\\
= & - \sum_{j,\Gamma}{\sum_{\alpha\beta}}\sum_{{\bf{q}},{\bf{k}},{\bf{k}}'} \sum_{lml'm'} g^{j\Gamma}_{\alpha\beta} V^{j\Gamma}({\bf{q}}) Q^{j\Gamma\alpha*}_{ml}Q^{j\Gamma\beta}_{l'm'}
\\
& \times c^{\dag}_{l,{\bf{k}}-{\bf{q}}} c_{m,{\bf{k}}} c^{\dag}_{l',{\bf{k}}'+{\bf{q}}} c_{m',{\bf{k}}'},
\end{aligned}
\end{equation}
where $\hat Q^{j\Gamma\alpha}({\bf{q}}) =\sum_{{\bf{k}},lm}Q^{j\Gamma\alpha}_{lm}c^{\dag}_{l,{\bf{k}}+{\bf{q}}} c_{m,{\bf{k}}}$ and $c_{m,\bf{k}}$ ($c_{m,\bf{k}}^\dag$) is the electron annihilation (creation) operator with $\bf{k}$ being the momentum and $m$ the $z$-projection of the total angular momentum $j$. The matrix elements $Q^{j\Gamma\alpha}_{lm}$ are normalized with $Q^{j\Gamma\alpha}_{lm} \rightarrow Q^{j\Gamma\alpha}_{lm} / \sqrt{\sum_{l'm'} |Q^{j\Gamma\alpha}_{l'm'}|^2}$ for comparison of different multipole fluctuations, $V^{j\Gamma}(\bf{q})$ is the momentum-dependent interaction vertex, and $g^{j\Gamma}_{\alpha\beta}$ controls the fluctuation strength between the multipole components $\alpha$ and $\beta$, as illustrated in Fig. \ref{fig1}(a). The values of $g^{j\Gamma}_{\alpha\beta}$ are highly restricted, since the multipole product must transform as the identity representation to preserve the overall symmetry of the Hamiltonian; thus only multipoles belonging to the same IR can couple. For the two-dimensional IR $E^{\pm}_{g}$, this projection yields
$(\hat Q^{j\Gamma\alpha}_{x}\hat Q^{j\Gamma\beta}_{x}+\hat Q^{j\Gamma\alpha}_{y}\hat Q^{j\Gamma\beta}_{y})/2$, which will be denoted as $\hat Q^{j\Gamma\alpha}_{r}\hat Q^{j\Gamma\beta}_{r}$ for simplicity. In total, symmetry allows 6 electric and 11 magnetic multipole fluctuation channels in the $j=3/2$ manifold, and 23 electric and 38 magnetic channels in the $j=5/2$ manifold of Sr$_2$RuO$_4$. For clarity, we arrange them according to their IR and rank. For example, the 6 electric multipole channels for $j=3/2$ are listed as $\hat{\mathds{1}}\hat{\mathds{1}}$, $\hat{\mathds{1}}\hat O_{20}$, $\hat O_{20}\hat O_{20}$, $\hat O_{22}\hat O_{22}$, $\hat O_{xy}\hat O_{xy}$ and $\hat O_{rz}\hat O_{rz}$.
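The normalization quoted above amounts to dividing each multipole matrix by its Frobenius norm, so that $\sum_{lm}|Q^{j\Gamma\alpha}_{lm}|^2=1$ and different channels enter the interaction on an equal footing. A minimal sketch (illustrative only):

```python
import numpy as np

def normalize_multipole(q_mat):
    """Frobenius-normalize a multipole matrix so that sum_{lm} |Q_lm|^2 = 1."""
    return q_mat / np.sqrt(np.sum(np.abs(q_mat) ** 2))

# example: the dipole J_z in the j = 3/2 manifold, diag(3/2, 1/2, -1/2, -3/2)
jz = np.diag([1.5, 0.5, -0.5, -1.5])
jz_n = normalize_multipole(jz)
```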
\begin{figure}
\begin{center}
\includegraphics[width=8.6 cm]{fig1.eps}
\caption{(a) Illustration of the multipole interaction $\hat Q^{j\Gamma\alpha}\hat Q^{j\Gamma\beta}$. (b) The Feynman diagram of the anomalous self-energy $\psi_{\mu\eta\bar\eta}$ from multipole pairing interactions within the Eliashberg framework. We use $k=({\bf{k}},i\omega_n)$ for simplicity. (c) The 3D Fermi surfaces with $(d_{\text{xz}},d_{\text{yz}},d_{\text{xy}})$ orbital characters derived from the TB Hamiltonian $H_K$. (d) Orbital-resolved band structures along a high-symmetry line on the $k_z=0$ plane of the Brillouin zone. The inset shows the colors for three orbitals.}
\label{fig1}
\end{center}
\end{figure}
The above procedures lay out a general phenomenological framework for studying electron pairing induced by multipole fluctuations. To apply it to Sr$_2$RuO$_4$, we consider the following three dimensional (3D) tight-binding (TB) model, $H_K=H_0+H_z$, where $H_0=\sum_{{\bf{k}},s} \psi^\dag_{s}({\bf{k}})h_0({\bf{k}},s)\psi_{s}({\bf{k}})$ describes the $k_z$-independent band structure from ARPES measurements \cite{Zabolotnyy2013}. $\psi_{s}({\bf{k}})=[c_{\text{xz},s}({\bf{k}}),c_{\text{yz},s}({\bf{k}}),c_{\text{xy},-s}({\bf{k}})]^T$ is the basis of the low-lying Ru-$4d$ $t_{2g}$ orbitals $(d_{\text{xz}},d_{\text{yz}},d_{\text{xy}})$. We have
\begin{equation}
h_0({\bf{k}},s) =
\begin{pmatrix}
\epsilon^{\text{xz}}_{{\bf{k}}}-\mu_0 & \epsilon^{\text{off}}_{{\bf{k}}}-is\lambda_{\text{SOC}} & i\lambda_{\text{SOC}} \\
\epsilon^{\text{off}}_{{\bf{k}}}+is\lambda_{\text{SOC}} & \epsilon^{\text{yz}}_{{\bf{k}}}-\mu_0 & -s\lambda_{\text{SOC}} \\
-i\lambda_{\text{SOC}} & -s\lambda_{\text{SOC}} & \epsilon^{\text{xy}}_{{\bf{k}}}-\mu_0 \\
\end{pmatrix},
\end{equation}
with $s=\pm$ for the spin and
\begin{equation}
\begin{aligned}
& \epsilon^{\text{xz}}_{{\bf{k}}} = -2t_1\cos(k_x)-2t_2\cos(k_y), \\
& \epsilon^{\text{yz}}_{{\bf{k}}} = -2t_2\cos(k_x)-2t_1\cos(k_y), \\
& \epsilon^{\text{xy}}_{{\bf{k}}} = -2t_3(\cos(k_x)+\cos(k_y)) -4t_4\cos(k_x)\cos(k_y) \\
&\qquad - 2t_5(\cos(2k_x)+\cos(2k_y)), \\
& \epsilon^{\text{off}}_{{\bf{k}}} = -4t_6\sin(k_x)\sin(k_y).
\end{aligned}
\end{equation}
The $H_z$ term describes hopping along the $z$-direction and is introduced to capture out-of-plane pairing such as $d_{xz}$ and $d_{yz}$. In the same basis $\psi_{s}({\bf{k}})$, it is orbital-independent and takes the form,
\begin{equation}
H_z({\bf{k}}) = -8t_0 \cos(k_x/2)\cos(k_y/2)\cos(k_z/2).
\end{equation}
The best ARPES fit yields $[t_1,t_2,t_3,t_4,t_5,t_6,\mu_0,\lambda_{\text{SOC}}]=[0.145,0.016,0.081,0.039,0.05,0,0.122,0.032]$ eV \cite{Zabolotnyy2013}. We choose $t_0=0.01$ eV so that $t_0/t_1$ agrees with a previous study \cite{Pavarini2006}. The resulting 3D Fermi surfaces are plotted in Fig. \ref{fig1}(c), and the 2D orbital-resolved band structures are shown in Fig. \ref{fig1}(d) along a high-symmetry line within the $k_z=0$ plane of the Brillouin zone.
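For concreteness, the TB Hamiltonian of Eqs. (3)-(5) can be assembled numerically as in the sketch below (a minimal illustration, with the quasi-one-dimensional dispersions assigned to $d_{xz}/d_{yz}$ and the broad in-plane dispersion to $d_{xy}$, following the standard Sr$_2$RuO$_4$ parametrization); diagonalizing it on a ${\bf{k}}$ mesh reproduces the Fermi surfaces of Fig. \ref{fig1}(c).

```python
import numpy as np

# ARPES-fitted parameters (eV)
t1, t2, t3, t4, t5, t6 = 0.145, 0.016, 0.081, 0.039, 0.05, 0.0
mu0, soc, t0 = 0.122, 0.032, 0.01

def h_k(kx, ky, kz, s=+1):
    """3x3 Bloch Hamiltonian in the basis (d_xz, d_yz, d_xy) for spin s = +/-1."""
    e_xz = -2*t1*np.cos(kx) - 2*t2*np.cos(ky)          # quasi-1D band
    e_yz = -2*t2*np.cos(kx) - 2*t1*np.cos(ky)          # quasi-1D band
    e_xy = (-2*t3*(np.cos(kx) + np.cos(ky))            # broad in-plane band
            - 4*t4*np.cos(kx)*np.cos(ky)
            - 2*t5*(np.cos(2*kx) + np.cos(2*ky)))
    e_off = -4*t6*np.sin(kx)*np.sin(ky)
    hz = -8*t0*np.cos(kx/2)*np.cos(ky/2)*np.cos(kz/2)  # interlayer hopping H_z
    h = np.array([
        [e_xz - mu0,        e_off - 1j*s*soc,  1j*soc    ],
        [e_off + 1j*s*soc,  e_yz - mu0,       -s*soc     ],
        [-1j*soc,          -s*soc,             e_xy - mu0],
    ], dtype=complex)
    return h + hz*np.eye(3)

def bands(kx, ky, kz, s=+1):
    """Band energies (ascending) at a given momentum."""
    return np.linalg.eigvalsh(h_k(kx, ky, kz, s))
```

Each ${\bf{k}}$ point yields three doubly (pseudospin) degenerate bands, consistent with the Kramers degeneracy assumed in the Eliashberg treatment below.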
The above TB Hamiltonian allows us to estimate the leading multipole fluctuations, which cannot currently be extracted from experiment \cite{Sidis1999,Servant2000,Steffens2019}. As shown in Appendix B, calculations based on the random phase approximation (RPA) for $j=3/2$ yield two leading multipole correlations, $\langle \hat J_z\hat J_z\rangle$ and $\langle \hat T_{ra}\hat T_{ra}\rangle$, in the AFM channel and three leading diagonal correlations, $\langle\hat J_z\hat J_z\rangle$, $\langle\hat T_{ra}\hat T_{ra}\rangle$, and $\langle\hat T_{rb}\hat T_{rb}\rangle$, in the FM channel. By contrast, electric multipole fluctuations increase only slightly for $\alpha_S<1$, implying the absence of an electric instability. The leading multipoles are expected to dominate the pairing interaction, but other components should also be present and contribute substantially.
Quite often, however, RPA cannot properly describe multipole fluctuations in real materials with strong electronic correlations. For a systematic analysis of the electron pairing, we disentangle all multipole components allowed by symmetry and assume an empirical form of the interaction vertex \cite{Millis1990,Monthoux1991,Li2018,Li2019}:
\begin{equation}
V^{j\Gamma}({\bf{q}},i\nu_n) = \frac{1}{1+[{\bm{\xi}}\cdot({\bf{q}}-{\bf{Q}})]^2+|\nu_n|/\omega_{\bf q}},
\label{vertex}
\end{equation}
where $\nu_n$ is the bosonic Matsubara frequency, ${\bm{\xi}}=(\xi_{xy},\xi_{xy},\xi_{z})$ is the anisotropic correlation length of the corresponding multipole fluctuations, $\omega_{\bf q}$ is the fluctuation energy, and ${\bf{Q}}$ is the characteristic wave vector for AFM, FM, or electric fluctuations. These quantities may in principle vary with $j$ and $\Gamma$; here we have dropped the labels for simplicity.
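Numerically, the vertex of Eq. (\ref{vertex}) is a simple rational function of momentum and frequency; a minimal sketch (the parameter values in the test are the INS estimates quoted later, used here purely for illustration):

```python
import numpy as np

def vertex(q, nu_n, Q, xi, omega_q):
    """Empirical multipole-fluctuation vertex V(q, i nu_n) of Eq. (6).

    q, Q : 3-component wave vectors
    xi   : anisotropic correlation lengths (xi_xy, xi_xy, xi_z)
    """
    dq = np.asarray(xi) * (np.asarray(q) - np.asarray(Q))
    return 1.0 / (1.0 + np.dot(dq, dq) + abs(nu_n) / omega_q)
```

The vertex is peaked (equal to 1) at ${\bf q}={\bf Q}$ and $\nu_n=0$, and decays both away from ${\bf Q}$ on the scale $1/\xi$ and with increasing $|\nu_n|$ on the scale $\omega_{\bf q}$.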
\section{Eliashberg equations}
Candidate pairing symmetries of the superconductivity can be analyzed using the linearized Eliashberg equations \cite{Monthoux1992,Li2018}:
\begin{equation}
\begin{aligned}
Z_{\mu}({\bf{k}},i\omega_n) = & 1 + \frac{\pi T}{\omega_n} \sum_{\mu',n'} \oint_{\text{FS}_{\mu'}}\frac{\text{d}{\bf{k}}'}{(2\pi)^3 v_{{\bf{k}}'_{\text{F}}}}\text{sgn}(\omega_{n'})
\\
& \times K^{\text{N}}_{\mu\mu'}({\bf{k}},i\omega_{n};{\bf{k}}',i\omega_{n'}),
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\lambda \psi_{\mu}({\bf{k}},i\omega_n) = & \pi T\sum_{\mu',n'} \oint_{\text{FS}_{\mu'}}\frac{\text{d}{\bf{k}}'}{(2\pi)^3 v_{{\bf{k}}'_{\text{F}}}}\psi_{\mu'}({\bf{k}}',i\omega_{n'})
\\
& \times \frac{K^{\text{A}}_{\mu\mu'}({\bf{k}},i\omega_{n};{\bf{k}}',i\omega_{n'})}{|\omega_{n'}Z_{\mu'}({\bf{k}}',i\omega_{n'})|},\label{gap}
\end{aligned}
\end{equation}
where $\omega_{n}$ and $\omega_{n'}$ denote the fermionic Matsubara frequencies, $\mu$ and $\mu'$ are band indices, the integral with FS$_{\mu'}$ is over the Fermi surface of band $\mu'$ with corresponding Fermi velocity $v_{{\bf{k}}'_{\text{F}}}$, $Z_{\mu}$ is the renormalization function, and $\psi_{\mu}=\Delta_{\mu}Z_{\mu}$ is the anomalous self-energy related to the gap function $\Delta_\mu$. All bands are doubly degenerate with pseudospin $\eta=\pm$ and we only consider intraband pairing (singlet or triplet over pseudospin). Figure \ref{fig1}(b) shows the Feynman diagram for the anomalous self-energy $\psi_\mu$. The kernel functions $K^{\text{N}}_{\mu\mu'}$ and $K^{\text{A}}_{\mu\mu'}$ are given by
\begin{widetext}
\begin{equation}
\begin{aligned}
K^{\text{N}}_{\mu\mu'}({\bf{k}},i\omega_{n};{\bf{k}}',i\omega_{n'}) = &\sum_{lml'm'\eta\eta'}\sum_{j\Gamma}{\sum_{\alpha\beta}} g^{j\Gamma}_{\alpha\beta} V^{j\Gamma}({\bf{k}}-{\bf{k}}',i\omega_{n}-i\omega_{n'}) Q_{ml}^{j\Gamma\alpha*}Q_{l'm'}^{j\Gamma\beta}
u_{jl,\mu\eta}^{{\bf{k}}*}u_{jm,\mu'\eta'}^{{\bf{k}}'}u_{jl',\mu'\eta'}^{{\bf{k}}'*}u_{jm',\mu\eta}^{{\bf{k}}},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
K^{\text{A}}_{\mu\mu'}({\bf{k}},i\omega_{n};{\bf{k}}',i\omega_{n'}) = &
\sum_{lml'm'\eta}\sum_{j\Gamma}{\sum_{\alpha\beta}} g^{j\Gamma}_{\alpha\beta}[ V^{j\Gamma}({\bf{k}}-{\bf{k}}',i\omega_{n}-i\omega_{n'})Q_{ml}^{j\Gamma\alpha*}Q_{l'm'}^{j\Gamma\beta} + V^{j\Gamma}({\bf{k}}+{\bf{k}}',i\omega_{n}+i\omega_{n'})Q_{m'l}^{j\Gamma\alpha*}Q_{l'm}^{j\Gamma\beta} ]
\\
&\times u_{jl,\mu\eta}^{{\bf{k}}*}u_{jm,\mu'\eta}^{{\bf{k}}'}u_{jl',\mu\bar\eta}^{-{\bf{k}}*}u_{jm',\mu\bar\eta}^{-{\bf{k}}'},
\end{aligned}
\end{equation}
\end{widetext}
where $\hat u^{{\bf{k}}}$ is the matrix diagonalizing the 3D (2D) TB Hamiltonian $H_K$ ($H_0$), projected in the $j$ representation. The linearized Eliashberg equations are then solved numerically by approximating $\Delta_{\mu}({\bf{k}})\equiv\Delta_{\mu}({\bf{k}},i\omega_{n})\approx\Delta_{\mu}({\bf{k}},i\pi T_c)$ and using 1024 Matsubara frequencies, with $41\times41\times41$ ${\bf{k}}$ meshes in the 3D Brillouin zone or $201\times201$ ${\bf{k}}$ meshes in the 2D Brillouin zone. Each eigenvector of Eq. (\ref{gap}) corresponds to a candidate pairing state and gives the corresponding gap structure on the Fermi surfaces; the largest eigenvalue $\lambda$ at $T_c$ determines the leading one.
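The leading eigenpair of the linearized gap equation can be extracted by power iteration on the pairing kernel. The toy sketch below is not the full multi-band kernel $K^{\text{A}}_{\mu\mu'}$ above, but a separable single-band kernel on a discretized circular Fermi surface; it illustrates the procedure, returning a $\cos 2\phi$ ($d_{x^2-y^2}$-like) eigenvector for a kernel attractive in that channel.

```python
import numpy as np

def leading_pairing(kernel, n_iter=200, seed=0):
    """Largest-eigenvalue solution of lambda * psi = K @ psi by power iteration."""
    rng = np.random.default_rng(seed)
    psi = rng.standard_normal(kernel.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        psi_new = kernel @ psi
        lam = np.linalg.norm(psi_new)
        psi = psi_new / lam
    return lam, psi

# toy separable kernel on a discretized circular Fermi surface:
phi = np.linspace(0, 2*np.pi, 256, endpoint=False)
dphi = phi[1] - phi[0]
K = np.cos(2*phi)[:, None] * np.cos(2*phi)[None, :] * dphi  # d_{x^2-y^2} channel
lam, psi = leading_pairing(K)
```

For this separable kernel the exact leading eigenvalue is $\int_0^{2\pi}\cos^2 2\phi\,d\phi=\pi$, which the iteration reproduces on the discrete grid.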
\begin{figure}[t]
\centering
\includegraphics[width=8.6 cm]{fig2.eps}
\caption{Eigenvalues of five major pairing states, $s$ ($A_{1g}$), $d_{x^2-y^2}$ ($B_{1g}$), $(p_x,p_y)$ ($E_u$), $g$ ($A_{2g}$), and $(d_{xz},d_{yz})$ ($E_g$), for individual pairing interactions in $j=3/2$ from (a) 11 AFM multipole channels with $\xi^{\text{AFM}}_{xy}=9.7$ {\AA}, $\omega_{\bf q}=\omega^{\text{AFM}}_0=11.1$ meV, and ${\bf{Q}}_{\text{AFM}}=(0.3,0.3,0)$; (b) 11 FM multipole channels with $\xi^{\text{FM}}_{xy}=2.5$ {\AA}, $\omega_{\bf q}=v_0|{\bf q}|$, and ${\bf{Q}}_{\text{FM}}=(0,0,0)$; (c) 6 electric multipole fluctuation channels with $\xi^{\text{E}}_{xy}=\xi^{\text{AFM}}_{xy}$, $\omega_{\bf q}^{\text{E}}=\omega_0^{\text{AFM}}$, and ${\bf{Q}}_{\text{E}}=(0.2,0.2,0)$. (d) Eigenvalues of 5 major pairing states for averaged electric multipole fluctuations as a function of ${\bf{Q}}_{\text{E}}$ along $(h,h,0)$ direction. The $s$-wave state always dominates and has a maximal eigenvalue around ${\bf{Q}}_{\text{E}}=(0.2,0.2,0)$. The table on the bottom lists all multipole fluctuation components for $j=3/2$, sorted according to their IR and rank. The longitudinal correlation length is set to $\xi_{z}=0.1\xi_{xy}$.}
\label{fig2}
\end{figure}
\section{Individual pairing interactions}
For clarity, we first discuss the consequences of individual multipole fluctuation channels and focus on $j=3/2$, which has the lower SOC energy. Figure \ref{fig2} compares the five major pairing states with the largest eigenvalues induced by each multipole channel for the 3D Hamiltonian of Sr$_2$RuO$_4$. The eigenvalues of all other pairing states, such as $d_{xy}$, are much smaller and therefore not shown. The parameter $g^{j\Gamma}_{\alpha\beta}$ is normalized so that all multipole fluctuations are treated equally for comparison.
For AFM fluctuations, inelastic neutron scattering (INS) experiments estimate $\xi^{\text{AFM}}_{xy}=9.7$ {\AA} and $\omega_{\bf q}=\omega^{\text{AFM}}_0=11.1$ meV at the AFM wave vector ${\bf{Q}}_{\text{AFM}}=(0.3,0.3,0)$ \cite{Sidis1999,Servant2000,Steffens2019}. The longitudinal correlation length is set to $\xi^{\text{AFM}}_z=0.1\xi^{\text{AFM}}_{xy}$ to reflect the absence of a $z$-axis signal \cite{Servant2000}. Most of the 11 AFM multipole fluctuation channels support $d_{x^2-y^2}$ or $s$. The two leading fluctuation channels from the RPA analysis, $\hat J_{z}\hat J_{z}$ and $\hat T_{ra}\hat T_{ra}$, give predominant $d_{x^2-y^2}$-wave pairing. The subordinate channels $\hat J_{z}\hat T_{za}$, $\hat T_{xyz}\hat T_{xyz}$, $\hat T_{ra}\hat T_{rb}$, and $\hat T_{rb}\hat T_{rb}$ also support $d_{x^2-y^2}$, while the subordinate $\hat T_{zb}\hat T_{zb}$, $\hat J_r\hat J_r$, $\hat J_r\hat T_{ra}$, and $\hat J_r\hat T_{rb}$ favor $s$-wave and $\hat T_{za}\hat T_{za}$ favors $(p_x,p_y)$ or $p_x+ip_y$. All these pairing states are $m_z$-symmetric, i.e., symmetric about the $k_z=0$ plane. The $m_z$-antisymmetric $d_{xz}+id_{yz}$ or $(d_{xz},d_{yz})$ pairing state can also be obtained but is not favored for the TB Hamiltonian, which is quasi-2D and weakly dispersive along the $k_z$ direction. This conclusion is robust against reasonable tuning of $t_0$ and $\xi_z$; the results for $\xi_z=\xi_{xy}$ are summarized in Appendix C.
FM pairing interactions have previously been considered because Sr$_2$RuO$_4$ has an electronic structure similar to those of the itinerant ferromagnets SrRuO$_{3}$ and Sr$_{4}$Ru$_{3}$O$_{10}$ and the metamagnet Sr$_{3}$Ru$_{2}$O$_{7}$ \cite{Bergemann2003,Mackenzie2017}. A PNS experiment has reported a broad FM response \cite{Steffens2019}, giving ${\bf{Q}}_{\text{FM}}=(0,0,0)$, $\xi^{\text{FM}}_{xy}=2.5$ {\AA}, and a characteristic energy $\omega_0^{\text{FM}}=15.5$ meV. Since there are no experimental data for the spin-wave dispersion, we use $\omega_{\bf q}=v_0|{\bf q}|$ and choose $v_0$ such that $\omega_{\bf q}$ reaches the order of $\omega_0^{\text{FM}}$ at the zone boundary; a slight variation of $v_0$ causes no qualitative change in our main conclusions. Figure \ref{fig2}(b) shows the typical results for the five major pairing states induced by FM pairing interactions. Similarly, we find predominant $d_{x^2-y^2}$-wave pairing from the leading dipole fluctuations $\hat J_{z}\hat J_{z}$ and $p$-wave from the leading octupoles $\hat T_{ra}\hat T_{ra}$ and $\hat T_{rb}\hat T_{rb}$, while the $s$-wave is supported by some subordinate multipole channels.
Electric fluctuations may arise from the multi-orbital nature of Sr$_2$RuO$_4$ \cite{Raghu2010,Mravlje2011,Acharya2017,Boehnke2018,Acharya2019} and have a similar interaction vertex to the AFM ones, but with ${\bf{Q}}_{\text{E}}=(0.2,0.2,0)$, $\xi^{\text{E}}_{xy}=\xi^{\text{AFM}}_{xy}$, and $\omega_0^{\text{E}}=\omega_0^{\text{AFM}}$ \cite{Acharya2019}. As shown in Fig. \ref{fig2}(c), all six multipole channels support $s$-wave pairing, which is robust under tuning of $\xi^{\text{E}}_{xy}$ and $\omega_0^{\text{E}}$. Figure \ref{fig2}(d) plots the eigenvalues of the five major pairing states as a function of ${\bf{Q}}_{\text{E}}$ along the (110) direction. The $s$-wave pairing always has a much larger eigenvalue than the others, as expected, since superconductivity induced by charge fluctuations is typically $s$-wave. Quite interestingly, its eigenvalue reaches a maximum around ${\bf{Q}}_{\text{E}}=(0.2,0.2,0)$, exactly the wave vector proposed by the RPA charge susceptibility \cite{Acharya2019}, implying a potential role of electric multipole fluctuations in superconducting Sr$_2$RuO$_4$.
We have performed similar calculations for $j=5/2$ and the results are summarized in Appendix D, showing that $d_{x^2-y^2}$, $s$ and $p_x+ip_y$ are still the major pairing states. In all cases, the $m_z$-antisymmetric $d_{xz}+id_{yz}$ is not favored \cite{Benhabib2021,Ghosh2021}.
\section{Mixed pairing interactions}
Figure \ref{fig2} seems to suggest that the $g$-wave always has a much smaller eigenvalue and cannot become accidentally degenerate with $d_{x^2-y^2}$ to realize the $d_{x^2-y^2}+ig$ pairing. This is not the case. Different multipole fluctuations may coexist in real materials, so we must consider their possible combinations, namely, a mixed pairing interaction of AFM, FM, and electric multipole fluctuations such as
\begin{equation}
V_{\text{mix}} = r_1 g^{j\Gamma}_{\text{AFM}} V^{\text{AFM}} + r_2 g^{j\Gamma}_{\text{FM}} V^{\text{FM}} + r_3 g^{j\Gamma}_{\text{E}} V^{\text{E}},
\end{equation}
where $g^{j\Gamma}_{\text{AFM}}$ and $g^{j\Gamma}_{\text{FM}}$ are non-zero for magnetic fluctuations, and $g^{j\Gamma}_{\text{E}}$ is non-zero for electric ones. For simplicity, we average each term over its respective multipole components. The coefficients $r_i$ control the relative strengths of the three channels. All parameters are set as in the previous section.
\begin{figure}
\begin{center}
\includegraphics[width=8 cm]{fig3.eps}
\caption{(a) Eigenvalues of $s/s'$, $g$ and $d_{x^2-y^2}$ pairing states as a function of the ratio $r_3/r_1$ at $r_2/r_1=0.6$ and $r_2/r_1$ at $r_3/r_1=0.6$ for a mixed AFM, FM, and electric pairing interaction. (b) Theoretical phase diagram of predominant pairing states on the plane of $r_2/r_1$ and $r_3/r_1$. The insets show corresponding gap structures in each region.}
\label{fig3}
\end{center}
\end{figure}
Since $d_{xz}+id_{yz}$ has been excluded, we focus on the 2D Hamiltonian $H_0$ to reduce the numerical effort. The resulting phase diagram is plotted in Fig. \ref{fig3} for $j=3/2$, together with two examples of the eigenvalues of the three major pairing states and their variation with the ratio $r_2/r_1$ or $r_3/r_1$. The $d_{x^2-y^2}$-wave extends from the origin ($r_2=r_3=0$) to cover a major part of the phase diagram with dominant AFM multipole fluctuations ($r_2/r_1<0.5$, $r_3/r_1<1$). A nodal $s$-wave can be induced by a moderate FM pairing interaction ($r_2/r_1>0.5$), while for a strong electric interaction ($r_3/r_1>1$) we find a predominant nodeless $s$-wave with gap minima on the $\gamma$ band. For distinction, we use $s'$ to denote the nodal $s$-wave in the following. As discussed earlier, $d_{x^2-y^2}$ and $s$ (or $s'$) are the major pairing states for pure AFM, FM, or electric multipole fluctuations. But quite surprisingly, Fig. \ref{fig3} shows that $g$-wave pairing can become dominant over a large portion of the phase diagram where FM and electric multipole fluctuations are comparable in strength to the AFM ones.
Under this premise, accidentally degenerate $d_{x^2-y^2}+ig$ pairing may appear at the phase boundary with a somewhat weaker FM pairing interaction than the AFM one, namely $r_2/r_1\approx 0.5$. However, a moderate electric pairing interaction ($0.5<r_3/r_1<2$) is also required. If electric fluctuations are too weak, a two-component $s'+id_{x^2-y^2}$ state might appear as an alternative candidate. In any case, electric multipole fluctuations, such as the 0-rank charge fluctuations, seem to play a crucial role for $d_{x^2-y^2}+ig$ to actually appear in Sr$_2$RuO$_4$, and should be better examined by future X-ray diffraction or Raman experiments \cite{Feng2012,Croft2014,Gallais2013,Xi2015,Thorsmolle2016}.
The emergence of the $g$-wave is robust for such mixed AFM, FM, and electric pairing interactions. Appendix E shows the phase diagrams for two other examples of averaged pairing interactions: in the first, we include $j=5/2$ and take an averaged pairing interaction over all multipole fluctuations; in the second, we keep $j=3/2$ but consider only the leading multipole components in the averaged AFM and FM interactions. In both cases, the phase diagrams are similar to Fig. \ref{fig3}(b), and the $g$-wave state covers a large portion of the phase diagram for mixed AFM, FM, and electric pairing interactions, where the $d_{x^2-y^2}$ and $s$ (or $s'$) solutions supported by the leading multipole fluctuations are suppressed. If these leading fluctuations dominate the pairing interaction, the phase boundaries shift slightly towards larger $r_3/r_1$. We conclude that the competition and interplay of AFM, FM, and electric multipole fluctuations may provide a mechanism for this unusual $g$-wave pairing in Sr$_2$RuO$_4$. Whether or not this reflects the true situation in real materials requires future experimental scrutiny of their relative weights.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.2 cm]{fig4.eps}
\caption{Typical gap structures of (a) $d_{x^2-y^2}$, (b) $s'$, and (c) $g$-wave pairing states and their evolutions with the azimuth $\phi$. (d) Gap magnitude of a typical $s'+id_{x^2-y^2}$ pairing state constructed from (a,b) as a function of the azimuth $\phi$, showing minima along the zone diagonal ($\phi=\pm \pi/4$ and $\pm 3\pi/4$) on all three bands. (e) Gap magnitude of a typical $d_{x^2-y^2}+ig$ pairing state from (b,c) as a function of the azimuth $\phi$, showing the symmetry-protected nodes along the zone diagonal.}
\label{fig4}
\end{center}
\end{figure}
To illustrate the gap structures of these pairing states, Fig. \ref{fig4} presents their projections on the 2D Fermi surfaces and their evolution with the azimuth $\phi$. Both $d_{x^2-y^2}$ and $g$ show clear nodes on all three bands along the zone diagonal ($\phi=\pm \pi/4$ and $\pm 3\pi/4$). The resulting $d_{x^2-y^2}+ig$ gap has nodes along the zone diagonal, which are protected by symmetry and fit well the STM data \cite{Sharma2020}. Quite unexpectedly, we also find that the $s'$-wave can change sign or have gap minima near the zone diagonal. This interesting feature arises from the particular orbital character of the three bands. Along the diagonal direction, the $\alpha$ and $\beta$ bands contain no contribution from $|j=3/2,j_z=\pm1/2\rangle$, and the $\gamma$ band contains no contribution from $|j=3/2,j_z=\pm3/2\rangle$. Hence, an $s$-wave pairing supported by multipole fluctuations with only $j_z=\pm1/2$ or $\pm3/2$ components must have nodes along the zone diagonal on the $\alpha/\beta$ or $\gamma$ bands, respectively, which turn into gap minima once other $j_z$ contributions are included. As a result, the two-component $s'+id_{x^2-y^2}$ also exhibits gap minima near the zone diagonal. For the above averaged pairing interaction, we find relative gap ratios $|\Delta_{\text{min}}/\Delta_{\text{max}}|\approx0.11$, 0.01, and 0.16 for the $\alpha$, $\beta$, and $\gamma$ bands, respectively. Note that the previous STM experiment has an energy resolution of about 75 $\mu$eV, roughly 21\% of the measured gap of 350 $\mu$eV \cite{Sharma2020}. Hence, within this resolution, it might not be possible to distinguish the predicted $s'+id_{x^2-y^2}$ and $d_{x^2-y^2}+ig$ pairing symmetries.
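The symmetry-protected nodes of $d_{x^2-y^2}+ig$ can be made explicit with the lowest in-plane harmonics, $\Delta_d\propto\cos 2\phi$ ($B_{1g}$) and $\Delta_g\propto\sin 4\phi$ ($A_{2g}$), which both vanish on the zone diagonals. The sketch below is an illustrative single-band toy model with these basis functions, not the band-resolved gaps of Fig. \ref{fig4}.

```python
import numpy as np

def gap_d_plus_ig(phi, d0=1.0, g0=0.6):
    """|Delta(phi)| for d_{x^2-y^2} + i g with lowest-harmonic basis functions."""
    return np.sqrt((d0*np.cos(2*phi))**2 + (g0*np.sin(4*phi))**2)

phi = np.linspace(0, 2*np.pi, 721)
gap = gap_d_plus_ig(phi)
```

Since $\cos 2\phi$ and $\sin 4\phi$ share zeros only at $\phi=\pi/4+n\pi/2$, the combined gap vanishes exactly on the zone diagonals and nowhere else, in contrast to $s'+id_{x^2-y^2}$, whose $s'$ component lifts these nodes into minima.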
\section{Discussion and Conclusion}
Under the assumption of a two-component order parameter that breaks the time reversal symmetry, we have examined candidate pairing states $p_x+ip_y$, $d_{xz}+id_{yz}$, $s'+id_{x^2-y^2}$, and $d_{x^2-y^2}+ig$ for Sr$_2$RuO$_4$ by constructing a general model Hamiltonian with all symmetry-allowed multipole fluctuations as the pairing interaction.
The spin-triplet $p_x+ip_y$ pairing has been excluded by a series of NMR \cite{Pustogow2019,Ishida2020,Chronister2021} and PNS \cite{Petsch2020} experiments; its nodeless gap is also inconsistent with the STM experiment. In our theory, it is only supported by the relatively weaker ferromagnetic octupole fluctuations. A mixed spin singlet-triplet pairing state with line nodes might be possible, but is too complicated to realize.
The $d_{xz}+id_{yz}$ pairing can also be excluded with certainty due to the quasi-two-dimensional Fermi surface topology of Sr$_2$RuO$_4$. It features horizontal line nodes on the $k_z=0$ plane and is expected to cause an $L$-modulated intensity of the spin resonance \cite{Iida2020}. But this expected resonance peak at $L=\pm0.5$ was not observed in the latest neutron scattering experiment \cite{Jenni2021}. The $d_{xz}+id_{yz}$ pairing was mostly supported by $\mu$SR measurements that reported the splitting of superconductivity and TRSB under uniaxial pressure, as opposed to hydrostatic pressure \cite{Grinenko2021a,Grinenko2021b}. However, the splitting was questioned by specific heat measurements, which found no sign of a bulk phase transition induced by uniaxial pressure \cite{Li2021}. More accurate experiments will clarify how exactly superconductivity evolves under pressure. Alternatively, a spin-triplet odd-orbital $d_{xz}+id_{yz}$ pairing has been proposed based on momentum-dependent SOC \cite{Suh2020,Gingras2019,Clepkens2021}, but it is inconsistent with NMR \cite{Pustogow2019,Ishida2020,Chronister2021} and PNS \cite{Petsch2020} experiments.
The $d_{x^2-y^2}$-wave has the desired vertical line nodes revealed by thermal conductivity \cite{Hassinger2017} and the nodes or gap minima on the $\alpha$ and $\beta$ bands seen in STM measurements \cite{Sharma2020}. From our calculations, it is indeed supported by AFM fluctuations and can form a two-component order parameter with an accidentally degenerate $s'$ or $g$ in the presence of moderate FM and electric fluctuations. An $s'+id_{x^2-y^2}$ state has been proposed in a previous theory but was nodeless along the zone diagonal \cite{Romer2019}. By contrast, our derived $s'+id_{x^2-y^2}$ has nodes or gap minima near the 2D zone diagonal and agrees with the STM experiments. However, $s'+id_{x^2-y^2}$ seems inconsistent with the ultrasound experiment, where the observed thermodynamic jump of the shear elastic modulus $\delta c_{66}\propto\alpha_4^2$ reflects the coupling term $\alpha_4 u_{xy}(\Delta_{s'}^*\Delta_{d_{x^2-y^2}}+\Delta_{d_{x^2-y^2}}^*\Delta_{s'})$ between the strain $u_{xy}$ and the two superconducting components in the Landau free energy \cite{Benhabib2021,Ghosh2021}. But such a coupling is prohibited by symmetry because $B_{2g}(u_{xy})\otimes A_{1g}(\Delta_{s'})\otimes B_{1g}(\Delta_{d_{x^2-y^2}})=A_{2g}\not=A_{1g}$. To overcome this \cite{Benhabib2021,Ghosh2021}, an $s'+id_{xy}$ pairing has been introduced by considering nearest-neighbor Coulomb repulsion \cite{Romer2021,Bhattacharyya2021}, which is nodeless on the $\alpha$ band but has accidental gap minima $|\Delta_{\text{min}}/\Delta_{\text{max}}|\approx0.1$ on the $\gamma$ band along the azimuth $\theta=0.15\pi$ direction. It should be noted that $s'+id_{xy}$ is not realistic in our model. As shown in Fig. \ref{fig3}(b), very strong FM and electric fluctuations are required to realize $d_{xy}$.
Overall, it seems that the $d_{x^2-y^2}+ig$ (pseudospin) singlet state is the most probable candidate for the superconducting pairing in Sr$_2$RuO$_4$. It has the nodal structure required by STM and the symmetry ($B_{2g}(u_{xy})\otimes B_{1g}(\Delta_{d_{x^2-y^2}})\otimes A_{2g}(\Delta_{g})=A_{1g}$) required by the ultrasound experiment, and may also find signatures in impurity scattering \cite{Hashitani2020,Zinkl2021}, heat capacity \cite{Wagner2021}, or strain effects \cite{Kivelson2020,Yuan2021}. Our calculations show that it can arise naturally from a mixed pairing interaction of AFM, FM, and electric multipole fluctuations of reasonable magnitudes. Conversely, a spin-triplet odd-orbital $d_{x^2-y^2}+ig$ pairing may appear in theories with momentum-dependent SOC \cite{Clepkens2021}, but it has the same difficulty as the spin-triplet $p_x+ip_y$ and $d_{xz}+id_{yz}$ states discussed above. Thus, our work provides a plausible basis for understanding the pairing mechanism of Sr$_2$RuO$_4$. It also poses a challenge for future experiments to examine the role of different multipole fluctuations.
\acknowledgments
This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 11774401, No. 11974397, No. 12174429), the National Key Research and Development Program of MOST of China (Grant No. 2017YFA0303103), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB33010100), the China Postdoctoral Science Foundation (Grant No. 2020M670422), and the Youth Innovation Promotion Association of CAS.
\section{Introduction}
In this paper we propose a second-order accurate finite volume scheme for solving the following nonlinear, possibly degenerate parabolic equation: find $u: \mathbb{R}^+\times \Omega \mapsto \mathbb{R}^+$ solution to
\begin{equation}
\left\{
\begin{array}{l}
\partial_{t}u=\text{div}\, \left(f(u) \nabla V(x)+\nabla r(u)\right), \quad x \in \Omega, \quad t>0,
\\
\,
\\
u(t=0,x)=u_{0}(x),
\end{array}\right.
\label{eqgene}
\end{equation}
where $\Omega \subset \mathbb{R}^{d}$ is an open bounded domain or the whole space $ \mathbb{R}^{d}$, $u \geq 0$ is a time-dependent density, $f$ is a given function, and $r \in \mathcal{C}^{1}(\mathbb{R}_{+})$ is such that $r'(u) \geq 0$ and $r'(u)$ can vanish for certain values of $u$.
A large variety of numerical methods have been proposed for the discretization of nonlinear degenerate parabolic equations: piecewise linear finite elements \cite{Barrett1997,Ebmeyer1998,Jager1991,Nochetto2000,Nochetto1988}, cell-centered finite volume schemes \cite{Eymard2000,Eymard2002,Filbet2006}, vertex-centered finite volume schemes \cite{Ohlberger2001}, finite difference methods \cite{Karlsen2002}, mixed finite element methods \cite{Arbogast1996}, local discontinuous Galerkin finite element methods \cite{Zhang2009}, and a combined finite volume-finite element approach \cite{Eymard2006a}. Schemes based on discrete BGK models have been proposed in \cite{Aregba-Driollet2004}, as well as characteristics-based methods in \cite{Chen2003,Kavcur2002}. Other approaches are based either on a suitable splitting technique \cite{Evje1999}, or on the maximum principle and on perturbation and regularization \cite{Pop2002}. High-order schemes have also been developed in \cite{Cavalli2007,Liu2011,Kurganov2000}, which is a crucial step in getting an accurate approximation of the transient solution.
\\
In this paper our aim is to construct a second-order finite volume scheme preserving steady-states in order to obtain a satisfying long-time behavior for numerical solutions. Indeed, it has been observed in \cite{Chatard2011} that numerical schemes based on the preservation of steady-states for degenerate parabolic problems reproduce very accurately the behavior of the exact solution as time goes to infinity. To our knowledge, only a few papers investigate this large-time asymptotic of numerical solutions. L. Gosse and G. Toscani proposed in \cite{Gosse2006} a scheme based on a formulation using the pseudo-inverse of the density's repartition function for the porous media equation and the fast-diffusion equation, and analysed the long-time behavior of approximate solutions. C. Chainais-Hillairet and F. Filbet studied in \cite{Chainais-Hillairet2007} a finite volume discretization for a nonlinear drift-diffusion system and proved that the numerical solution converges to a steady-state when time goes to infinity, and F. Filbet analysed the long-time behavior of approximate solutions for a kind of reaction-diffusion model \cite{Filbet2008}. In \cite{Burger2010}, M. Burger, J. A. Carrillo and M. T. Wolfram proposed a mixed finite element method for nonlinear diffusion equations and proved convergence towards the steady-state in the case of a nonlinear Fokker-Planck equation with uniformly convex potential. Here we propose a general way of designing a scheme preserving steady-states and entropy decay for the numerous equations which can be written in the form (\ref{eqgene}).
Before describing our numerical scheme, let us emphasize that for some models described by equation (\ref{eqgene}), the large-time asymptotic has been studied using entropy/entropy-dissipation arguments, which will be the starting point of our approach. On the one hand, equation (\ref{eqgene}) with linear convection, namely $f(u)=u$, has been analysed by J.A. Carrillo, A. Jüngel, P. A. Markowich, G. Toscani and A. Unterreiter in \cite{Carrillo2001}. On the other hand, a particular case of equation (\ref{eqgene}) with nonlinear convection and linear diffusion has been studied in \cite{Carrillo2008,Carrillo2009,Toscani2011} by J. A. Carrillo, Ph. Laurençot, J. Rosado, F. Salvarani and G. Toscani. We now recall some useful results from these papers.
\subsection*{Case of a linear convection.} The paper \cite{Carrillo2001} focuses on the long time asymptotic with exponential decay rate for
\begin{equation}
\partial_{t}u=\text{div}\, \left(u \nabla V(x)+\nabla r(u)\right), \quad x \in \Omega, \quad t>0,
\label{eqgenediff}
\end{equation}
with initial condition $u(t=0,x)=u_{0}(x) \geq 0$, $u_{0} \in L^{1}(\Omega)$ and
\begin{equation}
\int_{\Omega}u_{0}(x)\, dx =:M.
\label{mass}
\end{equation}
Equation (\ref{eqgenediff}) is supplemented either by a decay condition when $|x| \rightarrow \infty$ if $\Omega= \mathbb{R}^{d}$ or by a zero out-flux condition on $\partial \Omega$ if $\Omega$ is bounded. In the following, we assume that $r:\mathbb{R}_{+} \rightarrow \mathbb{R}$ belongs to $ \mathcal{C}^{2}(\mathbb{R}_{+})$, is increasing and verifies $r(0)=0$. We define
\begin{equation}
h(s):=\int_{1}^{s}\frac{r'(\tau)}{\tau}\, d\tau, \quad s \in (0,\infty),
\label{defh}
\end{equation}
and assume that $h \in L^{1}_{loc}\left([0,\infty)\right)$. Then
\begin{equation}
H(s):=\int_{0}^{s}h(\tau)\, d\tau, \quad s \in [0, \infty),
\label{defH}
\end{equation}
is well-defined, and $H'(s)=h(s)$ for all $s \geq 0$.\\
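As a concrete illustration (our own worked example, for the nonlinearities used later in this paper): for $r(s)=s^{m}$ with $m>1$ one obtains
\begin{equation*}
h(s)=\int_{1}^{s}m\tau^{m-2}\,d\tau=\frac{m}{m-1}\left(s^{m-1}-1\right), \qquad H(s)=\frac{s^{m}-ms}{m-1},
\end{equation*}
while for linear diffusion $r(s)=s$ one recovers $h(s)=\log(s)$ and $H(s)=s\log(s)-s$.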
To analyze the large-time behavior to (\ref{eqgenediff}), stationary solutions $u^{eq}$ of (\ref{eqgenediff}) in $\Omega$ are first studied
\begin{equation*}
u^{eq}\nabla V(x)+\nabla r(u^{eq})=0, \quad \int_{\Omega}u^{eq}(x)\, dx=M.
\end{equation*}
By using the definition (\ref{defh}) of $h$, this can be written as
\begin{equation*}
u^{eq} \left( \nabla V(x)+\nabla h(u^{eq}) \right)=0, \quad \int_{\Omega}u^{eq}(x) \, dx=M,
\end{equation*}
and if $u^{eq}>0$ in $\Omega$, then one obtains
\begin{equation*}
V(x)+h \left(u^{eq}(x)\right)=C \quad \forall x \in \Omega,
\end{equation*}
for some $C \in \mathbb{R}$. By considering the entropy functional
\begin{equation}
E(u):=\int_{\Omega}\left( V(x)\,u(x)\,+\,H(u(x))\right)\, dx,
\label{defE}
\end{equation}
a function $u^{eq,M} \in L^{1}(\Omega)$ is an equilibrium solution of (\ref{eqgenediff}) if and only if it is a minimizer of $E$ in
$$
\mathcal{C}\,=\,\left\{ u \in L^{1}(\Omega), \,\, \int_{\Omega}u(x) \, dx=M \right\}.
$$
Under some regularity assumptions on $V$, existence and uniqueness of an equilibrium solution are proved. Then the long-time behavior is investigated and the exponential decay of the relative entropy
\begin{equation}
\mathcal{E}\left(t\right):=E\left(u(t)\right)-E(u^{eq,M})
\label{defRE}
\end{equation}
is shown, using the exponential decay of the entropy dissipation
\begin{equation}
\mathcal{I}\left(t\right)\,:=\,-\frac{d\mathcal{E}(t)}{dt}\,=\,\int_{\Omega}u(t,x)\left| \nabla \left(V(x)\,+\,h(u(t,x))\right)\right|^{2}\,dx.
\label{defI}
\end{equation}
Finally using a generalized Csiszar-Kullback inequality, it is proved that the solution $u(t,x)$ of (\ref{eqgenediff}) with $r(s)=\log(s)$ or $r(s)=s^{m}$, $m \geq 0$, converges to the equilibrium $u^{eq,M}(x)$ as $t \rightarrow \infty$ at an exponential rate.
Equation (\ref{eqgenediff}) includes many well-known equations governing physical phenomena as porous media or drift-diffusion models for semiconductors.
\begin{example}[the porous media equation] In the case $V(x)=|x|^2/2$ and $r(u)=u^{m}$, with $m>1$, equation (\ref{eqgenediff}) is the porous media equation, which describes the flow of a gas through a porous medium. J. A. Carrillo and G. Toscani have proved in \cite{Carrillo2000} that the unique stationary solution of the porous media equation is given by the Barenblatt-Pattle formula
\begin{equation}
u^{eq}(x)=\left(C_{1}-\frac{m -1}{2m}|x|^{2}\right)_{+}^{1/(m -1)},
\label{barenblatt}
\end{equation}
where $C_{1}$ is a constant such that $u^{eq}$ has the same mass as the initial data $u_{0}$. Moreover, the convergence of the solution $u(t,x)$ of the porous media equation to the Barenblatt-Pattle solution $u^{eq}(x)$ as $t \rightarrow \infty$ has been proved in \cite{Carrillo2000}, using the entropy method.
\end{example}
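As a sanity check of (\ref{barenblatt}) (a sketch of ours, not part of the paper), the constant $C_{1}$ can be computed from the mass constraint by bisection, here in one space dimension with a simple midpoint quadrature; all function names and the truncated integration interval are our own choices:

```python
def barenblatt(x, C1, m):
    # Barenblatt-Pattle profile u_eq(x) = (C1 - (m-1)/(2m) |x|^2)_+^{1/(m-1)} in 1D
    s = C1 - (m - 1.0) / (2.0 * m) * x * x
    return max(s, 0.0) ** (1.0 / (m - 1.0))

def mass(C1, m, R=5.0, n=4000):
    # midpoint-rule approximation of the total mass over [-R, R]
    dx = 2.0 * R / n
    return dx * sum(barenblatt(-R + (i + 0.5) * dx, C1, m) for i in range(n))

def find_C1(M, m, lo=1e-8, hi=100.0, iters=60):
    # bisection on C1: the mass is increasing with respect to C1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mass(mid, m) < M:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $m=2$ and unit mass this recovers $C_{1}=(3/8)^{2/3}$, since the profile $(C_{1}-x^{2}/4)_{+}$ then has total mass $\tfrac{8}{3}C_{1}^{3/2}$.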
\begin{example}[the drift-diffusion model for semiconductors] The drift-diffusion model can also be interpreted in the formalism of (\ref{eqgenediff}). It is written as
\begin{equation}
\left\{\begin{array}{lcl} \partial_{t}N-\nabla \cdot (\nabla r(N)-N\nabla V)=0 ,
\\
\,
\\
\partial_{t}P-\nabla \cdot(\nabla r(P)+P\nabla V)=0,
\\
\,
\\
\Delta V=N-P-C,\end{array}\right.
\label{DD}
\end{equation}
where the unknowns are $N$ the electron density, $P$ the hole density and $V$ the electrostatic potential, and $C$ is the prescribed doping profile. The two continuity equations on the densities $N$ and $P$ correspond to (\ref{eqgenediff}) with $r(s)=s^{\gamma}$ the pressure function. These equations are supplemented with initial conditions $N_{0}(x)$ and $P_{0}(x)$ and physically motivated boundary conditions: Dirichlet boundary conditions $ \overline{N}$, $ \overline{P}$ and $ \overline{V}$ on ohmic contacts $\Gamma^{D}$ and homogeneous Neumann boundary conditions on insulating boundary segments $\Gamma^{N}$.\\
The stationary drift-diffusion system admits a solution $(N^{eq},P^{eq},V^{eq})$ (see \cite{Markowich1993}), which is unique if in addition:
\begin{equation}
h(N^{eq})-V^{eq} \left\{ \begin{array}{lll} = \alpha_{N} & \text{ if } & N^{eq}>0 \\ \geq \alpha_{N} & \text{ if } & N^{eq}=0\end{array}\right., \quad h(P^{eq})+V^{eq} \left\{ \begin{array}{lll} = \alpha_{P} & \text{ if } & P^{eq}>0 \\ \geq \alpha_{P} & \text{ if } & P^{eq}=0\end{array}\right.,
\label{compatibility1}
\end{equation}
holds, and if the Dirichlet boundary conditions satisfy (\ref{compatibility1}) and the compatibility condition (if $ \overline{N}\,\overline{P}>0$)
\begin{equation}
h(\overline{N})+h(\overline{P})=\alpha_{N}+\alpha_{P}.
\label{compatibility2}
\end{equation}
In this case the thermal equilibrium $(N^{eq},P^{eq},V^{eq})$ is defined by
\begin{equation}
\left\{ \begin{array}{rcl} \Delta V^{eq}=g\left(\alpha_{N}+V^{eq}\right)-g\left(\alpha_{P}-V^{eq}\right)-C & & \text{on } \Omega,
\\
\,
\\ N^{eq}=g\left(\alpha_{N}+V^{eq}\right), \,\ P^{eq}=g\left(\alpha_{P}-V^{eq}\right) & & \text{on } \Omega,\end{array}\right.
\label{eqthermiqueDD}
\end{equation}
where $g$ is the generalized inverse of $h$, namely
\begin{equation}
g(s)=\left\{\begin{array}{lcl} h^{-1}(s) & \text{ if } & h(0_{+})<s<\infty, \\ 0 & \text{ if } & s \leq h(0_{+}). \end{array}\right.
\label{defg}
\end{equation}
In the linear case $r(u)=u$, it has been proved by H. Gajewski and K. Gärtner in \cite{Gajewski1996} that the solution to the transient system (\ref{DD}) converges to the thermal equilibrium state as $t \rightarrow \infty$ if the boundary conditions are in thermal equilibrium. A. Jüngel extended this result to a degenerate model with nonlinear diffusion in \cite{Juengel1995}. In both cases the key point of the proof is an energy estimate with control of the energy dissipation.
\end{example}
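To make (\ref{defg}) concrete, assume the pressure law $r(s)=s^{\gamma}$ with $\gamma>1$ (a choice of ours for illustration); then $h(s)=\frac{\gamma}{\gamma-1}(s^{\gamma-1}-1)$, $h(0_{+})=-\frac{\gamma}{\gamma-1}$, and the generalized inverse $g$ can be written explicitly:

```python
def h(s, gamma):
    # enthalpy h(s) = int_1^s r'(t)/t dt for r(s) = s^gamma, gamma > 1
    return gamma / (gamma - 1.0) * (s ** (gamma - 1.0) - 1.0)

def g(s, gamma):
    # generalized inverse of h: g(s) = h^{-1}(s) for s > h(0+), and 0 otherwise
    h0 = -gamma / (gamma - 1.0)  # value of h(0+)
    if s <= h0:
        return 0.0
    return ((gamma - 1.0) / gamma * s + 1.0) ** (1.0 / (gamma - 1.0))
```

By construction $g(h(s,\gamma),\gamma)=s$ for $s>0$, and $g$ truncates to zero below $h(0_{+})$.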
\subsection*{Case of a nonlinear convection.}
In \cite{Carrillo2008,Carrillo2009,Toscani2011}, a nonlinear Fokker-Planck type equation modelling the relaxation of fermion and boson gases is studied. This equation corresponds to (\ref{eqgene}) with linear diffusion and nonlinear convection:
\begin{equation}
\partial_{t}u=\text{div} \, \left(xu(1+ku)+\nabla u\right), \quad x \in \mathbb{R}^{d}, \quad t>0,
\label{bosonfermion}
\end{equation}
with $k=1$ in the boson case and $k=-1$ in the fermion case. The long-time asymptotic of this model has been studied in 1D for both cases \cite{Carrillo2008}, in any dimension for fermions \cite{Carrillo2009} and in 3D for bosons \cite{Toscani2011}. The stationary solution of (\ref{bosonfermion}) is given by the Fermi-Dirac ($k=-1)$ and Bose-Einstein ($k=1$) distributions:
\begin{equation}
u^{eq}(x)=\frac{1}{\beta e^{\frac{|x|^{2}}{2}}-k},
\label{eqbosonfermion}
\end{equation}
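One can verify directly (a one-line check of ours) that (\ref{eqbosonfermion}) makes the flux in (\ref{bosonfermion}) vanish: writing $w=\beta e^{|x|^{2}/2}$, we have $u^{eq}=1/(w-k)$ and $1+ku^{eq}=w/(w-k)$, so that
\begin{equation*}
\nabla u^{eq}=-\frac{xw}{(w-k)^{2}}=-x\,u^{eq}\left(1+ku^{eq}\right),
\end{equation*}
and hence $xu^{eq}(1+ku^{eq})+\nabla u^{eq}=0$.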
where $ \beta \geq 0$ is such that $u^{eq}$ has the same mass as the initial data $u_{0}$. The entropy functional is given by
\begin{equation}
E(u):=\int_{\mathbb{R}^{d}}\left(\frac{|x|^{2}}{2}u+u\log (u)-k(1+ku)\log (1+ku)\right)\, dx,
\label{Ebosonfermion}
\end{equation}
and the entropy dissipation is defined by
\begin{equation}
\mathcal{I}(t)\,:=-\frac{d\mathcal{E}(t)}{dt}\,=\,\int_{\mathbb{R}^{d}}u(1+ku)\left| \nabla \left(\frac{|x|^{2}}{2}+\log\left(\frac{u}{1+ku}\right)\right)\right|^{2}\, dx.
\label{Ibosonfermion}
\end{equation}
Then decay rates towards equilibrium are given in \cite{Carrillo2008,Carrillo2009} for the fermion case in any dimension and for the 1D boson case by relating the entropy and its dissipation. As in the case of linear diffusion, the key point of the proof is an entropy estimate with control of its dissipation. \\
Concerning the 3D boson case, it is proved in \cite{Toscani2011} that for sufficiently large initial mass the solution blows up in finite time.
As explained above, it has been proved by entropy/entropy-dissipation techniques that the solution to (\ref{eqgene}) converges to a steady-state as time goes to infinity, often with an exponential decay rate. Our aim is to propose a numerical scheme for these problems for which we can obtain a discrete entropy estimate as in the continuous case. In \cite{Arnold2003,Carrillo2007,Burger2010} temporal semi-discretizations have been proposed and semi-discrete entropy estimates have been proved. However, when the problem is spatially discretized, a saturation of the entropy and its dissipation may appear, due to the spatial discretization error. This emphasizes the importance of considering spatial discretization techniques which preserve the steady-states and the entropy dissipation. This point of view has already been adopted in \cite{Chainais-Hillairet2007,Chatard2011}, but neither scheme provides fully satisfactory results when the equation degenerates. Indeed, both schemes degenerate into the upwind flux when the diffusion vanishes and are then only first-order accurate in space. Thus we propose in this paper a finite volume scheme for nonlinear, possibly degenerate, parabolic equations. We focus on the spatial discretization, with a twofold objective. On the one hand, we require the preservation of steady-states in order to obtain a satisfying long-time behavior of the approximate solution. On the other hand, the proposed scheme remains valid and second-order accurate in space even in the degenerate case. The main idea of our new scheme is to discretize the convective and diffusive parts of equation (\ref{eqgene}) together, so as to obtain a flux which preserves equilibria, and to use a slope-limiter method to get second-order accuracy even in the degenerate case.
The plan of the paper is as follows. In Section 2, we construct the finite volume scheme. We first focus on the case of a linear diffusion (\ref{eqgenediff}). Then we extend this construction to the general case (\ref{eqgene}). In Section 3 we give some basic properties of the scheme and a semidiscrete entropy estimate for the case of a linear diffusion (\ref{eqgenediff}). We end in Section 4 by presenting some numerical results. We first verify experimentally the second order accuracy in space of our scheme, even in the degenerate case. Then we focus on the long-time behavior. The scheme is applied to the physical models introduced above and the numerical results confirm its efficiency to preserve the large-time asymptotics. Finally we propose a test case with both nonlinear convection and diffusion.
\section{Presentation of the numerical scheme}
In this section we present our new finite volume scheme for (\ref{eqgene}). For simplicity, we consider the problem in one space dimension. It is straightforward to generalize this construction to Cartesian meshes in the multidimensional case.\\
In a one-dimensional setting, $\Omega=(a,b)$ is an interval of $ \mathbb{R}$. We consider a mesh for the domain $(a,b)$, which is not necessarily uniform $ \textit{i.e.}$ a family of $N_{x}$ control volumes $\left(K_{i}\right)_{i=1,...,N_{x}}$ such that $K_{i}=\left]x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}\right[$ with $\displaystyle{ x_{i}=\frac{x_{i-\frac{1}{2}}+x_{i+\frac{1}{2}}}{2} }$ and
\begin{equation*}
a=x_{\frac{1}{2}}<x_{1}<x_{\frac{3}{2}}<...<x_{i-\frac{1}{2}}<x_{i}<x_{i+\frac{1}{2}}<...<x_{N_{x}}<x_{N_{x}+\frac{1}{2}}=b.
\end{equation*}
Let us set
\begin{eqnarray}
& &\text{m}(K_{i})=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}, \quad \text{ for } 1 \leq i \leq N_{x}.
\end{eqnarray}
Let $ \Delta t$ be the time step. We set $t^{n}=n \Delta t$. A time discretization of $(0,T)$ is then given by the integer value $N_{T}=E(T/\Delta t)$ and by the increasing sequence of $(t^{n})_{0\leq n \leq N_{T}}$.\\
First of all, the initial condition is discretized on each cell $K_{i}$ by:
\begin{equation}
U_{i}^{0}=\frac{1}{\text{m}(K_{i})}\int_{K_{i}}u_{0}(x)\, dx, \quad i=1,\dots,N_{x}.
\label{CIdis}
\end{equation}
The finite volume scheme is obtained by integrating the equation (\ref{eqgene}) over each control volume $K_{i}$ and over each time step. Concerning the time discretization, we can choose any explicit method (forward Euler, Runge-Kutta,...). Since in this paper we are interested in the spatial discretization, we will only consider a forward Euler method afterwards. Let us now focus on the spatial discretization.\\
We denote by $U_{i}(t)$ an approximation of the mean value of $u$ over the cell $K_{i}$ at time $t$. By integrating the equation (\ref{eqgene}) on $K_{i}$, we obtain the semi-discrete numerical scheme:
\begin{equation}
\text{m}(K_{i}) \frac{d}{dt}U_{i} + \mathcal{F}_{i+\frac{1}{2}}-\mathcal{F}_{i- \frac{1}{2}}\,=\,0,
\label{semidiscrete}
\end{equation}
where $\mathcal{F}_{i+\frac{1}{2}}$ is an approximation of the flux $-\left[f(u)\partial_{x}V\,+\,\partial_{x}r(u)\right]$ at the interface $x_{i+\frac{1}{2}}$ which remains to be defined.
\subsection*{Case of a linear convection ($f(u)=u$).} To explain our approach we first define the numerical flux for equation (\ref{eqgenediff}). The main idea is to discretize together the convective and the diffusive parts. To this end, we write $\left[u\partial_{x}V+\partial_{x}r(u)\right]$ as $u\left[\partial_{x}\left(V+h(u)\right)\right]$, where $h$ is defined by (\ref{defh}). Then we will consider $-\partial_{x}\left(V+h(u)\right)$ as a velocity and denote by $A_{i+\frac{1}{2}}$ an approximation of this velocity at the interface $x_{i+\frac{1}{2}}$:
\begin{equation}
A_{i+\frac{1}{2}}=-dV_{i+\frac{1}{2}}-dh(U)_{i+\frac{1}{2}},
\label{defAdiff}
\end{equation}
where $dV_{i+\frac{1}{2}}$ and $dh(U)_{i+\frac{1}{2}}$ are centered approximations of $\partial_{x}V$ and $\partial_{x}h(u)$ respectively, namely
\begin{equation*}
dV_{i+\frac{1}{2}}=\frac{V(x_{i+1})-V(x_{i})}{\text{d}(x_{i},x_{i+1})}, \quad dh(U)_{i+\frac{1}{2}}=\frac{h(U_{i+1})-h(U_{i})}{\text{d}(x_{i},x_{i+1})}.
\end{equation*}
Now we apply the standard upwind method and then define our new numerical flux, called fully upwind flux, as
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}}=F(U_{i},U_{i+1})=A_{i+\frac{1}{2}}^{+}U_{i}-A_{i+\frac{1}{2}}^{-}U_{i+1},
\label{defF1}
\end{equation}
where $x^{+}=\max(0,x)$ and $x^{-}=\max(0,-x)$. This method is only first-order accurate. To obtain second-order accuracy, we replace in (\ref{defF1}) $U_{i}$ and $U_{i+1}$ by $U_{i+\frac{1}{2},-}$ and $U_{i+\frac{1}{2},+}$ respectively, which are reconstructions of $u$ at the interface defined by:
\begin{equation}
\left\{\begin{array}{l}
U_{i+\frac{1}{2},-} \,=\, U_{i}+\frac{1}{2}\phi \left(\theta_{i}\right)\left(U_{i+1}-U_{i}\right),
\\
\,
\\
U_{i+\frac{1}{2},+} \;=\, U_{i+1}-\frac{1}{2}\phi \left(\theta_{i+1}\right)\left(U_{i+2}-U_{i+1}\right),
\end{array}\right.
\label{defudem}
\end{equation}
with
\begin{equation*}
\theta_{i}=\frac{U_{i}-U_{i-1}}{U_{i+1}-U_{i}}
\end{equation*}
and $\phi$ is a slope-limiter function (setting $\phi=0$ gives the classical upwind flux). From now on we will consider the second-order fully upwind scheme defined with the van Leer limiter:
\begin{equation}
\phi(\theta)=\frac{\theta+|\theta|}{1+|\theta|}.
\label{vanleer}
\end{equation}
\subsection*{General case.} We now consider the general case where both diffusion and convection are nonlinear in (\ref{eqgene}). Following the same idea as above, we write
\begin{equation}
f(u)\partial_{x}V+\partial_{x}r(u)\,\,=\,\,\partial_{x}\left(V+\tilde{h}(u)\right)\,f(u)
\label{fluxgene}
\end{equation}
where $ \tilde{h}(u)$ is such that $ \tilde{h}'(u)f(u)=r'(u)$. Then we define the numerical flux as a local Lax-Friedrichs flux
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}}=\frac{{A}_{i+\frac{1}{2}}}{2}\left(f(U_{i})+f(U_{i+1})\right)-\frac{\left|{A}_{i+\frac{1}{2}}\right|\alpha_{i+\frac{1}{2}}}{2}\left(U_{i+1}-U_{i}\right),
\label{defFgene}
\end{equation}
where
\begin{equation}
{A}_{i+\frac{1}{2}}=-dV_{i+\frac{1}{2}}-d\tilde{h}(U)_{i+\frac{1}{2}},
\label{defAgene}
\end{equation}
and
\begin{equation}
\alpha_{i+\frac{1}{2}}=\max \left( \left|f'(u)\right|\right) \text{ over all }u \text{ between } U_{i} \text{ and } U_{i+1}.
\label{defalpha}
\end{equation}
As above, we replace $U_{i}$ and $U_{i+1}$ in (\ref{defFgene}) by reconstructions $U_{i+\frac{1}{2},-}$ and $U_{i+\frac{1}{2},+}$ defined by (\ref{defudem}) to obtain a second-order scheme.\\
We can now summarize our new numerical flux by:
\begin{equation}
\left\{\begin{array}{lll}
\displaystyle{\mathcal{F}_{i+\frac{1}{2}}=\frac{{A}_{i+\frac{1}{2}}}{2}\left(f(U_{i+\frac{1}{2},-})+f(U_{i+\frac{1}{2},+})\right)-\frac{\left|{A}_{i+\frac{1}{2}}\right|\alpha_{i+\frac{1}{2}}}{2}\left(U_{i+\frac{1}{2},+}-U_{i+\frac{1}{2},-}\right),}& &
\\
\displaystyle{{A}_{i+\frac{1}{2}}=-dV_{i+\frac{1}{2}}-d\tilde{h}(U)_{i+\frac{1}{2}}, \vphantom{\frac{\tilde{A}_{i+\frac{1}{2}}}{2}}}& &\\
\displaystyle{\alpha_{i+\frac{1}{2}}=\max \left( \left|f'(u)\right|\right) \text{ over all }u \text{ between } U_{i} \text{ and } U_{i+1}, \vphantom{\frac{\tilde{A}_{i+\frac{1}{2}}}{2}}}& & \\
\displaystyle{U_{i+\frac{1}{2},-} = U_{i}+\frac{1}{2}\phi \left(\theta_{i}\right)\left(U_{i+1}-U_{i}\right),\vphantom{\frac{\tilde{A}_{i+\frac{1}{2}}}{2}}}& & \\
\displaystyle{U_{i+\frac{1}{2},+} = U_{i+1}-\frac{1}{2}\phi \left(\theta_{i+1}\right)\left(U_{i+2}-U_{i+1}\right), \vphantom{\frac{\tilde{A}_{i+\frac{1}{2}}}{2}}} & &\end{array}\right.
\label{bigdef}
\end{equation}
where the limiter is either
\begin{equation}
\phi(\theta)=0,
\label{order1}
\end{equation}
which yields a first-order scheme, or the van Leer limiter
\begin{equation}
\phi(\theta)=\frac{\theta+|\theta|}{1+|\theta|},
\label{order2}
\end{equation}
which yields a second-order scheme.
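In the same spirit, the local Lax-Friedrichs flux (\ref{defFgene}) is straightforward to implement. The sketch below is ours (names invented); the second helper uses the fact that for an $f$ with monotone derivative, such as $f(u)=u(1+ku)$, the maximum in (\ref{defalpha}) is attained at an endpoint. In the linear case $f(u)=u$ with $\alpha=1$, the flux reduces to the upwind flux $A^{+}U_{i}-A^{-}U_{i+1}$:

```python
def llf_flux(ui, uj, A, f, alpha):
    # local Lax-Friedrichs flux F_{i+1/2} (first-order version: cell values at the interface)
    return 0.5 * A * (f(ui) + f(uj)) - 0.5 * abs(A) * alpha * (uj - ui)

def alpha_endpoint(ui, uj, fprime):
    # for f with monotone f', the max of |f'(u)| for u between ui and uj is at an endpoint
    return max(abs(fprime(ui)), abs(fprime(uj)))
```

For second-order accuracy, `ui` and `uj` are simply replaced by the limited interface reconstructions $U_{i+\frac{1}{2},-}$ and $U_{i+\frac{1}{2},+}$ of (\ref{defudem}).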
\begin{rem}[Generalization to multidimensional case]
It is straightforward to define the scheme for Cartesian meshes in multidimensional case: the 1D formula can be used as it is in any of the Cartesian directions. However, the construction of the scheme on unstructured meshes is more complicated. More precisely, it is easy to define the first order scheme on such grids, but the difficulty is to obtain high-order accuracy. As in the one dimensional case, the idea is to replace the first-order flux $F(U_{i},U_{j})$, where $U_{i}$, $U_{j}$ are the constant values on each side of an edge $\Gamma_{ij}=K_{i} \cap K_{j}$, by $F(U_{ij},U_{ji})$, where $U_{ij}$, $U_{ji}$ are second-order approximations of the solution on each side of the edge $ \Gamma_{ij}$. More precisely, we need to obtain piecewise linear functions on each triangle instead of piecewise constant functions. For more details concerning these questions, see for example \cite{Durlofsky1992,Godlewski1996} and the references therein.
\end{rem}
\section{Properties of the scheme}
\subsection{The semi-discrete scheme}
In this part, we study the semi-discrete scheme (\ref{semidiscrete})-(\ref{bigdef})-(\ref{order2}) and consider the equation (\ref{eqgenediff}) on a bounded domain with homogeneous Neumann boundary conditions. We assume that $r \in \mathcal{C}^{2}(\mathbb{R}_{+})$ is strictly increasing and $h$ defined by (\ref{defh}) is in $L^{1}_{loc}\left([0,\infty)\right)$. Then $H$ given by (\ref{defH}) is well-defined, strictly convex and verifies $H'(s)=h(s)$ for all $s\geq 0$.\\
We denote by $\left(U_{i}^{eq}\right)_{i=1,...,N_{x}}$ an approximation of the equilibrium solution $u^{eq}$. This approximation verifies
\begin{equation}
dh\left(U^{eq}\right)_{i+\frac{1}{2}}+dV_{i+\frac{1}{2}}=0 \quad \forall i=0,...,N_{x},
\label{eqapp}
\end{equation}
and
\begin{equation}
\sum_{i=1}^{N_{x}} \Delta x_{i}U_{i}^{eq}=\sum_{i=1}^{N_{x}}\Delta x_{i}U_{i}^{0}=:\overline{M}.
\label{discretemass}
\end{equation}
A semi-discrete version of the relative entropy $\mathcal{E}$ defined by (\ref{defRE}) is given by
\begin{equation}
\mathcal{E}_\Delta(t) \,:=\,\sum_{i=1}^{N_{x}}\Delta x_{i}\big(H\left(U_{i}(t)\right)-H\left(U_{i}^{eq}\right)-h\left(U_{i}^{eq}\right)\left(U_{i}(t)-U_{i}^{eq}\right)\big).
\label{Esd}
\end{equation}
We also introduce the semi discrete version of the entropy dissipation
\begin{equation}
\mathcal{I}_\Delta(t)\,:=\,\sum_{i=0}^{N_{x}}\Delta x_{i+\frac{1}{2}}\left|A_{i+\frac{1}{2}}\right|^{2}\min\left(U_{i+\frac{1}{2},-}(t),U_{i+\frac{1}{2},+}(t)\right).
\label{Isd}
\end{equation}
\begin{prop}
Assume that the initial data $U_i(0)$ is nonnegative. Then, the finite volume scheme (\ref{semidiscrete})-(\ref{bigdef})-(\ref{order2}) for equation (\ref{eqgenediff}) satisfies
\begin{itemize}
\item[(i)] the preservation of the nonnegativity of $U_{i}(t)$,
\item[(ii)] the preservation of the equilibrium,
\item[(iii)] the entropy estimate: for $0<t_{1}\leq t_{2}<\infty$,
\begin{equation}
0 \,\leq\, {\mathcal{E}}_{\Delta}(t_{2})\,+\,\int_{t_{1}}^{t_{2}}{\mathcal{I}}_{\Delta}(t) \,dt \,\leq \,{\mathcal{E}}_{\Delta}(t_{1}).
\label{energyestimate}
\end{equation}
\end{itemize}
\end{prop}
\begin{proof}
To prove the preservation of nonnegativity, we need to check that
\begin{equation}
F \left(U_{i+\frac{1}{2},-},U_{i+\frac{1}{2},+}\right)-F \left(U_{i-\frac{1}{2},-},U_{i-\frac{1}{2},+}\right) \leq 0
\label{nonnegativity}
\end{equation}
whenever $U_{i}=0$.\\
When $U_{i}=0$, we have $U_{i} \leq U_{i+1}$ and $U_{i} \leq U_{i-1}$, and then $ \theta_{i} \leq 0$, which gives $\phi(\theta_{i})=0$ and finally
\begin{equation*}
U_{i+\frac{1}{2},-}=U_{i-\frac{1}{2},+}=U_{i}=0.
\end{equation*}
Then we get
\begin{equation*}
F \left(U_{i+\frac{1}{2},-},U_{i+\frac{1}{2},+}\right)-F \left(U_{i-\frac{1}{2},-},U_{i-\frac{1}{2},+}\right) = -A_{i+\frac{1}{2}}^{-}U_{i+\frac{1}{2},+}-A_{i-\frac{1}{2}}^{+}U_{i-\frac{1}{2},-}.
\end{equation*}
Moreover, $U_{i-\frac{1}{2},-}$ is given by
$$
U_{i-\frac{1}{2},-} \,=\,\left(1-\frac{\phi(\theta_{i-1})}{2}\right)U_{i-1},
$$
which is nonnegative since $\phi(\theta) \leq 2$ for all $\theta$.
On the other hand, we deal with $U_{i+\frac{1}{2},+}$, and get that either $ \theta_{i+1} \leq 0$, then $U_{i+\frac{1}{2},+}=U_{i+1}\geq 0$, or we have $ \theta_{i+1}>0$, that is $U_{i+2} \geq U_{i+1}$ and since $\phi(\theta) \leq 2 \theta$ for all $\theta \geq 0$, we get
\begin{equation*}
U_{i+\frac{1}{2},+}\geq U_{i+1}-\theta_{i+1} \left(U_{i+2}-U_{i+1}\right)=U_{i+1}-\left(U_{i+1}-U_{i}\right)=0.
\end{equation*}
We conclude that (\ref{nonnegativity}) always holds when $U_{i}=0$, which gives $(i)$.\\
The part $(ii)$ is clear by construction: at the equilibrium, we have $dh(U)_{i+\frac{1}{2}}+dV_{i+\frac{1}{2}}=0$, which is exactly $A_{i+\frac{1}{2}}=0$ and then $ \mathcal{F}_{i+\frac{1}{2}}=0$.\\
By definition (\ref{Esd}) of $\mathcal{E}_{\Delta}(t)$ and since $H'(s)=h(s)$ for all $s \geq 0$, we have
\begin{equation*}
\frac{d{\mathcal{E}_\Delta}}{dt}(t)=\sum_{i=1}^{N_{x}}\Delta x_{i}\left(h(U_{i}(t))-h(U_{i}^{eq})\right)\frac{dU_{i}}{dt}(t).
\end{equation*}
Using the numerical scheme (\ref{semidiscrete}), we get
\begin{equation*}
\frac{d {\mathcal{E}_\Delta}}{dt}(t)=-\sum_{i=1}^{N_{x}}\left(h(U_{i}(t))-h(U_{i}^{eq})\right)\left(\mathcal{F}_{i+\frac{1}{2}}-\mathcal{F}_{i-\frac{1}{2}}\right),
\end{equation*}
and then a discrete integration by parts yields (using the homogeneous Neumann boundary conditions)
\begin{equation*}
\frac{d{\mathcal{E}}_\Delta}{dt}(t)=\sum_{i=0}^{N_{x}}\Delta x_{i+\frac{1}{2}}\left(dh(U(t))_{i+\frac{1}{2}}-dh(U^{eq})_{i+\frac{1}{2}}\right)\mathcal{F}_{i+\frac{1}{2}}.
\end{equation*}
Since by (\ref{eqapp}) we have $dh\left(U_{i}^{eq}\right)_{i+\frac{1}{2}}=-dV_{i+\frac{1}{2}}$, we obtain
\begin{eqnarray*}
\frac{d\mathcal{E}_\Delta}{dt}(t) & = & -\sum_{i=0}^{N_{x}}\Delta x_{i+\frac{1}{2}}A_{i+\frac{1}{2}}\left(A_{i+\frac{1}{2}}^{+}U_{i+\frac{1}{2},-}(t)-A_{i+\frac{1}{2}}^{-}U_{i+\frac{1}{2},+}(t)\right)\\
& \leq & -\sum_{i=0}^{N_{x}}\Delta x_{i+\frac{1}{2}}\left|A_{i+\frac{1}{2}}\right|^{2}\min\left(U_{i+\frac{1}{2},-}(t),U_{i+\frac{1}{2},+}(t)\right).
\end{eqnarray*}
Finally we get $(iii)$ by integrating between $t_{1}$ and $t_{2}$.
\end{proof}
\subsection{The fully-discrete scheme}
In this part we consider the fully-discrete scheme obtained by using the forward Euler method. We denote by $U^{n}_{i}$ an approximation of the mean value of $u$ over the cell $K_{i}$ at time $t^{n}=n \Delta t$. The fully-discrete scheme is given by:
\begin{equation}
\text{m}(K_{i})\,\frac{U_{i}^{n+1}-U_{i}^{n}}{\Delta t}\,+\,\mathcal{F}_{i+\frac{1}{2}}^{n}-\mathcal{F}_{i-\frac{1}{2}}^{n}\,=\,0,
\label{fullydiscrete}
\end{equation}
where the numerical flux $ \mathcal{F}_{i+\frac{1}{2}}$ is defined by (\ref{bigdef})-(\ref{order2}).
\begin{prop}
For $n \geq 0$, assume that $U_{i}^{n} \geq 0$ for all $i=1,...,N_{x}$. Then under the CFL condition
\begin{equation}
\Delta t\,\max_{i}\left|V(x_{i+1})-V(x_{i})-h(U_{i+1}^{n})+h(U_{i}^{n})\right| \,\,\leq \,\,\frac{1}{2}\,{\min_{i}}\,\Delta x_{i}^{2},
\label{CFL}
\end{equation}
the fully-discrete first-order scheme (\ref{bigdef})-(\ref{order1}) and (\ref{fullydiscrete}) for equation (\ref{eqgenediff}) preserves the nonnegativity of $U_{i}$, which means that $U_{i}^{n+1} \geq 0$ for all $i=1,...,N_{x}$, as well as the steady states.
\end{prop}
\begin{proof}
Using the definition (\ref{fullydiscrete})-(\ref{bigdef})-(\ref{order1}) of the fully-discrete first-order scheme, we get for all $i=1,...,N_{x}$
\begin{equation*}
U_{i}^{n+1}=\left(1-\frac{\Delta t}{\Delta x_{i}}\left(\left(A_{i+\frac{1}{2}}^{n}\right)^{+}+\left(A_{i-\frac{1}{2}}^{n}\right)^{-}\right)\right)U_{i}^{n}+\frac{\Delta t}{\Delta x_{i}}\left(A_{i+\frac{1}{2}}^{n}\right)^{-}U_{i+1}^{n}+\frac{\Delta t}{\Delta x_{i}}\left(A_{i-\frac{1}{2}}^{n}\right)^{+}U_{i-1}^{n}.
\end{equation*}
Thus we deduce that $U_{i}^{n+1} \geq 0$ as soon as $\displaystyle{\frac{\Delta t}{\Delta x_{i}}\left(\left(A_{i+\frac{1}{2}}^{n}\right)^{+}+\left(A_{i-\frac{1}{2}}^{n}\right)^{-}\right) \leq 1}$, which is necessarily the case from (\ref{CFL}), using the definition of $A_{i+\frac{1}{2}}^{n}$.
\end{proof}
\begin{rem}
This result is not surprising since the stability condition for an explicit discretization of a parabolic equation requires the time step to be bounded by the square of the space step.
\end{rem}
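To illustrate the fully-discrete scheme, the following minimal Python sketch performs one forward Euler step of the first-order scheme on a uniform grid with zero-flux boundaries, together with a CFL-type time step in the spirit of (\ref{CFL}); the grid, the potential $V$ and the function $h$ used in any accompanying check are placeholder assumptions, not the settings of the experiments below.

```python
def fully_upwind_step(U, x, dt, V, h):
    """One forward Euler step of the first-order fully upwind scheme for
    du/dt = d/dx( u d/dx( h(u) + V ) ) on a uniform grid, with zero-flux
    boundaries (illustrative sketch only)."""
    n = len(U)
    dx = x[1] - x[0]
    # interface velocities A_{i+1/2} = -( dV_{i+1/2} + dh(U)_{i+1/2} )
    A = [-(V(x[i + 1]) - V(x[i]) + h(U[i + 1]) - h(U[i])) / dx
         for i in range(n - 1)]
    # upwind flux F_{i+1/2} = A^+ U_i - A^- U_{i+1}, zero at the boundaries
    F = [0.0] + [max(a, 0.0) * U[i] - max(-a, 0.0) * U[i + 1]
                 for i, a in enumerate(A)] + [0.0]
    return [U[i] - dt / dx * (F[i + 1] - F[i]) for i in range(n)]

def cfl_dt(U, x, V, h):
    """CFL-type time step making the explicit update a convex combination
    of nonnegative values (cf. the proposition above)."""
    dx = x[1] - x[0]
    worst = max(abs(V(x[i + 1]) - V(x[i]) + h(U[i + 1]) - h(U[i]))
                for i in range(len(U) - 1))
    return 0.5 * dx * dx / max(worst, 1e-14)
```

Under this restriction the update is a convex combination of the $U_{i}^{n}$, so nonnegativity is preserved, and the total discrete mass is conserved exactly by the telescoping flux differences.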
\section{Numerical simulations}
In this section, we present several numerical results obtained with our new fully upwind flux. In all the numerical experiments, the time discretization is given by the forward Euler method. We first study the spatial order of convergence of the scheme for linear convection in degenerate cases. Then we apply it to the physical models presented in the introduction: the porous media equation, the drift-diffusion system for semiconductors and the nonlinear Fokker-Planck equation for bosons and fermions. The results underline the efficiency of the scheme in preserving the long-time behavior of the solutions. Finally we apply the scheme to a fully nonlinear problem: the Buckley-Leverett equation.\\
Below we compare the finite volume schemes (\ref{fullydiscrete}) defined with the following numerical fluxes:
\begin{itemize}
\item \textbf{The first-order fully upwind flux}, given by
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}}=\frac{{A}_{i+\frac{1}{2}}}{2}\left(f(U_{i})+f(U_{i+1})\right)-\frac{\left|{A}_{i+\frac{1}{2}}\right|\alpha_{i+\frac{1}{2}}}{2}\left(U_{i+1}-U_{i}\right),
\label{fluxFU1}
\tag{\textbf{FU1}}
\end{equation}
with $ \displaystyle{{A}_{i+\frac{1}{2}}}$, $\displaystyle{\alpha_{i+\frac{1}{2}}}$ defined in (\ref{bigdef}).
\item \textbf{The second-order fully upwind flux}, given by
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}}=\frac{{A}_{i+\frac{1}{2}}}{2}\left(f(U_{i+\frac{1}{2},-})+f(U_{i+\frac{1}{2},+})\right)-\frac{\left|{A}_{i+\frac{1}{2}}\right|\alpha_{i+\frac{1}{2}}}{2}\left(U_{i+\frac{1}{2},+}-U_{i+\frac{1}{2},-}\right).
\label{fluxFU2}
\tag{\textbf{FU2}}
\end{equation}
\item \textbf{The classical upwind flux,} introduced and studied in \cite{Eymard2000}. It is valid for linear convection and for both linear and nonlinear diffusion. The diffusion term is discretized classically by using a two-points flux and the convection term is discretized with the upwind flux. This flux has then been used for the drift-diffusion system for semiconductors \cite{Chainais-Hillairet2003,Chainais-Hillairet2003a,Chainais-Hillairet2004}. It is defined for equation (\ref{eqgenediff}) by
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}} =\left(-dV_{i+\frac{1}{2}} \right)^{+}U_{i}-\left(-dV_{i+\frac{1}{2}} \right)^{-}U_{i+1}-\frac{r\left(U_{i+1}\right)-r\left(U_{i}\right)}{\Delta x_{i+\frac{1}{2}}}.
\label{fluxCU}
\tag{\textbf{CU}}
\end{equation}
\item \textbf{The Scharfetter-Gummel flux and its extension for nonlinear diffusion.} This scheme is widely used in the semiconductors framework in the case of a linear diffusion. It has been proposed in \cite{Il'in1969,Scharfetter1969} for the numerical approximation of the 1D drift-diffusion model. This scheme preserves equilibrium and is second-order accurate \cite{Lazarov1996}. The definition of the Scharfetter-Gummel flux has been extended to the case of a nonlinear diffusion in \cite{Chatard2011}. For equation (\ref{eqgenediff}) this flux is written
\begin{equation}
\mathcal{F}_{i+\frac{1}{2}} =\frac{dr_{i+\frac{1}{2}}}{\Delta x_{i+\frac{1}{2}}}\left[ B\left(\frac{\Delta x_{i+\frac{1}{2}} dV_{i+\frac{1}{2}}}{dr_{i+\frac{1}{2}}}\right)U_{i}-B\left(-\frac{\Delta x_{i+\frac{1}{2}} dV_{i+\frac{1}{2}}}{dr_{i+\frac{1}{2}}}\right)U_{i+1}\right],
\label{fluxSG1}
\tag{\textbf{SGext}}
\end{equation}
where
\begin{equation}
\left\{\begin{array}{lll} \displaystyle{B(x)=\frac{x}{e^{x}-1}} \text{ for } x \neq 0, \quad B(0)=1, & &\\ \displaystyle{ dr_{i+\frac{1}{2}}=dr\left(U_{i},U_{i+1}\right) ,\vphantom{ \frac{x-\frac{1}{2}}{e^{x}}}} \end{array}\right.
\label{fluxSG2}
\end{equation}
with for $a$, $b \in \mathbb{R}_{+}$,
\begin{equation}
dr(a,b)=\left\{ \begin{array}{cll} \displaystyle{\frac{h(b)-h(a)}{\log(b)-\log(a)}} & &\text{ if } ab>0 \text{ and } a \neq b,\\ \, \\ \displaystyle{r'\left(\frac{a+b}{2}\right)} & &\text{ elsewhere. } \end{array} \right.
\label{fluxSG3}
\end{equation}
\end{itemize}
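As an illustration of how the ingredients of (\ref{fluxSG1})-(\ref{fluxSG3}) can be implemented, here is a minimal Python sketch; the stable evaluation of $B$ near $0$ uses a two-term Taylor expansion, and the fallback to the upwind convective flux when $dr$ vanishes is our own guard for the degenerate case, not part of the definition.

```python
import math

def bernoulli(x):
    """Bernoulli function B(x) = x/(exp(x)-1), with B(0) = 1, evaluated
    through expm1 to stay accurate near x = 0."""
    if abs(x) < 1e-8:
        return 1.0 - 0.5 * x        # two-term Taylor expansion of B at 0
    return x / math.expm1(x)

def dr(a, b, h, rprime):
    """Interface coefficient dr(a,b) from (fluxSG3)."""
    if a > 0.0 and b > 0.0 and a != b:
        return (h(b) - h(a)) / (math.log(b) - math.log(a))
    return rprime(0.5 * (a + b))

def sg_flux(ui, uip1, dv, dx, h, rprime):
    """Scharfetter-Gummel extended flux (SGext) across one interface.
    The fallback to the upwind convective flux when dr vanishes is our own
    guard for the degenerate case, not part of the definition."""
    d = dr(ui, uip1, h, rprime)
    if d <= 0.0:
        return max(-dv, 0.0) * ui - max(dv, 0.0) * uip1
    z = dx * dv / d
    return d / dx * (bernoulli(z) * ui - bernoulli(-z) * uip1)
```

For a linear diffusion ($r(s)=s$, $h(s)=\log s$) one has $dr \equiv 1$ and the flux reduces to the classical Scharfetter-Gummel flux; for $dV_{i+\frac{1}{2}}=0$ it reduces to the two-point diffusion flux.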
\subsection{Order of convergence}
In this part, we test the spatial accuracy of the scheme for linear convection ($f(s)=s$). We consider the equation (\ref{eqgenediff}) in 1D on $(-1,1) \times (0,T)$ for different values of $\partial_{x}V$ and $r$. The time step is taken equal to $\Delta t=10^{-8}$ to study the order of convergence with respect to the spatial step size. The boundary conditions are periodic. An estimation of the error in $L^{1}$ norm at time $T$ is given by
\begin{equation*}
e_{2\Delta x}=\Vert u_{\Delta x}(T)-u_{2\Delta x}(T)\Vert_{L^{1}(\Omega)},
\end{equation*}
where $u_{\Delta x}$ represents the approximation computed from a mesh of size $\Delta x$. The numerical scheme is said to be $k$-th order if $e_{2\Delta x} \leq C \Delta x^{k}$, for all $0<\Delta x \ll 1$.
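The observed orders reported in the tables below can be extracted from the successive errors; a minimal Python sketch, assuming meshes refined by a factor of two as here:

```python
import math

def experimental_order(errors):
    """Observed orders k from successive errors e_{2 dx} on a sequence of
    meshes refined by a factor of two: k ~ log2( e(2 dx) / e(dx) )."""
    return [math.log2(errors[i] / errors[i + 1])
            for i in range(len(errors) - 1)]
```

Applied to the \textbf{SGext} error column of Table \ref{tablenonlin1}, it returns values close to $2$.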
\subsubsection*{Example 1.} We first consider a test case with $ \partial_{x}V=1$ and with $r(s)=s^{2}$, thus $r'(0)=0$ and $r'(s)>0$ for all $s>0$. The initial data is
\begin{equation*}
u_{0}(x)= 0.5+0.5\sin(\pi x), \quad x \in (-1,1)
\end{equation*}
and the final time $T=0.1$.\\
In Table \ref{tablenonlin1} we compare the order of convergence in $L^{1}$ norm of the Scharfetter-Gummel extended scheme (\ref{fluxSG1}) and of our first- and second-order fully upwind fluxes (\ref{fluxFU1})-(\ref{fluxFU2}). Surprisingly, it appears that the Scharfetter-Gummel scheme is still second-order accurate. This can be explained by the fact that $r'$ vanishes at only one point. Moreover, we verify experimentally that our scheme (\ref{fluxFU2}) is second-order accurate, and we notice that the $L^{1}$ error obtained with it is smaller than that obtained with the Scharfetter-Gummel extended scheme.
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline $N_{x}$ & $L^{1}$ error & Order & $L^{1}$ error & Order & $L^{1}$ error & Order \\
& \textbf{SGext} & & \textbf{FU1} & & \textbf{FU2} & \\
\hline
100 & $ 1.451\cdot 10^{-4} $ & 2 & $ 2.667\cdot 10^{-3} $ & 0.87 & $ 8.237\cdot 10^{-5} $ & 1.87 \\
200 & $ 3.619\cdot 10^{-5} $ & 2 & $ 1.398\cdot 10^{-3} $ & 0.93 & $ 2.208\cdot 10^{-5} $ & 1.9 \\
400 & $ 9.027\cdot 10^{-6} $ & 2 & $ 7.156\cdot 10^{-4} $ & 0.97 & $ 5.778\cdot 10^{-6} $ & 1.93 \\
800 & $ 2.251\cdot 10^{-6} $ & 2 & $ 3.621\cdot 10^{-4} $ & 0.98 & $ 1.485\cdot 10^{-6} $ & 1.96 \\
1600 & $ 5.614\cdot 10^{-7} $ & 2 & $ 1.822\cdot 10^{-4} $ & 0.99 & $ 3.772\cdot 10^{-7} $ & 1.98 \\
\hline
\end{tabular}
\caption{Example 1 - Experimental spatial order of convergence in $L^{1}$ norm.}
\label{tablenonlin1}
\end{table}
\subsubsection*{Example 2.} We still consider equation (\ref{eqgenediff}) with $ \partial_{x}V=1$, but now with
\begin{equation*}
r(s)=\left\{\begin{array}{ll} (s-1)^{3} &\text{ if } s \geq 1, \\ 0 & \text{ elsewhere,} \end{array}\right.
\end{equation*}
then $r'(s)=0$ for all $s \in (0,1)$. The initial data is
\begin{equation*}
u_{0}(x)= 1+0.5\sin(\pi x), \quad x \in (-1,1),
\end{equation*}
and the final time is $T=0.01$.\\
In Table \ref{tabledeg1} we compare the order of convergence in $L^{1}$ norm of the Scharfetter-Gummel extended scheme (\ref{fluxSG1}) and of our first- and second-order fully upwind fluxes (\ref{fluxFU1})-(\ref{fluxFU2}). In this case, where $r'$ vanishes on a whole interval, it appears that the second-order scheme (\ref{fluxFU2}) is more accurate than the two other schemes. The Scharfetter-Gummel extended scheme is only first-order accurate, while second-order accuracy is preserved with our new scheme.
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline $N_{x}$ & $L^{1}$ error & Order & $L^{1}$ error & Order & $L^{1}$ error & Order \\
& \textbf{SGext} & & \textbf{FU1} & & \textbf{FU2} & \\
\hline
100 & $ 3.074\cdot 10^{-4} $ & 0.96 & $ 2.697\cdot 10^{-4} $ & 0.55 & $ 1.053\cdot 10^{-4} $ & 1.83 \\
200 & $ 1.554\cdot 10^{-4} $ & 0.98 & $ 1.531\cdot 10^{-4} $ & 0.82 & $ 2.830\cdot 10^{-5} $ & 1.90 \\
400 & $ 7.834\cdot 10^{-5} $ & 0.99 & $ 8.096\cdot 10^{-5} $ & 0.92 & $ 8.040\cdot 10^{-6} $ & 1.82 \\
800 & $ 3.928\cdot 10^{-5} $ & 1 & $ 4.163\cdot 10^{-5} $ & 0.96 & $ 2.288\cdot 10^{-6} $ & 1.81 \\
1600 & $ 1.966\cdot 10^{-5} $ & 1 & $ 2.111\cdot 10^{-5} $ & 0.98 & $ 6.576\cdot 10^{-7} $ & 1.80 \\
\hline
\end{tabular}
\caption{Example 2 - Experimental spatial order of convergence in $L^{1}$ norm.}
\label{tabledeg1}
\end{table}
\subsection{The drift-diffusion system for semiconductors}
We now consider the drift-diffusion system for semiconductors (\ref{DD}). In the two following examples, the Dirichlet boundary conditions satisfy (\ref{compatibility1})-(\ref{compatibility2}), so the thermal equilibrium is uniquely defined by (\ref{eqthermiqueDD}). We compute an approximation $(N^{eq}_{i},P^{eq}_{i},V^{eq}_{i})_{i=1,...,N_{x}}$ of this equilibrium with the finite volume scheme proposed by C. Chainais-Hillairet and F. Filbet in \cite{Chainais-Hillairet2007}.
\subsubsection*{Example 3.} Firstly we consider a 1D test case on $\Omega=(0,1)$. We take $r(s)=s^{2}$. Initial data are
\begin{equation*}
N_{0}(x)=\left\{\begin{array}{ccc} 0 & \text{ for }& x \leq 0.5 \\ 1 & \text{ for } & x>0.5 \end{array}\right. , \quad P_{0}(x)=\left\{\begin{array}{ccc} 1 & \text{ for }& x \leq 0.5 \\ 0 & \text{ for } & x>0.5 \end{array}\right.,
\end{equation*}
and we consider the following Dirichlet boundary conditions
\begin{equation*}
\begin{array}{ccl} N(0,t)=0, \quad & P(0,t) = 1, \quad & V(0,t)=-1,\\ N(1,t) = 1, \quad & P(1,t)=0, \quad & V(1,t)=1. \end{array}
\end{equation*}
The doping profile is
\begin{equation*}
C(x)=\left\{\begin{array}{ccl} -1 & \text{ for } & x \leq 0.5, \\ +1 & \text{ for } & x > 0.5. \end{array}\right.
\end{equation*}
The time step is $\Delta t = 5\cdot 10^{-5}$ and the final time $T=10$. The domain $(0,1)$ is divided into $N_{x}= 64$ uniform cells.\\
In Figure \ref{ex3_1}, we compare the discrete relative energy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ obtained with the Scharfetter-Gummel extended scheme (\ref{fluxSG1}), the classical upwind scheme (\ref{fluxCU}) and our first- and second-order schemes (\ref{fluxFU1})-(\ref{fluxFU2}). The classical upwind flux (\ref{fluxCU}) does not preserve the thermal equilibrium, which explains the phenomenon of saturation observed with it. The Scharfetter-Gummel extended flux (\ref{fluxSG1}) preserves the equilibrium at the points where the densities $N$ and $P$ do not vanish, but due to the zero boundary conditions on the left for $N$ and on the right for $P$, there is also a phenomenon of saturation with it. Contrary to these two schemes, our new schemes (\ref{fluxFU1})-(\ref{fluxFU2}), which preserve the equilibrium everywhere, provide a satisfying long-time behavior. Moreover, we computed the relative energy and its dissipation with our schemes for different numbers $N_{x}$ of cells and noticed that the decay rate does not depend on the spatial step size. We obtained satisfying results even for a small number of cells.
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=2.6in]{ex4_1.eps}}
\subfigure{\includegraphics[width=2.6in]{ex4_2.eps}}
\caption{Example 3 - Evolution of the relative energy $\mathcal{E}_\Delta (t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ in log-scale for different schemes ($N_{x}=64$).}
\label{ex3_1}
\end{figure}
\subsubsection*{Example 4.} Let us consider now a 2D test case picked on the paper of C. Chainais-Hillairet, J. G. Liu and Y. J. Peng \cite{Chainais-Hillairet2003}. As in the previous example, the Dirichlet boundary conditions vanish on some part of the boundary. The time step is $\Delta t= 10^{-4}$, the final time is $T=10$ and we compute an approximate solution on a $32 \times 32$ Cartesian grid.\\
In Figure \ref{ex4}, we compare the discrete relative energy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ obtained with the Scharfetter-Gummel extended scheme (\ref{fluxSG1}), the classical upwind scheme (\ref{fluxCU}) and the fully upwind schemes (\ref{fluxFU1})-(\ref{fluxFU2}). We make the same observations as in Example 3: there is a phenomenon of saturation with the Scharfetter-Gummel extended and the classical upwind schemes, and not with our new scheme. Moreover, the decay rate does not depend on the number of grid cells chosen.
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=2.6in]{ex5_1.eps}}
\subfigure{\includegraphics[width=2.6in]{ex5_2.eps}}
\caption{Example 4 - Evolution of the relative energy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ in log-scale for different schemes.}
\label{ex4}
\end{figure}
\subsection{The porous media equation}
In this part we approximate solutions to the porous media equation
\begin{equation}
\partial_{t}u=\nabla \cdot (xu+\nabla u^{m}).
\label{PM}
\end{equation}
We define an approximation $\left(U^{eq}_{i}\right)_{i=1,...,N_{x}}$ of the unique stationary solution $u^{eq}$ (\ref{barenblatt}) by
\begin{equation*}
U^{eq}_{i}=\left(\overline{C}-\frac{m -1}{2m}\left\vert x_{i}\right\vert^{2}\right)^{1/(m -1)}_{+}, \,\ i=1,...,N_{x},
\end{equation*}
where $\overline{C}$ is such that the discrete mass of $\left(U^{eq}_{i}\right)_{i =1,...,N_{x}}$ is equal to that of $\left(U^{0}_{i}\right)_{i=1,...,N_{x}}$, namely $\displaystyle{\sum_{i}\Delta x_{i}U^{eq}_{i}=\sum_{i}\Delta x_{i}U^{0}_{i}}$. We use a fixed point algorithm to compute this constant $ \overline{C}$.
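One possible realization of this fixed point computation is a bisection on $\overline{C}$, since the discrete mass of the profile is increasing in $\overline{C}$; the following Python sketch is illustrative and not necessarily the algorithm used in the experiments.

```python
def barenblatt_constant(x, dx, mass0, m, tol=1e-12):
    """Find Cbar so that the discrete Barenblatt profile
    U_i = ( Cbar - (m-1)/(2m) x_i^2 )_+^{1/(m-1)} has discrete mass mass0.
    The mass is increasing in Cbar, so plain bisection converges."""
    def mass(c):
        return sum(dx * max(c - (m - 1.0) / (2.0 * m) * xi * xi, 0.0)
                   ** (1.0 / (m - 1.0)) for xi in x)
    lo, hi = 0.0, 1.0
    while mass(hi) < mass0:          # bracket the target mass
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) < mass0 else (lo, mid)
    return 0.5 * (lo + hi)
```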
\subsubsection*{Example 5.} We consider the following one dimensional test case: $m=5$, with initial condition
\begin{equation*}
u_{0}(x)=\left\{\begin{array}{ll} 1 &\text{ if }\, x \in (-3.7,-0.7) \cup (0.7,3.7), \\ 0 & \text{ otherwise. }
\end{array}\right.
\end{equation*}
Then we compute the approximate solution on $(-5.5,5.5)$, which is divided into $N_{x}=160$ uniform cells. The time step is fixed to $\Delta t=10^{-4}$ and the final time is $T=10$.\\
In Figure \ref{ex5_1} we compare the discrete relative entropy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ obtained with the Scharfetter-Gummel extended scheme, the classical upwind scheme and the first- and second-order fully upwind schemes. We obtain almost the same behavior for the Scharfetter-Gummel scheme and the fully upwind schemes. We only notice that the dissipation $ \mathcal{I}_\Delta(t^{n})$ obtained with the Scharfetter-Gummel scheme saturates before those obtained with the fully upwind schemes. This phenomenon of saturation is even more pronounced for the classical upwind scheme.\\
Moreover, we compute the discrete $L^{1}$ norm of $U^{n}-U^{eq}$ obtained with our second-order scheme. According to the paper of J. A. Carrillo and G. Toscani \cite{Carrillo2000}, there exists a constant $C>0$ such that, in this case,
\begin{equation*}
\Vert u(t)-u^{eq}\Vert_{L^{1}(\mathbb{R})} \leq C \exp\left(-{3\,t}/{7}\right), \,\ t \geq 0.
\end{equation*}
The experimental decay of $U^{n}$ towards the steady state $U^{eq}$ is exponential, at a rate of about 6, which is faster than the predicted rate ${3}/{7}$.
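Such experimental decay rates can be estimated by a least-squares fit of $\log\Vert U^{n}-U^{eq}\Vert_{L^{1}}$ against time; a minimal Python sketch:

```python
import math

def decay_rate(t, err):
    """Least-squares slope of log(err) against t, so that
    err(t) ~ C exp(-rate * t) gives back rate."""
    n = len(t)
    y = [math.log(e) for e in err]
    tbar = sum(t) / n
    ybar = sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return -num / den
```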
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=2.6in]{ex6_1.eps}}
\subfigure{\includegraphics[width=2.6in]{ex6_2.eps}}
\caption{Example 5 - Evolution of the relative entropy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ in log-scale for different schemes.}
\label{ex5_1}
\end{figure}
\subsubsection*{Example 6.} We still consider the porous media equation, but now in two space dimension on $\Omega = (-10,10) \times (-10,10)$. We take $m=4$ and the initial condition is
\begin{equation*}
u_{0}(x,y)=\left\{\begin{array}{ll}
\exp\left(-\frac{1}{6-(x-2)^{2}-(y+2)^{2}}\right) &\text{ if }\, (x-2)^{2}+(y+2)^{2}<6,
\\
\,
\\
\exp\left(-\frac{1}{6-(x+2)^{2}-(y-2)^{2}}\right) &\text{ if }\, (x+2)^{2}+(y-2)^{2}<6,
\\
\,
\\
0 & \text{ otherwise. }
\end{array}\right.
\end{equation*}
We compute the approximate solution on a $200 \times 200$ Cartesian grid, with $\Delta t=10^{-4}$ and $T=10$. \\
In Figure \ref{ex6_1} we compare the discrete relative entropy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ obtained with the Scharfetter-Gummel scheme, the classical upwind scheme and the fully upwind schemes.\\
Figure \ref{ex6_2} presents the evolution of the density of gas $u$ computed with our second-order scheme at four different times $t=0$, $t=0.5$, $t=1$ and $t=10$ and the approximation of the stationary solution $u^{eq}$ corresponding to this initial data.\\
Moreover, according to the paper of J. Carrillo and G. Toscani \cite{Carrillo2000}, there exists a constant $C>0$ such that, in this case,
\begin{equation*}
\Vert u(t)-u^{eq} \Vert_{L^{1}(\mathbb{R}^{2})} \leq C \exp\left(-{4\,t}/{7}\right), \,\ t \geq 0.
\end{equation*}
We compute the discrete $L^{1}$ norm of $U^{n}-U^{eq}$ and obtain an exponential decay at a rate of about 2.
\begin{figure}[!ht]
\centering
\subfigure{\includegraphics[width=2.6in]{ex7_1.eps}}
\subfigure{\includegraphics[width=2.6in]{ex7_2.eps}}
\caption{Example 6 - Evolution of the relative entropy $\mathcal{E}_\Delta(t^{n})$ and its dissipation $\mathcal{I}_\Delta(t^{n})$ in log-scale for different schemes.}
\label{ex6_1}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[$t=0$]{\includegraphics[width=2.6in]{ex7_4.eps}}
\subfigure[$t=0.5$]{\includegraphics[width=2.6in]{ex7_5.eps}}
\subfigure[$t=1$]{\includegraphics[width=2.6in]{ex7_6.eps}}
\subfigure[$t=10$]{\includegraphics[width=2.6in]{ex7_7.eps}}
\subfigure[Stationary solution]{\includegraphics[scale=0.45]{ex7_8.eps}}
\caption{Example 6 - Evolution of the density of gas $u$ and corresponding stationary solution $u^{eq}$.}
\label{ex6_2}
\end{figure}
\subsection{Nonlinear Fokker-Planck equations for fermions}
We consider now the nonlinear Fokker-Planck equation (\ref{bosonfermion}) for fermions ($k=-1$). As in the porous media equation case, we define an approximation $\left(U^{eq}_{i}\right)_{i=1,...,N_{x}}$ of the unique stationary solution $u^{eq}$ (\ref{eqbosonfermion}) by
\begin{equation*}
U^{eq}_{i}=\frac{1}{\overline{\beta}e^{\frac{|x_{i}|^{2}}{2}}+1}, \,\ i=1,...,N_{x},
\end{equation*}
where $\overline{\beta} \geq 0$ is such that the discrete mass of $\left(U^{eq}_{i}\right)_{i =1,...,N_{x}}$ is equal to that of $\left(U^{0}_{i}\right)_{i=1,...,N_{x}}$. We use a fixed point algorithm to compute this constant $ \overline{\beta}$.
\subsubsection*{Example 7.} We consider a 3D test case. The initial condition is chosen as the sum of four Gaussian distributions:
\begin{equation*}
u_{0}(x)=\frac{1}{2\sqrt{2\pi}}\left(\exp \left(-\frac{|x-x_{1}|^{2}}{2}\right)+\exp \left(-\frac{|x-x_{2}|^{2}}{2}\right)+\exp \left(-\frac{|x-x_{3}|^{2}}{2}\right)+\exp \left(-\frac{|x-x_{4}|^{2}}{2}\right)\right),
\end{equation*}
where $x_{1}=(2,2,2)$, $x_{2}=(-2,-2,-2)$, $x_{3}=(2,-2,2)$ and $x_{4}=(-2,2,-2)$.\\
We consider a $40 \times 40 \times 40$ Cartesian grid of $\Omega=(-8,8)^{3}$, $\Delta t=10^{-4}$ and $T=10$.\\
The evolution of the discrete relative entropy $ \mathcal{E}_\Delta(t^{n})$, its dissipation $ \mathcal{I}_\Delta(t^{n})$ and $\Vert U^{n}-U^{eq}\Vert_{L^{1}}$ obtained with the scheme (\ref{fluxFU2}) is presented in Figure \ref{ex7_1}. We observe an exponential decay of these quantities, which is in agreement with the result proved by J. A. Carrillo, Ph. Laurençot and J. Rosado in \cite{Carrillo2009}. \\
In Figure \ref{ex7_2} we report the evolution of the level set of the distribution function $u(t,x,y,z)=0.1$ at different times and the level set of the corresponding equilibrium solution $u^{eq}(x,y,z)=0.1$.
\begin{figure}[!ht]
\centering
\includegraphics[width=2.6in]{ex9_1.eps}
\caption{Example 7 - Evolution of the relative entropy $\mathcal{E}_\Delta(t^{n})$, the dissipation $\mathcal{I}_\Delta(t^{n})$ and the $L^{1}$ norm $\Vert U^{n}-U^{eq}\Vert_{1}$.}
\label{ex7_1}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[$t=0$]{\includegraphics[width=2.6in]{ex9_2.eps}}
\subfigure[$t=0.2$]{\includegraphics[width=2.6in]{ex9_3.eps}}
\subfigure[$t=0.4$]{\includegraphics[width=2.6in]{ex9_4.eps}}
\subfigure[$t=1$]{\includegraphics[width=2.6in]{ex9_5.eps}}
\subfigure[$t=10$]{\includegraphics[width=2.6in]{ex9_6.eps}}
\subfigure[Stationary solution]{\includegraphics[width=2.6in]{ex9_7.eps}}
\caption{Example 7 - Evolution of the level set $u(t,x,y,z)=0.1$ and level set of the corresponding stationary solution $u^{eq}(x,y,z)=0.1$.}
\label{ex7_2}
\end{figure}
\subsection{The Buckley-Leverett equation}
Finally we consider the Buckley-Leverett equation, with both nonlinear convection and diffusion:
\begin{equation}
\partial_{t}u+\partial_{x}f(u)=\varepsilon \partial_{x}\left(\nu(u)\partial_{x}u\right).
\label{BLeq}
\end{equation}
The Buckley-Leverett equation is a simple model for the displacement of oil by water in oil reservoirs. The function $u(t,x)$ represents the fraction of fluid corresponding to oil. The fractional flow function $f$ has an s-shaped form
\begin{equation*}
f(u)=\frac{u^{2}}{u^{2}+(1-u)^{2}},
\end{equation*}
and the capillary diffusion coefficient is given by
\begin{equation*}
\nu(u)=4u(1-u).
\end{equation*}
The scaling parameter $ \varepsilon>0$ in front of the capillary diffusion is usually small.
\subsubsection*{Example 8.} We consider the following test case \cite{Kurganov2000,Liu2011}: the domain $\Omega$ is $(0,1)$, the initial condition
\begin{equation*}
u_{0}(x)=\left\{\begin{array}{ccl} 1-3x & \text{ if } & 0 \leq x \leq \frac{1}{3},
\\
\,
\\ 0 & \text{ if } & \frac{1}{3}<x \leq 1, \end{array}\right.
\end{equation*}
and the boundary condition $u(0,t)=1$.\\
The flux of equation (\ref{BLeq}) can be written in the form (\ref{fluxgene}) by taking $V=x$ and
\begin{equation*}
\tilde{h}(u)=4 \left(\log(u)-3u+2u^{2}-\frac{2}{3}u^{3}\right).
\end{equation*}
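This rewriting relies on the algebraic identity $f(u)\,\tilde{h}'(u)=\nu(u)$, which follows from $1-3u+4u^{2}-2u^{3}=(1-u)\left(u^{2}+(1-u)^{2}\right)$. A quick numerical check of the identity (illustrative only):

```python
def f(u):
    """Fractional flow function of the Buckley-Leverett equation."""
    return u * u / (u * u + (1.0 - u) ** 2)

def nu(u):
    """Capillary diffusion coefficient nu(u) = 4 u (1 - u)."""
    return 4.0 * u * (1.0 - u)

def htilde_prime(u):
    """Derivative of htilde(u) = 4 ( log u - 3u + 2u^2 - (2/3) u^3 )."""
    return 4.0 * (1.0 / u - 3.0 + 4.0 * u - 2.0 * u * u)
```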
The domain is divided into $N_{x}=100$ cells and the time step is $\Delta t=10^{-4}$. The numerical solution computed at different times for different values of $ \varepsilon$ is shown in Figure \ref{ex8_1}. The results compare well with those in \cite{Kurganov2000,Liu2011}. Moreover, our scheme remains valid for all values of $\varepsilon$, even $ \varepsilon=0$. In this case the fully upwind flux degenerates into the well-known local Lax-Friedrichs flux.
\begin{figure}[!ht]
\centering
\subfigure[$\varepsilon=10^{-1}$]{\includegraphics[width=2.6in]{ex10_1.eps}}
\subfigure[$\varepsilon=10^{-2}$]{\includegraphics[width=2.6in]{ex10_2.eps}}
\subfigure[$\varepsilon=10^{-3}$]{\includegraphics[width=2.6in]{ex10_3.eps}}
\subfigure[$\varepsilon=0$]{\includegraphics[width=2.6in]{ex10_4.eps}}
\caption{Example 8 - Evolution of the numerical solution for different values of $\varepsilon$.}
\label{ex8_1}
\end{figure}
\section{Conclusion}
In this article we have shown how to build a new finite volume scheme for nonlinear degenerate parabolic equations. To this end, we rewrite the equation in the form of a convection equation, taking the convective and diffusive parts into account together. Then we apply either the upwind method in the linear case or the local Lax-Friedrichs method in the nonlinear case.\\
On the one hand, this construction ensures that a particular type of steady-state is preserved. We obtain directly a semi-discrete entropy estimate, which is the first step to prove the large-time behavior of the numerical solution. On the other hand, we use a slope-limiter method to get second-order accuracy even in the degenerate case.\\
Numerical examples demonstrate the high-order accuracy of the scheme. Moreover, we have applied it to some of the physical models for which the long-time behavior has been studied: the porous media equation, the drift-diffusion system for semiconductors, and the nonlinear Fokker-Planck equation for bosons and fermions. We obtain the convergence of the approximate solution to an approximation of the equilibrium state at an exponential rate. A future work will be to prove this exponential rate by using a discrete entropy/entropy dissipation estimate, as in the continuous case.\\
\textbf{Acknowledgement:} This work was partially supported by the European Research Council ERC Starting Grant 2009, project 239983-NuSiKiMo. The authors thank C. Chainais-Hillairet for interesting discussions on this topic.
\bibliographystyle{plain}
\section{INTRODUCTION}
Do black holes interact with an accretion flow in such a way that a distinct
observational signature, entirely different from those associated with any
other compact object, exists? In other words, can
the existence of a black hole be inferred solely from the
radiation observed at infinity?
These are the crucial questions that theoreticians and observers are
confronting nowadays.
Even though enormous observational evidence in favor of the existence of
black holes has now accumulated, it is still fair to say that their
existence has not been established.
Perhaps the proof of their existence
would have been a much easier task if, for instance,
an argument had been advanced which would:
a) single out the generic component (or components) of a black hole which
is responsible for shaping up the unique observed feature associated with
black holes
b) prove that indeed this generic component always results in the same
observable feature, independent of the environmental conditions in which
the black hole finds itself.
The lack of such an argument may be traced to the plethora of various
accretion flows: accretion in a state of free fall, optically thin or
optically thick,
accretion disks with or without relativistic corrections, shocked flows
{\it etc.} Of course, this diversity of accretion models is highly
justified. On physical grounds one expects accretion flows describing
a solar-mass black hole accreting interstellar medium to be distinct from
those flows describing accretion onto a black hole in a close binary system
or from a supermassive black hole at
the center of an AGN. Viewed from this angle, the detectability of a black
hole appears to be a rather frustrating issue since it is not clear
a priori which of the existing accretion models (if any) would
describe a realistic accreting black hole.
In the present paper we shall show that this may not be the case.
The distinct feature of a black hole spacetime, as opposed to the
spacetimes of other compact objects, is the presence of the event
horizon.
Near the horizon the strong gravitational field is expected
to dominate the pressure forces and thus to drive the accreting material into
a free fall. In contrast, for other compact objects the pressure
forces become dominant as their surface is approached, and thus the
free fall state is absent. We argue that this difference is rather crucial,
resulting in an observational signature of a black hole.
Roughly, the origin of this signature is the inverse Comptonization
of low-energy photons by fast moving electrons.
The presence of the low-energy photon component is expected to be generic
due, for instance, to the disk structure near a black hole or to
Bremsstrahlung of the electron component off the corresponding proton one.
The boosted photon component is characterized by a power law spectrum,
and is entirely independent of the initial spectrum of the low-energy
photons. The spectral index of the boosted photons is determined
by the mass accretion rate and the bulk motion plasma temperature only.
A key ingredient
in proving our claim is the employment of the exact relativistic transfer
describing the Compton scattering of the low-energy radiation field by the
Maxwellian distribution of fast moving electrons.
We will prove that the power law is always present as a part of the
black hole spectrum in a wide energy range. We
investigate the particular case of a non-rotating Schwarzschild black hole
powering the accretion, leaving the case of a rotating black hole
for future analysis.
The presence of the power-law part in the Comptonized spectra was
rigorously proven by Titarchuk \& Lyubarskij 1995 (hereafter TL95).
There it has been demonstrated that, for a wide class of electron
distributions, the power law is a solution of the full kinetic equation.
The importance of Compton upscattering of low-frequency photons
in an optically thick, converging flow has been understood
for a long time.
Blandford and Payne were the first to address this problem in a series of
papers (Blandford \& Payne 1981 and Payne \& Blandford 1981).
In the first paper they derived the Fokker-Planck
radiative transfer equation which took into account
photon diffusion in space and energy, while in the second paper
they solved the Fokker-Planck radiative transfer
equation in the case of the steady state, spherically symmetric,
super-critical accretion into a central black hole with the assumption
of a power-law flow velocity $v(r)\propto r^{-\beta}$ and neglecting
thermal Comptonization. For the inner boundary condition they assumed
adiabatic compression of photons as $r \to 0$. Thus, their flow extended from
$r=0$ to infinity. They showed that all emergent spectra have a high-energy,
power-law tail with index $\alpha=3/(2-\beta)$ (for free fall $\beta=1/2$
and $\alpha=2$), which is independent of the low-frequency
source distribution.
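As a quick arithmetic aside (ours, not part of the cited papers), the Blandford \& Payne index formula can be checked directly; the function name is illustrative only:

```python
# Blandford & Payne (1981): emergent high-energy power-law index
# alpha = 3/(2 - beta) for a flow velocity v(r) ~ r^(-beta).
# Check of the free-fall case quoted in the text.

def spectral_index(beta):
    return 3.0 / (2.0 - beta)

print(spectral_index(0.5))  # free fall (beta = 1/2): alpha = 2.0
```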
Titarchuk, Mastichiadis \& Kylafis (1996)
(hereafter TMK96; see also the extended version in TMK97) present
the exact numerical and approximate
analytical solutions of the problem of spectral formation
in a converging flow, taking into account the inner boundary condition,
the dynamical effects of the free fall, and the thermal motion of
the electrons.
The inner boundary has been taken at a {\it finite} radius, with the spherical
surface considered to be fully absorptive.
TMK96
used a variant of the Fokker-Planck formalism
where the inner boundary mimics a black-hole horizon;
no relativistic effects (special or general) are taken into account.
Thus their results are instructive but
not directly comparable with the observations.
{\it By using the numerical and analytical techniques they demonstrated
that the extended power laws are
present in the resulting spectra in addition to
the blackbody like emission at lower energies.}
Zane, Turolla, Nobili \& Erna (1996) presented a characteristic method
and the code for
solving the radiative transfer equation in differentially moving media in
a curved spacetime. Some applications concerning hot and cold accretion
onto nonrotating black holes were discussed there.
In our paper the full relativistic treatment is worked out
in terms of the relativistic Boltzmann kinetic equation
without recourse to the Fokker-Planck approximation in either configuration
or energy space.
The relativistic transport theory was developed
by Lindquist (1966), who presents the appropriate radiative transfer
in curved spacetime. For completeness we delineate
some important points of that theory related to its application to
radiative transfer in the electron atmosphere.
We demonstrate that the power-law spectra are produced
when low-frequency photons are scattered in the Thomson regime
(i.e. when the dimensionless photon energy $z^{\prime}=E^{\prime}/m_ec^2$
measured in the electron rest frame satisfies $z^{\prime}\ll 1$).
The eigenfunction method for the Comptonization problem
employed in this paper was developed by TL95.
Gieseler \& Kirk 1997 have extended the TL95 treatment by accommodating
an arbitrary anisotropy of the source function.
Their results for the spectral index confirm those of TL95
over a wide range of electron temperature and optical depth; the
largest difference they found is 10\%, occurring at low optical
depth.
The spectral indices, related to the eigenvalues of the problem,
are determined as functions
of the optical depth of the accreting matter. Thus,
for the first time we are able to solve the
full Comptonization problem in the presence of the bulk and thermal motions
of electrons.
In \S~2 and Appendix A we will give the details of the derivation of
the general relativistic
radiative kinetic equation. Section 3 (with some details in Appendix B)
presents the method: we describe the separation of variables
and the reduction of the whole problem to a specific eigenvalue problem in
configuration space. We propose the numerical solution of this problem
by using the iteration method (e.g. Sunyaev \& Titarchuk 1985) with
integrating over characteristics (the photon trajectories in the presence
of Schwarzschild black hole background). Finally, we summarize our work and
draw conclusions in \S 4.
\section{THE MAIN EQUATION}
We begin by considering the background geometry, described by the
following line element:
$$
ds^2~=~-fdt^2~+~{{dr^2}\over{f}}~+~r^2d\Omega^2
\eqno(1)
$$
where, for the Schwarzschild black hole, $f=1-r_s/r$, $r_s=2GM/c^2$,
and $t,~r,~\theta,~\varphi$ are the event coordinates with
$d\Omega^2= d\theta^2 +\sin^2\theta d\varphi^2$.
$G$ is the gravitational constant and $M$ is the mass of a black hole.
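For orientation (illustrative numbers of ours, not from the paper), the Schwarzschild radius $r_s=2GM/c^2$ evaluates to about 3 km per solar mass:

```python
# Schwarzschild radius r_s = 2 G M / c^2, in SI units.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN) / 1e3)        # ~2.95 km
print(schwarzschild_radius(10 * M_SUN) / 1e3)   # ~29.5 km for a 10 M_sun hole
```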
In order to describe the photon radiation field we shall employ
the concept of the distribution function $N$.
The distribution function $N(x, {\bf p})$ describes the number
$dN$ of photons (photon world lines) which cross a certain spacelike
volume element $dV$ at $x(t,r,\theta,\varphi)$, and whose $4-$ momenta
${\bf p}$ lie within a corresponding 3-surface element $dP$ in momentum
space. It is desirable to choose $dV$ and $dP$ to be coordinate-invariants.
Thus $dN$ would be invariant as well and the same would be true
of $N(x, {\bf p})$.
In Appendix A we present the detailed derivation of
the relativistic radiative transfer equation expressed through
the distribution function $N(x, {\bf p})$ and the interaction density
function $S(N)$ (see the definition of this function after Eq. A13).
We will describe the electron component
by a local Maxwellian distribution (e.g. Landau \& Lifshitz 1980,
and Pathria 1970)
$$
F(r, P_e)dP~=~\displaystyle{\aleph^{-1}e^{\beta u_{\mu}P_e^{\mu}}}dP
\eqno(2)
$$
where $\aleph$ is the normalization constant.
One has to interpret $F(r,P_e)(-P^{a}_{e}n_{a})dPdV$
similarly to (A8), with the sole exception that considerations
are restricted to the electron phase space.
For our purpose an arbitrary electron momentum state $P_{e}$
can be represented in the form:
$$
P_e=\left({1\over{\sqrt{1-V^2/c^2}}},~{{|V|\bf n_e}\over{\sqrt{1-V^2/c^2}}}
\right),
\eqno(3)
$$
where the ``thermal three velocity'' ${\vec V}$ stands for a convenient
parameterization of the electron phase space.
We will take $\beta=m_ec^2/kT_e$, while $u_{\mu}$ stands
for the hydrodynamical four velocity of the inflowing plasma
which may be represented relative to the local orthonormal frame in the
form
$$
u=(u^{o},u^{r})=\left({1\over{\sqrt{(1-v^{2})}}},~
-{{\bf v^{r}}\over{\sqrt{(1-v^{2})}}}
\right),
\eqno(4)
$$
where the negative sign in $u^{r}$
takes into account the convergent
nature of the fluid flow.
Note that, as a result of the hydrodynamic bulk motion, the
local Maxwellian distribution exhibits a coupling of the thermal
velocity ${\vec V}$ with the hydrodynamic bulk motion ${\vec v}$, and one gets
$$
\beta u_{\mu}P_e^{\mu}=-{{m_ec^2}\over{kT_e}}
(1-v^2/c^2)^{-1/2}(1-V^2/c^2)^{-1/2}\left[1+\cos\theta{{Vv}\over{c^2}}
\right]
\eqno(5)
$$
We will discuss the coupling effect in \S 4 and we
will consider this issue in detail in our next publication.
Within a $4-$volume $dW$ at the event $x$ there is a decrease in the original
number of world lines due to absorption and scattering out of momentum
range $dP$, given by (cf. the right hand side of Eq. A13)
$$
-\kappa(x,{\bf p})n(x)N(x,{\bf p})~dW~dP.
\eqno(6)
$$
Here $n(x)$ is the proper number density of the electrons interacting
with the photons, namely the number density of electrons as measured
in their own local rest frame, and
$\kappa(x,{\bf p})$ is the invariant absorption coefficient or invariant
opacity. The $\kappa-$ opacity is related to
the usual scattering cross-section $\sigma_s$ via expression
(see Lindquist 1966)
$$
\kappa~=~E\cdot\sigma_s.
\eqno(7)
$$
On the other hand, there is an increase due to pure scattering out of all
other $4-$momentum ranges $dP^{\prime}$ into $dP$, given by
$$
n(x)~dW~dP\int dP^{\prime}\kappa(x,{\bf p})\zeta(x;{\bf p}^{\prime}
\rightarrow{\bf p}) N(x,{\bf p}^{\prime}).
\eqno(8)
$$
The transition probability $\zeta(x;{\bf p}^{\prime}\to{\bf p})$
can be expressed in terms of
the differential cross section $d\sigma_s/(dEd\Omega)$
(see Lindquist 1966) as
$$
\kappa(x,{\bf p}^{\prime})\zeta(x;{\bf p}^{\prime}
\to {\bf p}) = {{E^{\prime}}\over E}
{{d\sigma_s}\over{dEd\Omega}}.
\eqno(9)
$$
Taking into account only Compton scattering of photons off the background
electrons, one may covariantly write the transfer equation (see Eq. A13)
in the following form:
$$
p^{\alpha}{{DN}\over{dx^{\alpha}}}=
{\int}N(r,P')\kappa(x,{\bf p}^{\prime})\zeta(x;{\bf p}^{\prime}
\rightarrow{\bf p})d P^{\prime}
$$
$$-
N(r,P){\int}\kappa(x,{\bf p})\zeta(x;{\bf p}
\rightarrow{\bf p^{\prime}}) dP^{\prime}.
\eqno(10)
$$
The first term in the
right hand side describes the increase in the photon world lines
over the infinitesimal phase space cell centered around $P$
while the second term describes the processes
of depletion.
Recall that the
cross-section for scattering of a photon off an electron in the electron's
rest frame is described by the Klein-Nishina formula
$$
\sigma(\nu\to\nu^{\prime},\xi)=
{{3}\over{16\pi}}n_e\sigma_T{{1+\xi^2}\over{[1+z(1-\xi)]^2}}
\times $$
$$\times\left\{1+{{z^2(1-\xi)^2}\over{(1+\xi^2)[1+z(1-\xi)]}}
\right\}\delta\left[\nu^{\prime}-{{\nu}\over{1+z(1-\xi)}}\right]
\eqno(11)
$$
where $z= h\nu/m_ec^2$ is a dimensionless photon energy,
$\xi$ is the cosine of the scattering angle, and $\sigma_T$ is the Thomson
cross$-$section. Since all quantities on the right hand side of the above
formula are
computed in the rest frame of the electron, one may explicitly write the
transfer equation on the black hole background.
By rewriting (10) for the orthonormal frame of (1), [see Eq. (A25)]
we get the following equation:
$$
\mu\sqrt{f} {{\partial N}\over{\partial r}}-
\nu\mu{{\partial \sqrt{f}}\over{\partial r}}{{\partial N}\over{\partial \nu}}
-(1-\mu^2)\left({{\partial \sqrt{f}}\over{\partial r}}-
{{\sqrt{f}}\over{r}}\right)\cdot {{\partial N}\over{\partial \mu}}
=
$$
$$
\int_0^{\infty}d\nu_1\int_{4\pi}d\Omega_1
\left[\left({{\nu_1}\over{\nu}}\right)^2
\sigma_s(\nu_1\to \nu, \xi)N(\nu_1,\mu_1,r) -
\sigma_s(\nu\to\nu_1, \xi)N(\nu,\mu,r)\right].
\eqno(12)
$$
The scattering kernel can be calculated by performing a Lorentz boost of
$\sigma_s$, multiplying it by $F(r,P_e)$ [see Eq.(2)] and
integrating over $P_e$. Then, the scattering kernel is given by
$$
\sigma_s(\nu\to \nu_1, \xi,\beta)={3\over16\pi}{{n_e\sigma_T}\over{\nu z}}
\int_0^{\pi} \sin\theta d\theta \int d^{3}{\bf v}{{F(r,P_e)}
\over{\gamma}}
$$
$$
\left\{1+\left[1-{{1-\xi}\over{\gamma^2 DD^{\prime}}}\right]^2+
{{z z^{\prime}(1-\xi)^2}\over{\gamma^2D D^{\prime}}}\right\}
\delta(\xi-1+\gamma D^{\prime}/z-\gamma D/z^{\prime}),
\eqno(13)
$$
where $D=1-\mu V$, $D^{\prime}=1-\mu^{\prime}V$,
$\gamma=(1-V^2)^{-1/2}$,
and $\xi={\bf\Omega^{\prime}}\cdot{\bf\Omega}$ is the cosine of the
scattering angle. In deriving the above equation we have chosen
$$
\aleph(\beta)=m_ec\int_0^{\pi} \int_0^c \exp(\beta u_{\mu}P_e^{\mu})
\gamma^5{{V^2}\over{c^2}}\sin{\theta}\,dV\,d\theta
\eqno(14)
$$
so that the distribution of electrons is normalized by a fixed electron
density $n$ as measured in the orthonormal frame associated with (1).
\section{THE METHOD OF SOLUTION}
\subsection[THE METHOD OF SOLUTION]{Separation of variables}
As long as the ejected low energy photons satisfy $z^{\prime}=
h\nu_0/m_ec^2 \gamma \ll 1,$ the integration over incoming
frequencies $\nu_0$ is trivially implemented, provided
that the explicit form of $N(r,\nu_0,\nu, {\bf \Omega})$ is known.
Thus, we need to describe the main properties of the Green
function $N(r,\nu_0,\nu, {\bf \Omega})$ in a situation
where the low-energy photons are injected into the atmosphere with
the bulk motion.
The power-law part of the spectrum (Sunyaev \& Titarchuk 1980, TL95)
occurs at frequencies below the Wien cut-off
($E< E_e$, where $E_e$ is the average electron energy).
In this regime the energy change due to the recoil effect of the electron
can be neglected in comparison with the Doppler shift of the photon.
Hence we can drop the third term in the braces and
the term $\xi-1$ in the delta-function argument of the scattering kernel
(13), transforming it into the classical Thomson scattering kernel
(cf. TL95 and Gieseler \& Kirk 1997).
Now we seek the
solution of the Boltzmann equation (12) with the aforementioned
simplifications, in the form
$$
N(r,\nu,{\bf \Omega})=\nu^{-(3+\alpha)}J(r,\mu).
\eqno(15)
$$
Then we can formally get from (12) that
$$
\mu\sqrt{f}{{\partial J}\over{\partial r}}+
(\alpha+3) \mu {{\partial\sqrt{f} }\over{\partial r}}J-
(1-\mu^2)\left({{\partial\sqrt{f} }\over{\partial r}}-
{{\sqrt{f} }\over{r}}\right){{\partial J}\over{\partial\mu}}=
$$
$$
=n_e\sigma_T\left[-J+{1\over{4\pi}}\int_{-1}^{1}d\mu_1\int^{2\pi}_0
d\varphi R(\xi)J(\mu_1,\tau)\right].
\eqno(16)
$$
Here the phase function $R(\xi)$ is given by
$$
R(\xi)={3\over4} \int_0^{\pi}\sin\theta d\theta
\int d^3{\bf v}{{F(r, P_e)}\over{\gamma^2}}
\left({{D_1}\over{D}}\right)^{\alpha+2}{{1}\over{D_1}}
[1+(\xi^{\prime})^2],
\eqno(17)
$$
where $\xi^{\prime}$ is the cosine of scattering angle between photon
incoming and outgoing directions in the electron rest frame.
The reduced integro-differential equation is two-dimensional and can be
treated and solved much more easily than the original equation (12).
The whole problem is reduced to the eigenvalue problem for equation (16).
We cannot claim that the kinetic equation allows a power-law solution (15)
until $\alpha$ is found and $J(r,\mu)$ is specified.
In order to derive an equation for the determination
of the spectral index, we expand
the phase function $R(\xi)$ in series of Legendre polynomials
(see also Sobolev 1975 and TL95)
$$
R(\xi)=p^0(\mu,\mu^{\prime})+2\sum_{m=1}^np^m(\mu,\mu^{\prime})
\cos{m(\varphi-\varphi^{\prime})},
\eqno(18)
$$
$$
p^m(\mu,\mu^{\prime})=\sum_{i=m}^nc_i^mP_i^{m}(\mu)P_i^{m}(\mu^{\prime})
\eqno(19)
$$
and
$$
c_i^m=C_i{{(i-m)!}\over{(i+m)!}}
\eqno(20)
$$
for $m~=~0,~1,~2,~...~n$.
Since the phase function $R(\xi)$ is given by
the series (Eq. 18) in $\cos{m\varphi}$,
the source function (the second term in brackets of the right hand side
of Eq. 16), and $J (r,{\bf \Omega})$ can be
expanded over $\cos{m\varphi}$ too.
Under the assumption of spherical symmetry for the source, we are interested in
the zero-term of the expansion which satisfies the following equations
$$
\ell J^0(r,\mu)=
-[n_e\sigma_T+(\alpha+3)\mu {{\partial\sqrt{f} }\over{\partial r}}]J^0(r,\mu)
+(n_e\sigma_T)B^0(r,\mu),
\eqno(21)
$$
where
$$
\ell J^0(r,\mu)=\mu\sqrt{f}{{\partial J^0}\over{\partial r}}+
(1-\mu^2)\left({{\partial\sqrt{f} }\over{\partial r}}-
{{\sqrt{f} }\over{r}}\right){{\partial J^0}\over{\partial\mu}},
\eqno(22)
$$
and the source function
$$
B^0(r,\mu)={1\over2}\int_{-1}^{1} p^0(\mu,\mu^{\prime})J^0(r,\mu^{\prime})
d\mu^{\prime}.
\eqno(23)
$$
There are two boundary conditions which our solution must satisfy.
The first is that there is no scattered radiation outside of the
atmosphere
$$
J^0(0,\mu)=0~~~~~~~~{\rm for}~~~\mu<0.
\eqno(24a)
$$
The second boundary condition is that we have an absorptive boundary
at radius $r_s$
$$
J^0(r_s,\mu)=0~~~~~~~~{\rm for}~~~\mu>0.
\eqno(24b)
$$
Thus the whole problem is reduced to the standard radiative transfer problem
for the space part of the solution $J(r,{\bf \Omega})$.
Inversion of the differential operator $\ell$ of the left hand side
of equation (21) leads to the integral equation
for $B^0(r,\mu)$
$$
B^0(r,\mu)={1\over2}\int_{-1}^{0}p^0(\mu,\mu^{\prime})d\mu^{\prime}
$$
$$
\times \int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0(r^{\prime},\mu^{\prime})dT
$$
$$
+{1\over2}\int_{0}^{1}p^0(\mu,\mu^{\prime})d\mu^{\prime}
$$
$$
\times\int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0(r^{\prime},\mu^{\prime})dT,
\eqno(25)
$$
where $T(r_{bn},r,\mu)$ is the optical path along the characteristic curve of
the differential operator $\ell$ determined by the initial
point $r,~\mu^{\prime}$ toward the boundary radius $r_{bn}$
($r_{bn}=r_s$, and $\infty$
for the inner, and outer boundaries, respectively).
The phase function component $p^0(\mu,\mu^{\prime})$ entered in
equations (21) and (23) is
determined by the sum
$$
p^0(\mu,\mu^{\prime})=\sum_{i=0}^nC_iP_i(\mu)P_i(\mu^{\prime}).
\eqno(26)
$$
Thus, we can present the source function $B^0$ also as a sum:
$$
B^0(r,\mu)=\sum_{i=0}^nC_iP_i(\mu)\int_{-1}^{1}P_i(\mu^{\prime})
J^0(r,\mu^{\prime})d\mu^{\prime}.
\eqno(27)
$$
This form of the source function is used for the solution of the
boundary problem (21-24) by the iteration method (e.g. Sunyaev \& Titarchuk
1985). In order to proceed with the iteration method one has to assume
some initial field distribution (in terms of the intensity $J^{0}$)
and then to calculate $B^0$ in accordance with Eq. (27), which
is followed by the solution of the differential equation (21).
This iteration formalism is identical to the integral-equation formalism
in which
$$
B^0(r,\mu)=\sum_{i=0}^nC_iP_i(\mu)B^0_i(r)
\eqno(28)
$$
where the set of $B^0_i(r)$, components of the source function
$B(r,\Omega)$, is
determined by the system of the integral equations (compare with TL95)
$$
B^0_i(r)= {1\over2}\sum_{j=0}^nC_j[\int_{-1}^{0}p_i(\mu^{\prime})
p_j(\mu^{\prime})d\mu^{\prime}
$$
$$
\times\int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0_j(r^{\prime})dT
$$
$$
+\int_{0}^{1}p_i(\mu^{\prime})p_j(\mu^{\prime})d\mu^{\prime}
$$
$$
\times\int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0_j(r^{\prime})dT].
\eqno(29)
$$
Thus the eigenvalue problem, Eqs. (21-24), can be reduced
to an eigenvalue problem for a system
of integral equations (29) where
the optical paths $T(r_{bn},r,\mu)$ and the expansion coefficients of the
phase function $C_i$ depend on the spectral index $\alpha$ as a parameter.
In other words, one has to find the values of $\alpha$ which guarantee
the existence of the nontrivial solution of equations (29).
In \S 3.2 and Appendix B we shall proceed with the numerical solution of the
eigenvalue problem by presenting
the bulk motion phase function $R_b(\xi_b)$ in degenerate form
(cf. Eq. 26).
\par
Now it is worth noting
that in the case of the pure thermal motion in the isothermal
plasma cloud the problem is substantially simplified.
The source function $B^0(r,\mu)$
can be replaced by its zeroth moment $B^0_0(r)$ (TL95)
which guarantees the accuracy of the spectral index determination to better
than 10\% in the worst cases (Gieseler \& Kirk 1997).
For example, the equation for the zeroth moment $B_0^0(r)$ reads
$$
B^0_0(r)={{C_0}\over{2}}[\int_{-1}^{0}d\mu^{\prime}
\int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0_0(r^{\prime})dT
$$
$$
+\int_{0}^{1}d\mu^{\prime}
\int_0^{T(r_{bn},r,\mu^{\prime})}
\exp\{-T[r_{bn},r^{\prime}(r,\mu^{\prime}),\mu^{\prime}]\}
B^0_0(r^{\prime})dT].
\eqno(30)
$$
where $C_0$ is the zero-moment of the phase function.
\subsection[THE METHOD OF SOLUTION]{Photon trajectories and
the characteristics of the space operator $\ell$}
The characteristics of the differential
operator $\ell$ are determined by the following differential
equation
$$
\left[-{{1}\over{2x^2(1-x^{-1})}}+x^{-1}\right]dx=d[\ln(1-\mu^2)^{-1/2}],
\eqno(31)
$$
where $x=r/r_s$ is a dimensionless radius.
The integral curves of this equation (the characteristic curves) are given
by
$$
{{x(1-\mu^2)^{1/2}}\over{(1-x^{-1})^{1/2}}}=
{{x_0(1-\mu_0^2)^{1/2}}\over{(1-x_0^{-1})^{1/2}}}=p,
\eqno(32)
$$
where $p$ is an impact parameter at infinity.
$p$ can also be determined at a given point on a characteristic
from the cosine of the angle between the tangent and the radius vector
at that point, together with the position $x_0$ of the point.
\par
\noindent
In flat geometry,
the characteristics are just straight lines
$$
x(1-\mu^2)^{1/2}=p,
\eqno(33)
$$
where the impact parameter $p$ is the distance of the straight line from the
center.
We can resolve equation (32) with respect to $\mu$ to get
$$
\mu={\pm}(1-p^2/y^2)^{1/2}
\eqno(34)
$$
where $y=x^{3/2}/(x-1)^{1/2}$.
The graph of $y$ as a function of $x$ is presented in Fig. 1, which allows
one to determine the possible range of radii for a given impact parameter $p$
through the inequality $p\leq y$.
For example, if $p\leq \sqrt{6.75}$, then the photon can escape from the inner
boundary (the black hole horizon) toward the observer; vice versa,
all photons going
toward the horizon with these
impact parameters are gravitationally captured by the black hole.
However, if $p>\sqrt{6.75}$,
then finite trajectories are possible with radii in the range
$1\leq x\leq 1.5$, or infinite trajectories with $p\leq y(x)$ ($x$
always greater than $1.5$).
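The critical value $\sqrt{6.75}$ quoted above is simply the minimum of $y(x)=x^{3/2}/(x-1)^{1/2}$, reached at the photon circular orbit radius $x=1.5$; a crude grid search (ours, for illustration) confirms it:

```python
# Locate the minimum of y(x) = x^(3/2) / (x - 1)^(1/2), with x in units of r_s.
# The minimum value is the critical impact parameter sqrt(6.75) ~ 2.598,
# attained at x = 1.5.

def y(x):
    return x**1.5 / (x - 1.0)**0.5

xs = [1.0001 + i * 1e-4 for i in range(100000)]  # radii 1 < x <~ 11
ys = [y(x) for x in xs]
i_min = min(range(len(ys)), key=ys.__getitem__)
print(xs[i_min], ys[i_min])  # ~1.5, ~2.598 (= sqrt(6.75))
```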
\subsection[THE METHOD OF SOLUTION]{Spectral index determination}
We are assuming a free fall for the background flow where the bulk
velocity of the infalling plasma is given by $v(r)=c(r_s/r)^{1/2}$.
In the kinetic equations (12, 16) the density $n$ is measured in
the local rest frame of the flow and it is
$n=\dot m(r_s/r)^{1/2}/(2r\sigma_T)$. Here $\dot m=\dot M/\dot M_E$,
$\dot M$ is the mass accretion rate, and
$\dot M_E \equiv L_E/c^2=4\pi GMm_p/ \sigma_Tc~$ is
the Eddington accretion rate.
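As a consistency check of this density profile (our calculation, not stated explicitly in the text), the Thomson depth of the free-fall atmosphere integrates to $\tau(R)=\dot m\,[1-(r_s/R)^{1/2}]$, so $\tau_b\to\dot m$ for an extended flow:

```python
# Midpoint-rule integration of n*sigma_T = mdot*(r_s/r)**0.5/(2 r)
# from r = r_s to r = R (radii in units of r_s), compared with the
# closed form tau(R) = mdot*(1 - (r_s/R)**0.5).

def tau_numeric(mdot, R, n_steps=200000):
    h = (R - 1.0) / n_steps
    total = 0.0
    for i in range(n_steps):
        r = 1.0 + (i + 0.5) * h
        total += mdot * r**-1.5 / 2.0 * h
    return total

mdot, R = 5.0, 100.0
print(tau_numeric(mdot, R))            # ~4.5
print(mdot * (1.0 - (1.0 / R)**0.5))   # 4.5 exactly
```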
For the cold converging inflow ($kT_e=0$ keV) the electron distribution
is the delta-function $F(r,P_e) =\delta({\bf v}-{\bf v}_b)$ defined
in the velocity phase space in such a way that
$$
\int_0^{\pi}\sin{\theta}d\theta\int d^3{\bf v}F(r,P_e)=1.
$$
In this case the phase function is
$$
R_b(\xi)={3\over4} {{1}\over{\gamma_b^2}}
\left({{D_{1b}}\over{D_b}}\right)^{\alpha+2}{{1}\over{D_{1b}}}
[1+(\xi_b^{\prime})^2],
\eqno(35)
$$
where the subscript ``b'' refers to the bulk velocity direction
(the case of arbitrary temperature will be considered elsewhere).
In the case of zero temperature the directions of the incoming and
outgoing photons are related.
Our goal is to find the nontrivial solution $J^0(r,\mu)$
of this homogeneous problem
and the appropriate spectral index $\alpha$ for which this solution exists.
This problem can be solved by the iteration method
which involves the integration of the differential equation (21)
with the given boundary conditions (24) along
the characteristics (32) by using Runge-Kutta's method.
The integration starts from the inner or the outer boundary depending
on the particular impact parameter $p$ (Eq. 32), which in turn is determined
by the dimensionless radius $x$ ($x=r/r_s$) and the cosine $\mu$
of the angle between the photon direction and the radius vector
at the given point $x$.
If $\mu$ is positive at $x$, then the photon trajectory
(the characteristic) can start at the inner boundary
[if $x<1.5$ and $p<x^{3/2}/(x-1)^{1/2}$,
or if $x>1.5$ and $p<\sqrt{6.75}$] or at the outer boundary
(if $x>1.5$ and $p>\sqrt{6.75}$).
All cases can be understood from Fig. 1.
The trajectories with a given $p$ correspond to lines parallel
to the X-axis, $y=p$. These lines start at $x=1$ or at
infinity. For example, if they start at infinity (i.e. having
negative $\mu$) and $p\geq \sqrt{6.75}$,
they must have a turning point with $\mu=0$. Thus they
must pass through the point with radius $x_{\star}$ where
$p=x_{\star}^{3/2}/(x_{\star}-1)^{1/2}$.
At this point the cosine $\mu$
changes sign from $-$ to $+$ and after that the trajectory passes through the
point with radius $x$ at the positive angle
$\theta=\cos^{-1}\mu$.
\placefigure{fig1}
If the trajectory starts at $x=1$ (i.e. having
positive $\mu$) and $p< \sqrt{6.75}$,
the parallel line $y=p$ has no turning point.
If $\mu$ is negative at $x$, the trajectories starting
at the inner boundary (with positive cosine $\mu$)
have to pass through the turning point $\mu=0$ (changing the
sign of the cosine) at radius $x_{\star}$ where
$p=x_{\star}^{3/2}/(x_{\star}-1)^{1/2}$. However, if a trajectory
starting at the outer boundary has no turning point, the cosine $\mu$ is
always negative along the trajectory.
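The case analysis above can be collected into a small helper (our sketch; the function names, the string return convention, and the $\mu\le 0$ branch as we have inferred it from Fig. 1 are ours, not from the paper):

```python
import math

P_CRIT = math.sqrt(6.75)  # critical impact parameter, units of r_s

def impact_parameter(x, mu):
    """p from Eq. (32) for a photon at dimensionless radius x, direction cosine mu."""
    return x * math.sqrt(1.0 - mu * mu) / math.sqrt(1.0 - 1.0 / x)

def starting_boundary(x, mu):
    """Boundary ('inner' or 'outer') where the characteristic through (x, mu) starts."""
    p = impact_parameter(x, mu)
    if mu > 0:
        # Inner start if x < 1.5 (then p < y(x) automatically), or if the
        # photon escapes past the photon orbit with p below critical.
        if x < 1.5 or p < P_CRIT:
            return "inner"
        return "outer"  # turned around at x_star > 1.5 and re-entered
    # mu <= 0: an inner start requires a turning point inside x = 1.5,
    # which exists only for p above critical (our reading of Fig. 1).
    return "inner" if (x < 1.5 and p > P_CRIT) else "outer"

print(starting_boundary(3.0, 0.9))   # inner: p ~ 1.60 < 2.598
print(starting_boundary(1.9, 0.1))   # outer: p ~ 2.75 > 2.598
print(starting_boundary(1.2, -0.1))  # inner: falling back after a turning point
```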
This space integration is followed by the integration of $J^0(r,\mu^{\prime})$
over the angular variable $\mu^{\prime}$ in Eq. (23).
As the initial
distribution for $B^0(r,\mu)$ or $J^0(r,\mu)$
we can choose, for example, the uniform one.
We use Gaussian integration to calculate the integral in Eq. (23)
(see e.g. Abramowitz \& Stegun 1970 for details of the method).
After quite a few iterations the iterative
process converges and it produces the eigenfunction source distribution
$B^0(r,\mu)$. The number of iterations $n$ is related to the average number
of scatterings which the soft photons undergo to transform into the hard
ones (e.g. Titarchuk 1994). It is determined by the
Thomson optical depth of the bulk motion atmosphere $\tau_b=\tau_T(r_s)$.
For the cold atmosphere ($T_e=0$)
the iteration number is $n\simeq 2\tau_b$. The convergence of the process
can be achieved only with the proper choice of the value of the spectral index
$\alpha$.
\section{ RESULTS OF CALCULATIONS AND DISCUSSION}
Fig. 2 presents the results of
the calculations of the spectral indices as a function
of mass accretion rates.
It is clearly seen
that the spectral index is a weak function of mass accretion rate
in a wide range of $\dot m =3-10$. The asymptotic value of the spectral
index for high mass accretion rate is 1.75 which is between $\alpha =2$
that was found by Blandford \& Payne 1981 for the infinite medium and
$\alpha\approx 1.4$ which was found by TMK96 for the finite bulk motion
atmosphere.
\placefigure{fig2}
The latter two results are obtained in the nonrelativistic Fokker-Planck
approximation.
We see that the efficiency of the hard photon production in the cold
bulk motion atmosphere is quite low. This is not the case if the plasma
temperature is of the order of a few keV or higher. The coupling effect
between the bulk and local Maxwellian motion occurs when the bulk
motion velocity is very close to the speed of light, i.e. when
the matter is very close to the horizon.
The upscattering effect increases significantly in the latter case.
In the regime of the relativistic bulk motion
the electron distribution (2) has a sharp maximum at $\theta=\pi$ and $V=c$.
In the vicinity of the maximum the distribution is characterized by
the exponential shape $F(r, P_e)\propto \exp(-\beta \gamma_b/2\gamma)$
(see also Eq. 5).
More results and details regarding the relativistic coupling will be
presented elsewhere.
\placefigure{fig3}
As an example, in Fig. 3 we demonstrate the zeroth moment of the
source function distribution (the hard photon production). It is seen there
that the distribution has a strong peak around
2 $r_s$. This means that the vicinity of the black hole is a place
where the hard photons are produced by upscattering of the soft photons
off the converging electrons.
Our calculations were made under the assumption of the free fall velocity
profile.
Since the energy gain due to the bulk motion Comptonization is no larger
than a factor of 3 (if the spectral indices are higher than 1.5, TMK97), it
follows that we can safely neglect the effects of the radiation force in
our calculations if the injected photon flux in the converging inflow
is of order of a few percent of the Eddington luminosity.
The assumption of Thomson scattering accepted in our solution restricts
the relevant energy range to $E<m_ec^2$.
Our approach cannot determine accurately the exact position of the high
energy cutoff, which is formed due to the downscattering of the very energetic
photons in the bulk motion electron atmosphere. Additional efforts
are required to confirm the qualitative estimate of the high energy cutoff
position as of order $m_ec^2$ (TMK97). Laurent \& Titarchuk 1997 (in
preparation) by using Monte Carlo calculations checked and confirmed our
results for the spectral indices and the TMK97
estimates of the high energy cutoff position.
Furthermore, they found prominent spectral features at
energies $\gtorder 400$ keV.
As a conclusion we would like to point out the definitive
(according to our model) difference between black holes and neutron
stars, as can be ascertained from their spectral properties while
in their soft states, when their luminosity is dominated
by the quasithermal, soft, component: In the black hole case there
should always be an additional steep power law high energy tail extending
to energies $\sim m_e c^2$. This component should
be absent in neutron star systems, because the effect of the bulk
motion is suppressed by the radiation pressure in this case.
We presented the full relativistic formalism and solved
semi-analytically the kinetic equation by using the TL95 eigenfunction method
(see also Gieseler \& Kirk 1997) in the case of plasma
infalling radially onto a compact object with a soft source of input photons.
We found that {\it the converging
flow has crucial effects on the emergent spectrum for moderately
super-Eddington mass accretion rates}.
Our power-law spectra can be applied to explain
observations of black hole candidate sources.
\section{ACKNOWLEDGMENTS}
We thank the anonymous referee for reading and evaluating the present
paper.
L.T. would like to acknowledge support from NASA grants NCC5-52 and
NAG 5-3408, and Alex Muslimov and Leonid Ozernoy for discussions and
useful suggestions. L.T. also acknowledges Wan Chen for pointing out
some particular details of the observational situations in black hole
candidate sources.
\newpage
\section{Introduction}
\begin{quotation}
\footnotetext[1]{[email protected]}\footnotetext[2
{farida\[email protected]}\emph{"Who of us would not be glad to lift the
veil behind which the future lies hidden; to cast a glance at the next
advances of our science and at the secrets of its developments during future
centuries?" }$-$ \textbf{David Hilbert (1900).}
\emph{"It is by the solution of problems that the investigator tests the
temper of his steel; he finds new methods and new outlooks, and gains a wider
and freer horizon" }$-$ \textbf{David Hilbert (1900).}
\end{quotation}
In the early seventies Abdus Salam and his co-workers proposed the concept of
strong gravity, in which the successive self-interaction of a nonlinear
spin-2 field was used to describe a non-abelian field of strong interactions.
This idea was formulated in a two-tensor theory of strong and gravitational
interactions, where the strong tensor fields are governed by Einstein-type
field equations with a strong gravitational constant $G_{f}\approx10^{38}$
times the Newtonian constant $G_{N}$. Within the framework of this proposal,
tensor fields were identified to play a fundamental role in the
strong-interaction physics of quantum chromodynamics (QCD) \cite{CJI, ASJ,CSI,
DJS, YNE, ASCS}.
All the calculations done in the numerical lattice QCD and other related
experiments indicate that QCD, the worthy theory of strong interactions,
possesses gauge symmetry based on the group $SU(3)-$color of quantum
Yang-Mills theory (QYMT). Gravitational interactions also have similar
symmetry (the coordinate invariance in a space-time manifold), but resist
quantization. \emph{This prevents physicists from constructing a quantum
theory of gravity based on the gauge principle, and also inhibits the direct
unification of gravity with strong interaction} \cite{IANC}.
The origin of the difficulties is now clear to us: the QCD action is scale
invariant and quadratic in the field strengths $F_{\mu \nu}^{i}$
(i.e. \emph{non-unitary}) and \emph{renormalizable}, while the Einstein-Hilbert
action for pure gravity is \emph{unitary} and \emph{nonrenormalizable}. Thus,
the unification of gravity with QCD seems unattainable; however, that is not
the case: The valiant attempt to disprove this \emph{prima facie}
impossibility offers an outstanding example of the inspiring effect which such
a very special and apparently important solution may have upon physics community.
\bigskip Having now recalled to mind the origin of the problem, let us turn to
the question of whether there is an existing unification scheme that can be
used to solve the problem. Strong gravity formulation is such the unification
scheme that allows the gravity to be merged with QYMT. In this case, a
gravitational action which possesses quadratic terms in the curvature tensor
has been shown to be renormalizable (\cite{KSS}, P.963 \& P.967). Here, the
resulting non-gauge-invariant divergences are absorbed by nonlinear
renormalizations of the gravitational fields and Becchi-Rouet-Stora
transformations (\cite{KSS}, P.953). In the following, \emph{the dynamical
breaking of the scale invariance of the Weyl action (which describes the
short-distance behavior of strong gravity theory) induces: (1) the
perturbative/short-range component of the non-relativistic QCD potential,
and of the non-relativistic quantum electrodynamic (QED) potential; (2) Einstein
general relativity as an effective long-distance limit of the theory} $-$
\textbf{This is the \emph{fons et origo} of the gauge/gravity duality; and the
solution to the quantum Yang-Mills existence on R}$^{4}$ \textbf{and dark
matter problems, within the strong gravity formulation.}
The catch here is that quantum gravity (i.e. a quantum mechanically induced
gravity) cannot be derived straightforwardly by quantizing the nonrenormalizable
Einstein GR, but rather the Weyl action, which leads to Einstein's theory of
gravity at large distances \cite{IANC}; in the same way the gauge theory of
Glashow-Weinberg-Salam, $G_{EW}=SU(2)_{L}\times U(1)_{Y},$ reduces to
$U(1)_{Q}$ after the spontaneous symmetry breakdown\cite{EW, JCA}.
QCD possesses four remarkable properties that strong gravity must have for it
to be called a complete theory of strong interactions. The \textbf{first} is
\emph{asymptotic freedom} (i.e., the logarithmic decrease of the QCD coupling
constant $\alpha_{s}(Q_{0}^{2})\sim1/(\ln Q_{0}^{2})$ at large momentum
transfers, or equivalently the decrease of $\alpha_{s}$ at small distances,
$\alpha_{s}(r)\sim1/(\ln r)$) which permits one to perform consistent
theoretical computations of hard processes using perturbation theory. This
property also implies an increase of the running coupling constant at small
momentum transfer, that is, at large distances. The \textbf{second} important
property is \emph{confinement}, in which quarks and gluons are confined
within the domain of their strong interaction and hence cannot be observed as
real physical objects. The physical objects observed experimentally, at large
distances, are hadrons (mesons and baryons). The \textbf{third} characteristic
property is the \emph{dynamical breakdown of chiral symmetry}, wherein the
vector gauge theories with massless Dirac fermion fields $\psi$ are perfectly
chiral symmetric. However, this symmetry is broken dynamically when the vector
gauge theory is subjected to chiral $SU(2)$ rotations. This is the primary
reason why chiral symmetry is not realized in the spectrum of hadrons and
their low energy interactions \cite{BIF, QHN}. The \textbf{fourth} property is
the \emph{mass gap} ($\Delta$). Here, every excitation of the
QCD vacuum has minimum positive energy (i.e. $\Delta>0$); in other words,
there are no massless particles in the theory \cite{EW, JCA}. Additionally,
strong gravity must also be able to reproduce the two fundamental parameters
of QCD (i.e., the coupling $\alpha_{s}$ and the fundamental quark mass
$m_{q}$) (\cite{JBER}, P.178).
Thus, the three demands that must be met by strong gravity theory for it to be
called a unification scheme for QYMT-GR are:
\textbf{(1)} It must admit the four QCD properties afore-listed.
\textbf{(2)} It must be able to recover the fundamental parameters of QCD
(i.e., $\alpha_{s}$ and $m_{q}$).
\textbf{(3)} It must be able to reproduce Einstein's general relativity as the
limiting case of its long-distance behavior.
Any theory that fulfills these three demands can be termed "\textbf{a unified
theory of nature}"\textbf{.}
In the present paper, we study the structure of a dynamically broken
scale-invariant quantum theory (Weyl's action) within the context of strong
gravity formulation, and its general properties. The major problem which has
to be faced immediately is the unresolved question of unitarity of pure
gravity: Weyl's action is non-unitary while the Einstein-Hilbert action for
pure gravity is unitary. This problem is circumvented within the framework of
strong gravity, where the unitary Einstein-Hilbert term is induced after the
breakdown of the scale invariance of Weyl's action (\cite{ASCS}, P.324). To
put it in a proper and succinct context, Einstein GR emerges from the Weyl's
action after the dynamical breakdown of its scale invariance. Hence Einstein's
theory of gravity is not a fundamental theory of nature but the classical
output of the more fundamental gluon-dependent Weyl's action.
The paper is organized as follows. In \textbf{section II}, we briefly review
the BCJ double-copy construction of gravity scattering amplitudes.
\textbf{Section III} is devoted to a review of strong gravity theory; most
importantly, we prove that the BCJ double-copy construction exists within the
strong gravity formulation. The calculation of the dimensionless strong
coupling constant is done in \textbf{section IV}, and the theoretically
obtained value is tested experimentally in \textbf{section V}. We present
strong gravity as a massive spin-two theory in \textbf{section VI}; here, we
show that the dynamics of strong gravity theory is fully symmetric, but its
vacuum state is asymmetric, and that electroweak and custodial symmetries can
be induced dynamically. The critical temperature, fundamental mass and mass
gap of the QCD vacuum are obtained in \textbf{section VII}; this leads to the
derivation of the effective pure Yang-Mills potential. The gauge-gravity
duality property of strong gravity theory is studied in \textbf{section VIII},
where we also show that strong gravity possesses UV regularity and dynamical
chiral symmetry breaking. The confinement and asymptotic freedom properties of
strong gravity are studied in \textbf{section IX}, where we calculate the
energy density of the QCD vacuum. The existence of quantum Yang-Mills theory
on $R^{4}$ is established in \textbf{section X}. The vacuum stabilizing
property of a Higgs boson with mass $m_{H}=129$ $GeV$ is studied in
\textbf{section XI}. The solutions to the neutrino mass, dark energy and dark
matter problems are presented in \textbf{sections XII}, \textbf{XIII} and
\textbf{XIV} respectively. The physics of repulsive gravity and cosmic
inflation is presented in \textbf{section XV}. The conclusion is given in
\textbf{section XVI}.
\section{Theoretical Preliminaries}
Research in strong gravity has always had a rather unique flavor, due to the
conceptual difficulty of the field and its remoteness from experiment. We argue,
in this paper, that if the conceptual misconception $-$ namely, that gravity
is bedeviled with many untamable infinities $-$ that beclouds the field could
be circumvented, then the complexity enshrined in the field would become
largely trivial.
The most powerful tool for removing this conceptual difficulty is encoded in a
long-known formalism: that the asymptotic states of gravity can be obtained as
tensor products of two gauge theory states (i.e. $gravity=gauge\otimes
gauge$). This idea was extended to certain interacting theories, in 1986, by
Kawai, Lewellen and Tye \cite{TOGO1}; and to strong-gravitational theory by A.
Salam and C. Sivaram in 1992 \cite{ASCS}. The modern understanding of this
double-copy formalism is largely due to the work of \textbf{B}ern,
\textbf{C}arrasco and \textbf{J}ohansson (BCJ). Formally, double-copy
construction (also known as \textbf{BCJ} construction) is used to construct a
gravitational scattering amplitude by using modern unitarity method, and the
scattering amplitudes of two gauge theory as building blocks \cite{TOGO2,
TOGO3}. This pathbreaking technique of computing perturbative scattering
amplitudes, which led to a deeper understanding of quantum field theory,
gravity, and to powerful new tools for calculating QCD processes, was awarded
the \emph{2014 J.J. Sakurai Prize for Theoretical Particle Physics}
\cite{TOGO4}.
\textbf{BCJ} construction has overturned the long-accepted dogma on Einstein's
GR, which posits that GR is nonrenormalizable. This new approach breathes new
life into the search for a fundamental unified theory of nature based on the
"supergravity" approach. Supergravity tries to tame the infinities encountered
in Einstein's theory of gravity by adding "supersymmetries" to it. In a
variant of the theory called $N=8$ supergravity, eight new "mirror-image"
particles (gravitinos) allow physicists to tame the infinities
present in Einstein's theory of gravity; other variants of supergravity
are $N=2,4$ Yang-Mills-Einstein-Supergravity (YMESG) and $N=0$
Yang-Mills-Einstein (YME) theories (\cite{TOGO5,TOGO6}, and the references
therein) $-$ Supergravity is like a \textbf{"young twig, which thrives and
bears fruit only when it is grafted carefully and in accordance with strict
horticultural rules upon the old stem"}.
As to the $N=0$ YME theory (where $N=0$ means that there are no
supersymmetries in the theory), we claim that this theory is by no means
different from the broken-scale-invariant Weyl's action. This assertion can
only be true if this action naturally possesses BCJ and gauge-gravity duality
properties. The BCJ property is established in the next subsection, and we
show that the potential, carried by the broken-scale-invariant Weyl's action,
possesses this property in the \textbf{subsection D} of \textbf{section III}
of this paper. The gauge-gravity duality property of strong gravity is
established in \textbf{section VIII}: this is our \textbf{"guide post on the
mazy paths to the hidden truths"} of neutrino mass and dark energy problems.
The discovery made here is that both problems are connected by the effective
vacuum energy (or effective Weyl Lagrangian).
\subsection{Perturbative Quantum Gravity and Color/ Kinematics Duality: A
Review}
QCD (one of the variants of Yang-Mills theory) is the current well-established
theory of the strong interactions. Due to its asymptotic-free nature,
perturbation theory is usually applied at short distances; and the ensuing
predictions have achieved an astonishing success in explaining a wide range of
phenomena in the domain of large momentum transfers. Upon closer consideration
the question arises: Can perturbation theory be used to explore the quantum
behavior of gravity at short distances as well? The answer to that question is
a resounding yes! The discovery of \textbf{BCJ} principle is now our window
into the quantum world of gravity with tamable infinities at short distances.
This principle states that, regardless of the number of spacetime dimensions
and loops, a valid gravity scattering amplitude is obtained by replacing color
factors with kinematic numerators in a gauge-theory scattering amplitude. The
resulting gauge-coupling doubling is called \textbf{BCJ/double-copy} property
\cite{TOGO2, TOGO3}.
The gluon's scattering amplitudes, (in terms of cubic graphs) at L loops and
in D dimensions, are given by (\cite{TOGO2, TOGO3,TOGO5,TOGO6}, and the
references therein)
\begin{equation}
A_{m}^{(L)}=i^{L-1}g_{\alpha}^{m-2+2L}\underset{i\text{ }\in \text{ cubic}}{\sum}\int \frac{d^{LD}\ell}{(2\pi)^{LD}}\frac{1}{S_{i}}\frac{c_{i}n_{i}}{D_{i}}
\end{equation}
where $m$ is the number of points, $g_{\alpha}$ is the dimensionless gauge
coupling, $S_{i}$ are the standard symmetry factors and $D_{i}$ are
denominators encoding the propagator structure of the cubic graphs. $c_{i}
$ are the color factors and $n_{i}$ are the kinematic numerators. BCJ
construction posits that within the gauge freedom of individual cubic graphs,
there exist unique amplitude representations that make kinematic factors
$n_{i}$ obey the same general algebraic identities as color factors. Hence,
color/kinematics duality holds: $n_{i}\iff c_{i}$ \cite{TOGO2, TOGO3}.
The double-copy principle then states that once the color/kinematics duality
is satisfied (i.e., $n_{i}\iff c_{i}$), the L-loop scattering amplitudes of a
supergravity theory (with $N\geq4$) are given by
\begin{equation}
M_{m}^{(L)}=i^{L-1}\left( \frac{k_{\alpha}}{2}\right) ^{m-2+2L}\underset{i\text{ }\in \text{ cubic}}{\sum}\int \frac{d^{LD}\ell}{(2\pi)^{LD}}\frac{1}{S_{i}}\frac{n_{i}^{2}}{D_{i}}
\end{equation}
where the dimensionless $k_{\alpha}$ is the gravity coupling; and it is assumed
that the two involved gauge fields are from the same Yang-Mills theory. From
Eqs. (1) and (2), we have
\begin{equation}
A_{m}^{(L)}=M_{m}^{(L)}\iff k_{\alpha}=2g_{\alpha}
\end{equation}
Eq.(3), which is valid for all variants of supergravity with $N\geq4$, is the
expected gauge-coupling doubling or BCJ property. This property shows that
gravitons and gluons should be part of a fundamental unified theory of nature.
However, the devil is in the detail: the color-kinematics duality ($n_{i}\iff
c_{i}$) is more or less a conjecture; and the scattering-amplitude method of
probing the quantum nature of gravity is full of many mathematical
\emph{landmines}. Nevertheless, the conclusions of N = 8 supergravity theory
are indisputable. For we are convinced that the gauge-coupling doubling and
gauge-gravity duality should exist in the correct theory of quantum gravity
without appealing to supersymmetries. This is where strong gravity theory (or
point-like gravity) kicks in. Our present knowledge of the theory of strong
gravity puts us in a position to \emph{attack successfully }the problem of
\textbf{quantum gravity/point-like gravity} by using powerful-mathematical
tools (\emph{formula operators from differential geometry with their duality
and supersymmetry-like properties}) \emph{bequeathed} to us by
\emph{antiquity}.
We conclude this section with a great quote from one of the greatest
revolutionary mathematicians the world has ever known (\emph{David Hilbert})
\cite{TOGO7}: "If we do not succeed in solving a mathematical problem, the
reason frequently consists in our failure to recognize the more general
standpoint from which the problem before us appears only as a single link in a
chain of related problems. After finding this standpoint, not only is this
problem frequently more accessible to our investigation, but at the same time
we come into possession of a method which is applicable also to related
problems" $-$The \textbf{"standpoint"} discovered in this paper is the
\textbf{strong gravity theory}.
\section{Strong Gravity Theory: A Review}
We briefly review the standard formulation of strong gravity theory in this
section: (for more details see \cite{CJI, ASJ, CSI, DJS, YNE, ASCS, KSS} and
the references therein). Beginning with the two-gluon phenomenological fields
(i.e. double-copy construction), we re-establish strong gravity as a
renormalizable four-dimensional quantum gauge field theory by varying
\emph{Weyl action} with respect to the spacetime metric constructed out of the
two-gluon configuration. In this case, the two-point configuration (which
leads to the quantization of space-time itself) naturally introduces a minimum
length $2r_{g}$ (i.e. "intergluonic distance"); where $r_{g}$ is the "gluonic
radius". It should be emphasized here that this way of quantizing space-time
begins from the trajectories of the \emph{two gluons}, i.e., the curves or paths of
the geometry used. This method of constructing spacetime geometry from 2-gluon
phenomenology has been shown to be compatible with nature: The visualization
of the QCD vacuum (i.e. \textbf{visualization of the action density of the
Euclidean-space QCD vacuum in three-dimensional slices of a $24^{3}\times36$
spacetime lattice}), by D. B. Leinweber, has shown that empty
space is not empty; rather it contains quantum fluctuations in the gluon field
at all scales (this is famously referred to as "gluon activity in a vacuum")
\cite{TOGO8}. This can only mean one thing: that gluon field is the
fundamental field of nature, and the spacetime metric/gravity is emergent from
2-gluon configuration. This is the main argument of BCJ/double-copy
construction. \emph{Simpliciter!}
By taking the vacuum states of hadrons to be colorless (i.e. color-singlet),
the approximation of an external QCD potential (the hadron spectrum above
these levels) can be generated by color-singlet quanta. Based on the fully
relativistic QCD theory, these contributions have to come from the summations
of suitable Feynman diagrams in which dressed n-gluon configurations are
exchanged between several "flavors" of massless quarks. Thus, the simplest
such system (with contributions from n-gluon irreducible parts
$n=2,3,...,\infty$ and with the same Lorentz quantum numbers) will have the
quantum numbers of the 2-gluon state. The color singlet external field is then
constructed from the QCD gluon field as a sum (\cite{DJS}, P.572)
\begin{equation}
G_{\mu}^{a}G_{\nu}^{b}\eta_{ab}+G_{\mu}^{a}G_{\nu}^{b}G_{\sigma}^{c}d_{abc}+...
\end{equation}
where $\eta_{ab}$ is the $SU(3)_{C}$ color-metric, $d_{abc}$ is the totally
symmetric $8\otimes8\otimes8\rightarrow1$ coefficient and $G_{\mu}^{a}$ is the
dressed gluon field. The curvature would be generated by the derivatives of
$G_{\mu}^{a}$ (\cite{ASCS}, P.323). The 2-gluon configuration can then be
written from Eq.(4) as
\begin{equation}
g_{\mu \nu}(x)=G_{\mu}^{a}G_{\nu}^{b}\eta_{ab}
\end{equation}
with
\begin{equation}
g=\det(g_{\mu \nu}(x))
\end{equation}
Eq.(5) is taken as the dominating configuration in the excitation systematics.
In this picture, the metric is constructed from a gluon-gluon interaction, and
the gluon-gluon effective gravity-like potential (effective Riemannian metric,
$g_{\mu \nu}$) would act as a metric field passively gauging the effective
diffeomorphisms (general coordinate transformations), just as is done by the
Einstein metric field for the general coordinate transformations of the
covariance group (\cite{YNE}, P.174).
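Before proceeding, note that the symmetry of the metric of Eq.(5) in its spacetime indices is automatic whenever the color metric $\eta_{ab}$ is symmetric, as a metric must be. The sketch below illustrates this numerically; the random field values and the identity choice for $\eta_{ab}$ are illustrative assumptions only, not inputs of the theory:

```python
import random

# Illustrative check (toy numbers, not values from the text): the 2-gluon
# configuration g_{mu nu} = G_mu^a G_nu^b eta_ab of Eq.(5) is automatically
# symmetric in (mu, nu) whenever the color metric eta_ab is symmetric.
# Here eta_ab is taken to be the 8x8 identity purely for illustration.
random.seed(0)
DIM, COLORS = 4, 8
G = [[random.uniform(-1.0, 1.0) for _ in range(COLORS)] for _ in range(DIM)]
eta = [[1.0 if a == b else 0.0 for b in range(COLORS)] for a in range(COLORS)]

g = [[sum(G[mu][a] * G[nu][b] * eta[a][b]
          for a in range(COLORS) for b in range(COLORS))
      for nu in range(DIM)] for mu in range(DIM)]

max_asym = max(abs(g[m][n] - g[n][m]) for m in range(DIM) for n in range(DIM))
print("maximum |g_mn - g_nm| =", max_asym)
```

The symmetry holds for any field values, since relabeling the summed color indices exchanges $\mu$ and $\nu$.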
It is crystal-clear that Eq.(5), as put forward by the proponents of strong
gravity, is by no means different from the double-copy structure of gauge
fields in the BCJ construction ($gravity=gauge\otimes gauge$); as such we
should be able to arrive at the same conclusions. The BCJ formalism
(double-copy construction) is formulated by using scattering-amplitude method.
Similarly, we show that double-copy construction can be obtained by using
formula operators from the differential geometry. Our approach puts BCJ
formalism on a proper mathematical footing: \emph{it puts flesh on the bones
of BCJ formalism.}
\subsection{Scale-Invariant-Confining Action for Strong Gravity Theory}
In analogy with the scale-invariant QCD action, which is \emph{quadratic in the
field strengths} $F_{\mu \nu}^{i}$ (with dimensionless coupling), we have the
corresponding Weyl action for gravity (\cite{ASCS}, P.322)
\begin{equation}
I_{W}=-\alpha_{s}\int d^{4}x\sqrt{-g}C_{\alpha \beta \gamma \delta}C^{\alpha \beta \gamma \delta}
\end{equation}
where $\alpha_{s}$ is purely dimensionless and can be made into a running
coupling constant $\alpha_{s}(Q_{0}^{2}).$ It's worth noting that Eq.(7) is
not only \emph{generally covariant} but also \emph{locally scale invariant}
(\cite{IANC}, P.6). The Weyl tensor ($C_{\alpha \beta \gamma \delta}$) is
constructed out of the corresponding Riemann curvature tensor, i.e., the
covariant derivatives involving gauge fields, characterized with the
generators of the conformal group. In the following, the metric is generated
by Eq.(5) (\cite{ASCS}, P.323).
The Weyl curvature tensor is defined as the traceless part of the Riemann
curvature \cite{TT}
\begin{align}
C_{\alpha \beta \gamma \delta} & =R_{\alpha \beta \gamma \delta}-\frac{1}{n-2}(R_{\alpha \gamma}\eta_{\beta \delta}-R_{\alpha \delta}\eta_{\beta \gamma}\nonumber \\
& -R_{\beta \gamma}\eta_{\alpha \delta}+R_{\beta \delta}\eta_{\alpha \gamma})\nonumber \\
& +\frac{1}{(n-1)(n-2)}R(\eta_{\alpha \gamma}\eta_{\beta \delta}-\eta_{\alpha \delta}\eta_{\beta \gamma})
\end{align}
Eq.(8) is constructed by using the trace-free property of Weyl tensor
\begin{equation}
\eta^{\alpha \gamma}C_{\alpha \beta \gamma \delta}=C_{\beta \alpha \delta}^{\alpha
}=0
\end{equation}
By contracting Eq.(8) with itself, we get
\begin{align}
C_{\alpha \beta \gamma \delta}C^{\alpha \beta \gamma \delta} & =R_{\alpha \beta \gamma \delta}R^{\alpha \beta \gamma \delta}-\frac{4}{(n-2)}R_{\beta \delta}R^{\beta \delta}\nonumber \\
& +\frac{2}{(n-2)(n-1)}R^{2}
\end{align}
In four dimensions ($n=4$), Eq.(10) reduces to
\begin{equation}
C^{2}\equiv C_{\alpha \beta \gamma \delta}C^{\alpha \beta \gamma \delta}=R_{\alpha \beta \gamma \delta}R^{\alpha \beta \gamma \delta}-2R_{\beta \delta}R^{\beta \delta}+\frac{1}{3}R^{2}
\end{equation}
Thus, Eq.(7) becomes
\begin{equation}
I_{W}=-\alpha_{s}\int d^{4}x\sqrt{-g}(R_{\alpha \beta \gamma \delta}R^{\alpha \beta \gamma \delta}-2R_{\beta \delta}R^{\beta \delta}+\frac{1}{3}R^{2})
\end{equation}
\subsection{Gauss-Bonnet Invariant Theorem}
For space-time manifold topologically equivalent to flat space, the
Gauss-Bonnet theorem relates the various quadratic terms in the curvature as
\cite{IANC}
\begin{equation}
I_{GB}=-\alpha_{s}\int d^{4}x\sqrt{-g}(R_{\alpha \beta \gamma \delta}R^{\alpha \beta \gamma \delta}-4R_{\alpha \beta}R^{\alpha \beta}+R^{2})=0
\end{equation}
Using this property, we can rewrite Eq.(12) as
\begin{equation}
I_{W}\longrightarrow I_{WGB}=I_{W}-I_{GB}=I_{W}
\end{equation}
\begin{equation}
I_{W}=-2\alpha_{s}\int d^{4}x\sqrt{-g}\left[ R_{\beta \delta}R^{\beta \delta
}-\frac{1}{3}(R_{\gamma}^{\gamma})^{2}\right]
\end{equation}
where $R_{\beta \delta}$ is the Ricci tensor, which is a symmetric tensor due
to the Bianchi identities of the first kind, and its trace defines the scalar
curvature $R_{\gamma}^{\gamma}=R$ (\cite{SW-2}, P.153). By using Eqs.(7) and
(15), we have
\begin{equation}
\int d^{4}x\sqrt{-g}\left( R_{\beta \delta}R^{\beta \delta}-\frac{1}{3}R^{2}\right) =\frac{1}{2}\int d^{4}x\sqrt{-g}C_{\alpha \beta \gamma \delta}C^{\alpha \beta \gamma \delta}
\end{equation}
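The cancellation behind Eqs.(12)-(15) is a simple bookkeeping exercise in the coefficients of the three curvature invariants, and can be checked mechanically (the dictionary keys below stand only for those scalar coefficients):

```python
from fractions import Fraction

# Bookkeeping check for Eqs.(12)-(15): subtracting the Gauss-Bonnet integrand
# (Riem^2 - 4 Ric^2 + R^2) of Eq.(13) from the Weyl-squared integrand
# (Riem^2 - 2 Ric^2 + R^2/3) of Eq.(12) cancels the Riemann-squared term and
# leaves 2*(Ric^2 - R^2/3), the integrand of Eq.(15).
weyl_sq = {"Riem2": Fraction(1), "Ric2": Fraction(-2), "R2": Fraction(1, 3)}
gauss_bonnet = {"Riem2": Fraction(1), "Ric2": Fraction(-4), "R2": Fraction(1)}

diff = {term: weyl_sq[term] - gauss_bonnet[term] for term in weyl_sq}
print(diff)
```

With the overall factor $-\alpha_{s}$ restored, the difference is exactly the integrand of Eq.(15).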
Eq.(15) leads to the field equations \cite{TOGO9}
\begin{equation}
\sqrt{-g}g_{\mu \alpha}g_{\nu \beta}\frac{\delta I_{W}}{\delta g_{\alpha \beta}}=-\frac{1}{2}T_{\mu \nu}
\end{equation}
Eq.(17) would be of fourth-order in the form (\cite{ASCS}, P.323)
\begin{align}
& \frac{1}{2}g_{\mu \nu}(R_{\gamma}^{\gamma})_{;\delta}^{;\delta}+R_{\mu \nu}{}_{;\delta}^{;\delta}-R_{\mu;\nu;\delta}^{\delta}-R_{\nu;\mu;\delta}^{\delta}-2R_{\mu \delta}R_{\nu}^{\delta}+\frac{1}{2}g_{\mu \nu}R_{\gamma \delta}R^{\gamma \delta}\nonumber \\
& -\frac{1}{3}[2g_{\mu \nu}(R_{\gamma}^{\gamma})_{;\delta}^{;\delta}-2(R_{\gamma}^{\gamma})_{;\mu;\nu}-2R_{\gamma}^{\gamma}R_{\mu \nu}+\frac{1}{2}g_{\mu \nu}(R_{\gamma}^{\gamma})^{2}]\nonumber \\
& =\frac{1}{4\alpha_{s}}T_{\mu \nu}
\end{align}
The corresponding fourth-order Poisson equation and its linearized solution
are given as (\cite{ASCS}, P.323 \& 325)
\begin{align}
\delta_{s}\nabla^{4}V & =km_{0}\delta^{3}(r)\nonumber \\
V(r) & =\alpha r
\end{align}
It is clear from Eq.(18) that its left-hand side vanishes whenever $R_{\mu \nu
}$ is zero (the vanishing of a tensor is an invariant statement (\cite{SW-2},
P.146)), so that any vacuum solution of Einstein equations would also satisfy
the ones from the quadratic action. A complete exact solution of the field
Eq.(18) (with metric signature $+---$) for a general spherical symmetric
vacuum metric is given as (\cite{ASCS}, P.323-324)
\begin{equation}
ds^{2}=\alpha dt^{2}-\beta dr^{2}-r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}
\end{equation}
where
\begin{equation}
\alpha=1-\frac{\lambda_{1}}{r}-\lambda_{2}r-\lambda_{3}r^{2}
\end{equation}
\begin{equation}
\beta=\left[ \alpha \right] ^{-1}
\end{equation}
$\lambda_{1},$ $\lambda_{2},$ and $\lambda_{3}$ in Eq.(21) are suitable
constants, related to the coupling constant. Dimensional analysis and the
natural unit formalism then tell us that the coupling constant ($\alpha$)
remains dimensionless provided that $\lambda_{1}$ carries the dimension of
distance ([L], $GeV^{-1}$), $\lambda_{2}$ the dimension of mass ([M], $GeV$),
and $\lambda_{3}$ the dimension of squared mass ([M]$^{2}$, $GeV^{2}$). If we
take the mass to be the mass of the quark ($m_{q}$), then we can rewrite
Eq.(21) as
\begin{equation}
\alpha_{s}=1-\frac{\lambda_{1}}{r}-m_{q}r-m_{q}^{2}r^{2}
\end{equation}
For the pure Yang-Mills theory (i.e. QCD without quarks), $m_{q}\rightarrow0$
and Eq.(23) reduces to
\begin{equation}
\alpha_{s}=1-\frac{\lambda_{1}}{r}
\end{equation}
Based on the strong gravity theory and the formalism of the vacuum solution of
Einstein field equations \cite{CSI,SW-2,CWKJ}, $\lambda_{1}=G_{f}\,m$. With
this value, Eq.(24) reduces to
\begin{equation}
\alpha_{s}=g_{00}=1-\frac{G_{f}\text{ }m}{r}
\end{equation}
and Eq.(20) becomes
\begin{align}
ds^{2} & =\left( 1-\frac{G_{f}\text{ }m}{r}\right) dt^{2}-\left(
1-\frac{G_{f}\text{ }m}{r}\right) ^{-1}dr^{2}\nonumber \\
& -r^{2}d\theta^{2}-r^{2}\sin^{2}\theta d\phi^{2}
\end{align}
where the mass $m$ is the only allowed mass in the theory, and is due to the
self-interaction of the two gluons (glueball). Eq.(26) is the celebrated
Schwarzschild vacuum metric, except that instead of the normal Newtonian
gravitational constant ($G_{N}\approx10^{-19}GeV^{-1}$), we have
strong-gravitational constant ($G_{f}\approx1GeV^{-1}$).
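To put the two constants in context, one may convert them to ordinary lengths. The sketch below is pure unit bookkeeping with the CODATA value of $\hbar c$, applied to the natural-unit values quoted above:

```python
# Convert the two gravitational length scales quoted in the text
# (G_f ~ 1 GeV^-1 and G_N ~ 1e-19 GeV^-1) into metres via hbar*c.
HBARC_GEV_FM = 0.1973269804  # hbar*c in GeV*fm (CODATA)
FM_TO_M = 1e-15

def gev_inverse_to_m(x_gev_inv):
    """Length in metres corresponding to x GeV^-1 in natural units."""
    return x_gev_inv * HBARC_GEV_FM * FM_TO_M

r_strong = gev_inverse_to_m(1.0)    # strong-gravity scale: ~0.2 fm (hadronic)
r_newton = gev_inverse_to_m(1e-19)  # Newtonian scale: Planck-length order

print(f"G_f ~ 1 GeV^-1     -> {r_strong:.3e} m")
print(f"G_N ~ 1e-19 GeV^-1 -> {r_newton:.3e} m")
```

The first length is of the order of a hadron radius, the second of the order of the Planck length, consistent with the roles the text assigns to $G_{f}$ and $G_{N}$.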
\subsection{Broken Scale Invariance and Perturbative/Short Distance Behavior}
Once we have $\Lambda_{QCD}\equiv G_{f}^{-1}\approx1GeV$, the scale invariance
would be broken. An additional Einstein-Hilbert term linear in the curvature
would be induced, but the full action would still preserve its general
coordinate invariance (\cite{ASCS},P.324)
\begin{equation}
I_{eff}=-\int d^{4}x\sqrt{-g}\left( \alpha_{1}R_{\mu \nu}R^{\mu \nu}-\alpha
_{2}R^{2}+k^{-2}\alpha_{3}R\right)
\end{equation}
Here the induced Einstein-Hilbert term incorporates the phenomenological term
$1/k^{2}=\frac{1}{32\pi G_{N}}$ (\cite{KSS}, P.954 \& 967): this term is
called the graviton propagator/"pure Yang-Mills" propagator. By comparing
Eq.(27) with Eq.(15), we have
\begin{align}
\alpha_{1} & =\alpha_{3}=2\nonumber \\
\alpha_{2} & =\frac{2}{3}
\end{align}
Using natural units formalism, we can write
\begin{equation}
k^{-2}=\frac{1}{32\pi G_{N}}\approx1\times10^{17}GeV
\end{equation}
where $G_{N}\approx10^{-19}GeV^{-1}$ (in natural units) (\cite{ASJ}, P. 2668).
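The quoted order of magnitude is easy to verify directly from the text's natural-unit value of $G_{N}$:

```python
import math

# Numerical check of Eq.(29): with the natural-unit value quoted in the text,
# G_N ~ 1e-19 GeV^-1 (not the SI Newton constant), k^-2 = 1/(32 pi G_N)
# indeed comes out at roughly 1e17 GeV.
G_N = 1e-19  # GeV^-1, as quoted in the text
k_inv_sq = 1.0 / (32.0 * math.pi * G_N)
print(f"k^-2 = 1/(32 pi G_N) ~ {k_inv_sq:.3e} GeV")
```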
Eq.(27) gives rise to a mixture of fourth-order and second-order field
equations (\cite{ASCS}, P.324), whose solutions for the field of a
\textbf{localized mass} involve Yukawa and the normal $1/r$ potential terms
\begin{equation}
\alpha \nabla^{4}V+\beta \nabla^{2}V\approx km_{0}\delta^{3}(r)
\end{equation}
The corresponding solution of the Eq.(30) for a \textbf{point mass source} is
given as (\cite{VDS}, P. 3)
\begin{equation}
V(r)=\frac{C_{1}}{r}-\frac{C_{2}}{r}e^{-\beta_{1}/r}+\frac{C_{3}}{r}e^{-\beta_{2}/r}
\end{equation}
where $C_{1}=k^{2}M/8\pi \alpha_{3}$, $C_{2}=k^{2}M/6\pi \alpha_{3}$,
$C_{3}=k^{2}M/42\pi \alpha_{3}$,
$\beta_{1}=\left[ \alpha_{3}^{1/2}(\alpha_{1}k^{2})^{-1/2}\right] \times G_{f}^{3/2}$, and
$\beta_{2}=\alpha_{3}^{1/2}\left[ 2\left( 3\alpha_{2}-\alpha_{1}\right) k^{2}\right] ^{-1/2}\times G_{f}^{3/2}$.
$M$ is an unknown invariant mass (but we identify it with the invariant mass
of the final hadronic state of the theory, $M\equiv m$, because the final
observable particle state must be a color singlet).
By using Eqs.(28) and (29), $3\alpha_{2}-\alpha_{1}=0$, so $\beta_{2}=\infty$ and thus Eq.(31) reduces to
\begin{equation}
V(r)=\frac{C_{1}}{r}-\frac{C_{2}}{r}e^{-\beta_{1}/r}
\end{equation}
\begin{align}
C_{1} & =\frac{k^{2}m}{16\pi}\nonumber \\
C_{2} & =\frac{k^{2}m}{12\pi}\nonumber \\
\beta_{1} & =k^{-1}G_{f}^{3/2}
\end{align}
As expected, the resulting infinity $\beta_{2}=\infty$ is tamed by the
nonlinear nature of the Weyl's action.
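That $\beta_{2}$ indeed diverges is an exact consequence of the coefficients of Eq.(28); a one-line rational-arithmetic check:

```python
from fractions import Fraction

# With alpha_1 = alpha_3 = 2 and alpha_2 = 2/3 from Eq.(28), the combination
# 3*alpha_2 - alpha_1 appearing under the inverse square root in beta_2 of
# Eq.(31) vanishes exactly, so beta_2 -> infinity and the C_3 term drops out,
# leaving Eq.(32).
alpha_1, alpha_2, alpha_3 = Fraction(2), Fraction(2, 3), Fraction(2)
combo = 3 * alpha_2 - alpha_1
print("3*alpha_2 - alpha_1 =", combo)
```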
From Eqs.(32) and (33), we have
\begin{equation}
V(r)=\frac{k^{2}\text{ }m}{16\pi r}\left( 1-\frac{4}{3}e^{-\beta_{1}/r}\right)
\end{equation}
Eq.(32) is the exact equation obtained for the broken scale invariance and
perturbative behavior of strong gravity in (\cite{ASCS}, P.325).
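The qualitative behavior of Eq.(34), taken exactly as written (with exponent $-\beta_{1}/r$), can be illustrated numerically; the parameter values below are placeholders, not physical inputs:

```python
import math

# Behaviour of the broken-scale-invariance potential of Eq.(34),
# V(r) = (k^2 m / 16 pi r)(1 - (4/3) exp(-beta_1/r)), taken exactly as
# written in the text. k2m = 1 and beta_1 = 1 are illustrative placeholders.
def V(r, k2m=1.0, beta_1=1.0):
    return (k2m / (16.0 * math.pi * r)) * (1.0 - (4.0 / 3.0) * math.exp(-beta_1 / r))

# r << beta_1: the exponential factor dies off and V ~ +C_1/r;
# r >> beta_1: the bracket tends to 1 - 4/3 = -1/3, so V turns negative.
print(V(0.01), V(100.0))
```

With this form, the pure $C_{1}/r$ term dominates at short distances, while the $C_{2}$ term flips the sign of the potential at large distances.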
\subsection{Double-copy Construction in Strong Gravity}
From Eq.(34), we can write
\begin{equation}
V(r)=\frac{k_{\alpha}^{2}}{16\pi r}C
\end{equation}
where the dimensionless gravity coupling $k_{\alpha}^{2}\equiv k^{2}\,m=32\pi
G_{N}\times m$ and $C\equiv1-\frac{4}{3}e^{-\beta_{1}/r}$ is the
"group-theoretic constant" of strong gravity theory.
It is to be recalled that the interaction energy, to the leading order, of two
static (i.e., symmetric) color sources of QCD without quarks (pure Yang-Mills
theory) is given by \cite{TOGO10,TOGO11,TOGO12,TOGO13}
\begin{equation}
E(r)=\frac{g_{\alpha}^{2}}{4\pi r}C
\end{equation}
where the dimensionless gauge coupling $g_{\alpha}^{2}\equiv g^{2}(r)\times
m_{rg}$, and $m_{rg}$ is an arbitrary renormalization group scale formally
invoked, in quantum field theory, to keep the scale-dependent gauge coupling
($g^{2}(r)$) dimensionless. Since Eq.(35) is also the energy of two
interacting gluons, we can write (from Eqs.(35) and (36))
\begin{equation}
V(r)=E(r)\iff k_{\alpha}=2g_{\alpha}
\end{equation}
Eq.(37) is the required BCJ property. We have therefore proved the existence
of double-copy construction in strong gravity. It is remarkable to note that
despite different approaches taken by supergravity (scattering amplitude
method) and strong gravity (effective potential method), we still arrive at
the same conclusion (see Eqs. (3) and (37)).
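The coupling doubling of Eq.(37) is a pure coefficient statement, and can be checked mechanically:

```python
from fractions import Fraction

# Coefficient check for Eq.(37): equating V(r) = (k_a^2/16 pi r) C from
# Eq.(35) with E(r) = (g_a^2/4 pi r) C from Eq.(36) gives
# (k_a/g_a)^2 = (1/4)/(1/16) = 4, i.e. k_a = 2 g_a, the same coupling
# doubling as the BCJ relation of Eq.(3).
ratio_sq = Fraction(1, 4) / Fraction(1, 16)  # (k_a/g_a)^2
print("(k_a/g_a)^2 =", ratio_sq)
```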
\section{ QCD Evolution}
The body of experimental data describing the strong interaction between
nucleons (which is the non-perturbative aspect of QCD for $r\longrightarrow
\infty$) is consistent with a strong coupling constant behaving as $\alpha
_{s}\approx1$ \cite{TOGO14}: obviously this aspect of QCD is consistent with
Eq.(25) for $r\longrightarrow \infty$.
One of the discoveries about the \textbf{strong force} is that it diminishes
inside the nucleons, which leads to the free movement of gluons and quarks
within the hadrons. The implication for the strong coupling is that it drops
off at very small distances. This phenomenon is called "asymptotic freedom" or
\textbf{perturbative aspect of QCD}, because gluons and massless quarks
approach a state where they can move without resistance in the tiny volume of
the hadron \cite{TOGO15}. Hence for the strong gravity to describe the
perturbative aspect of QCD correctly, it must reproduce the value of strong
coupling constant $\alpha_{s}$ (\emph{by using the observed properties of
gluons: the mediators of strong force}) that is compatible with the
experimental data. This is what we set out to do in this section.
\subsection{Gluon Density}
The first thing to note here is that the gluon, being a bosonic particle, obeys
Bose-Einstein statistics. The Fermi-Dirac and Bose-Einstein distribution
functions are given as (\cite{PCR}, P. 115)
\begin{equation}
\aleph_{r}=\frac{g_{r}}{e^{\sigma_{1}+\sigma_{2}\in_{r}}\pm1}
\end{equation}
where the positive sign applies to fermions and the negative to bosons.
$\aleph_{r}$ is the number of particles in the single-particle states, $g_{r}$
is the degeneracy parameter, and $\sigma_{1}$ is the coefficient of expansion of
a gas of \textbf{weakly coupled particles} (\textbf{an ideal configuration
for describing the asymptotic freedom/perturbative regime of QCD}) inside the
volume $V$. $\sigma_{2}$ is the Lagrange undetermined multiplier and $\in_{r}$
is the energy of the $r$-$th$ state. The value of $\sigma_{1}$ for a boson gas
at a given temperature is determined by the normalization condition
(\cite{PCR}, P. 112 and 115)
\begin{equation}
N=\underset{r}{\displaystyle \sum}\frac{g_{r}}{e^{\sigma_{1}+\sigma_{2}\in_{r}}-1}
\end{equation}
The summation sign in Eq.(39) can be converted into an integral because, for a
particle in a box, the states of the system are very closely spaced.
Using the density of single-particle states function, Eq.(39) reduces to
\begin{equation}
N=\underset{0}{\overset{\infty}{\int}}\frac{D(\in)d\in}{e^{\sigma_{1}+\sigma_{2}\in}-1}
\end{equation}
where $D(\in)d\in$ is the number of allowed states in the energy range $\in$
to $\in+d\in$ and $\in$ is the energy of the single-particle state. Using the
density of states as a function of energy, we have (\cite{PCR}, P. 290)
\[
D(\in)d\in=\frac{4\pi V}{h^{3}}2m\in \left( \frac{m}{p}\right) d\in
\]
with
\[
p=\sqrt{2m\in}
\]
so that
\begin{equation}
D(\in)d\in=2\pi V\left( \frac{2m}{h^{2}}\right) ^{3/2}\in^{1/2}d\in
\end{equation}
where $p$ is the momentum of particle, $m$ its mass and $h$ is the Planck
constant. By putting Eq.(41) into Eq.(40), we have
\begin{equation}
N=2\pi V\left( \frac{2m}{h^{2}}\right) ^{3/2}\underset{0}{\overset{\infty}{\int}}\frac{\in^{1/2}d\in}{e^{\sigma_{1}+\sigma_{2}\in}-1}
\end{equation}
but $\sigma_{1}=\sigma_{2}\times \mu_{eff}$ and $\sigma_{2}=1/kT.$ $\mu_{eff}$
is the effective potential, $k$ is the Boltzmann constant and $T$ denotes
temperature (\cite{PCR}, P.116). Since there is no restriction on the total
number of bosons (gluons), the effective potential is always equal to zero
($\mu_{eff}=0$) (this is true for the case where the minimum of the effective
potential continuously goes to zero as the temperature grows \cite{CSB}). Thus,
Eq.(42) reduces to
\begin{equation}
N=2\pi V\left( \frac{2m}{h^{2}}\right) ^{3/2}\underset{0}{\overset{\infty}{\int}}\frac{\in^{1/2}d\in}{e^{\in/kT}-1}
\end{equation}
By using the standard integral (where $\varsigma(z)$ is the \textbf{Riemann
zeta function} and $\Gamma(z)$ is the \textbf{gamma function})
\begin{equation}
\underset{0}{\overset{\infty}{\int}}\frac{x^{z-1}dx}{e^{x}-1}=\varsigma
(z)\Gamma(z)
\end{equation}
Eq.(43) becomes
\begin{equation}
N=2.61V\left( \frac{2\pi mkT}{h^{2}}\right) ^{3/2}
\end{equation}
Using $m=E/c^{2}$ and the average kinetic energy of boson gas in
three-dimensional space $E=3kT/2,$ Eq. (45) reduces to
\begin{equation}
\frac{N}{V}=\left[ \frac{(2.61)(3\pi)^{3/2}k^{3}}{(hc)^{3}}\right] T^{3}
\end{equation}
Define $n_{g}\equiv \frac{N}{V}$ and $\Xi \equiv \left[ \frac{(2.61)(3\pi
)^{3/2}k^{3}}{(hc)^{3}}\right] =2.522\times10^{7}(mK)^{-3}.$ Hence the gluon
density ($n_{g}$) can be expressed as
\begin{equation}
n_{g}=\Xi T^{3}
\end{equation}
Eq.(47) is the required result for the finite temperature and density relation
for gluon.
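As an independent numerical cross-check (outside the derivation itself), the prefactor $\Xi=2.61(3\pi)^{3/2}k^{3}/(hc)^{3}$ of Eq.(47) can be evaluated with CODATA SI values of the constants; the small residual difference from the quoted $2.522\times10^{7}\,(mK)^{-3}$ is attributable to rounding of the inputs.

```python
import math

# CODATA SI values
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s

# Xi = 2.61 * (3*pi)^{3/2} * k^3 / (h c)^3, the prefactor of Eq.(47)
xi = 2.61 * (3 * math.pi) ** 1.5 * k ** 3 / (h * c) ** 3  # units: m^-3 K^-3

print(f"Xi = {xi:.3e} (m K)^-3")
```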
\subsection{Strong-gravity Coupling Constant}
The principle of general covariance tells us that the energy-momentum tensor
in the vacuum (with zero matter and radiation) must take the form
\begin{equation}
T_{00}=K\langle \rho \rangle
\end{equation}
Here $\langle \rho \rangle$ has the dimension of energy density and $K$
describes a real (strong-) gravitational field \cite{SER}. Hence Eq.(48)
reduces to
\begin{equation}
T_{00}=K(E_{vac})^{4}
\end{equation}
and $K=g_{00}=C_{QCD}\times C_{grav}$ (the strong-gravity coupling). $C_{QCD}$ is a
dimensionless coefficient which is entirely of QCD origin and is related to
the definition of QCD on a specific finite compact manifold. Similarly,
$C_{grav}$ is a dimensionless coefficient which is entirely of gravitational
origin \cite{SER,LZW,FRU,SWEN}. Therefore Eq.(49) becomes
\begin{equation}
T_{00}=g_{00}(E_{vac})^{4}
\end{equation}
Recall that the energy density ($\rho_{vac}$) can also be written as
\begin{equation}
\rho_{vac}=\frac{E_{vac}}{V}=V^{-1}\times E_{vac}
\end{equation}
Eq.(51) is justified by the standard box-quantization procedure \cite{SER}.
Hence we have
\begin{equation}
\rho_{vac}=n_{g}\times E_{vac}
\end{equation}
where $n_{g}\equiv V^{-1}$ (number density)$.$
From the average kinetic energy for a gas in three-dimensional space, we have
$T=2E_{vac}/3k.$ With this value, Eq.(47) reduces to
\begin{equation}
n_{g}=\frac{8\Xi(E_{vac})^{3}}{27k^{3}}
\end{equation}
Thus Eq.(52) becomes
\begin{equation}
\rho_{vac}=\frac{8\Xi(E_{vac})^{4}}{27k^{3}}
\end{equation}
Eq.(54) is the energy density of a single gluon. But based on the double-copy
construction (see section II, Eqs.(3) and (37)), Eq.(54) is multiplied by
2, and thus
\begin{equation}
2\rho_{vac}=\frac{16\Xi(E_{vac})^{4}}{27k^{3}}
\end{equation}
Eq.(55) now represents the two-point correlator-vacuum energy density. By
comparing Eq.(50) with Eq.(55), we have
\[
\alpha_{s}=g_{00}=\frac{16\Xi}{27k^{3}}=2.336\times10^{19}\,(m\cdot eV)^{-3}
\]
As $1m=5.070\times10^{15}GeV^{-1}$, the above equation leads to
\begin{equation}
\alpha_{s}=g_{00}=C_{QCD}\times C_{grav.}=0.1797
\end{equation}
Eq.(56) is the required strong (-gravity) coupling constant at the starting
point of QCD evolution. In the next section, we show the compatibility of
Eq.(56) with the perturbative QCD, which is the theory that describes
asymptotic freedom regime analytically.
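As a quick numerical sketch of the conversion behind Eq.(56) (not part of the manuscript's derivation), one can take the quoted $\Xi$ and the length conversion $1m=5.070\times10^{15}GeV^{-1}$ and check that $16\Xi/27k^{3}$ indeed lands near the quoted dimensionless value; last-digit rounding differences are expected.

```python
k_eV = 8.617333262e-5   # Boltzmann constant, eV/K
xi = 2.522e7            # prefactor of Eq.(47), (m K)^-3

g00 = 16 * xi / (27 * k_eV ** 3)    # units: (m eV)^-3
# length conversion: 1 m = 5.070e15 GeV^-1 = 5.070e6 eV^-1
m_in_inv_eV = 5.070e15 * 1e-9
alpha_s = g00 / m_in_inv_eV ** 3    # dimensionless

print(f"g00 = {g00:.3e} (m eV)^-3, alpha_s = {alpha_s:.4f}")
```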
\section{Perturbative Quantum Chromodynamics}
Computations in perturbative QCD are formally based on three conditions:
\textbf{(1)} that hadronic interactions become weak at small invariant
separation $r\ll \Lambda_{QCD}^{-1}$; \textbf{(2)} that the perturbative
expansion in $\alpha_{s}(Q_{0}^{2})$ is well-defined mathematically;
\textbf{(3)} factorization dictates that all effects of collinear
singularities, confinement, non-perturbative interactions, and the dynamics of
bound states can be separated consistently at large momentum transfer in terms
of (process independent) structure functions $G_{i/H}(x,Q),$ hadronization
functions $D_{H/i}(z,Q),$ or in the case of exclusive processes, distribution
amplitudes $\phi_{H}(x_{i},Q)$ \cite{AHM,GPL}. The asymptotic freedom property
of perturbative QCD ($\beta_{0}=11-(2/3)n_{f}$) is given as (\cite{ASO}, P. 1)
\begin{equation}
\alpha_{s}(Q_{0}^{2})=\frac{4\pi}{\beta_{0}\ln(\frac{Q_{0}^{2}}{\Lambda^{2}})}<0.2\text{ for }Q_{0}^{2}>20GeV^{2}
\end{equation}
In the framework of perturbative QCD, computations of observables are
expressed in terms of the renormalized coupling $\alpha_{s}(\mu_{R}^{2})$.
When one takes $\mu_{R}$ close to the scale of the momentum transfer $Q_{0}$
in a given process, then $\alpha_{s}(\mu_{R}^{2}\sim Q_{0}^{2})$ is indicative
of the effective strength of the strong interaction in that process. Eq.(57)
satisfies the following renormalization group equation (RGE) \cite{GDISS}
\begin{equation}
\mu_{R}^{2}\frac{d\alpha_{s}}{d\mu_{R}^{2}}=\beta(\alpha_{s})=-(b_{0}\alpha_{s}^{2}+b_{1}\alpha_{s}^{3}+b_{2}\alpha_{s}^{4}+O(\alpha_{s}^{5}))
\end{equation}
with
\begin{equation}
b_{0}=(33-2n_{f})/12\pi
\end{equation}
\begin{equation}
b_{1}=(153-19n_{f})/24\pi^{2}
\end{equation}
\begin{equation}
b_{2}=(2857-\frac{5033}{9}n_{f}+\frac{325}{27}n_{f}^{2})/128\pi^{3}
\end{equation}
where Eqs.(59-61) are referred to as the 1-loop, 2-loop and 3-loop
beta-function coefficients respectively. The minus sign in Eq.(58) is the
origin of asymptotic freedom, i.e., the fact that the strong coupling becomes
weak for hard processes. Eq.(58) shows that \emph{RGE} is dependent on the
correct value of a purely dimensionless strong coupling constant ( $\alpha
_{s}$). Thus the precise calculation of its value (without appealing to the
choice of renormalization scheme and scale choice $Q_{0}^{2}$) would be the
holy grail of perturbative QCD.
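The one-loop form of the running coupling in Eq.(57) can be sketched numerically as follows; the values $\Lambda=0.1\,GeV$ and $n_{f}=5$ are illustrative assumptions on our part (the extracted value of $\Lambda$ depends on scheme and order), chosen only to exhibit the asymptotic-freedom trend.

```python
import math

def alpha_s_one_loop(Q2_GeV2, Lambda_GeV=0.1, nf=5):
    """One-loop running coupling: alpha_s(Q^2) = 4*pi / (beta0 * ln(Q^2/Lambda^2))."""
    beta0 = 11.0 - (2.0 / 3.0) * nf
    return 4.0 * math.pi / (beta0 * math.log(Q2_GeV2 / Lambda_GeV ** 2))

a90 = alpha_s_one_loop(90.0)      # at Q0^2 = 90 GeV^2
a1000 = alpha_s_one_loop(1000.0)  # harder scale: coupling must shrink
print(a90, a1000)
```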
\subsection{Experimental Test}
We begin by reviewing the systematic study of QCD coupling constant from deep
inelastic measurements in (\cite{VGKA} and the references therein), where many
experimental data were collected and analyzed at the next-to-leading order of
perturbative QCD (see Tables 2,3 and 6 of \cite{VGKA}) by using deep inelastic
scattering ($DIS$) structure functions $F_{2}(x,Q^{2})$. In these experimental
results, we are more interested in the $\alpha_{s}(90GeV^{2})=0.1797$ (in the
Table 6 of \cite{VGKA}) obtained when the number of points is 613. This is the
exact value we obtained theoretically in Eq.(56). Hence, we have not only
demonstrated that the perturbative expansion for hard scattering amplitudes
converges perturbatively at $\alpha_{s}=\alpha_{s}(90GeV^{2})=0.1797$ but have
also been able to prove that QCD is a strong-gravity-derived theory:
\textbf{an astonishing discovery!} We have also validated the asymptotic freedom
property of perturbative QCD given in Eq.(57): namely, that the starting point
of QCD evolution is $Q_{0}^{2}=90GeV^{2}$ for $\alpha_{s}=0.1797<0.2.$
Having tested Eq.(56) experimentally, we therefore proceed to rewrite the
renormalization group equation (Eq.(58)) as
\begin{equation}
\beta(\alpha_{s})=-\left[ b_{0}(0.1797)^{2}+b_{1}(0.1797)^{3}+b_{2}(0.1797)^{4}+...\right]
\end{equation}
Eq.(62) is an echo of the "composition independence or universality property" of
the coupling $\alpha_{s}$ to all orders in the perturbative expansion for hard
scattering amplitudes.
\section{Strong Gravity as a Massive Spin-two Theory}
In the Einstein's GR, the \emph{Schwarzschild vacuum} is the solution to the
Einstein field equations that describes the gravitational field generated by a
\textbf{spherically symmetric mass }$m$, on the assumption that
the \textbf{electric charge} and \textbf{orbital angular momentum} ($L$) of
the \textbf{mass are all zero} \cite{CWKJ}.
It turns out that the Schwarzschild vacuum solution of the Einstein field
equations can be understood in terms of the Pauli-Fierz relativistic wave
equations for massive spin-2 particles which would mediate a short-range
tensor force (\cite{CSI}, P. 117). It follows that the two interacting gluon
fields ($G_{\mu}^{a}$ and $G_{\nu}^{b})$ are considered to be \textbf{dressed
gluon fields of the gravitational field}, i.e., the colors of the gluon fields
are covered or hidden within the spacetime base-manifold ($\eta_{ab}$) of the
color-$SU(3)$ principal bundle (\cite{DJS}, P. 572), thereby making the
observable asymptotic states of gravity color-singlet/color-neutral.
Hence the resulting glueball (massive particle formed as a result of the
self-interaction of two gluons) of the theory (with spherically symmetric mass
$m$ and quantum numbers $J^{PC}=2^{-+}$) would still have the total angular
momentum of $2$. The validity of this statement is proved by using the
well-known Pauli-Fierz relativistic wave equations for massive particles of
spin-2 (\cite{CSI}, P. 124)
\begin{equation}
\square \phi_{\mu \nu}+m^{2}\phi_{\mu \nu}=0
\end{equation}
\begin{equation}
\partial_{\mu}\phi^{\mu \nu}=0\text{ (coordinate gauge condition)}
\end{equation}
\begin{equation}
\phi_{\mu}^{\mu}=0\text{ (conformal gauge condition)}
\end{equation}
\begin{equation}
\phi_{\mu \nu}=\phi_{\nu \mu}\text{ (symmetric condition)}
\end{equation}
For the symmetric condition (Eq.(66)), the coordinate gauge condition given in
Eq.(64) eliminates four of the ten components of the wave function
$\phi_{\mu \nu}$ of Eq.(63), and the condition given in Eq.(65) eliminates
one more, leaving 5 degrees of freedom:
\begin{equation}
2S+1=D=5\Longrightarrow S=2
\end{equation}
As a result of the Eq.(67), the following is true: \emph{strong gravity, as a
massive spin-2 theory, has five degrees of freedom (}$D=5$\emph{).}
Recall that the parity ($P$) and charge ($C$) quantum numbers can be expressed by
\begin{equation}
P=(-1)^{J+1}
\end{equation}
\begin{equation}
C=(-1)^{J}
\end{equation}
and
\begin{equation}
J=L+S
\end{equation}
where $J$ is the total angular momentum, $L$ is the orbital angular momentum
and $S$ is the spin.
Thus, for the Schwarzschild vacuum solution (i.e., $L=0$), we have
\begin{equation}
J^{PC}=2^{-+}
\end{equation}
\textbf{Requiring instead that }$\phi_{\mu \nu}\neq \phi_{\nu \mu}$
(\textbf{antisymmetric condition})\textbf{, we would have obtained
}$2S+1=D=1\Longrightarrow S=0$\textbf{ and }$J^{PC}=0^{-+}$, \textbf{which is
a pseudoscalar state. An important consequence of this discovery is that the
underlying dynamics of the strong gravity theory is fully symmetric (i.e.,
}$\phi_{\mu \nu}=\phi_{\nu \mu}\Longrightarrow S=2$\textbf{), but its
ground/vacuum state is asymmetric (i.e., }$\phi_{\mu \nu}\neq \phi_{\nu \mu
}\Longrightarrow S=0$\textbf{), meaning that the vacuum state must contain
massive spin-zero particle(s), a glueball/meson with mass }$m$\textbf{: this is
a formal description of the spontaneous symmetry-breaking phenomenon.}
\subsection{Effective Lagrangian of a Massive Spin-2 Theory}
By using effective field theory (EFT) and the property of strong gravity (as a
massive spin-2 theory, $D=5$), the effective Lagrangian of the theory is
characterized by \cite{TOGO16}
\begin{equation}
L=\underset{i}{\sum}\frac{O_{i}}{M_{X}^{d_{i}-4}}
\end{equation}
where $O_{i}$ are operators constructed from the \textbf{light fields (with
light mass)}, and information on any \textbf{heavy degrees of freedom (with
heavy mass }$M_{X}$\textbf{)} is encoded in the coupling
$\frac{1}{M_{X}^{d_{i}-4}}$. For $i=1$, we have
\begin{equation}
L=\frac{O_{1}}{M_{X}^{d_{1}-4}}
\end{equation}
Using $D=d_{1}=5$ means that the operator $O_{1}$ must carry the dimension of
squared energy ($O_{1}\sim E^{2}$) for the effective Lagrangian to carry the
dimension of energy:
\begin{equation}
L=\frac{E^{2}}{M_{X}}
\end{equation}
Eq.(74) is the effective Lagrangian of the strong gravity theory. The
invariant mass/energy operator $E^{2}=p_{\mu}p^{\mu}=m^{2}$ is called a flat
space/Poincar\'{e} invariant. This is characterized by an irreducible
representation of the Poincar\'{e} group (with spin $J$), and can be used to
describe a composite field (\cite{CSI}, P. 133-137) with five intrinsic
degrees of freedom (i.e. $D=d_{1}=5$). The importance of this statement will
be made manifest in the next subsection.
\subsection{Groups of Motions in Strong Gravity Admitting Custodial and
Electroweak Symmetries}
The fundamental theorem in the theory of strong gravity (as a massive spin-2
theory) contains two statements, namely:
(1) Strong gravity is a pseudo-gravity (\cite{YNE}, P.173).
(2) Strong gravity, as a massive spin-2 field theory, has five degrees of
freedom. The first statement means that the strong gravity must have a
fundamental group $SO(n_{1},n_{2})$. The group $SO(n_{1},n_{2})$ is the
special real pseudo-orthogonal group in $n_{1}+n_{2}$ dimensions. It is a
non-compact group isomorphic to a generalized rotation group (involving
spherical (with positive curvature) and hyperbolic (with negative curvature)
rotations) in $R^{n_{1},n_{2}}$. Its maximal compact subgroup is
given as $SO(n_{1})\times SO(n_{2})$. The second statement forces us to write
$n_{1}+n_{2}=5$.
From the Eq.(5), the dressed gluon field $G_{\mu}^{a}$ can be separated into
asymptotic-flat connection ($N_{\mu}^{a}$), i.e. the \textbf{constant
curvature} (zero-mode) of the field and the normal gluon field ($A_{\mu}^{a}
$): $G_{\mu}^{a}=N_{\mu}^{a}+A_{\mu}^{a}$ (\cite{DJS}, P.572 \& \cite{YNE},
P.174). By using the de Sitter group formalism for the spacetime of constant
curvature, the non-compact groups (de Sitter groups) for strong gravity are
$SO(4,1)$ and $SO(3,2)$. The group $SO(4,1)$ is associated with the spacetime
manifold of constant positive curvature (denoted by $S(+)$), representing
spherical rotations, and $SO(3,2)$ is associated with the manifold of constant
negative curvature (denoted by $S(-)$), representing hyperbolic rotations. The
two spaces are embedded in the manifold with signature ($+--$). The maximal
compact subgroups for the two non-compact groups are (\cite{CSI}, P. 132)
\begin{equation}
SO(4)\times SO(1)\approx SO(4)\approx SU(2)\times SU(2)
\end{equation}
\begin{equation}
SO(3)\times SO(2)\approx SU(2)\times U(1)
\end{equation}
Eqs.(75) and (76) can be used to label the left-right and isospin-hypercharge
symmetries, respectively:
\begin{equation}
SU(2)_{L}\times SU(2)_{R}
\end{equation}
\begin{equation}
SU(2)_{L}\times U(1)_{Y}
\end{equation}
Eq.(77) is called custodial symmetry of the Higgs sector. This symmetry is
spontaneously broken to the diagonal/vector subgroup after the Higgs doublet
acquires a nonzero vacuum expectation value (VEV): $SU(2)_{L}\times
SU(2)_{R}\longrightarrow SU(2)_{V}$ \cite{TOGO17}. Eq.(78) is the
\textbf{electroweak gauge symmetry} of the Standard Model (SM) of particle physics.
To break the \textbf{electroweak symmetry} at the \textbf{weak scale} and give
mass to quarks and leptons, Higgs doublets (that can sit in either $5_{H}$ or
$\overline{5}_{H})$ are needed. The extra 3 states are color triplet Higgs
scalars. The couplings of these color triplets violate lepton and baryon
number, and also allow the decay of nucleons through the exchange of a single
color triplet Higgs scalar. In order not to violently disagree with the
non-observation of nucleon (e.g. proton) decay, the mass of the single color
triplet must be greater than $\sim10^{11}GeV$ \cite{TOGO18}. It is to be
remarked here that this heavy mass would not disallow the violation of lepton
and baryon number: this is the key to unlocking the mystery of neutrino mass
problem. \emph{We shall return to this a little later}.
If the composite light field (with its five independent components) in the
\textbf{subsection A} of \textbf{section VI} is taken to be the Higgs field,
transforming in five-dimensional representation (i.e. $5_{H}$), then nature
would be permanently cured of its \textbf{vacuum catastrophe disease}. In this
case the invariant mass/energy operator of the light field would now be taken
to be the VEV of the Higgs doublets (i.e. $E\equiv \upsilon=246GeV$), and the
heavy mass of color triplet Higgs scalar would be encoded in the coupling
$1/M_{X}^{d_{1}-4}=1/M_{X}$. Here $M_{X}$ is the heavy mass characteristic of
the symmetry-breaking scale of the high-energy unified theory \cite{TOGO19}.
Once the high-energy unified theory that is compatible with nature is found,
the value of $M_{X}$ will show up automatically. This is where pure Yang-Mills
propagator kicks in.
\subsection{Type-A 331 Model}
One of the beyond-SM's of particle physics is the $SU(3)_{C}\times
SU(3)_{L}\times U(1)_{X}$ \ or $331$ model, in which the three fundamental
interactions (i.e. electromagnetic, weak and strong interactions) of nature
are unified at a particular energy scale $M_{U}$. This model is formulated by
extending the electroweak sector of the SM gauge symmetry. The unification of
the three interactions occurs at the energy scale $M_{U}\approx1\times
10^{17}GeV$ in the type-A variant of this model. In this variant of the model,
the 331 symmetry is broken to reproduce the SM electroweak sector at the
energy scale of $M_{X}=1.63\times10^{16}GeV$ \cite{TOGO20}. It is apparent
from Eq.(29) that $k^{-2}=M_{U}=1\times10^{17}GeV$; this is not surprising,
because \textbf{electromagnetic}, \textbf{strong} and \textbf{weak} nuclear
interactions are all variants of the Yang-Mills interaction. Hence the type-A
331 model is compatible with nature, and $M_{X}$ (which is identified as the
mass of the single color triplet Higgs scalar) is $1.63\times10^{16}GeV$.
Thus Eq.(74) becomes
\begin{equation}
L=\frac{E^{2}}{M_{X}}=\frac{\left( 246GeV\right) ^{2}}{1.63\times10^{16}GeV}=3.7\times10^{-3}eV
\end{equation}
and the symmetry-breaking pattern is
\begin{equation}
SU(3)_{L}\times U(1)_{X}\overset{M_{X}}{\longrightarrow}SU(2)_{L}\times
U(1)_{Y}\overset{E^{2}}{\longrightarrow}U(1)_{Q}
\end{equation}
It is to be emphasized that the calculated value in Eq.(79) is purely based on
the principle of naturalness: a composite field with five independent
components, which occurs naturally out of the strong gravity formulation, is
\textbf{identified} as the Higgs field $H$ transforming in the
five-dimensional representation ($5_{H}$). \textbf{As we shall soon show,
Eq.(79) connects the solution of the dark energy problem to the neutrino mass
problem.}
The chain of symmetry-breakings in Eq.(80) has varying energy scales, but the
Lagrangian $L$ of the whole system remains invariant: \emph{the physics of the
vacuum seems to obey effective field theory rather than quantum field theory.}
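The arithmetic of Eq.(79) is easy to reproduce independently (a plain numerical check, using only the two inputs quoted above):

```python
v = 246.0e9            # Higgs VEV, eV (i.e., 246 GeV)
M_X = 1.63e16 * 1e9    # color-triplet Higgs mass, eV (1.63e16 GeV)

L_eff = v ** 2 / M_X   # Eq.(79): L = E^2 / M_X, in eV
print(f"L = {L_eff:.2e} eV")
```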
\section{Some Consequences of Strong Gravity and their Physical
Interpretations}
This section is entirely devoted to the consequences of strong gravity. In
this case, we show the hitherto unknown connection between hadronic size,
physical lattice size and gluonic radius ($r_{g}$). From this, we calculate
the second-order phase transition/critical temperature $T_{c},$ and the
fundamental hadron mass of QCD.
\subsection{Calculation of the Gluonic Radius and Second-order Phase
Transition Temperature}
The configuration at $T>T_{c}$ for the mass of the glueball for pure
$SU(3)_{C}$ is shown in \textbf{Fig.1} \cite{TOGO21}, where $2r_{g}$ is the
intergluonic invariant separation, and $S$ and $P$ represent scalar and
pseudoscalar glueball/gauge fields respectively. This figure is a perfect
representation of a 2-gluon phenomenological field. It is interesting to note
that \textbf{Fig.1} has exactly the same structure as the one-loop graviton
self-energy diagram (\cite{KSS}, P. 955). This is not a mere coincidence; it
only shows the compatibility of Eq.(5) with the tetrad formulation of GR, and
the existence of double-copy construction in all the variants of quantum
gravity theory. In what follows, we will heavily rely on the correctness of
\textbf{Fig.1} as the valid geometry for the strong gravity theory from the
point of view of 2-gluon phenomenology (double-copy construction).
We can therefore rewrite Eq.(25) for $T_{c}$ and the gluonic radius $r_{g}$ as
\begin{equation}
T_{c}=\frac{\left[ 1-\alpha_{s}\right] r_{g}}{G_{f}}
\end{equation}
By using Eq.(56), Eq.(81) becomes
\begin{equation}
T_{c}=\frac{0.8203r_{g}}{G_{f}}
\end{equation}
We now calculate the value of $r_{g}$ by using the value of the momentum
transfer at which $\alpha_{s}$ converges perturbatively (i.e.,
$Q_{0}^{2}=90GeV^{2}$): see subsection A of section V.
Recall that the energy-wavelength relation is given as
\begin{equation}
Q_{0}=\frac{hc}{\lambda}
\end{equation}
Based on the geometry of \textbf{Fig.1}, we can write its associated
wavelength as
\begin{equation}
\lambda=2\pi r_{g}
\end{equation}
Hence Eq.(83) reduces to
\[
Q_{0}=\frac{hc}{2\pi r_{g}}
\]
\begin{equation}
r_{g}=\frac{\hslash c}{Q_{0}}
\end{equation}
But $Q_{0}^{2}=90GeV^{2}\Longrightarrow Q_{0}=9.487GeV$ and
$\hslash=6.582\times10^{-16}eVs.$ Thus Eq.(85) reduces to
\begin{equation}
r_{g}=2.08\times10^{-17}m
\end{equation}
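Eq.(85) can be checked directly with the standard value $\hslash c\approx0.1973\,GeV\cdot fm$ (a numerical sketch, outside the derivation):

```python
hbar_c_GeV_fm = 0.1973269804   # hbar*c in GeV*fm
Q0 = 90.0 ** 0.5               # GeV, from Q0^2 = 90 GeV^2

r_g_fm = hbar_c_GeV_fm / Q0    # Eq.(85): r_g = hbar*c / Q0, in fm
r_g_m = r_g_fm * 1e-15         # convert fm -> m
print(f"r_g = {r_g_m:.3e} m")
```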
\begin{figure}[ht!]
\begin{center}
\includegraphics[ height=4cm, width=6cm]{Fig.1.eps}
\end{center}
\caption{Diagram for the contribution to the glueball (two-gluon) mass.}
\end{figure}
Eq.(86) is the required gluonic radius. Clearly Eq.(86) is related to the
radius of hadron ($r_{h}$) \cite{CJI,ASCS,CSI,DJS,YNE}
\begin{equation}
r_{h}=10\times r_{g}
\end{equation}
A lattice QCD simulation performed at the initial run $\beta=2.2$ on an
$L^{3}T=24^{3}\times48$ lattice gives a physical lattice size ($L_{a}$) of
$2.08\times10^{-15}m$ \cite{AAKS}. By using Eq.(86), we can write
\begin{equation}
L_{a}=10^{2}\times r_{g}
\end{equation}
Hence Eqs.(86-88) show the connection between the gluonic radius, radius of
hadron and the physical lattice size.
It is generally believed that at sufficiently high temperature/density, the
QCD vacuum undergoes a phase transition into a chirally symmetric phase. Here,
the chirally symmetric phase transition will be a second-order phase
transition \emph{iff} the conditions $T_{c}\neq0$ and $\mu_{eff}=0$ hold
simultaneously \cite{CSB}. Interestingly, we have \emph{a priori} claimed,
during the calculation of the gluon density, that $\mu_{eff}=0$: an assertion
that is justified by the fact that the glueball, a self-conjugated particle
with neutral color and zero electric charge, has a vanishing
effective/chemical potential (i.e., $\mu_{eff}=0$) (\cite{TOGO19}, P.565).
Thus the second-order chiral phase transition temperature is calculated by
using the gluonic radius (Eq.(86)), and thus Eq.(82) becomes
\begin{equation}
T_{c}=0.129GeV=129MeV
\end{equation}
where $G_{f}=10^{38}\times G_{N}=6.674\times10^{27}m^{3}kg^{-1}s^{-2}$ and
$1GeV=1.78\times10^{-27}kg.$
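The quoted value of $T_{c}$ can be reproduced numerically from Eq.(82) with the unit conversions stated above; note that we insert an explicit factor of $c^{2}$ to restore SI dimensions (our assumption about the implicit unit convention, since $r_{g}/G_{f}$ alone does not carry units of mass):

```python
alpha_s = 0.1797
r_g = 2.08e-17          # gluonic radius, m (Eq.(86))
G_f = 6.674e27          # strong-gravity constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s
kg_per_GeV = 1.78e-27   # conversion quoted in the text

# Eq.(82) with c^2 restored: T_c = (1 - alpha_s) * r_g * c^2 / G_f, then kg -> GeV
T_c_GeV = (1.0 - alpha_s) * r_g * c ** 2 / G_f / kg_per_GeV
print(f"T_c = {T_c_GeV * 1e3:.1f} MeV")
```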
Hence, the chiral second-order phase transition in the strong gravity theory
occurs when $T_{c}=129MeV$ and $\mu_{eff}=0.$ Exactly the same values were
obtained in \cite{CSB, DSE} for the second-order chiral phase transition in
the QCD vacuum. We have thus established that the strong gravity theory
exhibits a second-order chiral phase transition in the limit of vanishing
quark masses ($m_{q}\rightarrow0$). It is worth noting here that the pure
$SU(3)_{C}$ vacuum
metric (Eq.(26)), obtained in the limit $m_{q}\rightarrow0$, is compatible
with the glueball mass configuration given in the \textbf{Fig.1}, because
\textbf{Fig.1} was obtained in the limit of vanishing quark masses
\cite{TOGO21}.
\subsection{ Charmed Final Hadronic State of Strong Gravity}
Since we have shown in subsection B of section VI that the strong gravity
theory possesses an $SU(2)$ gauge field (i.e., isospin symmetry $SU(2)_{V}$),
it is pertinent to investigate the structure of the fundamental mass formula
of the theory.
In lattice QCD theory, the lattice spacing plays the role of an ultraviolet
cutoff, since distances shorter than $a$ are not accessible. In the limit of
vanishing quark masses ($m_{q}\rightarrow0$), this is the only dimensional
parameter, and therefore all dimensionful quantities, e.g., hadron and quark
masses, will have to be given in units of the lattice spacing (\cite{QHN}, P.
271)
\begin{equation}
m=\frac{1}{a}\,f(\alpha(1/a),a)
\end{equation}
It is clear from Eq.(90) that the unknown function $f$ depends on the strong
coupling and the lattice spacing. This equation is by no means different from
Eq.(25):
\begin{equation}
m=\frac{r_{h}}{G_{f}}(1-g_{00})
\end{equation}
It is evident from Eqs.(90) and (91) that $\frac{1}{a}\equiv \frac{r_{h}}{G_{f}}$
and $f(\alpha(1/a),a)\equiv(1-\alpha_{s})=(1-g_{00}).$ By using
Eqs.(56) and (87), Eq.(91) becomes
\begin{equation}
m=1.29GeV=1290MeV
\end{equation}
Eq.(92) is the fundamental, color-singlet mass scale of QCD vacuum.
The $\eta(1295)$ pseudoscalar state/$\eta$-meson state with $J^{PC}$
multiplets of $J^{PC}=0^{-+}$ has a mass value of $m_{\eta}=1294\pm4MeV$
(\cite{TOGO18}, P.32). Similarly, the charm-quark (with charge $\frac{2}{3}$)
has a mass value ($m_{c}$) of $1.275\pm0.025GeV$ (\cite{TOGO18}, P.23). In
terms of resumming threshold logarithms in the QCD form factor for B-meson
decays to next-to-leading logarithmic accuracy, the mass formula for the
charm-quark is given as $m_{c}=m_{b}-m_{B}+m_{D}\approx1.29GeV$ \cite{UAG},
where $m_{b},$ $m_{B}$ and $m_{D}$ denote the bottom-quark, B- and D-meson
masses respectively.
The correctness of the strong gravity theory in describing reality/nature is
clear from the above-quoted values. For we have shown in \textbf{section VI}
of this paper that even though the underlying dynamics of the strong
gravity theory is fully symmetric ($\phi_{\mu \nu}=\phi_{\nu \mu}$), its vacuum
state is nonetheless asymmetric ($\phi_{\mu \nu}\neq \phi_{\nu \mu}$) with the
pseudoscalar quantum numbers $J^{PC}=0^{-+}$. In combining this fact with
Eq.(92), the existence of the pseudoscalar $\eta$-meson state, with
$J^{PC}=0^{-+}$ and mass $m_{\eta}=1290MeV$ in the QCD vacuum, is established.
\textbf{If} we take the dynamically induced coupling constant in the second
part of Eq.(28) (i.e., $\alpha_{2}=\frac{2}{3}$) as the fundamental charge of
the QCD vacuum, attributed to the charm-quark (and taking into consideration
Eq.(92)), then we can say that the charm-quark also exists in the QCD vacuum.
Thus, the fundamental quantities of the QCD vacuum are the $\eta$-meson (one
of the examples of hadrons) and the charm-quark. \emph{Based on this
understanding, we posit that the final hadronic state of strong gravity theory
is charmed (i.e., $m=m_{\eta}=m_{c}=1290MeV$).}
\textbf{In the next subsection, we establish the existence of a mass gap
within the formulation of strong gravity (by using the vector subgroup (i.e.,
isospin symmetry }$SU(2)_{V}$\textbf{) of the custodial symmetry in Eq.(77)),
and also justify the validity of using the dynamically induced coupling
constant (}$\alpha_{2}=\frac{2}{3}$\textbf{) as the fundamental charge of the
QCD vacuum.}
\subsection{Mass Gap}
QCD is widely accepted as a dynamical quantum gauge theory of strong
interactions not only at the fundamental quark-gluon level, but also at the
hadronic level. In this picture, any color-singlet mass scale parameter must
be expressed in terms of the mass gap \cite{MGA}
\begin{equation}
m=const\times m_{gap}=1290MeV
\end{equation}
where \emph{const} denotes an arbitrary constant.
In particle physics, particles that are affected equally by the strong force
but have different charges, such as protons and neutrons, are treated as
different states of the same nucleon, with isospin values related to the
number of charge states:
\begin{equation}
N=\left(
\begin{array}[c]{c}
N^{+}\\
N^{0}
\end{array}
\right) =\left(
\begin{array}[c]{c}
p\\
n
\end{array}
\right)
\end{equation}
The isospin symmetry ($SU(2)_{V}$) then demands that both charge states should
have the same energy in order to preserve the invariance of the Hamiltonian
(\textbf{H}) of the system. This means that isospin symmetry is a statement of
the invariance of \textbf{H }of the strong interactions under the action of
the Lie group $SU(2)$. However, the near mass-degeneracy of the neutron and
proton points to an approximate symmetry of the Hamiltonian describing the
strong interactions \cite{DGR, CITZ}. The mass gap ($m_{gap}$), which is
responsible for the approximate symmetry of the strong interaction, in this
case must be the energy difference between the proton state and neutron state
of the proton-neutron $SU(2)$ doublet fundamental representation (with gauged
isospin symmetry): $m_{gap}\equiv m_{n}-m_{p}\approx1.29MeV$, where $m_{p}$
and $m_{n}$ are the masses of the proton and neutron respectively
(\cite{TOGO19}, P.152). It is to be noted here that $m_{n}-m_{p}$ is the
transition (excitation) energy needed to transform a neutron into a proton
(\cite{SW-2}, P.548). In this picture, the mass gap is nothing but the energy
difference
between these two states in the isospin space. From the foregoing, the
approximate $SU(2)_{V}$ isospin symmetry of the strong nuclear force is
dependent on the non-vanishing of $m_{gap}$, and hence the color-singlet mass
spectrum of the QCD matter must depend on it.
Thus Eq.(93) becomes
\begin{equation}
m=10^{3}\times(m_{n}-m_{p})=1290MeV
\end{equation}
and
\begin{equation}
m_{gap}=m_{n}-m_{p}\approx1.29MeV
\end{equation}
It is to be recalled that the fundamental charge (of $U(1)$ and $SU(2)$ gauge
fields) is related to the electroweak coupling constants via the
Weinberg-Salam geometric relations: $e=g_{1}\cos \theta_{w}=g_{2}\sin \theta
_{w}$ and $\cos \theta_{w}=m_{W}/m_{Z}$ \cite{SW-3}, where $g_{1}$ and $g_{2}$
are the gauge couplings of the $U(1)$ and $SU(2)$ gauge fields respectively,
$\theta_{w}$ is the mixing angle, $e$ is the fundamental charge, $m_{W}$ is
the mass of the $W$-boson and $m_{Z}$ is the mass of the $Z$-boson. By using
$m_{W}=80.385GeV$, $m_{Z}=91.1876GeV$ \cite{SW-4}, $e=\alpha_{2}=2/3$, we have
$\theta_{w}\approx28.17^{\circ}$ and
$g_{2}=0.6666666667/0.4720892507=1.4121623522$. This is the nucleon coupling
constant for the two-flavor (i.e., proton and neutron) $SU(2)$ representation.
The value of $g_{2}$ ($=1.4121623522$) is to be compared with the nucleon
axial coupling constant computed from two-flavor $SU(2)$ lattice QCD:
$g_{A}=1.412(18)$ \cite{SW-5}.
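The Weinberg-Salam relations quoted above can be evaluated in a few lines (an external numerical sketch; small last-digit differences against the quoted decimals come from rounding of the boson masses):

```python
import math

m_W, m_Z = 80.385, 91.1876   # boson masses, GeV
e = 2.0 / 3.0                # fundamental charge, taken here as alpha_2 = 2/3

theta_w = math.acos(m_W / m_Z)   # cos(theta_w) = m_W / m_Z
g2 = e / math.sin(theta_w)       # from e = g2 * sin(theta_w)

print(math.degrees(theta_w), g2)
```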
In the next subsection, we demonstrate that the values of $m_{gap}$ and
$T_{c}$ not only play a very important role in Big Bang nucleosynthesis but
are also part of the primordial constituents of the QCD vacuum.
\subsection{Big Bang Nucleosynthesis (BBN)}
BBN refers to the production of relatively heavy nuclei from the lightest
pre-existing nuclei (i.e., neutrons and protons with $m_{gap}=1.29MeV$) during
the early stages of the Universe. Cosmologists believe that the necessary and
sufficient condition for nucleosynthesis to have occurred during the early
stages of the universe is that the value of equilibrium neutron fraction
($X_{n}$) or the neutron abundance must be close to the optimum value, i.e.,
$X_{n}\approx50\%$ (\cite{SW-2}, P.550). In fact, the value of $X_{n}$ at the
time $t=0$ was calculated to be $X_{n}=0.496=49.6\%$ (\cite{SW-2}, P.549).
The equilibrium neutron fraction for temperature $T\gtrsim3\times10^{10}K$ is
given as (\cite{SW-2}, P.550)
\begin{equation}
X_{n}\approx \left[ 1+e^{E/kT}\right] ^{-1}
\end{equation}
where $E=m_{gap}=1.29MeV.$ By using the natural-unit approach (i.e., setting
the Boltzmann constant $k=1$) and using the value of the critical temperature
($T=T_{c}=129MeV$), Eq.(97) reduces to
\begin{equation}
X_{n}\approx \left[ 1+e^{0.01}\right] ^{-1}=49.75\%
\end{equation}
The value in the Eq.(98) is compatible with the value obtained at the time
$t=0$ (i.e., $X_{n}=49.6\%$), and is approximately equal to the optimum value
($X_{n}\approx50\%$). This can only mean two things: (i) $m_{gap}$ and $T_{c}$
existed at time $t=0$ of BBN processes. (ii) These two quantities are the
fundamental quantities of QCD / quantum vacuum.
According to the detailed calculations of Peebles and Weinberg, the abundance
by weight of cosmologically produced helium is given as (\cite{SW-2}, P.554)
\begin{equation}
X_{He^{4}}=2X_{n}
\end{equation}
By combining Eqs.(98) and (99), we have
\begin{equation}
X_{He^{4}}=99.5\%
\end{equation}
Eq.(100) confirms the validity of Eq.(99), namely, that the total amount of
neutrons before nucleosynthesis must be equal to the total amount of helium
abundance after the nucleosynthesis.
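The two BBN numbers of this subsection follow directly from Eqs.(97) and (99) with the text's inputs $E=m_{gap}=1.29MeV$, $k=1$ and $T=T_{c}=129MeV$:

```python
import math

# Equilibrium neutron fraction X_n = [1 + exp(E/kT)]^{-1} (Eq.(97)),
# with E = m_gap = 1.29 MeV, k = 1 (natural units), T = T_c = 129 MeV,
# and helium abundance X_He4 = 2 * X_n (Eq.(99)).
M_GAP_MEV = 1.29
T_C_MEV = 129.0

def neutron_fraction():
    """X_n at T = T_c; exponent E/kT = 0.01."""
    return 1.0 / (1.0 + math.exp(M_GAP_MEV / T_C_MEV))

def helium_abundance():
    """X_He4 = 2 * X_n."""
    return 2.0 * neutron_fraction()
```

This reproduces $X_{n}\approx49.75\%$ of Eq.(98) and $X_{He^{4}}\approx99.5\%$ of Eq.(100).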
The threshold for the reaction $p+\overline{\nu}_{e}\rightarrow n+e^{+}$ is at
$m_{e}+m_{gap}=1.8MeV$ (\cite{SW-2}, P.544). Thus the mass of the electron
($m_{e}$) is $m_{e}=1.8MeV-1.29MeV=0.51MeV.$
The invariance of the mass gap is supported by the following transitions
(\cite{SW-2}, P.548)
\begin{align}
E_{e}-E_{\nu} & =m_{gap}\text{ for }n+\nu \longleftrightarrow p+e^{-}\nonumber \\
E_{\nu}-E_{e} & =m_{gap}\text{ for }n+e^{+}\longleftrightarrow p+\overline
{\nu}\nonumber \\
E_{\nu}+E_{e} & =m_{gap}\text{ for }n\longleftrightarrow p+e^{-}+\overline{\nu}
\end{align}
Eq.(101) clearly shows that mass gap is invariant under crossing-symmetry.
By using the values of $\alpha_{s}$ and $m$, we proceed to solve Eqs.(19) and
(34) completely. From Eq.(34), we have
\begin{align}
F & \equiv \frac{k^{2}\text{ }m}{16\pi}=\frac{32\pi G_{N}\times m}{16\pi
}\nonumber \\
F & =2G_{N}\text{ }m=2.580\times10^{-19}
\end{align}
Eq.(102) is to be compared with the ratio of the proton mass to the Planck
mass scale ($\frac{M_{proton}}{M_{Planck}}\approx10^{-19}$).
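A one-line check of Eq.(102): the value $G_{N}=1\times10^{-19}GeV^{-2}$ below is an assumption of this sketch, inferred from the quoted result $F=2.580\times10^{-19}$ together with $m=1.29GeV$; it is not stated explicitly in this passage.

```python
# Dimensionless coupling F = k^2 * m / (16*pi) = 2 * G_N * m (Eq.(102)).
# G_N = 1e-19 GeV^-2 is an ASSUMED effective value implied by the quoted
# result F = 2.580e-19 with m = 1.29 GeV.
G_N_GEV2 = 1e-19  # assumed effective Newton constant, GeV^-2
M_GEV = 1.29      # mass scale m from Eq.(93)

def coupling_F():
    """Return F = 2 * G_N * m (dimensionless)."""
    return 2.0 * G_N_GEV2 * M_GEV
```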
By using Eq.(29) and the value of $G_{f}$ ($\approx1GeV^{-1}$ \cite{ASJ}, P.
2668), the last part of Eq.(33) becomes
\begin{equation}
\beta_{1}=3.162\times10^{8}GeV^{-1}
\end{equation}
One of the properties of the confining force is the notion of "dimensional
reduction" which suggests that the calculation of a large planar Wilson loop
in $D=4$ dimensions reduces to the corresponding calculation in $D=2$
dimensions. In this case, the leading term for the string tension is derived
from the two-dimensional strong-coupling expansion (\cite{JEFF}, P.49-50).
Following this line of reasoning, $\alpha_{s}$ is made into a dimensionful
coupling (dimensional transmutation) as follows
\begin{align}
\sigma & \equiv \alpha_{s}[m]^{4-D}=0.1797\times(1.29GeV)^{2}\nonumber \\
\sigma & =0.299GeV^{2}
\end{align}
Note that $\alpha_{s}$ is dimensionless (as expected) only in four dimensions,
but here we use $D=2$ in order to obtain the \textbf{Wilson-like}
\textbf{string tension} (which represents the geometry of the Weyl's action
because it is rotationally symmetric). Eq.(104), which is called string
tension, is to be compared with the value $\sigma=0.27GeV^{2}$ \cite{ANIVA}.
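The dimensional-transmutation step of Eq.(104) is elementary arithmetic and can be verified directly with the text's inputs:

```python
# Dimensional transmutation sigma = alpha_s * [m]^(4-D) with D = 2
# (Eq.(104)): sigma = alpha_s * m^2,
# using alpha_s = 0.1797 and m = 1.29 GeV quoted in the text.
ALPHA_S = 0.1797
M_GEV = 1.29

def string_tension_gev2():
    """Wilson-like string tension sigma in GeV^2."""
    return ALPHA_S * M_GEV ** 2
```

The result is $\sigma\approx0.299GeV^{2}$, the value compared above with $\sigma=0.27GeV^{2}$ of \cite{ANIVA}.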
With these values, the confining potential ($V_{conf}$)/linearly rising
potential in the Eq.(19) reduces to
\begin{equation}
V_{conf}\text{ }(r)=\sigma r
\end{equation}
and the perturbative aspect ($V_{pert}$) of strong gravity (Eq.(34)) becomes
\begin{equation}
V_{pert}(r)=\frac{F}{r}-\frac{4}{3}\frac{(Fe^{-\beta_{1}/r})}{r}
\end{equation}
where the color factor ($C_{F}$)/Casimir invariant associated with gluon
emission from a fundamental quark $-$ present in the Eq.(106) $-$ for $SU(3)$
gauge group (with $N=3$) is given as
\begin{equation}
C_{F}=\frac{1}{2}\left( N-\frac{1}{N}\right) =\frac{4}{3}
\end{equation}
and
\begin{equation}
e^{-\beta_{1}/r}=\underset{n=0}{\overset{\infty}{\sum}}\left( -1\right)
^{n}\frac{\left( \frac{\beta_{1}}{r}\right) ^{n}}{n!}
\end{equation}
Hence the effective pure Yang-Mills potential ($V_{YM}^{eff}$ $(r)$) of strong
gravity theory (from Eqs.(105) and (106)) is
\begin{align}
V_{YM}^{eff}(r) & =V_{pert}(r)+V_{conf}\text{ }(r)\nonumber \\
V_{YM}^{eff}(r) & =\frac{F}{r}-\frac{4}{3}\frac{(Fe^{-\beta_{1}/r})}
{r}+\sigma r
\end{align}
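Eq.(109) can be evaluated numerically with the parameter values obtained in Eqs.(102)-(104); this sketch only probes the regime where the formula is insensitive to the hadronic-radius rescaling used later for the figures.

```python
import math

# Effective pure Yang-Mills potential of Eq.(109):
#   V(r) = F/r - (4/3) * F * exp(-beta_1/r) / r + sigma * r,  r in GeV^-1.
# Parameter values taken from Eqs.(102)-(104) of the text.
F_COUPLING = 2.580e-19  # dimensionless coupling F (Eq.(102))
BETA_1 = 3.162e8        # GeV^-1 (Eq.(103))
SIGMA = 0.299           # GeV^2 (Eq.(104))
C_F = 4.0 / 3.0         # color factor (Eq.(107))

def v_eff(r):
    """V_YM^eff(r) in GeV for r in GeV^-1."""
    screened = math.exp(-BETA_1 / r)  # underflows to 0.0 for r << beta_1
    return F_COUPLING / r - C_F * F_COUPLING * screened / r + SIGMA * r
```

With these numbers the linear term dominates at hadronic distances, so $V_{YM}^{eff}(r)\approx\sigma r$ there, which is the confining behavior discussed in \textbf{section IX}.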
\section{GAUGE-GRAVITY DUALITY}
In this section, we show that strong gravity theory possesses gauge-gravity
duality property.
\subsection{NRQED and NRQCD Potentials}
The perturbative non-relativistic quantum electrodynamics (NRQED) that gives
rise to a \textbf{repulsive Coulomb potential between an electron-electron
pair} is due to one photon exchange, and this repulsive Coulomb potential is
given by \cite{TOGO22}
\begin{equation}
V_{QED}(r)=\frac{\alpha_{e}}{r}
\end{equation}
where the QED running coupling $\alpha_{e}=\frac{\alpha(0)}{1-\Pi(Q^{2})}$,
$\alpha(0)\approx1/137$, and $\Pi(Q^{2})$ denotes the vacuum polarization
insertions \cite{TOGO23}. Similarly, the
perturbative component of the NRQCD potential between two gluons or between a
quark and antiquark is given as \cite{TOGO22}
\begin{equation}
V_{QCD}(r)=-\frac{4}{3}\frac{\alpha_{s}(r)}{r}
\end{equation}
where the strong running coupling $\alpha_{s}(r)$ must exponentiate in order
to account for the nonlinearity of the gluon self-interactions.
The total color-singlet NRQCD potential is (\cite{TOGO22}, P.273 \&
\cite{TOGO24}, P.39):
\begin{equation}
V_{QCD}(r)=-\frac{4}{3}\frac{\alpha_{s}(r)}{r}+kr
\end{equation}
Obviously, Eq.(106) contains both the NRQED potential (Eq.(110)) and NRQCD
potential (Eq.(111)). Hence the perturbative/short-range aspect of the strong
gravity theory (derived entirely from the broken-scale-invariant
Weyl's action in the Eq.(27)) unifies NRQED and NRQCD with one single coupling
constant $F$
\begin{equation}
V_{pert}(r)=F\left( \frac{1}{r}-\frac{4}{3}\frac{e^{-\beta_{1}/r}}{r}\right)
\end{equation}
It is important to note that the QCD part (second term) of the Eq.(113) is
QED-like (first term) apart from the color factor 4/3 $-$ which shows that
there is more than one gluon $-$ and the exponential function $-$ which
accounts for the self-interaction between the gluons (the \emph{fons et
origo} of nonlinearity in the Yang-Mills theory). Thus, strong gravity theory
is a \textbf{gauge theory}: we mention in passing that Eq.(112) is also
obtainable from the Eq.(109).
In the next subsection, we prove that the Einstein's theory of \textbf{gravity
}can also be derived from the same equation (Eq.(27)) that gave rise to the Eq.(106).
\subsection{Effective Einstein General Relativity}
So far, we have been dealing with the short-range behavior of the strong
gravity theory. In this subsection, we take a giant step towards deriving the
Einstein GR entirely from the strong gravity formulation. To set the stage, we
rewrite Eq.(27) as
\begin{align*}
I_{eff} & =-\int d^{4}x\sqrt{-g}\left( \alpha_{1}R_{\mu \nu}R^{\mu \nu
}-\alpha_{2}R^{2}\right) -\\
& \int d^{4}x\sqrt{-g}k^{-2}\alpha_{3}R
\end{align*}
By using Eq.(28), the above equation becomes
\begin{align}
I_{eff} & =-\int d^{4}x\sqrt{-g}\left( 2R_{\mu \nu}R^{\mu \nu}-\frac{2}
{3}R^{2}\right) -\nonumber \\
& 2\int d^{4}x\sqrt{-g}k^{-2}R
\end{align}
\subsubsection{The Matter Action}
Without using any rigorous mathematics, we would like to show that the part of
the Eq.(114) containing the quadratic terms is in fact the matter action
($I_{M}$). From Eq.(16), we have
\begin{align}
I_{M} & \equiv \int d^{4}x\sqrt{-g}\left( 2R_{\mu \nu}R^{\mu \nu}-\frac{2}
{3}R^{2}\right) =\nonumber \\
& \int d^{4}x\sqrt{-g}C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta}\nonumber \\
I_{M} & =\int d^{4}x\sqrt{-g}C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta}
\end{align}
The fact that Weyl Lagrangian density ($C_{\mu \nu \alpha \beta}C^{\mu \nu
\alpha \beta}$) is a conserved quantity due to its general covariance property
means that we can write
\begin{equation}
\delta \left( C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta}\right) =0
\end{equation}
This ensures the conservation of energy-momentum.
By using the principle of stationary action on the Eq.(115) and taking
Eq.(116) into consideration, we have
\begin{equation}
\delta I_{M}=\frac{1}{2}\int d^{4}x\sqrt{-g}\left( g^{\mu \nu}C_{\mu \nu
\alpha \beta}C^{\mu \nu \alpha \beta}\right) \delta g_{\mu \nu}
\end{equation}
Recall that the energy-momentum tensor is defined as \cite{TOGO25}
\begin{equation}
T_{\mu \nu}\equiv \frac{-2\delta \left( \sqrt{-g}\mathcal{L}_{M}\right) }
{\sqrt{-g}\delta g^{\mu \nu}}=\frac{-2\delta \mathcal{L}_{M}}{\delta g^{\mu \nu
}}+g_{\mu \nu}\mathcal{L}_{M}
\end{equation}
where $\mathcal{L}_{M}$ is the matter conserved Lagrangian density. Using the
Weyl conserved Lagrangian density, we have
\begin{equation}
T_{\mu \nu}=\frac{-2\delta \left( C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta
}\right) }{\delta g^{\mu \nu}}+g_{\mu \nu}\left( C_{\mu \nu \alpha \beta}
C^{\mu \nu \alpha \beta}\right)
\end{equation}
By using Eq.(116), Eq.(119) reduces to
\begin{align}
T_{\mu \nu} & =g_{\mu \nu}\left( C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta
}\right) \text{ or }\nonumber \\
T^{\mu \nu} & =g^{\mu \nu}\left( C_{\mu \nu \alpha \beta}C^{\mu \nu \alpha \beta
}\right)
\end{align}
Clearly Eq.(120) is a conserved (due to Eq.(116)) symmetric (due to the
presence of $g^{\mu \nu}$) tensor (\cite{SW-2}, P. 360); and its nonlinearity
represents the effect of gravitation on itself. To deal with this nonlinear
effect, Principle of Equivalence is normally invoked, in which any point $X$
in an arbitrarily strong gravitational field is the same as a locally inertial
coordinate system such that $g_{\alpha \beta}(X)=\eta_{\alpha \beta}$
(\cite{SW-2}, P. 151).
Hence Eq.(117) becomes
\begin{equation}
\delta I_{M}=\frac{1}{2}\int d^{4}x\sqrt{-g}T^{\mu \nu}\delta g_{\mu \nu}
\end{equation}
Eq.(121) is the equation of energy-momentum tensor for a material system
described by matter action \cite{SW-2}.
\subsubsection{Pure Gravitational Action}
By using the value of \ $k^{-2}(=\frac{1}{32\pi G_{N}})$ from Eq.(29), the
linear term part of the Eq.(114) is written as
\begin{align}
I_{G} & =-2\int d^{4}x\sqrt{-g}k^{-2}R\nonumber \\
I_{G} & =-\frac{1}{16\pi G_{N}}\int d^{4}x\sqrt{-g}R
\end{align}
We can therefore write
\begin{equation}
I_{eff}=I_{M}+I_{G}
\end{equation}
By using the general covariance property of Weyl's action, we can write
\begin{equation}
\delta I_{eff}=\delta I_{M}+\delta I_{G}=0
\end{equation}
However, this can only be true $iff$
\begin{equation}
\delta I_{M}+\delta I_{G}=0\iff \delta I_{G}=-\delta I_{M}
\end{equation}
The curvature scalar $R$ can be defined as $g^{\mu \nu}R_{\mu \nu},$ and the
following standard equations are valid (\cite{SW-2}, P.364)
\begin{equation}
\delta \left( \sqrt{-g}R\right) =\sqrt{-g}R_{\mu \nu}\delta g^{\mu \nu}
+R\delta \sqrt{-g}+\sqrt{-g}g^{\mu \nu}\delta R_{\mu \nu}
\end{equation}
\begin{equation}
\delta R_{\mu \nu}=(\delta \Gamma_{\mu \lambda}^{\lambda})_{;\nu}-(\delta
\Gamma_{\mu \nu}^{\lambda})_{;\lambda}
\end{equation}
\begin{equation}
\sqrt{-g}g^{\mu \nu}\delta R_{\mu \nu}=\frac{\partial}{\partial x^{\nu}}
(\sqrt{-g}g^{\mu \nu}\delta \Gamma_{\mu \lambda}^{\lambda})-\frac{\partial
}{\partial x^{\lambda}}(\sqrt{-g}g^{\mu \nu}\delta \Gamma_{\mu \nu}^{\lambda})
\end{equation}
\begin{equation}
\delta \sqrt{-g}=\frac{1}{2}\sqrt{-g}g^{\mu \nu}\delta g_{\mu \nu}
\end{equation}
\begin{equation}
\delta g^{\mu \nu}=-g^{\mu \rho}g^{\nu \sigma}\delta g_{\rho \sigma}
\end{equation}
Eq.(128) vanishes when we integrate over all space (\cite{SW-2}, P. 364).
Thus, for the pure gravitational part, we have
\begin{align}
\delta I_{G} & =\frac{1}{16\pi G_{N}}\int \sqrt{-g}\times \nonumber \\
& \left[ R_{\mu \nu}g^{\mu \rho}g^{\nu \sigma}\delta g_{\rho \sigma}-\frac{1}
{2}g^{\mu \nu}\text{ }R\text{ }\delta g_{\mu \nu}\right] \text{ }
d^{4}x\nonumber \\
\delta I_{G} & =\frac{1}{16\pi G_{N}}\int \sqrt{-g}\left[ R^{\mu \nu}-\frac
{1}{2}g^{\mu \nu}\text{ }R\right] \delta g_{\mu \nu}\text{ }d^{4}x
\end{align}
From Eqs. (121), (125) and (131), we have
\begin{align}
\delta I_{G} & =-\delta I_{M}\Longrightarrow \frac{1}{16\pi G_{N}}\left[
R^{\mu \nu}-\frac{1}{2}g^{\mu \nu}\text{ }R\right] =\nonumber \\
& -\frac{1}{2}T^{\mu \nu}\nonumber \\
\delta I_{G}+\delta I_{M} & =R^{\mu \nu}-\frac{1}{2}g^{\mu \nu}\text{ }R+8\pi
G_{N}T^{\mu \nu}=0
\end{align}
By using
\begin{equation}
g_{\alpha \gamma}g_{\beta \delta}A^{\gamma \delta}=A_{\alpha \beta}
\end{equation}
and redefining the resulting indices as $\mu$ and $\nu$, we get
\begin{equation}
R_{\mu \nu}-\frac{1}{2}g_{\mu \nu}\text{ }R=-8\pi G_{N}T_{\mu \nu}
\end{equation}
It should be noted that all terms in the Eq.(134) are already present in the
Eqs.(15) and (18), as such the underlying symmetry (general coordinate
invariance) of the Eq.(7) is still preserved in a covariant manner. Eq.(132)
ensures the conservation of energy-momentum (which is a statement of general
covariance \cite{SW-2}, P. 361). Thus, the Weyl's action given in the Eq.(123)
would be stationary / invariant with respect to the variation in $g_{\mu \nu},$
$iff$ \ Eq.(132) holds. Interestingly, it holds because Eq.(132) is the
Einstein field equations, and hence the full Weyl's action is stationary with
respect to the variation in $g_{\mu \nu}$. \emph{This is precisely what we
expect: that the invariance of Weyl's action is maintained by inducing general
relativity. Hence the general covariance property of Eq.(7) has been revealed
because the statement that }$\delta I_{eff}$ \emph{should vanish is "generally
covariant", and this leads to the energy-momentum conservation }(\cite{SW-2},
P. 361).
Conclusively, the perturbative aspect of strong gravity theory (i.e. Eq.(27))
possesses quantum gauge theory (Eq.(106)) and gravity theory (Eq.(134)); thus
proving the existence of gauge-gravity duality in the strong gravity formulation.
\subsection{Ultraviolet Finiteness}
The strong gravity program adopts the Wilsonian viewpoint on quantum field
theory. Here the basic input data to be fixed \emph{ab initio }are the kind of
quantum fields (i.e., gluon fields) carrying the theory's degrees of freedom
(one graviton equals two gluons: BCJ construction), and the underlying
symmetry (spherical/rotational symmetry). The fact that two gluons are used to
construct spacetime metric means that the resulting gravity must be
point-like. This fact is encoded in the three-dimensional Dirac delta
functions in the first part of Eqs.(19) and (30). The point-like nature
of gravity in this picture is the origin of ultraviolet (UV) divergence. The
question here is: Is Eq.(106) (the effective potential carried by Eqs.(27)) UV
finite, or perturbatively renormalizable? This question can be answered by
using Eqs.(106) and (108)
\begin{equation}
V_{pert}(r)=\frac{F}{r}-\frac{4F}{3r}\left[ 1-\frac{\beta_{1}}{r}+\frac
{\beta_{1}^{2}}{2r^{2}}-\frac{\beta_{1}^{3}}{6r^{3}}+\frac{\beta_{1}^{4}}
{24r^{4}}-\text{ }...\right]
\end{equation}
It is to be noted, from \textbf{subsection C} of \textbf{section III}, that
the expression for $\beta_{1}$ (with dimension of $GeV^{-1}\longrightarrow
E^{-1}$) contains the inverse of \textbf{boson fields} dimension ($E^{-1}$),
and \textbf{fermion fields} dimension ($G_{f}^{3/2}\longrightarrow E^{-3/2}
$). So it suffices to posit that $\beta_{1}$ contains both boson and fermion
fields: A perfect replica of supersymmetric fermion-boson field duality. Let
us now test for the UV behavior of the Eq. (106)
\begin{equation}
\underset{r\longrightarrow0}{V_{pert}(r)}\text{ }=\infty-\infty \left[
1-\infty+\infty-\infty+\infty-\text{ }...\right] =0
\end{equation}
Clearly Eq.(136) is a host of infinities, but they all cancel out, thus
rendering Eq.(106) UV finite. Hence, strong gravity theory has \textbf{UV
regularity}. Interestingly, this is the main conclusion of the theories of
supergravity ("enhanced cancellations").
\subsection{Breaking of Chiral Symmetry in Strong Gravity Theory}
QCD admits a \textbf{chiral symmetry} in the advent of vanishing quark masses.
\textbf{This symmetry is broken spontaneously by dynamical chiral symmetry};
and \textbf{broken explicitly by quark masses}. The nonperturbative scale of
dynamical chiral symmetry breaking is around $\Lambda_{x}\approx1GeV$
\cite{TOGO26}. Apparently, the chiral symmetry in the strong gravity is broken
spontaneously by its inherent dynamical chiral symmetry breaking $G_{f}
^{-1}=\Lambda_{QCD}=\Lambda_{x}\approx1GeV$. In much the same spirit, the
calculated value of mass scale of the theory reverberates the existence of the
approximate symmetry in the strong interaction: $m=1.29GeV$ and $G_{f}
^{-1}=\Lambda_{QCD}\approx1GeV$.
\section{Confinement and Asymptotic Freedom}
In the past few decades it has become common knowledge that confinement is due
to a linearly rising potential between static test quarks / gluons in the
4-dimensional pure Yang-Mills theory (see Eq.(105)). The fact that confinement
(i.e. non-perturbative aspect of QCD) is a simple consequence of the strong
coupling expansion means that an infinitely rising linear potential becomes
highly non-trivial in the weak coupling limit of the theory. This short-scale
weak coupling limit is called asymptotic freedom \cite{MCORN,CGATT}. By all
standards, these two properties of QCD contradict all previous experience in
physics, where forces decrease with distance. The asymptotic freedom
part of the paradox has been correctly resolved \cite{TOGO10,TOGO11}, leaving
out the hitherto unresolved color confinement property of the non-perturbative
QCD regime. As we have remarked previously, a complete theory of strong
interaction should be able to explain these two properties of QCD
simultaneously (i.e., the dominance of asymptotic freedom at the small scale
distances (quark-gluon regime) and the emergence of infrared slavery
(confinement) at long scale distances (hadronic regime)). These dual
properties of QCD are succinctly depicted in the Eq.(109).
The linearly rising potential means that the potential between a static
gluon-gluon pair keeps rising linearly as one tries to pull the two
constituents apart (see Eq.(105)). Thus they are confined in a strongly bound
state \cite{MCORN}. Based on the dynamics of Eq.(105), an infinite amount of
energy would be required to pull the two constituents of bound glueball/meson
state apart.
The resulting force of strong gravity theory is called Yang-Mills-Gravity
force ($F_{YMG}(r)$), because Eq.(7) $-$ which gives rise to the confining
potential $-$ is the Weyl's action for \textbf{gravity }(\cite{ASCS},
P.322)\textbf{, }and the action in the Eq.(27)\textbf{\ }$-$ which gives rise
to the perturbative QYMT $-$ also contains Einstein-Hilbert action for
\textbf{gravity. }To explain the behavior of this force at both small and
large distance scales, we differentiate Eq.(109) with respect to the
gluon-gluon separating distance $r$ (and taking into consideration Eq.(107))
\begin{equation}
F_{YMG}(r)=-\frac{F\left( 1-C_{F}\text{ }e^{-\beta_{1}/r}\right) }{r^{2
}}-\frac{FC_{F}\beta_{1}e^{-\beta_{1}/r}}{r^{3}}+\sigma
\end{equation}
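Two limits of Eq.(137) can be checked directly from the formula with the parameter values of Eqs.(102)-(104); note that the intermediate crossover radius quoted below in the text depends on the hadronic-radius rescaling used for \textbf{Figs.2} and \textbf{3}, which this sketch does not attempt to reproduce.

```python
import math

# Yang-Mills-Gravity force of Eq.(137):
#   F_YMG(r) = -F*(1 - C_F*exp(-beta_1/r))/r^2
#              - F*C_F*beta_1*exp(-beta_1/r)/r^3 + sigma
F_COUPLING = 2.580e-19  # Eq.(102)
BETA_1 = 3.162e8        # GeV^-1, Eq.(103)
SIGMA = 0.299           # GeV^2, Eq.(104)
C_F = 4.0 / 3.0         # Eq.(107)

def f_ymg(r):
    """F_YMG(r) in GeV^2 for r in GeV^-1."""
    s = math.exp(-BETA_1 / r)  # underflows to 0.0 for r << beta_1
    return (-F_COUPLING * (1.0 - C_F * s) / r**2
            - F_COUPLING * C_F * BETA_1 * s / r**3
            + SIGMA)
```

At $r=10^{4}GeV^{-1}$ the force has saturated at the plateau $\sigma=0.299GeV^{2}$, and at the Planck length $r=10^{-19}GeV^{-1}$ it is negative (repulsive), matching the sign behavior described in the text.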
The summing graphs of strong gravitational gluodynamics are shown in the
\textbf{Fig.2. }The blue graphs are the graphs of the effective pure
Yang-Mills potential (Eq.(109)), while the red plots are the graphs of the
Yang-Mills-Gravity force (Eq.(137)). It is easy to show that these equations
possess UV asymptotic freedom (albeit with tamable infinities) and infrared
(IR) slavery behaviors of the QCD. For us to see these behaviors, the
following facts are in order: (i) If the radial derivative of potential is
positive, then the force is attractive. (ii) If the radial derivative of
potential is negative, then the force becomes repulsive \cite{HJW}. (iii)
Since only color singlet states (hadrons)/ or dressed glueball can exist as
free observable particles, we multiplied the gluon-distance scale (in the
\textbf{Figs.2} and \textbf{3}) by factor of 10 in order to convert gluon
radius to the more observable hadronic radius (in line with the Eq.(87)). (iv)
The graphs in the \textbf{Figs.2} and \textbf{3} are plotted by using the
highly interactive plotting software \cite{DAVID}.
The strong interaction is observable in two areas: (i) on a shorter distance
scale (for $10^{-19}GeV^{-1}\leq r\leq3.0277GeV^{-1}$), $F_{YMG}(r)$ is
repulsive (i.e., negative force) and reducing in strength as we probe shorter
and shorter distances (up to Planck length ($10^{-19}GeV^{-1}$)). This makes
Eq.(137) compatible with the asymptotic freedom property of QCD, where
the force that holds the quark-antiquark or gluon-gluon together decreases as
the distance between them decreases. Being a repulsive force (within the range
$10^{-19}GeV^{-1}\leq r\leq3.0277GeV^{-1}$), it would disallow the formation
of quark-antiquark / gluon-gluon singularity because the constituents can only
come close up to a minimum distance scale at which the repulsive force would
be strong enough to prevent further reduction in their separating distance.
(ii) On a longer distance scale ($r\geq3.0278GeV^{-1}$), $F_{YMG}(r)$ becomes
attractive (i.e., positive force). Here $F_{YMG}(r)$ does not diminish with
increasing distance. After a limiting distance ($r=10^{4}GeV^{-1}$) has been
reached, it remains constant at a strength of $0.299GeV^{2}$ (no matter how
much farther the quarks/gluons are separated). Meanwhile,
the linearly rising potential keeps on increasing \emph{ad infinitum (see the
blue curve in the Fig.3).} This phenomenon is called color confinement in QCD.
The explanation is that it is energetically cheaper for the work done against
a force of $0.299GeV^{2}$ ($=2.449\times10^{5}N$) to create
particle-antiparticle pairs within a short distance $r=10^{4}GeV^{-1}
=1.972\times10^{-12}m$ than to keep on increasing the color force indefinitely.
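The unit conversions in this paragraph can be sketched with the standard (rounded) factors $1GeV=1.602\times10^{-10}J$ and $1GeV^{-1}=1.9733\times10^{-16}m$; they reproduce the quoted figures to within about one percent, the residual difference coming from rounding of the conversion constants.

```python
# Convert the confining force sigma = 0.299 GeV^2 to newtons and the
# distance r = 1e4 GeV^-1 to metres.
# Conversion factors are assumed standard rounded values (hbar*c based).
GEV_TO_J = 1.602e-10       # 1 GeV in joules
GEV_INV_TO_M = 1.9733e-16  # 1 GeV^-1 in metres

def sigma_in_newtons(sigma_gev2=0.299):
    """A force of 1 GeV^2 is (1 GeV of energy) per (1 GeV^-1 of length)."""
    return sigma_gev2 * GEV_TO_J / GEV_INV_TO_M

def r_in_metres(r_gev_inv=1e4):
    """Distance in metres for r given in GeV^-1."""
    return r_gev_inv * GEV_INV_TO_M
```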
By using \cite{DAVID}, we demonstrate that Eqs.(109) and (137) are consistent
and well-behaved down to the Planck scale:\ (i) At $r=10^{-19}GeV^{-1}
=1.972\times10^{-35}m$ (Planck length), $V_{YM}^{eff}(r)=2.58\times10^{19}GeV$
(Planck energy) and $F_{YMG}(r)=-2.6\times10^{38}GeV^{2}.$ The negative sign
of $F_{YMG}(r)$ is the hallmark of the asymptotic freedom and the weakness of
gravitational field ($F_{YMG}(r)<0$) at the Planck scale! This would also
disallow the formation of singularity at the centre of a blackhole (see
\textbf{Fig.3} for more details). Based on the foregoing, we therefore assert
that strong gravity theory is consistent and well-behaved down to Planck
distance scale ($\sim10^{-19}GeV^{-1}$).
\begin{figure}[ht!]
\begin{center}
\psfig{file=Fig.2.eps,scale=0.55,angle=0,clip=}
\end{center}
\caption{Summing graphs of strong gravitational gluodynamics.}
\end{figure}
\begin{figure}
\begin{center}
\psfig{file=Fig.3.eps,scale=0.45,angle=0,clip=}
\end{center}
\caption
{Graphs of pure Yang-Mills potential (in blue) and Yang-Mills-Gravity force (in red).}
\end{figure}
\subsection{Energy density of QCD vacuum}
The scale invariance of the strong gravity is broken at $\Lambda_{QCD}
\approx1GeV$ (\cite{ASCS}, P.324). Hence the associated distance scale would
be given as $r_{g}=G_{f}=1GeV^{-1}.$ In terms of the observable hadronic
radius (see Eq.(87)), we have $r_{h}=10GeV^{-1}=1.972\times10^{-15}m=1.972$
$fm.$ The QCD potential at this distance scale is given as $V_{YM}
^{eff}=2.495761GeV$ from the \textbf{Fig.3}, and the energy density
($\varepsilon$) of the QCD vacuum is calculated as
\begin{equation}
\varepsilon=\frac{V_{YM}^{eff}}{(r_{h})^{3}}=\frac{2.495761GeV}{(1.972)^{3}
fm^{3}}=0.325GeV\text{ }/\text{ }fm^{3}
\end{equation}
Eq.(138) is to be compared with the value calculated from the Lattice QCD
($\varepsilon \approx0.33GeV$ $/$ $fm^{3}$) (\cite{HENG}, P.54).
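Eq.(138) is a single division and can be verified directly from the quoted inputs:

```python
# Energy density of the QCD vacuum (Eq.(138)):
#   epsilon = V_YM_eff / r_h^3,
# with V_YM_eff = 2.495761 GeV and r_h = 1.972 fm from the text.
V_EFF_GEV = 2.495761
R_H_FM = 1.972

def vacuum_energy_density():
    """epsilon in GeV / fm^3."""
    return V_EFF_GEV / R_H_FM ** 3
```

This returns $\varepsilon\approx0.325GeV/fm^{3}$, the value compared above with the lattice estimate.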
\section{Existence of Quantum Yang-Mills Theory on $R^{4}$}
The existence of quantum Yang-Mills theory on $R^{4}$ (with its characteristic
mass gap) is one of the seven (now six) Millennium prize problems in
mathematics that was put forward by Clay Mathematics Institute in 2000
\cite{TOGO27}. The problem is stated as follows:
Prove that for any compact simple gauge group $G=SU(N)$, a fully renormalized
quantum Yang-Mills theory exists on $R^{4}$ and has a non-vanishing mass gap.
\subsection{Solution-plan}
The first thing to note here is that Yang-Mills theory is a non-abelian gauge
theory, and the idea of a gauge theory emerged from the work of Hermann Weyl
\cite{TOGO27A} (the same Weyl that formulated the Weyl's action that was used
in the formulation of strong gravity theory, based on the
\emph{Weyl-Salam-Sivaram's approach }\cite{ASCS}).
Maxwell's theory of electromagnetism is one of the classical examples of a
gauge theory. In this case, the gauge symmetry group of the theory is the
abelian group $U(1)$. If $A$ designates the $U(1)$ gauge connection (locally a
one-form on spacetime), then the potential of the field is the \textbf{linear}
two-form $F=dA$. To formulate the classical version of the Yang-Mills theory,
we must replace the gauge group $U(1)$ of electromagnetism by a compact gauge
group $SU(N)$, and the potential arising from the field would be a generalized
form of the Maxwell's: $F=dA+A\wedge A$. This formula still holds at the
quantum level of the theory because Yang-Mills field shows quantum behavior
that is very similar to its classical behavior at short distance scales
(\cite{TOGO27}, P.1-2). However, Maxwell's theory must be replaced by its
quantum version (i.e. QED; photon-electron interaction), and the nonlinear
part ($A\Lambda A$) must now describe the self-interaction of gluons (which is
the source of nonlinearity of the theory). The fact that the physics of strong
interaction is described by a non-abelian gauge group $G=SU(3)$ (i.e. QCD),
suggests immediately that the potentials of the four-dimensional quantum
Yang-Mills field must be the sum of the linear QED ($dA$) and nonlinear QCD
($A\wedge A$) potentials at the quantum level. Thus the first composite hurdle
for any would-be solution of the problem to cross is to: (1) obtain $QED+QCD$
potential at short distances with a single unified coupling constant. (2) The
two potentials must perfectly explain the individual physics of QED and QCD at
the quantum scale. (3) The two potentials must be obtained from
a\textbf{\ four-dimensional quantum gauge theory. }To surmount this composite
hurdle, one must first of all\emph{\ establish the existence of
four-dimensional quantum gauge theory with gauge group }$G=SU(N)$, and then
every other thing will follow naturally.
\subsubsection{Jaffe-Witten Existence Theorem (\cite{TOGO27},P.6)}
The official description of this (i.e. Yang-Mills existence and mass gap)
problem was put forward by Arthur Jaffe and Edward Witten. Their
\emph{existence theorem} is briefly paraphrased as follows: The existence of
four-dimensional quantum gauge theory (with gauge group $SU(N)$) can be
established mathematically, by defining a quantum field theory with local
quantum field operators in connection with the local gauge-invariant
polynomials, in the curvature $F$ and its covariant derivatives, such as
$TrF_{ij}F_{kl}(x)$. In this case, the correlation functions of the quantum
field operators should be in agreement with the predictions of
\textbf{perturbative renormalization (i.e. the theory must have UV
regularity)} and \textbf{asymptotic freedom (i.e. the weakness of strong force
at extremely short-distance scale)}; and there must exist a stress tensor and
an operator product expansion, admitting well-defined local singularities
predicted by asymptotic freedom.
By using the \emph{eye of differential geometry}, we observed that the
solution to the problem is concealed in the mathematical structures rooted in
the differential geometry. In other words, the above-stated existence theorem
is the mathematical description of the \emph{strong gravity formulation}.
\subsubsection{$R^{4}$-Weyl-Salam-Sivaram Theorem \cite{ASCS}}
The Weyl-Salam-Sivaram theorem is in fact the geometrical interpretation of
the Jaffe-Witten existence theorem. In the following, the local quantum field
operators are the two strong tensor fields ($G_{\mu}^{a}(x)$ and $G_{\nu}
^{b}(x)$; two gluons forming the double-copy construction) used to construct the
spacetime metric in the \textbf{section III} of this paper. These local
quantum fields have a direct connection (via $g=\det(G_{\mu}^{a}G_{\nu}
^{b}\eta_{ab})$) with the gauge-invariant local polynomials in the curvature
$C$ and its covariant derivatives: $\sqrt{-g}C_{\alpha \beta \gamma \delta
}C^{\alpha \beta \gamma \delta}(x)$. Note that "$Tr$" in the Jaffe-Witten
existence theorem denotes an invariant quadratic form on the Lie algebra of
group $G$. Similarly, $\sqrt{-g}$ in the Weyl-Salam-Sivaram theorem denotes an
invariant quadratic form on the gauge group $SU(3).$ The correlation function
in this case is nothing but the spacetime metric ($g_{\mu \nu}(x)$) constructed
out of the two local quantum fields ($G_{\mu}^{a}(x)$ and $G_{\nu}^{b}(x)$),
and used as a function of the spatial cum temporal distance between these two
random variables (gluons). We have painstakingly demonstrated that this
spacetime metric agrees, at short distance scales, with the predictions of
asymptotic freedom (i.e. the weakness of strong force at extremely short
distance scales (see \textbf{section IX})) and perturbative renormalization
(i.e. the existence of UV regularity of the theory at short distances; the
theory should be able to regularize its own divergences at extremely short
distance scales, say, $r=0$ (see \textbf{subsection C of section VIII})).
There also exists a stress energy-momentum tensor (\textbf{Eq.(17)}), and a
field product expansion (\textbf{Eq.(18)}), having local singularities encoded in
the three-dimensional Dirac delta functions (\textbf{Eqs. (19) and (30)})
predicted by asymptotic freedom. \textbf{Overall the broken-scale-invariant
Weyl action (Eq.(27)) is the required perturbative four-dimensional quantum
gauge field theory with its inherent gauge group }$SU(3)$\textbf{\ that gives
rise to color/Casimir factor }$4/3$\textbf{\ (Eq.(107))}. However, for this
statement to be valid the theory must possess both\emph{\ QED} and\emph{\ QCD}
\emph{potentials (i.e. }$F=dA+A\wedge A$\emph{). Happily, the theory does
possess these potentials with a single coupling constant (see Eqs. (106),
(110), (111) and (113)).}
The fact that the scale invariance of Weyl action is broken at the strong
scale $\Lambda_{QCD}=G_{f}^{-1}\approx1GeV$ (\cite{ASCS}, P.324) $-$ which is
equal to its dynamical chiral symmetry breaking scale \cite{TOGO26} $-$ is a
clear indication of the existence of proton as the fundamental hadron of the
theory. In this case, one must therefore investigate the ground state (neutron
state) of the proton state using isospin symmetry. But for this to be
possible, the gauge group that describes isospin symmetry must exist within
the framework of the theory. This is where custodial symmetry (Eq.(77)) kicks
in. The vector subgroup of custodial symmetry is in fact the isospin symmetry:
$SU(2)_{L}\times SU(2)_{R}\longrightarrow SU(2)_{V}$ \cite{TOGO28}. This
isospin symmetry then demands that the Hamiltonian ($H$) of proton-neutron
state must be zero. However, the near mass-degeneracy of the neutron and
proton in the $SU(2)$ doublet representation points to an approximate isospin
symmetry of the Hamiltonian describing the strong interaction \cite{DGR,
CITZ}. The mass gap in this picture is nothing but the energy difference
between the two sub-states of the proton-neutron configuration: $m_{gap}
=m_{n}-m_{p}\approx1.29MeV$. Hence the mass formula of QCD (Eq.(93)) and the
stable Higgs boson mass (see next section) must be expressed in terms of this
mass gap.
Conclusively, the two gauge groups that are needed to accurately describe the
solution to this Millennium prize problem are $SU(3)$ $-$ for the establishment
of the existence theorem $-$ and $SU(2)$ $-$ for describing the mass gap of the
solution. \textbf{Hence, the Weyl-Salam-Sivaram existence theorem of strong
gravity puts quantum gauge field theory (QFT) on a solid mathematical footing
of the differential geometry; in this sense, QFT is a full-fledged part of
mathematics.}
\section{Stability of Vacuum: A hint for Planck scale physics from
$m_{H}=126GeV$}
The 126GeV Higgs mass seems to be a rather special value, among all the
\emph{a priori} possible values, because it is just at the edge of the mass range
implying the stability of the Minkowski vacuum all the way down to the Planck
scale \cite{EFEMI3}. If one uses the Planck energy ($G_{N}^{-1}\approx
10^{19}GeV$) as the cutoff scale, then the vacuum stability bound on the mass
of the Higgs boson is found to be 129GeV. That is, vacuum stability requires
the Higgs boson mass to be $m_{H}=129GeV$ \cite{EFEMI4}. New physics beyond the
SM is thus needed to reconcile the discrepancy between the 126GeV and 129GeV
values of the Higgs boson mass. The first thing to observe here is that the
vacuum stability bound on the Higgs boson mass ($m_{H}=129GeV$) has exactly the
same "number-structure" as the values that we have been working with in this paper.
By using Eq.(93), we can write
\begin{equation}
m_{H}=const.\times m
\end{equation}
Comparing the energy scale of the pure Yang-Mills propagator in the Eq.(29)
($k^{-2}=1\times10^{17}GeV$) with the Planck scale ($\approx G_{N}^{-1}
\approx10^{19}GeV$) shows a magnitude difference of $10^{2}.$ By using this
value as our constant (i.e. $const.=\frac{G_{N}^{-1}}{k^{-2}}$), we get
exactly $m_{H}=129GeV$
\begin{equation}
m_{H}=m\left( \frac{G_{N}^{-1}}{k^{-2}}\right) =129GeV
\end{equation}
Eq.(140) is very important because: (1) it shows the coupling of Higgs mass
($m_{H}$) to the fundamental mass, and mass gap of the QCD vacuum
($m=1290MeV=10^{3}\times m_{gap}$). (2) It connects Higgs mass to the Planck
energy scale. To show the vacuum stability property of the Eq.(140),
we eliminate the fundamental mass of the QCD vacuum by using the value of
critical temperature from Eq.(89) ($T\equiv m=10T_{c}$)
\begin{equation}
m_{H}=T\left( \frac{G_{N}^{-1}}{k^{-2}}\right) =129GeV
\end{equation}
Obviously, $T>T_{c}$ (see subsection A of section VII). This is the well-known
vacuum stability condition in the \textbf{second-order phase transition
theory}; while the condition for vacuum instability is $T<T_{c}$ (see
\cite{FRAG} and the references therein).
The mass range of the Higgs boson that would allow the stability of vacuum is
given as \cite{IATO}
\begin{equation}
123GeV\leq m_{H}\leq129GeV
\end{equation}
By taking the average value of Eq.(142), we have
\begin{equation}
m_{H}^{avg}=\frac{123GeV+129GeV}{2}=126GeV
\end{equation}
Clearly, the 126GeV Higgs mass is special because it is just at the midpoint of
the mass range that guarantees the stability of the vacuum.
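As a quick arithmetic cross-check of Eqs.(140) and (143), the values above can be reproduced in a few lines (a minimal Python sketch; all scale values are the ones quoted in the text):

```python
# Values quoted in the text: m = 1.29 GeV (Eq.(93): 10^3 x m_gap),
# Planck scale G_N^{-1} ~ 10^19 GeV, Yang-Mills propagator scale k^{-2} ~ 10^17 GeV.
m = 1.29          # GeV
GN_inv = 1e19     # GeV
k_inv2 = 1e17     # GeV

m_H = m * (GN_inv / k_inv2)       # Eq.(140): m_H = m * (G_N^{-1} / k^{-2})
m_H_avg = (123 + 129) / 2         # midpoint of the stability window, Eq.(143)

print(round(m_H, 1))   # → 129.0
print(m_H_avg)         # → 126.0
```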
\section{The Enigmatic Neutrino}
\begin{quotation}
"A cosmic mystery of immense proportions, once seemingly on the verge of
solution, has deepened and left astronomers and astrophysicists more baffled
than ever. The crux ...is that the vast majority of the mass of the universe
seems to be missing." - \textbf{William J. Broad (1984)}
"A billion neutrinos go swimming in heavy water: \ one gets wet." -
\textbf{Michael Kamakana}
\end{quotation}
Studying the properties of neutrinos has been one of the most exciting and
challenging activities in particle physics and astrophysics ever since Pauli,
"the unwilling father" of neutrino, proposed their existence in 1930 in order
to find the desperate remedy for the law of conservation of energy, which
appeared to be violated in $\beta-$decay processes. Since then, many hidden
facts about neutrinos have been unveiled step by step\cite{FBP, SWJ}. In spite
of their \emph{weakly interacting nature}, we have so far gathered an
avalanche of knowledge about neutrinos. From the neutrino oscillation
experiments (an effort that has been duly awarded the 2015 Nobel prize in
physics \cite{TKAB}), we learned that there are two major problems that plague
neutrino physics:
\textbf{(1)} Determination of the absolute masses of neutrinos. The results
from the neutrino oscillation experiments have confirmed the massive nature of
neutrino. However, this confirmation provides a crack in the foundation of the
Standard Model (SM) of particle physics, because SM treats neutrinos as
massless particles. This disagreement between SM and experimental results
(which opens a new door to the physics beyond SM) constitutes what is called
"neutrino mass problem"\cite{TKAA7, ABMD3, ASNE,KEG,PAD,YABD,FANE,JAHN}.
\textbf{(2) }Another major problem in neutrino physics, somewhat related to the
one mentioned above, is to establish whether the neutrinos with
definite masses $m_{k}$ are Dirac particles (with particles and antiparticles
being different objects thereby conserving the lepton number) or Majorana
particles (with particles and antiparticles being the same thereby violating
lepton number). An experimental distinction between these two seems to be much
more complicated than the confirmation of non-vanishing mass of the neutrino.
These are the two major problems in neutrino physics that have hitherto defied
all solutions.
Based on the formulation of the strong gravity theory (that hadronic
interactions become weak in strength at small invariant separation), we assert
that the absolute masses of neutrinos are actually calculable. More
importantly, we will demonstrate,\textbf{\ in this section}, that neutrinos
are Majorana particles! In a few lines, we explain the theoretical properties of
the neutrino's nature that form the basis for using the strong gravity formulation.
\textbf{(i) }All types of neutrino participate in weak nuclear and
gravitational interactions with ordinary matter \cite{YVKO}. This means that
their physics can be explained by using a gas of weakly coupled particles
(a configuration that we used to solve the problem of asymptotic
freedom (i.e., calculation of the dimensionless strong coupling constant at
the starting point of QCD evolution in this paper)). The fact that strong
gravity combines strong nuclear force (which becomes weak at extremely short
distance scale: $r\ll \Lambda_{QCD}^{-1}$) and gravitational force into one
unified force makes the determination of neutrino masses possible within the
framework of massive spin-2 field theory with $D=5$: note that the Lagrangian
of the Majorana neutrino is valid only when $D=5$.
\textbf{(ii)} Majorana neutrino Lagrangian possesses symmetry axis /
CP-symmetry (\cite{TOGO19}, P. 203-205).
These two points form the basis of our solution-plan for solving the neutrino
mass problem. This approach shows a compelling interplay between gravitation
and principle of linear superposition of different mass eigenstates of
neutrino as alluded to in \cite{DVACB}.
\subsection{Effective Majorana Mass Matrix}
Since the Majorana neutrino has only left-handed chiral field $\nu_{L},$ which
is present in the SM, it is natural to ask if it is possible for SM
neutrinos to have Majorana masses. The simple answer is that it is not
possible, due to the fact that the left-handed chiral field $\nu_{L}$ forms a weak
\textbf{isospin triplet} with hypercharge $Y=-2.$ The fact that the SM does not
contain any weak isospin triplet with $Y=2$ clearly shows that it is not
possible to have a renormalizable Lagrangian term which can generate Majorana
neutrino masses (\cite{TOGO19}, P. 205).
However, the lowest dimensional Lagrangian which could generate Majorana
neutrino masses that one can construct with the SM fields, respecting the SM
symmetries, is the lepton number violating Lagrangian (with $D>4$)
(\cite{TOGO19}, P. 216)
\begin{equation}
\mathcal{L}_{d}=M_{X}^{4-D}\underset{\alpha \beta}{\sum}g_{\alpha \beta}(L_{\alpha L}^{\prime T}\tau_{2}\Phi)C^{\dag}(\Phi^{T}\tau_{2}L_{\beta L}^{\prime})+H.c.
\end{equation}
where $M_{X}$ is a heavy mass (of a single color triplet Higgs scalar)
characteristic of the symmetry-breaking scale of the high-energy unified
theory, $D$ is called a \emph{dimension-D operator} and its value in this case
is $D=5.$ $g_{\alpha \beta}$ is a yet-unknown symmetric $3\times3$ matrix of
coupling constants. With $D=5,$ Eq.(144) becomes
\begin{equation}
\mathcal{L}_{5}=\frac{1}{M_{X}}\underset{\alpha \beta}{\sum}g_{\alpha \beta}(L_{\alpha L}^{\prime T}\tau_{2}\Phi)C^{\dag}(\Phi^{T}\tau_{2}L_{\beta L}^{\prime})+H.c.
\end{equation}
The electroweak symmetry breaking VEV ($\upsilon=246GeV$ \cite{DBAL}) of the
Higgs field leads to the Majorana neutrino mass term (\cite{TOGO19}, P. 216)
\begin{equation}
\mathcal{L}_{mass}^{M}=\frac{1}{2}\frac{\upsilon^{2}}{M_{X}}\underset{\alpha \beta}{\sum}g_{\alpha \beta}\text{ }\nu_{\alpha L}^{\prime T}C^{\dag}\nu_{\beta L}^{\prime}+H.c.
\end{equation}
From Eq.(146), the Majorana mass matrix has elements (\cite{TOGO19}, P. 216)
\begin{equation}
M_{\alpha \beta}^{L}=\frac{\upsilon^{2}}{M_{X}}\text{ }g_{\alpha \beta}
\end{equation}
with (\cite{TOGO19}, P. 208)
\begin{equation}
M_{\alpha \beta}^{L}=M_{\beta \alpha}^{L}
\end{equation}
Eq.(148) is the reason why the $g_{\alpha \beta}$ matrix must be symmetric.
With $\alpha=\beta=0,1,2$, Eq.(147) reduces to
\begin{equation}
M_{00}^{L}=\frac{\upsilon^{2}}{M_{X}}\text{ }g_{00}
\end{equation}
\begin{equation}
M_{11}^{L}=\frac{\upsilon^{2}}{M_{X}}\text{ }g_{11}
\end{equation}
\begin{equation}
M_{22}^{L}=\frac{\upsilon^{2}}{M_{X}}\text{ }g_{22}
\end{equation}
\emph{(It is worth noting that if all the diagonal elements of
$g_{\alpha \beta}$ are 1's, then Eqs.(147) and (149-151) reduce
to Eq.(74).)}
The gravitational potential ($g_{\mu \nu}$) which is capable of representing a
combined gravitational and electromagnetic field outside a \textbf{spherically
symmetric material distribution }is given as \cite{MIW}
\begin{equation}
g_{\mu \nu}=\left(
\begin{array}
[c]{cccc}
g_{00} & g_{01} & 0 & 0\\
g_{10} & g_{11} & 0 & 0\\
0 & 0 & g_{22} & 0\\
0 & 0 & 0 & g_{33}
\end{array}
\right)
\end{equation}
where
\begin{equation}
g_{00}=\frac{(1-\frac{m}{2r})^{2}}{(1+\frac{m}{2r})^{2}}+\frac{\zeta^{2}}{r(1+\frac{m}{2r})^{2}}
\end{equation}
\begin{equation}
g_{01}=g_{10}=-\frac{\zeta(1+\frac{m}{2r})}{r^{1/2}}
\end{equation}
\begin{equation}
g_{11}=(1+\frac{m}{2r})^{4}
\end{equation}
\begin{equation}
g_{22}=g_{11}r^{2}
\end{equation}
\begin{equation}
g_{33}=g_{22}\sin^{2}\theta=g_{11}r^{2}\sin^{2}\theta
\end{equation}
The quantity $m$ represents an effective gravitational mass, and $\zeta$ is an
electric-charge dependent parameter \cite{MIW}. Since neutrinos are
electrically neutral, we set $\zeta$ to zero: $\zeta=0.$ Hence Eq.(152)
reduces to
\begin{equation}
g_{\mu \nu}=\left(
\begin{array}
[c]{cccc}
g_{00} & 0 & 0 & 0\\
0 & g_{11} & 0 & 0\\
0 & 0 & g_{22} & 0\\
0 & 0 & 0 & g_{33}
\end{array}
\right)
\end{equation}
and
\begin{equation}
g_{00}=\frac{(1-\frac{m}{2r})^{2}}{(1+\frac{m}{2r})^{2}}
\end{equation}
\[
g_{01}=g_{10}=0
\]
This matrix (Eq.158) has Euclidean space signature $++++.$ It's worth noting
that for us to impose a Lorentz signature on the above matrix, we must invoke
the Levi-Civita indicator on the matrix to account for special
relativity in the limiting case, and to also transform the metric from 4
dimensions to 3+1 dimensions. It doesn't matter whether we insert the Lorentz
signature before or after solving the Eq.(158), due to the fact that it is a
diagonalized matrix \cite{FIMMW}.
The fact that Majorana neutrino Lagrangian preserves CP symmetry means that it
possesses symmetry axis ($\theta=0$). The reason why Majorana neutrino
Lagrangian preserves CP symmetry is that Majorana particles are invariant to
CP transformation (because Majorana particle = Majorana antiparticle)
(\cite{TOGO19}, P. 203-205).
Consequently (by setting $\theta=0$), Eqs.(157-158) reduce to
\begin{equation}
g_{33}=0
\end{equation}
\begin{equation}
g_{\mu \nu}=\left(
\begin{array}
[c]{cccc
g_{00} & 0 & 0 & 0\\
0 & g_{11} & 0 & 0\\
0 & 0 & g_{22} & 0\\
0 & 0 & 0 & 0
\end{array}
\right)
\end{equation}
Hence, Eq.(147) becomes
\begin{equation}
M_{\alpha \beta}^{L}=\frac{\upsilon^{2}}{M_{X}}\text{ }\left(
\begin{array}
[c]{cccc}
g_{00} & 0 & 0 & 0\\
0 & g_{11} & 0 & 0\\
0 & 0 & g_{22} & 0\\
0 & 0 & 0 & 0
\end{array}
\right)
\end{equation}
By solving Eq.(159) completely for the mass $m$, we have
\begin{equation}
m=\frac{2r(1-g_{00}^{1/2})}{(1+g_{00}^{1/2})}
\end{equation}
Multiplying Eq.(159) by $\frac{(1+\frac{m}{2r})^{2}}{(1+\frac{m}{2r})^{2}}$
and solving the resulting equation completely for the mass $m$ gives
\begin{equation}
m=\pm2r[1-(g_{11}g_{00})^{1/2}]^{1/2}
\end{equation}
where the $\pm$ sign in Eq.(164) leads to the same result. By comparing Eq.(163)
with Eq.(164), we get
\begin{equation}
g_{11}=\frac{1}{g_{00}}\left[ 1-\frac{(1-g_{00}^{1/2})^{2}}{(1+g_{00}^{1/2})^{2}}\right] ^{2}
\end{equation}
Since our calculated value for $g_{00}$ is $g_{00}=0.1797,$ Eqs.(156) and
(165) reduce to
\begin{equation}
g_{11}=3.8922
\end{equation}
\begin{equation}
g_{22}=3.8922r^{2}
\end{equation}
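The consistency of Eqs.(163)-(165) and the value quoted in Eq.(166) can be checked numerically (a minimal Python sketch; $g_{00}=0.1797$ is the value given in the text, and the last digit of Eq.(166) depends on rounding):

```python
g00 = 0.1797                                # value quoted in the text

s = g00 ** 0.5
m_over_2r_163 = (1 - s) / (1 + s)           # Eq.(163): m/(2r) in terms of g00
g11 = (1 / g00) * (1 - (1 - s) ** 2 / (1 + s) ** 2) ** 2   # Eq.(165)
m_over_2r_164 = (1 - (g11 * g00) ** 0.5) ** 0.5            # Eq.(164), + sign

# Eqs.(163) and (164) must give the same mass for this g11
assert abs(m_over_2r_163 - m_over_2r_164) < 1e-12
print(round(g11, 3))   # → 3.892
```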
We now look for an ingenious way to eliminate $r^{2}$ in Eq.(167). It is
tempting to straightforwardly use unit sphere formalism but this direct
approach will not work because $M_{\alpha \beta}^{L}$ is a linear superposition
of three different neutrino masses, albeit from the same source. The best
mathematical approach that we can use to circumvent this problem is the
3-sphere formulation (note that this approach is anchored on the fact that the
3-sphere is a sphere in 4-dimensional Euclidean space) \cite{GEOLE, MAAPE}
\begin{align}
r^{2} & =\overset{3}{\underset{i=0}{\sum}}(x_{i}-C_{i})^{2}=(x_{0}-C_{0})^{2}+(x_{1}-C_{1})^{2}+\nonumber \\
& (x_{2}-C_{2})^{2}+(x_{3}-C_{3})^{2}
\end{align}
We turn Eq.(168) on its head by using it to represent three spheres
(representing three types of neutrino) with a common origin. This reduces
Eq.(168) to an ordinary linear superposition of three spheres (in two
dimensions, they reduce to circles) with a common origin / source. Suppose we
further impose the condition that the common origin is centred at zero (i.e.,
$x_{0}-C_{0}=0$); then Eq.(168) reduces to
\begin{equation}
r^{2}=\overset{3}{\underset{i=1}{\sum}}(x_{i}-C_{i})^{2}=(x_{1}-C_{1})^{2}+(x_{2}-C_{2})^{2}+(x_{3}-C_{3})^{2}
\end{equation}
where $x_{1}-C_{1},$ $x_{2}-C_{2}$ and $x_{3}-C_{3}$ are the radii of the
spheres. By using the unit sphere formalism individually on the three spheres,
Eq.(169) reduces to
\begin{equation}
r^{2}=\overset{3}{\underset{i=1}{\sum}}(x_{i}-C_{i})^{2}=3
\end{equation}
Thus, Eq.(167) becomes
\begin{equation}
g_{22}=11.6766
\end{equation}
and Eq.(161) reduces to
\begin{equation}
g_{\mu \nu}=\left(
\begin{array}
[c]{cccc
0.1797 & 0 & 0 & 0\\
0 & 3.8922 & 0 & 0\\
0 & 0 & 11.6766 & 0\\
0 & 0 & 0 & 0
\end{array}
\right)
\end{equation}
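With the unit 3-sphere giving $r^{2}=3$ (Eq.(170)), the remaining entry is simple arithmetic (a one-line check using the quoted value of $g_{11}$):

```python
g11 = 3.8922            # Eq.(166)
r_squared = 3           # Eq.(170): three unit spheres with a common origin

g22 = g11 * r_squared   # Eq.(167) with r^2 eliminated
print(round(g22, 4))    # → 11.6766
```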
With $M_{X}=1.63\times10^{16}GeV$ (see subsection C of section VI) and
$\upsilon=246GeV$ \cite{DBAL}, $\frac{\upsilon^{2}}{M_{X}}=3.7meV$ (see Eq.(79)).
Hence Eqs.(149-151) reduce to
\begin{equation}
m_{0}=0.665meV
\end{equation}
\begin{equation}
m_{1}=14.401meV
\end{equation}
\begin{equation}
m_{2}=43.203meV
\end{equation}
where $m_{0}\equiv M_{00}^{L},$ $m_{1}\equiv M_{11}^{L},$ $m_{2}\equiv
M_{22}^{L}$ and $m_{3}\equiv0.$ And Eq.(162) reduces to
\begin{equation}
M_{\alpha \beta}^{L}=3.7meV\left(
\begin{array}
[c]{cccc}
0.1797 & 0 & 0 & 0\\
0 & 3.8922 & 0 & 0\\
0 & 0 & 11.6766 & 0\\
0 & 0 & 0 & 0
\end{array}
\right)
\end{equation}
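The entries of Eq.(176) and the masses in Eqs.(173)-(175) follow from multiplying the quoted metric entries by $\upsilon^{2}/M_{X}$ (a minimal Python sketch; all inputs are the values quoted in the text, with the scale rounded to 3.7meV as in Eq.(79)):

```python
v = 246.0        # electroweak VEV in GeV
M_X = 1.63e16    # heavy mass scale in GeV (subsection C of section VI)

scale_meV = round(v ** 2 / M_X * 1e12, 1)   # v^2/M_X in meV
diag = [0.1797, 3.8922, 11.6766]            # nonzero entries of Eq.(172)
masses = [round(scale_meV * g, 3) for g in diag]   # Eqs.(173-175), in meV

print(scale_meV)   # → 3.7
print(masses)      # → [0.665, 14.401, 43.203]
```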
For the purpose of book-keeping, we relabel $m_{0}\equiv m_{1},$ $m_{1}\equiv
m_{2}$ and $m_{2}\equiv m_{3}$. It is evident from Eqs.(173-175) that
$m_{1}<m_{2}<m_{3},$ which is clearly a Normal Mass Hierarchy signature. This
can also be confirmed by considering the approach of M.
Kadastik et al. \cite{MKMR}
\begin{equation}
N_{1}=\frac{-m_{1}^{2}+m_{2}^{2}+3m_{3}^{2}}{2m_{1}^{2}+m_{2}^{2}}
\end{equation}
with
\begin{align}
N_{1} & >1\rightarrow \text{ normal mass hierarchy}\nonumber \\
N_{1} & <1\rightarrow \text{ inverted mass hierarchy}\nonumber \\
N_{1} & \approx1\rightarrow \text{ degenerate masses}
\end{align}
Taking the values of $m_{1},m_{2},$ and $m_{3}$ from Eqs.(173-175), Eq.(177)
gives the value $N_{1}\approx28,$ which satisfies the criterion of normal mass
hierarchy in Eq.(178).
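The hierarchy discriminant of Eqs.(177)-(178) can be evaluated directly from the masses above (a quick numerical check):

```python
m1, m2, m3 = 0.665, 14.401, 43.203   # meV, Eqs.(173-175)

# Eq.(177): hierarchy discriminant of M. Kadastik et al.
N1 = (-m1 ** 2 + m2 ** 2 + 3 * m3 ** 2) / (2 * m1 ** 2 + m2 ** 2)

print(round(N1))   # → 28, i.e. N1 > 1: normal mass hierarchy
```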
The mass-squared difference is defined mathematically as
\begin{equation}
\Delta m_{ij}^{2}=m_{i}^{2}-m_{j}^{2}
\end{equation}
where $i>j.$ Based on the Eq.(179) (and taking into account Eqs.(173-175)), we
have the following equations
\begin{equation}
\Delta m_{21}^{2}=m_{2}^{2}-m_{1}^{2}=2.07\times10^{-4}eV^{2}
\end{equation}
\begin{equation}
\Delta m_{31}^{2}=m_{3}^{2}-m_{1}^{2}=1.87\times10^{-3}eV^{2}
\end{equation}
\begin{equation}
\Delta m_{32}^{2}=m_{3}^{2}-m_{2}^{2}=1.66\times10^{-3}eV^{2}
\end{equation}
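The splittings can be recomputed directly from Eqs.(173)-(175) (a quick check; masses in meV, results converted to $eV^{2}$):

```python
m1, m2, m3 = 0.665, 14.401, 43.203   # meV, Eqs.(173-175)

def dm2_eV2(mi, mj):
    # Eq.(179), converted from meV^2 to eV^2 (1 meV^2 = 1e-6 eV^2)
    return (mi ** 2 - mj ** 2) * 1e-6

print(f"{dm2_eV2(m2, m1):.3g}")   # → 0.000207
print(f"{dm2_eV2(m3, m1):.3g}")   # → 0.00187
print(f"{dm2_eV2(m3, m2):.3g}")   # → 0.00166
```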
\subsubsection{Experimental Test}
\textbf{(1)} The combined results of all solar experiments with the
Super-Kamiokande-I zenith spectrum and KamLAND data give $\Delta m_{sol}^{2}=\Delta m_{21}^{2}=2\times10^{-4}eV^{2}$ at $99.73\%$ C.L. \cite{HNWJ}.
This experimental value is compatible with our Eq.(180), thus confirming the
validity of our $m_{1}$ and $m_{2}$ values.
\textbf{(2) }From the atmospheric neutrino oscillation experiments, the bound
on the mass of the heaviest neutrino is $m_{3}\gtrsim40meV$ \cite{HVKLK}. This
value experimentally confirms our value in Eq.(175). We therefore assert
that our values of $m_{1},m_{2}$ and $m_{3}$ conform to the experimental data.
\subsection{Observational Test}
The energy density of light massive neutrinos is given as (\cite{TOGO19}, P.
590-591)
\begin{equation}
\Omega_{\nu}^{0}h^{2}=\frac{\overset{3}{\underset{i=1}{\sum}}m_{i}}{94.14eV}
\end{equation}
where $\Omega_{\nu}^{0}h^{2}$ is the neutrino energy density (which is also
known as the \emph{Gershtein-Zeldovich limit or Cowsik-McClelland limit}) and
$\overset{3}{\underset{i=1}{\sum}}m_{i}$ is the sum of the three active
neutrino masses. From Eqs.(173-175), $\overset{3}{\underset{i=1}{\sum}}m_{i}=0.058269eV.$
To be consistent with the precision of the denominator of Eq.(183), we round
the calculated sum of the neutrino masses to two decimal places. Thus
\begin{equation}
\overset{3}{\underset{i=1}{\sum}}m_{i}\approx0.06eV
\end{equation}
Consequently,
\begin{equation}
\Omega_{\nu}^{0}h^{2}\approx0.00064
\end{equation}
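The arithmetic of Eqs.(183)-(185) in a few lines (a minimal check):

```python
masses_eV = [0.000665, 0.014401, 0.043203]   # Eqs.(173-175) in eV

total = round(sum(masses_eV), 2)   # Eq.(184): rounded sum of the masses
omega = total / 94.14              # Eq.(183): neutrino energy density

print(total)              # → 0.06
print(round(omega, 5))    # → 0.00064
```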
Eqs.(184) and (185) are the fiducial parameter values that have been taken to
be valid for the background Cosmology to be consistent with the most recent
cosmological measurements \cite{JBDH}. Here, it turns out that the South Pole
Telescope (SPT) cluster abundance is lower than preferred by either the WMAP9
or Planck+WMAP9 polarization data for the Planck base $\Lambda CDM$ model; but
assuming a normal mass hierarchy for the sum of the neutrino masses with
$\sum m_{\nu}\approx0.06eV$ (\cite{TOGO18}, P.237 \& 239), the data sets are
found to be consistent at the $1.0\sigma$ level for WMAP9 and $1.5\sigma$
level for Planck+WMAP9 \cite{SBDCQ}. Obviously, our calculations confirm that
the Planck base $\Lambda CDM$ model's prediction of sum of the neutrino masses
is correct.
\section{Dark Energy}
For the strong gravity theory to be a complete theory of \textbf{QCD} and
\textbf{gravity}, it must pass tests at both small and large distance
scales. The large distance, here, is the cosmological scale where "dark
energy" is dominant $-$ dark energy ($\rho$) is an unknown form of energy,
which was invented to account for the acceleration of the expanding universe.
The observed value (upper limit) of $\rho$ is $\rho^{observed}\approx
(2.42\times10^{-3}eV)^{4}$ \cite{AGRI} $-$ A major outstanding problem is that
most quantum field theories \textbf{naively} predict a huge value for the dark
energy: the prediction is wrong by a factor of $10^{120}$ \cite{AGRI1}. The
origin of the problem is now clear to us: \emph{Eqs.(118) and (120) clearly
show that the energy-momentum tensor is related to the invariant (Weyl)
Lagrangian density, but not to the total energy density of a vacuum, which is
not operationally measurable, due to quantum fluctuations!}
The energy density of any given system, such as the universe, is categorized
into two parts: one is due to the true vacuum ($\rho$) and the other to the
matter and radiation (pressure ($p$)) present in the system. These two types
of energy density are related by the energy-momentum tensor $T_{\mu \nu}$
\cite{SW-2}
\begin{equation}
T_{\mu \nu}=\text{ }\left(
\begin{array}
[c]{cccc}
\rho & 0 & 0 & 0\\
0 & -p & 0 & 0\\
0 & 0 & -p & 0\\
0 & 0 & 0 & -p
\end{array}
\right)
\end{equation}
\emph{(Note that by putting Eq.(186) into Eq.(134), two things happen: (1)
The energy density of the true vacuum becomes negative, meaning repulsive
gravity and (2) the energy density of matter and radiation becomes positive,
meaning attractive gravity. These two results are compatible with the
observations. The gravity of ordinary matter/energy is always attractive,
while the gravity of true vacuum (i.e., dark energy) is always repulsive.)}
Since it has been observationally confirmed that the acceleration of the
expanding universe is controlled by the energy density of the true vacuum
($\rho$), but not by the matter/energy content of the universe, we can write
(from Eq.(186))
\begin{equation}
T_{00}=\rho
\end{equation}
By combining Eq.(120) with Eq.(187), we get Eq.(50)
\begin{equation}
\rho=g_{00}[E_{vac}]^{4}
\end{equation}
Note that the Weyl Lagrangian density scales as $C_{\alpha \beta \gamma \delta
}C^{\alpha \beta \gamma \delta}\sim \lbrack E_{vac}]^{4}$ (where $E_{vac}$ denotes
the effective (Weyl) Lagrangian of the vacuum), due to the scale invariance of
the Weyl action in Eq.(7) (\cite{TOGO19}, P.206). To see the repulsive
nature of the dark energy, we combine Eqs.(134) and (188) to get
\begin{equation}
G_{\mu \nu}=-8\pi G_{N}\text{ }\left(
\begin{array}
[c]{cccc}
\rho & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{array}
\right)
\end{equation}
where $G_{\mu \nu}\equiv R_{\mu \nu}-\frac{1}{2}g_{\mu \nu}R$ is the
Einstein tensor. The negative sign in Eq.(189) is the hallmark of the
repulsive nature of dark energy. It is to be noted that we have not invoked
the presence of the famous cosmological constant ($\Lambda$) in Eq.(134)
because it is not needed for the expanding or contracting universe
(\cite{TOGO18}, P.232). All that is needed for the accelerating expansion of
the universe, as currently observed, is Eq.(189); the combination of
Eqs.(134) and (186) tells us that the universe will either expand (if the
right-hand side of Eq.(134) is \textbf{negative (}$\rho$\textbf{)}) or
contract (if the right-hand side of Eq.(134) is \textbf{positive (p)}).
Albert Einstein was right after all: the introduction of the fudge factor
($\Lambda$) was his greatest blunder. \emph{You cannot out-einstein Einstein!}
Using Eqs.(56) and (79), Eq.(188) becomes
\begin{equation}
\rho=(2.41\times10^{-3}eV)^{4}
\end{equation}
Obviously, Eq.(190) compares favorably with the upper bound value of the
observed $\rho$ ($\rho^{observed}=(2.42\times10^{-3}eV)^{4}$) \cite{AGRI}.
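Eq.(190) follows from Eq.(188) with the quoted values $g_{00}=0.1797$ and $E_{vac}=3.7meV$ (a quick numerical sketch):

```python
g00 = 0.1797      # metric entry from Eq.(159), value quoted in the text
E_vac = 3.7e-3    # effective vacuum scale v^2/M_X in eV

rho = g00 * E_vac ** 4        # Eq.(188), in eV^4
rho_quarter = rho ** 0.25     # characteristic dark energy scale in eV

print(round(rho_quarter * 1e3, 2))   # → 2.41, i.e. rho ≈ (2.41e-3 eV)^4
```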
It has long been suggested that the nonrelativistic massive neutrinos may give
a significant contribution to the energy density (i.e. the so-called dark
energy) of the universe (\cite{TOGO19}, P.590). This statement has been
confirmed to be true via Eqs.(176) and (188): with $E_{vac}=\upsilon^{2}/M_{X}=3.7meV$.
We understand, of course, that the energy of vacuum is extremely large (due to
quantum fluctuations) but the \textbf{strong gravity }and
\textbf{Majorana-neutrino Lagrangian (a conserved quantity that encodes the
information about the dynamics of the universe)} tell us that it is only the
effective Lagrangian of the universe that is physically measurable (i.e.
$\upsilon^{2}/M_{X}=3.7meV$).
\section{Dark Matter OR Leftover Yang-Mills-Gravity Force?}
\subsection{The Galaxy Rotation Problem (GRP)}
The GRP is the inconsistency between the theoretical prediction and the
observed galaxy rotation curves, assuming a centrally dominated mass
associated with the observed luminous material. The direct computation of mass
profiles of galaxies from the distribution of stars and gas in spirals, and
mass-to-light ratios in the stellar disks, utterly disagrees with the masses
derived from the observed rotation curves using the Newtonian force law of
gravity. Based on Newtonian dynamics, most of the mass of the galaxy had
to be in the galactic bulge near the center, and stars and gas in the
disk portion should orbit the center at decreasing velocities with increasing
radial distance away from the galactic center \cite{V.R,ECO,VTR}: This is
achieved by equating the centripetal force experienced by the orbiting
gas/stars to the Newton force law (\cite{TOGO18}, P.241)
\begin{align}
F_{c} & =F_{N},\nonumber \\
v & =\sqrt{\frac{G_{N}\text{ }M}{r}}\implies v(r)\propto1/\sqrt{r}
\end{align}
where $v$ is the speed of the orbiting star, $M$ is the centrally dominated
mass of the galaxy and $r$ is the radial distance from the center of the galaxy.
However, the actual observations of the rotation curve of spirals completely
disagree with the Eq.(191): the curves do not decrease in the expected inverse
square root relationship. Rather, in most galaxies observed, one finds that
$v$ becomes approximately constant out to the largest values of radial
distance $(r)$ where the rotation curve can be measured (\cite{TOGO18},
P.241). A solution to this problem was to hypothesize the existence of a
substantial invisible amount of matter to account for this inexplicable extra
mass/gravity force that keeps the speed of orbiting stars/gas approximately
constant for extremely large values of $r$. This extra mass/gravity was
dubbed "dark matter" \cite{FKA}.
Though dark matter is by far the most accepted explanation of the rotation
problem, other alternatives have been proposed with varying degrees of
success. The most notable of these alternatives is Modified Newtonian
Dynamics (MOND), which involves modifying the Newton force law by
phenomenologically adding a small fudge factor $\alpha_{0}$
\begin{equation}
F_{MOND}=\frac{G_{N}\text{ }Mm}{r^{2}}+\alpha_{0}
\end{equation}
Within the central bulge of the galaxy, the first term of Eq.(192) dominates,
and at the largest values of $r$ where the rotation curve can be measured (the
domain of dark matter), the second term dominates. MOND has had a remarkable
amount of success in predicting the flat rotation curves of
low-surface-brightness galaxies, matching the Tully-Fisher relation of the
baryonic distribution, and the velocity dispersions of the small orbiting
galaxies of the local group \cite{22}.
The ensuing fundamental question here is: Do we really need to modify
\textbf{Newtonian dynamics} and \textbf{Einstein's GR} before we could account
for this extra gravitational force with no "origin"? The answer is a big No!
The two theories are fantastically accurate in their respective domains of
validity. But the core of the problem is that we assumed that both theories
should be valid at all distance scales (from particle physics scale (say,
Planck scale) to the edge of the Universe); but the irony is that they are
not. It turns out that in order to solve GRP, one needs a force law that is
valid for all distance scales. This is where BCJ construction kicks in. As we
have painstakingly demonstrated (by obtaining Eqs.(106) and (134) from
Eq.(27)), the major conclusion of the double copy construction is the existence
of gauge/gravity duality. This duality property led to the formulation of the
Yang-Mills-Gravity (YM) force in the Eq.(137). A close perusal of Eqs.(137)
and (192) shows that both equations are essentially the same; and that
Eq.(137) explains the \textbf{universal rotation curve perfectly, by producing
a flatly stable curve at large values of }$r$ (see red curve of the
\textbf{Fig.3}) $-$ (a universal rotation curve can be expressed as the sum of
exponential distributions of visible matter that reduce to zero with large
values of $r$ away from the center of galaxy, and spherical dark matter halo
(just like $\sigma$ in the Eq.(137)) that tapers to flat rotation curve with
constant speed and gravitational force \cite{45}) $-$ Hence, the deep
significance of structure of the Eq.(137) to cosmology is not accidental but
fundamental to the evolutionary histories of our universe.
As pointed out by \emph{V. de Sabbata and C. Sivaram}, a deviation from the
Newton's inverse square law can arise naturally from $R+R^{2}$ theory (such as
strong gravity theory), whose solution gives a Newtonian/Coulomb potential and
a Yukawa term (\cite{VDS}, P.4). Thus it is natural to investigate the
behavior of Eq.(137) on a cosmological scale. Of course, Eq.(137) tapers
to $\sigma$ on the cosmological scale:
\begin{equation}
F_{YMG}=0.299GeV^{2}=2.449\times10^{5}N
\end{equation}
with mass ($M_{g}$)
\begin{equation}
M_{g}=\sqrt{F_{YMG}}=546.809MeV
\end{equation}
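The conversion of $F_{YMG}=0.299GeV^{2}$ to SI units can be sketched using the standard constants $\hbar c\approx1.9733\times10^{-16}GeV\cdot m$ and $1GeV\approx1.602\times10^{-10}J$ (these constants are not from the text; the resulting SI value differs from the quoted $2.449\times10^{5}N$ by about 1\%, consistent with rounding in the conversion):

```python
F_GeV2 = 0.299      # Eq.(193), natural units (GeV^2)

# standard conversion constants (assumed, not taken from the text):
GeV_in_J = 1.602e-10    # 1 GeV in joules
hbar_c = 1.9733e-16     # hbar*c in GeV*m

F_newton = F_GeV2 * GeV_in_J / hbar_c   # 1 GeV^2 of force = GeV_in_J/hbar_c N
M_g_MeV = F_GeV2 ** 0.5 * 1e3           # Eq.(194): M_g = sqrt(F_YMG), in MeV

print(f"{F_newton:.3g}")    # → 2.43e+05
print(round(M_g_MeV, 3))    # → 546.809
```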
As a result of the Eqs.(137) and (194), the following facts emerge: (1) empty
space/vacuum is permeated with constant-attractive-gravitational force (dark
matter) with mass $M_{g}=546.809MeV$. (2) The dark matter is stable on
cosmological time scales due to the flat curve property of $M_{g}$ for
$r\longrightarrow \infty$ (see red curve in the Fig.3). (3) Newtonian dynamics
and Einstein's GR need no modifications. (4) MOND is phenomenologically
correct and happens to be compatible with YMG force law.
\section{Repulsive Gravity and Cosmic Inflation}
It is true (from the Eq.(137)) that $F_{YMG}$ can only get more repulsive as
we probe shorter and shorter distances. As such, the separating distance
between two gluons cannot taper to zero. This means that the theory "realizes
asymptotic freedom" because two gluons cannot sit on top of each other (i.e.
separating distance $r=0$ is forbidden), hence they are \textbf{almost free}
to move around due to the \textbf{non-existence of attractive force at}
$r\ll \Lambda_{QCD}^{-1}$. This explanation is then carried by analogy into the
construction of spacetime geometry. The fact that the spacetime metric
($g_{\mu \nu}$) is \emph{ab initio} constructed out of the two entangled gluons
(BCJ construction) means that spacetime \textbf{cannot} realize singularity
(i.e. $r\neq0$). From the foregoing, one is therefore forced to ask a
fundamentally disturbing question: How did our universe "begin", or what
existed "before" the Big Bang?
A. C. Doyle famously claimed that "once you eliminate the impossible, whatever
remains, no matter how improbable, must be the truth." In line with this quote,
we posit that the behavior of the universe during the first fraction of a
second ($t<10^{-44}s$) after the Big Bang can only be a matter for conjecture
but we are certain that $t\neq0$ and $r\neq0$ due to the ever-increasing
repulsive nature of $F_{YMG}$ as we probe short-distance scales. Hence,
perhaps our universe had its origin in the ever-recurring interplay between
\textbf{expanding} and \textbf{contracting} universe. We are sure of the
\textbf{former} but the \textbf{latter }is highly unlikely, given the present
behavior of dark energy and the ever-constant effective Lagrangian of the
vacuum $\upsilon^{2}/M_{X}=3.7meV$.
The fact that most of the calculations done using Planck epoch parameters
(i.e., Planck time, Planck energy and Planck length) conform to what is
obtainable in nature strongly suggests that any epoch less than Planck epoch
is operationally meaningless. Thus, a plausible theory can be constructed
(starting from the Planck time $t=10^{-44}s$) by bringing the calculations
done in \textbf{Section IX} of this paper to bear: After about
$10^{-44}s$ \ (with Planck length $10^{-19}GeV^{-1}$) the repulsive gravity
was $F_{YMG}=-2.6\times10^{38}GeV^{2}(=-2.129\times10^{44}N).$ This caused the
universe to undergo an exponential expansion (due to the exponential nature of
$F_{YMG}$). The exponential expansion lasted from $10^{-44}s$ after the Big
Bang/Bounce to time $10^{4}GeV^{-1}(=6.6\times10^{-21}s):$ this is the time
that produced the stable-flat curve (red curve in the Fig.(3)), signaling the
demise of the exponential era. Following this cosmic inflationary (exponential
expansion) epoch, the dark matter $-$ which is nothing but the remnant of
Yang-Mills-Gravity force, $F_{YMG}=2.449\times10^{5}N$ $-$ dominates and the
universe continues to expand but at a less rapid rate. The battle of supremacy
between dark energy (repulsive gravity) and dark matter (attractive gravity)
was won by dark energy during the time $t=6.6\times10^{-21}s$: since dark
energy is intimately connected to the spacetime metric itself (see Eq.(189)),
it would have increased tremendously during the exponential expansion period,
when the space increased in size by a factor of
$10^{23}\,(=10^{4}\,GeV^{-1}/10^{-19}\,GeV^{-1})$ in a small fraction of a second! The victory of dark
energy over dark matter means that our universe will continue to expand
\emph{ad infinitum}: \emph{we are living in a runaway universe.}
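As a quick consistency check of the numbers quoted above (a sketch using only the standard natural-unit conversion $\hbar\simeq6.58\times10^{-25}\,GeV\cdot s$, which is an external input and not a result of this paper):
\[
10^{4}\,GeV^{-1}\times\left(6.58\times10^{-25}\,GeV\cdot s\right)\simeq6.6\times10^{-21}\,s
{\quad \text{and} \quad}
\frac{10^{4}\,GeV^{-1}}{10^{-19}\,GeV^{-1}}=10^{23},
\]
in agreement with the quoted duration of the exponential era and the quoted growth factor of space.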
Another formal way of explaining the inflationary epoch (i.e. the
vacuum-dominated universe approach) of the universe $-$ which makes the
pre-existing universe scenario conceivable $-$ can be effected by using
\textbf{Fig.3}. In this approach, it is believed that the universe passed
through an early epoch of vacuum dominance (i.e. inflation), presided over by
the varying potential energy (i.e. Eq.(113)) of the scalar field, called
inflaton (\cite{TOGO19}, P.564). When the scalar field reached the minimum of
the potential (which corresponds to the minimum of the potential curve (blue
curve) in the Fig.3) exponential expansion ended. Based on the law of
conservation of energy, the reduction in the potential energy (due to the
rolling down of the inflaton from the top of the potential curve, i.e. the
decaying of the inflaton field) generated hot (quark-gluon) plasma epoch
(which later generated the matter and radiation epoch). So from then on, the
Big Bang evolved according to the Standard Cosmological Model (\cite{TOGO19},
P.564); and governed by the Eqs.(105), (134), (189) and (193).
We conclude this section with the following facts: (1) The initial conditions
of our universe are the Planck epoch parameters (i.e., Planck time, Planck
energy and Planck length). (2) Inflationary theory is correct. (3) The
expansion epoch of the universe consists of two phases: (i) the exponential
expansion (governed by $F_{YMG}=-2.129\times10^{44}N$) and (ii) the normal
accelerating expansion (governed by Eq.(189)).
\section{Conclusion}
We have shown, in this paper, that the point-like theory of quantum gravity
(strong gravity theory) is geometrically equivalent to the four-dimensional,
nonlinear quantum gauge field theory (i.e. QYMT), and the Einstein General
Relativity. The inherent UV regularity, BCJ and gauge-gravity duality
properties of this renormalizable theory allowed us to solve four of the most
difficult problems in the history of physics: namely, dark matter, existence
of quantum Yang-Mills theory on $R^{4}$, neutrino mass and dark energy problems.
In any geometric field theory, all physical quantities and fields should be
induced from one geometric entity (Weyl's action) and the building blocks of
the geometry used (2-gluon configuration/double-copy construction). This
principle has been inspired by Einstein's statement $-$ ``a theory in which the
gravitational field and electromagnetic field do not enter as logically
distinct structures, would be much preferable'' $-$ and established in this
paper. As we have demonstrated, this principle implies that the Weyl
Lagrangian density used to construct the field equations of the strong gravity
theory is composed of the building blocks (two gluons) of the geometry and
their derivatives (in which the curvature arises in terms of derivatives of
the dressed gluon field). In other words, Weyl Lagrangian is not constructed,
\emph{a priori}, \textbf{from different parts} (each corresponding to a
certain field) as usually done. This makes the strong gravity theory pass the
test of the unification principle.
\textbf{Acknowledgement}
Mr. O. F. Akinto is indebted to the Department of Physics, CIIT, Islamabad,
and the National Mathematical Center, Abuja, Nigeria, for their financial support.
\section{Introduction}\label{sec: introduction}
Let $q\geq 5$ be a prime number, and let $N$ be a positive integer.
Let $X_0(Nq)$ denote the modular curve over ${\mathbb{Q}}$ and $J_0(Nq)$ its Jacobian variety. For any integer $n$, there is the Hecke operator $T_n$ acting on $J_0(Nq)$.
Let $\Phi_q(Nq)$ denote the component group of the special fiber ${\cmcal{J}}$ of the N\'eron model of $J_0(Nq)$ at $q$.
According to the theorems of Ribet \cite{R88, R90} (when $q$ does not divide $N$) and Edixhoven \cite{Ed91} (in general),
the action of the Hecke algebra on $\Phi_q(Nq)$ is ``Eisenstein.''
Here by ``Eisenstein'' we mean the Hecke operator $T_\ell$ acts on $\Phi_q(Nq)$ by $\ell+1$ when a prime number $\ell$ does not divide $Nq$.\footnote{On the other hand, Ribet and Edixhoven did not proceed to compute the action of the Hecke operator $T_p$ on $\Phi_q(Nq)$ for a prime divisor $p$ of the level $Nq$ because their results were enough for their applications.} In this article, we compute the action of the Hecke operators $T_\ell$ on the component group $\Phi_q(Nq)$ when $\ell$ divides $Nq$ and $q$ does not divide $N$.
Here is an exotic example\footnote{this phenomenon cannot occur when the residual characteristic is greater than $3$} which leads us to this study: Let $N=\prod_{i=1}^\nu p_i$ be the product of distinct prime numbers with $\nu\geq 1$, and let $q\equiv 2 \text{ or } 5 \pmod 9$ be an odd prime number. Assume that $p_i \equiv 4 \text{ or } 7 \pmod 9$ for all $1 \leq i \leq \nu$.
Let ${\mathbb{T}}(Nq)$ (resp. ${\mathbb{T}}(N)$) denote the ${\mathbb{Z}}$-subalgebra of ${\operatorname{End}}(J_0(Nq))$ (resp. ${\operatorname{End}}(J_0(N))$) generated by all the Hecke operators $T_n$ for $n\geq 1$. Let
\[
{\mathfrak{m}}:=(3, T_{p_i}-1, T_q+1, T_\ell-\ell-1 : \text{ for all } 1 \leq i \leq \nu, \text{ and for primes } \ell \nmid Nq ) \subset {\mathbb{T}}(Nq)
\]
and
\[
{\mathfrak{n}}:=(3, T_{p_i}-1, T_\ell-\ell-1 : \text{ for all } 1 \leq i \leq \nu, \text{ and for primes } \ell \nmid N ) \subset {\mathbb{T}}(N)
\]
be Eisenstein ideals. By \cite[Theorem 1.4]{Yoo3}, ${\mathfrak{m}}$ is maximal. Furthermore, ${\mathfrak{n}}$ is maximal if and only if $\nu \geq 2$.
As observed by the second author \cite{Yoo10}, the dimension of $J_0(N)[{\mathfrak{n}}]$ is $\nu$ if ${\mathfrak{n}}$ is maximal, i.e., $\nu \geq 2$. (Here $J_0(N)[{\mathfrak{n}}]:=\{ x \in J_0(N)(\overline{{\mathbb{Q}}}) : Tx =0 \text{ for all } T \in {\mathfrak{n}} \}$.)
It is an extension of $\mu_3^{\oplus {\nu-1}}$ by $\zmod 3$,
and it does not contain a submodule isomorphic to $\mu_3$. On the other hand, the dimension of $J_0(Nq)[{\mathfrak{m}}]$ is either $2\nu$ or $2\nu+1$. Furthermore $J_0(Nq)[{\mathfrak{m}}]$ contains a submodule ${\cmcal{N}}$ isomorphic to $J_0(N)[{\mathfrak{n}}]$, and it also contains $\mu_3^{\oplus \nu}$ (which is contributed from the Shimura subgroup). As ${\cmcal{N}}$ is unramified at $q$, by Serre-Tate \cite{ST68}, ${\cmcal{N}}$ maps injectively into ${\cmcal{J}} [{\mathfrak{m}}]$ and it turns out that its image is isomorphic to ${\cmcal{J}}^0[{\mathfrak{m}}]$, where ${\cmcal{J}}^0$ is the identity component of ${\cmcal{J}}$.
(Note that $\Phi_q(Nq)$ is the quotient of ${\cmcal{J}}$ by ${\cmcal{J}}^0$.)
Since $\mu_3^{\oplus \nu}$ is also unramified at $q$, it maps into ${\cmcal{J}}[{\mathfrak{m}}]$ and therefore its image maps injectively to $\Phi_q(Nq)[{\mathfrak{m}}]$. (This statement is also true when $\nu=1$.) The structure of the component group $\Phi_q(Nq)$ is known by the work of Mazur and Rapoport \cite{MR77}\footnote{there are some minor errors in the paper, which are corrected by Edixhoven \cite[{\textsection} 4.4.1]{Ed91}}:
\[
\Phi_q(Nq)=\Phi \oplus (\zmod 3)^{2^\nu-1},
\]
where $\Phi$ is cyclic and generated by the image of the cuspidal divisor $(0)-(\infty)$. The action of the Hecke operators on $\Phi$ is well-known (e.g. \cite[Appendix A1]{Yoo2}), and so $\Phi[{\mathfrak{m}}]=0$. Therefore $(\zmod 3)^{2^\nu-1}[{\mathfrak{m}}] \neq 0$ and its dimension is at least $\nu$. Indeed it is equal to $2^{\nu-1}$, which can easily be computed by the theorems below.
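To spell out this computation (a sketch granting Theorems \ref{thm: main theorem Tp} and \ref{thm: main theorem Tq} below): here $v=1$ because $q \equiv 2 \pmod 3$ and every $p_i \equiv 1 \pmod 3$, so we may write $(\zmod 3)^{2^\nu-1}=\oplus_{i=1}^{2^\nu-1} B_i$ with $T_q$ acting on $B_i$ by $(-1)^i$. The generators $T_{p_i}-1$ and $T_\ell-\ell-1$ of ${\mathfrak{m}}$ act on the component group by $p_i-1 \equiv 0$ and by $0$ modulo $3$, respectively, so they impose no condition, while $T_q+1$ acts on $B_i$ by $(-1)^i+1$, which is zero for odd $i$ and invertible modulo $3$ for even $i$. Hence $\left((\zmod 3)^{2^\nu-1}\right)[{\mathfrak{m}}]=\oplus_{i \text{ odd}}\, B_i$, and the number of odd indices $1 \leq i \leq 2^\nu-1$ is $2^{\nu-1}$.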
Now, we introduce our results. The first one is as follows:
\begin{thm}\label{thm: main theorem Tp}
For a prime divisor $p$ of $N$, the Hecke operator $T_p$ acts on $\Phi_q(Nq)$ by $p$.
\end{thm}
The key idea of the proof is that the two degeneracy maps coincide on the component group (cf. \cite{R88}, \cite[Lemme 2 of {\textsection} 4.2]{Ed91}).
Now, the missing one is the action of the Hecke operator $T_q$ on $\Phi_q(Nq)$. Note that $T_q$ acts on $\Phi_q(Nq)$ by an involution because the action of the Hecke algebra on $\Phi_q(Nq)$ is ``$q$-new.'' To describe its action more precisely, we need notation:
For $N=\prod_{p \mid N} p^{n_p}$ being the prime factorization of $N$ (i.e., $n_p >0$), let $\nu:=\#\{ p \mid N : p \neq 2, 3 \}$ and let
\begin{align*}
u &:=
\begin{cases}
0 & \text{if } q \equiv 1 \pmod 4 ~~\text{ or } ~~4 \mid N ~~\text{ or } ~~\exists ~p \equiv -1 \pmod 4 \\
1 & \text{ otherwise},
\end{cases}
\\
v &:=
\begin{cases}
0 & \text{if } q \equiv 1 \pmod 3 ~~\text{ or } ~~9 \mid N ~~\text{ or } ~~\exists ~p \equiv -1 \pmod 3\\
1 & \text{otherwise}.
\end{cases}
\end{align*}
Suppose that $(u, v)=(0, 0)$ or $\nu=0$. Then $\Phi_q(Nq)=\Phi$ and $T_q$ acts on $\Phi$ by $1$,
where $\Phi$ is the cyclic subgroup generated by the image of the cuspidal divisor $(0)-(\infty)$ (Proposition \ref{prop: case 1}).
If $\nu \geq 1$, $\Phi_q(Nq)$ becomes isomorphic to
\[
\Phi' \oplus \mathbf{A} \oplus \mathbf{B},
\]
where $\mathbf{A} \simeq (\zmod 2)^{\oplus u(2^\nu-2)}$, $\mathbf{B} \simeq (\zmod 3)^{\oplus v(2^\nu-1)}$ and $\Phi'$ is a cyclic group containing $\Phi$ and $\Phi'/\Phi \simeq (\zmod {2^u})$.\footnote{The structure of $\Phi_q(Nq)$ is already known by Mazur and Rapoport \cite{MR77} when $N$ is square-free and prime to $6$, and by Edixhoven \cite[{\textsection} 4.4.1]{Ed91} in general.}
\begin{thm}\label{thm: main theorem Tq}
Assume that $(u, v)\neq (0, 0)$ and $\nu \geq 1$.
\begin{enumerate}
\item
Suppose that $v=1$.
Then there are distinct subgroups $B_i \simeq \zmod 3$ of $\mathbf{B}$ so that $\mathbf{B} = \oplus B_i$. For any $1\leq i \leq (2^\nu-1)$, $T_q$ acts on $B_i$ by $(-1)^i$.
\item
Suppose that $u=1$.
Then there are distinct subgroups $A_i \simeq \zmod 2$ of $\mathbf{A}$ so that $\mathbf{A} = \oplus A_i$.
For any $1 \leq k \leq (2^{\nu-1}-2)$, $T_q$ acts on $A_{2k-1}\oplus A_{2k}$ by the matrix $\mat 1 0 1 1$.\footnote{This reminds us of the result by Mazur \cite{M77}: when $N$ is a prime number, the kernel of the Eisenstein prime of $J_0(N)$ containing a prime number $\ell$ is completely reducible when $\ell$ is odd, and is indecomposable when $\ell=2$.} In other words, if $A_{2k-1}=\<\mathbf{u}_{2k-1}\>$ and $A_{2k}=\<\mathbf{u}_{2k}\>$, then
\begin{equation*}
T_q(\mathbf{u}_{2k-1})=\mathbf{u}_{2k-1}+\mathbf{u}_{2k} {\quad \text{and} \quad} T_q(\mathbf{u}_{2k})=\mathbf{u}_{2k}.
\end{equation*}
\end{enumerate}
\end{thm}
For a complete description of the action of $T_q$ on each subgroup, see Section \ref{sec: Tq action}.
\section{Supersingular points of $X_0(N)$} \label{sec: supersingular}
From now on, we always assume that \textsf{$q\geq 5$ is a prime number} and \textsf{$N$ is a positive integer which is prime to $q$}.
Let \textsf{$p$ denote a prime divisor of $N$}. Let $\mathbf{F}$ be an algebraically closed field of characteristic $q$.
Let $\Sigma(N)$ denote the set of supersingular points of $X_0(N)(\mathbf{F})$.
Since we assume that $q\geq 5$, the group of automorphisms of supersingular points is cyclic of order $2$, $4$ or $6$. Let
\[
\Sigma_n(N) := \{ s \in \Sigma(N) : \# {\operatorname{Aut}}(s)=n \} {\quad \text{and} \quad} s_n(N) := \#\Sigma_n(N).
\]
Note that $s_4(N)=u\cdot 2^\nu$ and $s_6(N)=v \cdot 2^\nu$ (cf. \cite[{\textsection} 4.2, Lemme 1]{Ed91}),
where $u, v$ and $\nu$ are as in Section \ref{sec: introduction}.
Moreover $s_2(N)$ can be computed using Eichler's mass formula \cite[Theorem 12.4.5, Corollary 12.4.6]{KM85}:
\begin{equation} \label{eqn: mass formula}
\frac{s_2(N)}{2} + \frac{s_4(N)}{4}+\frac{s_6(N)}{6} = \frac{(q-1)Q}{24},
\end{equation}
where $Q:=N\prod_{p \mid N} (1+p^{-1})$ is the degree of the degeneracy map $X_0(N) \to X_0(1)$.
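For instance (a sketch with $N=1$ and $q=11$, so that $\nu=0$, $Q=1$ and $u=v=1$, whence $s_4=s_6=1$): the mass formula reads
\[
\frac{s_2}{2}+\frac{1}{4}+\frac{1}{6}=\frac{10}{24},
\]
so $s_2=0$; indeed the only supersingular $j$-invariants in characteristic $11$ are $j=0$ and $j=1728$, with automorphism groups of order $6$ and $4$, respectively.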
In the remainder of this section, we study $\Sigma_4(N)$ and $\Sigma_6(N)$ in detail. (See also \cite[{\textsection} 2]{R88}, \cite[{\textsection} 4]{R97} or \cite[{\textsection} 4.2]{Ed91}.) In what follows, \textsf{we always assume that $\nu\geq 1$}, i.e., there is a prime divisor $p\geq 5$ of $N$. (If $\nu=0$ then $s_{2e}(N)\leq 1$ for $e=2$ or $3$, and the description is very simple.)
Let ${\cmcal{E}}$ be a supersingular elliptic curve with ${\operatorname{Aut}}({\cmcal{E}})=\< \sigma \>$, and let $C$ be a cyclic subgroup of ${\cmcal{E}}$ of order $N$.
Assume that $q \equiv -1 \pmod 4$ (resp. $q \equiv -1 \pmod 3$) if $\sigma=\sigma_4$ (resp. $\sigma=\sigma_6$), where $\sigma_k$ is a primitive $k$-th root of unity.
\begin{prop}\label{prop:supersingular s6}
Let $N=p^n$ for some $n\geq 1$ with $p\geq 5$. Suppose ${\operatorname{Aut}}({\cmcal{E}}, C)=\< \sigma \>$. Then, there exists another cyclic subgroup $D$ of order $N$ such that ${\cmcal{E}}[N]\simeq C \oplus D$. Moreover, ${\operatorname{Aut}}({\cmcal{E}}, D)=\< \sigma \>$ and $({\cmcal{E}}, C)$ is not isomorphic to $({\cmcal{E}}, D)$.
\end{prop}
\begin{proof}
Here, we closely follow the argument in the proof of Proposition 1 in \cite[{\textsection} 2]{R88}.
Let $R$ be the subring ${\mathbb{Z}}[\sigma]$ of ${\operatorname{End}}({\cmcal{E}}, C)$. Since ${\operatorname{Aut}}({\cmcal{E}}, C)=\< \sigma \>$, $p\equiv 1 \pmod 4$ (resp. $p \equiv 1 \pmod 3$) if $\sigma=\sigma_4$ (resp. $\sigma=\sigma_6$). Therefore $p$ splits completely in $R$.
Note that $R={\mathbb{Z}}[\sigma]$ is a principal ideal domain and therefore
\[
R/{p R} \simeq R/{\gamma R} \oplus R/{\delta R} \simeq {\delta R}/{p R} \oplus {\gamma R}/{p R}
\]
with $p=\gamma \delta$. Moreover,
\[
R/NR=R/{p^n R} \simeq R/{\gamma^n R} \oplus R/{\delta^n R} \simeq {\delta^n R}/{N R} \oplus {\gamma^n R}/{N R}.
\]
Note that ${\cmcal{E}}[N]$ is a free module of rank $1$ over $R/NR$ by the action of $R$ on ${\cmcal{E}}$.
We may identify $C$ with the quotient $I/NR$ for some ideal $I$ of $R$ containing $N$ if we fix an $R$-isomorphism between ${\cmcal{E}}[N]$ and $R/NR$. Thus, $I=\delta^n R$ or $\gamma^n R$.
Suppose that $I=\delta^n R$. Then, by the fixed isomorphism $C={\cmcal{E}}[\gamma^n]$. Let $D:={\cmcal{E}}[\delta^n]$ so that its corresponding ideal is $\gamma^n R$. Then, ${\cmcal{E}}[N] \simeq C \oplus D$. Moreover since $\gamma^n R$ is also an ideal of $R$, $D$ is also stable under the action of $\sigma$. In other words,
${\operatorname{Aut}}({\cmcal{E}}, D)=\< \sigma \>$.
Since ${\operatorname{Aut}}({\cmcal{E}})=\< \sigma \>$ and $\sigma(C)=C$, $({\cmcal{E}}, C)$ cannot be isomorphic to $({\cmcal{E}}, D)$.
\end{proof}
From now on, we use the same notation as in the proof of Proposition \ref{prop:supersingular s6}.
\begin{defn}
By the above formulas, for every $n\geq 1$ and $p \equiv 1 \pmod 4$ (resp. $p\equiv 1 \pmod 3$), there are precisely two cyclic subgroups $C$, $D$ of ${\cmcal{E}}$ of order $p^n$ such that ${\operatorname{Aut}}({\cmcal{E}}, C) ={\operatorname{Aut}} ({\cmcal{E}}, D)= \< \sigma \>$ (and ${\cmcal{E}}[p^n] \simeq C\oplus D$) if $\sigma=\sigma_4$ (resp. if $\sigma=\sigma_6$). Thus, for each $n\geq 1$ we define ${\cmcal{C}}_{p^n}$ and ${\cmcal{D}}_{p^n}$ by
\[
{\cmcal{C}}_{p^n}:={\cmcal{E}}[\gamma^n] {\quad \text{and} \quad} {\cmcal{D}}_{p^n}:={\cmcal{E}}[\delta^n].
\]
\end{defn}
\begin{prop}\label{prop: degeneracy alpha}
For each $n\geq 1$, ${\cmcal{C}}_{p^{n+1}}[p^{n}]={\cmcal{C}}_{p^{n}}$ and ${\cmcal{D}}_{p^{n+1}}[p^{n}]={\cmcal{D}}_{p^{n}}$.
\end{prop}
\begin{proof}
By the fixed $R$-isomorphism $\iota$ between ${\cmcal{E}}[p^{n+1}]$ and $R/{p^{n+1} R}$, we identify ${\cmcal{C}}_{p^{n+1}}$ with $I/{p^{n+1} R}$, where $I=\delta^{n+1} R$. As $I$ is an ideal of $R$, $\gamma I = p (\delta^n R) \subset I$ and
$I/{\gamma I} \simeq R/{\gamma R} \simeq \zmod p$. Therefore
\[
\xymatrix{
{\cmcal{C}}_{p^{n+1}}[p^n] \ar[r]^-{\iota} & \left( I/{p^{n+1} R} \right) [p^n] = \gamma I/{p^{n+1} R} \ar[r]_-{\times 1/p}^-{\sim} & (\delta^n R)/{p^n R},
}
\]
which corresponds to ${\cmcal{C}}_{p^n}$. Similarly, we prove that ${\cmcal{D}}_{p^{n+1}}[p^n]={\cmcal{D}}_{p^n}$, and the proposition follows.
\end{proof}
Let $N=Mp^n$ with $(6M, p)=1$ and $n \geq 1$.
Let $L$ be a cyclic subgroup of ${\cmcal{E}}$ of order $M$.
\begin{prop} \label{prop: degeneracy beta}
Suppose that ${\operatorname{Aut}}({\cmcal{E}}, {\cmcal{C}}_{p^{n+1}}, L)=\< \sigma \>$. Then, there is an isomorphism between \\
$({\cmcal{E}}/{{\cmcal{C}}_p}, {\cmcal{C}}_{p^{n+1}}/{{\cmcal{C}}_p}, (L\oplus {\cmcal{C}}_p)/{{\cmcal{C}}_p})$ and $({\cmcal{E}}, {\cmcal{C}}_{p^{n}}, L)$.
\end{prop}
\begin{proof}
We mostly follow the idea of the proof of Proposition 2 in \cite[{\textsection} 2]{R88}.
The endomorphism $\gamma$ sends ${\cmcal{E}}[\gamma^{n+1}]={\cmcal{C}}_{p^{n+1}}$ to ${\cmcal{E}}[\gamma^n]={\cmcal{C}}_{p^n}$, and $L$ to itself (because $L \cap {\cmcal{E}}[p]=0$).
Now we denote by $\overline{\gamma}$ the map ${\cmcal{E}}/{{\cmcal{C}}_{p}} \to {\cmcal{E}}$ induced by $\gamma$. Note that $\overline{\gamma}$ is an isomorphism because ${\cmcal{C}}_{p}$ is ${\cmcal{E}}[\gamma]$, the kernel of $\gamma$.
By the above consideration, this isomorphism $\overline{\gamma}$ sends $({\cmcal{C}}_{p^{n+1}}/{{\cmcal{C}}_p}, (L\oplus {\cmcal{C}}_{p})/{{\cmcal{C}}_{p}})$ to $({\cmcal{C}}_{p^n}, L)$ because ${\cmcal{C}}_{p^{n+1}}/{{\cmcal{C}}_p}$ and $(L\oplus {\cmcal{C}}_{p})/{{\cmcal{C}}_{p}}$ are the images of ${\cmcal{C}}_{p^{n+1}}$ and $L$ by the quotient map ${\cmcal{E}} \to {\cmcal{E}}/{{\cmcal{C}}_p}$, respectively. Therefore $\overline{\gamma}$ gives rise to the desired isomorphism between triples.
\end{proof}
\begin{cor}\label{cor: bijection alpha and alpha=beta}
The map $({\cmcal{E}}, C, L) \to ({\cmcal{E}}, C[p^n], L)$ induces a bijection between $\Sigma_{2e}(Np)$ and $\Sigma_{2e}(N)$, where $\sigma=\sigma_{2e}$. Moreover if $({\cmcal{E}}, C, L) \in \Sigma_{2e}(Np)$, we have
\begin{equation*}
({\cmcal{E}}, C[p^n], L) \simeq ({\cmcal{E}}/{C[p]}, C/{C[p]}, (L\oplus C[p])/{C[p]}).
\end{equation*}
\end{cor}
The corollary tells us that two degeneracy maps
$\alpha_p$ and $\beta_p$ in Section \ref{sec: hecke action on ss sets}
coincide on $\Sigma_{2e}(Np)$, which is a generalization of \cite[{\textsection} 4.2, Lemme 2]{Ed91}.
\begin{prop}\label{prop: frobenius on S4 S6}
Suppose that ${\operatorname{Aut}}({\cmcal{E}}, {\cmcal{C}}_{p^n}, L)=\< \sigma \>$. Then, ${\operatorname{Frob}}({\cmcal{E}})={\cmcal{E}}$ and ${\operatorname{Frob}}({\cmcal{C}}_{p^n}) = {\cmcal{D}}_{p^n}$, where ${\operatorname{Frob}}$ is the Frobenius morphism in characteristic $q$. Furthermore, ${\operatorname{Frob}}^2({\cmcal{E}}, {\cmcal{C}}_{p^n}, L)=({\cmcal{E}}, {\cmcal{C}}_{p^n}, L)$.
\end{prop}
\begin{proof}
Since ${\cmcal{E}}$ is isomorphic to the reduction of the elliptic curve with $j$-invariant $1728$ (resp. $0$) if $\sigma=\sigma_4$ (resp. $\sigma=\sigma_6$), the Frobenius morphism is an endomorphism of ${\cmcal{E}}$ (cf. \cite[Chapter V, Examples 4.4 and 4.5]{Si86}). Moreover, the Frobenius morphism and $\sigma$ generate ${\operatorname{End}}({\cmcal{E}})$, which is a quaternion algebra.
(Note that the degree of the Frobenius morphism is $q$.) Since ${\operatorname{End}}({\cmcal{E}})$ is a quaternion algebra, we have
\begin{equation*}
\sigma \circ {\operatorname{Frob}} = {\operatorname{Frob}} \circ \bar{\sigma} = {\operatorname{Frob}} \circ \sigma^{-1},
\end{equation*}
where $\bar{\sigma}$ denotes the complex conjugation in $R={\mathbb{Z}}[\sigma]$. Analogously, we have
\begin{equation*}
\gamma \circ {\operatorname{Frob}} = {\operatorname{Frob}} \circ \bar{\gamma} = {\operatorname{Frob}} \circ \delta.
\end{equation*}
Since $\sigma({\operatorname{Frob}}({\cmcal{C}}_{p^n}))={\operatorname{Frob}}(\sigma^{-1}({\cmcal{C}}_{p^n}))={\operatorname{Frob}}({\cmcal{C}}_{p^n})$, ${\operatorname{Frob}}({\cmcal{C}}_{p^n})$ is also stable under the action of $\sigma$. Moreover ${\cmcal{C}}_{p^n}$ does not intersect with the kernel of ${\operatorname{Frob}}$.
Thus, ${\operatorname{Frob}}({\cmcal{C}}_{p^n})$ is either ${\cmcal{C}}_{p^n}$ or ${\cmcal{D}}_{p^n}$.
As an endomorphism of ${\cmcal{E}}$, $\gamma$ sends ${\cmcal{C}}_{p^n}$ (resp. ${\cmcal{D}}_{p^n}$) to ${\cmcal{C}}_{p^{n-1}}$ (resp. ${\cmcal{D}}_{p^n}$). Similarly, $\delta$ maps ${\cmcal{C}}_{p^n}$ (resp. ${\cmcal{D}}_{p^n}$) to ${\cmcal{C}}_{p^n}$ (resp. ${\cmcal{D}}_{p^{n-1}}$). Therefore if ${\operatorname{Frob}} ({\cmcal{C}}_{p^n})={\cmcal{C}}_{p^n}$, then
\begin{equation*}
\gamma \circ {\operatorname{Frob}} ({\cmcal{C}}_{p^n})=\gamma ({\cmcal{C}}_{p^n})={\cmcal{C}}_{p^{n-1}} {\quad \text{and} \quad} {\operatorname{Frob}} \circ \delta ({\cmcal{C}}_{p^n})={\operatorname{Frob}} ({\cmcal{C}}_{p^n})={\cmcal{C}}_{p^n},
\end{equation*}
which is a contradiction. Thus, we get ${\operatorname{Frob}} ({\cmcal{C}}_{p^n})={\cmcal{D}}_{p^n}$.
Since every supersingular point can be defined over ${\mathbb{F}}_{q^2}$, the quadratic extension of ${\mathbb{F}}_q$,
${\operatorname{Frob}}^2$ acts trivially on $\Sigma(N)$ (cf. \cite[Remark 3.5.b]{R90}), which proves the last claim.
\end{proof}
\begin{rem}
By taking $H=(\zmod N)^*$ in Lemma 1 of \cite{R97}, we can obtain a similar result if we show that the Atkin-Lehner style involution in \cite[{\textsection} 4]{R97} is equal to the Frobenius morphism.
\end{rem}
\section{The action of $T_p$ on the component group}\label{sec: hecke action on ss sets}
Before discussing the action of the Hecke operators on the component group, we study it on the group of divisors supported on supersingular points, which we denote by ${\operatorname{Div}}(\Sigma(N))$.
Let $N=Mp^n$ with $(M, p)=1$ and $n\geq 1$, and assume that $(N, q)=1$.
Let $\alpha_p, ~~\beta_p : X_0(Npq) \rightrightarrows X_0(Nq)$ denote two degeneracy maps of degree $p$, defined by
\[
\alpha_p(E, C, L):=(E, C[p^n], L) {\quad \text{and} \quad} \beta_p(E, C, L):=(E/{C[p]}, C/{C[p]}, (L+C[p])/{C[p]}),
\]
where $C$ (resp. $L$) denotes a cyclic subgroup of order $p^{n+1}$ (resp. $Mq$) in an elliptic curve $E$
(cf. \cite[{\textsection} 13]{MR91}).
Let $T_p$ and $\xi_p$ be two Hecke correspondences defined by
\[
\xyv{1.5}
\xyh{0.5}
\xymatrix{
& X_0(Npq) \ar[dl]_-{\alpha_p} \ar[dr]^-{\beta_p}& \\
X_0(Nq) \ar@<.5ex>@{-->}[rr]^-{\xi_p} && X_0(Nq). \ar@<.5ex>@{-->}[ll]^-{T_p}
}
\]
By pullback, the Hecke correspondence $T_p$ (resp. $\xi_p$) induces the Hecke operator $T_p:=\beta_{p,*} \circ \alpha_p^*$ (resp. $\xi_p:=\alpha_{p,*} \circ \beta_p^*$) on $J_0(Nq)$.
The same description of the Hecke operator $T_p$ on ${\operatorname{Div}}(\Sigma(N))$ as above works. In other words,
we have two degeneracy maps\footnote{every elliptic curve isogenous to a supersingular one is also supersingular} $\alpha_p, \beta_p : \Sigma(Np) \rightrightarrows \Sigma(N)$ of degree $p$, defined by
\[
\alpha_p(E, C, L):=(E, C[p^n], L) {\quad \text{and} \quad} \beta_p(E, C, L):=(E/{C[p]}, C/{C[p]}, (L+C[p])/{C[p]}),
\]
where $C$ (resp. $L$) denotes a cyclic subgroup of order $p^{n+1}$ (resp. $M$) in a supersingular elliptic curve $E$ over $\mathbf{F}$. These maps induce the maps on their divisor groups:
\begin{equation*}
\xymatrix{
{\operatorname{Div}}(\Sigma(N)) \ar@<.5ex>[r]^-{\alpha_p^*} \ar@<-.5ex>[r]_{\beta_p^*} & {\operatorname{Div}}(\Sigma(Np))
\ar@<.5ex>[r]^-{\alpha_{p, *}} \ar@<-.5ex>[r]_{\beta_{p, *}}& {\operatorname{Div}}(\Sigma(N))
}
\end{equation*}
and the Hecke operator $T_p$ (resp. $\xi_p$) can be defined by $\beta_{p, *} \circ \alpha_p^*$ (resp. $\alpha_{p, *} \circ \beta_p^*$). (For the details when $n=0$, see \cite[{\textsection} 3]{R90}, \cite[p. 18--22]{Ra91}, \cite[{\textsection} 4.1]{Ed91} or \cite[{\textsection} 7]{Em02}. By the same method, we get the above description without further difficulties.)
Now, let $\Phi_q(Nq)$ denote the component group of the special fiber ${\cmcal{J}}$ of the N\'eron model of $J_0(Nq)$ at $q$. To compute the action of $T_p$ on it, we closely follow the method of Ribet (cf. \cite{R88}, \cite[{\textsection} 2, 3]{R90}, \cite[{\textsection} 1]{Ed91}).
Since $N$ is not divisible by $q$, the identity component ${\cmcal{J}}^0$ of ${\cmcal{J}}$ is a semi-abelian variety by Deligne-Rapoport \cite{DR73} and Raynaud \cite{Ra70}. Moreover, ${\cmcal{J}}^0$ is an extension of $J_0(N)_{\mathbf{F}} \times J_0(N)_{\mathbf{F}}$ by ${\cmcal{T}}$, the torus of ${\cmcal{J}}^0$. Let ${\cmcal{X}}$ be the character group of the torus ${\cmcal{T}}$. By Grothendieck, there is a (Hecke-equivariant) monodromy exact sequence \cite{Gro72} (see also \cite[{\textsection} 2, 3]{R90}, \cite{Ra91}, or \cite[{\textsection} 4]{Il15}),
\[
\xymatrix{
0 \ar[r] & {\cmcal{X}} \ar[r]^-\iota & {\operatorname{Hom}}({\cmcal{X}}^t, {\mathbb{Z}}) \ar[r] & \Phi_q(Nq) \ar[r] & 0.
}
\]
Here ${\cmcal{X}}^t$ denotes the character group corresponding to the dual abelian variety of $J_0(Nq)$, which is equal to $J_0(Nq)$.
Namely, ${\cmcal{X}}^t={\cmcal{X}}$ as sets, but the action of the Hecke operator $T_\ell$ on ${\cmcal{X}}^t$ is equal to the action of its dual $\xi_\ell$ on ${\cmcal{X}}$ (cf. \cite{R88}, \cite[{\textsection} 3]{R90} and \cite[{\textsection} 7]{Em02}).
Note that ${\cmcal{X}}$ is the group of degree $0$ elements in ${\mathbb{Z}}^{\Sigma(N)}$. For $s, t \in \Sigma(N)$, let $e(s):=\frac{\#{\operatorname{Aut}}(s)}{2}$ and
\[
\phi_s(t):=\begin{cases} e(s) & \text{ if } s=t, \\
~~~~~~ 0 & \text{otherwise,} \end{cases}
\]
extended by linearity, i.e., $\phi_s(\sum a_i t_i)=\sum a_i \phi_s(t_i)$.
Then, $\iota(s-t)=\phi_s-\phi_t$.
Note also that ${\operatorname{Hom}}({\mathbb{Z}}^{\Sigma(N)}, {\mathbb{Z}})$ is generated by $\psi_s:=\frac{1}{e(s)} \phi_s$,
and ${\operatorname{Hom}}({\cmcal{X}}^t, {\mathbb{Z}})$ is its quotient by the relation:
\begin{equation*}
\sum_{s \in \Sigma(N)} \psi_s=\sum_{s \in \Sigma(N)} \frac{1}{e(s)}\phi_s=0.
\end{equation*}
(This is the minimal relation to make $\sum a_w \psi_w$ vanish for all the divisors of the form $s-t$, which are the generators of ${\cmcal{X}}$.) For more details, see \cite[{\textsection} 2, 3]{R90} or \cite{Ra91}.
In conclusion, the component group $\Phi_q(Nq)$ is isomorphic to
\[
{\operatorname{Hom}}({\mathbb{Z}}^{\Sigma(N)}, {\mathbb{Z}})/R,
\]
where $R$ is the set of relations:
\begin{equation}\label{eqn: relations}
R = \{ e(s)\psi_s = e(t)\psi_t \text{ for any } s, t \in \Sigma(N), \sum_{t \in \Sigma(N)} \psi_t=0 \}.
\end{equation}
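To illustrate this presentation, take $N=1$ and $q=11$: then $\Sigma(1)$ consists of the two supersingular points $s$ (with $j=1728$ and $e(s)=2$) and $t$ (with $j=0$ and $e(t)=3$), and the relations (\ref{eqn: relations}) become
\[
2\psi_s=3\psi_t {\quad \text{and} \quad} \psi_s+\psi_t=0.
\]
Substituting $\psi_t=-\psi_s$ gives $5\psi_s=0$, so $\Phi_{11}(11)\simeq \zmod 5$, which recovers Mazur's classical computation of the component group of $J_0(11)$ at $11$.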
Let $\Psi_s$ denote the image of $\psi_s$ by the natural projection ${\operatorname{Hom}}({\mathbb{Z}}^{\Sigma(N)}, {\mathbb{Z}}) \to \Phi_q(Nq)$.
The Hecke operator $T_p$ acts on ${\operatorname{Hom}}({\mathbb{Z}}^{\Sigma(N)}, {\mathbb{Z}})$ via the action of $\xi_p$ on ${\operatorname{Div}}(\Sigma(N))$, i.e.,
\[
T_p(\psi_s)(t):=\psi_s(\xi_p(t))=\psi_{s}(\alpha_{p, *} \circ \beta_p^*(t)).
\]
For $s \in \Sigma(N)$, we temporarily denote $\alpha_p^*(s)= \sum_{i=1}^p A^i(s)$ and $\beta_p^*(s)=\sum_{i=1}^p B^i(s)$ (allowing repetition). We note that if $e(s)=1$ then there is no repetition, i.e., $A^i(s) \not\simeq A^j(s)$ and $B^i(s) \not\simeq B^j(s)$ if $i \neq j$. If $e(s)=e>1$, then after renumbering the index properly we have
\begin{equation*}
e(A^i(s))=1 \text{ for } 1 \leq i \leq p-1 {\quad \text{and} \quad} e(A^p(s))=e.
\end{equation*}
Moreover, we have
\begin{equation*}
A^{e(k-1)+1}(s) \simeq \cdots \simeq A^{ek}(s) \text{ for } 1 \leq k \leq \frac{p-1}{e}, \text{ and } A^i(s) \not\simeq A^j(s) \text{ if } \left [\frac{i-1}{e} \right ] \neq \left [\frac{j-1}{e}\right ],
\end{equation*}
where $[x]$ denotes the largest integer less than or equal to $x$.
This can be seen as follows: Let $\sigma=\sigma_{2e}$, and let $s$ represent a pair $({\cmcal{E}}, C)$, where $C$ is a cyclic subgroup of ${\cmcal{E}}$ of order $N$. Since $e(s)=e$, $\sigma(C)=C$. Suppose that $s' \in \Sigma(Np)$ with $\alpha_p(s')=s$.
Then $s'$ represents a pair $({\cmcal{E}}, D)$ with $D[N]=C$.
If $\sigma(D)=D$, then ${\operatorname{Aut}} ([({\cmcal{E}}, D)])=\< \sigma \>$ and $({\cmcal{E}}, D) \not\simeq ({\cmcal{E}}, D')$ if $D \neq D'$.
(Note that there is a unique such $D$.)
On the other hand, if $\sigma(D) \neq D$ then
\begin{equation*}
({\cmcal{E}}, D) \simeq ({\cmcal{E}}, \sigma(D)) \simeq \cdots \simeq ({\cmcal{E}}, \sigma^{e-1}(D)) \simeq ({\cmcal{E}}, \sigma^e(D))=({\cmcal{E}}, D)
\end{equation*}
and ${\operatorname{Aut}} ([({\cmcal{E}}, D)])= \{ \pm 1 \}$. Thus, we can rearrange $A^i(s)$ as above. (Note that this can only be possible when $p \equiv 1 \pmod {2e}$, which is indeed true because $e(s)=e$.)
Now, we claim that $\phi_s(\alpha_{p, *}(t))=\phi_t(\alpha_p^*(s))$.
Indeed, $\phi_s(\alpha_{p,*}(t))$ is nonzero if and only if $t \in \{A^1(s), \dots, A^p(s) \}$. So, it suffices to show this equality when $t \in \{A^1(s), \dots, A^p(s) \}$. If $e(s)=1$, then there is no repetition and the claim follows clearly (both are $1$). Now, let $e(s)=e>1$. If $e(t)=1$, then $t=A^i(s)$ for some $1\leq i \leq p-1$. Since the number of repetitions of $t=A^i(s)$ in $\{A^1(s), \dots, A^p(s) \}$ is $e$, the above equality holds.
If $e(t)=e$, then $t=A^p(s)$ and $\phi_s(\alpha_{p,*}(t))=e=\phi_t(\alpha_p^*(s))$, as claimed.
Analogously, we have
\begin{equation*}
\phi_t(\beta_{p, *}(s))=\phi_s(\beta_p^*(t)).
\end{equation*}
More generally, we get
\begin{align*}
\phi_s(\alpha_{p,*}\circ \beta_p^*(t))&=\sum_{i=1}^p \phi_s(\alpha_{p,*}(B^i(t)))
=\sum_{i=1}^p \sum_{j=1}^p \phi_{B^i(t)} (A^j(s))
=\sum_{j=1}^p \sum_{i=1}^p \phi_{A^j(s)} (B^i(t)) \\
&=\sum_{j=1}^p \phi_{A^j(s)} (\beta_p^*(t))=\sum_{j=1}^p \phi_t(\beta_{p,*}(A^j(s)))=\phi_t(\beta_{p, *} \circ \alpha_p^*(s))=\phi_t(T_p(s)).
\end{align*}
If we set $T_p(s)=\sum s_i$, then $\phi_t(T_p(s))=\sum \phi_{s_i}(t)=\sum e(s_i) \psi_{s_i}(t)$ and hence for any $t \in \Sigma(N)$,
\begin{equation*}
e(s)T_p(\psi_s)(t)=\phi_s(\alpha_{p,*}\circ \beta_p^*(t))=\phi_t(T_p(s))=\sum e(s_i)\psi_{s_i}(t).
\end{equation*}
In other words, we get
\begin{equation}
T_p(\Psi_s)=\frac{1}{e(s)} \sum e(s_i) \Psi_{s_i}.
\end{equation}
We can also define the action of $T_p$ on the component group via functorialities. Namely, let
\begin{equation*}
\xymatrix{
\Phi_q(Nq) \ar@<.5ex>[r]^-{\alpha_p^*} \ar@<-.5ex>[r]_{\beta_p^*} & \Phi_q(Npq)
\ar@<.5ex>[r]^-{\alpha_{p, *}} \ar@<-.5ex>[r]_{\beta_{p, *}}& \Phi_q(Nq)
}
\end{equation*}
denote the maps functorially induced from the degeneracy maps\footnote{if $\alpha_p^*(s)=\sum t_j$ then $\alpha_p^*(\Psi_s)=\sum \Psi_{t_j}$ and if $\alpha_p(t)=s$ then $\alpha_{p, *}(\Psi_t)=e(s)/e(t)\Psi_s$; and similarly for $\beta_p^*$ and $\beta_{p,*}$}. Then, as before $T_p:=\beta_{p, *} \circ \alpha_p^*$. Note that since the degrees of $\alpha_p$ and $\beta_p$ are $p$, we have
$\alpha_{p,*} \circ \alpha_p^* =\beta_{p,*} \circ \beta_p^* =p$.
\begin{lem}
$\alpha_{p, *} = \beta_{p, *}$ on $\Phi_q(Npq)$.
\end{lem}
\begin{proof}
For $s \in \Sigma_{2e}(Npq)$ with $e=2$ or $3$, $\alpha_p(s)=\beta_p(s)$ by Corollary \ref{cor: bijection alpha and alpha=beta}, and hence $\alpha_{p, *}(\Psi_s)=\beta_{p, *}(\Psi_s)$. For $s \in \Sigma_2(Npq)$, let $\alpha_p(s)=t$ and $\beta_p(s)=w$. Then, $\alpha_{p, *}(\Psi_s)=e(t)\Psi_t = e(w)\Psi_w=\beta_{p, *}(\Psi_s)$. In other words, for any $s \in \Sigma(Npq)$, $\alpha_{p, *}(\Psi_s)=\beta_{p, *}(\Psi_s)$. Since the $\Psi_s$ generate $\Phi_q(Npq)$, the result follows.
\end{proof}
In fact, Theorem \ref{thm: main theorem Tp} is an easy corollary of the above lemma.
\begin{proof}[Proof of Theorem \ref{thm: main theorem Tp}]
Since $\alpha_{p, *}=\beta_{p, *}$ on $\Phi_q(Npq)$, we have
\begin{equation*}
T_p(\Psi_s)=\beta_{p, *}\circ \alpha_p^*(\Psi_s)=\alpha_{p, *}\circ \alpha_p^*(\Psi_s)=p\Psi_s,
\end{equation*}
which implies the result.
\end{proof}
\section{The action of $T_q$ on the component group} \label{sec: Tq action}
In this section, we provide a complete description of the action of $T_q$ on the component group $\Phi_q(Nq)$. See Propositions \ref{prop: case 2}, \ref{prop: case 3} and \ref{prop: case 4}, which imply Theorem \ref{thm: main theorem Tq}.
Note that the Hecke operator $T_q$ acts on $\Sigma(N)$ by the Frobenius morphism \cite[Proposition 3.8]{R90}, and the same is true for $\xi_q$. Since the Frobenius morphism is an involution on $\Sigma(N)$ (cf. Proposition \ref{prop: frobenius on S4 S6}), we have
\begin{equation}
T_q(\psi_s)(t)=\psi_s(\xi_q(t))=\psi_s({\operatorname{Frob}} (t))=\psi_{{\operatorname{Frob}} (s)}(t) \text{ for any } t \in \Sigma(N),
\end{equation}
which implies that $T_q(\psi_s)=\psi_{{\operatorname{Frob}} (s)}$.
From now on, if there is no confusion we remove $(N)$ from the notation for simplicity.
Let $n:=\frac{(q-1)Q}{12}$ (which is not necessarily an integer), and let $\Phi$ denote the cyclic subgroup of $\Phi_q(Nq)$ generated by $\Psi_{\mathfrak{s}}$ for a fixed ${\mathfrak{s}} \in \Sigma_2$. (Note that this $\Phi$ is the same as that of Mazur and Rapoport \cite{MR77}, namely, $\Phi$ is equal to the cyclic subgroup generated by the image of the cuspidal divisor $(0)-(\infty)$.)
\subsection{Case 1: $(u, v)=(0, 0)$ or $\nu=0$} $~$
Let $e=1$ if $(u, v)=(0, 0)$ and $e=2u+3v$ if $(u, v)\neq (0, 0)$ and $\nu=0$.
If $(u, v)=(0,0)$, $s_2=n$ and $s_4=s_6=0$. If $(u, v) \neq (0, 0)$ and $\nu=0$, then $s_{2e}=1$ and $s_2=\frac{en-1}{e}$. (Note that $s_2$ is an integer but $n$ is not.)
\begin{prop}\label{prop: case 1}
The component group $\Phi_q(Nq)$ is equal to $\Phi$, which is cyclic of order $en$. The Hecke operator $T_q$ acts on it by $1$.
\end{prop}
\begin{proof}
First, we assume that $(u, v)=(0, 0)$. Then for any $s \in \Sigma=\Sigma_2$, $\Psi_s=\Psi_{\mathfrak{s}}$. Therefore $\Phi_q(Nq)=\Phi$ and $n\Psi_{\mathfrak{s}}=\sum_{s \in \Sigma} \Psi_s=0$.
Moreover, $T_q(\Psi_{\mathfrak{s}})=\Psi_{s'}=\Psi_{\mathfrak{s}}$, where $s'={\operatorname{Frob}}({\mathfrak{s}})$.
Now, we assume that $(u, v) \neq (0, 0)$ and $\nu=0$. In this case, either $N=2q$ (with $(u, v)=(1, 0)$ and $e=2$) or $N=3q$ (with $(u, v)=(0, 1)$ and $e=3$). In each case, let $z \in \Sigma_{2e}$. Then
\begin{equation*}
\sum_{s \in \Sigma_2}\Psi_s + \Psi_z=s_2 \Psi_{\mathfrak{s}}+\Psi_z=0 {\quad \text{and} \quad} \Psi_{\mathfrak{s}}=e \Psi_z.
\end{equation*}
Therefore the component group is generated by $\Psi_z$, and its order is $(es_2+1)=en$. Since $en=es_2+1$ is prime to $e$, this group is also generated by $\Psi_{\mathfrak{s}}=e\Psi_z$. (In fact, $\Psi_z=-s_2 \Psi_{\mathfrak{s}}$.) Moreover we have $T_q(\Psi_{\mathfrak{s}})=\Psi_{\mathfrak{s}}$ as above.
\end{proof}
\subsection{Case 2: $(u, v)=(0, 1)$ and $\nu\geq 1$}$~$
In this case, $s_4=0$, $s_6=2^\nu$, and $s_2=\frac{3n-2^\nu}{3}$. Let $\Sigma_6:=\{ t_1, t_2, \dots, t_{2^\nu} \}$. Here we assume that ${\operatorname{Frob}}(t_{2k-1})=t_{2k}$ for $1 \leq k \leq 2^{\nu-1}$.\footnote{By Proposition \ref{prop: frobenius on S4 S6}, we know that ${\operatorname{Frob}}$ is an involution of $\Sigma_6$ without fixed points.} Let $t:=t_{2^\nu-1}$ and $t':=t_{2^\nu}$.
\begin{prop}\label{prop: case 2}
The component group $\Phi_q(Nq)$ decomposes as follows:
\[
\Phi_q(Nq)=\bigoplus_{i=0}^{2^\nu-1} B_i =: B_0 \oplus \mathbf{B},
\]
where $B_0=\Phi$ is cyclic of order $3n$, and for $1\leq i \leq 2^\nu-1$, $B_i$ is cyclic of order $3$.
For $1\leq k \leq 2^{\nu-1}$, $B_{2k-1}$ and $B_{2k}$ are generated by
\[
\mathbf{v}_{2k-1}:=\Psi_{t_{2k-1}}-\Psi_{t_{2k}} ~~\text{ and }~~\mathbf{v}_{2k}:=\Psi_{t_{2k-1}}+\Psi_{t_{2k}}-\Psi_t-\Psi_{t'}, \text{respectively}.
\]
The Hecke operator $T_q$ acts on $B_i$ by $(-1)^i$.
\end{prop}
\begin{proof}
Note that $\Psi_s=3\Psi_{t_i}=3\Psi_{t_j}$ for all $i, j$ and $\sum_{i=1}^{2^\nu} \Psi_{t_i} + s_2 \Psi_s=0$. Therefore $\Phi_q(Nq)$ is generated by $\Psi_{t_i}$ for $1\leq i \leq 2^\nu-1$. The order of each group $\<\Psi_{t_i}\>$ is $9n$ because
\[
9n \Psi_{t_i}= 3s_2(3\Psi_{t_i})+\sum_{j=1}^{2^\nu} 3\Psi_{t_j}=3\left(\sum_{s \in \Sigma_2} \Psi_s +\sum_{j=1}^{2^\nu} \Psi_{t_j} \right)=0,
\]
and $9n$ is the smallest positive integer with this property.
Moreover $\< \Psi_{t_i} \> \cap \< \Psi_{t_j} \>$ is of order $3n$ for any $i\neq j$.
Since $3n=3s_2+2^\nu$ is prime to $3$, we can decompose the component group into
\begin{equation}\label{equation}
\< 3\Psi_t \> \oplus \<(3s_2+2^\nu) \Psi_t \> \bigoplus_{i=1}^{2^\nu-2} \< \Psi_{t_i}-\Psi_t \>.
\end{equation}
Since $\Psi_s=3\Psi_{t_i}=3\Psi_t=3\Psi_{t'}$ for any $i$ and $\sum_{i=1}^{2^\nu} \Psi_{t_i}=-3s_2\Psi_t$, we have
\[
\Psi_{t_{2k-1}}-\Psi_t=2\mathbf{v}_{2k-1}+2\mathbf{v}_{2k}+\mathbf{v}_{2^{\nu}-1};
\]
\[
\Psi_{t_{2k}}-\Psi_t=\mathbf{v}_{2k-1}+2\mathbf{v}_{2k}+\mathbf{v}_{2^{\nu}-1};
\]
\[
(3s_2+2^\nu)\Psi_t=\sum_{i=1}^{2^\nu} (\Psi_{t}-\Psi_{t_i})
=-\sum_{k=1}^{2^{\nu-1}} \mathbf{v}_{2k}-(-1)^\nu \mathbf{v}_{2^\nu-1}.
\]
Therefore the decomposition in the proposition is isomorphic to (\ref{equation}). The action of $T_q$ on each $B_i$ is obvious from its construction.
\end{proof}
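As a quick computational sanity check (not part of the proof), the order of $\Phi_q(Nq)$ predicted above can be recovered as the absolute determinant of the relation matrix on the generators $\Psi_{t_1}, \dots, \Psi_{t_{2^\nu}}$, using only the relations quoted in the first line of the proof. The sketch below is plain Python; the generators-and-relations presentation is the only input, and taking $e=2$ instead of $e=3$ gives the analogous check for Case 3.

```python
# Sanity check (not part of the proof): recover |Phi_q(Nq)| in Case 2 as the
# absolute determinant of the relation matrix on Psi_{t_1}, ..., Psi_{t_{2^nu}}.
# Relations used (first line of the proof, with e = 3):
#   e*Psi_{t_i} - e*Psi_{t_1} = 0  for i >= 2, and
#   (1 + e*s_2)*Psi_{t_1} + Psi_{t_2} + ... + Psi_{t_{2^nu}} = 0.

def int_det(a):
    """Integer determinant via the fraction-free Bareiss algorithm."""
    a = [row[:] for row in a]
    n, sign, prev = len(a), 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:  # find a nonzero pivot below the diagonal
            for i in range(k + 1, n):
                if a[i][k] != 0:
                    a[k], a[i], sign = a[i], a[k], -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return sign * a[-1][-1]

def component_group_order(e, s2, nu):
    """Order of the abelian group with the relations above (e = 3: Case 2)."""
    m = 2 ** nu
    rows = []
    for i in range(1, m):           # e*Psi_{t_i} - e*Psi_{t_1} = 0
        r = [0] * m
        r[0], r[i] = -e, e
        rows.append(r)
    last = [1] * m                  # (1 + e*s_2)*Psi_{t_1} + sum of the rest = 0
    last[0] = 1 + e * s2
    rows.append(last)
    return abs(int_det(rows))

# Proposition "case 2" predicts order 3n * 3^(2^nu - 1), with 3n = 3*s_2 + 2^nu.
for s2, nu in [(5, 1), (5, 2), (7, 2)]:
    assert component_group_order(3, s2, nu) == (3 * s2 + 2 ** nu) * 3 ** (2 ** nu - 1)
```

With $e=2$ the same function reproduces the order $4n\cdot 2^{2^\nu-2}$ of Case 3 (where $2n=2s_2+2^\nu$).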
\subsection{Case 3: $(u, v)=(1, 0)$ and $\nu\geq 1$}$~$
Note that $s_4=2^\nu$, $s_6=0$, and $s_2=n-2^{\nu-1}$.
Let $\Sigma_4=\{w_1, w_2, \dots, w_{2^\nu} \}$. As before, we assume that ${\operatorname{Frob}}(w_{2k-1})=w_{2k}$ for $1 \leq k \leq 2^{\nu-1}$.\footnote{By Proposition \ref{prop: frobenius on S4 S6}, we know that ${\operatorname{Frob}}$ is an involution of $\Sigma_4$ without fixed points.} Let $w:=w_{2^\nu-1}$ and $w':=w_{2^\nu}$.
\begin{prop} \label{prop: case 3}
The component group $\Phi_q(Nq)$ decomposes as follows:
\[
\Phi_q(Nq)=\bigoplus_{i=0}^{2^\nu-2} A_i=A_0 \oplus \mathbf{A},
\]
where $A_0$ is cyclic of order $4n$ generated by $\Psi_w$, and for $1\leq i \leq 2^\nu-2$, $A_i$ is cyclic of order $2$.
For $1\leq k \leq 2^{\nu-1}-2$, $A_{2k-1}$ and $A_{2k}$ are generated by
\[
\mathbf{u}_{2k-1}:=\Psi_{w_{2k-1}}-\Psi_w ~~\text{ and }~~\mathbf{u}_{2k}:=\Psi_{w_{2k-1}}+\Psi_{w_{2k}}-\Psi_w-\Psi_{w'}, ~\text{respectively}.
\]
And $A_{2^\nu-3}$ and $A_{2^\nu-2}$ are generated by
\[
\mathbf{u}_{2^\nu-3} :=\Psi_{w_{2^\nu-3}}-\Psi_w ~~\text{ and }~~\mathbf{u}_{2^\nu-2}:=\Psi_{w_{2^\nu-3}}-\Psi_{w_{2^\nu-2}}, ~\text{respectively}.
\]
Moreover, the action of the Hecke operator $T_q$ on each group is as follows:
\[
T_q(\Psi_w)=(1+2n)\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i};
\]
\[
T_q(\mathbf{u}_{2k-1})=\mathbf{u}_{2k-1}+\mathbf{u}_{2k} {\quad \text{and} \quad} T_q(\mathbf{u}_{2k})=\mathbf{u}_{2k} ~~\text{ for }~ 1 \leq k \leq 2^{\nu-1}-2;
\]
\[
T_q(\mathbf{u}_{2^\nu-3})=2n \Psi_w+\mathbf{u}_{2^\nu-3}+\sum_{i=1}^{2^{\nu-1}-2} \mathbf{u}_{2i} {\quad \text{and} \quad} T_q(\mathbf{u}_{2^\nu-2})=\mathbf{u}_{2^\nu-2}.
\]
\end{prop}
\begin{proof}
The argument in Proposition \ref{prop: case 2} applies \textit{mutatis mutandis}. For instance, when $\nu\geq 2$ an isomorphism between $A_0 \bigoplus_{i=1}^{2^\nu-2} \<\Psi_{w_i}-\Psi_w\>$ and $A_0\oplus \mathbf{A}$ can be given by the following data:
for $1 \leq k \leq 2^{\nu-1}-2$,
\[
\Psi_{w_{2k}}-\Psi_w=\mathbf{u}_{2k}+\mathbf{u}_{2k-1}+(\Psi_{w'}-\Psi_w) \text{ and }
\Psi_{w}-\Psi_{w'}=2n \Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i};
\]
\[
\Psi_{w_{2^\nu-2}}-\Psi_w=\mathbf{u}_{2^\nu-3}+\mathbf{u}_{2^\nu-2}.
\]
The action of the Hecke operator $T_q$ on each $A_i$ is clear except
\[
T_q(\Psi_w)=\Psi_{w'}=\Psi_{w}-(\Psi_{w}-\Psi_{w'})=(1+2n)\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i},
\]
\[
T_q(\mathbf{u}_{2^\nu-3})=\Psi_{w_{2^\nu-2}}-\Psi_{w'}=\mathbf{u}_{2^\nu-3}+\mathbf{u}_{2^\nu-2}+(\Psi_{w}-\Psi_{w'})=2n\Psi_{w}+\mathbf{u}_{2^\nu-3} + \sum_{i=1}^{2^{\nu-1}-2} \mathbf{u}_{2i}.
\]
\end{proof}
\subsection{Case 4: $(u, v)=(1, 1)$ and $\nu\geq 1$}$~$
Note that $s_4=s_6=2^\nu$ and $s_2=\frac{6n-5\cdot 2^\nu}{6}$.
Let $\Sigma_4=\{w_1, \dots, w_{2^\nu} \}$ and $\Sigma_6:=\{ t_1, \dots, t_{2^\nu} \}$. As before, we assume that ${\operatorname{Frob}}(w_{2k-1})=w_{2k}$ and ${\operatorname{Frob}}(t_{2k-1})=t_{2k}$ for $1 \leq k \leq 2^{\nu-1}$. Let $w:=w_{2^\nu-1}$ and $w':=w_{2^\nu}$. Also, let $t:=t_{2^\nu-1}$ and $t':=t_{2^\nu}$.
\begin{prop}\label{prop: case 4}
The component group $\Phi_q(Nq)$ decomposes as follows:
\[
\Phi_q(Nq)=A_0 \oplus \mathbf{A} \oplus \mathbf{B},
\]
where $A_0$ is cyclic of order $12n$ generated by $\Psi_w$.
The structures of $\mathbf{A}$ and $\mathbf{B}$ are the same as those in Propositions \ref{prop: case 2} and \ref{prop: case 3}. The actions of $T_q$ on $\mathbf{A}$ and $\mathbf{B}$ are the same as before except on $A_{2^\nu-3}$ (when $\nu \geq 2$), where $T_q$ acts by
\[
T_q(\mathbf{u}_{2^\nu-3})=6n \Psi_w+\mathbf{u}_{2^\nu-3}+\sum_{i=1}^{2^{\nu-1}-2} \mathbf{u}_{2i}.
\]
Moreover, the action of $T_q$ on $A_0$ is analogous as before:
\[
T_q(\Psi_w)=(1+6n)\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i}.
\]
\end{prop}
\begin{proof}
Note that from (\ref{eqn: relations}) we have
\[
s_2\Psi_s+\Psi_{w_1}+\cdots+\Psi_{w'}+\Psi_{t_1}+\cdots+\Psi_{t'}=0.
\]
Multiplying by $3$, we have
\begin{equation}\label{eqn: 2}
\Psi_{w_1}+\cdots+\Psi_{w'}=-(3s_2+2\cdot 2^{\nu})\Psi_s=-(6s_2+4 \cdot 2^\nu)\Psi_w.
\end{equation}
Also, multiplying by $4$, we have
\begin{equation}\label{eqn: 3}
\Psi_{t_1}+\cdots+\Psi_{t'}=-(4s_2+3\cdot 2^\nu)\Psi_s=-(12s_2+9\cdot 2^{\nu})\Psi_t.
\end{equation}
Therefore $\Psi_{w_1}, \dots, \Psi_w, \Psi_{t_1}, \dots, \Psi_t$ can generate the whole group.
By a similar computation, the order of $\< \Psi_{w_i} \>$ is $12n$ and the order of $\<\Psi_{t_i} \>$ is $18n$. All of them contain $\Phi$ as a subgroup, which is of order $6n$. Here we note that $\< \Psi_t \>=\< 3\Psi_t \> \oplus \< 6n\Psi_{t} \>$ because $6n=6s_2+5\cdot 2^\nu$ is prime to $3$.
Therefore we can decompose $\Phi_q(Nq)$ into
\begin{equation}\label{eqn : 5}
\< \Psi_w \> \bigoplus_{i=1}^{2^\nu-2} \< \Psi_{w_i}-\Psi_w \> \bigoplus_{i=1}^{2^\nu-2} \< \Psi_{t_i}-\Psi_t \>\bigoplus \< 6n\Psi_t \>.
\end{equation}
As in Propositions \ref{prop: case 2} and \ref{prop: case 3}, we can find an isomorphism between (\ref{eqn : 5}) and $A_0\oplus\mathbf{A} \oplus \mathbf{B}$, which proves the first part.
From (\ref{eqn: 2}) (and the previous discussions) we have
\[
\Psi_{w}-\Psi_{w'}= (6s_2+5\cdot 2^\nu)\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i}=6n\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i}.
\]
The action of $T_q$ on each component is also obvious except
\[
T_q(\Psi_w)=\Psi_{w'}=\Psi_w-(\Psi_{w}-\Psi_{w'})=(1+6n)\Psi_w+\sum_{i=1}^{2^{\nu-1}-1} \mathbf{u}_{2i} ~~\text{ and}
\]
\[
T_q(\mathbf{u}_{2^\nu-3})=\Psi_{w_{2^\nu-2}}-\Psi_{w'}=\mathbf{u}_{2^\nu-3}+\mathbf{u}_{2^\nu-2}+(\Psi_{w}-\Psi_{w'})=6n\Psi_{w}+\mathbf{u}_{2^\nu-3} + \sum_{i=1}^{2^{\nu-1}-2} \mathbf{u}_{2i}.
\]
\end{proof}
\subsection*{Acknowledgements}
The second author would like to thank Kenneth Ribet for a number of very helpful discussions about Eisenstein ideals and component groups. The anonymous referee deserves special thanks for a thorough reading of the manuscript and for many useful comments and suggestions.
This work was supported by IBS-R003-D1.
\bibliographystyle{annotation}
In the age of autonomous driving, researchers and companies are getting ever-so-close to enabling cars to generate driving behavior that includes reaching the destination while satisfying safety constraints, like not colliding with other cars or pedestrians.
Once autonomous cars attain that level of capability, initially, they might be able to generate, for each driving situation, only one solution trajectory (or behavior) that satisfies these safety and feasibility constraints. But really, many solutions exist -- there are many ways to drive. This depends on the individual trade-offs that each driver makes. We have an existence proof for that. Some of us are more \emph{aggressive} drivers, valuing efficiency and being comfortable getting close to other cars on the road. Others are more \emph{defensive}, a bit more conservative when it comes to safety, leaving a large distance to the next car for example, or quickly braking when someone attempts to merge in front.
Soon after we are able to generate \emph{one} feasible behavior, we will be asking ourselves \emph{which} behavior we should try to generate: what driving \emph{style} should an autonomous car have? There is a natural answer to this question: cars should do what users want them to \cite{kuderer2015learning, Here360, karjanto2015comfortstyle}. If the user drives aggressively, so should the car. The car should borrow the user's driving style (though not the imperfections). This is very apparent from the expression ``back seat driving'', which suggests that people want the driver to do what they would do.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figs/defensiveness_hri.pdf}
\caption{We first get data from user driving in different scenarios, and in a second session ask them to compare their own style (without knowing it is theirs), a more defensive style, and a more aggressive style. Participants tended to prefer a more defensive style than their own, but mistakenly thought they were actually picking their own. }
\label{fig:set up}
\end{figure}
Prior work has focused on identifying the user's driving style, via Inverse Reinforcement Learning \cite{kuderer2015learning,sadighinformation,lam2015efficient}.
In all of them, the underlying assumption is that we want cars to match our driving style: that we want them to drive like us.
In this paper, we challenge this assumption, and hypothesize that users want a driving style that is different from their own. We design and conduct a user study to start analyzing the potential differences between how users drive and how they want to be driven. Our study, conducted in a driving simulator, has two parts: first, the users come in and demonstrate their driving in different environments; second, at a later date, the same users come in and test four driving styles: their own (though they do not know it is their own), an aggressive style, a defensive style, and another user's style. We measure their preference for these styles, as well as the perceived similarity to their own style.
Our results suggest that there is truth to both sides:
\begin{quote}
Users do not actually want the car to drive like they drive. Instead, they want the car to drive like they \emph{think} they drive.
\end{quote}
We found a significant difference between users' own styles and their preferred styles, with users typically preferring more defensive driving when they are passengers. However, we also found a strong correlation between the style that users preferred and the style that users perceived as closest to their own. There was little correlation, however, between what they thought was their own style and what \emph{actually} was their own style.
Overall, our work does not contradict the need for customization, but suggests that it might not be sufficient to learn how the user drives. Instead, we need to learn how the user actually wants to be driven. This raises challenges for learning, because we can no longer rely on demonstrations -- users can easily demonstrate how they drive, but they might not be able to demonstrate the driving style they want. Instead, we need to rely on different kinds of input and guidance from users in the learning process.
Furthermore, there is a tension between what users think they want (their style) and what they actually want (a more defensive style). On the brighter side, our results suggest that the learned output should be easily accepted by users: when the car drives in the preferred style, chances are users will perceive it as their own style anyway!
We define \textit{driving style}, informed by prior work, in Related Work. In the Methods section, we state our hypothesis and describe the manipulated variables, the simulation environment, the user studies, and the confounds; there, we also present a quantitative measure of driving style in terms of driving features derived from prior research. The rest of the paper is organized into Results and Discussion.
\section{Related Work}
The typical behavioral patterns of a driver are usually referred to by the term \emph{driving style}. This includes the choice of driving speed, headway, overtaking of other vehicles, and the tendency to commit traffic violations \cite{van2015measuring}.
Defensiveness-aggressiveness is the most commonly used metric for defining driving style. Prior work refers to drivers as aggressive/assertive versus defensive \cite{karjanto2015comfortstyle}; or mild versus moderate versus aggressive \cite{xu2015establishing}. In the Multidimensional Driving Style Inventory (MDSI), Taubman-Ben-Ari \textit{et al.} identified four broad driving styles: (1) reckless and careless driving, characterized by, for example, higher speed; (2) anxious driving; (3) angry and hostile driving, characterized by more use of the horn and flash functionality; and (4) patient and careful driving \cite{taubman2004multidimensional}. Similarly, Huysduynen categorized driving style as angry driving, anxious driving, dissociative driving, distress-reduction driving and careful driving style \cite{van2015measuring}. Horswill \textit{et al.} provided a valuable distinction between skill and style in the context of driving behaviors \cite{horswill1999}. Hong \textit{et al.} \cite{hong2014smartphone} differentiated styles in terms of defensiveness, as well as by propensity for violation of rules. Scherer defined driving style in terms of comfort \cite{scherer2015driver}. Lee \textit{et al.} \cite{lee2004comprehensive} analyzed lane changes as a function of its severity (degree to which the vehicle in the destination lane was cut off), urgency (how soon the lane change was needed), and type classification for the full population of 8,667 lane changes.
\textit{We focus on driving style based on degree of defensiveness.}
Driving style is a ``humanized driving'' quality \cite{Here360}. Hence, most of the driving style literature relates to understanding and modeling human driver behavior, in very specific traffic situations or contexts, like lane changing \cite{lee2004comprehensive, salvucci2004inferring, mandalia2005using}, intersection crossing \cite{hong2014smartphone, banovic2016modeling, elhenawy2015modeling}, car following \cite{brackstone1999car}, and in terms of driving actions specific to those contexts (e.g., throttle and braking level, turning) and features thereof (e.g. rate of acceleration, rate of deceleration, maximum speed in a time window).\emph{ We define driving defensiveness in our work as an aggregate of driving features in various driving scenarios}. Therefore, in our study, we present a combination of all of the aforementioned traffic conditions and scenarios to our participants.
Research on driving styles has been extended to autonomous cars in two forms. One body of work includes exploratory studies on understanding how explicitly-defined driving styles relate to comfort \cite{scherer2015driver}. The second body of work encompasses research on ways to teach an autonomous car how to drive from human demonstrations \cite{abbeel2004apprenticeship, ziebart2008navigate, kuderer2015learning,silver2010learning}. Both these groups assume that an autonomous car should learn their own user's driving style or driving behavior.
\section{Methods}\label{sec:methods}
\subsection{Hypothesis}
Because being a passenger is a different experience than being a driver, we hypothesize that:
\noindent\textbf{H.}
\emph{Users of autonomous cars prefer a driving style that is significantly different than their own.}
\subsection{Study Design}
In order to test our hypothesis, we leverage a driving simulator, and let users experience and evaluate autonomous cars with different driving styles, including their own style (without their knowledge).
We conducted a study in two parts. In the first part we collected driving data of participants in a simulation environment, so that we could let them experience their own style in the second part of the study.
\subsection{Manipulated Variables}
We manipulated the driving styles of autonomous cars at four levels of defensiveness: \be{aggressive}, \be{defensive}, \be{own style}, and a \be{distractor style} (a different participant's style). Users did not know whether any of the styles were their own. We also consciously avoided using the phrase ``driving style'' at any time during the studies, as well as in the pre-study screening.
We define the \emph{defensiveness} of the style objectively, as a function of several driving features (e.g., distance to other cars -- the larger the distance, the more defensive the driving). We use features informed by existing literature. We describe them in \sref{sec:features}.
We created the aggressive and the defensive styles of driving by demonstration, and then validated these styles using our driving features (see our Manipulation Check \sref{sec:manipulationcheck}).
\subsection{Simulator and Driving Tasks}
We conducted both parts of the study in a simulation environment. Our simulation environment consisted of a standard classroom projection screen and table in front of the screen fitted with Logitech G920 steering wheel, brake, and gas pedal. We used the OpenDS driving simulation software \cite{opends2016} for running each of the driving simulations. The simulation platform was set up on a standard PC augmented with NVIDIA GeForce GTX 1070 and was hidden from the participants' view.
In the first part, the participants drove on a 9.6 mile long test track that consisted of 14 different driving tasks designed using the City Engine software (\figref{fig:track}).
We define a \emph{driving task} as a sequence of driving maneuvers in response to specific traffic conditions. For each task there are two to three simulated traffic conditions that resemble everyday traffic, so as to elicit natural driving behavior from the participant.
In the second part of the study, the participants experienced 6 of these 14 tasks, each performed by autonomous cars of four different styles.
\subsection{Procedure}
Before the driving session in part one of the study, we familiarized participants with the driving simulator. We asked each participant to practice on two different test tracks until they felt that they were driving as they would in their everyday driving. The first track had several traffic signals and turns, and the second one was on congested city roads with several traffic cars. Their driving was assisted by voice navigation. There were road signs for speed change zones, speed limits, sharp turns, and entry to and exit from the expressway. We instructed the participants to drive as they would on actual roads and to treat the speed limits the way they would in their usual driving. This practice session lasted 5-10 minutes for each participant.
Participants then began the first part of the study, which consisted of 15-20 minutes of driving along the 14-task test track, followed by a 10-minute interview.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figs/designed_route.pdf}
\caption{Designed track: Tasks (shown in the list below the figure) are indicated in square brackets. Total road stretch is 9.6 miles.}
\label{fig:track}
\end{figure*}
In the second part of the study, the autonomous cars performed six tasks (combined into four test tasks) from this list with the participant as a passenger, shown in bold letters on the list in \figref{fig:track}. To simplify, we combined the second and the third tasks in the list, i.e., lead car slows down forcing lane change and merge back to right lane into a single test task, which we refer to as Task 1 in the rest of the paper. Likewise, we combined the sixth and the seventh tasks into a single test task, called Task 2 in the rest of the paper. Thus, each autonomous car performed four test tasks in total.
Two of the test tasks were on the expressway and lasted approximately 4 minutes for each style; the other two tasks, on the inner city roads, were shorter than 2 minutes.
After the participants had driven in an autonomous car of each driving style for each of the test tasks, we conducted a short interview-based survey with each participant.
\subsection{Dependent Measures}
\label{sec:dependent}
\prg{Perceived similarity to real driving}
In the first part of the study we conducted a post-driving open-ended interview with the participants to understand whether the manual driving in the simulation environment resembled their everyday driving. We asked the following three questions, each followed by a request for more elaboration:
\begin{enumerate}
\item Did you enjoy the drive?
\item Are there any positive or negative aspects of the simulation environment, the driving controls and the traffic conditions that you would like to mention?
\item On a scale of +3 to -3 \cite{ashrae2010}, please rate how similar or different this experience is from your daily driving.
\end{enumerate}
\prg{Open-ended responses} In the second part we asked each participant to think aloud about their emotions and feelings as they were experiencing autonomous driving.
\prg{Main subjective measures: Preference and perceived similarity to own style} After a participant had experienced each autonomous style for a given task, we conducted an interview-based survey. We asked the participants to rate each style of driving for \emph{comfort}, \emph{safety}, \emph{preference for everyday use}, and \emph{similarity with their own driving} on 7 point Likert scale.
\prg{Main objective measures: Driving style features and overall defensiveness}
\label{sec:features}
We measured the user's style quantitatively using task specific driving features, derived from existing literature. We carefully considered the contexts and subject demographics of each of these existing studies to ensure as much similarity in the context as possible with our study.
For car following, lane changing, and return to preferred lane, we selected the features described by Lee \textit{et al.} in ``A Comprehensive Examination of Naturalistic Lane-Changes'' \cite{lee2004comprehensive}. This study analyzed the largest naturalistic lane change dataset and specifically labelled lane change data resulting from the slowing down of the leading car. The speed range of 45 mph to 55 mph matches our driving conditions. Their dataset consisted of 8,667 lane changes over 23,949 miles of driving from 16 commuters aged 20 to 60. They studied car following, lane changing, and return to preferred lane in terms of distance, time to collision, and relative speed, classified by severity and urgency of lane change.
The features for tasks like turning at the intersection with a green light or stop light were derived from our preliminary interview with the participants and from Hong \textit{et al.} \cite{hong2014smartphone} and Banovic \textit{et al.} \cite{banovic2016modeling}.
\begin{table*}
\centering
\begin{tabular}{p{2in}|p{4in}}
Features & Definitions \\
\hline
Mean Distance to Lead Car & During car following (within 200 meters), the average distance between the middle of the driver car and the lead car.\\
Mean Time Headway & During car following (within 200 meters), the average time headway, defined as the ratio of distance headway to the speed of the driver car.\\
Time Headway during Lane Change & Distance headway divided by the speed of the driver car during a lane change.\\
Distance Headway during Lane Change & Distance between the middle of the driver car and the lead car during a lane change.\\
Distance Headway Merge Back & The same as Distance Headway during Lane Change, except measured between the driver car and the following car in the destination lane.\\
Braking Distance from the Intersection & The distance from the intersection at which a person starts applying the brakes.\\
Time To Stop & Braking distance divided by the speed of the car right before the brake is applied.\\
Maximum Turn Speed & Maximum speed of the driver car over a time window during a left or right turn.\\
Speed at the Intersection & Instantaneous speed at the intersection.\\
Average Speed for 20 meters before Intersection & The speed of the driver car averaged over a distance range of 20 meters before the intersection.\\
\hline
\end{tabular}
\caption{Features for style classification}
\label{tab:features}
\end{table*}
Table \ref{tab:features} summarizes all the features for the four driving test tasks. We used \textit{mean distance to lead car}, \textit{mean time headway}, \textit{time headway during lane change}, and \textit{distance headway during lane change} as features for Task 1 and Task 2. Task 1 had an extra feature \textit{distance headway merge back} for scoring the merge back behavior to the right lane.
Task 3 consisted of two sub-tasks (approaching intersection at a stop light and then making a left turn at green ball). We characterized this task with 5 features: \textit{Braking Distance from the intersection}, \textit{Average speed for 20 meters before intersection}, \textit{Time To Stop}, \textit{Speed at the intersection}, and \textit{Maximum turn speed}.
Task 4 constituted approaching intersection at green ball and then turning right without stopping. The features for this tasks are \textit{Speed at the intersection} and \textit{Maximum Turn Speed}.
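To make the feature definitions above concrete, the following is a minimal sketch of how a few of them could be computed. The input format (per-timestep gap and speed samples, in meters and meters per second) is an assumption for illustration, not the study's actual logging pipeline.

```python
# Illustrative sketch: computing a few Table 1 features from simulator logs.
# The input format (per-timestep gap/speed samples in meters and m/s) is an
# assumption for illustration.

def mean_time_headway(gaps_m, speeds_mps):
    """Mean time headway during car following: distance headway / own speed."""
    headways = [g / v for g, v in zip(gaps_m, speeds_mps) if v > 0]
    return sum(headways) / len(headways)

def time_to_stop(braking_distance_m, speed_at_brake_mps):
    """Braking distance divided by the speed right before the brake is applied."""
    return braking_distance_m / speed_at_brake_mps

def max_turn_speed(turn_speeds_mps):
    """Maximum speed of the car over the time window of a turn."""
    return max(turn_speeds_mps)

# A 30 m gap held at 15 m/s corresponds to a 2 s time headway.
assert mean_time_headway([30.0, 30.0], [15.0, 15.0]) == 2.0
```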
We objectively measured a participant's overall driving style in terms of a \be{Defensiveness Score}. We first normalized the feature values across participants for each feature irrespective of the task. We calculated a \textit{Defensiveness Score} for each participant and for each task as the average over all the normalized feature values for that participant and task. We then computed an \textit{Aggregate Defensiveness Score} for each participant by averaging their scores across the four test tasks.
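The scoring procedure just described can be sketched as follows; the nested-list input layout and the feature name used in the example are illustrative assumptions.

```python
# Minimal sketch of the Defensiveness Score computation described above.
# raw[p][t] is a dict of feature name -> raw value for participant p, task t
# (this layout is an assumption for illustration).
from statistics import mean, pstdev

def defensiveness_scores(raw):
    """Return (per-task scores, aggregate score) for each participant."""
    # 1) Normalize each feature across participants, irrespective of task,
    #    so all features share a common scale.
    pooled = {}
    for tasks in raw:
        for feats in tasks:
            for name, value in feats.items():
                pooled.setdefault(name, []).append(value)
    stats = {n: (mean(v), pstdev(v)) for n, v in pooled.items()}

    def z(name, value):
        mu, sigma = stats[name]
        return (value - mu) / sigma if sigma > 0 else 0.0

    # 2) Per-task score: average of that task's normalized feature values.
    per_task = [[mean(z(n, v) for n, v in feats.items()) for feats in tasks]
                for tasks in raw]
    # 3) Aggregate score: average over the participant's test tasks.
    aggregate = [mean(scores) for scores in per_task]
    return per_task, aggregate
```

With two participants and a single feature, the participant with the larger (more defensive) raw value ends up with the higher score, as intended.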
\subsection{Manipulation Check}
\label{sec:manipulationcheck}
We performed a manipulation check on our aggressive and defensive driving styles. We measured the aggregate defensiveness score for each style, plotted on the bottom right of \figref{fig:features}. We found that indeed, the aggressive style was less defensive than the defensive style (lower defensiveness score). We found that 86.67 \% of the users' styles scored higher than the aggressive style, and lower than the defensive style. This suggests that the two reference driving styles created by demonstrations resulted in meaningful representations of aggressive and defensive driving.
\subsection{Participants}
\prg{Subject Allocation} We opted for a within-subjects allocation because the participants needed to choose a preferred style out of the set of available ones. We randomized the order of the conditions.
\prg{Demographics} We recruited 15 participants, a mix of graduate and undergraduate students. Before the study we sent out a screening form to each participant in order to ensure a wide distribution of demographics, driving experience and perceived driving behaviors. We also checked for a valid driving license. Three of our participants were 30 to 31 years old; the rest were 18 to 24 years old.
The mean driving experience of the participants was 5.46 years with a standard deviation of 4.5 years. Participants had driven an average 214 miles with a standard deviation of 188 miles on the week before they filled out the screening form.
We asked the participants to give us some information about their perceived driving behavior using the following questions:
\begin{enumerate}
\item Please rate whether you consider yourself a conservative or an adventurous driver on a 7-point scale, 1 being \textit{conservative} and 7 being \textit{adventurous}.
\item Please rate on a 7-point scale what you like about driving, 1 being \textit{joy of motion} (like feeling the force as you accelerate) and 7 being \textit{comfort of steadiness}. You may like some of both.
\item Rate on a 7-point scale whether you vary your driving by road conditions, traffic, and time availability, 1 being \textit{vary always} and 7 being \textit{I don't vary at all}.
\item Please rate your driving experience from somewhat experienced to very skillful.
\end{enumerate}
The purpose of these questions was to acquire some information about the participants' driving styles without explicitly using the term \textit{style}, i.e., without giving away the original goal of the study.
Approximately 46 \% of the participants considered themselves well experienced in driving, and 20 \% considered themselves experienced. The rest were equally distributed between somewhat experienced and very skillful. The mean score for perceived conservative-adventurous driving behavior was 3.6. Most of the participants considered themselves to be in the middle of the spectrum; only one participant considered himself conservative. More participants preferred comfort of steadiness over joy of motion, the average rating being 4.46. The mean rating for variation of driving style in response to environment and traffic was 3, which means most participants believed that they alter their driving behavior according to traffic.
\subsection{External Validity and \\Controlling for Confounds}
\prg{Driving environment} We used a simulator and not real autonomous cars. However, we designed a simulation track and traffic conditions so as to elicit natural driving responses. We also collected participant feedback in the first part of the study on the simulation environment and how their driving behavior in the simulated track related to their actual driving behavior.
\prg{Masking own style}
One of the major challenges of this work was to ensure that a participant could not recognize his or her driving style from simulation peculiarities like scenes, traffic and controls. We wanted the participants to only recognize their driving style based on their traffic maneuvers and actions. We took several steps to camouflage the driving data of a participant in the second part of the study:
\begin{itemize}
\item We retained the traffic conditions and route from the first part of the study while changing the surrounding scenes and traffic cars, such that we could replicate the user's driving while removing the bias of a familiar environment.
\item We let the participant perform approximately 14 driving actions in the first part of the study and picked only some of these tasks for the second part of the study.
\item During the second part of the study we presented the tasks in an order different from how they occurred in the manual mode. For example, in the first part, the participants first entered the expressway and performed some driving actions there, then exited the expressway and performed more maneuvers on the city roads. During the second part, we presented one city road task and one expressway task in alternating order.
\item We presented the four styles for each of the test driving tasks in a randomized order, which made it more difficult to consistently recognize one style.
\item We post-processed the users' driving to remove peculiarities, which we explain below.
\end{itemize}
During our pilot studies we found that, due to some peculiarities of the simulation environment (over-sensitive steering, less sensitive braking) and the resultant jitter in the driving data, some participants were able to recognize their own driving in the second part of the study. For example, a participant mentioned: \emph{``This looks like how I was driving. I had to stop at the intersection because I pressed the brake too early. The brake was tight.''}
In order to eliminate these idiosyncrasies of the simulation environment, we changed the brake stiffness and steering sensitivity and presented participants with a smoothed version of their data in the second part of the study.
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.4]{figs/allfeatures.pdf}
\caption{Participants' feature distribution}
\label{fig:features}
\end{figure*}
\subsection{Trajectory Smoothing}
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.8]{figs/sample_trajectory.pdf}
\caption{Smoothed trajectory compared to original trajectory of task 1 of one participant at 15 \% smoothing}
\label{fig:smoothed_trajectory}
\end{figure}
We filtered the driving trajectories to eliminate idiosyncrasies that make the trajectory instantly recognizable.
We applied a Bilateral Filter \cite{tomasi1998bilateral} to reduce the lateral variance (or equivalently, the variance of the lateral displacements from the center of the lane) of the trajectories. By affecting only the lateral components of the trajectory, this filtering preserves distance between the cars. We applied filtering only to the stretches of the trajectory on the expressway.
\figref{fig:smoothed_trajectory} shows a smoothed trajectory for one participant. It has 15 \% lower lateral variance than the original trajectory.
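To make the filtering step concrete, the following is a minimal sketch of a one-dimensional bilateral filter applied to the lateral displacement signal; the parameter values and signal layout are illustrative assumptions rather than the settings actually used in the study.

```python
import numpy as np

def bilateral_smooth_lateral(lateral, sigma_s=5.0, sigma_r=0.5):
    """Edge-preserving smoothing of the lateral displacement signal.

    lateral: 1-D array of displacements from the lane center, one per
    time step. Only this component is filtered, so longitudinal
    position (and hence inter-car distance) is untouched.
    """
    n = len(lateral)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        # spatial (time-index) weight times range (value) weight
        w = (np.exp(-0.5 * ((idx - i) / sigma_s) ** 2)
             * np.exp(-0.5 * ((lateral - lateral[i]) / sigma_r) ** 2))
        out[i] = np.dot(w, lateral) / w.sum()
    return out
```

Unlike a plain Gaussian blur, the range weight (the second factor) preserves deliberate lane changes: samples on the far side of a large lateral jump receive little weight, so only the small jitter is averaged away.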
\section{Results} \label{Results}
\subsection{Simulation Realism }
In the first part of the study, in addition to collecting user driving data, we also wanted to ensure that this driving data corresponded to participants' everyday driving as much as possible. We conducted a post-driving interview, as described in the Dependent Measures subsection (\sref{sec:dependent}). Here we present the results of the interview.
The rating mode for similarity between driving on the road and driving in our study simulator was +1 on a -3 to +3 scale. Four participants gave a rating of +2. Some of their positive comments were: ``Not considering the room environment and just looking at the simulation graphics and the car, it was pretty much the same environment as real. I would give +3 for surrounding traffic conditions.'' Other participants said that they felt relaxed in the simulator environment and that they could drive cautiously as they would in real traffic.
One participant who rated the driving experience similarity -2 complained about the lack of motion feedback in the system. This is the same participant who gave a high rating for \textit{joy of motion} in the screening question. However, no other participant had the same concern, and the rest adjusted well to the simulation environment.
Most of the participants who rated +1 to -1 found steering re-centering or brake insensitivity difficult. We also received quite opposite feedback from two participants when they compared their everyday driving to the simulator driving. For example, one participant mentioned: ``It felt real. It was something I could get used to after driving a while. The gas and brakes were more sensitive than my car.'' Another participant felt that the brakes were excellent, different from a regular car.
One participant reported that she was so immersed after driving for a while that she caught herself turning her head back to check for oncoming traffic in the destination lane. We found that participants with one or fewer years of driving experience could not use the simulation environment properly. Overall, the ratings and the comments supported that the simulator conditions were not \emph{too} far from real conditions.
\subsection{Feature Distribution for Participant Styles}
We define driving style in terms of the features mentioned in \sref{sec:methods}.
\figref{fig:features} shows, for each task, feature, and participant, what the participant's feature value was for that task (blue marks). The figure also shows the aggressive style values in red and defensive style values in green.
More negative values correspond to more aggressive behavior. All feature values are arranged from aggressive on the left to defensive on the right; for features like speed, where lower values mean more defensive driving, we show and use the negation of the feature.
The bottommost plot to the right shows the aggregate defensiveness score. This score is derived from the normalized feature values.
60 \% of the participants were within 0.75 standard deviations on the aggressive side, and 40 \% within 0.75 standard deviations on the defensive side. Only two of the participants were more defensive than the defensive autonomous car, one of them being very close to the defensive car in the score.
\begin{quote}
\emph{When looking at the aggregate defensiveness, most participants lie between the aggressive and defensive styles. }
\end{quote}
There are, however, exceptions for particular features in particular tasks.
For task 1, several participants were more defensive than the defensive autonomous car. For the last feature of task 1, Distance Headway Merge Back, the aggressive car was not as aggressive as several participants and even our defensive car. In task 2, the aggressive and the defensive autonomous cars enclosed a middle section of the spectrum for Mean Distance Headway and Mean Time Headway. In other words, several participants were more aggressive and more defensive than the aggressive and defensive autonomous cars respectively. This is because these features were measured during car following over a long time span and are expected to have wider distributions than features characterizing instantaneous actions.
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.5]{figs/preference_actual_barchart.pdf}
\caption{Mean Defensiveness Score Across Participants. The corresponding scores of aggressive and defensive autonomous cars are Task 1: (-0.768, -0.222), Task 2: (-0.885, 1.325), Task 3: (-1.82, 0.766), and Task 4: (-1.49, 0.72).}
\label{fig:user preference}
\end{figure}
\subsection{Preferred Style in Relation to Own Style}
\label{sec:preferredvsactual}
We asked participants to rate how much they would prefer driving with each style, for each task. We refer to the highest rated style(s) as the participant's preferred style(s).
Our main finding is that overall, users preferred a different style than their own. A total of 9 out of 15 participants preferred a different style than their own on at least one of the tasks. A matched pairs $t$-test comparing actual and preferred defensiveness score showed a significant difference ($t(1,60)=-2.58$, $p=.0121$), supporting our hypothesis. Here, whenever a user's highest rating was for multiple styles as opposed to a single one, we included each preferred style as a data point.
\begin{quote}
\emph{Overall, people prefer a significantly more defensive style than their own.}
\end{quote}
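The matched-pairs statistic is simple enough to spell out explicitly. The sketch below computes it by hand on synthetic scores; the data and effect size are invented for illustration, and a library routine for the paired $t$-test would normally be used instead.

```python
import numpy as np

def paired_t(a, b):
    """Matched-pairs t statistic (and degrees of freedom) for the
    per-pair differences a - b."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Synthetic stand-in: one (actual, preferred) defensiveness score per
# participant-task data point; 'preferred' is shifted defensively.
rng = np.random.default_rng(0)
actual = rng.normal(size=61)
preferred = actual + 0.4 + rng.normal(scale=0.8, size=61)
t, df = paired_t(actual, preferred)  # t < 0 means 'preferred' is more defensive
```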
We also investigated how this breaks down by task, and only found significant effects on the $2^{nd}$ and $3^{rd}$ tasks. See \figref{fig:user preference} for a comparison between the average preferred style and the participants' own style for each of the four tasks. For task 1, we note that several participants were more defensive than any of the autonomous styles presented to them. However, they still preferred our defensive style, which explains why the average choice was more aggressive than the participants' own style.
Interestingly, some participants did not perceive the extra defensive nature of their own style in task 1 positively. One participant mentioned about their own style that ``In this one I felt like we gave a lot of room, more than I would have probably.'' (ironically, since they did \emph{exactly} that). Two other participants made similar comments about their own lane changing behavior. In addition, a few participants considered driving features beyond the ones we accounted for.
For task 2 and task 3, the defensive autonomous car was more aggressive than at most three participants on any feature, and it was more defensive than the rest of the population by a wide margin in features like Distance Headway and Time Headway During Lane Change.
The task had a significant effect on the difference ($F(3,58)=4.13$, $p=.0101$), suggesting that people's preferences for a driving style are not consistent, but rather \emph{change based on the context.} This motivates future research on predicting the desired driving style not just based on the individual, but also based on the current driving context.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figs/preferredvsperceived.pdf}
\caption{Scatter plot showing correlation between the style that users \emph{thought} was their own and the style that they chose as their preferred.}
\label{fig:perceived_preferred}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figs/perceivedstyle_scatter.pdf}
\caption{Scatter plot showing little correlation between own style and perceived own style: users did not tend to identify their own style correctly, as evidenced by the off-diagonal points.}
\label{fig:perceived_own}
\end{figure}
\subsection{Perceived Own Style \\in Relation to Actual Own Style}
We also asked participants to rate each style in terms of similarity to their own. From this, we learned what participants \emph{perceived} their own style to be.
We found that even though participants did not pick their \emph{actual} style as their preferred (Sec. \ref{sec:preferredvsactual}), participants did tend to prefer their \emph{perceived} style. On each task, between 80 and 93\% of participants opted for the same style as the one they \emph{thought} was the closest to their own (and sometimes rated other styles as well as equally good). We found a significant correlation between the perceived own and preferred styles, $r(58)=.86$, $p<.0001$. \figref{fig:perceived_preferred} shows a scatter plot of preferred style by perceived style, with many points on the diagonal representing users who preferred driving in the style they thought was (closest to) their own.
However, even though the majority of participants thought that they were picking their own style, they really were not. A total of 46 to 67\% of participants on each task did not correctly identify their actual own style, and the correlation between perceived and actual defensiveness score was only $r(56)=.40$ across tasks. \figref{fig:perceived_own} paints a different picture from \figref{fig:perceived_preferred}: it plots the perceived style against the \emph{actual} own style, showing many off-diagonal points, representing users who did not correctly identify their style.
In task 1 we see that several participants perceived themselves to be slightly more aggressive irrespective of their actual style. Likewise, both for task 2 and task 3 several participants perceived themselves to be more defensive irrespective of their actual style.
\begin{quote}
\emph{Participants tended to prefer the style that they thought was their own, but in fact that style had little correlation to their actual own style. }
\end{quote}
\section{Discussion}
\noindent\textbf{Summary.} We hypothesized that users of future autonomous cars would prefer a driving style that is significantly different than their own. We conducted a user study in a driving simulator to test our hypothesis. We found that users preferred a more defensive style than their own. This echoes the finding from prior work \cite{horswill1999} that when people are not in control of the driving they prefer lower speeds -- autonomous cars are one instantiation of not being in control of the driving.
Interestingly, over 80\% of users preferred the style that they \emph{thought} was their own, but many times they were incorrect in identifying their own style. These results open the door for learning what the user's preferred style will be, but caution against obtaining driving demonstrations from the user, since people drive the way they do, not the way they want to be driven.
\noindent\textbf{Limitations and Future Work.}
Our work is limited in the following ways:
\begin{itemize}
\item \textbf{Limited driving style features}. Following the most common conventions, we have only characterized style in terms of defensiveness. We also inherited from previous studies the feature choices in defining driving styles.
\item \textbf{Limited driving style choices}. We presented participants with limited options along the spectrum of defensiveness and found that they preferred a style more defensive than their own. However, we did not learn the style they actually desired, only the best out of our few options.
\item \textbf{Limited fidelity of simulation environment}. Our simulation environment does not provide motion feedback, which may limit the users' perception of speed. Although the interview results validated that participants' perception of the driving styles are sufficient, experiment results in a higher fidelity simulation environment might be more accurate.
\end{itemize}
Given the encouraging results presented here, we believe that it is worthwhile to test more diverse feature choices and driving style representations in a higher fidelity setting. It is also worthwhile to explore what features users consider when they evaluate autonomous driving styles. These experiments would provide a more comprehensive evaluation of the findings presented in this paper.
Going further, we are excited to investigate how we might learn a deviation from the user's driving style that is predictive of how they actually want to be driven, and explore new learning techniques that can augment user demonstrations with other types of user input and guidance.
\section{Acknowledgements}
We would like to thank our post-doctoral colleague Santosh Chandrasekhar and undergraduate researcher Joseph Stansil for their contribution in system set-up and pilot studies for this research. This work is partly supported by the Berkeley Deep Drive Center, the Center for Human-Compatible AI, and CITRIS.
\bibliographystyle{abbrv}
The basic motivation of this paper is to present a new class of integral equations arising from relativistic quantum physics. The existence of solutions for these equations is not obvious, and we describe an approach that allows for proofs of existence and uniqueness results for simple examples from that class.
The straightforward extension of the concept of a quantum mechanical wave function to the relativistic case, due to Dirac \cite{dirac_32} (and in different form to Tomonaga \cite{tomonaga} and Schwinger \cite{schwinger}), uses wave functions that, for $N$ particles, are functions of $N$ space-time points, i.e.,
\begin{equation}\label{psidef}
\psi : \big(\mathbb{R}^{1,d} \big)^N \rightarrow \mathbb{C}^k,~~~(x_1,...,x_N) \mapsto \psi(x_1,...,x_N).
\end{equation}
Here, $\mathbb{R}^{1,d}$ stands for (1+$d$)-dimensional Minkowski spacetime, $k \in \mathbb{N}$ depends on the type of particles described by $\psi$ and $x_i = (t_i,{\mathbf{x}}_i) \in \mathbb{R}^{1,d}$ with ${\mathbf{x}}_i \in \mathbb{R}^d$. Because of the occurrence of $N$ time coordinates $t_i$, $\psi$ has been termed a \textit{multi-time wave function} \cite{dice_paper}. The usual single-time wave function $\varphi(t,{\mathbf{x}}_1,...,{\mathbf{x}}_N)$ is straightforwardly contained in $\psi$ as the special case of equal times, i.e., $\varphi(t,{\mathbf{x}}_1,...,{\mathbf{x}}_N) = \psi(t,{\mathbf{x}}_1,...,t,{\mathbf{x}}_N)$, while $\psi$ is a manifestly Lorentz-covariant object.
Apart from being needed for Lorentz invariance, the $N$ time coordinates also make a new type of evolution equation possible that includes direct interactions between the particles. Since interaction cannot occur faster than light in a relativistic setting, these interactions need to have a retarded effect, with a time delay proportional to the distance; that is, the interaction should take place along light cones, as in the Wheeler--Feynman formulation of classical electromagnetism \cite{wf2}. As detailed in \cite{direct_interaction_quantum}, starting from the reformulation of the usual Schr\"odinger equation for $N=2$ as an integral equation, a natural generalization of the equation to the relativistic case leads to the following class of \textit{multi-time integral equations:}
\begin{equation}
\psi(x_1,x_2) = \psi^{\rm free}(x_1,x_2) + \lambda \int dV(x_1') \int dV(x_2') \, G_1(x_1-x_1') G_2(x_2-x_2') K(x_1',x_2') \psi(x_1',x_2').
\label{eq:inteq}
\end{equation}
Here, $\psi^{\rm free}(x_1,x_2)$ is a given solution of free (i.e., non-interacting) relativistic wave equations such as the Klein-Gordon (KG) equation or the Dirac equation in each spacetime variable $x_1, x_2$. We shall focus on the KG equation for which we have
\begin{equation}
(\square_i + m_i^2)\psi^{\rm free}(x_1,x_2) = 0,~~~i=1,2.
\label{eq:freekgmultitime}
\end{equation}
Here, $\square_i = \partial_{t_i}^2 - \Delta_i$ denotes the wave operator acting on $x_i$. Furthermore, $G_1, G_2$ are Green's functions of these equations, $\lambda \in \mathbb{R}$ is a coupling constant, $dV(x_i),~i=1,2$ are the (1+$d$)-dimensional spacetime volume elements, the integrals run over $\mathbb{R}^{1,d}$ and $K(x_1,x_2)$ is the so-called \textit{interaction kernel.}
Eq.~\eqref{eq:inteq} defines the class of integral equations that this paper is about. We shall give more details in Sec.~\ref{sec:inteq} and refer to \cite{direct_interaction_quantum} for details about physical background and motivation.
Another source of motivation for studying \eqref{eq:inteq} is that similar equations can heuristically be derived from quantum field theory (QFT) for the description of bound states of two particles. In fact, the well-known Bethe-Salpeter (BS) equation \cite{bs_equation,greiner_qed} is of the form \eqref{eq:inteq} with a distribution-valued $K$.
So far, relativistic two-particle wave functions have almost exclusively been considered (a) in the non-interacting case, when the task reduces to solving well-known free equations such as \eqref{eq:freekgmultitime}, and (b) in the interacting case with recourse to QFT (see \cite{schweber}). Even if one is only interested in the two-particle wave function $\psi(x_1,x_2)$, the QFT dynamics nevertheless involves a Fock space function, i.e., a collection of $n$-particle wave functions $\psi^{(n)}(x_1,...,x_n)$ for every $n \in \mathbb{N}$. Eq.~\eqref{eq:inteq}, by contrast, only involves one function of eight variables instead of a set of infinitely many functions, each of $4n$ variables. Even more importantly, QFT typically faces the problem of ultraviolet divergences: some of the expressions involved in QFT do not make sense when taken literally but are divergent \cite{folland}. As we shall demonstrate here, Eq.~\eqref{eq:inteq}, by contrast, makes sense as it stands and leads to a well-posed initial value problem. In addition, our proof of the existence and uniqueness of solutions is based on an iteration scheme that might serve as the basis of numerical algorithms. Moreover, the feature that \eqref{eq:inteq} allows one to express \textit{direct interactions with time delay} is new compared to the existing approaches (apart from the BS equation, see \cite{direct_interaction_quantum} for a discussion).
In the mathematical literature, integral equations of the form \eqref{eq:inteq} have, to the best of our knowledge, not been systematically analyzed before.\footnote{Note that several works in the physics literature on the BS equation study (special) solutions of that equation (see e.g. \cite{wick_54,cutkosky54,green57,tiktopoulos65,consenza65,obrien75} and references therein). However, these works seem to be of limited significance for the mathematical theory of \eqref{eq:inteq}, for the following reasons. Several of the works use a Wick rotation \cite{wick_54}, i.e., they replace Minkowski spacetime with (1+$d$)-dimensional Euclidean space. This greatly simplifies the equation, as Green's functions of the equation $(-\Delta_{1+d} + m^2)\varphi = 0$, where $\Delta_d$ denotes the Laplacian in $d$ dimensions, are much simpler than Green's functions of the Klein-Gordon equation $(\partial_t^2 - \Delta_d + m^2)\varphi = 0$. However, a transformation back to the original equation is not attempted (and may well not always be possible). In addition, some of these works set $\psi^{\rm free} = 0$. The same step here would lead to only the trivial solution $\psi=0$. Moreover, these works study an eigenvalue problem of the form $\psi = \lambda \widehat{L} \psi$, where $\widehat{L}$ is an integral operator, in the coupling constant $\lambda$. As the physical value of the coupling constant is fixed, the results are only indirectly relevant for the actual problem. Lastly, the question of suitable initial data (or a different classification of solutions) is left untouched. As a consequence, these works have nothing to say about how to understand the BS equation as a law for defining the time evolution of $\psi$.} Several points make the task challenging:
\begin{enumerate}
\item \textit{Non-trivial time dependence.} Because of the structure $G_1(x_1-x_1') G_2(x_2-x_2') K(x_1',x_2')$, integral transformations in the two time coordinates do not render the problem simple. In particular, the problem cannot easily be reduced to a time-independent one.
%
\item \textit{Infinite domains.} The domain of integration occurring in \eqref{eq:inteq} is $\mathbb{R}^{1,d}\times \mathbb{R}^{1,d}$. That means, in order for the integral to exist, the integrand needs to have a certain drop-off behavior at infinity, e.g., has to be in $L^1$. However, as the Green's functions of the relevant wave equations do not decay particularly fast, $\psi$ needs to provide this drop-off behavior. This is problematic, as the integral operator then maps out of $L^1$, and it becomes hard to even set up a suitable mathematical framework. (We shall illustrate this problem in detail in Sec. \ref{sec:problem}.)
\item \textit{Combined singularities of the Green's functions.} Green's functions of relativistic wave equations are typically singular; for example, they often contain Dirac $\delta$-functions on the light cone. If in addition $K$ is singular as well, which is the case for the physically natural choice in 1+3 dimensions, $K(x_1,x_2)=\delta((t_1-t_2)^2-|{\mathbf{x}}_1-{\mathbf{x}}_2|^2)$ \cite{direct_interaction_quantum}, then the structure of the combined singularities of $G_1, G_2$ and $K$ becomes particularly difficult to treat.
\end{enumerate}
We shall address these problems as follows. We focus on the case that $G_1,G_2$ are Green's functions of the Klein-Gordon equation. In order to formulate a tractable class of models, we set aside item 3 (returning to it later) and assume that $K$ is bounded. At least in $d=1$, this assumption also turns out to be physically realistic. With regard to item 2, we note that the infinite domain $\mathbb{R}^{1,d}\times \mathbb{R}^{1,d}$ is not the only physically reasonable possibility. Cosmologists take seriously the possibility that our universe had a Big Bang and thus is only semi-infinite in time. To keep the discussion simple, we implement this beginning in time in a rather crude way, cutting off Minkowski spacetime before $t=0$. (The case of curved cosmological spacetimes which actually feature a Big Bang singularity is studied in a separate paper \cite{int_eq_curved}.)
Focusing on the case of \emph{retarded} Green's functions (i.e., $G(x_1-x_1')$ that are nonzero only for $x_1'$ on or within the past light cone of $x_1$; a common choice in physics with reference to causality) then renders the time integrals finite, and leads to a Volterra-type structure of the equations. In fact, the whole domain of integration in \eqref{eq:inteq} becomes finite.
These simplifications then permit us to deal with item 1, the non-trivial time dependence.\\
The paper is structured as follows.
First, we formulate the integral equation in full detail on Minkowski spacetime for the relevant space dimensions $d=1,2,3$ (Sec. \ref{sec:inteq}). At the example of $d=1$, we illustrate the above-mentioned problem of infinite domains (Sec. \ref{sec:problem}). This motivates us to formulate the integral equation on semi-infinite (1+$d$)-dimensional Minkowski spacetime (Sec. \ref{sec:simplifiedmodel}).
Sec. \ref{sec:results} is dedicated to the question of the existence and uniqueness of solutions of the identified models. In Sec. \ref{sec:banachspaces} we point out which Banach spaces seem appropriate for the physical problem. Then, in Sec. \ref{sec:l2kernels}, we connect to standard results about multi-time Volterra integral equations by proving an existence and uniqueness theorem (Thm. \ref{thm:l2kernels}) for arbitrary $L^\infty L^2$-kernels ($L^\infty$ in the times and $L^2$ in the space variables). The proof is rather standard but serves to recall classical arguments and to prepare the method of the later proofs. Noting that the kernels in our models are not $L^\infty L^2$-kernels, we turn to the case of realistic Green's functions $G$ and bounded interaction kernels $K(x_1,x_2)$. (The total integral kernel is still singular.) Section \ref{sec:boundedkernels} contains our main results: existence and uniqueness theorems for the simplified models of Sec. \ref{sec:simplifiedmodel} (Thms. \ref{thm:1dboundedkernel}-\ref{thm:3dboundedkernel}). The proofs make crucial use of the fact that the retarded Green's functions of relativistic wave equations are only supported on (and possibly within) past light cones. In Sec. \ref{sec:singularkernels}, we finally extend our method to certain (special) singular interaction kernels which are simplified compared to the physically natural cases (Thms. \ref{thm:singularkernel3d}, \ref{thm:singularkernel2d}).
In Sec. \ref{sec:conclusion}, we conclude and put our results in perspective. Moreover, we point out open problems that may be of interest to researchers specializing in integral equations.
\section{The integral equation} \label{sec:inteq}
We now make explicit the physically relevant form of our integral equation~\eqref{eq:inteq} in $d=1,2,3$ space dimensions.
\subsection{Explicit form of the Green's functions and the integral equation} \label{sec:explicitform}
The integral equation \eqref{eq:inteq} becomes fully specified by the choices of $G_1, G_2$ and $K$. Here we focus on the case that $\psi$ is complex-valued [i.e., $k=1$ in \eqref{psidef}] and $G_1, G_2$ are retarded Green's functions of the Klein-Gordon (KG) equation, i.e., for $j=1,2$,
\begin{equation}
(\square_j + m_j^2)G_j^{\rm ret}(x_j) = \delta^{(1+d)}(x_j),
\end{equation}
and $G_j^{\rm ret}(t_j,{\mathbf{x}}_j) = 0$ for $t_j <0$. Here, $m_j \geq 0$ is the $j$-th particle's mass and $\delta^{(1+d)}$ denotes the (1+$d$)-dimensional delta function.
In dimensions $d=1,2,3$, the retarded Green's functions with mass $m=m_1$ or $m=m_2$ are given as follows (see \cite[chap. 7.4]{zauderer}, \cite[appendix E]{birula_qed}). We use the abbreviation $x^2 = (x^0)^2-|{\mathbf{x}}|^2$ for $x = (x^0,{\mathbf{x}}) \in \mathbb{R}^{1,d}$ (Minkowski square) and set the physical constants $c$ and $\hbar$ to unity.
\begin{itemize}
\item[] $d=1$: $G^{\rm ret}(x) = \frac{1}{2} H(x^0) H(x^2) J_0(m \sqrt{x^2})$,
\item[] $d=2$: $G^{\rm ret}(x) = \frac{1}{2\pi} H(x^0) H(x^2) \frac{\cos(m\sqrt{x^2})}{\sqrt{x^2}}$,
\item[] $d=3$: $G^{\rm ret}(x) = \frac{1}{2\pi} H(x^0) \delta(x^2) - \frac{m}{4\pi \sqrt{x^2}} H(x^0) H(x^2) J_1(m\sqrt{x^2})$.
\end{itemize}
Here, $H(s)=1_{s>0}$ denotes the Heaviside function and $J_0, J_1$ are Bessel functions of the first kind of order 0 and 1, respectively.
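For concreteness, the $d=1$ retarded Green's function can be evaluated numerically. The sketch below implements $J_0$ through its power series (adequate for moderate arguments) so as to stay self-contained; any standard special-function library could be substituted.

```python
import numpy as np

def j0(x, terms=40):
    """Bessel function of the first kind, order 0, via its power series
    J_0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2  (fine for moderate |x|)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    term = np.ones_like(x)
    for k in range(terms):
        if k > 0:
            term = term * (-(x / 2) ** 2) / k**2
        out = out + term
    return out

def G_ret_1d(t, z, m):
    """Retarded Green's function of the (1+1)-d Klein-Gordon equation:
    G(t,z) = (1/2) H(t) H(t^2 - z^2) J_0(m sqrt(t^2 - z^2))."""
    t, z = np.asarray(t, dtype=float), np.asarray(z, dtype=float)
    s2 = t * t - z * z                       # Minkowski square x^2
    support = (t > 0) & (s2 > 0)             # interior of the forward light cone
    return np.where(support, 0.5 * j0(m * np.sqrt(np.clip(s2, 0.0, None))), 0.0)
```

The support factor makes the retarded structure explicit: $G^{\rm ret}(x-x')$ vanishes unless $x'$ lies in the past light cone of $x$, which is precisely the property that renders the time integrals in the retarded equations finite.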
As detailed in \cite{direct_interaction_quantum}, the physically natural choice of the interaction kernel is $K(x_1,x_2) = G^{\rm sym}(x_1-x_2)$, the time-symmetric Green's function of the wave equation (i.e., the massless KG equation). We have:
\begin{itemize}
\item[] $d=1$: $G^{\rm sym}(x) = \frac{1}{2} H(x^2)$,
\item[] $d=2$: $G^{\rm sym}(x) = \frac{1}{2\pi} \frac{H(x^2)}{\sqrt{x^2}}$,
\item[] $d=3$: $G^{\rm sym}(x) = \frac{1}{2\pi} \delta(x^2)$.
\end{itemize}
With these choices, the integral equation \eqref{eq:inteq} in the various space dimensions becomes:
\paragraph{d=1:}
\begin{align}
&\psi(t_1,z_1,t_2,z_2) = \psi^{\rm free}(t_1,z_1,t_2,z_2) + \frac{\lambda}{8} \int_{-\infty}^{t_1} dt_1' \int dz_1' \int_{-\infty}^{t_2} dt_2' \int dz_2'~ H(t_1-t_1'-|z_1-z_1'|)\nonumber\\
&~\times~ J_0\Big(m_1\sqrt{(t_1-t_1')^2-|z_1-z_1'|^2}\Big) \, H(t_2-t_2'-|z_2-z_2'|) \, J_0\Big(m_2\sqrt{(t_2-t_2')^2-|z_2-z_2'|^2}\Big)\nonumber\\
&~\times~ H((t_1'-t_2')^2-|z_1'-z_2'|^2) \, \psi(t_1',z_1',t_2',z_2').
\label{eq:inteq1d}
\end{align}
\paragraph{d=2:}
\begin{align}
&\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(2\pi)^3} \int_{-\infty}^{t_1} dt_1' \int d^2 {\mathbf{x}}_1' \int_{-\infty}^{t_2} dt_2' \int d^2 {\mathbf{x}}_2'~\nonumber\\
& ~\times~ H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)\, \frac{\cos\big(m_1\sqrt{(t_1-t_1')^2-|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}\big)}{\sqrt{(t_1-t_1')^2-|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \,
H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|) \, \nonumber\\
&~\times~ \frac{\cos\big(m_2\sqrt{(t_2-t_2')^2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}\big)}{\sqrt{(t_2-t_2')^2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}}\, \frac{H((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2) }{\sqrt{(t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2}}\, \psi(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2').
\label{eq:inteq2d}
\end{align}
\paragraph{d=3:} For simplicity, we consider only the massless case ($m_1=m_2=0$). Then the most singular terms of the Green's functions are still included, and the equation takes the form
\begin{align}
&\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{2\lambda}{(4\pi)^3} \int_{-\infty}^{t_1} dt_1' \int d^3 {\mathbf{x}}_1' \int_{-\infty}^{t_2} dt_2' \int d^3 {\mathbf{x}}_2'~ \nonumber\\
& ~\times~\frac{\delta(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|}\, \frac{\delta(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \, \delta((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2) \, \psi(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2').
\label{eq:inteq3d}
\end{align}
The form of the equations varies considerably between the space dimensions, both with respect to the domain of integration and with respect to the singularities that occur. In $d=1$ and $d=2$, the domain of integration consists of the time-like configurations, i.e., the set
\begin{equation}
\mathcal{T} = \{ (t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) \in \mathbb{R}^{1,d} \times \mathbb{R}^{1,d} : |t_1-t_2| > |{\mathbf{x}}_1-{\mathbf{x}}_2|\}.
\end{equation}
In $d=3$, however, because of the delta function $\delta((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2)$ in the interaction kernel, the integral in \eqref{eq:inteq3d} runs only along the light-like configurations,
\begin{equation}
\mathscr{L} = \{ (t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) \in \mathbb{R}^{1,d} \times \mathbb{R}^{1,d} : |t_1-t_2| = |{\mathbf{x}}_1-{\mathbf{x}}_2|\}.
\end{equation}
As noted in \cite{direct_interaction_quantum}, \eqref{eq:inteq3d} can be solved on $\mathscr{L}$ autonomously. For a configuration outside of $\mathscr{L}$, it can then be used as a formula to calculate the solution.
Concerning the singularities, there are only jump singularities in $d=1$ and the whole integral kernel remains bounded. In $d=2$, there are three connected singularities of the form $1/\sqrt{t^2-{\mathbf{x}}^2}$. Finally, in $d=3$, there are singularities of the form $1/|{\mathbf{x}}|$ and also $\delta$-functions which require some care to be defined rigorously, and which may lead to further singularities because of the weight factor associated with the roots of their arguments.
These connected singularities may be quite hard to treat in $d=2, 3$. However, because in all cases the domains extend infinitely in the time direction, there is a more basic problem we have to deal with first. We shall illustrate this in the case $d=1$ where the singularities are unproblematic.
\subsection{Difficulties with infinite time integrations} \label{sec:problem}
Consider the integral equation \eqref{eq:inteq1d} in $d=1$. The well-posedness of the problem at the very least requires the integral to exist. As $|J_0| \leq 1$ and $J_0(0) = 1$, the existence of the integral in the massless case implies the existence for every $m_1,m_2>0$. So we focus on $m_1=m_2=0$. Then we obtain the following condition on $\psi$:
\begin{multline}
(\widehat L |\psi|)(t_1,z_1,t_2,z_2):= \int_{-\infty}^{t_1} dt_1' \int dz_1' \int_{-\infty}^{t_2} dt_2' \int dz_2'~ H(t_1-t_1'-|z_1-z_1'|) \\[2mm]
\times~ H(t_2-t_2'-|z_2-z_2'|) \, H((t_1'-t_2')^2-|z_1'-z_2'|^2)\, |\psi|(t_1',z_1',t_2',z_2') < \infty
\label{eq:integral1d}
\end{multline}
for all $t_1,t_2, z_1,z_2$. This means that $\psi$ needs to have certain integrability properties related to its behavior for $t_1,t_2 \rightarrow \pm \infty$. As only configurations $(t_1',z_1',t_2',z_2') \in \mathcal{T}$ contribute to the integral, a natural possibility is to demand $\psi \in L^1(\mathcal{T})$. Then the integral \eqref{eq:integral1d} is finite.
However, in order to formulate the equation \eqref{eq:inteq1d} on $\psi \in L^1(\mathcal{T})$, we also need that the integral operator $\widehat{L}$ occurring in the equation maps from $L^1(\mathcal{T})$ to $L^1(\mathcal{T})$. This yields the further condition:
\begin{align}
&\hspace{-5mm}\int dt_1 \int_{-\infty}^{t_1} dt_1' \int dt_2 \int_{-\infty}^{t_2} dt_2' \int dz_1\, dz_1' \, dz_2 \, dz_2' ~ H(t_1-t_1'-|z_1-z_1'|) \, H(t_2-t_2'-|z_2-z_2'|)\nonumber\\
\times~ &H((t_1-t_2)^2-|z_1-z_2|^2)\, H((t_1'-t_2')^2-|z_1'-z_2'|^2)\, |\psi|(t_1',z_1',t_2',z_2') < \infty
\label{eq:doubleintegral1d}
\end{align}
for all $\psi \in L^1(\mathcal{T})$.
The point now is that the integral \eqref{eq:doubleintegral1d} simply diverges. This can, for example, be seen from the fact that arbitrarily large $t_1,t_2$ contribute to the integral.
Thus, it seems difficult to even start the mathematical analysis of \eqref{eq:inteq1d}. A similar problem also occurs in the massive case and in dimensions $d=2,3$ as well.
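To make the divergence concrete, here is a minimal toy computation (our own illustration): we drop the space integrals and light-cone factors and keep only the structure of outer time integrals over inner retarded time integrals. Even for a perfectly integrable $\psi$, the partial outer integral over $[0,R]^2$ grows without bound. The choice of Gaussian $\psi$ and the values of $R$ are arbitrary.

```python
import math

# Times-only toy version of the double integral: take
# psi(t1', t2') = exp(-t1'^2 - t2'^2), which is integrable.  The inner
# integral I(t1,t2) = int_{-inf}^{t1} int_{-inf}^{t2} psi tends to
# ||psi||_1 = pi > 0 as t1, t2 -> infinity, so integrating I over all
# (t1, t2) cannot give a finite result.
def inner(t1, t2):
    Phi = lambda t: 0.5 * (1.0 + math.erf(t))
    return math.pi * Phi(t1) * Phi(t2)

def outer_partial(R, n=160):
    """Midpoint approximation of int_0^R int_0^R I(t1,t2) dt1 dt2."""
    h = R / n
    return h * h * sum(inner((i + 0.5) * h, (j + 0.5) * h)
                       for i in range(n) for j in range(n))

vals = [outer_partial(R) for R in (2.0, 4.0, 8.0)]
print(vals)  # grows roughly like pi * R^2: no finite limit as R increases
```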
While we do not claim that it is generally impossible to analyze the integral equations on domains which are infinite in the times, we have not found any other way to deal with the problem besides modifying the equation. The root of the problem obviously lies in the fact that the domain of integration is infinite in time. In the retarded case, the domain extends to $-\infty$ instead of stopping at some finite value. The easiest remedy is to assume that spacetime does not extend back to $t\to -\infty$ but had an initial time which thus becomes a lower bound of the time integrations. This renders the integral \eqref{eq:integral1d} finite without demanding some kind of drop-off behavior of $\psi$ in time (e.g., for $\psi \in L^\infty(\mathbb{R}^4)$). Of course, the cutoff requires a justification from physics, as it breaks important symmetries such as time translation invariance (and also Lorentz invariance).
Fortunately, there is such a physical justification. Cosmology suggests that our universe may well have a Big Bang singularity, i.e., a beginning in time. Implementing the Big Bang properly requires formulating the integral equation \eqref{eq:inteq} on curved spacetimes. This is a non-trivial task by itself and is the topic of a separate paper \cite{int_eq_curved}. Among other things, one needs to explicitly determine the Green's functions of the appropriate quantum mechanical wave equations on the respective curved spacetimes. Here we set aside these complications, content ourselves with the fact that there is a physical reason for a beginning in time, and simply cut off the time integrals in \eqref{eq:inteq} at $t_1=t_2=0$.
\subsection{Simplified models} \label{sec:simplifiedmodel}
The cutoff at $t_1=t_2=0$ gets rid of the problem of infinite time domains. However, there is another problem remaining (for $d=2,3$): the connected singularities of $G_1,G_2$ and $K$. We shall deal with this problem as follows. The Green's functions $G_1,G_2$ cannot be modified as they are determined by the type of quantum mechanical particle under consideration. The interaction kernel $K$ is more arbitrary. There is a most natural choice for physics, $K(x_1,x_2) = G^{\rm sym}(x_1-x_2)$, but other choices just lead to a different kind of interaction. In particular, we can approximate $G^{\rm sym}(x_1-x_2)$ by a regular function while respecting the physical symmetries. For example, for $d=3$, $G^{\rm sym}(x_1-x_2) = \frac{1}{2\pi} \delta((x_1-x_2)^2)$ can be approximated by
\begin{equation}
K(x_1,x_2) = \frac{1}{ (2\pi)^{3/2} \sigma} \exp\left(- \frac{|(x_1-x_2)^2|^2}{2 \sigma^2} \right)
\end{equation}
for some small constant $\sigma >0$.
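The smearing idea can be checked numerically. As a one-variable stand-in for the Minkowski-square argument, the normalized Gaussian $g_\sigma$ acts on a test function like $\delta$ as $\sigma \to 0$; the test function and the values of $\sigma$ below are arbitrary choices for illustration.

```python
import math

def gauss(s, sigma):
    """Normalized Gaussian approximating delta(s) as sigma -> 0."""
    return math.exp(-s * s / (2 * sigma * sigma)) / (math.sqrt(2 * math.pi) * sigma)

def smear(f, sigma, half_width=1.0, n=20001):
    """Midpoint rule for int_{-w}^{w} g_sigma(s) f(s) ds."""
    w = half_width
    ds = 2 * w / n
    return ds * sum(gauss(-w + (i + 0.5) * ds, sigma) * f(-w + (i + 0.5) * ds)
                    for i in range(n))

f = lambda s: math.cos(s)   # smooth test function with f(0) = 1
errors = [abs(smear(f, sig) - f(0)) for sig in (0.3, 0.1, 0.03)]
print(errors)  # decreasing: the smeared kernel acts more and more like delta
```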
In the following, we shall therefore study models with arbitrary but regular (e.g., bounded or bounded and smooth) interaction kernels $K$. We return to the case of singular interaction kernels in Sec. \ref{sec:singularkernels}.
The simplified models we shall study are given as follows.
\paragraph{d=1:} Here, the natural interaction kernel is bounded. We nevertheless allow for arbitrary bounded $K(x_1,x_2):$
\begin{align}
&\psi(t_1,z_1,t_2,z_2) = \psi^{\rm free}(t_1,z_1,t_2,z_2) + \frac{\lambda}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int dz_1' \, dz_2'~ H(t_1-t_1'-|z_1-z_1'|)\nonumber\\
&~\times~ J_0\Big(m_1\sqrt{(t_1-t_1')^2-|z_1-z_1'|^2}\Big) \, H(t_2-t_2'-|z_2-z_2'|) \, J_0\Big(m_2\sqrt{(t_2-t_2')^2-|z_2-z_2'|^2}\Big)\nonumber\\[1mm]
&~\times~ K(t_1',z_1',t_2',z_2') \, \psi(t_1',z_1',t_2',z_2').
\label{eq:inteq1dsimplified}
\end{align}
\paragraph{d=2:}
\begin{align}
&\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(2\pi)^2} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1'\, d^2 {\mathbf{x}}_2'~\nonumber\\
& ~\times~ H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)\, \frac{\cos\big(m_1\sqrt{(t_1-t_1')^2-|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}\big)}{\sqrt{(t_1-t_1')^2-|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \,
H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|) \, \nonumber\\
&~\times~ \frac{\cos\big(m_2\sqrt{(t_2-t_2')^2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}\big)}{\sqrt{(t_2-t_2')^2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}}\, K(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2') \, \psi(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2').
\label{eq:inteq2dsimplified}
\end{align}
\paragraph{d=3:} We again consider the case $m_1=m_2=0$ here and let $K(x_1,x_2)$ be smooth and bounded. If $\psi$ is a test function, this permits us to perform the time integrals by using the $\delta$-functions. We are left with:
\begin{multline}
\hspace{-3mm}\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(4\pi)^2} \int d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~ \frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|}\, \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \\[2mm]
\times~ K(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2') \, \psi(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2').
\label{eq:inteq3dsimplified}
\end{multline}
The Heaviside functions result from the lower bounds $t_1',t_2' \geq 0$ in the modified integral equation \eqref{eq:inteq3d}. In order to avoid complications with distribution-valued kernels, we directly study \eqref{eq:inteq3dsimplified}.
\paragraph{Remarks.}
\begin{enumerate}
\item For all three equations \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified}, the domain of integration is now effectively finite, as it is finite in the time directions and as the Green's functions $G_1^{\rm ret}$, $G_2^{\rm ret}$ are supported only along (and possibly inside) the past light cones ${\rm PLC}(x_i) = \{ y \in \mathbb{R}^{1,d} : |x_i^0-y^0| > |{\mathbf{x}}_i-{\mathbf{y}}| ~{\rm and}~ y^0 < x^0_i\}$.
%
\item For $d=1$ and $d=2$, the time integrals run from $0$ to $t_i$, $i=1,2$. That means that \eqref{eq:inteq1dsimplified} and \eqref{eq:inteq2dsimplified} have a multi-dimensional Volterra-type structure. We shall therefore call these equations \textit{multi-time Volterra integral equations} (MTVE). For $d=3$, the structure is somewhat different. However, the radial integration variables $|{\mathbf{x}}_i-{\mathbf{x}}_i'|$ can only take values between $0$ and $t_i$. This will also allow us to employ methods for Volterra integral equations for $d=3$.
%
\item From the integral equations \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} we can read off that $\psi$ satisfies the initial condition $\psi(0,{\mathbf{x}}_1,0,{\mathbf{x}}_2) = \psi^{\rm free}(0,{\mathbf{x}}_1,0,{\mathbf{x}}_2)$. In other words, $\psi$ is subject to a \textit{Cauchy problem ``at the Big Bang.''} As $\psi^{\rm free}$ is a solution of the free multi-time equations, here $(\square_k + m^2_k) \psi = 0,~k=1,2$, it is itself uniquely determined by Cauchy data. Thus, if we can prove the existence and uniqueness of solutions for arbitrary $\psi^{\rm free}$, we also obtain a classification of the solutions by Cauchy data at $t_1 = 0 = t_2$. For a multi-time integral equation \eqref{eq:inteq} on a domain which has no lower bounds in the times, the relation between $\psi^{\rm free}$ and initial values for $\psi$ is not as clear (see the discussion in \cite{direct_interaction_quantum}).
\end{enumerate}
\section{Results} \label{sec:results}
In the following, we prove a number of existence and uniqueness theorems for MTVEs. In Sec. \ref{sec:banachspaces} we discuss which Banach spaces seem appropriate for physics. In Sec. \ref{sec:l2kernels}, we pick up work in the literature on multi-dimensional Volterra integral equations and prove an existence and uniqueness result for general $L^\infty_{t_1,t_2} L^2_{{\mathbf{x}}_1,{\mathbf{x}}_2}$ kernels. We claim no originality for this result; however, the proof is useful to connect with classical results and to introduce the strategy of the following proofs. In Sec. \ref{sec:boundedkernels}, we turn to the existence and uniqueness proofs for Eqs. \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} with bounded interaction kernels. These theorems constitute our main results; they are not special cases of the general theorem in Sec. \ref{sec:l2kernels}. In Sec. \ref{sec:singularkernels}, we finally return to the case of singular interaction kernels, and show that the method developed in Sec. \ref{sec:boundedkernels} is sufficient to at least treat certain singular interaction kernels.
\subsection{Banach space} \label{sec:banachspaces}
We shall consider the integral equations \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} as linear operator equations on a suitable Banach space $\mathscr{B}$:
\begin{equation}
\psi = \psi^{\rm free} + \widehat{L}\psi,
\end{equation}
where $\widehat{L}$ is the integral operator occurring in the respective equation.
Which space should $\mathscr{B}$ be? As $\psi$ is a quantum-mechanical wave function, there are some expectations about $\mathscr{B}$. In non-relativistic quantum mechanics, the single-time wave function $\varphi(t,{\mathbf{x}}_1,{\mathbf{x}}_2)$ represents a probability amplitude; hence, it has to be square integrable in the space variables ${\mathbf{x}}_1,{\mathbf{x}}_2$ for every fixed time $t$, and the $L^2$-norm is constant in time. This suggests choosing $\mathscr{B}$ to be the following Bochner space:
\begin{equation}
\mathscr{B}_d := L^\infty \big([0,T]^2_{(t_1,t_2)}, L^2(\mathbb{R}^{2d}_{({\mathbf{x}}_1,{\mathbf{x}}_2)}) \big),
\label{eq:banach}
\end{equation}
where $T>0$ is an arbitrary constant, with the norm of $\psi \in \mathscr{B}_d$ given by
\begin{equation}
\| \psi\|_{\mathscr{B}_d} = \esssup_{t_1,t_2 \in [0,T]} \| \psi(t_1,\cdot,t_2,\cdot) \|_{L^2} \:.
\label{eq:norm}
\end{equation}
In fact, for solutions $\psi$ of \eqref{eq:inteq} or of the free Klein-Gordon equation, $|\psi|^2$ cannot be expected to represent a probability density, nor can $\| \psi(t_1,\cdot,t_2,\cdot) \|_{L^2}$ be expected to be independent of $t_1$ and $t_2$. Still, the choice \eqref{eq:banach} will turn out to be useful for our proofs.
\subsection{General $L^\infty_{t_1,t_2} L^2_{{\mathbf{x}}_1,{\mathbf{x}}_2}$-kernels} \label{sec:l2kernels}
Multi-dimensional Volterra integral equations have been treated in the literature before, see e.g. \cite{beesack_1984,beesack_1985}. These references cover equations of the form
\begin{equation}
f({\mathbf{t}}) = f_0({\mathbf{t}}) + \int_0^{{\mathbf{t}}} d {\mathbf{t}}'~L({\mathbf{t}},{\mathbf{t}}')f({\mathbf{t}}'),
\label{eq:beesack}
\end{equation}
where ${\mathbf{t}} = (t_1,...,t_N)$,
\begin{equation}
\int_0^{{\mathbf{t}}} d{\mathbf{t}}' = \int_0^{t_1}dt_1' \cdots \int_0^{t_N} dt_N'\,,
\end{equation}
and the kernel $L$ is assumed to be either bounded or square integrable.
Our MTVEs \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} differ from \eqref{eq:beesack} in the following aspects.
\begin{enumerate}
\item In addition to the time integrals, they also include space integrals. Space and time directions are distinguished by the form of the equations. Most importantly, the integral from 0 to $t$, which characterizes Volterra integral equations, only appears in the time directions.
\item The kernels in Eqs.~\eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} are in general not square integrable (see the Remark at the end of this section for details), and the ones of Eqs.~\eqref{eq:inteq2dsimplified} and \eqref{eq:inteq3dsimplified} are in general not bounded either. Likewise, the specific kernels of Eqs.~\eqref{eq:inteq1d}-\eqref{eq:inteq3d} are not square integrable, and those of \eqref{eq:inteq2d} and \eqref{eq:inteq3d} are not bounded.
\end{enumerate}
The first point can easily be approached using classical methods; we shall prove a corresponding theorem below (Thm. \ref{thm:l2kernels}). However, the second item shows that this is not enough to cover even the simplified physically relevant cases. It turns out that we need to utilize the more specific structure of the kernels of Eqs.~\eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified}. This will be done in Sec.~\ref{sec:boundedkernels}.\\
So let us describe how square-integrable kernels can be treated.
In the remainder of the section, we study the MTVE
\begin{equation}
f({\mathbf{t}},{\mathbf{x}}) = f_0({\mathbf{t}},{\mathbf{x}}) + \int_0^{{\mathbf{t}}} d {\mathbf{t}}' \int d{\mathbf{x}}' ~L({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') f({\mathbf{t}}',{\mathbf{x}}'),
\label{eq:mtve}
\end{equation}
where ${\mathbf{t}} \in \mathbb{R}^N$, ${\mathbf{x}} \in \mathbb{R}^M$. The integral equations \eqref{eq:inteq1dsimplified}, \eqref{eq:inteq2dsimplified} in $d=1$ and $d=2$ correspond to this structure for $N=2$ and $M=2$ or $M=4$ with special (but not square-integrable) integral kernels. The integral equation \eqref{eq:inteq3dsimplified} in $d=3$ is different because of the time shifts occurring in the integral.
\begin{theorem} \label{thm:l2kernels}
Let $T>0$, consider the Banach space $\mathscr{B} = L^\infty \big([0,T]^N_{{\mathbf{t}}}, L^2(\mathbb{R}^M_{{\mathbf{x}}}) \big)$, and let
\begin{equation}
\| L \|^2 = \esssup_{{\mathbf{t}}, {\mathbf{t}}' \in [0,T]^N} \int d {\mathbf{x}} \, d{\mathbf{x}}'~ |L|^2({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') < \infty.
\label{eq:norml}
\end{equation}
Then, for every $f_0 \in \mathscr{B}$, \eqref{eq:mtve} has a unique solution $f \in \mathscr{B}$.
\end{theorem}
The proof serves as a good illustration of the basic technique that shall also be used in Sec. \ref{sec:boundedkernels}. It is based on classical methods for Volterra integral equations (see \cite[chap. 3.1]{linz} and \cite{beesack_1984,beesack_1985}).
\begin{proof}
Let $f_0 \in \mathscr{B}$. The idea is to first show that the iterations
\begin{equation}
f_n({\mathbf{t}},{\mathbf{x}}) = f_0({\mathbf{t}},{\mathbf{x}}) + \int_0^{{\mathbf{t}}} d {\mathbf{t}}' \int d{\mathbf{x}}' ~L({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') f_{n-1}({\mathbf{t}}',{\mathbf{x}}'),~~~n\in \mathbb{N}
\end{equation}
converge. In a second step, we demonstrate that the limiting function is indeed a solution of \eqref{eq:mtve}. Third, we show that the solution is unique.
For convenience, we introduce
\begin{equation}
\varphi_n = f_n - f_{n-1},~~~n\in \mathbb{N}
\end{equation}
and $\varphi_0 = f_0$. We then have
\begin{equation}
f_n = \sum_{i=0}^n \varphi_i
\label{eq:varphiseries}
\end{equation}
and the functions $\varphi_n$ satisfy the equation
\begin{equation}
\varphi_n({\mathbf{t}},{\mathbf{x}}) = \int_0^{{\mathbf{t}}} d {\mathbf{t}}' \int d{\mathbf{x}}' ~L({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') \: \varphi_{n-1}({\mathbf{t}}',{\mathbf{x}}'),~~~n\in \mathbb{N}.
\label{eq:varphieq}
\end{equation}
Let $\widehat{L}$ denote the integral operator in \eqref{eq:mtve}. First of all, we show that $\widehat{L}$ is a bounded operator on $\mathscr{B}$. Then it follows in particular that $\varphi_n \in \mathscr{B} \, \forall n \in \mathbb{N}_0$.
So let $f \in \mathscr{B}$. Then $f$ is an equivalence class of functions. We choose an arbitrary representative of this class, a function on $[0,T]^N \times \mathbb{R}^M$ that is square-integrable in ${\mathbf{x}}$ for almost every ${\mathbf{t}}$, and again call this function simply $f$. Using the Cauchy-Schwarz inequality, we find for every ${\mathbf{t}}\in [0,T]^N$:
\begin{align}
\| (\widehat{L}f)({\mathbf{t}},\cdot) \|^2_{L^2} &\leq \int d {\mathbf{x}} \left[ \left( \int_0^{\mathbf{t}} d{\mathbf{t}}' \int d {\mathbf{x}}' ~|L|^2({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') \right) \left( \int_0^{\mathbf{t}} d{\mathbf{t}}' \int d {\mathbf{x}}' ~|f|^2({\mathbf{t}}',{\mathbf{x}}')\right) \right]\nonumber\\
&= \left( \int_0^{\mathbf{t}} d{\mathbf{t}}' \int d {\mathbf{x}} \, d{\mathbf{x}}'~|L|^2({\mathbf{t}},{\mathbf{x}};{\mathbf{t}}',{\mathbf{x}}') \right) \left( \int_0^{\mathbf{t}} d{\mathbf{t}}'~\| f({\mathbf{t}}',\cdot) \|^2_{L^2}\right)\nonumber\\
&\leq (t_1 \cdots t_N) \: \|L\|^2 \: \int_0^{\mathbf{t}} d{\mathbf{t}}'~\| f({\mathbf{t}}',\cdot) \|^2_{L^2},
\label{eq:auxiliaryineq}
\end{align}
where it remains open at first whether the $L^2$ norms are finite or infinite. However, since $\| f({\mathbf{t}}',\cdot) \|_{L^2}\leq \|f\|_{\mathscr{B}}$ for almost every ${\mathbf{t}}'$, we obtain that
\begin{align}
\| (\widehat{L}f)({\mathbf{t}},\cdot) \|^2_{L^2}
&\leq (t_1 \cdots t_N) \: \|L\|^2 \: \int_0^{\mathbf{t}} d{\mathbf{t}}'~\|f\|^2_{\mathscr{B}} \nonumber\\
&\leq (t_1 \cdots t_N)^2 \: \|L\|^2 \: \|f\|^2_{\mathscr{B}} \label{eq:aux2ineq}
\end{align}
for every ${\mathbf{t}}$, which is independent of the choice of representative of $f$. (In particular, the $L^2$ norm of $(\widehat{L}f)({\mathbf{t}},\cdot)$ turns out finite for every ${\mathbf{t}}$.)\\
Furthermore, the estimate \eqref{eq:aux2ineq} implies:
\begin{equation}
\| \widehat{L} f \|_{\mathscr{B}} = \esssup_{{\mathbf{t}} \in [0,T]^N} \| \widehat{L} f ({\mathbf{t}},\cdot)\|_{L^2} \leq T^{N} \: \| L \| \: \| f \|_{\mathscr{B}}.
\end{equation}
So $\widehat{L}$ is indeed a bounded operator on $\mathscr{B}$, and we have $\varphi_n \in \mathscr{B} \,\forall n \in \mathbb{N}_0$.
Next, we show that the sequence \eqref{eq:varphiseries} has a limit in $\mathscr{B}$. To this end, we now prove the following bound for the point-wise norms $\| \varphi_n({\mathbf{t}},\cdot) \|_{L^2}$ by induction over $n \in \mathbb{N}_0$:
\begin{equation}
\| \varphi_n({\mathbf{t}},\cdot) \|^2_{L^2} \leq \| f_0\|^2_{\mathscr{B}} \: \| L \|^{2n} \frac{(t_1 \cdots t_N)^{2n}}{(n!)^N}
\label{eq:varphiind}
\end{equation}
for every ${\mathbf{t}}$. For $n=0$, the claim is obvious as $\varphi_0 = f_0$. So let \eqref{eq:varphiind} hold for some $n \in \mathbb{N}_0$. Recall that $\varphi_{n+1} = \widehat{L} \varphi_n$. That means we can use \eqref{eq:auxiliaryineq} to estimate the norm $\| \varphi_{n+1}({\mathbf{t}},\cdot) \|^2_{L^2}$ in terms of $\| \varphi_n({\mathbf{t}},\cdot) \|^2_{L^2}$. Plugging \eqref{eq:varphiind} for $n$ into \eqref{eq:auxiliaryineq} yields:
\begin{align}
\| \varphi_{n+1}({\mathbf{t}},\cdot) \|^2_{L^2} &\leq (t_1 \cdots t_N) \: \|L\|^2 \: \int_0^{\mathbf{t}} d{\mathbf{t}}'~\| f_0\|^2_{\mathscr{B}} \: \| L \|^{2n} \frac{(t'_1 \cdots t'_N)^{2n}}{(n!)^N}\nonumber\\
&= \| f_0\|^2_{\mathscr{B}} \: \| L \|^{2(n+1)} \frac{(t_1 \cdots t_N)^{2(n+1)}}{(n!)^N (2n+1)^N}\nonumber\\
&\leq \| f_0\|^2_{\mathscr{B}} \: \| L \|^{2(n+1)} \frac{(t_1 \cdots t_N)^{2(n+1)}}{[(n+1)!]^N}.
\end{align}
This proves \eqref{eq:varphiind}. In particular, \eqref{eq:varphiind} implies:
\begin{equation}
\| \varphi_n \|_{\mathscr{B}} \leq \| f_0\|_{\mathscr{B}} \: \| L \|^n \frac{T^{n N}}{(n!)^{N/2}} \: .
\label{eq:normvarphin}
\end{equation}
This bound in turn shows that the series $\sum_{i=0}^\infty \| \varphi_i \|_{\mathscr{B}}$ converges. Hence, the iterations converge, i.e., $f_n \rightarrow f \in \mathscr{B}$ for $n\rightarrow \infty$. This completes the first step of the proof.
Next, we show that the series $f= \sum_{i=0}^\infty \varphi_i$ is indeed a solution of \eqref{eq:mtve}. Since $\widehat L$ is bounded, we have
\begin{equation}
\widehat{L} \sum_{i=0}^\infty \varphi_i = \sum_{i=0}^\infty \widehat{L}\varphi_i = \sum_{i=0}^\infty \varphi_{i+1} = \sum_{i=0}^\infty \varphi_i - \varphi_0,
\label{eq:sumintexchange}
\end{equation}
which is equivalent to $f = f_0 + \widehat{L} f$.
Finally, we turn to the uniqueness of the solution. To this end, let $\widetilde{f} \in \mathscr{B}$ be another solution of \eqref{eq:mtve}. Then the difference $g = f - \widetilde{f}$ satisfies the equation $g = \widehat{L} g$. This is similar to the equation $\varphi_n = \widehat{L} \varphi_{n-1}$. Thus, in the same way as we derived \eqref{eq:normvarphin}, we obtain the inequality
\begin{equation}
\| g\|_{\mathscr{B}} \leq \| g \|_{\mathscr{B}} \: \| L \|^n \frac{T^{n N}}{(n!)^{N/2}}~~\forall n \in \mathbb{N}.
\end{equation}
Thus, for $n \rightarrow \infty$ we find $\|g \| = 0$, hence $f = \widetilde{f}$. This shows the uniqueness of the solution, completing the proof. \hfill\ensuremath{\square}
\end{proof}
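The Picard iteration at the heart of this proof is easy to exercise numerically. The following toy computation (our own, with one time variable and no space variables) solves $f(t) = 1 + a\int_0^t f(t')\,dt'$, whose exact solution is $e^{at}$; the iterates converge at a factorial rate, in the spirit of \eqref{eq:normvarphin}. The grid size, the number of iterations, and the choice $a=T=1$ are arbitrary.

```python
import math

# Picard iteration for the one-time Volterra equation
#     f(t) = 1 + a * int_0^t f(t') dt',
# whose exact solution is f(t) = exp(a t).  Integrals via the trapezoid rule.
a, T, n = 1.0, 1.0, 2000
ts = [i * T / n for i in range(n + 1)]

def picard_step(f):
    """One application of the integral operator: g = 1 + a * int_0^t f."""
    g, acc = [1.0], 0.0
    for i in range(1, n + 1):
        acc += 0.5 * (f[i - 1] + f[i]) * (ts[i] - ts[i - 1])
        g.append(1.0 + a * acc)
    return g

f = [1.0] * (n + 1)      # f_0: the "free" term
for _ in range(25):      # the iteration error shrinks like T^k / k!
    f = picard_step(f)
error = abs(f[-1] - math.exp(a * T))
print(error)  # only the trapezoid discretization error remains
```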
\paragraph{Remarks.}
\begin{enumerate}
\item The kernels in the integral equations \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} may [and those of \eqref{eq:inteq1d}-\eqref{eq:inteq3d} do] violate the square-integrability condition \eqref{eq:norml} for the following reason: if $L$ is invariant under translations as in $({\mathbf{x}},{\mathbf{x}}') \mapsto ({\mathbf{x}}+\mathbf{a},{\mathbf{x}}'+\mathbf{a})$ for every $\mathbf{a}\in\mathbb{R}^d$, then $L({\mathbf{t}},\cdot,{\mathbf{t}}',\cdot)$, if nonzero, cannot be square-integrable in $\mathbb{R}^{2d}$ as a function of ${\mathbf{x}}$ and ${\mathbf{x}}'$ for any ${\mathbf{t}}$ and ${\mathbf{t}}'$.
\item Since we have obtained the existence of a unique solution up to time $T$ for arbitrary $T>0$, it follows that a unique solution exists for all times, i.e., on $[0,\infty)^N \times \mathbb{R}^M$. In fact, the estimate \eqref{eq:varphiind} shows that the solution can at most grow exponentially according to
\begin{equation}
\|f({\mathbf{t}},\cdot)\|_{L^2} \leq \|f_0\|_{\mathscr{B}} \: e^{\|L\| t_1 \cdots t_N} \,.
\end{equation}
Thus, $e^{-\|L\| t_1\cdots t_N} f \in L^\infty([0,\infty)^N, L^2(\mathbb{R}^M))$ whenever $f_0$ lies in this space.
\end{enumerate}
\subsection{Bounded interaction kernels} \label{sec:boundedkernels}
In this section, we provide existence and uniqueness results for the simplified equations \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified} with bounded interaction kernels $K$ (the overall kernel $L$ is still singular). The proofs follow the same strategy as before; however, they make essential use of the fact that the equations only contain integrals along (and inside of) past light cones. This is the feature that allows us to deal also with kernels which do not satisfy \eqref{eq:norml}.
\begin{theorem}[$d=1$] \label{thm:1dboundedkernel}
For every $m_1,m_2 \geq 0$, every
\[
\psi^{\rm free} \in \mathscr{B}_1 = L^\infty \big([0,T]^2_{(t_1,t_2)}, L^2(\mathbb{R}^{2}_{(z_1,z_2)}) \big),
\]
and every essentially bounded $K : \mathbb{R}^4 \rightarrow \mathbb{C}$, the integral equation \eqref{eq:inteq1dsimplified} has a unique solution $\psi \in \mathscr{B}_1$.
\end{theorem}
The theorem evidently covers the physically natural interaction kernel in Eq. \eqref{eq:inteq1d}.
For the proof (and the following ones), we follow the strategy of the proof of Thm. \ref{thm:l2kernels}. We again define the functions $f_n$ as the $n$-th iteration (starting from $f_0 = \psi^{\rm free}$) and $\varphi_n$ as the difference $f_n-f_{n-1}$ with $\varphi_0 = f_0$. The integral operator in \eqref{eq:inteq1dsimplified} is denoted by $\widehat{L}$. We describe only the essential new steps in the proof (in particular how to obtain an appropriate estimate for $\| \varphi_n \|$); the rest then follows as before.
\begin{proof}
We first show that $\widehat{L}$ maps $\mathscr{B}_1$ to $\mathscr{B}_1$. Using \eqref{eq:inteq1dsimplified}, the Cauchy-Schwarz inequality, and that $|J_0| \leq 1$, we find:
\begin{align}
&\| (\widehat{L}\psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \frac{\lambda^2}{16} \int dz_1 \, dz_2 \biggl[ \biggl( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d z_1' \, dz_2' ~H(t_1-t_1'-|z_1-z_1'|) \nonumber\\
& \times H(t_2-t_2'-|z_2-z_2'|) \, |K|^2(t_1',z_1',t_2',z_2') \biggr)\nonumber\\
& \times \left. \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d z_1' \, dz_2' \, H(t_1-t_1'-|z_1-z_1'|) \, H(t_2-t_2'-|z_2-z_2'|) \, |\psi|^2(t_1',z_1',t_2',z_2') \right)\right]\nonumber\\
&\leq \frac{\lambda^2}{16} \int dz_1 \, dz_2 \left[ \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~ 2(t_1-t_1') \, 2(t_2-t_2') \, \| K\|_\infty^2 \right) \right.\nonumber\\
& \times \left. \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d z_1' \, dz_2' \, H(t_1-t_1'-|z_1-z_1'|) \, H(t_2-t_2'-|z_2-z_2'|) \, |\psi|^2(t_1',z_1',t_2',z_2') \right)\right]\nonumber\\
&= \frac{\lambda^2}{16} \, \| K\|_\infty^2 \, (t_1 t_2)^2 \int dz_1 \, dz_2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d z_1' \, dz_2'~ H(t_1-t_1'-|z_1-z_1'|) \nonumber\\
&~~~~~~~~~~~~~~~~~~~~\times H(t_2-t_2'-|z_2-z_2'|) |\psi|^2(t_1',z_1',t_2',z_2') \: .
\end{align}
Exchanging the order of integrations and performing the $z_1,z_2$-integrations first leads to:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \nonumber\\
&\leq \frac{\lambda^2}{16} \| K\|_\infty^2 \, (t_1 t_2)^2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d z_1' \, dz_2'~ 2(t_1-t_1') \, 2(t_2-t_2')
\, |\psi|^2(t_1',z_1',t_2',z_2')\nonumber\\
&= \frac{\lambda^2}{4}\, \| K\|_\infty^2 \, (t_1 t_2)^2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~ (t_1-t_1') \, (t_2-t_2') \, \|\psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}\,.
\label{eq:hilfsformelind1dl2}
\end{align}
Therefore, replacing $\|\psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}$ with $\|\psi \|_{\mathscr{B}_1}^2 = \esssup_{t'_1, t'_2 \in [0,T]} \|\psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}$, we find:
\begin{equation}
\| \widehat{L} \psi\|^2_{\mathscr{B}_1} \leq \frac{\lambda^2}{16}\esssup_{t_1, t_2 \in [0,T]} \| K\|_\infty^2 \, (t_1 t_2)^4 \, \|\psi\|_{\mathscr{B}_1}^2 = \frac{\lambda^2}{16} \, \| K\|_\infty^2 \, T^8 \, \|\psi\|_{\mathscr{B}_1}^2.
\end{equation}
This shows that $\widehat{L} : \mathscr{B}_1 \to \mathscr{B}_1$ is bounded. In particular, $\varphi_n \in \mathscr{B}_1$ for all $n \in \mathbb{N}_0$. Next, we prove by induction that\footnote{The estimate is not optimal but sufficient.}
\begin{equation}
\| \varphi_n(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \left( \frac{\lambda^2}{4}\right)^{n} \: \| \psi^{\rm free}\|_{\mathscr{B}_1}^2 \: \| K \|_\infty^{2n} \: \frac{(t_1 t_2)^{4n}}{[(2n)!]^2} \,.
\label{eq:inf1dl2ind}
\end{equation}
For $n=0$ this is obviously true. So let \eqref{eq:inf1dl2ind} hold for some $n \in \mathbb{N}_0$. Then, by plugging \eqref{eq:inf1dl2ind} into \eqref{eq:hilfsformelind1dl2} we obtain that
\begin{align}
& \| \varphi_{n+1}(t_1,\cdot,t_2,\cdot) \|^2_{L^2}\nonumber\\
& \leq \left( \frac{\lambda^2}{4}\right)^{n+1} \| \psi^{\rm free}\|_{\mathscr{B}_1}^2 \: \| K\|_\infty^{2(n+1)} (t_1 t_2)^2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~ (t_1-t_1')(t_2-t_2') \frac{(t_1' t_2')^{4n}}{[(2n)!]^2}\nonumber\\
&= \left( \frac{\lambda^2}{4}\right)^{n+1} \| \psi^{\rm free}\|_{\mathscr{B}_1}^2 \: \| K\|_\infty^{2(n+1)} \frac{(t_1 t_2)^{4(n+1)}}{[(2n)!]^2} \, \frac{1}{[(4n+1)(4n+2)]^2}\nonumber\\
& \leq \left( \frac{\lambda^2}{4}\right)^{n+1} \| \psi^{\rm free}\|_{\mathscr{B}_1}^2 \: \| K\|_\infty^{2(n+1)} \frac{(t_1 t_2)^{4(n+1)}}{[(2(n+1))!]^2}\:.
\end{align}
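The two final steps rest on the elementary computation
\begin{equation*}
\int_0^{t} dt' ~ (t-t') \, (t')^{4n} = \frac{t^{4n+2}}{(4n+1)(4n+2)} \: ,
\end{equation*}
applied in both time variables, together with the bound $(2n)! \, (4n+1)(4n+2) \geq (2n)! \, (2n+1)(2n+2) = (2(n+1))!$. The same two steps recur in the proofs for $d=2$ and $d=3$.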
This proves \eqref{eq:inf1dl2ind}. In particular, \eqref{eq:inf1dl2ind} implies:
\begin{equation}
\| \varphi_n \|_{\mathscr{B}_1} \leq \left(\frac{|\lambda|}{2}\right)^n \| \psi^{\rm free}\|_{\mathscr{B}_1} \: \| K \|_\infty^n \, \frac{T^{4n}}{(2n)!} \: .
\end{equation}
Hence, $\sum_i \| \varphi_i \|_{\mathscr{B}_1} < \infty$. Going through the analogous steps as in the proof of Thm. \ref{thm:l2kernels}, we find that $\sum_i \varphi_i \in \mathscr{B}_1$ yields the unique solution of \eqref{eq:inteq1dsimplified}. \hfill\ensuremath{\square}
\end{proof}
\begin{theorem}[$d=2$] \label{thm:2dboundedkernel}
For every $m_1,m_2 \geq 0$, every essentially bounded $K : \mathbb{R}^6 \rightarrow \mathbb{C}$ and every $\psi^{\rm free} \in \mathscr{B}_2= L^\infty \big([0,T]^2_{(t_1,t_2)}, L^2(\mathbb{R}^{4}_{({\mathbf{x}}_1,{\mathbf{x}}_2)}) \big)$, \eqref{eq:inteq2dsimplified} has a unique solution $\psi \in \mathscr{B}_2$.
\end{theorem}
The proof again uses the previous ideas and notation.
\begin{proof}
We first show that the integral operator $\widehat{L}$ in \eqref{eq:inteq2dsimplified} is a bounded operator on $\mathscr{B}_2$. Using \eqref{eq:inteq2dsimplified} and the Cauchy-Schwarz inequality, we find for every $\psi \in \mathscr{B}_2$:
\begin{align}
& \| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2}\nonumber\\
& \leq \frac{\lambda^2}{(2\pi)^4} \int d^2 {\mathbf{x}}_1 \, d^2 {\mathbf{x}}_2 \left[ \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2'~ \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{\sqrt{(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \right. \right.\nonumber\\
& \left. \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{\sqrt{(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}} |K|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2') \right) \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2'~ \right.\nonumber\\
& \left. \left. \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{\sqrt{(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{\sqrt{(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}} |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2') \right) \right].
\label{eq:2dinfcalc1}
\end{align}
The expression in the first round brackets is smaller than or equal to
\begin{align}
&\| K \|_\infty^2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2'~ \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{\sqrt{(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{\sqrt{(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}}\nonumber\\
&= \| K \|_\infty^2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~ (2\pi)^2 (t_1-t_1') (t_2-t_2')\nonumber\\
&= (2\pi)^2 \| K \|_\infty^2 \frac{(t_1t_2)^2}{4},
\end{align}
where in the second line we made use of the identity
\begin{equation}
\int_{|{\mathbf{x}}| < \tau}d^2 {\mathbf{x}} \, \frac{1}{\sqrt{\tau^2-|{\mathbf{x}}|^2}} = 2 \pi \tau.
\label{eq:identity2d}
\end{equation}
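Identity \eqref{eq:identity2d} follows directly from polar coordinates:
\begin{equation*}
\int_{|{\mathbf{x}}| < \tau}d^2 {\mathbf{x}} \, \frac{1}{\sqrt{\tau^2-|{\mathbf{x}}|^2}} = 2\pi \int_0^\tau dr \, \frac{r}{\sqrt{\tau^2-r^2}} = 2\pi \left[ -\sqrt{\tau^2-r^2} \right]_0^\tau = 2 \pi \tau \: .
\end{equation*}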
Thus, we find with \eqref{eq:2dinfcalc1}:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \frac{\lambda^2}{(2\pi)^2} \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int d^2 {\mathbf{x}}_1 \, d^2 {\mathbf{x}}_2 \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2' \nonumber\\
& ~~~~\frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{\sqrt{(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}} \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{\sqrt{(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}} \, |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2')
\end{align}
Now we change the order of integration, introduce new integration variables ${\mathbf{y}}_i = {\mathbf{x}}_i-{\mathbf{x}}_i'$ instead of ${\mathbf{x}}_i$ and integrate over ${\mathbf{y}}_i$ (using again \eqref{eq:identity2d}). This yields:
\begin{align}
& \| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \nonumber\\
&\leq \lambda^2 \, \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2' ~(t_1-t_1') (t_2-t_2') \, |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2')\nonumber\\
&= \lambda^2 \, \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~(t_1-t_1') (t_2-t_2') \, \| \psi(t_1',\cdot,t_2',\cdot) \|^2_{L^2}.
\label{eq:hilfsformelind2dl2}
\end{align}
Replacing $\| \psi(t_1',\cdot,t_2',\cdot) \|^2_{L^2}$ in \eqref{eq:hilfsformelind2dl2} with $\| \psi \|^2_{\mathscr{B}_2}$, we find:
\begin{equation}
\| \widehat{L} \psi \|^2_{\mathscr{B}_2} \leq \esssup_{t_1,t_2 \in [0,T]} \lambda^2 \, \| K \|_\infty^2 \, \frac{(t_1t_2)^4}{16} \, \| \psi \|^2_{\mathscr{B}_2} = \lambda^2 \, \| \psi \|^2_{\mathscr{B}_2} \, \| K \|_\infty^2 \, \frac{T^8}{16}.
\end{equation}
This shows that $\widehat{L}$ is a bounded operator on $\mathscr{B}_2$. Hence, $\varphi_n \in \mathscr{B}_2 \, \forall n \in \mathbb{N}_0$.
Next, we prove the following estimate for $\varphi_n,~n\in \mathbb{N}_0$:
\begin{equation}
\| \varphi_n(t_1,\cdot,t_2,\cdot) \|_{L^2}^2 \leq \lambda^{2n} \, \| \psi^{\rm free} \|^2_{\mathscr{B}_2} \, \frac{\| K \|_\infty^{2n}}{4^n} \, \frac{(t_1t_2)^{4n}}{[(2n)!]^2} \:.
\label{eq:2dinfind}
\end{equation}
For $n=0$ this obviously holds. So let \eqref{eq:2dinfind} be true for some $n \in \mathbb{N}_0$. Plugging \eqref{eq:2dinfind} into \eqref{eq:hilfsformelind2dl2} yields:
\begin{align}
\| \varphi_{n+1}(t_1,\cdot,t_2,\cdot) \|^2_{L^2} &\leq \lambda^{2(n+1)} \, \| \psi^{\rm free} \|^2_{\mathscr{B}_2} \, \frac{\| K \|_\infty^{2(n+1)}}{4^{n+1}} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~(t_1-t_1') (t_2-t_2') \, \frac{(t_1't_2')^{4n}}{[(2n)!]^2}\nonumber\\
& = \lambda^{2(n+1)} \, \| \psi^{\rm free} \|^2_{\mathscr{B}_2} \, \frac{\| K \|_\infty^{2(n+1)}}{4^{n+1}} \frac{(t_1 t_2)^{4(n+1)}}{[(2n)!]^2} \, \frac{1}{[(4n+1)(4n+2)]^2}\nonumber\\
&\leq \lambda^{2(n+1)} \, \| \psi^{\rm free} \|^2_{\mathscr{B}_2} \, \frac{\| K \|_\infty^{2(n+1)}}{4^{n+1}} \frac{(t_1 t_2)^{4(n+1)}}{[(2(n+1))!]^2}.
\end{align}
This proves \eqref{eq:2dinfind}. \eqref{eq:2dinfind} in particular implies:
\begin{equation}
\| \varphi_n \|_{\mathscr{B}_2} \leq |\lambda|^n \, \| \psi^{\rm free} \|_{\mathscr{B}_2} \, \frac{\| K \|_\infty^n}{2^n} \frac{T^{4n}}{(2n)!} \: .
\end{equation}
This bound shows that $\sum_i \| \varphi_i \|_{\mathscr{B}_2}$ converges. As before, we conclude that $\sum_i \varphi_i \in \mathscr{B}_2$ is the unique solution of \eqref{eq:inteq2dsimplified}. \hfill\ensuremath{\square}
\end{proof}
\begin{theorem}[$d=3$] \label{thm:3dboundedkernel}
For every essentially bounded $K:\mathbb{R}^8 \rightarrow \mathbb{C}$ and every $\psi^{\rm free} \in \mathscr{B}_3$, \eqref{eq:inteq3dsimplified} possesses a unique solution $\psi \in \mathscr{B}_3$.
\end{theorem}
\begin{proof}
The strategy of the proof and the notation are the same as before. We demonstrate that the integral operator in \eqref{eq:inteq3dsimplified} defines a bounded operator on $\mathscr{B}_3$. Then we derive an estimate for $\| \varphi_n \|_{\mathscr{B}_3}$.
We begin again with estimates on the $L^2$ norm of $\widehat{L} \psi$ for arbitrary $\psi\in\mathscr{B}_3$ at given times, i.e., on
\begin{multline}
\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|_{L^2}^2 = \int d^3{\mathbf{x}}_1 \, d^3 {\mathbf{x}}_2 \, \Biggl| \frac{\lambda}{(4\pi)^2} \int d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~ \frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|}\, \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \\[2mm]
\times~ K(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2') \, \psi(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2') \Biggr|^2\,.
\end{multline}
Here, we have as usual chosen an arbitrary representative of the given $\psi\in\mathscr{B}_3$. Due to the delay in the time arguments of $\psi$, it is not obvious whether $\psi$ with these arguments is even square-integrable as a function of ${\mathbf{x}}_1'$ and ${\mathbf{x}}_2'$, as $\psi$ was only assumed square-integrable for (almost all) fixed time arguments. So, the integral on the right-hand side might be $\infty$. Nevertheless, we can use the Cauchy-Schwarz inequality to obtain that
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|_{L^2}^2 \leq \frac{\lambda^2}{(4\pi)^4} \int d^3 {\mathbf{x}}_1 \, d^3 {\mathbf{x}}_2 \biggl[ \biggl( \int d^3 {\mathbf{x}}_1' \,d^3 {\mathbf{x}}_2' ~ \frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|} \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|}\nonumber\\
&\times |K|^2(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2') \biggr) \biggl( \int d^3 {\mathbf{x}}_1' \,d^3 {\mathbf{x}}_2' ~ \frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|} \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \nonumber\\
& \times |\psi|^2(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2') \biggr) \biggr]\nonumber\\
&\leq \frac{\lambda^2}{(4\pi)^2} \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int d^3 {\mathbf{x}}_1 \, d^3 {\mathbf{x}}_2 \, d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~ \frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|} \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \nonumber\\
&\times |\psi|^2(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2').
\end{align}
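In the last step, the first round bracket was bounded by $\|K\|_\infty^2$ times the remaining integral, which can be evaluated in spherical coordinates:
\begin{equation*}
\int d^3 {\mathbf{x}}' ~ \frac{H(t-|{\mathbf{x}}-{\mathbf{x}}'|)}{|{\mathbf{x}}-{\mathbf{x}}'|} = 4\pi \int_0^{t} dr \, r = 2\pi t^2 \: .
\end{equation*}
Applied in both particle variables, this yields the prefactor $\frac{\lambda^2}{(4\pi)^4} \, (2\pi)^2 (t_1t_2)^2 = \frac{\lambda^2}{(4\pi)^2} \, \frac{(t_1t_2)^2}{4}$.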
Now we change the order of integration, substitute the integration variables ${\mathbf{x}}_i$ by ${\mathbf{y}}_i = {\mathbf{x}}_i-{\mathbf{x}}_i'$ (the Jacobi determinant is 1), and change the order of integration back. (Since the integrand is non-negative, we can do this by virtue of Tonelli's theorem even if we do not know whether the integral is finite.) This leads to
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|_{L^2}^2 \leq \frac{\lambda^2}{(4\pi)^2} \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int d^3 {\mathbf{y}}_1 \, d^3 {\mathbf{y}}_2 \, d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~\frac{H(t_1-|{\mathbf{y}}_1|)}{|{\mathbf{y}}_1|} \frac{H(t_2-|{\mathbf{y}}_2|)}{|{\mathbf{y}}_2|} \nonumber\\
&\times |\psi|^2(t_1-|{\mathbf{y}}_1|,{\mathbf{x}}_1',t_2-|{\mathbf{y}}_2|,{\mathbf{x}}_2') \nonumber\\
&= \frac{\lambda^2}{(4\pi)^2} \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int d^3 {\mathbf{y}}_1 \, d^3 {\mathbf{y}}_2 ~ \frac{H(t_1-|{\mathbf{y}}_1|)}{|{\mathbf{y}}_1|} \frac{H(t_2-|{\mathbf{y}}_2|)}{|{\mathbf{y}}_2|} \: \| \psi(t_1-|{\mathbf{y}}_1|,\cdot,t_2-|{\mathbf{y}}_2|, \cdot) \|_{L^2}^2 \nonumber\\
&= \lambda^2 \: \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int_0^{t_1} dr_1 \, \int_0^{t_2} dr_2 ~ r_1 r_2 \, \| \psi(t_1-r_1,\cdot,t_2-r_2, \cdot) \|_{L^2}^2
\label{eq:hilfsformelind3d}
\end{align}
for all $(t_1,t_2)$. Since for almost all $(r_1,r_2)$, $\| \psi(t_1-r_1,\cdot,t_2-r_2,\cdot) \|_{L^2} \leq \| \psi \|_{\mathscr{B}_3}$, we have that
\begin{align}
\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|_{L^2}^2
&\leq \lambda^2 \: \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \int_0^{t_1} dr_1 \, \int_0^{t_2} dr_2 ~ r_1 r_2 \, \| \psi \|^2_{\mathscr{B}_3} \nonumber\\
&= \lambda^2 \: \| K \|_\infty^2 \frac{(t_1t_2)^2}{4} \| \psi \|^2_{\mathscr{B}_3} \frac{(t_1t_2)^2}{4}
\label{eq:aux3ineq}
\end{align}
for all $(t_1,t_2)$ (and all representatives of $\psi$). So now we know that the left-hand side is finite.
Moreover,
\begin{equation}
\| \widehat{L} \psi \|^2_{\mathscr{B}_3} \leq \esssup_{t_1,t_2 \in [0,T]} \lambda^2 \, \| K \|_\infty^2 \frac{(t_1t_2)^4}{16} \| \psi \|^2_{\mathscr{B}_3} = \lambda^2\, \| K \|_\infty^2 \frac{T^8}{16} \, \| \psi \|^2_{\mathscr{B}_3} \:.
\end{equation}
This shows that $\widehat{L}$ is a bounded operator $\mathscr{B}_3 \to \mathscr{B}_3$. Hence, $\varphi_n \in \mathscr{B}_3 ~ \forall n \in \mathbb{N}_0$.
We now prove the following estimate for the norm of $\varphi_n$ by induction over $n \in \mathbb{N}_0$:
\begin{equation}
\| \varphi_n(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \| \psi^{\rm free} \|^2_{\mathscr{B}_3} \: \frac{\lambda^{2n} \, \| K \|_\infty^{2n}}{4^n} \, \frac{(t_1t_2)^{4n}}{[(2n)!]^2}
\label{eq:3dinfindl2}
\end{equation}
for all $(t_1,t_2)$.
For $n=0$, this obviously holds. So let \eqref{eq:3dinfindl2} be true for some $n \in \mathbb{N}_0$. Then, plugging \eqref{eq:3dinfindl2} into \eqref{eq:hilfsformelind3d}, we obtain that
\begin{align}
&\| \varphi_{n+1}(t_1,\cdot,t_2,\cdot)\|_{L^2}^2\nonumber\\
& \leq \| \psi^{\rm free} \|^2_{\mathscr{B}_3} \frac{\lambda^{2(n+1)} \, \| K \|_\infty^{2(n+1)}}{4^{n+1}} (t_1t_2)^2 \int_0^{t_1} dr_1 \, \int_0^{t_2} dr_2 ~ r_1 r_2 \frac{(t_1-r_1)^{4n}(t_2-r_2)^{4n}}{[(2n)!]^2} \nonumber\\
&= \| \psi^{\rm free} \|^2_{\mathscr{B}_3} \frac{\lambda^{2(n+1)} \, \| K \|_\infty^{2(n+1)}}{4^{n+1}} (t_1t_2)^2 \int_0^{t_1} d \rho_1 \, \int_0^{t_2} d \rho_2 ~ (t_1-\rho_1) (t_2-\rho_2) \frac{\rho_1^{4n} \rho_2^{4n}}{[(2n)!]^2}\nonumber\\
&= \| \psi^{\rm free} \|^2_{\mathscr{B}_3} \frac{\lambda^{2(n+1)} \, \| K \|_\infty^{2(n+1)}}{4^{n+1}} \, \frac{(t_1t_2)^{4(n+1)}}{[(2n)!]^2} \, \frac{1}{[(4n+1)(4n+2)]^2}\nonumber\\
&\leq \| \psi^{\rm free} \|^2_{\mathscr{B}_3} \frac{\lambda^{2(n+1)} \, \| K \|_\infty^{2(n+1)}}{4^{n+1}} \, \frac{(t_1t_2)^{4(n+1)}}{[(2(n+1))!]^2}.
\end{align}
This proves \eqref{eq:3dinfindl2}. In particular, it follows that
\begin{equation}
\| \varphi_n \|_{\mathscr{B}_3} \leq \| \psi^{\rm free} \|_{\mathscr{B}_3} \, \frac{|\lambda|^n \, \| K \|^n_\infty}{2^n} \frac{T^{4n}}{(2n)!}.
\end{equation}
This shows that $\sum_i \| \varphi_i \|_{\mathscr{B}_3}$ converges. As before, we conclude that $\sum_i \varphi_i \in \mathscr{B}_3$ is the unique solution of \eqref{eq:inteq3dsimplified}. \hfill\ensuremath{\square}
\end{proof}
\paragraph{Remarks.}
\begin{enumerate}
\item Interestingly, all the main estimates are the same in dimensions $d=1,2,3$, although the integrations leading to them were rather different.
%
\item In a similar way as in the proofs of Thms.~\ref{thm:1dboundedkernel}-\ref{thm:3dboundedkernel}, one can show the existence and uniqueness of a solution $\psi \in L^\infty \big([0,T]^2\times \mathbb{R}^{2d} \big)$ (for the respective $d$ of \eqref{eq:inteq1dsimplified}-\eqref{eq:inteq3dsimplified}). In combination with Thms.~\ref{thm:1dboundedkernel}-\ref{thm:3dboundedkernel}, we then find that if $\psi^{\rm free} \in L^\infty \big([0,T]^2\times \mathbb{R}^{2d} \big) \cap \mathscr{B}_d$, then also $\psi \in L^\infty \big([0,T]^2\times \mathbb{R}^{2d} \big)\cap \mathscr{B}_d$.
\end{enumerate}
\subsection{Singular interaction kernels} \label{sec:singularkernels}
In $d=2$ and $d=3$, the physically natural interaction kernels are singular (see Eqs.~\eqref{eq:inteq2d}, \eqref{eq:inteq3d}). The main difficulty is that the singularities of the interaction kernel and of the Green's functions are connected. In the following, we show a possible way to deal with such connected singularities. However, compared to the physically natural cases, we still make simplifications. In $d=3$, the reason for these simplifications is that the $\delta$-functions in the interaction kernel lead to complicated weight functions. In $d=2$, the Green's functions and the interaction kernel are simply too singular for our strategy to work without modifications.
\paragraph{Modified singular integral equation in $d=3$.}
We consider the integral equation
\begin{align}
\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(4 \pi)^2} \int_0^{t_1} dt_1' \int d^3 {\mathbf{x}}_1' \int_0^{t_2} dt_2' \int d^3 {\mathbf{x}}_2' ~ \nonumber\\
\frac{\delta(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|} \frac{\delta(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \, \frac{f(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2')}{|{\mathbf{x}}_1'-{\mathbf{x}}_2'|} \, \psi(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2'),
\label{eq:3dsingm0}
\end{align}
where $f$ is smooth and bounded. Eq.~\eqref{eq:3dsingm0} imitates the structure of the integral equation \eqref{eq:inteq3dsimplified} for $d=3$ and $m_1=m_2=0$. The difference is that we have replaced the physically natural interaction kernel
\begin{equation}
\frac{1}{2\pi} \delta\bigl((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2 \bigr) = \frac{1}{4 \pi\, |{\mathbf{x}}_1'-{\mathbf{x}}_2'|} \biggl[\delta(t_1'-t_2'-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|) + \delta(t_1'-t_2'+|{\mathbf{x}}_1'-{\mathbf{x}}_2'|)\biggr]
\end{equation}
by $f(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2')/|{\mathbf{x}}_1'-{\mathbf{x}}_2'|$.
By integrating out the delta functions, \eqref{eq:3dsingm0} can be rewritten as
\begin{align}
&\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(4 \pi)^2} \int d^3 {\mathbf{x}}_1'\, d^3 {\mathbf{x}}_2' ~\frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|} \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|} \nonumber\\
&\times \frac{f(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2')}{|{\mathbf{x}}_1'-{\mathbf{x}}_2'|} \, \psi(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2').
\label{eq:3dsingm0b}
\end{align}
\begin{theorem} \label{thm:singularkernel3d}
For every bounded $f:\mathbb{R}^8\to \mathbb{C}$ and every $\psi^{\rm free} \in \mathscr{B}_3$, \eqref{eq:3dsingm0b} has a unique solution $\psi \in \mathscr{B}_3$.
\end{theorem}
\begin{proof}
The proof is structured as before. We prove that the integral operator in \eqref{eq:3dsingm0} defines a bounded operator $\widehat{L}$ on $\mathscr{B}_3$. Then we derive an estimate for $\| \varphi_n \|_{\mathscr{B}_3}$.
For arbitrary $\psi \in \mathscr{B}_3$, \eqref{eq:3dsingm0b} and the Cauchy-Schwarz inequality yield that
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|^2_{L^2} \leq \frac{\lambda^2}{(4\pi)^4} \int d^3 {\mathbf{x}}_1 \, d^3 {\mathbf{x}}_2 \biggl[ \biggl( \int d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2' ~\frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2} \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2} \nonumber\\
& \times |\psi|^2\Bigl(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2'\Bigr)\biggr) \nonumber\\
&\times \biggl( \int d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2' ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)\frac{|f|^2\bigl(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2'\bigr)}{|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2} \biggr) \biggr].
\label{eq:3dsingcalc1}
\end{align}
We first consider the integral $I$ in the second round bracket and split it up into $I= I_1 + I_2$ where $I_1$ is the part with $|{\mathbf{x}}_1'-{\mathbf{x}}_2'| \leq 1$ and $I_2$ the part with $|{\mathbf{x}}_1'-{\mathbf{x}}_2'| > 1$. For the first part, we find, replacing $|f|^2$ with $\|f\|_\infty^2$ and dropping the second Heaviside function:
\begin{align}
I_1 &\leq \|f \|_\infty^2 \int\limits_{|{\mathbf{x}}_1'-{\mathbf{x}}_2'| \leq 1} d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2' ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) \, \frac{1}{|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2}\nonumber\\
& = \|f \|_\infty^2 \int d^3 {\mathbf{x}}_1' \int_{|{\mathbf{y}}| \leq 1} d^3 {\mathbf{y}} ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) \,\frac{1}{|{\mathbf{y}}|^2} \nonumber\\
&= \|f \|_\infty^2 \left( \int d^3 {\mathbf{x}}_1' ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) \right) \left( \int_{|{\mathbf{y}}| \leq 1} \frac{d^3 {\mathbf{y}}}{|{\mathbf{y}}|^2} \right)\nonumber\\
&= \|f \|_\infty^2 \, \frac{4\pi t_1^3}{3} \, 4\pi \leq \|f \|_\infty^2 \,\frac{(4\pi)^2 \, T^3 }{3}.
\end{align}
For $I_2$, we obtain:
\begin{align}
I_2 &\leq \|f \|_\infty^2 \int_{|{\mathbf{x}}_1'-{\mathbf{x}}_2'| > 1} d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2' ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|) \, \frac{1}{|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2}\nonumber\\
&\leq \| f \|_\infty^2 \int d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2' ~H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)\nonumber\\
&= \| f \|_\infty^2 \,\left(\frac{4\pi}{3}\right)^2 \, (t_1 t_2)^3 \leq \| f \|_\infty^2 \,\left(\frac{4\pi}{3}\right)^2 \, T^6.
\end{align}
Hence, $I = I_1 + I_2 \leq \|f \|^2_\infty \, \frac{(4\pi)^2}{3} (T^3 + T^6)$, since $\left(\frac{4\pi}{3}\right)^2 \leq \frac{(4\pi)^2}{3}$.
Thus, replacing the second round bracket in \eqref{eq:3dsingcalc1} by this bound for $I$, we obtain:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot)\|^2_{L^2} \leq \frac{\lambda^2}{(4\pi)^2} \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \int d^3 {\mathbf{x}}_1 \, d^3 {\mathbf{x}}_2 \, d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~\frac{H(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}\nonumber\\
& ~~~~~\times \frac{H(t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2} \, |\psi|^2(t_1-|{\mathbf{x}}_1-{\mathbf{x}}_1'|,{\mathbf{x}}_1',t_2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|,{\mathbf{x}}_2')\nonumber\\
&= \frac{\lambda^2}{(4\pi)^2} \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \int d^3 {\mathbf{y}}_1 \, d^3 {\mathbf{y}}_2 \, d^3 {\mathbf{x}}_1' \, d^3 {\mathbf{x}}_2'~\frac{H(t_1-|{\mathbf{y}}_1|)}{|{\mathbf{y}}_1|^2} \frac{H(t_2-|{\mathbf{y}}_2|)}{|{\mathbf{y}}_2|^2} \nonumber\\
& ~~~~~\times |\psi|^2(t_1-|{\mathbf{y}}_1|,{\mathbf{x}}_1',t_2-|{\mathbf{y}}_2|,{\mathbf{x}}_2') \nonumber\\
&= \lambda^2 \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \int_0^{t_1} dr_1 \int_{0}^{t_2} dr_2 ~ \| \psi(t_1-r_1,\cdot,t_2-r_2,\cdot) \|_{L^2}^2\nonumber\\
&= \lambda^2 \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \int_0^{t_1} d\rho_1 \int_{0}^{t_2} d\rho_2 ~ \| \psi(\rho_1,\cdot,\rho_2,\cdot) \|_{L^2}^2.
\label{eq:hilfsformel3dsing}
\end{align}
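The passage to the $(r_1,r_2)$-integrals in the second-to-last step again used spherical coordinates, this time in the form
\begin{equation*}
\int d^3 {\mathbf{y}} ~ \frac{H(t-|{\mathbf{y}}|)}{|{\mathbf{y}}|^2} \, g(t-|{\mathbf{y}}|) = 4\pi \int_0^{t} dr ~ g(t-r) \: ;
\end{equation*}
the two resulting factors of $4\pi$ cancel the $(4\pi)^2$ in the denominator.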
In particular, using $\| \psi(\rho_1,\cdot,\rho_2,\cdot) \|_{L^2}^2 \leq \| \psi\|^2_{\mathscr{B}_3}$ for almost every $(\rho_1,\rho_2)$, this shows that
\begin{equation}
\| \widehat{L} \psi \|_{\mathscr{B}_3} \leq |\lambda| \: \| f \|_\infty \, \frac{(T^5+T^8)^{1/2}}{\sqrt{3}} \, \|\psi \|_{\mathscr{B}_3} \:.
\end{equation}
Hence, the integral operator $\widehat{L}$ is bounded, and $\varphi_n \in \mathscr{B}_3\, \forall n \in \mathbb{N}_0$.
We now turn to the estimate of $\| \varphi_n(t_1,\cdot,t_2,\cdot) \|_{L^2}$. We shall prove by induction over $n \in \mathbb{N}_0$:
\begin{equation}
\| \varphi_n(t_1,\cdot,t_2,\cdot) \|_{L^2}^2 \leq \left( \lambda^2 \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \right)^n \, \frac{(t_1t_2)^n}{(n!)^2} \, \| \psi^{\rm free} \|^2_{\mathscr{B}_3}.
\label{eq:3dsingind}
\end{equation}
For $n=0$, this obviously holds. So let \eqref{eq:3dsingind} be true for some $n \in \mathbb{N}_0$. Plugging \eqref{eq:3dsingind} into \eqref{eq:hilfsformel3dsing} for $\psi = \varphi_n$, we find:
\begin{align}
&\| \varphi_{n+1}(t_1,\cdot,t_2,\cdot) \|_{L^2}^2 \leq \left(\lambda^2 \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3} \right)^{n+1} \int_0^{t_1} d\rho_1 \int_{0}^{t_2} d\rho_2 ~ \frac{(\rho_1 \rho_2)^n}{(n!)^2}\, \| \psi^{\rm free} \|^2_{\mathscr{B}_3}\nonumber\\
&= \left( \lambda^2 \, \| f \|_\infty^2 \, \frac{(T^3+T^6)}{3}\right)^{n+1} \frac{(t_1 t_2)^{n+1}}{((n+1)!)^2}\, \| \psi^{\rm free} \|^2_{\mathscr{B}_3}.
\end{align}
This proves \eqref{eq:3dsingind}. In particular, \eqref{eq:3dsingind} implies:
\begin{equation}
\| \varphi_n \|_{\mathscr{B}_3} \leq \left( |\lambda| \: \| f \|_\infty \, \frac{(T^3+T^6)^{1/2}}{\sqrt{3}} \right)^{\!n} \: \frac{T^n}{n!} \: \| \psi^{\rm free} \|_{\mathscr{B}_3}.
\end{equation}
This shows that $\sum_i \| \varphi_i \|_{\mathscr{B}_3}$ converges. As before, we conclude that $\sum_i \varphi_i \in \mathscr{B}_3$ is the unique solution of \eqref{eq:3dsingm0b}. \hfill\ensuremath{\square}
\end{proof}
\begin{remark}
Splitting the singularities of $G_1,G_2$ and $K$ via the Cauchy-Schwarz inequality does not work for the physically natural equation \eqref{eq:inteq2d} in $d=2$. The reason is that the integral $\int_0^t dt' \int d^2 {\mathbf{x}}' ~\frac{H(t'-|{\mathbf{x}}'|)}{(t')^2-|{\mathbf{x}}'|^2}$ diverges. However, we can treat a problem with $[(t')^2-|{\mathbf{x}}'|^2]^{-\alpha/2}$ with $\alpha < 1$ instead of $[(t')^2-|{\mathbf{x}}'|^2]^{-1/2}$.
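Indeed, in polar coordinates,
\begin{equation*}
\int d^2 {\mathbf{x}}' ~ \frac{H(t'-|{\mathbf{x}}'|)}{(t')^2-|{\mathbf{x}}'|^2} = 2\pi \int_0^{t'} dr \, \frac{r}{(t')^2-r^2} = \pi \left[ -\ln\bigl((t')^2-r^2\bigr) \right]_0^{t'} = \infty \: ,
\end{equation*}
a logarithmic divergence at the light cone $r = t'$, whereas for an exponent $\alpha < 1$ the analogous integral is finite.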
\end{remark}
\paragraph{Modified singular integral equation in $d=2$.}
Let $0 < \alpha < 1$. The previous remark suggests considering the following integral equation on $\mathscr{B}_2$:
\begin{align}
&\psi(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) = \psi^{\rm free}(t_1,{\mathbf{x}}_1,t_2,{\mathbf{x}}_2) + \frac{\lambda}{(2\pi)^3} \int_0^{t_1} dt_1' \int d^2 {\mathbf{x}}_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_2'~\nonumber\\
& \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{[(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2]^{\alpha/2}} \cos\left( m_1\sqrt{(t_1-t_1')^2-|{\mathbf{x}}_1-{\mathbf{x}}_1'|^2}\right) \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{[(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2]^{\alpha/2}} \nonumber\\
& \times \cos\left( m_2\sqrt{(t_2-t_2')^2-|{\mathbf{x}}_2-{\mathbf{x}}_2'|^2}\right) \frac{H((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2)}{[(t_1'-t_2')^2 - |{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2]^{\alpha/2}}\, \psi(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2').
\label{eq:2dsingalpha}
\end{align}
\begin{theorem} \label{thm:singularkernel2d}
For every $\psi^{\rm free} \in \mathscr{B}_2$, \eqref{eq:2dsingalpha} has a unique solution $\psi \in \mathscr{B}_2$.
\end{theorem}
\begin{proof}
The proof is structured like the previous ones. First we show that the integral operator $\widehat{L}$ in \eqref{eq:2dsingalpha} is a bounded operator on $\mathscr{B}_2$. Then we derive an estimate for the norm of $\varphi_n$ (defined analogously as before).
For the boundedness, we use \eqref{eq:2dsingalpha}, the bound $|\cos(\,\cdot\,)| \leq 1$ for both cosine factors, and the Cauchy-Schwarz inequality to obtain:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \frac{\lambda^2}{(2\pi)^6} \int d^2 {\mathbf{x}}_1 \, d^2 {\mathbf{x}}_2 \left[ \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2'~\right. \right. \nonumber\\
& \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{[(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2]^\alpha} \left. \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{[(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2]^\alpha} |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2')\right)\nonumber\\
&\times \left( \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2'~ H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|) H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|) \right. \nonumber\\
& \left. \left. \times \frac{H((t_1'-t_2')^2-|{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2)}{[(t_1'-t_2')^2 - |{\mathbf{x}}_1'-{\mathbf{x}}_2'|^2]^\alpha} \right) \right]
\label{eq:2dsingalphacalc1}
\end{align}
We first estimate the expression in the second round bracket.
Changing variables, ${\mathbf{x}}_i' \rightarrow {\mathbf{y}}_i = {\mathbf{x}}_i'-{\mathbf{x}}_i$, it becomes:
\begin{equation}
\int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{y}}_1 \, d^2 {\mathbf{y}}_2~ H(t_1-t_1'-|{\mathbf{y}}_1|) H(t_2-t_2'-|{\mathbf{y}}_2|) \frac{H((t_1'-t_2')^2-|{\mathbf{y}}_1+{\mathbf{x}}_1-{\mathbf{y}}_2-{\mathbf{x}}_2|^2)}{[(t_1'-t_2')^2 - |{\mathbf{y}}_1+{\mathbf{x}}_1-{\mathbf{y}}_2-{\mathbf{x}}_2|^2]^\alpha}
\end{equation}
Changing variables another time, namely to ${\mathbf{y}} = {\mathbf{y}}_1+{\mathbf{x}}_1-{\mathbf{y}}_2-{\mathbf{x}}_2$ and $\mathbf{Y} = {\mathbf{y}}_1 + {\mathbf{y}}_2$ (with the Jacobi determinant $\frac{1}{4}$) and then introducing spherical coordinates for ${\mathbf{y}}$ and $\mathbf{Y}$, we see that the expression is smaller than or equal to
\begin{align}
&\frac{(2\pi)^2}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int_0^{t_1-t_1'+t_2 - t_2'} dR \, R \int_0^{|t_1'-t_2'|} dr ~ \frac{r}{((t_1'-t_2')^2-r^2)^{\alpha}}\nonumber\\
&= \frac{(2\pi)^2}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int_0^{t_1-t_1'+t_2 - t_2'} dR \, R ~ \left[- \frac{1}{2(1-\alpha)}((t_1'-t_2')^2-r^2)^{1-\alpha}\right]_{0}^{|t_1'-t_2'|}\nonumber\\
&= \frac{(2\pi)^2}{4} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int_0^{t_1-t_1'+t_2 - t_2'} dR \, R ~\frac{1}{2(1-\alpha)}|t_1'-t_2'|^{2(1-\alpha)}\nonumber\\
&= \frac{(2\pi)^2}{2^4(1-\alpha)} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' ~(t_1-t_1'+t_2 - t_2')^2 \, |t_1'-t_2'|^{2(1-\alpha)}.
\end{align}
For our purposes, a crude upper bound is sufficient. To this end, we note $(t_1-t_1'+t_2 - t_2')^2 \leq (t_1+t_2)^2$, $|t_1'-t_2'|^{2(1-\alpha)} \leq (t_1+t_2)^{2(1-\alpha)}$ and $t_1 t_2 \leq (t_1+t_2)^2$. Thus, we see that the previous expression is smaller than or equal to
\begin{equation}
\frac{(2\pi)^2}{2^4(1-\alpha)} (t_1+t_2)^2 \: (t_1+t_2)^{2(1-\alpha)} \: t_1 t_2 \leq \frac{(2\pi)^2}{2^4(1-\alpha)} (t_1+t_2)^{6-2\alpha}.
\end{equation}
With this result, \eqref{eq:2dsingalphacalc1} becomes:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \lambda^2 \frac{(t_1+t_2)^{6-2\alpha}}{(2\pi)^4 \cdot 2^4(1-\alpha)} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{x}}_1 \, d^2 {\mathbf{x}}_2 \, d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2' \nonumber\\
& \frac{H(t_1-t_1'-|{\mathbf{x}}_1-{\mathbf{x}}_1'|)}{[(t_1-t_1')^2 - |{\mathbf{x}}_1-{\mathbf{x}}_1'|^2]^\alpha} \frac{H(t_2-t_2'-|{\mathbf{x}}_2-{\mathbf{x}}_2'|)}{[(t_2-t_2')^2 - |{\mathbf{x}}_2-{\mathbf{x}}_2'|^2]^\alpha} |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2').
\end{align}
We now change variables, ${\mathbf{x}}_i \rightarrow {\mathbf{x}}_i-{\mathbf{x}}_i' =: {\mathbf{y}}_i$ (Jacobi determinant 1). This yields:
\begin{align}
&\| (\widehat{L} \psi)(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \lambda^2 \frac{(t_1+t_2)^{6-2\alpha}}{(2\pi)^4 \cdot 2^4(1-\alpha)} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{y}}_1 \, d^2 {\mathbf{y}}_2 \, d^2 {\mathbf{x}}_1' \, d^2 {\mathbf{x}}_2' \nonumber\\
&~~~ \frac{H(t_1-t_1'-|{\mathbf{y}}_1|)}{[(t_1-t_1')^2 - |{\mathbf{y}}_1|^2]^\alpha} \frac{H(t_2-t_2'-|{\mathbf{y}}_2|)}{[(t_2-t_2')^2 - |{\mathbf{y}}_2|^2]^\alpha} |\psi|^2(t_1',{\mathbf{x}}_1',t_2',{\mathbf{x}}_2') \nonumber\\
&= \lambda^2 \frac{(t_1+t_2)^{6-2\alpha}}{(2\pi)^4 \cdot 2^4(1-\alpha)} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2' \int d^2 {\mathbf{y}}_1 \, d^2 {\mathbf{y}}_2 \nonumber\\
&~~~\frac{H(t_1-t_1'-|{\mathbf{y}}_1|)}{[(t_1-t_1')^2 - |{\mathbf{y}}_1|^2]^\alpha} \frac{H(t_2-t_2'-|{\mathbf{y}}_2|)}{[(t_2-t_2')^2 - |{\mathbf{y}}_2|^2]^\alpha} \, \| \psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}\nonumber\\
&= \lambda^2 \frac{(t_1+t_2)^{6-2\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2'~ (t_1-t_1')^{2(1-\alpha)}(t_2-t_2')^{2(1-\alpha)} \| \psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}\nonumber\\
&\leq \lambda^2 \frac{(2T)^{10-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2'~\| \psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2}.
\label{eq:2dsingalphahilfsformel}
\end{align}
Using $\| \psi(t_1',\cdot,t_2',\cdot)\|^2_{L^2} \leq \| \psi \|^2$, we deduce:
\begin{align}
\| \widehat{L} \psi \|^2 \leq \lambda^2 \frac{(2T)^{12-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \, \| \psi \|^2.
\end{align}
This shows that $\widehat{L}$ indeed is a bounded operator on $\mathscr{B}_2$. Next, we prove the following estimate by induction over $n \in \mathbb{N}_0$:
\begin{equation}
\| \varphi_n(t_1,\cdot,t_2,\cdot) \|^2_{L^2} \leq \| \psi^{\rm free}\|^2 \, \left( \frac{\lambda^2 \cdot (2T)^{12-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \right)^n \, \frac{(t_1t_2)^n}{(n!)^2}.
\label{eq:2dsingalphaind}
\end{equation}
For $n=0$, \eqref{eq:2dsingalphaind} obviously holds. So let \eqref{eq:2dsingalphaind} be true for some $n \in \mathbb{N}_0$. Now $\varphi_{n+1} = \widehat{L} \varphi_n$ and thus plugging \eqref{eq:2dsingalphaind} into \eqref{eq:2dsingalphahilfsformel} yields:
\begin{align}
\| \varphi_{n+1}(t_1,\cdot,t_2,\cdot) \|^2_{L^2} &\leq \| \psi^{\rm free} \|^2 \, \left( \frac{\lambda^2 \cdot (2T)^{12-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \right)^{n+1} \int_0^{t_1} dt_1' \int_0^{t_2} dt_2'~\frac{(t_1't_2')^n}{(n!)^2}\nonumber\\
&= \| \psi^{\rm free} \|^2 \, \left( \frac{\lambda^2 \cdot (2T)^{12-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \right)^{n+1} \, \frac{(t_1t_2)^{n+1}}{[(n+1)!]^2}.
\end{align}
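For the reader's convenience, the last equality uses the elementary identity

```latex
\begin{equation*}
\int_0^{t_1} dt_1' \int_0^{t_2} dt_2'~\frac{(t_1't_2')^n}{(n!)^2}
= \frac{t_1^{n+1}\, t_2^{n+1}}{(n+1)^2\,(n!)^2}
= \frac{(t_1t_2)^{n+1}}{[(n+1)!]^2}.
\end{equation*}
```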
This proves \eqref{eq:2dsingalphaind}. In particular, \eqref{eq:2dsingalphaind} implies:
\begin{equation}
\| \varphi_n \|_{\mathscr{B}_2} \leq \frac{\| \psi^{\rm free}\|_{\mathscr{B}_2}}{n!} \, \left( \frac{\lambda^2 \cdot (2T)^{14-4\alpha}}{(2\pi)^2 \cdot 2^6(1-\alpha)^3} \right)^{n/2}.
\end{equation}
This bound shows that $\sum_i \| \varphi_i \|_{\mathscr{B}_2}$ converges. Hence $\sum_i \varphi_i$ converges in $\mathscr{B}_2$, and analogously to before it follows that this series is, in fact, the unique solution of \eqref{eq:2dsingalpha}. \hfill\ensuremath{\square}
\end{proof}
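The contraction mechanism behind the proofs above, namely that each application of the integral operator gains a factor of order $t^n/n!$, can be illustrated numerically on the scalar toy Volterra equation $\psi(t) = 1 + \lambda \int_0^t \psi(s)\,ds$, whose exact solution is $e^{\lambda t}$. The following sketch is purely illustrative and not part of the proofs; the discretization (left rectangle rule) and iteration counts are chosen ad hoc.

```python
def solve_volterra(lam, T, n_steps=1000, n_iter=60):
    """Picard iteration for the toy Volterra equation
        psi(t) = 1 + lam * int_0^t psi(s) ds,
    whose exact solution is exp(lam * t).  Each application of the
    integral operator contributes a factor t^n / n!, which is what
    makes the Neumann series converge for every lam and T.
    """
    dt = T / n_steps
    psi = [1.0] * (n_steps + 1)              # start from psi_free == 1
    for _ in range(n_iter):
        # psi <- psi_free + L psi  (left rectangle rule for the integral)
        running = 0.0
        new = [1.0]
        for k in range(n_steps):
            running += psi[k] * dt
            new.append(1.0 + lam * running)
        psi = new
    return psi[-1]                           # approximates exp(lam * T)
```

With `lam = 1.0` and `T = 1.0`, the iteration converges to the discrete analogue of $e \approx 2.718$, independently of the starting point, mirroring the unconditional convergence of the series $\sum_n \varphi_n$.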
\section{Conclusions} \label{sec:conclusion}
\paragraph{Summary.}
In this paper, we have provided the first existence and uniqueness results for certain classes of multi-time integral equations of the form \eqref{eq:inteq}. We have focused on the case that the Green's functions in \eqref{eq:inteq} are retarded Green's functions of the Klein-Gordon equation. It was then demonstrated that assuming a beginning of time (which seems plausible in view of the Big Bang of our universe), these integral equations attain a Volterra-type structure. Hence, the time integrals run only from 0 to $t_i$, $i=1,2$. However, compared to the standard cases for multi-dimensional Volterra integral equations, the multi-time integral equations in this paper show several new features that necessitate a novel treatment: (a)~combined space and time integrals occur, (b)~the integral kernels are not square integrable, (c)~the kernels are singular (except for $d=1$). This singular behavior manifests itself in two aspects: (i) the Green's functions are singular and (ii) the interaction kernel is singular as well. The fact that these two types of singularities are connected makes them particularly challenging.
We were able to give results covering each of these features (a)--(c) for integral equations that are simplified compared to the physically natural cases. (In $d=1$, however, the physically natural case is covered.) The simplifications were introduced with care in order not to deviate overly from these natural cases. In particular, arbitrary but bounded interaction kernels have been covered. We also proved some results for singular interaction kernels which, in a certain sense, approximate the physically natural ones. This shows that it is, in principle, possible to deal with the above-mentioned connected singularities.
\paragraph{Discussion.}
In the context of other equations involving time delay, such as delay differential equations, our results may appear surprising. For these equations, it is notoriously hard to prove the existence and uniqueness of solutions. However, our equations involve time delay (even in many variables) and we have obtained global existence and uniqueness results for them. So the question arises: what makes our equations more tractable than a typical delay differential equation?
We believe that mainly two features are responsible: first, our equations are linear while the typical delay differential equation is not; and second, in our case, the time delay is bounded because of the assumed beginning in time and an arbitrary final time $T$ up to which we would like to solve the equation.
Our results also shed new light on a question discussed in \cite{direct_interaction_quantum}, namely: which data parametrize the solution spaces of multi-time integral equations \eqref{eq:inteq}? The conjecture that a given solution $\psi^{\rm free}$ of the free multi-time equations determines the solution $\psi$ of \eqref{eq:inteq} uniquely has turned out correct for the multi-time equations studied in this paper. We can even say more than that. As a consequence of the beginning in time and the retarded Green's functions, $\psi^{\rm free}(t_1=0,\cdot,t_2=0,\cdot)$ plays the role of initial data for $\psi$; that is, we obtain a Cauchy problem ``at the Big Bang.''
\paragraph{Outlook.} There are several interesting questions which have been left open by our work, or have been opened up by it:
\begin{enumerate}
\item It would be desirable to treat the physically natural singular integral kernels also in $d=2,3$ (Eqs. \eqref{eq:inteq2d}, \eqref{eq:inteq3d}). It is a challenge to find a proof or disproof of existence and uniqueness of solutions of these equations. This likely requires a modification of our techniques.
%
\item In the present paper, the Big Bang was only taken as a reason to introduce a lower limit for the time integrals. It would be desirable to implement it in a physically natural way instead. This would mean formulating the integral equation on curved spacetimes with a Big Bang singularity, and requires, in particular, explicitly determining the Green's functions on curved spacetimes. Furthermore, we expect additional singularities of the Green's functions to appear on spacetimes with a Big Bang, as a consequence of the latter. We address this circle of questions in a subsequent paper \cite{int_eq_curved}.
%
\item Physically, it would be more natural to study the case of the Green's functions of the Dirac equation instead of the Klein-Gordon equation. While the KG equation is normally only used as a toy model, the Dirac equation describes actual elementary particles, e.g., electrons. In the Dirac case, the Green's functions become more singular (they involve $\delta'$-functions).
%
\item Finally, the case of time-symmetric Green's functions would be of great interest as the integral equation \eqref{eq:inteq} then is time reversal invariant, a property which is usually expected from fundamental physical laws. In this case, the equation does not have a Volterra structure any more, and it becomes much harder to derive existence and uniqueness results. A beginning in time alone does not simplify the problem much; one would also need an end in time. In fact, this is also a possible cosmological scenario: the Big Crunch. It would be of interest to develop existence results for this case (see \cite{int_eq_curved}).
\end{enumerate}
\paragraph{Acknowledgments.}
We would like to thank Markus N\"oth and Shadi Tahvildar-Zadeh for helpful discussions. Special thanks go to Fioralba Cakoni for valuable advice.\\[1.5mm]
\begin{minipage}{15mm}
\includegraphics[width=13mm]{flag_yellow_low.jpg}
\end{minipage}
\begin{minipage}{143mm}
This project has received funding from the European Union's Framework for Re-\\
search and Innovation Horizon 2020 (2014--2020) under the Marie Sk{\l}odowska-
\end{minipage}\\[1mm]
Curie Grant Agreement No.~705295.
Urolithiasis refers to the formation of stony concretions in the bladder or urinary tract \cite{hall2009nephrolithiasis, kasidas2004renal}. It represents a major public health issue in industrialized countries: at least 10\% of the population appears to have a kidney stone and the risk of inappropriate treatment due to an incorrect stone type identification can concern up to 40\% of patients\cite{kartha2013impact, scales2012prevalence}.
Therefore, the development of novel diagnosis and characterization tools for assisting clinicians is strongly encouraged by the urology community \cite{daudon2004clinical, estrade2017should}. Indeed, the in-vivo recognition of the type of kidney stones is an important aspect of the diagnosis, as it allows prescribing adequate and personalized treatments in order to avoid relapses \cite{friedlander2015diet, kartha2013impact, viljoen2019renal}.
The morpho-constitutional analysis (MCA) developed by Daudon et al. \cite{daudon2016comprehensive} is the reference method for the ex-vivo identification of kidney stones which were fragmented and extracted during an ureteroscopy. MCA is performed by biologists working in a laboratory, and consists of two complementary analyses. A Fourier transform infrared spectroscopy (FTIR) analysis provides the chemical composition of the kidney stone,
whereas a visual inspection of the fragment observed with a microscope allows for the description of the crystalline structure based on colors and textures \cite{corrales2021classification}. Both the FTIR analysis and a rigorous visual inspection of the fragment surface and section are required to unequivocally identify the kidney stone type.
However, fragmenting kidney stones with a laser and extracting them from the kidneys and ureters is a tedious procedure lasting between 30 and 60 minutes. Lasers can also be used to vaporize the fragments. Such dusting procedures significantly speed up ureteroscopies and diminish the infection risks, with the major drawback that MCA analyses become impossible. To overcome this issue, kidney stones can be visually identified on a screen by a few experts \cite{estrade2017should}. Becoming such an expert entails extensive training, making their incorporation into clinical practice unfeasible. Moreover, this visual kidney stone recognition by urologists is operator-dependent. AI techniques assessing endoscopic images could lead to an automated and operator-independent in-vivo recognition.
\begin{figure}[]
\centering
\includegraphics[width=0.99 \linewidth]{images/datasetb.jpeg}
\caption{Examples of kidney stone images of the used dataset \cite{el2022evaluation}. The latter consists of the six most common kidney stone types, namely whewellite (WW), weddellite (WD), uric acid (AU), struvite (STR), brushite (BRU), and cystine (CYS).} \label{fig:dataset}
\end{figure}
Despite the importance of this problem, only a few works \cite{martinez2020towards, lopez2021assessing, ochoa2022vivo} have dealt with the identification of kidney stones seen in images acquired with an ureteroscope.
However, none of these works have introduced a mechanism for fusing information (i.e., feature maps) of the section and surface views of a given kidney stone, which is what specialists do in clinical practice.
As noticeable in the two upper endoscopic image rows of Fig.~\ref{fig:dataset}, the aspect of the surface (SUR) and section (SEC) of kidney stone fragments depends on the urinary stone type. Existing methods have trained classifiers using features extracted from each image type, without taking into account the practices described by Daudon using the MCA analysis.
This contribution takes inspiration from recent works in multi-view fusion strategies \cite{geras2017high, seeland2021multi, sleeman2021multimodal}, which seek to combine characteristics from different sources or modalities to further improve
machine learning based classification models.
The aim of combining/fusing the features extracted from surface and section images is to increase the amount of discriminant information to improve the accuracy of the classification.
This approach based on feature fusion is also extended with attention mechanisms to further improve the classification performance (via feature refinement through attention).
The rest of this paper is organized as follows. Section \ref{sota} describes previous works dealing with the identification of kidney stones. Section \ref{mandm} starts with the description of the data used in this contribution. Then, this section presents a novel kidney stone classification approach based on attention and fusion, and ends with the training step of the proposed deep-learning (DL) model. Section \ref{res} discusses the obtained results, while Section \ref{conclusion} concludes the article.
\section{State-of-the-Art}
\label{sota}
The first works \cite{serrat2017mystone, martinez2020towards} dealing with the classification of kidney stones were based on shallow machine learning (SML) approaches, i.e., they used expert knowledge during the feature extraction. For instance, in \cite{martinez2020towards}, texture (local binary pattern histograms) and color (values in the hue/saturation/intensity space) information were gathered in feature vectors and treated by a random forest classifier to identify four kidney stone types. The results showed that using data from both section and surface images can lead to promising results. However, further work of these authors \cite{lopez2021assessing} has shown that SML-methods under-perform when compared with DL-methods in the context of kidney stone classification.
In recent works \cite{ochoa2022vivo, black2020deep}, the performance was effectively improved by using DL-based methods. Encouraging results showed the potential of CNNs to extract sufficiently discriminative features in surface and section images. In addition, it was also shown that training a neural network by combining information extracted from both section and surface images improves the performance of the models. Furthermore, the results in \cite{lopez2022boosting} showed that training models in different image distributions also improves the classification performance.
However, among all these solutions, not a single approach has been proposed to fuse the information extracted from images of the surface and section of urinary stone fragments.
Therefore, this contribution proposes a DL-based method that extracts and fuses information from both image types to assess whether image fusion can lead to an improvement of the classification performance in this task.
\section{Materials and methods}
\label{mandm}
\subsection{Dataset}
\label{datasets}
The dataset used in this contribution was built for kidney stone fragments whose types were determined during MCA, i.e., the data were annotated using the reference laboratory procedure \cite{daudon2016comprehensive}.
Images were acquired with an ureteroscope by placing the fragments inside a tubular shaped enclosure having a diameter and a color close to that of the ureters and their internal epithelial wall, respectively. As detailed in \cite{el2022evaluation}, although the images were acquired ex-vivo, they are quite realistic since the environment and the illumination are very close to those observed in-vivo, as the acquisitions were made with an endoscope and a light source actually used during an ureteroscopy. Table \ref{tab:dataset} shows that the dataset consists of 246 surface and 163 section images.
\begin{table}[]
\centering
\caption{Endoscopic dataset. Number of images per class. SUR and SEC views contain 1000 patches per class each, while the MIX (SUR+SEC) contains 2000 patches per class.}
\vspace{-0.15cm}
\label{tab:dataset}
\begin{tabular}{@{}ccccc@{}}
\toprule
Type & Main component & Surface & Section & MIX \\ \midrule \vspace{-0.05cm}
Ia & Whewellite (WW) & 62 & 25 & 87 \\ \vspace{-0.05cm}
IIa & Weddellite (WD) & 13 & 12 & 25 \\ \vspace{-0.05cm}
IIIa & Uric acid (AU) & 58 & 50 & 108 \\ \vspace{-0.05cm}
IVc & Struvite (STR) & 43 & 24 & 67 \\ \vspace{-0.05cm}
IVd & Brushite (BRU) & 23 & 4 & 27 \\ \vspace{-0.05cm}
Va & Cystine (CYS) & 47 & 48 & 95 \\ \cmidrule(l){2-5} \vspace{-0.05cm}
& All types & 246 & 163 & 409 \\ \bottomrule \vspace{-0.05cm}
\end{tabular}
\end{table}
As noticeable in Table \ref{tab:dataset}, rather few images are available for the six kidney stone types and the classes are imbalanced. It has been shown in previous works \cite{lopez2021assessing, martinez2020towards, ochoa2022vivo} that extracting square patches from the images, with a maximal overlap of 20 pixels and an appropriate size, allows for capturing non-redundant information, including locally representative color and texture data.
In these works, the square patch side length was a hyper-parameter whose optimal value of 256 pixels was adjusted in the test phase. In this contribution, the patch size is also $256 \times 256$ pixels (see the two last rows of Fig.~\ref{fig:dataset}). Extracting patches from the images (which are also whitened, see \cite{martinez2020towards}) and performing data augmentation are two means to increase the amount of data and to balance the classes.
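One simple way to lay out such a patch grid is a sliding window whose stride enforces the overlap constraint. The sketch below is illustrative only; the exact tiling scheme used in the cited works may differ (e.g., in how image borders are handled).

```python
def patch_origins(length, patch=256, overlap=20):
    """Top-left coordinates of patches along one image axis.

    Neighbouring patches overlap by exactly `overlap` pixels; a border
    strip smaller than one stride may remain uncovered.
    """
    stride = patch - overlap
    return list(range(0, length - patch + 1, stride))

def patch_grid(height, width, patch=256, overlap=20):
    """All (row, col) patch origins for an image of the given size."""
    return [(y, x)
            for y in patch_origins(height, patch, overlap)
            for x in patch_origins(width, patch, overlap)]
```

For a 1000-pixel axis, this yields origins 0, 236, 472, 708, i.e., four $256$-pixel patches with a 20-pixel overlap between neighbours.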
\subsection{Proposed Approach}
\textbf{Multi-View Classification. }
Multi-View (MV) classification seeks to combine characteristics from different sources (here, the image types). The accuracy of object identification increases due to the diversity of the features that are extracted from the different sources and then fused \cite{li2016multi, sleeman2021multimodal}. Thus, the performance of a DL-model can be improved by optimizing multiple functions, one per image type. MV-fusion in CNNs is of particular interest when images from a single source do not carry sufficiently discriminative information for performing an accurate classification.
\textbf{Attention. }
CNNs have demonstrated their capability to solve a variety of visual tasks, such as classification.
However, the reasoning for performing a classification task is often unintelligible (i.e., the model can be seen as a black-box), limiting the understanding of the model's inner workings. One approach to visualize and improve the representation power of CNNs lies in the use of attention layers, which are scalar matrices representing the relative importance of a given layer activation at different locations with respect to the target task \cite{jetley2018learn, woo2018cbam}. By using attention, the model can focus on the important features, while suppressing the unnecessary ones.
\begin{figure*}[]
\includegraphics[width=.95\linewidth]{images/resnet50_attn_6.png}
\centering
\vspace{-0.15cm}
\caption{Proposed Multi-View model with attention. The first part of the model corresponds to the duplicated feature extraction layers from the ResNet50 model. These layers are followed by the fusion layer, which combines information from the two views (i.e., from the two image types). The fused feature map is then connected to the classification layer.}
\label{fig:mv_figure}
\end{figure*}
\textbf{Convolutional Block Attention Module. }
A recent attempt to incorporate attention into CNNs to improve their performance was described in \cite{woo2018cbam}. The Convolutional Block Attention Module (CBAM) consists of two attention sub-modules, namely i) channel attention and ii) spatial attention, which are applied in that sequential order. Channel attention aims to focus on feature maps that are important for the learning step and enhances their relevance. On the other hand, spatial attention attempts to learn more discriminant points in the feature maps.
Combining both attention sub-modules has been demonstrated to yield an improvement in classification performance.
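Concretely, for an intermediate feature map $F$, CBAM computes a refined map $F''$ as follows (notation of \cite{woo2018cbam}; $\sigma$ denotes the sigmoid, $f^{7\times 7}$ a convolution with a $7\times 7$ kernel, and $\otimes$ element-wise multiplication with broadcasting):

```latex
\begin{align*}
M_c(F) &= \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big),
& F' &= M_c(F) \otimes F,\\
M_s(F') &= \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F');\mathrm{MaxPool}(F')])\big),
& F'' &= M_s(F') \otimes F'.
\end{align*}
```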
\textbf{Proposed Model. }
This contribution uses a ResNet50-based model pre-trained with ImageNet for performing the classification of the kidney stone types listed in Table \ref{tab:dataset}. In a preliminary experiment, it was observed that initializing the network on ImageNet improves the classification results, even if the distribution of this natural image dataset differs from that of the endoscopic image dataset \cite{lopez2022boosting}.
Then, an additional attention module consisting of the sequential application of channel and spatial attention layers was added at the end of each convolutional block, as proposed in \cite{woo2018cbam}. For ResNet50, a total of 16 attention layers were added (see Fig.~\ref{fig:mv_figure}).
Finally, the baseline model and the modified version with attention were used to train the MV fusion models without and with attention, respectively. To fuse the features, two late-fusion strategies are explored: feature concatenation, and max-pooling of the individual features obtained from each view \cite{sleeman2021multimodal, seeland2021multi}.
ResNet50 was selected as the base model since this architecture produced the best performance on the three different views from the state-of-the-art (surface, section, and both image types mixed) \cite{lopez2021assessing, lopez2022boosting}. For this model, CBAM blocks were added at the end of every convolutional block, as suggested in \cite{woo2018cbam}. The purpose of having attention added at multiple points in the network is to get subsequent refined feature maps from the intermediate feature maps.
The selected fusion mechanism is late fusion, in which the feature extraction and the learning are done independently before the final classification. The two approaches used for fusing the data either concatenate the deep features or merge them by applying max-pooling. Finally, these fused features are used for the classification task.
Moreover, we aim to create a model that simulates how MCA is carried out, by combining information of surface and section views in the same model, which is closer to what specialists do in clinical practice.
\subsection{Training}
\label{training}
\textbf{Single-view Model.}
The base model was evaluated in three scenarios: using only surface patches, using only section patches, or combining both views.
The model was trained for 30 epochs using a batch size of 32, along with the Adam optimizer with a learning rate of $2e^{-4}$. Finally, the representations obtained from the model are passed to fully connected layers with 512, 256, and 6 neurons, respectively, with ReLU as activation function, batch normalization, and a dropout probability of 0.5.
The mixed views are used to train a base model for the creation of the MV model.
\textbf{Multi-View Model.}
For the training of the MV-fusion model, the feature extraction layers from the single-view model are frozen and then duplicated. One head processes the patches of the section view, whereas the second head treats the surface patches.
These layers are followed by the fusion layer, which mixes the information of both views. The first fusion strategy consists of a stack of feature vectors on which max-pooling is applied.
The second fusion method concatenates the features obtained by each view.
Finally, the resulting representations are connected to a sequence of FC layers.
The proposed model is shown in Fig. \ref{fig:mv_figure}. Since feature extraction layers are frozen, any difference in the performance lies on the fusion mechanism and the FC layers.
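On a pair of per-view feature vectors, the two fusion operations reduce to the following (an illustrative sketch using plain Python lists; in the network, they act on the deep feature tensors produced by the two heads):

```python
def fuse_max(u, v):
    """Max-pooling fusion: element-wise maximum over the stacked
    surface and section features; the feature dimension is preserved."""
    return [max(a, b) for a, b in zip(u, v)]

def fuse_concat(u, v):
    """Concatenation fusion: the feature dimension doubles."""
    return list(u) + list(v)
```

Max-pooling keeps, for each feature, the strongest response of either view, while concatenation preserves both responses at the cost of a wider classification layer.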
\begin{table*}[t!]
\centering
\caption{Mean $\pm$ standard deviation assessed for four quality criteria. Each model was executed five times.}
\vspace{-0.15cm}
\label{tab:results1}
\begin{tabular}{@{}cccccl@{}}
\toprule \vspace{-0.03cm}
View & Accuracy & Precision & Recall & F1-score & \multicolumn{1}{c}{Model description} \\ \midrule \vspace{-0.03cm}
\multirow{2}{*}{Surface} & 0.856 $\pm$ 0.030 & 0.872 $\pm$ 0.022 & 0.856 $\pm$ 0.030 & 0.858 $\pm$ 0.034 & Base model \\ \vspace{-0.03cm}
& 0.888 $\pm$ 0.028 & 0.896 $\pm$ 0.024 & 0.886 $\pm$ 0.026 & 0.886 $\pm$ 0.026 & Base model + Attention \\ \midrule \vspace{-0.03cm}
\multirow{2}{*}{Section} & 0.836 $\pm$ 0.038 & 0.876 $\pm$ 0.015 & 0.838 $\pm$ 0.039 & 0.830 $\pm$ 0.040 & Base model \\ \vspace{-0.03cm}
& 0.844 $\pm$ 0.059 & 0.904 $\pm$ 0.023 & 0.844 $\pm$ 0.059 & 0.838 $\pm$ 0.068 & Base model + Attention \\ \midrule \vspace{-0.03cm}
\multirow{6}{*}{SUR+SEC} & 0.826 $\pm$ 0.027 & 0.846 $\pm$ 0.030 & 0.826 $\pm$ 0.027 & 0.824 $\pm$ 0.029 & Base model \\ \vspace{-0.03cm}
& \textbf{0.902 $\pm$ 0.014} &\textbf{ 0.910 $\pm$ 0.014} & \textbf{0.902 $\pm$ 0.015} & \textbf{0.902 $\pm$ 0.019} & Base model + Attention \\ \cmidrule(l){2-6} \vspace{-0.03cm}
& 0.828 $\pm$ 0.039 & 0.844 $\pm$ 0.036 & 0.828 $\pm$ 0.039 & 0.812 $\pm$ 0.049 & MV model (max-pooling) \\ \vspace{-0.03cm}
& \textbf{0.966 $\pm$ 0.005} & \textbf{0.968 $\pm$ 0.008} & \textbf{0.966 $\pm$ 0.005} & \textbf{0.962 $\pm$ 0.008} & MV model (max-pooling) + Attention \\ \cmidrule(l){2-6} \vspace{-0.03cm}
& 0.855 $\pm$ 0.036 & 0.870 $\pm$ 0.030 & 0.850 $\pm$ 0.035 & 0.841 $\pm$ 0.040 & MV model (concatenation) \\ \vspace{-0.03cm}
& \textbf{0.969 $\pm$ 0.004} & \textbf{0.980 $\pm$ 0.010} & \textbf{0.971 $\pm$ 0.004} & \textbf{0.970 $\pm$ 0.010} & MV model (concatenation) + Attention \\ \bottomrule \vspace{-0.03cm}
\end{tabular}
\end{table*}
\begin{figure*}[]
\centering
\includegraphics[width=0.97\linewidth]{images/figura_umaps4.png}
\vspace{-0.15cm}
\caption{UMAP visualizations of the features extracted by the models. (a) No-attention mixed model (Mixed Base model), (b) Mixed Base model + Attention, and (c) MV model (max pool) + Attention. See \ref{tab:results1} for more details about the trained models.}
\label{fig:umap}
\end{figure*}
\begin{table}[]
\centering
\caption{Comparison of the performances of the models studied in this contribution (see the accuracy column in Table \ref{tab:results1}) with the model accuracy of the state-of-the-art.}
\vspace{-0.15cm}
\label{tab:results_sota}
\begin{tabular}{@{}cccc@{}}
\toprule \vspace{-0.03cm}
Method & Surface & Section & SUR+SEC \\ \midrule \vspace{-0.025cm}
Black, et al. \cite{black2020deep} & 0.735$\pm$0.190 & 0.888$\pm$0.028 & 0.801$\pm$0.138 \\\vspace{-0.025cm}
Estrade, et al. \cite{estrade2022towards} & 0.737$\pm$0.179 & 0.788$\pm$0.106 & 0.701$\pm$0.223 \\ \vspace{-0.025cm}
Lopez, et al. \cite{lopez2021assessing} &0.810$\pm$0.030 &0.880$\pm$0.023 &0.850$\pm$0.030 \\ \vspace{-0.025cm}
Lopez, et al. \cite{lopez2022boosting} & 0.832$\pm$0.012 & 0.904$\pm$0.048 & 0.856$\pm$0.001 \\ \vspace{-0.025cm}
\textbf{This proposal} & \textbf{0.888$\pm$0.028} & \textbf{0.844$\pm$0.060} & \textbf{0.966$\pm$0.005} \\ \bottomrule \vspace{-0.025cm}
\end{tabular}
\end{table}
\section{Results and discussion}
\label{res}
The three patch sets (SUR, SEC and SUR+SEC) in Table \ref{tab:dataset} were separately used to assess the impact of the attention mechanisms on the recognition of kidney stones seen in endoscopic images.
An additional experiment was done to assess the incorporation of attention in an MV-scheme and to evaluate the effects of training the network on mixed data. It has been reported that the combination of different views generates valuable features for a classification using DL networks \cite{lopez2021assessing, lopez2022boosting, ochoa2022vivo}.
In the experiment with mixed views, 9600 and 2400 patches were used for the training and the testing phases, respectively. The accuracy, precision, recall and F1-score metrics were used to assess the model performances. The results of the experiments are shown in Table \ref{tab:results1}.
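For reference, the per-class precision, recall, and F1-score follow from the confusion-matrix counts of true positives (tp), false positives (fp), and false negatives (fn); this is a standard computation, not specific to this paper:

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The values reported in Table \ref{tab:results1} are averaged over the six classes and over the five runs of each model.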
\subsection{Single-view Model}
\textbf{Surface patch results. }
As seen in Table \ref{tab:results1}, the overall accuracy after training the base model using only the weights transferred from ImageNet (i.e., without attention layers) is 0.856$\pm$0.030.
It is noticeable in Table \ref{tab:results1} that adding attention layers to the base model increased the accuracy by 3\%, leading to a value of 0.888$\pm$0.028 for this criterion.
\textbf{Section patch results.}
The base model without attention led to an accuracy of 0.836$\pm$0.038 for section data. This accuracy reached a value of 0.844$\pm$0.059 by applying attention to the baseline model.
This very small increase over the baseline performance is probably due to the fact that the stronger textures present in section data benefit less from attention layers focusing on such features.
\textbf{Mixed (SUR+SEC) patch results. }
The model with attention for mixed views shows promising results (accuracy of 0.902$\pm$0.014) compared to the model without attention (0.826$\pm$0.027). An overall increase of close to 8\% is achieved for all our metrics when adding attention layers to the base model.
Similar performance improvements were also observed in \cite{lopez2021assessing,lopez2022boosting, ochoa2022vivo} when features of both surface and section patches are used in a single training step.
The baseline + attention model achieves an increase of 4\% in terms of accuracy in comparison to \cite{lopez2022boosting}. The feature extraction part of this model was used as the backbone in the MV-experiment.
\subsection{Multi-view Model}
The feature extraction layers from the models trained with the mixed dataset are duplicated and used to extract information from both the SUR and SEC views.
The two models combining the attention module with a fusion strategy yielded the two best performances in this work, obtaining an overall accuracy of 0.966$\pm$0.005 and 0.969$\pm$0.004 for the max-pooling and the concatenation strategy, respectively.
In addition, the distribution of the features by stone type also improves when attention is added.
Despite this, it can be observed in Fig. \ref{fig:umap}.(a) and \ref{fig:umap}.(b) that the clusters (corresponding to urinary stone types) are scattered, elongated or fragmented in the three-dimensional UMAP feature space. By combining the information from different views with the MV-model and attention layers (see Fig. \ref{fig:umap}.(c)), the inter-class distances are increased, while the intra-class distances are reduced. These tighter clusters of points in the feature space facilitate the classification task.
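The cluster quality visible in the UMAP plots can be quantified, for instance, by comparing the mean distance of each point to its own-class centroid (intra-class) against the mean distance between class centroids (inter-class). The helper below is a hypothetical 2-D sketch, not part of the paper's pipeline:

```python
import math

def class_centroids(points, labels):
    """Mean 2-D point of every class."""
    sums, counts = {}, {}
    for (x, y), c in zip(points, labels):
        sx, sy = sums.get(c, (0.0, 0.0))
        sums[c] = (sx + x, sy + y)
        counts[c] = counts.get(c, 0) + 1
    return {c: (sx / counts[c], sy / counts[c]) for c, (sx, sy) in sums.items()}

def intra_inter(points, labels):
    """Mean distance to the own-class centroid (intra) and mean pairwise
    distance between class centroids (inter).  Tight, well-separated
    clusters give a small intra and a large inter value."""
    cents = class_centroids(points, labels)
    intra = [math.dist(p, cents[c]) for p, c in zip(points, labels)]
    cs = list(cents.values())
    inter = [math.dist(cs[i], cs[j])
             for i in range(len(cs)) for j in range(i + 1, len(cs))]
    return sum(intra) / len(intra), sum(inter) / len(inter)
```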
\section{Conclusions}
\label{conclusion}
The results given in this contribution demonstrate that the classification results of six different types of kidney stones can be improved by the insertion of attention mechanisms in CNN models, and that MV-schemes are also boosted by this addition.
The experiments also show that the feature distribution by stone type is enhanced by including several attention blocks along the network as the learned features are improved, leading to larger inter-class distances and smaller intra-class distances in the feature space.
\section*{Acknowledgments}
The authors wish to thank the AI Hub and the CIIOT at Tecnologico de Monterrey for their support in carrying out the experiments reported in this paper on their NVIDIA DGX computer.
We also thank CONACYT for the master's scholarship of Elias Alejandro Villalvazo-Avila and the doctoral scholarship of Francisco Lopez-Tiro at Tecnologico de Monterrey.
\section*{Compliance with ethical approval}
The images were captured in medical procedures following the ethical principles outlined in the Helsinki Declaration of 1975, as revised in 2000, with the consent of the patients.
\section{For every submission}
\subsection{Did you discuss the \textit{limitations} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{mainClaims}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{mainClaimsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss any potential \textit{risks} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{risks}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{risksJustification}
\end{tabular}
\end{Form}
\subsection{Do the abstract and introduction summarize the paper’s main claims?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{abstractIntro}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{abstractIntroJustification}
\end{tabular}
\end{Form}
\section{Did you use or create \textit{scientific artifacts}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{createArtifacts}{Yes,No}\\[0.2cm]
\end{tabular}
\end{Form}
If yes:
\subsection{Did you cite the creators of artifacts you used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{citeCreators}{Yes,No,N/A}\\[0.2cm]
\tf{citeCreatorsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the \textit{license or terms} for use and/or distribution of any artifacts?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{legalGrounds}{Yes,No,N/A}\\[0.2cm]
\tf{legalGroundsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss if your use of existing artifact(s) was consistent with their \textit{intended use}, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{intendedUse}{Yes,No,N/A}\\[0.2cm]
\tf{intendedUseJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the steps taken to check whether the data that was collected/used contains any \textit{information that names or uniquely identifies individual people} or \textit{offensive content}, and the steps taken to protect / anonymize it?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{personallyIdentifiableInformationOrOffensiveContent}{Yes,No,N/A}\\[0.2cm]
\tf{personallyIdentifiableInformationOrOffensiveContentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{documentation}{Yes,No,N/A}\\[0.2cm]
\tf{documentationJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{relevantStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{relevantStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you run \textit{computational experiments}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{computationalExperiments}{Yes,No}
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the \textit{number of parameters} in the models used, the \textit{total computational budget} (e.g., GPU hours), and \textit{computing infrastructure} used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{reportReproducibility}{Yes,No,N/A}\\[0.2cm]
\tf{reportReproducibilityJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the experimental setup, including \textit{hyperparameter search} and \textit{best-found hyperparameter} values?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{bestFoundHyperparameter}{Yes,No,N/A}\\[0.2cm]
\tf{bestFoundHyperparameterJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report \textit{descriptive statistics} about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{descriptiveStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{descriptiveStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{existingPackages}{Yes,No,N/A}\\[0.2cm]
\tf{existingPackagesJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you use \textit{human annotators} (e.g., crowdworkers) or \textit{research with human subjects}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{hummanAnnotators}{Yes,No}\\
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{fullTextInstructions}{Yes,No,N/A}\\[0.2cm]
\tf{fullTextInstructionsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such \textit{payment is adequate} given the participants’ demographic (e.g., country of residence)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{payment}{Yes,No,N/A}\\[0.2cm]
\tf{paymentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss whether and how \textit{consent} was obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{consent}{Yes,No,N/A}\\[0.2cm]
\tf{consentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Was the data collection protocol \textit{approved (or determined exempt)} by an ethics review board?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{ethicsAmountSpent}{Yes,No,N/A}\\[0.2cm]
\tf{ethicsAmountSpentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report the basic demographic and geographic characteristics of the \textit{annotator} population that is the source of the data?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{annotator}{Yes,No,N/A}\\[0.2cm]
\tf{annotatorJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\end{document}
\section{Evaluations}
We evaluate on two datasets: QuoRef~\cite{dasigi-etal-2021-dataset} and the version of SQuAD~\cite{rajpurkar-etal-2016-squad} from the MRQA shared task~\cite{fisch-etal-2019-mrqa}.
For each dataset, we run PaLM on the training data to produce silver annotations of the markup-and-mask rationales, as described above.
The decontextualization step is autoregressive, in the sense that the decontextualization for sentence $t$ is part of the prompt for decontextualizing sentence $t+1$. This makes it difficult to use the more efficient bulk inference procedure that we apply in the other parts of the prompt chain. For this reason, we use only a fraction of the SQuAD training data (12000 questions). We then use PaLM's output as annotations to fine-tune multitask sequence-to-sequence models built on pretrained mT5 backbones~\cite{xue-etal-2021-mt5}.
The results that follow are based on the mT5-XXL backbone. Comparisons across model scales are shown in \Cref{fig:results-overall-f1}.
\subsection{Accuracy}
\ifneurips
\begin{SCtable*}[]
\ifshortversion
\footnotesize
\fi
\centering
\input{tables/accuracy_table_xxl}
\caption{Overall exact match / \fm on open-book question answering. The \emph{end-to-end} system predicts the answer directly from the passage; the \emph{markup+mask} system predicts the answer from a rationale that includes both masking and markup; the \emph{mask-only} system uses a rationale based only on masking the original unmarked text; \emph{PaLM in-context} refers to the teacher model, which uses in-context learning only.
}
\label{tab:overall-accuracy}
\end{SCtable*}
\else
\begin{table}[]
\ifshortversion
\footnotesize
\fi
\centering
\input{tables/accuracy_table_xxl}
\caption{Overall exact match / \fm on open-book question answering. The \emph{end-to-end} system predicts the answer directly from the passage; the \emph{markup+mask} system predicts the answer from a rationale that includes both masking and markup; the \emph{mask-only} system uses a rationale based only on masking the original unmarked text; \emph{PaLM in-context} refers to the teacher model, which uses in-context learning only.
}
\label{tab:overall-accuracy}
\end{table}
\fi
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/scaling_plot}
\caption{Overall \fm results by student model size, for each configuration. The teacher model \fm is shown with the dotted horizontal line.}
\label{fig:results-overall-f1}
\end{figure*}
\Cref{tab:overall-accuracy} shows the overall performance of the student model, an end-to-end equivalent, and a masking-only ablation. On the SQuAD dataset, performance is similar across all model variants, showing that it is possible to derive causal rationales for SQuAD answers with only a minimal impact on accuracy. In contrast, prior work has found that previous unsupervised techniques for constructing rationales~\cite{paranjape-etal-2020-information, guerreiro-martins-2021-spectra} decreased performance by 10-20 \fm on SQuAD~\cite{chen2022can}. The pipeline method suffers a significant reduction in accuracy on QuoRef, which, as discussed below, is particularly resistant to rationale-based approaches. However, this is mitigated by the use of decontextualizing markup, reducing the gap between the end-to-end predictor and the mask-based rationales by almost half.
\ifshortversion
\else
\begin{SCtable*}[]
\footnotesize
\centering
\input{tables/selective_accuracy_table_xxl}
\caption{Evaluation of selective prediction for the XXL-based models. Answers from the end-to-end predictor are distinguished by whether they agree with the answer provided by the honest student pipeline. For example, the top row shows that on SQuAD, the predictors agree on 86.8\% of examples, receiving an \fm of 95.3 on this subset.}
\label{tab:explanation-as-confidence}
\end{SCtable*}
\fi
\paragraph{Selective prediction.} The availability of a step-by-step explanation can serve as a coarse form of calibration: examples for which explanations are available may be more likely to be accurately predicted. To test this idea, we perform an evaluation of \emph{selective prediction}, in which answers are produced only when they are likely to be correct~\cite{rodriguez2019quizbowl,kamath-etal-2020-selective}. Specifically, we compare accuracy on examples where the end-to-end model and the rationale-based pipeline agree and disagree.\footnote{The link between explanations and calibration was explored by \citet{ye2022unreliability}, who work in a chain-of-thought prompting framework. They show that when the chain-of-thought explanation is consistent with the passage, the answer is more likely to be correct.} As shown in \Cref{tab:explanation-as-confidence}, rationalizable answers are significantly more accurate. The \fm for rationalizable answers is more than 20 points higher than for non-rationalizable answers on both datasets, and the gap in exact match is even larger. Furthermore, most answers are rationalizable in this way. The markup-and-mask rationales play an important role in selective prediction on the QuoRef dataset, where they increase the fraction of rationalizable answers from 58\% to 74\%, while enlarging the \fm gap from 13.0 to 22.1. However, on the QuoRef dataset, a better coverage-accuracy tradeoff can be obtained by thresholding on the predictive probability of the end-to-end model; on SQuAD, the tradeoff is almost identical.
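The agreement-based selective prediction analysis above can be sketched as follows. This is an illustrative Python sketch under our own assumptions (triples of answer strings, exact string match as the notion of agreement, whitespace-free SQuAD-style token F1), not the paper's exact implementation:

```python
import re
from collections import Counter

def token_f1(pred, gold):
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    p = re.findall(r"\w+", pred.lower())
    g = re.findall(r"\w+", gold.lower())
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def selective_report(examples):
    """Partition (end_to_end, pipeline, gold) answer triples by whether the
    two predictors agree, and report coverage plus mean F1 on each subset."""
    same = lambda a, b: a.strip().lower() == b.strip().lower()
    agree = [(e, g) for e, p, g in examples if same(e, p)]
    disagree = [(e, g) for e, p, g in examples if not same(e, p)]
    mean_f1 = lambda pairs: sum(token_f1(e, g) for e, g in pairs) / max(len(pairs), 1)
    return len(agree) / len(examples), mean_f1(agree), mean_f1(disagree)
```

A higher mean F1 on the agreement subset corresponds to the "rationalizable answers are more accurate" finding.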
\subsection{Rationales}
\label{sec:eval-masks}
\begin{SCfigure*}[]
\centering
\includegraphics[width=0.7\textwidth]{figs/entailment_pointplot_short.pdf}
\caption{\protect\rule{0ex}{3ex}Consistency of rationales, as measured by the frequency with which the rationale entails a linearization of the question and the predicted answer.}
\label{fig:rationale-consistency}
\end{SCfigure*}
To test how often rationales are consistent with the answers, we apply natural language inference (NLI). Specifically, we ask a strong NLI system whether the rationale entails the linearization, ``The answer to "[question]" is "[predicted-answer]"''. This style of evaluation has been applied to other tasks involving factual consistency, such as summarization and fact verification~\cite{honovich-etal-2022-true-evaluating}. We use a very similar NLI system, trained by fine-tuning t5-XXL on multiple NLI datasets (MNLI, SNLI, FEVER, PAWS, SciTail, and VitaminC). As shown in \Cref{fig:rationale-consistency}, the rationales produced by the pipeline student models are significantly more consistent than the chain-of-thought rationales produced by the teacher model, justifying the ``honest student'' moniker. On the QuoRef dataset, 64\% of the rationales produced by the student model (with markup) entail that model's predicted answers, versus 47\% for the teacher model with markup, and 36\% without. On the SQuAD dataset, the student model achieves 81\% consistency, versus 76\% for the teacher model (75.5\% without markup).\ifshortversion%
\else
\footnote{As a robustness check, we shuffled the rationales to compute how often the classifier predicted entailment for unrelated rationales and predictions. For all rationale groups, the predicted entailment rate was less than 1\%.}
\fi
The markup also improves the consistency of the student model by 26\% on QuoRef and 1\% on SQuAD. It is particularly notable that markup improves the entailment rate despite the fact that the NLI system is trained on data that does not contain any markup.
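The NLI-based consistency metric reduces to a simple recipe: linearize the question and predicted answer into a hypothesis, then ask an entailment classifier whether the rationale (as premise) entails it. A minimal sketch, with the NLI model left as a placeholder predicate:

```python
def linearize(question, answer):
    """Hypothesis used for the consistency check: a consistent rationale
    should entail this statement."""
    return f'The answer to "{question}" is "{answer}"'

def consistency_rate(items, nli_entails):
    """Fraction of (rationale, question, answer) triples whose rationale
    entails the linearized answer. `nli_entails(premise, hypothesis) -> bool`
    is a stand-in for a fine-tuned NLI classifier."""
    hits = sum(1 for rationale, q, a in items
               if nli_entails(rationale, linearize(q, a)))
    return hits / max(len(items), 1)
```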
\paragraph{Extractiveness and compression.}
A rationale is deemed \emph{extractive} when it appears as a contiguous substring in the marked-up passage, ignoring case, punctuation characters, and whitespace.
Extractiveness is desirable because it means that the rationales are directly grounded in the passage, similar to the notion of ``verified quotes'' proposed by~\citet{menick2022teaching}.
In QuoRef, the student model rationales were extractive for 92.3\% of passages; in SQuAD, 90.6\%.
These rationales yielded 7.9x compression in QuoRef and 4.5x compression in SQuAD. \Cref{tab:rationale-statistics} shows statistics of the markup and rationales.
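The extractiveness and compression statistics can be computed with a sketch like the following (whitespace tokens stand in for the SentencePiece tokens used in the paper, and the normalization is our reading of the case/punctuation/whitespace-insensitive substring test):

```python
import re

def normalize(text):
    """Lowercase and drop everything except letters and digits, so the
    substring test ignores case, punctuation, and whitespace."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def is_extractive(rationale, marked_passage):
    """True if the rationale is a contiguous substring of the marked-up
    passage after normalization."""
    return normalize(rationale) in normalize(marked_passage)

def compression_ratio(passage, rationale):
    """Passage-to-rationale length ratio; whitespace tokens here stand in
    for the SentencePiece tokens used in the paper."""
    return len(passage.split()) / max(len(rationale.split()), 1)
```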
\ifshortversion
\else
\paragraph{Stress test.} Can we view the markup-and-mask rationales as a faithful explanation of the reasoning process that produced the answer in the honest student~\citep{jacovi-goldberg-2020-towards}? While the honest student does not have access to the full passage, it may rely on knowledge obtained during pretraining. As a robustness check, we created an entity-perturbed version of the SQuAD dataset, similar to \citet{longpre-etal-2021-entity} and \citet{yan-etal-2022-robustness}, in which entity names were automatically substituted in the passages and gold answers. Substitutions were performed by running a named entity recognizer and replacing names that appear in the answer and passage with names of other entities of the same broad class, e.g., \say{Winston Churchill} $\to$ \say{Patti Smith}, \say{AT\&T} $\to$ \say{the New York Knicks}.
As shown in \cref{tab:perturb-results}, all models are approximately 3-4 \fm points worse than on the original evaluation set, with comparable exact match.
Note that in some cases these perturbations affect the grammaticality of the passage, making the task more difficult for reasons that do not relate to the fidelity of the explanations.
Overall these results suggest that the predictors mainly relied on the passage and not on knowledge obtained during pretraining.
\begin{table}
\centering
\footnotesize
\begin{tabular}{ll}
\toprule
\textbf{ } & \textbf{EM / \fm}\\
\midrule
End-to-end & 83.7 / 89.3 \\
Markup+mask & 81.5 / 87.4 \\
Mask-only & 81.5 / 87.0 \\
\bottomrule
\end{tabular}
\caption{Performance of the XXL-based student model on the SQuAD challenge set with entity perturbations.}
\label{tab:perturb-results}
\end{table}
\fi
\subsection{Decontextualizing markup}
\label{sec:eval-markup}
To measure the accuracy of the decontextualizing markup, we apply the prompt-based teacher and the fine-tuned student models to a manually decontextualized dataset, in which references are replaced inline rather than annotated with markup~\cite{choi-etal-2021-decontextualization}. Results are shown in \Cref{tab:decontext-results}. Both the student and teacher models exceed the reported results for a T5-base model that was fine-tuned on 11,290 in-domain examples of the decontextualization task. This shows that it is possible to learn to perform the task reasonably well from just five labeled examples, and that distillation improves performance further. Our models produce a different style of decontextualization from the test data, so it is possible that these results could be further improved.
\paragraph{Well-formedness.} We treat markup as a free-text generation task, with no constrained decoding. As a result, the markup may not be well-formed: the removal of markup may not yield a passage that is character-wise identical to the original passage (case-insensitive). However, both the student and teacher models usually produce well-formed markup. For more than 96\% of sentences in the QuoRef eval set, the decontextualization phase of the student model leaves the original text unaffected, as intended, and in 73\% of passages, all markup was well formed. In SQuAD, the markup was well formed in 96\% of sentences and in 85\% of full passages. The difference at the passage level is mainly due to the greater length of the QuoRef passages (see \Cref{tab:rationale-statistics}).
The teacher model markup was slightly less well-formed: on both the SQuAD and QuoRef datasets, approximately 94\% of the teacher model's sentence decontextualizations were well formed. This indicates that the language model can learn the format of the markup task from the five in-context examples. Most of the errors were minor, such as omission of sentence-final punctuation and the erroneous movement of text from the original into markup, e.g. \say{As a schoolboy Saint-Sa\"ens was outstanding} $\to$
\say{As a schoolboy [Charles-Camille Saint-Sa\"ens] was outstanding}. More serious errors, such as incorrectly-formatted markup and deletion of significant original content, occurred very rarely.
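The well-formedness check itself is mechanical. A sketch, assuming the square-bracket convention for decontextualization spans (the exact bracket handling and whitespace tidying are our assumptions):

```python
import re

def strip_markup(marked):
    """Remove bracketed decontextualization spans such as
    "[Charles-Camille Saint-Saens]" along with any leading space,
    then collapse repeated whitespace."""
    stripped = re.sub(r"\s*\[[^\]]*\]", "", marked)
    return re.sub(r"\s+", " ", stripped).strip()

def is_well_formed(marked, original):
    """Markup is well formed if removing it recovers the original text,
    compared case-insensitively."""
    return strip_markup(marked).lower() == \
        re.sub(r"\s+", " ", original).strip().lower()
```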
\paragraph{Amount of markup.} On the QuoRef dataset, the decontextualization model added 2.0 markup spans per sentence, with an average length of 5.3 SentencePiece tokens per span (31.6 per document). This almost exactly matches the behavior of the teacher model, which added 2.1 spans, with 5.8 SentencePiece tokens per span (median=4).
On the SQuAD dataset, there were fewer opportunities for decontextualization: the teacher model added 0.9 markup spans per sentence, with 6.1 tokens per span. The student model also added 0.9 spans per sentence (4.8 per document), with 5.6 tokens per span (median=4).
\ifshortversion
\else
\begin{table}
\centering
\footnotesize
\input{tables/decontext_sari}
\end{table}
\fi
\ifneurips
\begin{SCtable*}[]
\centering
\footnotesize
\input{tables/rationale-statistics-xxl}
\caption{Passage-level statistics of the rationales produced by the XXL-based models. Passage length and rationale length are computed in number of SentencePiece tokens. For more details on the other statistics, see Sections~\ref{sec:eval-masks} and \ref{sec:eval-markup}.}
\label{tab:rationale-statistics}
\end{SCtable*}
\else
\begin{table}[]
\centering
\footnotesize
\input{tables/rationale-statistics-xxl}
\caption{Passage-level statistics of the rationales produced by the XXL-based models. Passage length and rationale length are computed in number of SentencePiece tokens. For more details on the other statistics, see Sections~\ref{sec:eval-masks} and \ref{sec:eval-markup}.}
\label{tab:rationale-statistics}
\end{table}
\fi
\ifshortversion
\else
\subsection{Error analysis}
\input{content/eval-error-analysis}
\fi
\section{Generating Markup-and-Mask Annotations}
\label{sec:prompts-to-annotations}
Our goal is to fine-tune a student model to produce markup-and-mask rationales. Lacking labeled examples, we obtain silver annotations by applying three distinct prompting patterns to the pretrained language model PaLM~\cite{chowdhery2022palm} (540-billion parameter version), which we refer to as the \emph{teacher model}. Each prompt combines passages and questions from open-book question answering datasets, along with the outputs of previous prompts, in an approach that has been called \emph{prompt chaining}~\cite{wu2022ai}. There are three steps to the silver annotation process: (1) decontextualization; (2) chain-of-thought question answering; (3) rationale validation. The prompt chain is shown in \Cref{fig:prompt-chain}.
\ifshortversion
\else
\input{figs/decontext-example}
\fi
\paragraph{Decontextualization.}
The goal of the decontextualization step is to add free-text markup of the style shown in \cref{fig:main-example}. Decontextualization examples are linearized as \texttt{Context: \dots \ Passage: \dots \ Rewrite:}, with the language model prompted to complete the rewrite. An example is shown in \Cref{fig:decontext-linearization-example}. We use a hand-crafted prompt with five examples, shown in \cref{app:decontext-prompt}. We proceed incrementally through the document, decontextualizing each sentence using the previous $k$ decontextualized sentences as context. This enables information to propagate through the document.
The capabilities and limitations of this approach are highlighted in \Cref{fig:decontext-rokeby-venus}, which shows some typical outputs. The markup resolves pronominal references \say{she} and \say{her} and the nominal references \say{this painting} and \say{this phenomenon}. Perhaps most impressively, the elliptical expression \say{despite this} is decontextualized with the markup \say{[the fact that nudes were extremely rare\ldots]}. However, by the end of the document, we have lost track of the first name of the artist, so that \say{the artist} is decontextualized as only \say{[Velázquez]}, rather than with the full name. Future work may address this issue by exploring more sophisticated strategies than simple autoregressive decontextualization.
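The incremental procedure above can be sketched as a short loop; the language model call is left as a placeholder, since the paper's version is a few-shot-prompted PaLM:

```python
def decontextualize_passage(sentences, lm_rewrite, k=3):
    """Autoregressive decontextualization: each sentence is rewritten with
    the previous k *rewritten* sentences as context, so resolved references
    propagate forward. `lm_rewrite(context, sentence) -> marked sentence`
    stands in for the few-shot-prompted language model."""
    rewritten = []
    for sentence in sentences:
        context = " ".join(rewritten[-k:])
        rewritten.append(lm_rewrite(context, sentence))
    return rewritten
```

Because the context window holds only the last $k$ rewritten sentences, information introduced early in the document can eventually fall out of scope, which is exactly the failure mode seen with the artist's first name.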
\paragraph{Chain-of-thought question answering.}
In chain-of-thought prompting, the language model is asked to first generate a rationale before producing an answer~\cite{wei2022chain}. For open-book question answering, we take the rationale to be a sentence that is extracted from the passage and which contains the answer, as shown in \Cref{fig:cot-qa-example}.
We construct question-specific few-shot prompts by concatenating several exemplars in which a question, passage, rationale, and answer are shown, before providing the question and passage for the instance to be predicted. The exemplars are drawn from the training set, selecting questions with the highest BM25 similarity to the target question~\cite{robertson2009probabilistic}. Exemplars are added until we reach a limit of 1024 sentencepiece tokens in the prompt~\cite{kudo-richardson-2018-sentencepiece}; for the QuoRef dataset, this amounts to two or three exemplars in most cases.
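The budgeted exemplar selection can be sketched as follows; `similarity` and `count_tokens` are placeholders for the BM25 scorer and SentencePiece tokenizer named above, and the dictionary fields are illustrative:

```python
def select_exemplars(question, train_set, similarity, count_tokens, budget=1024):
    """Rank training exemplars by similarity to the target question and add
    them to the prompt until the token budget is hit. `similarity` and
    `count_tokens` stand in for BM25 scoring and SentencePiece tokenization."""
    ranked = sorted(train_set,
                    key=lambda ex: similarity(question, ex["question"]),
                    reverse=True)
    chosen, used = [], 0
    for ex in ranked:
        cost = count_tokens(ex["text"])
        if used + cost > budget:
            break
        chosen.append(ex)
        used += cost
    return chosen
```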
\ifshortversion
\else
\input{figs/decontext-linearization-example}
\fi
To generate the rationales in the exemplars, we enumerate all sentences in the passage that contain an exact match to the answer and select the one with the highest BM25 similarity to the exemplar's question. Each sentence is considered in both its original surface form and with decontextualizing markup. If no sentence contains an exact match to the answer, then the question is not included as an exemplar. However, prompts are constructed for all training set examples, even when no rationale can be extracted using this heuristic.
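This exemplar-rationale heuristic is a one-liner in spirit. A sketch, with the BM25 scorer replaced by a placeholder `similarity` function:

```python
def extract_silver_rationale(question, sentences, answer, similarity):
    """Among sentences containing the answer verbatim, pick the one most
    similar to the question (BM25 in the paper; `similarity` is a
    placeholder). Returns None when no sentence contains the answer, in
    which case the question is skipped as an exemplar."""
    candidates = [s for s in sentences if answer in s]
    if not candidates:
        return None
    return max(candidates, key=lambda s: similarity(question, s))
```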
\paragraph{Rationale validation.}
Finally, to validate the rationales that were generated in the chain-of-thought stage, we perform a final validation stage in which the teacher model must answer questions based only on the generated rationales. As in the previous stage, we include each training set example and construct in-prompt exemplars by BM25 similarity to other questions in the training set. Because this stage does not include full passages, we can fit many more exemplars while remaining under the budget of 1024 tokens, on the order of 20 per prompt. The resulting ``faithful answers'' are then used to filter the fine-tuning data that is exposed to the student model.
\ifshortversion
\else
\input{figs/cot-example}
\fi
\section{Training the Student Model}
The prompt chain described in \Cref{sec:prompts-to-annotations} produces markup-and-mask rationales and uses them to answer questions. However, there are two main reasons to distill this teacher model into a smaller ``honest student.'' The first reason is efficiency: the prompt chain requires several calls to the large language model; because it is more specialized, the student model can potentially be smaller. The second reason is accuracy: in the teacher model, the training set is used only for in-context learning, with only a few examples per prompt; fine-tuning can make use of more gold answers, in combination with silver rationales.
To fine-tune the student model, we use as training data the gold answers and the rationales produced by the teacher model. Because our goal is to train an \emph{honest} student, we implement the student model as a pipeline: it must first produce the decontextualizing markup without seeing the question, then generate a rationale from the passage (conditioned on the question and the marked-up passage), and finally produce an answer (conditioned on the question and the generated rationale). Critically, the student has no access to the full passage when generating the answer. Each step of the pipeline is implemented as a text-to-text model using the t5x library~\cite{roberts2022scaling}, and the steps are trained in a single multi-task model. The specific tasks for the student model are:
\begin{description}[labelindent=*,labelsep=1ex,labelwidth=2em,itemindent=-2em,leftmargin=2em]
\item[Decontextualizing markup.] As in the teacher model, decontextualization is performed autoregressively, with one training example per sentence. The target output is the markup produced by the teacher model.
\item[Span selection.] The input to the span selection task is a concatenation of the question and the decontextualized passage, and the target output is the rationale generated by the teacher in the chain-of-thought QA step. At training time the decontextualized passages are from the teacher; at prediction time they are from the decontextualizing markup step in the student pipeline.
\item[Rationale-based reading comprehension.] At training time, the input is a concatenation of the question and the teacher model's rationale; the target output is the gold answer. At prediction time, the input includes the rationale produced by the span selection step in the student pipeline.
\item[End-to-end reading comprehension.] For comparison, we also train an end-to-end reading comprehension task, in which the input is a concatenation of the question and the full passage. The target output is the gold answer and no rationale is produced.
\end{description}
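At prediction time, the four tasks above compose into a strict pipeline in which the answer head never sees the passage. A structural sketch, with the three fine-tuned model heads abstracted as functions:

```python
def honest_student_answer(question, passage, markup_step, span_step, answer_step):
    """Sketch of the honest-student pipeline. Markup is produced without
    seeing the question; the rationale is selected from the marked-up
    passage; the answer is produced from the rationale alone, with no
    access to the full passage. The step functions stand in for the
    heads of the fine-tuned multitask model."""
    marked = markup_step(passage)               # question-independent
    rationale = span_step(question, marked)     # question + marked passage
    answer = answer_step(question, rationale)   # question + rationale only
    return answer, rationale
```

Because `answer_step` receives only the rationale, the returned rationale is a faithful account of what the answer was conditioned on.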
The decontextualization task aligns closely to the decontextualization \emph{prompt},
but the student model is trained by fine-tuning while the teacher model relies only on in-context learning. Unlike the chain-of-thought prompt described in \Cref{sec:prompts-to-annotations}, the span selection task does not produce an answer; the rationale-based reading comprehension task is conceptually similar to the rationale validation prompt, but again, the student model uses fine-tuning rather than in-context learning. To build a cleaner silver training set, we train only on the rationales that led to approximately correct answers at both the chain-of-thought stage (using the entire passage) and the validation stage (using the rationale alone). Specifically, we score the generated answers at both stages, and exclude examples for which either answer has an \fm $ < 0.5$.
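The F1-based filter for the silver training set can be sketched directly (the dictionary field names are illustrative, not the paper's schema):

```python
import re
from collections import Counter

def token_f1(pred, gold):
    """SQuAD-style token F1 used to score the silver answers."""
    p = re.findall(r"\w+", pred.lower())
    g = re.findall(r"\w+", gold.lower())
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def filter_silver(examples, threshold=0.5):
    """Keep an example only if both the chain-of-thought answer (full
    passage) and the validation answer (rationale alone) reach the F1
    threshold against the gold answer."""
    return [ex for ex in examples
            if token_f1(ex["cot_answer"], ex["gold"]) >= threshold
            and token_f1(ex["validation_answer"], ex["gold"]) >= threshold]
```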
\section{Discussion}
We show how to train an \emph{honest student} to produce markup-and-mask rationales for open-book question answering. The approach has three key properties: (1) the rationales are more \emph{expressive} than traditional masks because they include free-text markup to enable each sentence to stand on its own; (2) the rationales are \emph{faithful} because the student model must first produce the rationale and then discard all other information from the passage when answering the question; (3) the rationale-generation system is \emph{unsupervised}, training on silver data created by prompting a large language model. These properties suggest a general methodology for a new generation of pipeline systems, which could offer the benefits of interpretability and controllability while limiting annotation cost and achieving the expressivity of natural language. In future work we will explore the capability of the teacher model to support even more expressive reasoning patterns, through richer prompt chains.
\paragraph{Limitations.}
A number of limitations are highlighted by the error analysis in \Cref{sec:error-analysis}.
More generally, we have assumed that answers can be rationalized by a contiguous span of the passage, after applying query-independent markup.
This explains the lower performance of the pipelined methods on QuoRef, which contains questions that are hard to answer from any single sentence, even with query-independent markup. Another limitation is that markup is provided in a single forward pass, making it impossible to handle cataphoric references --- for example, when an individual's name is revealed only at the end of a passage.
\section{Related Work}
Philosophically, the honest student is motivated by the goal of building \emph{warranted trust} in question answering systems~\cite{jacovi2021formalizing}, through an architecture in which the rationales meaningfully constrain the predicted answer~\citep{deyoung-etal-2020-eraser} and can easily be checked by users.
\paragraph{Rationales for question answering.}
Rationales are typically defined as masks on the input passage~\cite{lei-etal-2016-rationalizing}, with the goal of finding the minimal rationale that is sufficient to identify the ground truth label~\cite{deyoung-etal-2020-eraser}. Such masks can be learned from human annotations~\cite{zaidan-etal-2007-using,menick2022teaching} or from unsupervised objectives such as information bottleneck~\citep{paranjape-etal-2020-information}.
We depart from fully extractive rationales by adding decontextualizing markup, unlike prior work in which decontextualization is performed inline~\cite{choi-etal-2021-decontextualization}, obscuring the relationship to the original text. This markup often indicates coreference relationships. Prior work has used human annotations to capture coreference in question answering~\cite{dua-etal-2020-benefits}. We show that similar functionality can be obtained without human annotations, through the combination of in-context learning and end-task supervision.
\paragraph{Reasoning chains in language models.}
In the past year, a number of papers have explored the ability of large language models to ``show their work.'' In chain-of-thought and least-to-most prompting, the model is prompted to produce an explanation alongside its answer, with questions focusing on arithmetic and commonsense reasoning~\cite{kojima2022large,wei2022chain,zhou2022least}.
Concurrent research uses chain-of-thought prompting in a student-teacher setup, similar to our architecture~\cite{snell2022learning}.
In all of these papers, the purpose of the explanations is not necessarily to make the model more trustworthy, but rather, to make the answer more accurate.
In contrast, our main goal is to increase transparency: question-answering systems must be auditable and self-explaining to avoid leading users astray.
Thus we seek to build an \emph{honest} student model, whose rationales accurately describe the passage and the predicted answer~\cite{creswell2022faithful}.
A related point is that chain-of-thought explanations have been found to be inconsistent with the source text for some types of textual reasoning~\citep{ye2022unreliability}.
In contrast, the student model produces explanations that are almost always extractive from the marked-up text. Furthermore, the markup has high precision, as measured against manual decontextualization.
\ifshortversion
\else
In both the student and teacher models, the outputs from the markup and masking steps are redirected into new prompts as input.
The rationale is not a separate output that may or may not be consistent with the answer: it is the \emph{only} part of the passage available to the answer-generation step.
This is an example of general architecture that has been referred to as a \emph{language model cascade}~\cite{dohan2022language}, a framework that generalizes earlier work on prompt chaining~\cite{wu2022ai} and multi-stage prompting~\citep{liu-etal-2022-multi}. Our work shows that such cascades can indeed lead to reliable and useful rationales. We employ this forward-chaining strategy in two ways: in the teacher model, to generate faithful reasoning traces, and in the student model, which is constrained to ignore the passage after selecting the key evidence.
\fi
Another line of work has focused on training language models to perform reasoning by fine-tuning on gold reasoning traces~\cite{bostrom2022natural,creswell2022faithful,dalvi-etal-2021-explaining,tafjord-etal-2021-proofwriter}. In contrast, our work does not rely on annotations of reasoning traces: our student model learns to perform accurate multi-step inferences by relying on the combination of few-shot in-context learning and filtering on the performance of the end-task. More similar is the work of~\citet{kojima2022large}, in which the model is fine-tuned to rationalize its predictions by ``bootstrapping'' from a small number of labeled examples. We provide a conceptually simpler approach that trains a student model by leveraging the pretrained capabilities of a large language model, eliminating the need for even a small seed set of labeled examples (except for the decontextualization step, which includes five labeled sentences), and using standard fine-tuning rather than a more complex iterative procedure with a dynamic training set.
\paragraph{Language models as teachers.} We employ a language model to generate silver annotated data for the intermediate steps of the pipeline. Prior work has explored the use of language models to generate training data in the few-shot setting~\cite{wang-etal-2021-want-reduce}. Of particular interest are approaches for filtering language model outputs that are unlikely to be correct, which could result in a cleaner silver training set~\citep{pmlr-v162-lang22a,smith2022language}.
In this paper we assume access to the gold answers, and filter intermediate steps by whether they lead to high-scoring answer predictions. However, the gold labels are a relatively weak constraint on the decontextualizing markup, so stricter filtering approaches might further improve performance on the markup task.
Finally, concurrent work shows that it is possible to distill from large pretrained language models using self-consistency in chain-of-thought prompting, without any labels at all~\citep{huang2022large}. Instead, they treat answers that have multiple derivations as more likely to be correct, and show that by fine-tuning on such answers, smaller students can outperform larger teachers.
\section{Introduction}
To be trustworthy and useful, a question answering system should be able to explain its reasoning and offer evidence. In open-book question answering, such explanations often take the form of rationale \emph{masks}, which are subsets of tokens from the original passage~\citep{lei-etal-2016-rationalizing}. However, a challenge for mask-based rationales is that subspans of the original passage are not meant to be read alone: coherent texts contain anaphora, ellipsis, and other cohesion-building elements that limit the interpretability of individual subspans when extracted from the discourse~\citep{halliday1976cohesion}. An example is shown in \Cref{fig:main-example}, in which the key sentence mentions the answer only through the nominal \say{the grieving goddess}.
A sufficient rationale for this answer would have to include an additional sentence introducing the entity \say{Astarte} and binding it to the nominal in the sentence that describes the key event.
Despite their limitations, extractive rationales have an important advantage over free-text explanations: they are directly linked to the original passage, making it easy for human readers to assess the reliability of the evidence for themselves. In this paper, we present a new style of explanation, called \textbf{markup-and-mask}, which preserves the attributability of extractive rationales while overcoming the problems created by extracting propositions from the discourse in which they were written. The key idea is that discourse context is made explicit in free-text markup and then rationales are extracted from the marked-up passages.
\begin{figure}
\begin{tcolorbox}
\footnotesize
\begin{itemize}[leftmargin=0cm,itemsep=0pt]
\item \textbf{Question: } What is the name of the person who revived Eshmun?
\item \textbf{Passage:} \textcolor{gray}{... Eshmun, a young man from Beirut, was hunting in the woods when Astarte saw him [Eshmun] and was stricken by his [Eshmun] beauty.} \dots The grieving goddess [Astarte] revived Eshmun and transported him [Eshmun] to the heavens where she [Astarte] made him [Eshmun] into a god of heaven. \dots
\item \textbf{Answer: } Astarte.
\end{itemize}
\end{tcolorbox}
\caption{An example from QuoRef~\citep{dasigi-etal-2021-dataset} with the generated rationale shown in dark text. The markup, shown in square brackets, makes it possible to find a more concise rationale than could be extracted from the original passage.}
\label{fig:main-example}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/prompt-chain.png}
\caption{Schematic of the prompt chain used to produce silver data to fine-tune the honest student. At the decontextualization stage, one prompt is applied per sentence in the passage in sequence; the remaining stages use exactly one prompt each.}
\label{fig:prompt-chain}
\end{figure*}
Rather than annotating markup-and-mask rationales manually, we present a new training method that leverages the in-context learning capability of large pretrained language models (\Cref{fig:prompt-chain}). First, we prompt a frozen language model to produce markup that sequentially decontextualizes each sentence in each passage in the training set. Next, we prompt the same language model to produce answers and chain-of-thought rationales from the decontextualized passage. Finally, we check that the rationale supports the answer by prompting the language model again, this time replacing the full passage with the rationale. When the answer approximately matches the ground truth, we add the rationale and markup to a silver training set. These silver annotations are used to train an ``honest student'' that is constrained to follow a pipeline: first generate question-neutral markup, then select a question-based rationale, and finally produce an answer using the rationale and not the passage.
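The silver-data loop described above can be sketched as follows. This is a schematic reconstruction rather than the authors' code: \texttt{call\_lm} stands in for the prompted frozen language model, the prompt strings and the \texttt{|||} separator are invented placeholders, and the ``approximate match'' criterion is simplified to a token-level \fm{} threshold.

```python
def token_f1(pred, gold):
    """Token-overlap F1 between two answer strings (simplified match criterion)."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if not common:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def build_silver_example(call_lm, passage_sents, question, gold_answer, thresh=0.8):
    """One pass of the teacher prompt chain; returns a silver example or None.

    `call_lm(prompt)` is a hypothetical wrapper around the frozen LM; the
    prompt formats below are illustrative, not the paper's actual prompts.
    """
    # Stage 1: sequential, query-independent decontextualization markup.
    marked = []
    for sent in passage_sents:
        context = " ".join(marked[-5:])  # up to k = 5 preceding marked-up sentences
        marked.append(call_lm(f"Decontextualize:\ncontext: {context}\nsentence: {sent}"))
    marked_passage = " ".join(marked)

    # Stage 2: chain-of-thought rationale and answer from the marked-up passage.
    out = call_lm(f"Answer with rationale:\npassage: {marked_passage}\nquestion: {question}")
    rationale, _, answer = out.partition("|||")  # assumed "rationale ||| answer" format

    # Stage 3: honesty check -- re-ask with the rationale REPLACING the passage.
    answer_from_rationale = call_lm(f"passage: {rationale}\nquestion: {question}")

    if token_f1(answer_from_rationale, gold_answer) >= thresh:
        return {"markup": marked_passage, "rationale": rationale, "answer": answer}
    return None  # discard: the rationale does not support the gold answer
```

Examples that survive the final filter form the silver training set for the honest student.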
Evaluation shows a number of favorable properties to this approach: (1) unlike other masking-based methods, accuracy on SQuAD is nearly as good as that of an end-to-end system; (2) on QuoRef, markup significantly increases accuracy; (3) answers that can be validated by a rationale are much more likely to be correct (+20 \fm); (4) rationales usually entail the answers; (5) despite having access to only five human-annotated examples of decontextualizing markup, the student model produces markup that is more accurate than a system that was fine-tuned on 11,290 gold-labeled training examples. The student models outperform their teacher on all three of our key metrics --- overall accuracy, entailment rate of rationales, and accuracy of decontextualizing markup --- highlighting the positive impact of distillation from pretrained language models.
To summarize the contributions of this paper:
\begin{itemize}[leftmargin=2em,itemsep=0pt]
\item We propose markup-and-mask rationales for open-book question answering, which preserve a direct link to the original evidence text but use markup to incorporate non-local information.
\item We show that it is possible to train models to produce markup-and-mask rationales without explicit supervision, by leveraging the capabilities of a pretrained language model.
\item We present a general strategy for using pretrained language models to help supervise interpretable pipeline systems in which annotations are available for only the end task.
\item We empirically validate the proposed approach, showing that the resulting rationales: (1) support
accurate question answering; (2) help quantify predictive uncertainty; (3) are more likely to entail the predicted answers than ``chain-of-thought'' rationales produced alongside the answer; and (4) accurately match human-written decontextualizations.
\end{itemize}
\section{Prompts and exemplars}
\label{app:decontext-prompt}
During decontextualization, the language model must be queried for every sentence in the dataset. For this reason, and because results were promising from the first exploratory prompts, we did not consider many alternative prompts. The prompt was written to include a few types of decontextualization, including references to people, locations, times, and events, as well as cases in which the decontextualizing information was not present in the context. The exemplars and instructions are shown in \Cref{fig:decontext-prompt}. These exemplars are then combined with individual sentences and contexts, as shown in \Cref{fig:decontext-linearization-example}.
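The per-sentence query can be assembled as sketched below. Only the overall shape (the shared exemplar block of \Cref{fig:decontext-prompt}, followed by the current context and sentence) follows the description above; the field labels and separators are illustrative assumptions.

```python
def build_decontext_prompt(exemplars, context_sents, target_sent, k=5):
    """Combine the fixed instruction/exemplar block with one sentence and its context.

    `exemplars` is the exemplar text; the "context:"/"sentence:" labels below
    are hypothetical, not the paper's actual format.
    """
    context = " ".join(context_sents[-k:])  # at most k preceding sentences
    return (f"{exemplars}\n\n"
            f"context: {context}\n"
            f"sentence: {target_sent}\n"
            f"decontextualized sentence:")
```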
\begin{figure}[ht]
\centering
\begin{tcolorbox}
\VerbatimInput{prompt}
\end{tcolorbox}
\caption{The instructions and exemplars for the decontextualization prompt.}
\label{fig:decontext-prompt}
\end{figure}
\ifshortversion
\input{figs/decontext-example}
\fi
\ifshortversion
An example prompt for chain-of-thought QA is shown in \Cref{fig:cot-qa-example}. As described above, the in-context exemplars are selected from the training set dynamically, based on similarity to the question.
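The dynamic exemplar selection can be sketched as follows. The similarity function here is a simple token-overlap (Jaccard) score chosen for illustration; the actual similarity measure is not specified in this excerpt.

```python
def select_exemplars(question, train_set, n=4):
    """Pick the n training examples whose questions best match `question`.

    Jaccard token overlap is an illustrative stand-in for the similarity
    measure; `train_set` is assumed to be a list of dicts with a
    "question" field.
    """
    q_tokens = set(question.lower().split())
    def sim(ex):
        t = set(ex["question"].lower().split())
        return len(q_tokens & t) / len(q_tokens | t) if q_tokens | t else 0.0
    return sorted(train_set, key=sim, reverse=True)[:n]
```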
\input{figs/cot-example}
\fi
\ifshortversion
\section{Additional evaluations}
\label{app:perturbation-eval}
\paragraph{Entity-swap perturbation.}
\Cref{tab:perturb-results} shows the results of a stress-test evaluation of the models' dependence on knowledge acquired during pretraining. Similar to \citet{longpre-etal-2021-entity}, we perturb existing SQuAD examples by running a named entity recognizer and replacing names that appear in the answer and passage with names of other entities of the same broad class (e.g., ``Winston Churchill'' $\to$ ``Patti Smith'', ``AT\&T'' $\to$ ``the Denver Broncos.'') The perturbations are performed only on the evaluation data, so we are evaluating the ability of a model fine-tuned on the original SQuAD data to generalize to these perturbations. Note that in some cases these perturbations affect the grammaticality of the passage, making the task more difficult for reasons that do not relate to the fidelity of the explanations. As shown in the table, all models are approximately 3--4 \fm{} points worse than on the original evaluation set, with comparable exact match. This suggests that the predictors mainly relied on the passage and not on knowledge obtained during pretraining.
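A minimal version of this perturbation is sketched below. The entity recognizer is abstracted away: \texttt{swap\_map} plays the role of the NER-plus-same-class-sampling step, mapping each detected name to its substitute.

```python
import re

def entity_swap(passage, answer, swap_map):
    """Consistently replace entity names in both passage and answer.

    `swap_map`, e.g. {"Winston Churchill": "Patti Smith"}, stands in for
    the output of a named entity recognizer plus same-class sampling.
    Longer names are replaced first so that substrings are not clobbered.
    """
    def apply(text):
        for old in sorted(swap_map, key=len, reverse=True):
            text = re.sub(re.escape(old), swap_map[old], text)
        return text
    return apply(passage), apply(answer)
```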
\begin{SCtable*}
\centering
\begin{tabular}{ll}
\toprule
\textbf{ } & \textbf{em / \fm}\\
\midrule
End-to-end & 83.7 / 89.3 \\
Markup+mask & 81.5 / 87.4 \\
Mask-only & 81.5 / 87.0 \\
\bottomrule
\end{tabular}
\caption{Performance of the XXL-based student model on the SQuAD challenge set with entity perturbations.}
\label{tab:perturb-results}
\end{SCtable*}
\paragraph{Decontextualization.} Detailed results from the evaluation on labeled decontextualizations~\cite{choi-etal-2021-decontextualization} are shown in \Cref{tab:decontext-results}.
\begin{table}
\centering
\input{tables/decontext_sari}
\end{table}
\fi
\section{Implementation details}
\paragraph{Teacher model decontextualization.}
Sentence-level decontextualization requires sentence segmentation, which was performed using the \texttt{sent\_tokenize} function of NLTK~\cite{BirdKleinLoper09}. Because sentence tokenization errors frequently propagated to decontextualization errors, we applied a few hand-crafted character-level replacement rules to improve segmentation accuracy, e.g. transforming expressions like \say{J. R. R. Tolkien} into \say{J.\textasciitilde R.\textasciitilde R. Tolkien}. All such transformations were reversed after sentence segmentation. The maximum number of context sentences was set at $k=5$.
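The protect-then-restore trick can be sketched as follows. A non-breaking space plays the role of the tilde; the crude regex splitter at the end merely stands in for NLTK's \texttt{sent\_tokenize}, which is what is actually used, and the protection targets initials that real tokenizers can mishandle.

```python
import re

NBSP = "\u00a0"  # stand-in for the "~" used in the text

def protect_initials(text):
    """'J. R. R. Tolkien' -> 'J.<nbsp>R.<nbsp>R. Tolkien'.

    Joins runs of single-letter initials with a non-breaking space so a
    sentence splitter cannot break between them; the space before the
    surname is left alone, as in the example from the text.
    """
    return re.sub(r"\b([A-Z]\.) (?=[A-Z]\.)", r"\1" + NBSP, text)

def restore_spaces(text):
    return text.replace(NBSP, " ")

def split_sentences(text):
    """Protect, split, then undo the protection on each sentence."""
    protected = protect_initials(text)
    # Crude stand-in for nltk.tokenize.sent_tokenize:
    sents = re.split(r"(?<=[a-z][.!?]) (?=[A-Z])", protected)
    return [restore_spaces(s) for s in sents]
```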
\ifshortversion
\section{Error analysis}
\input{content/eval-error-analysis}
\fi
\ifshortversion
\section{Selective prediction results}
\Cref{tab:explanation-as-confidence} shows the results for selective prediction, distinguishing cases in which the end-to-end answer matches the pipeline from cases where they do not match. When the two answers do not match, the end-to-end system is evaluated because it is more accurate overall.
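The agreement rule described above amounts to a few lines. This sketch assumes string-normalized answers; the comparison is between the end-to-end prediction and the honest-student pipeline's prediction.

```python
def selective_predict(end_to_end_ans, pipeline_ans):
    """Return the answer plus a confidence flag based on agreement.

    When the two systems agree, the shared answer is trusted; when they
    disagree, the end-to-end answer is kept (it is more accurate overall)
    but flagged as low confidence.
    """
    agree = end_to_end_ans.strip().lower() == pipeline_ans.strip().lower()
    return end_to_end_ans, ("high" if agree else "low")
```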
\begin{table*}[]
\centering
\input{tables/selective_accuracy_table_xxl}
\caption{Evaluation of selective prediction for the XXL-based models. Answers from the end-to-end predictor are distinguished by whether they agree with the answer provided by the honest student pipeline. For example, the top row shows that on SQuAD, the predictors agree on 86.8\% of examples, receiving an \fm of 95.3 on this subset.}
\label{tab:explanation-as-confidence}
\end{table*}
\fi
\section{Submission of papers to TSRML 2022}
Please read the instructions below carefully and follow them faithfully.
\subsection{Style}
Papers to be submitted to TSRML 2022 must be prepared according to the
instructions presented here. Papers are recommended to have at most \textbf{six pages},
including figures. Additional pages \emph{containing only acknowledgments and
references} are allowed.
Authors are required to use the TSRML \LaTeX{} style files obtainable at the
TSRML website as indicated below. Please make sure you use the current files
and not previous versions. Tweaking the style files may be grounds for
rejection.
\subsection{Retrieval of style files}
The style files for TSRML and other workshop information are available on
the World Wide Web at
\begin{center}
\url{https://tsrml2022.github.io/}
\end{center}
The file \verb+tsrml_2022.pdf+ contains these instructions and illustrates the
various formatting requirements your TSRML paper must satisfy.
The only supported style file for TSRML 2022 is \verb+tsrml_2022.sty+,
rewritten for \LaTeXe{}. This style file is adapted from the style file of NeurIPS 2022.
The \LaTeX{} style file contains three optional arguments: \verb+final+, which
creates a camera-ready copy, \verb+preprint+, which creates a preprint for
submission to, e.g., arXiv, and \verb+nonatbib+, which will not load the
\verb+natbib+ package for you in case of package clash.
\paragraph{Preprint option}
If you wish to post a preprint of your work online, e.g., on arXiv, using the
TSRML style, please use the \verb+preprint+ option. This will create a
nonanonymized version of your work with the text ``Preprint. Work in progress.''
in the footer. This version may be distributed as you see fit. Please \textbf{do
not} use the \verb+final+ option, which should \textbf{only} be used for
papers accepted to TSRML.
At submission time, please omit the \verb+final+ and \verb+preprint+
options. This will anonymize your submission and add line numbers to aid
review. Please do \emph{not} refer to these line numbers in your paper as they
will be removed during generation of camera-ready copies.
The file \verb+tsrml_2022.tex+ may be used as a ``shell'' for writing your
paper. All you have to do is replace the author, title, abstract, and text of
the paper with your own.
The formatting instructions contained in these style files are summarized in
Sections \ref{gen_inst}, \ref{headings}, and \ref{others} below.
\section{General formatting instructions}
\label{gen_inst}
The text must be confined within a rectangle 5.5~inches (33~picas) wide and
9~inches (54~picas) long. The left margin is 1.5~inch (9~picas). Use 10~point
type with a vertical spacing (leading) of 11~points. Times New Roman is the
preferred typeface throughout, and will be selected for you by default.
Paragraphs are separated by \nicefrac{1}{2}~line space (5.5 points), with no
indentation.
The paper title should be 17~point, initial caps/lower case, bold, centered
between two horizontal rules. The top rule should be 4~points thick and the
bottom rule should be 1~point thick. Allow \nicefrac{1}{4}~inch space above and
below the title to rules. All pages should start at 1~inch (6~picas) from the
top of the page.
For the final version, authors' names are set in boldface, and each name is
centered above the corresponding address. The lead author's name is to be listed
first (left-most), and the co-authors' names (if different address) are set to
follow. If there is only one co-author, list both author and co-author side by
side.
Please pay special attention to the instructions in Section \ref{others}
regarding figures, tables, acknowledgments, and references.
\section{Headings: first level}
\label{headings}
All headings should be lower case (except for first word and proper nouns),
flush left, and bold.
First-level headings should be in 12-point type.
\subsection{Headings: second level}
Second-level headings should be in 10-point type.
\subsubsection{Headings: third level}
Third-level headings should be in 10-point type.
\paragraph{Paragraphs}
There is also a \verb+\paragraph+ command available, which sets the heading in
bold, flush left, and inline with the text, with the heading followed by 1\,em
of space.
\section{Citations, figures, tables, references}
\label{others}
These instructions apply to everyone.
\subsection{Citations within the text}
The \verb+natbib+ package will be loaded for you by default. Citations may be
author/year or numeric, as long as you maintain internal consistency. As to the
format of the references themselves, any style is acceptable as long as it is
used consistently.
The documentation for \verb+natbib+ may be found at
\begin{center}
\url{http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf}
\end{center}
Of note is the command \verb+\citet+, which produces citations appropriate for
use in inline text. For example,
\begin{verbatim}
\citet{hasselmo} investigated\dots
\end{verbatim}
produces
\begin{quote}
Hasselmo, et al.\ (1995) investigated\dots
\end{quote}
If you wish to load the \verb+natbib+ package with options, you may add the
following before loading the \verb+tsrml_2022+ package:
\begin{verbatim}
\PassOptionsToPackage{options}{natbib}
\end{verbatim}
If \verb+natbib+ clashes with another package you load, you can add the optional
argument \verb+nonatbib+ when loading the style file:
\begin{verbatim}
\usepackage[nonatbib]{tsrml_2022}
\end{verbatim}
As submission is double blind, refer to your own published work in the third
person. That is, use ``In the previous work of Jones et al.\ [4],'' not ``In our
previous work [4].'' If you cite your other papers that are not widely available
(e.g., a journal paper under review), use anonymous author names in the
citation, e.g., an author of the form ``A.\ Anonymous.''
\subsection{Footnotes}
Footnotes should be used sparingly. If you do require a footnote, indicate
footnotes with a number\footnote{Sample of the first footnote.} in the
text. Place the footnotes at the bottom of the page on which they appear.
Precede the footnote with a horizontal rule of 2~inches (12~picas).
Note that footnotes are properly typeset \emph{after} punctuation
marks.\footnote{As in this example.}
\subsection{Figures}
\begin{figure}
\centering
\fbox{\rule[-.5cm]{0cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
\caption{Sample figure caption.}
\end{figure}
All artwork must be neat, clean, and legible. Lines should be dark enough for
purposes of reproduction. The figure number and caption always appear after the
figure. Place one line space before the figure caption and one line space after
the figure. The figure caption should be lower case (except for first word and
proper nouns); figures are numbered consecutively.
You may use color figures. However, it is best for the figure captions and the
paper body to be legible if the paper is printed in either black/white or in
color.
\subsection{Tables}
All tables must be centered, neat, clean and legible. The table number and
title always appear before the table. See Table~\ref{sample-table}.
Place one line space before the table title, one line space after the
table title, and one line space after the table. The table title must
be lower case (except for first word and proper nouns); tables are
numbered consecutively.
Note that publication-quality tables \emph{do not contain vertical rules.} We
strongly suggest the use of the \verb+booktabs+ package, which allows for
typesetting high-quality, professional tables:
\begin{center}
\url{https://www.ctan.org/pkg/booktabs}
\end{center}
This package was used to typeset Table~\ref{sample-table}.
\begin{table}
\caption{Sample table title}
\label{sample-table}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{Part} \\
\cmidrule(r){1-2}
Name & Description & Size ($\mu$m) \\
\midrule
Dendrite & Input terminal & $\sim$100 \\
Axon & Output terminal & $\sim$10 \\
Soma & Cell body & up to $10^6$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Final instructions}
Do not change any aspects of the formatting parameters in the style files. In
particular, do not modify the width or length of the rectangle the text should
fit into, and do not change font sizes (except perhaps in the
\textbf{References} section; see below). Please note that pages should be
numbered.
\section{Preparing PDF files}
Please prepare submission files with paper size ``US Letter,'' and not, for
example, ``A4.''
Fonts were the main cause of problems in the past years. Your PDF file must only
contain Type 1 or Embedded TrueType fonts. Here are a few instructions to
achieve this.
\begin{itemize}
\item You should directly generate PDF files using \verb+pdflatex+.
\item You can check which fonts a PDF file uses. In Acrobat Reader, select the
menu Files$>$Document Properties$>$Fonts and select Show All Fonts. You can
also use the program \verb+pdffonts+ which comes with \verb+xpdf+ and is
available out-of-the-box on most Linux machines.
\item The IEEE has recommendations for generating PDF files whose fonts are also
acceptable for TSRML. Please see
\url{http://www.emfield.org/icuwb2010/downloads/IEEE-PDF-SpecV32.pdf}
\item \verb+xfig+ ``patterned'' shapes are implemented with bitmap fonts. Use
``solid'' shapes instead.
\item The \verb+\bbold+ package almost always uses bitmap fonts. You should use
the equivalent AMS Fonts:
\begin{verbatim}
\usepackage{amsfonts}
\end{verbatim}
followed by, e.g., \verb+\mathbb{R}+, \verb+\mathbb{N}+, or \verb+\mathbb{C}+
for $\mathbb{R}$, $\mathbb{N}$, or $\mathbb{C}$. You can also use the following
workaround for the reals, naturals, and complex numbers:
\begin{verbatim}
\newcommand{\RR}{I\!\!R}
\newcommand{\Nat}{I\!\!N}
\newcommand{\CC}{I\!\!\!\!C}
\end{verbatim}
Note that \verb+amsfonts+ is automatically loaded by the \verb+amssymb+ package.
\end{itemize}
If your file contains type 3 fonts or non embedded TrueType fonts, we will ask
you to fix it.
\subsection{Margins in \LaTeX{}}
Most of the margin problems come from figures positioned by hand using
\verb+\special+ or other commands. We suggest using the command
\verb+\includegraphics+ from the \verb+graphicx+ package. Always specify the
figure width as a multiple of the line width as in the example below:
\begin{verbatim}
\usepackage[pdftex]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.pdf}
\end{verbatim}
See Section 4.4 in the graphics bundle documentation
(\url{http://mirrors.ctan.org/macros/latex/required/graphics/grfguide.pdf}).
A number of width problems arise when \LaTeX{} cannot properly hyphenate a
line. Please give \LaTeX{} hyphenation hints using the \verb+\-+ command when
necessary.
\begin{ack}
Use unnumbered first level headings for the acknowledgments. All acknowledgments
go at the end of the paper before the list of references. Moreover, you are required to declare
funding (financial activities supporting the submitted work) and competing interests (related financial activities outside the submitted work).
More information about this disclosure can be found at: \url{https://neurips.cc/Conferences/2022/PaperInformation/FundingDisclosure}.
Do {\bf not} include this section in the anonymized submission, only in the final paper. You can use the \texttt{ack} environment provided in the style file to automatically hide this section in the anonymized submission.
\end{ack}
\section*{References}
References follow the acknowledgments. Use unnumbered first-level heading for
the references. Any choice of citation style is acceptable as long as you are
consistent. It is permissible to reduce the font size to \verb+small+ (9 point)
when listing the references.
Note that the Reference section does not count towards the page limit.
\medskip
{
\small
[1] Alexander, J.A.\ \& Mozer, M.C.\ (1995) Template-based algorithms for
connectionist rule extraction. In G.\ Tesauro, D.S.\ Touretzky and T.K.\ Leen
(eds.), {\it Advances in Neural Information Processing Systems 7},
pp.\ 609--616. Cambridge, MA: MIT Press.
[2] Bower, J.M.\ \& Beeman, D.\ (1995) {\it The Book of GENESIS: Exploring
Realistic Neural Models with the GEneral NEural SImulation System.} New York:
TELOS/Springer--Verlag.
[3] Hasselmo, M.E., Schnell, E.\ \& Barkai, E.\ (1995) Dynamics of learning and
recall at excitatory recurrent synapses and cholinergic modulation in rat
hippocampal region CA3. {\it Journal of Neuroscience} {\bf 15}(7):5249-5262.
}
In this paper, we introduce a sequence of differential operators acting on symplectic spinor valued exterior differential forms over a symplectic manifold $(M,\omega)$ admitting the so called metaplectic structure. To define these operators, we make use of a symplectic torsion-free affine connection $\nabla$ on $(M,\omega).$
Under a certain condition on the curvature of the connection $\nabla,$ described below, we prove that this sequence forms a complex.
Let us say a few words about the metaplectic structure.
The symplectic group
$Sp(2l,\mathbb{R})$ admits a non-trivial two-fold covering, the so called metaplectic group, which we shall denote by $Mp(2l,\mathbb{R}).$ Let $\mathfrak{g}$ be the Lie algebra of $Mp(2l,\mathbb{R}).$
A~metaplectic structure on a symplectic manifold $(M^{2l},\omega)$ is
a notion parallel to a spin structure on a Riemannian manifold. In particular,
one of its parts is a principal $Mp(2l,\mathbb{R})$ bundle $(q: \mathcal{Q} \to M,Mp(2l,\mathbb{R}))$.
For a symplectic manifold admitting a metaplectic structure, one can construct the so called symplectic spinor bundle $\mathcal{S} \to M,$ introduced by Bertram Kostant in 1974. The symplectic spinor bundle $\mathcal{S}$ is the vector bundle associated to the metaplectic structure $(q: \mathcal{Q}\to M,Mp(2l,\mathbb{R}))$
on $M$ via the so called Segal-Shale-Weil representation
of the metaplectic group $Mp(2l,\mathbb{R})$.
See Kostant \cite{Kostant2} for details.
The Segal-Shale-Weil representation is an infinite dimensional unitary representation of
the metaplectic group $Mp(2l, \mathbb{R})$ on the space of all complex valued square
Lebesgue integrable functions ${\bf L^{2}}(\mathbb{R}^{l}).$ Because of the infinite dimension, the Segal-Shale-Weil representation is not so easy to handle. It is known, see, e.g., Kashiwara, Vergne \cite{KV}, that
the $\mathfrak{g}^{\mathbb{C}}$-module structure of the underlying Harish-Chandra module of
this representation is equivalent to the space
$\mathbb{C}[x^1,\ldots, x^l]$ of polynomials in $l$ variables, on which
the Lie algebra $\mathfrak{g}^{\mathbb{C}}\simeq \mathfrak{sp}(2l,\mathbb{C})$ acts
via the so called Chevalley homomorphism,\footnote{The Chevalley homomorphism
is a Lie algebra monomorphism of the complex symplectic Lie algebra $\mathfrak{sp}(2l,\mathbb{C})$ into the Lie algebra of the
associative algebra of polynomial coefficients differential operators acting on
$\mathbb{C}[x^1,\ldots, x^l].$} see Britten, Hooper, Lemire
\cite{BHL}. Thus, the infinitesimal structure of the Segal-Shale-Weil
representation can be viewed as the complexified {\it symmetric}
algebra $(\bigoplus_{i=0}^{\infty}\odot^i
\mathbb{R}^l)\otimes_{\mathbb{R}}{\mathbb{C}} \simeq \mathbb{C}[x^1,\ldots, x^l]$ of the Lagrangian
subspace $(\mathbb{R}^l,0)$ of the canonical symplectic vector space
$\mathbb{R}^{2l}\simeq (\mathbb{R}^l,0)\oplus (0,\mathbb{R}^l).$
This shows that the situation is completely parallel to
the complex orthogonal case, where the spinor representation can be
realized as the {\it exterior} algebra of a maximal isotropic
subspace. An interested reader is
referred to Weil \cite{Weil}, Kashiwara, Vergne \cite{KV} and also
to Britten, Hooper, Lemire \cite{BHL} for details. For some technical reasons, we shall be using the so called minimal globalization of
the underlying Harish-Chandra module of the Segal-Shale-Weil representation, which we will call {\it metaplectic representation} and denote it by ${\bf S}.$ The elements of $\bf S$ will be called symplectic spinors.
Now, let us consider a symplectic manifold $(M,\omega)$ together with
a symplectic torsion-free affine connection $\nabla$ on it. Such connections are usually called Fedosov connections.
Because the Fedosov connection is not unique for a given $(M,\omega)$ (in contrast to Riemannian geometry), it seems natural to add the connection to the studied symplectic structure and
investigate the triples $(M,\omega, \nabla)$ consisting of a symplectic manifold $(M,\omega)$ and a Fedosov connection $\nabla.$ Such triples are usually called Fedosov manifolds and they were used in deformation quantization. See, e.g., Fedosov \cite{Fedosov}. Let us recall that in Vaisman \cite{Vaisman}, the space of the so called symplectic curvature tensors was decomposed with respect to $Sp(2l,\mathbb{R}).$ For $l=1,$ the module of symplectic curvature tensors is irreducible, while for $l\geq 2,$ it decomposes into two irreducible submodules. These modules are usually called the symplectic Ricci and symplectic Weyl modules, respectively. This decomposition translates to the differential geometry level, giving rise to the symplectic Ricci and symplectic Weyl curvature tensor fields, which add up to the curvature tensor field of $\nabla.$ See Vaisman \cite{Vaisman} and also Gelfand, Retakh, Shubin \cite{GSR} for a comprehensive treatment of Fedosov manifolds.
Now, let us suppose that a Fedosov manifold $(M,\omega,\nabla)$ admits a metaplectic structure $(q:\mathcal{Q}\to M^{2l}, Mp(2l,\mathbb{R})).$ Let $\mathcal{S} \to M$ be the symplectic spinor bundle associated to $(q:\mathcal{Q} \to M,Mp(2l,\mathbb{R}))$ and let us consider the space $\Omega^{\bullet}(M,\mathcal{S})$ of exterior differential forms with values in $\mathcal{S},$ i.e., $\Omega^{\bullet}(M,\mathcal{S}):=\Gamma(M,\mathcal{Q}\times_{\rho}(\bigwedge^{\bullet}(\mathbb{R}^{2l})^*\otimes {\bf S})),$ where $\rho$ is the obvious tensor product representation of $Mp(2l,\mathbb{R})$ on $\bigwedge^{\bullet}(\mathbb{R}^{2l})^*\otimes {\bf S}.$
In Kr\'ysl \cite{KryslSVF}, the $Mp(2l,\mathbb{R})$-module $\bigwedge^{\bullet}(\mathbb{R}^{2l})^*\otimes {\bf S}$ was decomposed into irreducible submodules. The elements of $\bigwedge^{\bullet}(\mathbb{R}^{2l})^* \otimes {\bf S}$ are specific examples of the so called higher symplectic spinors. For $i=0,\ldots, 2l,$ let us denote the so called Cartan component of the tensor product $\bigwedge^{i}(\mathbb{R}^{2l})^*\otimes {\bf S}$ by ${\bf E}^{i m_i}.$ (For $i=0,\ldots, 2l,$ the numbers $m_i$ will be specified in the text.)
For $i=0,\ldots, 2l-1,$ we introduce an operator $T_i$ acting between the sections of the vector bundle $\mathcal{E}^{im_i}$ associated to ${\bf E}^{i m_i}$ and the sections of the vector bundle $\mathcal{E}^{i+1,m_{i+1}}$ associated to ${\bf E}^{i+1,m_{i+1}}.$ In parallel to the Riemannian case, we shall call these operators symplectic twistor operators.
These operators are first order differential operators and they are defined using the symplectic torsion-free affine connection $\nabla$ as follows. First, the connection $\nabla$ induces a covariant derivative $\nabla^S$ on the bundle $\mathcal{S} \to M$ in the usual way. Second, the covariant derivative $\nabla^S$ determines the associated exterior covariant derivative, which we denote by $d^{\nabla^S}.$ For $i=0,\ldots, 2l-1$, we define the symplectic twistor operator $T_i$ as the restriction of $d^{\nabla^S}$ to $\Gamma(M,\mathcal{E}^{im_i})$ composed with the projection to $\Gamma(M,\mathcal{E}^{i+1,m_{i+1}}).$
Because we would like to derive a condition under which $T_{i+1}T_i=0,$ $i=0,\ldots, 2l-2,$ we should focus
our attention on the curvature tensor $R^{\Omega^{\bullet}(M,\mathcal{S})}:=d^{\nabla^S}d^{\nabla^S}$ of $d^{\nabla^S}$ acting on the space $\Omega^{\bullet}(M,\mathcal{S}).$ The curvature $R^{\Omega^{\bullet}(M,\mathcal{S})}$ depends only on the curvature of the symplectic connection $\nabla,$ which consists of the symplectic Ricci and symplectic Weyl curvature tensor fields, as we have already mentioned. In the paper, we will analyze the action of the symplectic Ricci curvature tensor field on symplectic spinor valued exterior differential forms and especially on $\Gamma(M,\mathcal{E}^{i m_i}),$ $i=0,\ldots, 2l-2.$ We shall prove that the symplectic Ricci curvature tensor field, when restricted to $\Gamma(M,\mathcal{E}^{im_i}),$ maps this submodule into at most three $Mp(2l,\mathbb{R})$-submodules sitting
in symplectic spinor valued forms of degree $i+2,$ $i=0,\ldots, 2l-2.$
These submodules will be explicitly described. This will help us to prove that $T_{i+1}T_{i}=0$ $(i=0,\ldots, l-2)$ and
$T_{i+1}T_i=0$ $(i=l,\ldots, 2l-2)$ assuming the symplectic Weyl curvature tensor field vanishes. In this way, we will obtain two complexes.
Unfortunately, one cannot expect $T_lT_{l-1}=0$ in general.
This will influence the way we construct one complex out of the two complexes introduced above.
Let us notice that a similar complex was investigated in Severa \cite{Severa} in the case of spheres equipped with the conformal structure of their round metrics.
The reader interested in applications of the symplectic spinor fields in theoretical physics is
referred to Green, Hull \cite{GH}, where the symplectic spinors are
used in the context of 10 dimensional super string theory. In Reuter
\cite{Reuter}, symplectic spinors are used in the theory of the so called
Dirac-K\"{a}hler fields.
In the second section, some basic facts on
the metaplectic representation and higher symplectic spinors are recalled. In this section, we also introduce several mappings acting on the graded space $\bigwedge^{\bullet}(\mathbb{R}^{2l})^*\otimes {\bf S},$ derive some \linebreak (super-)~commutation relations between them and determine a superset of the image of two of them, which are components of an infinitesimal version of the symplectic Ricci curvature tensor field.
In Section 3, basic properties of torsion-free symplectic connections and their curvature tensor field are mentioned and the metaplectic structure is introduced. In Subsection 3.1, the theorem on the complex consisting of the symplectic twistor operators is presented and proved.
\section{Metaplectic representation, higher symplectic spinors and basic notation}
To fix a notation, let us recall some notions from symplectic linear algebra.
Let us consider a real symplectic vector space
$(\mathbb{V},\omega)$ of dimension $2l,$ i.e., $\mathbb{V}$ is a
$2l$ dimensional real vector space and $\omega$ is a
non-degenerate antisymmetric bilinear form on $\mathbb{V}.$ Let us
choose two Lagrangian subspaces\footnote{maximal isotropic wr. to $\omega$} $\mathbb{L}, \mathbb{L}' \subseteq
\mathbb{V}$ such that $\mathbb{L}\oplus \mathbb{L}'=\mathbb{V}.$ It follows that $\mbox{dim}(\mathbb{L})=\mbox{dim}(\mathbb{L}')=l.$
Throughout this article, we shall use a symplectic basis
$\{e_i\}_{i=1}^{2l}$ of $\mathbb{V}$ chosen in such a way that
$\{e_i\}_{i=1}^l$ and $\{e_i\}_{i=l+1}^{2l}$ are respective bases of
$\mathbb{L}$ and $\mathbb{L}'.$ Because a symplectic
basis is not determined uniquely, let us fix one convention which shall be used in this
text. A basis $\{e_i\}_{i=1}^{2l}$ of $\mathbb{V}$ is called
a symplectic basis of $(\mathbb{V},\omega)$ if
$\omega_{ij}:=\omega(e_i,e_j)$ satisfies $\omega_{ij}=1$ if and
only if $i\leq l$ and $j=i+l;$ $\omega_{ij}=-1$ if and only if $i>l$
and $j=i-l$ and finally, $\omega_{ij}=0$ in other cases. Let
$\{\epsilon^i\}_{i=1}^{2l}$ be the basis of $\mathbb{V}^*$ dual to
the basis $\{e_i\}_{i=1}^{2l}.$ For $i,j=1,\ldots,
2l,$ we define $\omega^{ij}$ by
$\sum_{k=1}^{2l}\omega_{ik}\omega^{jk}=\delta_i^j.$
Notice that not only $\omega_{ij}=-\omega_{ji},$ but also
$\omega^{ij}=-\omega^{ji},$ $i,j =1, \ldots, 2l.$
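For concreteness, the convention for $\omega^{ij}$ can be illustrated numerically. The following sketch is our own illustration (not part of the paper); it builds the matrix of $\omega$ in a symplectic basis for $l=2$ (with $0$-based indexing) and checks the defining property of $\omega^{ij}$ together with its antisymmetry:

```python
import numpy as np

l = 2
# omega_{ij} in a symplectic basis: omega(e_i, e_{i+l}) = 1, i = 1, ..., l
omega = np.zeros((2 * l, 2 * l))
for i in range(l):
    omega[i, i + l] = 1.0
    omega[i + l, i] = -1.0

# omega^{ij} is defined by sum_k omega_{ik} omega^{jk} = delta_i^j,
# i.e., omega @ omega_up.T is the identity matrix
omega_up = np.linalg.inv(omega).T
assert np.allclose(omega @ omega_up.T, np.eye(2 * l))
assert np.allclose(omega_up, -omega_up.T)   # omega^{ij} = -omega^{ji}
```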
As in the orthogonal case, we would like to raise and lower indices.
Because the symplectic form $\omega$ is antisymmetric, we should be more careful in this case.
For coordinates ${K_{ab\ldots c\ldots d}}^{rs \ldots t \ldots u}$ of a tensor $K$ over $\mathbb{V},$ we denote
the expression $\omega^{ic}{K_{ab\ldots c \ldots d}}^{rs \ldots t}$ by
${{{K_{ab \ldots}}^{i}}_{\ldots d}}^{rs \ldots t}$ and
${K_{ab\ldots c}}^{rs \ldots t \ldots u}\omega_{ti}$ by ${{{K_{ab \ldots c}}^{rs\ldots}}_{i}}^{\ldots u}$ and similarly for other types of tensors and also in the geometric setting when we will be considering tensor fields over a symplectic manifold $(M,\omega)$.
Let us denote the symplectic group of $(\mathbb{V},\omega)$ by
$G,$ i.e., $G :=Sp(\mathbb{V},\omega)\simeq Sp(2l,\mathbb{R}).$
Because the maximal compact subgroup $K$ of $G$ is isomorphic to the
unitary group $K \simeq U(l),$ the fundamental group of which is
$\mathbb{Z},$ there exists a nontrivial two-fold covering
$\tilde{G}$ of $G.$ See, e.g., Habermann, Habermann \cite{HH} for details. This two-fold covering is called metaplectic
group of $(\mathbb{V},\omega)$ and it is denoted by
$Mp(\mathbb{V},\omega)$. Let us remark that $Mp(\mathbb{V},\omega)$ is reductive in the sense of Vogan \cite{Vogan}.
In the considered case, we have
$\tilde{G}\simeq Mp(2l,\mathbb{R}).$ For a later use, let us reserve
the symbol $\lambda$ for the mentioned covering. Thus $\lambda:
\tilde{G} \to G$ is a fixed member of the isomorphism class of all
nontrivial $2:1$ covering homomorphisms of $G$.
Because $\lambda:\tilde{G}\to G$
is a homomorphism of Lie groups and $G$ is a subgroup of the general
linear group $GL(\mathbb{V})$ of $\mathbb{V},$ the mapping $\lambda$ is
also a representation of the metaplectic group $\tilde{G}$ on the
vector space $\mathbb{V}.$ Let us define $\tilde{K}:=\lambda^{-1}(K).$ Obviously, $\tilde{K}$ is a maximal compact subgroup of $\tilde{G}.$
Further, one can easily see that $\tilde{K}\simeq \widetilde{U(l)}:=\{(g,z)\in U(l)\times \mathbb{C}^{\times}| \mbox{det}(g)=z^2\}$ and thus in particular, $\tilde{K}$ is connected. The Lie algebra $\tilde{\mathfrak{g}}$ of $\tilde{G}$ is isomorphic to the Lie algebra $\mathfrak{g}$ of $G$ and we will identify them. One has $\mathfrak{g}=\mathfrak{sp}(\mathbb{V},\omega)\simeq \mathfrak{sp}(2l,\mathbb{R}).$
Now let us recall some notions from representation theory which we shall need in this paper. From the point of view of this article, these notions are rather of a technical character.
Let $\mathcal{R}(\tilde{G})$ be the category whose objects are complete, locally convex, Hausdorff topological spaces with a continuous linear $\tilde{G}$-action, such that the resulting representation is admissible and of finite length; the morphisms are continuous $\tilde{G}$-equivariant linear maps between the objects. Let $\mathcal{HC}(\mathfrak{g},\tilde{K})$ be the category of Harish-Chandra $(\mathfrak{g},\tilde{K})$-modules and let us consider the forgetful Harish-Chandra functor $HC:\mathcal{R}(\tilde{G})\to \mathcal{HC}(\mathfrak{g},\tilde{K}).$
It is well known that there exists an adjoint functor $mg: \mathcal{HC}(\mathfrak{g},\tilde{K})\to \mathcal{R}(\tilde{G})$ to the Harish-Chandra functor $HC$. This functor is usually called the minimal globalization functor and its existence is a deep result in representation theory. For details and for the existence of the minimal globalization functor $mg,$ see Kashiwara, Schmid \cite{KS} or Vogan \cite{Vogan}.
From now on, we shall restrict ourselves to the case $l\geq 2,$
not always mentioning it explicitly. The case $l=1$ should be handled separately (though analogously) because
the shape of the root system of $\mathfrak{sp}(2,\mathbb{R})\simeq \mathfrak{sl}(2,\mathbb{R})$ is different from that
of the root system of $\mathfrak{sp}(2l,\mathbb{R})$ for $l\geq 2.$
As usual, we shall denote the
complexification of $\mathfrak{g}$ by $\mathfrak{g}^{\mathbb{C}}.$
Obviously, $\mathfrak{g}^{\mathbb{C}}\simeq
\mathfrak{sp}(2l,\mathbb{C}).$
Further, for any Lie group $G$ and a principal $G$-bundle $(p:\mathcal{P} \to M,G)$ over a manifold $M,$ we shall denote the vector bundle associated to this
principal bundle via a representation $\sigma: G \to
\hbox{Aut}(\bf W)$ of $G$ on ${\bf W}$ by $\mathcal{W},$ i.e.,
$\mathcal{W}=\mathcal{P}\times_{\sigma} {\bf W}.$
Let us also mention that we shall often use the Einstein summation convention for repeated indices (lower and upper) without mentioning it explicitly.
\subsection{Metaplectic representation and symplectic spinors}
There exists a distinguished infinite dimensional unitary representation of
the metaplectic group $\tilde{G}$ which does not descend to a
representation of the symplectic group $G.$ This representation,
called {\it Segal-Shale-Weil},\footnote{The names oscillator or
metaplectic representation are also used in the literature. We shall
use the name Segal-Shale-Weil in this text, and reserve the name
metaplectic for certain representation arising from the
Segal-Shale-Weil one.} plays an important role in geometric
quantization of Hamiltonian mechanics, see, e.g., Woodhouse
\cite{Wood}. We shall not give a
definition of this representation here and refer the interested
reader to Weil \cite{Weil} or Habermann, Habermann \cite{HH}.
The Segal-Shale-Weil representation, which we shall denote by $U,$ is a complex infinite dimensional unitary representation of
$\tilde{G}$ on the space of complex valued square Lebesgue
integrable functions defined on the Lagrangian subspace
$\mathbb{L},$ i.e.,
$$U: \tilde{G} \to
\mathcal{U}({\bf L^2}(\mathbb{L})),$$ where $\mathcal{U}({\bf W})$
denotes the group of unitary operators on a Hilbert space ${\bf W}.$
In order to be precise, let us refer to the space ${\bf
L^2}(\mathbb{L})$ as the Segal-Shale-Weil module. It is known that the Segal-Shale-Weil module belongs to the category $\mathcal{R}(\tilde{G}).$ (See Kashiwara, Vergne \cite{KV} for details and for the Segal-Shale-Weil representation in general.)
It is easy to see
that the Segal-Shale-Weil representation splits into two
irreducible $Mp(2l,\mathbb{R})$-submodules ${\bf L^{2}}(\mathbb{L})\simeq {\bf
L^{2}}(\mathbb{L})_+\oplus {\bf L^{2}}(\mathbb{L})_-.$ The first
module consists of even and the second one of odd complex valued square
Lebesgue integrable functions on the Lagrangian subspace
$\mathbb{L}.$ Let us remark that a typical construction of the
Segal-Shale-Weil representation is based on the so called
Schr\"{o}dinger representation of the Heisenberg group of
$(\mathbb{V}=\mathbb{L}\oplus\mathbb{L}',\omega)$ and a use of the
Stone-von Neumann theorem.
For technical reasons, we shall need the minimal
globalization of the underlying Harish-Chandra $(\mathfrak{g},\tilde{K})$-module $HC({\bf L^2}(\mathbb{L}))$ of the introduced Segal-Shale-Weil module. We
shall call this minimal globalization {\it metaplectic representation} and
denote it by $meta,$ i.e.,
$$meta: \tilde{G} \to \hbox{Aut}(mg(HC({\bf L^2}(\mathbb{L})))),$$ where $mg$ is the minimal globalization functor (see this section and the references therein). For our convenience, let us denote the module
$mg(HC({\bf L^2}(\mathbb{L})))$ by ${\bf S}.$ Similarly we define $\bf S_+$ and $\bf S_-$
to be the minimal globalizations of the underlying Harish-Chandra $(\mathfrak{g},\tilde{K})$-modules of the modules
${\bf L^2}(\mathbb{L})_+$ and ${\bf L^{2}}(\mathbb{L})_-.$
In accordance with ${\bf L^{2}}(\mathbb{L})\simeq {\bf L^{2}}(\mathbb{L})_+\oplus
{\bf L^{2}}(\mathbb{L})_-,$ we have $\bf S \simeq {\bf S_+} \oplus {\bf S_-}.$
We shall call the $Mp(\mathbb{V},\omega)$-module $\bf S$
the symplectic spinor module and its elements {\it symplectic spinors}. For
the name ``spinor'', see Kostant \cite{Kostant2} or the Introduction.
A further notion related to the symplectic vector space
$(\mathbb{V}=\mathbb{L}\oplus \mathbb{L}',\omega)$ is the so called symplectic Clifford
multiplication of elements of ${\bf S}$ by vectors from $\mathbb{V}.$ For $i=1,\ldots, l$ and a symplectic spinor $f\in {\bf S},$ we define
\begin{eqnarray*}
(e_i.f)(x)&:=&\imath x^i f(x) \mbox{ and}\\
(e_{i+l}.f)(x)&:=&\frac{\partial f}{\partial x^{i}}(x),
\end{eqnarray*} where $x=\sum_{i=1}^{l}x^i e_i \in \mathbb{L}$ and $\imath=\sqrt{-1}$ denotes the imaginary unit.
Extending
this multiplication $\mathbb{R}$-linearly, we get the mentioned
symplectic Clifford multiplication.
Let us mention that the multiplication and the differentiation make sense for any $f\in {\bf S}$ because of the ``analytic'' interpretation of the minimal globalization. (See Vogan \cite{Vogan} for details.) Let us remark that in the physical literature, the symplectic Clifford multiplication is usually called the Schr\"odinger quantization prescription.
The following lemma is an easy consequence of the definition of the symplectic Clifford multiplication.
{\bf Lemma 1:} For $v,w \in \mathbb{V}$ and $s \in {\bf S},$ we have
$$v.(w.s)-w.(v.s)=-\imath \omega(v,w)s.$$
{\it Proof.} See Habermann, Habermann \cite{HH}, p.~11. $\Box$
Sometimes, we shall write $v.w.s$ instead of $v.(w.s)$ for $v,w \in \mathbb{V}$ and a symplectic spinor $s\in {\bf S},$ and similarly for a higher number of factors. Further, instead of $e_i.e_j.s,$ we shall simply write $e_{ij}.s,$
and similarly for expressions with a higher number of factors, e.g., $e_{ijk}.s$ abbreviates $e_i.e_j.e_k.s.$
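For $l=1,$ the defining formulas give $e_1.f = \imath x f$ and $e_2.f = f',$ and Lemma 1 reduces to checking a single commutator. The following symbolic sketch is our own illustration (the helper names `e1`, `e2` are ours, not from the paper):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)   # a generic symplectic spinor for l = 1

def e1(g):
    # e_1 acts by multiplication: (e_1.f)(x) = i x f(x)
    return sp.I * x * g

def e2(g):
    # e_2 = e_{1+l} acts by differentiation
    return sp.diff(g, x)

# Lemma 1 with v = e_1, w = e_2 and omega(e_1, e_2) = 1:
# e_1.(e_2.f) - e_2.(e_1.f) = -i omega(e_1, e_2) f = -i f
lhs = e1(e2(f)) - e2(e1(f))
assert sp.simplify(lhs + sp.I * f) == 0
```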
\subsection{Higher symplectic spinors}
In this subsection, we shall present a result on a decomposition
of the tensor product of the metaplectic representation $meta: \tilde{G} \to \mbox{Aut}({\bf S})$ with the wedge
power of the representation $\lambda^*: \tilde{G} \to
GL(\mathbb{V}^*)$ of $\tilde{G}$ (dual to the representation
$\lambda$) into irreducible summands.
Let us reserve the symbol $\rho$ for the
mentioned tensor product representation of $\tilde{G}$, i.e.,\begin{eqnarray*}
&&\rho: \tilde{G} \to \hbox{Aut}(\bigwedge ^{\bullet}\mathbb{V}^*\otimes {\bf S})\\
&&\rho(g)(\alpha\otimes s):=\lambda(g)^{*\wedge r}\alpha\otimes
meta(g)s
\end{eqnarray*}
for $r = 0,\ldots, 2l,$ $g\in \tilde{G},$ $\alpha\in \bigwedge^r\mathbb{V}^*,$ $s\in {\bf S},$ and extend it linearly. For definiteness, let us equip the tensor product
$\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}$ with the so called
Grothendieck tensor product topology. See Vogan \cite{Vogan} and Treves \cite{Treves} for
details on this topological structure. In parallel to the Riemannian case, we shall call the elements of $\bigwedge^{\bullet}\mathbb{V}^* \otimes {\bf S}$ higher symplectic spinors.
Let us introduce the following subsets of the set of pairs of non-negative integers.
We define
\begin{eqnarray*}
&&\Xi:=\{(i,j)\in \mathbb{N}_0 \times \mathbb{N}_0 | i = 0, \ldots, l;j=0,\ldots, i \} \cup \\
&& \qquad \mbox{ } \cup \{(i,j)\in \mathbb{N}_0\times \mathbb{N}_0 |i=l+1,\ldots, 2l, j=0,\ldots, 2l-i\},\\
&&\Xi_+:=\Xi - \{(i,i)|i=0, \ldots, l\} \, \mbox{ and}\\
&&\Xi_-:=\Xi - \{(i,2l-i)|i=l,\ldots, 2l\}.
\end{eqnarray*}
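The index set $\Xi$ can be enumerated directly; for $l=3,$ the column sizes reproduce the triangular shape of Figure 1 below. A small sketch of ours (the helper name `Xi` is hypothetical):

```python
def Xi(l):
    # the index set Xi from the text, for a given l >= 1
    s = {(i, j) for i in range(0, l + 1) for j in range(0, i + 1)}
    s |= {(i, j) for i in range(l + 1, 2 * l + 1)
          for j in range(0, 2 * l - i + 1)}
    return s

# number of modules E^{ij} in each column i of Figure 1 (l = 3)
cols = [sum(1 for (i, j) in Xi(3) if i == k) for k in range(7)]
assert cols == [1, 2, 3, 4, 3, 2, 1]
```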
For each $(i,j) \in \Xi,$ a $\mathfrak{g}^{\mathbb{C}}$-module $\mathbb{E}^{ij}_{\pm}$ was introduced in Kr\'ysl \cite{KryslSVF}. These modules are irreducible infinite dimensional highest weight modules over $\mathfrak{sp}(\mathbb{V},\omega)^{\mathbb{C}}$ and they are described via their highest weights in the mentioned article.
In the next theorem, the module of symplectic spinor valued exterior forms $\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}$ is decomposed into irreducible submodules.
{\bf Theorem 2:} For $l\geq 2,$ the following decomposition into irreducible \linebreak $Mp(\mathbb{V},\omega)$-submodules
$$\bigwedge^{i}\mathbb{V}^*\otimes {\bf S}_{\pm} \simeq \bigoplus_{j, (i,j) \in \Xi} {\bf E}^{ij}_{\pm}, \quad i=0,\ldots, 2l, \, \mbox{ holds.}$$
The modules ${\bf E}^{ij}_{\pm}$ are determined, as objects in the category $\mathcal{R}(\tilde{G}),$ by the fact that first they are submodules of the corresponding tensor product and second the $\mathfrak{g}^{\mathbb{C}}$-structure of $HC({\bf E}^{ij}_{\pm})$ is isomorphic to $\mathbb{E}^{ij}_{\pm}.$
{\it Proof.} See Kr\'ysl \cite{KryslSVF} or Kr\'ysl \cite{KryslJRT}. $\Box$
In Figure 1, the decomposition in the case $l=3$ is displayed.
In the $i^{th}$ column of Figure 1, counted from zero, the summands of $\bigwedge^{i}\mathbb{V}^*\otimes {\bf S},$ $i=0,\ldots,6,$ are written. The meaning of the arrows in the figure will be explained later.
{\bf Remark:} Let us mention that for any $(i,j), (i,k) \in \Xi,$ $j\neq k,$ we have $\mathbb{E}^{ij}_{\pm} \not\simeq \mathbb{E}^{ik}_{\pm}$ (as $\mathfrak{g}^{\mathbb{C}}$-modules) for all combinations of $\pm$ on the left hand as well as on the right hand side. Using this fact, we have that for $i =0,\ldots, 2l$ the $\tilde{G}$-modules $\bigwedge^i\mathbb{V}^*\otimes {\bf S}_{\pm}$ are multiplicity free. Moreover for $(i,j), (k,j) \in \Xi$, we have $\mathbb{E}^{ij}_{\pm} \simeq \mathbb{E}^{kj}_{\mp}.$ These facts will be crucial in the paper.
For our convenience, let us set ${\bf E}^{ij}_{\pm}:=\{0\}$ for $(i,j) \in \mathbb{Z}\times \mathbb{Z} - \Xi$ and ${\bf E}^{ij}:={\bf E}^{ij}_+\oplus {\bf E}^{ij}_-.$
$$\xymatrix{{\bf E}^{0,0}\ar[r]\ar[dr] &{\bf E}^{1,0}\ar[r]\ar[dr] &{\bf E}^{2,0}\ar[r]\ar[dr] &{\bf E}^{3,0} \ar[r]\ar[dr] &{\bf E}^{4,0}\ar[r]\ar[dr] &{\bf E}^{5,0}\ar[r] &{\bf E}^{6,0}\\
&{\bf E}^{1,1} \ar[ur]\ar[r]\ar[dr] &{\bf E}^{2,1}\ar[r]\ar[dr] \ar[ur] & {\bf E}^{3,1}\ar[r]\ar[dr]\ar[ur] & {\bf E}^{4,1} \ar[r]\ar[ur] &{\bf E}^{5,1}\ar[ur] & \\
& & {\bf E}^{2,2} \ar[r] \ar[ur] \ar[dr] & {\bf E}^{3,2} \ar[ur] \ar[r] & {\bf E}^{4,2}\ar[ur]&&\\
&&& {\bf E}^{3,3}\ar[ur]&&&}
$$
\centerline{Figure 1.}
Now, we shall introduce four operators which help us to describe the action of the symplectic Ricci curvature tensor field acting on symplectic spinor valued exterior differential forms. For $r=0,\ldots, 2l,$ $\alpha \otimes s \in \bigwedge^r
\mathbb{V}^*\otimes {\bf S}$ and $\sigma \in \odot^2 \mathbb{V}^*,$ we set
\begin{eqnarray*}
X&:& \bigwedge^{r}\mathbb{V}^*\otimes {\bf S} \to
\bigwedge ^{r+1}\mathbb{V}^*\otimes {\bf S}, \, \mbox{ }X(\alpha \otimes
s):=\sum_{i=1}^{2l}\epsilon^i\wedge \alpha \otimes e_i.s,\\
Y&:& \bigwedge^{r}\mathbb{V}^{*}\otimes {\bf S} \to \bigwedge^{r-1}\mathbb{V}^*\otimes {\bf S}, \, \mbox{ } Y(\alpha \otimes s):=\sum_{i,j=1}^{2l} \omega^{ij}\iota_{e_i}\alpha \otimes e_j.s, \\
\Sigma^{\sigma}&:&\bigwedge^{r}\mathbb{V}^*\otimes {\bf S}\to \bigwedge^{r+1} \mathbb{V}^* \otimes {\bf S}, \, \mbox{ } \Sigma^{\sigma} (\alpha \otimes s):=\sum_{i,j=1}^{2l}{\sigma^i}_j\epsilon^j\wedge \alpha \otimes e_i.s \, \mbox{ and }\\
\Theta^{\sigma}&:&\bigwedge^{r}\mathbb{V}^*\otimes {\bf S} \to \bigwedge^{r}\mathbb{V}^*\otimes {\bf S}, \, \mbox{ } \Theta^{\sigma} (\alpha \otimes s):=\sum_{i,j=1}^{2l} \alpha \otimes \sigma^{ij}e_{ij}.s \, \mbox{ }
\end{eqnarray*}
and extend them linearly. Here $\sigma_{ij}:=\sigma(e_i,e_j),$ $i,j =1,\ldots, 2l,$ and the contraction of an exterior form $\alpha \in \bigwedge^{\bullet}\mathbb{V}^*$ by a vector $v\in \mathbb{V}$ is denoted by $\iota_{v}\alpha.$
{\bf Remark:}
\begin{itemize}
\item[1)] One easily checks that the operators are independent of the choice of a symplectic basis $\{e_i\}_{i=1}^{2l}.$
The operators $X$ and $Y$ are used to prove the Howe correspondence for $Mp(\mathbb{V},\omega)$ acting on
$\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}$ via the representation $\rho.$ See Kr\'ysl \cite{KryslJRT} for details.
\item[2)] The symmetric tensor $\sigma$ is an infinitesimal version of a part of the curvature of a Fedosov connection. This part is called the symplectic Ricci curvature tensor field and will be introduced below. The operators $\Sigma^{\sigma}$ and $\Theta^{\sigma}$ will help us to describe the action of the symplectic Ricci curvature tensor field on symplectic spinor valued exterior differential forms.
\end{itemize}
In what follows, we shall write $\iota_{e_{ij}}\alpha$ instead of $\iota_{e_i} \iota_{e_j}\alpha,$ $i,j=1,\ldots, 2l,$ and similarly for a higher number of contracting elements.
Using Lemma 1, it is easy to compute that
\begin{eqnarray}
X^2(\alpha\otimes s)=-\frac{\imath}{2}\omega_{ij}\epsilon^i\wedge \epsilon^j \wedge \alpha \otimes s \mbox{ and }\qquad Y^2(\alpha \otimes s)=\frac{\imath}{2}\omega^{ij}\iota_{e_{ij}}\alpha \otimes s
\end{eqnarray}
for any element $\alpha \otimes s \in \bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}.$
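Equation (1) can be checked symbolically in the smallest case $l=1$ on degree-zero elements. The following sketch is a minimal model of our own (the helper names `clifford`, `wedge_sign`, `X` are hypothetical); a symplectic spinor valued form is represented as a dictionary from sorted tuples of form indices to polynomial spinors:

```python
import sympy as sp

x = sp.symbols('x')

# Minimal model with l = 1: the spinor space is modelled by polynomials
# in x, with e_1 acting as multiplication by i*x and e_2 as d/dx.
def clifford(i, s):
    return sp.I * x * s if i == 1 else sp.diff(s, x)

def wedge_sign(i, idx):
    # sign and index tuple of eps^i wedge eps^{idx};
    # (None, None) if eps^i already occurs in the product
    if i in idx:
        return None, None
    pos = sum(1 for j in idx if j < i)
    return (-1) ** pos, tuple(sorted(idx + (i,)))

def X(form):
    # X(alpha tensor s) = sum_i eps^i wedge alpha tensor e_i.s;
    # a form is a dict: sorted index tuple -> spinor coefficient
    out = {}
    for idx, s in form.items():
        for i in (1, 2):
            sign, new = wedge_sign(i, idx)
            if sign is not None:
                out[new] = sp.expand(out.get(new, 0) + sign * clifford(i, s))
    return out

s = x**3 + 2*x                       # an arbitrary polynomial spinor
res = X(X({(): s}))
# equation (1) for l = 1: X^2 (1 tensor s) = -i eps^1 wedge eps^2 tensor s
assert sp.simplify(res[(1, 2)] + sp.I * s) == 0
```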
In order to be able to use the operators $X$ and $Y$ in a geometric setting, and for some further reasons, we shall need the following
{\bf Lemma 3:}
\begin{itemize}
\item[1)] The operators $X$ and $Y$ are $\tilde{G}$-equivariant with respect to the representation $\rho$ of $\tilde{G}.$
\item[2)]For $(i,j) \in \Xi_-,$ the operator $X$ is an isomorphism if restricted to ${\bf E}^{ij}.$\\
For $(i,j) \in \Xi_+,$ the operator $Y$ is an isomorphism if restricted to ${\bf E}^{ij}.$
\end{itemize}
{\it Proof.} For the $\tilde{G}$-equivariance of $X$ and $Y,$ see Kr\'ysl \cite{KryslRarita}.
The fact that the mentioned restrictions are isomorphisms is proved in Kr\'ysl \cite{KryslJRT}.
$\Box$
In the next lemma, four relations are proved which will be used later in order to determine a superset of the image of a restriction of the symplectic Ricci curvature tensor field acting on symplectic spinor valued exterior differential forms. Often, we shall write $\Sigma$ and $\Theta$ simply instead of the more explicit $\Sigma^{\sigma}$ and $\Theta^{\sigma}.$ The symmetric tensor $\sigma$ is assumed to be chosen. The symbol $\{,\}$ denotes the anticommutator on $\mbox{End}(\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}).$
{\bf Lemma 4:} The following relations
\begin{eqnarray}
\{\Sigma, X \} &=& 0, \\
\left[ \{ \Sigma, Y\} , Y^2 \right] &=& 0,\\
\left[X,\Theta \right]&=& 2\imath\Sigma \, \mbox{ and } \\
\left[\Theta, Y^2\right]&=&0
\end{eqnarray}
hold on $\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}.$
{\it Proof.}
We shall prove these identities for $\alpha\otimes s \in \bigwedge^{i}\mathbb{V}^{*}\otimes {\bf S},$ $i=0,\ldots, 2l$ only. The statement then follows by linearity of the considered operators.
\begin{itemize}
\item[1)] Let us compute
\begin{eqnarray*}
(X\Sigma + \Sigma X)(\alpha \otimes s)&=&X({\sigma^i}_j\epsilon^j\wedge \alpha \otimes e_i.s)+\Sigma(\epsilon^i\wedge \alpha \otimes e_i.s)\\
&=& {\sigma^i}_j \epsilon^k \wedge \epsilon^j \wedge \alpha \otimes e_{ki}.s
+{\sigma^j}_k\epsilon^k \wedge \epsilon^i \wedge \alpha \otimes e_{ji}.s\\
&=&{\sigma^i}_k\epsilon^j \wedge \epsilon^k \wedge \alpha \otimes e_{ji}.s
+{\sigma^i}_k\epsilon^k\wedge \epsilon^j \wedge \alpha \otimes e_{ij}.s\\
&=&{\sigma^i}_k\epsilon^j\wedge \epsilon^k \wedge \alpha \otimes (e_{ji}-e_{ij}).s\\
&=&-\imath {\sigma^i}_k \omega_{ji} \epsilon^j \wedge \epsilon^k \wedge \alpha \otimes s\\
&=&\imath \sigma_{jk} \epsilon^j\wedge \epsilon^k \wedge \alpha \otimes s\\
&=&0,
\end{eqnarray*}
where we have renumbered indices, used Lemma 1 and the fact that $\sigma$ is symmetric. In what follows, we shall use similar procedures without mentioning them explicitly.
\item[2)] Let us compute
\begin{multline*}
P(\alpha \otimes s):=\{\Sigma, Y\}(\alpha \otimes s)\\
\begin{aligned}
&= Y({\sigma^i}_j\epsilon^j \wedge \alpha \otimes e_i.s)+\Sigma (\omega^{ij}\iota_{e_i}\alpha\otimes e_j.s)\\
&= {\sigma^i}_j\omega^{kl}\iota_{e_k}(\epsilon^j \wedge \alpha) \otimes e_{li}.s + \omega^{ij}{\sigma^k}_l\epsilon^l\wedge \iota_{e_{i}} \alpha\otimes e_{kj}.s \\
&={\sigma^i}_j \omega^{kl}(\delta^j_k\alpha - \epsilon^j \wedge \iota_{e_k}\alpha)\otimes e_{li}.s+
\omega^{ij}{\sigma^k}_l \epsilon^l \wedge \iota_{e_{i}}\alpha \otimes e_{kj}.s\\
&=\sigma^{il}\alpha\otimes e_{li}.s-{\sigma^i}_j\omega^{kl}\epsilon^j \wedge \iota_{e_k}\alpha\otimes e_{li}.s+ \omega^{ij}{\sigma^k}_l \epsilon^l \wedge \iota_{e_{i}}\alpha \otimes e_{kj}.s\\
&=\sigma^{il}\alpha\otimes e_{li}.s-{\sigma^k}_l\omega^{ij}\epsilon^l\wedge \iota_{e_i}\alpha \otimes e_{jk}.s+
\omega^{ij}{\sigma^k}_l \epsilon^l \wedge \iota_{e_{i}}\alpha \otimes e_{kj}.s\\
&=\sigma^{il}\alpha\otimes e_{li}.s -\imath \omega^{ij}\omega_{kj}{\sigma^k}_l\epsilon^l\wedge \iota_{e_{i}}\alpha \otimes s\\
&=\sigma^{il}\alpha\otimes e_{li}.s - \imath {\sigma^i}_j\epsilon^j \wedge \iota_{e_i}\alpha \otimes s.
\end{aligned}
\end{multline*}
Now, we use the derived prescription for $P$ and equation (1) to compute
\begin{multline*}
\left[P,2\imath Y^2 \right](\alpha\otimes s)= 2\imath PY^2(\alpha\otimes s) - 2\imath Y^2 P(\alpha \otimes s)\\
\quad\begin{aligned}
=& -P(\omega^{ij}\iota_{e_{ij}}\alpha \otimes s) - 2\imath Y^2 (\sigma^{ij}\alpha\otimes e_{ji}.s - \imath {\sigma^i}_j\epsilon^j\wedge \iota_{e_i}\alpha \otimes s)\\
=& -\omega^{ij}\sigma^{kl}\iota_{e_{ij}}\alpha \otimes e_{lk}.s + \imath \omega^{ij}{\sigma^k}_l\epsilon^l \wedge \iota_{e_{kij}} \alpha \otimes s\\
&+\sigma^{ij}\omega^{kl}\iota_{e_{kl}}\alpha \otimes e_{ij}.s - \imath {\sigma^i}_j \omega^{kl}\iota_{e_{kl}}(\epsilon^j \wedge \iota_{e_i}\alpha )\otimes s\\
=& -\omega^{ij}\sigma^{kl}\iota_{e_{ij}}\alpha \otimes e_{kl}.s + \imath \omega^{ij}{\sigma^k}_l \epsilon^l \wedge \iota_{e_{kij}} \alpha \otimes s\\
&+ \sigma^{ij}\omega^{kl}\iota_{e_{kl}}\alpha \otimes e_{ji}.s -\imath
\omega^{kl}{\sigma^i}_j(\delta^j_l\iota_{e_{ki}}\alpha - \delta^j_k \iota_{e_{li}}\alpha + \epsilon^j \wedge \iota_{e_{kli}}\alpha)\otimes s\\
=& -\omega^{ij}\sigma^{kl}\iota_{e_{ij}}\alpha \otimes e_{kl}.s + \imath \omega^{ij}{\sigma^k}_l \epsilon^l \wedge \iota_{e_{kij}} \alpha \otimes s\\
&+\sigma^{kl}\omega^{ij}\iota_{e_{ij}}\alpha\otimes e_{kl}.s-\imath \omega^{ij}{\sigma^{k}}_l\epsilon^l\wedge \iota_{e_{ijk}}\alpha\otimes s\\
=& \, 0.
\end{aligned}
\end{multline*}
\item[3)] Due to the definition of $\Theta,$ we have
\begin{eqnarray*}
\left[X, \Theta \right](\alpha\otimes s) &=& \epsilon^k \wedge \alpha \otimes \sigma^{ij}e_{kij}.s- \epsilon^i \wedge \alpha \otimes \sigma^{jk}e_{jki}.s\\
&=&\epsilon^k\wedge \alpha \otimes \sigma^{ij}e_{kij}.s - \epsilon^k \wedge \alpha \otimes \sigma^{ij}e_{ijk}.s\\
&=& \sigma^{ij}\epsilon^k\wedge \alpha\otimes (e_{ikj}.s- \imath \omega_{ki}e_j.s - e_{ijk}.s)\\
&=& \sigma^{ij}\epsilon^k\wedge \alpha\otimes (e_{ijk}.s- \imath \omega_{kj}e_i.s - \imath \omega_{ki}e_j.s - e_{ijk}.s)\\
&=& 2 \imath \Sigma (\alpha \otimes s).
\end{eqnarray*}
\item[4)] This relation follows easily from the definition of $\Theta$ and the relation (1).
$\Box$
\end{itemize}
In the next proposition, a superset of the image of $\Sigma$ and $\Theta$ restricted to ${\bf E}^{ij},$ for $(i,j) \in \Xi,$ is determined.
{\bf Proposition 5:} For $(i,j) \in \Xi,$ we have
\begin{eqnarray*}
\Sigma_{|{\bf E}^{ij}}&:& {\bf E}^{ij} \to {\bf E}^{i+1,j-1} \oplus {\bf E}^{i+1,j} \oplus {\bf E}^{i+1,j+1}\, \mbox{ and }\\
\Theta_{|{\bf E}^{ij}}&:& {\bf E}^{ij} \to {\bf E}^{i,j-1} \oplus {\bf E}^{ij} \oplus {\bf E}^{i,j+1}.
\end{eqnarray*}
{\it Proof.}
\begin{itemize}
\item[1)]For $i=0,\ldots, l,$ let us choose an element $\psi =\alpha \otimes s \in {\bf E}^{ii}.$ Using the relation (3), we have
$0=[P,Y^2]\psi = (P Y^2 - Y^2 P)\psi = (\Sigma Y^3 + Y\Sigma Y^2 -Y^2\Sigma Y + Y^3\Sigma)\psi.$ Because $Y$ is $\tilde{G}$-equivariant (Lemma 3 item 1), decreasing the form degree of $\psi$ by one and there is no summand isomorphic to ${\bf E}^{ii}_+$ or ${\bf E}^{ii}_-$ in $\bigwedge^{i-1}\mathbb{V}^*\otimes {\bf S}$ (Remark bellow the Theorem 2), $Y\psi=0.$ Using this equation, we see that the first three summands in the above expression for $[P,Y^2]$ are zero. Therefore we have $0=Y^3\Sigma \psi.$ Because $Y$ is injective on ${\bf E}^{ij}$ for $(i,j)\in \Xi_+$ (Lemma 3 item 2), we see that $\Sigma \psi \in {\bf E}^{i+1,i-1} \oplus {\bf E}^{i+1,i} \oplus {\bf E}^{i+1,i+1}.$
Now, let us consider a general $(i,j) \in \Xi$ and $\psi \in {\bf E}^{ij}.$ Let us take an element $\psi' \in {\bf E}^{jj}$ such that $\psi=X^{(i-j)}\psi'.$ This element exists because according to Lemma 3 item 2, the operator $X$ is an isomorphism when restricted to ${\bf E}^{ij}$ for $(i,j)\in \Xi_-.$
Because of the relation (2), we have $\Sigma\psi=\Sigma X^{(i-j)}\psi'=\pm X^{(i-j)}\Sigma \psi'.$ From the previous item, we know that $\Sigma \psi' \in {\bf E}^{j+1,j-1} \oplus {\bf E}^{j+1,j} \oplus {\bf E}^{j+1,j+1}.$ Because $X$ is $\tilde{G}$-equivariant (Lemma 3 item 1) and the only summands in $\bigwedge^{i+1}\mathbb{V}^*\otimes {\bf S}$ isomorphic to ${\bf E}^{j+1,j-1} \oplus {\bf E}^{j+1,j} \oplus {\bf E}^{j+1,j+1}$ are those described in the formulation of this proposition (see the Remark below Theorem 2), the statement follows.
\item[2)] For $i=0,\ldots, l,$ let us consider an element $\psi = \alpha \otimes s \in {\bf E}^{ii}.$ Using the relation (5), we have $0=[\Theta,Y^2]\psi = \Theta Y^2 \psi + Y^2 \Theta \psi.$ Using reasoning similar to that in the first item, we get $Y\psi=0.$ Using the expression for $[\Theta, Y^2]$ above, we get $Y^2\Theta \psi = 0$ and consequently, $\Theta \psi \in {\bf E}^{ii}\oplus {\bf E}^{i,i-1}.$ Now, let us suppose $\psi \in {\bf E}^{ij}$ for $(i,j) \in \Xi.$ There exists an element $\psi' \in {\bf E}^{jj}$ such that $\psi = X^{(i-j)}\psi'$ (Lemma 3 item 2).
Using the relations (4) and (2), we have $\Theta \psi = \Theta X^{(i-j)}\psi'= X^{(i-j)}\Theta \psi'$ if $i-j$ is even and $(X^{(i-j)}\Theta - 2\imath X^{(i-j-1)}\Sigma)\psi'$ if $i-j$ is odd. Using the fact $\Sigma_{|{\bf E}^{ij}}: {\bf E}^{ij} \to {\bf E}^{i+1,j-1} \oplus {\bf E}^{i+1,j} \oplus {\bf E}^{i+1,j+1},$ the statement follows by the same line of reasoning as in the first item.
$\Box$
\end{itemize}
\section{Metaplectic structures and symplectic curvature tensors}
Having finished the algebraic part of the paper, let us now describe the geometric structures we shall be investigating.
We begin with a recollection of results of Vaisman in \cite{Vaisman} and of Gelfand, Retakh and Shubin in \cite{GSR}.
Let $(M,\omega)$ be a symplectic manifold and
$\nabla$ be a symplectic torsion-free affine connection. By symplectic and torsion-free, we mean $\nabla \omega =0$ and $T(X,Y):=\nabla_XY-\nabla_YX-[X,Y]=0$ for all $X,Y \in \mathfrak{X}(M),$ respectively.
Such connections are usually called Fedosov connections. In what follows, we shall call the triple $(M,\omega,\nabla)$ a Fedosov manifold.
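For concreteness, here is a minimal example (ours, not part of the original text) in the notation just introduced; it records the flat model, in which all the curvature quantities defined below vanish.

```latex
% Illustrative example (added): the flat Fedosov manifold.
{\bf Example:} Let $M=\mathbb{R}^{2l}$ with Darboux coordinates
$(x^1,\ldots,x^l,y^1,\ldots,y^l),$ let
$$\omega_0=\sum_{i=1}^{l} dx^i\wedge dy^i,$$
and let $\nabla$ be the flat connection determined by the global
coordinate frame. Since $\omega_0$ has constant coefficients,
$\nabla\omega_0=0,$ and the flat connection is torsion-free. Thus
$(\mathbb{R}^{2l},\omega_0,\nabla)$ is a Fedosov manifold, and its
curvature $R^{\nabla}$ vanishes identically.
```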
To fix our notation, let us recall the classical definition of the curvature tensor $R^{\nabla}$ of the connection $\nabla$ that we shall be using here. Let
$$R^{\nabla}(X,Y)Z:=\nabla_X\nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]}Z$$ for
$X,Y,Z \in \mathfrak{X}(M).$
Let us choose a local symplectic frame $\{e_i\}_{i=1}^{2l}$ over an open subset $U\subseteq M.$
We shall often write expressions in which indices $i,j,k,l,$ etc., occur. We shall implicitly assume that these indices run from $1$ to $2l$ without mentioning it explicitly.
We set $$R_{ijkl}:=\omega(R^{\nabla}(e_k,e_l)e_j,e_i).$$ Let us mention that we are using the convention of Vaisman \cite{Vaisman}, which is different from the one used in Habermann, Habermann \cite{HH}.
From the symplectic curvature tensor field $R^{\nabla}$, we can build the symplectic Ricci curvature tensor field $\sigma^{\nabla}$ defined by the
classical formula
$$\sigma^{\nabla}(X,Y):=\mbox{Tr}(V \mapsto R^{\nabla}(V,X)Y)$$ for each $X,Y \in \mathfrak{X}(M)$ (the variable $V$ denotes a vector field on $M$). For the chosen frame and $i,j=1,\ldots, 2l$, we set
$$\sigma_{ij}:=\sigma^{\nabla}(e_i,e_j).$$
Further, let us define
\begin{eqnarray}
2(l+1)\widetilde{\sigma}^{\nabla}_{ijkl}&:=&\omega_{il}\sigma_{jk}-\omega_{ik}\sigma_{jl}+\omega_{jl}\sigma_{ik}-\omega_{jk}\sigma_{il}+2\sigma_{ij}\omega_{kl},\\
\widetilde{\sigma}^{\nabla}(X,Y,Z,V)&:=&\widetilde{\sigma}_{ijkl}X^iY^jZ^kV^l\, \mbox{ and} \nonumber\\
W^{\nabla}&:=&R^{\nabla}-\widetilde{\sigma}^{\nabla}
\end{eqnarray}
for local vector fields $X=X^ie_i,$ $Y=Y^je_j,$ $Z=Z^ke_k$ and $V=V^le_l.$
We will call the tensor field $\widetilde{\sigma}^{\nabla}$ the extended symplectic Ricci curvature tensor field and $W^{\nabla}$ the symplectic Weyl curvature tensor field.
These tensor fields were already introduced in Vaisman \cite{Vaisman}. We shall often drop the index $\nabla$ in the previous expressions. Thus, we shall often write $W,$ $\sigma$ and $\widetilde{\sigma}$ instead of
$W^{\nabla},$ $\sigma^{\nabla}$ and $\widetilde{\sigma}^{\nabla},$ respectively.
In the next lemma, the symmetry of $\sigma$ is stated.
{\bf Lemma 6:} The symplectic Ricci curvature tensor field $\sigma$ is symmetric.
{\it Proof.} See Vaisman \cite{Vaisman}. $\Box$
Let us now describe the geometric structure with the help of which the symplectic twistor operators are defined. This structure, called a metaplectic structure, is a precise
symplectic analogue of the notion of a spin structure in Riemannian geometry.
For a symplectic manifold $(M^{2l}, \omega)$ of dimension $2l,$
let us denote the bundle of symplectic frames in $TM$ by
$\mathcal{P}$ and the foot-point projection of $\mathcal{P}$ onto
$M$ by $p.$ Thus $(p:\mathcal{P}\to M, G),$ where $G\simeq
Sp(2l,\mathbb{R}),$ is a principal $G$-bundle over $M$. As in
subsection 2, let $\lambda: \tilde{G}\to G$ be a member
of the isomorphism class of the non-trivial two-fold coverings of
the symplectic group $G.$ In particular, $\tilde{G}\simeq
Mp(2l,\mathbb{R}).$ Further, let us consider a principal
$\tilde{G}$-bundle $(q:\mathcal{Q}\to M, \tilde{G})$ over the
symplectic manifold $(M,\omega).$ We call a pair
$(\mathcal{Q},\Lambda)$ a metaplectic structure if $\Lambda:
\mathcal{Q} \to \mathcal{P}$ is a surjective bundle homomorphism
over the identity on $M$ and if the following diagram,
$$\begin{xy}\xymatrix{
\mathcal{Q} \times \tilde{G} \ar[dd]^{\Lambda\times \lambda} \ar[r]& \mathcal{Q} \ar[dd]^{\Lambda} \ar[dr]^{q} &\\
& &M\\
\mathcal{P} \times G \ar[r] & \mathcal{P} \ar[ur]_{p} }\end{xy}$$
with the
horizontal arrows being respective actions of the displayed groups, commutes.
See, e.g., Habermann, Habermann \cite{HH} and Kostant \cite{Kostant2} for
details on metaplectic structures. Let us only remark that typical examples of symplectic manifolds admitting a metaplectic structure are cotangent bundles of orientable manifolds (phase spaces), Calabi-Yau manifolds and complex projective spaces $\mathbb{CP}^{2k+1}$, $k \in \mathbb{N}_0.$
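A sketch (ours, not from the original text) of why these particular examples appear, assuming the standard existence criterion from the cited literature: a symplectic manifold admits a metaplectic structure if and only if the first Chern class $c_1$ of a compatible almost complex structure is even.

```latex
% Sketch (added), assuming the criterion that a metaplectic structure
% exists iff $c_1(M)\in H^2(M;\mathbb{Z})$ is divisible by two.
{\bf Remark:} For $\mathbb{CP}^n$ one has $c_1(\mathbb{CP}^n)=(n+1)h,$
where $h$ denotes the generator of $H^2(\mathbb{CP}^n;\mathbb{Z}).$ Thus
$c_1$ is even if and only if $n$ is odd, which singles out the spaces
$\mathbb{CP}^{2k+1}$ listed above. For a cotangent bundle $T^*N$ of an
orientable manifold $N,$ the tangent bundle of $T^*N$ is isomorphic to
the complexification of a real bundle, whence $c_1(T^*N)=0;$ for a
Calabi-Yau manifold, $c_1=0$ holds by definition.
```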
Let us denote the vector bundle associated to the introduced principal $\tilde{G}$-bundle
$(q:\mathcal{Q}\to M,\tilde{G})$ via the representation $meta$ on ${\bf S}$ by $\mathcal{S}.$ We shall call
this associated vector bundle the symplectic spinor bundle. Thus, we have $\mathcal{S}=\mathcal{Q}\times_{meta}{\bf S}.$ The sections $\phi \in \Gamma(M,\mathcal{S})$ will be called symplectic spinor fields.
Let us denote the space of symplectic valued exterior differential forms $\Gamma(M,\mathcal{Q}\times_{\rho}(\bigwedge^{\bullet}\mathbb{V}^*\otimes{\bf S}))$ by $\Omega^{\bullet}(M,\mathcal{S})$ and call it simply the space of symplectic spinor valued forms.
Further for $(i,j)\in \mathbb{Z}\times \mathbb{Z},$ we define the associated vector bundles $\mathcal{E}^{ij}$ by the prescription $\mathcal{E}^{ij}:=\mathcal{Q}\times_{\rho} {\bf E}^{ij}.$
Because the operators $X,Y$ are $\tilde{G}$-equivariant (Lemma 3 item 1), they lift to operators
acting on sections of the corresponding associated vector bundles.
We shall use the same symbols for these ``lifts'' to the associated vector bundle structure as for the operators defined above. Because for each $i=0,\ldots, 2l,$ the decomposition
$\bigwedge^i\mathbb{V}^* \otimes {\bf S} \simeq \bigoplus_{j,(i,j)\in \Xi} {\bf E}^{ij}$ is multiplicity free (see the Remark below Theorem 2), there exist uniquely defined projections $p^{ij}:\Omega^i(M,\mathcal{S}) \to \Gamma(M,\mathcal{E}^{ij}),$ $(i,j)\in \mathbb{Z}\times \mathbb{Z}.$
Now, let us suppose that $(M,\omega)$ is equipped with a Fedosov connection $\nabla$. The connection $\nabla$ determines the associated principal bundle connection $Z$
on the principal bundle $(p:\mathcal{P}\to M, G).$
This connection lifts to a principal bundle connection on the principal bundle
$(q:\mathcal{Q}\to M, \tilde{G})$ and defines the associated covariant derivative on the symplectic spinor bundle $\mathcal{S},$ which we shall denote by $\nabla^S$ and call the symplectic spinor covariant derivative. See Habermann, Habermann \cite{HH} for details. The symplectic spinor covariant derivative induces the exterior symplectic spinor derivative $d^{\nabla^S}$ acting on $\Omega^{\bullet}(M,\mathcal{S}).$
The curvature tensor field $R^{\Omega^{\bullet}(M,\mathcal{S})}$ acting on the symplectic spinor valued forms is given by the classical formula
$$R^{\Omega^{\bullet}(M,\mathcal{S})}:=d^{\nabla^S} d^{\nabla^S}.$$
In the next theorem, a superset of the image of $d^{\nabla^S}$ restricted to $\Gamma(M,\mathcal{E}^{ij}),$ $(i,j) \in \Xi,$ is determined.
{\bf Theorem 7:} Let $(M,\omega,\nabla)$ be a Fedosov manifold admitting a metaplectic structure. Then for the exterior symplectic spinor derivative $d^{\nabla^S},$ we have
$$d^{\nabla^S}_{|\Gamma(M,\mathcal{E}^{ij})}: \Gamma(M,\mathcal{E}^{ij})\to \Gamma(M,\mathcal{E}^{i+1,j-1}\oplus \mathcal{E}^{i+1,j}\oplus \mathcal{E}^{i+1,j+1}),$$ where $(i,j) \in \Xi.$
{\it Proof.} See Kr\'ysl \cite{KryslSVF}. $\Box$
{\bf Remark:}
From the proof of the theorem, it is easy to see that it can be extended to the case in which $(M,\omega)$ is presymplectic and the symplectic connection $\nabla$ has a non-zero torsion.
For $l=3$ and any $(i,j)\in \Xi,$ the mappings $d^{\nabla^S}$ restricted to $\Gamma(M,\mathcal{E}^{ij})$ are displayed as arrows in Figure 1 above. (The exterior covariant derivative $d^{\nabla^S}$ maps $\Gamma(M,\mathcal{E}^{ij})$ into the three ``neighbor'' subspaces.)
\subsection{Curvature tensor on symplectic spinor valued forms and the complex of symplectic twistor operators}
Let $(M,\omega, \nabla)$ be a Fedosov manifold admitting a metaplectic structure $(\mathcal{Q},\Lambda).$
In the next lemma, the action of $R^S:=d^{\nabla^S}\circ \nabla^S$ on the space of symplectic spinor fields is described using just the symplectic curvature tensor field $R^{\nabla}$ of $\nabla.$
{\bf Lemma 8:} Let $(M,\omega,\nabla)$ be a
Fedosov manifold admitting a metaplectic structure.
Then for a symplectic spinor field $\phi \in \Gamma(M,\mathcal{S}),$ we have
$$R^S\phi=\frac{\imath}{2} {R^{ij}}_{kl}\epsilon^k\wedge \epsilon^l \otimes e_i.e_j.\phi.$$
{\it Proof.} See Habermann, Habermann \cite{HH}, p. 42. $\Box$
For our convenience, let us set $m_i:=i$ for $i=0,\ldots, l$ and $m_i:=2l-i$ for $i=l+1,\ldots, 2l.$
Now, we can define the symplectic twistor operators, which we shall need to introduce the mentioned complex.
For $i=0,\ldots, 2l-1,$ we set
$$T_i:\Gamma(M,\mathcal{E}^{im_i})\to \Gamma(M,\mathcal{E}^{i+1,m_{i+1}}), \quad \mbox{ } T_i:=p^{i+1, m_{i+1}}d^{\nabla^S}_{|\Gamma(M,\mathcal{E}^{im_i})}$$ and call these operators {\it symplectic twistor operators.}
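To illustrate the definition (our worked instance, not from the original text), for $l=3,$ the case of Figure 1, the sequence $m_i$ and the resulting sources and targets of the twistor operators read as follows.

```latex
% Worked instance (added) of the definition of $m_i$ for $l=3$:
% $m_i=i$ for $i=0,\ldots,l$ and $m_i=2l-i$ for $i=l+1,\ldots,2l.$
$$(m_0,m_1,m_2,m_3,m_4,m_5,m_6)=(0,1,2,3,2,1,0),$$
so that, e.g.,
$$T_3:\Gamma(M,\mathcal{E}^{33})\to\Gamma(M,\mathcal{E}^{42})
\quad \mbox{ and } \quad
T_5:\Gamma(M,\mathcal{E}^{51})\to\Gamma(M,\mathcal{E}^{60}).$$
```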
Informally, one can say that these operators go along the edges of the triangle in Figure 1.
Let us notice that $Y(\nabla^S-T_0)$ is, up to a nonzero scalar multiple, the so-called symplectic Dirac operator introduced by K. Habermann in \cite{KH}.
{\bf Theorem 9:} Let $(M^{2l},\omega,\nabla)$ be a Fedosov manifold admitting a metaplectic structure. If $l\geq 2$ and the symplectic Weyl tensor field $W^{\nabla}=0,$ then
$$0 \longrightarrow
\Gamma(M,\mathcal{E}^{00}) \overset{T_0}{\longrightarrow}
\Gamma(M,\mathcal{E}^{11}) \overset{T_{1}}{\longrightarrow}
\cdots \overset{T_{l-1}}{\longrightarrow}
\Gamma(M,\mathcal{E}^{ll}) \longrightarrow 0 \mbox{ and}
$$
$$0 \longrightarrow
\Gamma(M,\mathcal{E}^{ll}) \overset{T_l}{\longrightarrow}
\Gamma(M,\mathcal{E}^{l+1,l+1}) \overset{T_{l+1}}{\longrightarrow}
\cdots \overset{T_{2l-1}}{\longrightarrow}
\Gamma(M,\mathcal{E}^{2l,2l}) \longrightarrow 0
$$
are complexes.
{\it Proof.}
\begin{itemize}
\item[1)] In this item, we prove that for an element $\psi \in \Omega^{\bullet}(M,\mathcal{S}),$
$$ R^{\Omega^{\bullet}(M,\mathcal{S})} \psi = \frac{\imath}{l+1}(\imath X^2\Theta^{\sigma} - X\Sigma^{\sigma})\psi.$$
For $\psi=\alpha \otimes \phi \in \Omega^{\bullet}(M,\mathcal{S}),$ we can write
\begin{multline*}
R^{\Omega^{\bullet}(M,\mathcal{S})}(\alpha \otimes \phi) = d^{\nabla^{S}}d^{\nabla^{S}}(\alpha \otimes \phi) = d^{\nabla^{S}}(d\alpha \otimes \phi + (-1)^{deg(\alpha)}\alpha \wedge \nabla^S \phi)\\
\begin{aligned}
&= d^2\alpha \otimes \phi + (-1)^{deg(\alpha)+1}d\alpha \wedge \nabla^S \phi+(-1)^{deg(\alpha)}d\alpha\wedge\nabla^S \phi+\\
&\, (-1)^{deg(\alpha)}(-1)^{deg(\alpha)}\alpha \wedge d^{\nabla^S} \nabla^S\phi = \alpha \wedge \frac{\imath}{2} {R^{ij}}_{kl}\epsilon^k\wedge \epsilon^l \otimes e_{ij}.\phi\\
&= \frac{\imath}{2} {R^{ij}}_{kl}\epsilon^k \wedge \epsilon^l \wedge \alpha \otimes e_{ij}.\phi,
\end{aligned}
\end{multline*}
where we have used the Lemma 8.
Using this computation, the definition of the symplectic Weyl curvature tensor field $W^{\nabla}$ (Eqn. (7)), the definition of the extended symplectic Ricci curvature tensor field $\widetilde{\sigma}$ (Eqn. (6)) and the assumption $W^{\nabla}=0$, we get
\begin{multline*}
-4(l+1)\imath R^{\Omega^{\bullet}(M,\mathcal{S})}(\alpha\otimes\phi)=2(l+1){R^{ij}}_{kl}\epsilon^k\wedge\epsilon^l\wedge \alpha \otimes e_{ij}.\phi\\
\begin{aligned}
&=2(l+1)({W^{ij}}_{kl}+{\mbox{$\widetilde{\sigma}^{ij}$}}_{kl})\epsilon^k\wedge\epsilon^l\wedge\alpha\otimes e_{ij}.\phi\\
&=2(l+1){\mbox{$\widetilde{\sigma}^{ij}$}}_{kl}\epsilon^k\wedge\epsilon^l\wedge \alpha \otimes e_{ij}.\phi\\
&=({\omega^i}_l{\sigma^{j}}_k-{\omega^{i}}_{k}{\sigma^j}_l+{\omega^{j}}_l{\sigma^{i}}_{k}-{\omega^{j}}_k{\sigma^{i}}_{l}+2\sigma^{ij}\omega_{kl})\epsilon^k\wedge\epsilon^l\wedge\alpha \otimes e_{ij}.\phi\\
&=4{\omega^i}_l{\sigma^j}_k\epsilon^k\wedge\epsilon^l\wedge \alpha\otimes e_{ij}.\phi+2\sigma^{ij}\omega_{kl}\epsilon^k\wedge \epsilon^l \wedge \alpha \otimes e_{ij}.\phi\\
&=4\imath X^2(\alpha \otimes \sigma^{ij} e_{ij}.\phi)-4 X ({\sigma^j}_k\epsilon^k \wedge \alpha \otimes e_j.\phi)=(4\imath X^2\Theta^{\sigma}-4X\Sigma^{\sigma})\psi,
\end{aligned}
\end{multline*}where we have used
the relation (1) in the second last step. Extending the result by linearity, we get the statement of this item for arbitrary $\psi \in \Omega^{\bullet}(M,\mathcal{S}).$
\item[2)]Using the derived formula for $R^{\Omega^{\bullet}(M,\mathcal{S})},$ the Proposition 5, the $\tilde{G}$-equi\-variance of $X$ (Lemma 3 item 1) and the decomposition structure of $\bigwedge^{\bullet}\mathbb{V}^*\otimes {\bf S}$ (see the Remark below Theorem 2), we see that for $(i,j)\in \Xi$ and an element $\psi \in \Gamma(M,\mathcal{E}^{ij}),$ the section $R^{\Omega^{\bullet}(M,\mathcal{S})} \psi \in \Gamma(M,\mathcal{E}^{i+2,j-1} \oplus \mathcal{E}^{i+2,j} \oplus \mathcal{E}^{i+2,j+1}).$ Thus, in particular, $p^{i+2,m_{i+2}}R^{\Omega^{\bullet}(M,\mathcal{S})} \psi = 0$ for \linebreak $i=0,\ldots, l-2,l, \ldots, 2l-2$ and $\psi \in \Gamma(M,\mathcal{E}^{im_i}).$
For $i=0, \ldots, l-2,$ we get
\begin{multline*}
\begin{aligned}
0&=p^{i+2,i+2}R^{\Omega^{\bullet}(M, \mathcal{S})}=p^{i+2,i+2}d^{\nabla^S}d^{\nabla^S}\\
&=p^{i+2,i+2}d^{\nabla^S}(p^{i+1,0} + \ldots + p^{i+1,i+1})d^{\nabla^S}\\
&=p^{i+2,i+2}d^{\nabla^S}p^{i+1,0}d^{\nabla^S}+\ldots + p^{i+2,i+2}d^{\nabla^S}p^{i+1,i+1}d^{\nabla^S}\\
&= T_{i+1}T_{i},
\end{aligned}
\end{multline*}
where we have used the Theorem 7 in the last step.
Similarly, one proceeds in the case $i=l,\ldots, 2l-2.$
\end{itemize}
$\Box$
{\bf Corollary 10.} Let $(M,\omega, \nabla)$ be a Fedosov manifold admitting a metaplectic structure. If $l \geq 2$ and the symplectic Weyl tensor field $W^{\nabla}=0,$ then
$$ 0\longrightarrow \Gamma(M,\mathcal{E}^{00}) \overset{T_0}{\longrightarrow}\cdots \overset{T_{l-2}}{\longrightarrow} \Gamma(M,\mathcal{E}^{l-1,l-1})\overset{T_lT_{l-1}}{\longrightarrow}$$
$$\overset{T_lT_{l-1}}{\longrightarrow} \Gamma(M,\mathcal{E}^{l+1,l+1}) \overset{T_{l+1}}{\longrightarrow} \cdots \overset{T_{2l-1}}{\longrightarrow} \Gamma(M,\mathcal{E}^{2l,2l}) \longrightarrow 0
$$
is a complex.
{\it Proof.} This follows easily from Theorem 9. $\Box$
The question of the existence of a symplectic connection with vanishing symplectic Weyl curvature tensor field was treated, e.g., in Cahen, Gutt, Rawnsley \cite{Cahen}. These connections are called connections of Ricci type.
For instance, it is known that if a compact simply connected symplectic manifold $(M,\omega)$ admits a connection of Ricci type, then $(M,\omega)$ is affinely symplectomorphic to $\mathbb{CP}^n$ with the symplectic form given by the standard complex structure and the Fubini-Study metric, and the Levi-Civita connection of this metric.
We refer the interested reader to the paper of Cahen, Gutt, Schwachh\"ofer \cite{CGS}, where a relation of symplectic connections to contact projective geometries is also treated.
Further research could be devoted to the investigation and the interpretation of the cohomology of the introduced complex and to the investigation of analytic properties of the introduced symplectic twistor operators.
In this section, we introduce a notion of filtered carrier between complexes, and use this to construct explicit interleavings between persistence vector spaces. This generalizes the definition of carriers used in algebraic topology. Historically, carriers were used to prove equivalence of various homology theories -- see \cite{eilenbergSteenrod1952,MunkresAT,MosherTangora} for additional background.
\subsection{Filtered Maps and Carriers}\label{sec:filtered_carriers}
We define filtered carriers for objects in a category filtered by partially ordered sets (posets) $S,T$ with initial objects. For our purposes, we consider totally ordered $S,T \subseteq \RR_+$ (with initial object $0$), but extensions to other partially ordered sets are possible under additional conditions, allowing for applications to generalized or multiparameter persistence. In order to specialize these results to standard carriers in the non-filtered setting, it suffices to consider the single-element poset $S = T = \{0\}$.
\begin{definition}\label{def:filtered_object}
A filtered object in a category over a poset $T$ is a collection of objects $\X^T = \{\X^t\}_{t\in T}$ where $\X^{t_1} \subseteq \X^{t_2}$ if $t_1\le t_2$.
\end{definition}
The types of filtered objects we will consider are filtered cell complexes and filtered chain complexes.
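As a concrete instance of \autoref{def:filtered_object} (ours, not from the original text), consider a function on a simplicial complex that is monotone on faces:

```latex
% Illustrative example (added): a filtered cell complex over $T=\RR_+$.
Let $X$ be a finite simplicial complex and $f:X\to\RR_+$ a function that
is monotone on faces, i.e., $f(\tau)\le f(\sigma)$ whenever $\tau$ is a
face of $\sigma$. The sublevel sets
$$\X^t:=\{\sigma\in X : f(\sigma)\le t\},\qquad t\in T=\RR_+,$$
satisfy $\X^{t_1}\subseteq\X^{t_2}$ for $t_1\le t_2$, so
$\{\X^t\}_{t\in T}$ is a filtered object in the sense of
\autoref{def:filtered_object}. Applying the simplicial chain functor
yields a filtered chain complex $C_\ast^T$.
```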
\begin{definition}\label{def:filtered_map}
Let $\X^S, \Y^T$ be filtered objects in a category over posets $S, T$ respectively. Let $\alpha:S \to T$ be a non-decreasing map. An $\alpha$-shift map $f^\alpha:\X^S\to \Y^T$ is a collection of maps $f^{s}:\X^s \to \Y^{\alpha(s)}$ for each $s\in S$ so that the following diagram commutes.
\begin{equation}
\begin{tikzcd}
\X^s \ar[r]\ar[d,"f^s"] &\X^{s'}\ar[d,"f^{s'}"]\\
\Y^{\alpha(s)} \ar[r] &\Y^{\alpha(s')}
\end{tikzcd}
\end{equation}
\end{definition}
We are primarily interested in the categories of cell complexes and chain complexes. If $\alpha, \beta: S\to T$ are non-decreasing maps and $\alpha(s) \le \beta(s)$ for all $s\in S$, then we can extend a filtered map $f^\alpha$ to a filtered map $f^\beta$ by first applying $f^\alpha$ and then shifting the filtration to $\beta$: $f^\beta = \iota^{\beta - \alpha} \circ f^{\alpha} $. While the above definition can be applied to homotopies as well, we want to give a specialized definition of a sort of filtered chain homotopy:
\begin{definition}\label{def:filtered_htpy}
Let $F^\alpha_\ast, G^\alpha_\ast :C_\ast^S \to D_\ast^T$ be $\alpha$-shift maps of chain complexes. We say $F^\alpha, G^\alpha$ are $\beta$-chain homotopic, where $\beta:T\to T$ is a non-decreasing map, if there exists a collection of maps $K^s_q : C_q^s \to D_{q+1}^{\beta\circ \alpha(s)}$, $q = 0,1,\dots$, $s\in S$, so that
\begin{equation}
\partial^D_{q+1}K_q^s + K_{q-1}^s\partial^C_q = \iota^\beta (G_q^s - F_q^s)
\end{equation}
\end{definition}
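A sanity check (ours, not from the original text): with trivial filtrations, \autoref{def:filtered_htpy} reduces to the classical notion.

```latex
% Specialization (added) of \autoref{def:filtered_htpy} to the
% non-filtered setting.
Taking the single-element poset $S=T=\{0\}$ and $\beta=\id$, the shift
$\iota^\beta$ is the identity and the condition becomes
$$\partial^D_{q+1}K_q + K_{q-1}\partial^C_q = G_q - F_q,$$
the classical chain-homotopy equation. The extra shift $\iota^\beta$ in
the filtered setting records the filtration cost incurred by the
homotopy.
```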
\begin{definition}\label{def:filtered_carrier}
A filtered carrier of chain complexes over a poset $T$, denoted $\scrC^T: C_\ast^S \to D_\ast^T$ is an assignment of basis vectors of $C_\ast^S$ to filtered sub-complexes of $D_\ast^T$. In situations where $T$ is understood, we will drop the superscript, and simply write $\scrC: C_\ast^S \to D^T_\ast$.
\end{definition}
Note that while a basis element $x\in C_\ast^S$ may appear at parameter $s\in S$, the carrier $\scrC^T(x)$ is filtered by $T$. We can also define a filtered carrier of cell complexes $\scrC^T:\X^S \to \Y^T$ by assigning cells of $\X^S$ to sub-cell complexes of $\Y^T$. A (filtered) carrier of cell complexes produces a (filtered) carrier of chain complexes by application of the cellular chain functor.
We say the carrier $\scrC$ is \emph{proper} with respect to the filtered bases $B_\ast^S$ of $C_\ast^S$ and $B_\ast^T$ of $D_\ast^T$ if $\scrC(x)$ is generated by a sub-basis of $B_\ast^T$ for each $x$ in the basis $B_\ast^S$. Note that carriers of cell complexes always produce carriers of chain complexes that are proper with respect to the cell basis.
The term ``carrier'' comes from the utility of carrying a map:
\begin{definition}\label{def:carry_filtered_map}
Let $\scrC^T:C^S_\ast \to D^T_\ast$ be a filtered carrier, and $F^\alpha_\ast$ be an $\alpha$-shift chain map. We say that $F^\alpha_\ast:C^S_\ast \to D^T_\ast$ is carried by $\scrC^T$ if $F^\alpha(x) \in \scrC^T(x)$ at parameter $\alpha(s)$ for all basis elements $x\in C^s_\ast$.
\end{definition}
Again, there is an analogous definition for carriers of filtered cell complexes and maps.
\subsection{A Filtered Acyclic Carrier Theorem}
Recall that a chain complex $C_\ast$ is acyclic if its reduced homology $\tilde{H}_q(C_\ast) = 0$ for all $q\ge 0$. A carrier of chain complexes $\scrC:C_\ast \to D_\ast$ is acyclic if $\scrC(x)$ is acyclic for all basis elements $x\in C_\ast$.
The primary utility of acyclic carriers is in providing a tool to extend maps from initial data. For ordinary (non-filtered) chain complexes, we have
\begin{theorem} \label{thm:acyclic_carrier}
(Acyclic carrier theorem) If $\scrC: C_\ast \to D_\ast$ is acyclic, and $L_\ast\subset C_\ast$ is a sub-chain complex of $C_\ast$, then any chain map $\hat{F}_\ast: L_\ast \to D_\ast$ can be extended to a chain map $F_\ast:C_\ast \to D_\ast$. Furthermore, this extension is unique up to chain homotopy.
\end{theorem}
Proofs can be found in \cite{eilenbergSteenrod1952, MosherTangora, MunkresAT}. In this section, we will extend \autoref{thm:acyclic_carrier} to the filtered setting.
\begin{definition}\label{def:alpha_acyclic_complex}
We say a filtered chain complex $C_\ast^T$ is $\alpha$-acyclic if every cycle in $C_\ast^t$ becomes a boundary in $C_\ast^{\alpha(t)}$.
\end{definition}
This implies that any bar in the persistent homology $H_q(C_\ast^T)$ that is born at $t\in T$ must die by parameter $\alpha(t)$.
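Spelling out this observation in the most common case (our example, not from the original text): take $T=\RR_+$ and a constant shift.

```latex
% Worked consequence (added) of \autoref{def:alpha_acyclic_complex}.
If $T=\RR_+$ and $\alpha(t)=t+\epsilon$ for some $\epsilon\ge 0$, then
every cycle appearing at parameter $t$ bounds by parameter $t+\epsilon$.
Hence every bar $[b,d)$ of the persistent homology of an
$\alpha$-acyclic filtered complex $C_\ast^T$ satisfies
$$d-b\le\epsilon,$$
i.e., all bars are $\epsilon$-short.
```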
\begin{definition}\label{def:alpha_beta_acyclic_carrier}
Let $C_\ast^S, D_\ast^T$ be filtered chain complexes, $\scrC^T:C_\ast^S \to D_\ast^T$ be a filtered carrier, and $\alpha:S\to T$, $\beta:T\to T$ be non-decreasing maps. We say $\scrC^T$ is $(\alpha, \beta)$-acyclic if $\scrC^T(x)$ is $\beta$-acyclic after $t = \alpha(s)$ for all $x\in C_\ast^s$ and for all $s\in S$. In the case where $\beta = \id$, then we just say $\scrC^T$ is $\alpha$-acyclic.
\end{definition}
A related definition for cell complexes is to say a carrier $\scrC^T:\X^S \to \Y^T$ is $\alpha$-contractible if $\scrC^T(x)$ is contractible at $t = \alpha(s)$. This is sufficient to give an $\alpha$-acyclic carrier after application of the chain functor.
\begin{theorem}\label{thm:filtered_acyclic_carrier}
(Filtered acyclic carrier theorem) Let $\scrC^T:C_\ast^S \to D_\ast^T$ be an $(\alpha,\beta)$-acyclic carrier of filtered chain complexes, with $S$ a strict total order with an initial object $0 \in S$. Let $L_\ast^S \subseteq C_\ast^S$ be a filtered sub-complex generated by a filtered sub-basis of $C_\ast^S$, and $\tilde{F}^\alpha:L_\ast^S \to D_\ast^T$ be an $\alpha$-filtered chain map carried by $\scrC^T$. Then $\tilde{F}^\alpha$ extends to a filtered chain map $F^{\beta^k \circ \alpha}:C_\ast^S \to D_\ast^T$, where $k$ is the maximal dimension of the chain map, and the extension is unique up to $\beta$-chain homotopy.
\end{theorem}
\begin{proof}
We will proceed by induction on the dimension $k$ of the map, and on the total order on $S$. First, we start with $\tilde{F}^{0,\alpha(0)}_0:L_0^0\to D_0^{\alpha(0)}$. From the acyclic carrier theorem, \autoref{thm:acyclic_carrier}, we can extend to a chain map $F_0^{0,\alpha(0)}: C_0^0 \to D_0^{\alpha(0)}$.
Now, let $s > 0$. Assume that we have extended $F^\alpha_0$ for all $r< s$ so that if $r' < r$,
\begin{equation}\label{eq:extension_restriction_condition}
F^{r,\alpha(r)}_0\mid_{C_\ast^{r'}} = F^{r',\alpha(r')}_0
\end{equation}
Note that this is satisfied trivially for $s=0$.
Let $L_0^{\prime S} = L^S_0 \cup \bigcup_{r < s} C_0^r$, and $\tilde{F}^\alpha_0$ denote the extended map up to all $r < s$. We can now apply \autoref{thm:acyclic_carrier} again to extend $F^{s,\alpha(s)}$ to $C_0^s$. Because $S$ is a strict total order, \autoref{eq:extension_restriction_condition} continues to be satisfied because the function is extended on each basis element exactly once. By induction, we can extend to a map of 0-chains $F^\alpha:C_0^S \to D_0^T$.
Because the extension is not necessarily unique, suppose that $F_0^\alpha$ and $G_0^\alpha$ are both extensions of $\tilde{F}^\alpha_0$ carried by $\scrC$. Since $\partial_0 (F^\alpha_0 - G^\alpha_0) = 0$, the difference can be expressed as the boundary of a map $K_0^{\beta \circ \alpha}:C^S_0\to D^T_1$ after shifting by an additional factor of $\beta$. This gives a $\beta$-homotopy of 0-chain maps.
Now, we'll extend to higher-dimensional chains for $s=0$. Assume that we have extended to $F_k^{\beta^k \circ \alpha}:C_k^S\to D_k^T$. Again, we'll start with the initial object $0$ of $S$. We take $L^{\prime 0}_{\ast\le k+1} = C_{\ast \le k}^0 \cup L_{\ast \le k+1}^0$. We have extended $F_{\ast \le k}^{\beta^k \circ \alpha}:C_{\ast \le k}^0\to D_{\ast\le k}^{\beta^k \circ \alpha(0)}$. Let $x\in B_{k+1}$ be a basis element to which we must extend the map at filtration parameter $s = 0$. We need $\partial_{k+1} F_{k+1} x = F_{k} \partial_{k+1} x$. The image of the boundary $F_k \partial_{k+1} x$ lies in $D^{\beta^k\circ \alpha (0)}_k$, but since $\scrC$ is $(\alpha,\beta)$-acyclic, the cycle need not have a boundary until we increase the filtration parameter $T$ by another factor of $\beta$. We can increase the grade on the map $F^{\beta^{k+1}\circ \alpha}$, taking $F^{\beta^{k+1}\circ \alpha} x = \iota^\beta F^{\beta^k\circ \alpha}x$ for $x\in L^{\prime 0}$, and then apply \autoref{thm:acyclic_carrier} to extend the map for $x\in C_{k+1}^0$.
Now, we'll extend to higher dimensional chains for $s > 0$. Assume that so far we have satisfied for $r' < r < s$
\begin{equation}\label{eq:equation_restriction_condition_k}
F_{k+1}^{r,\beta^{k+1} \circ \alpha(r)} \mid_{C_k^{r'}} = F_{k+1}^{r',\beta^{k+1} \circ \alpha(r')}
\end{equation}
and furthermore, that we have shifted the chain maps in lower dimensions via $F^{\beta^{k+1} \circ \alpha} = \iota^\beta F^{\beta^{k} \circ \alpha}$. Let $x\in B_{k+1}$ be a basis element to which we must extend the map at filtration parameter $s$. The image of the boundary $F_k\partial_{k+1} x$ lies in $D_k^{\beta^k \circ \alpha(s)}$, and we have already shifted the grade to $\beta^{k+1} \circ \alpha(s)$, at which point the cycle is a boundary of some $y\in D_{k+1}^{\beta^{k+1}\circ \alpha(s)}$ in $\scrC(x)$. Thus, we can extend the map via $F_{k+1}^{\beta^{k+1}\circ \alpha} x = y$. Again, because $S$ is a strict total order, the map is extended for every basis element exactly once, so \autoref{eq:equation_restriction_condition_k} is satisfied.
Following a similar inductive argument, we can extend a $\beta$-homotopy of extended chain maps $F^{\beta^k \circ \alpha}_k$, $G^{\beta^k \circ \alpha}_k$ to a $\beta$-homotopy of $F^{\beta^{k+1} \circ \alpha}_{k+1}$ and $G^{\beta^{k+1} \circ \alpha}_{k+1}$, still incurring an additional shift of $\beta$.
By induction on $k$ and the strict total order of $S$, we conclude that we can extend $\tilde{F}^\alpha$ to a shifted chain map $F^{\beta^k \circ \alpha}:C^S_\ast \to D^T_\ast$, and that this chain map is unique up to $\beta$-chain homotopy.
\end{proof}
\begin{remark}
To compute induced maps in homology in dimension $k$, it is only necessary to extend maps up to dimension $k$. In many cases, $\beta$ will be the identity $\id$, in which case there is no additional penalty for extending to higher-dimensional chains.
\end{remark}
\begin{remark}
In \autoref{thm:filtered_acyclic_carrier},
we used the strict total ordering on $S$ to extend the initial map in a way that guarantees that \autoref{eq:extension_restriction_condition} is always satisfied. If $S$ is not a strict total ordering, then additional restrictions on the extension are needed to satisfy this condition.
\end{remark}
\begin{proposition}\label{prop:filtered_aug_preserving_exists}
Let $\scrC:C^S_\ast \to D^T_\ast$ be an $(\alpha,\beta)$-acyclic carrier that is proper with respect to a $T$-filtered basis $B^D_\ast$ of $D_\ast$. Then there exists a chain map $F^\alpha_0:C_0^S \to D_0^T$ carried by $\scrC$ which preserves the canonical augmentation $\epsilon: x\mapsto 1$ for basis elements $x\in C^S_0$.
\end{proposition}
\begin{proof}
For each 0-dimensional basis element $x\in C_0^S$, we simply assign $F^\alpha_0(x) = y$ for some basis element $y\in B_0^D \mid_{\scrC(x)}$. Such a $y$ exists at level $\alpha(s)$ for basis elements $x$ at parameter $s$ in $C_0^s$, so the map requires an $\alpha$ shift. This map will preserve the augmentation of the chain complexes because it sends 0-dimensional basis elements to 0-dimensional basis elements.
\end{proof}
Note that the map $F^\alpha_0$ in \autoref{prop:filtered_aug_preserving_exists} can then be extended to $F^{\beta^k\circ \alpha}_{k}$ using \autoref{thm:filtered_acyclic_carrier}.
\begin{proposition}\label{prop:pointwise_htpy}
Suppose $F_\ast^\alpha, G_\ast^\alpha: C_\ast^S \to D_\ast^T$ are augmentation-preserving chain maps carried by an $(\alpha, \beta)$-acyclic carrier $\scrC$. Then $F_\ast$ and $G_\ast$ are $\beta$-chain-homotopic.
\end{proposition}
\begin{proof}
For each basis element $x\in C_0$, $F_0(x), G_0(x)\in \scrC(x)$, and because $F_\ast$ and $G_\ast$ are augmentation preserving, $\epsilon x = \epsilon F(x) = \epsilon G(x)$, so $\epsilon \big(F(x) - G(x)\big) = 0$. Because $\scrC$ is $\beta$-acyclic, $\ker \epsilon = \img \partial_1$, so there must exist a 1-chain $K(x)\in \scrC(x)$ at level $\beta \circ \alpha$ so that $\partial_1 K(x) = F(x) - G(x)$, which is a homotopy of zero-chains. We can then apply \autoref{thm:filtered_acyclic_carrier} to extend this to a $\beta$-homotopy $K_\ast: F_\ast \to G_\ast$.
\end{proof}
When $\beta =\id$, the two maps induce the same map on homology.
\subsection{Interleavings via Filtered Acyclic Carriers}\label{sec:interleavings_via_filtered_carriers}
We'll now turn to examining the conditions under which interleavings can be constructed from filtered carriers.
\begin{proposition}\label{prop:carrier_interleaving}
Let $\X^S$ and $\Y^T$ be filtered cell complexes, and suppose that $\scrC:\X^S \to \Y^T$ is an $\alpha$-acyclic carrier, $\scrD:\Y^T\to \X^S$ is a $\beta$-acyclic carrier, $\scrA \supseteq \scrD\circ\scrC$ is a $(\beta \circ \alpha)$-acyclic carrier that carries the inclusion map on $\X^S$, and $\scrB \supseteq \scrC\circ \scrD$ is $(\alpha \circ \beta)$-acyclic and carries the inclusion map on $\Y^T$. Then $H_q(\X^S)$ and $H_q(\Y^T)$ are $(\alpha,\beta)$-interleaved for any $q=0,1,\dots$.
\end{proposition}
\begin{proof}
First, we construct augmentation-preserving shift maps $F^\alpha:C_\ast(\X^s) \to C_\ast(\Y^{\alpha(s)})$ and $G^\beta: C_\ast(\Y^t) \to C_\ast(\X^{\beta(t)})$ using \autoref{prop:filtered_aug_preserving_exists} and \autoref{thm:filtered_acyclic_carrier}. Now, note that $G^\beta \circ F^\alpha$ is augmentation preserving, and is carried by $\scrD \circ \scrC \subseteq \scrA$ which also carries the inclusion map, so by \autoref{prop:pointwise_htpy} $G^\beta \circ F^\alpha \simeq I^\X$. Similarly, $F^\alpha \circ G^\beta \simeq I^\Y$. Thus, the maps $F^\alpha$ and $G^\beta$ give an $(\alpha,\beta)$-interleaving on homology.
\end{proof}
In practice, more specific situations reduce the number of conditions that we need to satisfy. Often, we will find it convenient to take $\scrA = \scrD \circ \scrC$ and $\scrB = \scrC \circ \scrD$ when we can show that the composites are acyclic and carry inclusions.
\begin{corollary}\label{cor:surjection_interleaving}
Suppose $f^\alpha: \X^s \to \Y^{\alpha(s)}$ is a surjective simplicial map for every $s\in S$, and suppose $\scrC:\Y^T \to \X^S$, defined by $\scrC(y) = \langle f^{-1}(y) \rangle$, is a $\beta$-acyclic carrier. Then $H_q(\X^S)$ and $H_q(\Y^T)$ are $(\alpha,\beta)$-interleaved for $q=0,1,\dots$.
\end{corollary}
\begin{proof}
Because $f^{\alpha}$ is simplicial, the carrier $\scrC^f$ defined by $\scrC^f(x) = \langle f(x) \rangle$ is an $\alpha$-acyclic carrier that carries $f^\alpha$.
Because $f^{\alpha}$ is a surjective simplicial map, $\scrC(y)$ is a nonempty sub-complex of $\X^S$ for each $y\in \Y^T$, so $\scrC$ is a well-defined filtered carrier. By definition of $\scrC$, the composition $\scrC \circ \scrC^f$ carries the inclusion map $\iota^\X$. Additionally, $\scrC\circ \scrC^f$ is $(\beta\circ\alpha)$-acyclic, because $\scrC$ is $\beta$-acyclic on the simplex $f^\alpha(x)$ for each $x\in \X^S$.
Because $\scrC(y) = \langle f^{-1}(y) \rangle$, $\scrC^f\circ \scrC(y) = \langle y \rangle$, which is a simplicial carrier and thus acyclic. Note that $y \in \scrC^f\circ \scrC(y)$, so $\scrC^f \circ \scrC$ carries $\iota^\Y$. We can now apply the chain functor and \autoref{prop:carrier_interleaving} to complete the proof.
\end{proof}
\section{Computations}\label{sec:computations}
In this section, we demonstrate the use of Vietoris-Rips cover complexes in studying the homology of point cloud data. We first examine how several different covers can be used to investigate the homology of a sample from the torus. Next, we use the greedy landmark cover of \autoref{sec:sparse_filt_cover} to investigate the homology of $d$-dimensional Klein bottles associated with high-dimensional image patches. We have incorporated an implementation of the Vietoris-Rips cover complex into the BATS software\footnote{\url{https://github.com/CompTop/BATS}} package \cite{factorizationView2019} to support our experiments.
\subsection{A Flat Torus}\label{sec:computation_examples}
For our first example, we sample 500 points in a spiral on a flat torus in 4 dimensions. For intermediate parameters of a filtration, we generally expect to see the homology of the torus $T^2$
\begin{equation}
H_q(T^2) = \begin{cases}
\FF & q = 0\\
\FF \oplus \FF & q = 1\\
\FF & q = 2
\end{cases}
\end{equation}
with coefficients in any field.
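A spiral sample of this kind can be generated directly. The following is a minimal sketch; the function name, the number of windings, and the exact parameterization are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

def flat_torus_spiral(n=500, windings=20):
    """Sample n points along a spiral curve on the flat torus
    S^1 x S^1 embedded in R^4; `windings` is an assumed parameter."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([np.cos(t), np.sin(t),
                            np.cos(windings * t), np.sin(windings * t)])
```

Each point lies on the product of two unit circles, so the sample sits on the flat torus in $\RR^4$ by construction.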
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/torus_cover1.pdf}
\includegraphics[width=0.49\linewidth]{figures/torus_cover1_pd.pdf}
\caption{
Cover of a flat torus pulled back from projection onto the first coordinate and the persistence diagram of $\calR(\bX, \calU; r)$. The nerve of the cover is contractible, as it covers an interval. We see a single essential $H_0$ class (above the dashed red line), two persistent $H_1$ classes, and a persistent $H_2$ class.
}
\label{fig:torus_line_pullback}
\end{figure}
In \autoref{fig:torus_line_pullback}, we pull back a cover of an interval covering the projection of the data set onto its first coordinate. In this case, each set in the cover has non-trivial structure -- generally two robust connected components and two robust generators in $H_1$. However, the persistent homology of the cover complex $\calR(\bX, \calU; r)$ still exhibits robust generators corresponding to the homology of the torus.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/torus_cover2.pdf}
\includegraphics[width=0.49\linewidth]{figures/torus_cover2_pd.pdf}
\caption{
Cover of a flat torus pulled back from projection onto first two coordinates and the persistence diagram of $\calR(\bX, \calU; r)$. The nerve of the cover is homotopic to the circle, and we see essential $H_0$ and essential $H_1$ classes from the nerve of this cover. We also see an additional persistent $H_1$ class and persistent $H_2$ class.
}
\label{fig:torus_circle_pullback}
\end{figure}
In \autoref{fig:torus_circle_pullback}, we pull back a cover of the data set projected onto its first two coordinates. In this case, the nerve of the cover is homotopy equivalent to a circle, and the points of each set in the cover lie on a circle. The cover complex $\calR(\bX, \calU; r)$ has prominent homology classes for each homology class of the torus, but the classes coming from the nerve of the cover are essential.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/torus_cover3.pdf}
\includegraphics[width=0.49\linewidth]{figures/torus_cover3_pd.pdf}
\caption{
Cover of a flat torus obtained from the 20-nearest neighbors of each point and the persistence diagram of $\calR(\bX, \calU; r)$. The nerve of the cover is homotopic to the torus, and we see essential homology classes corresponding to the homology of the torus.
}
\label{fig:torus_nn}
\end{figure}
In \autoref{fig:torus_nn}, instead of a pullback cover we simply produce a cover containing, for every point $x\in \bX$, a set consisting of $x$ itself and its 20 nearest neighbors. In this case, all sets are close to acyclic, but the nerve of the cover is homotopy equivalent to the torus, which we see reflected in the essential homology classes in the persistence diagram.
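A nearest-neighbor cover of this kind is straightforward to construct. The following brute-force sketch is illustrative and is not the BATS implementation.

```python
import numpy as np

def knn_cover(X, k):
    """Cover with one set per point: the point itself plus its k
    nearest neighbors, via brute-force pairwise distances."""
    # squared Euclidean distances between all pairs of points
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # column 0 of the argsort is the point itself (distance 0)
    idx = np.argsort(D, axis=1)[:, : k + 1]
    return [set(map(int, row)) for row in idx]
```

Each set has $k+1$ points, and the union of the sets covers the data since every point belongs to its own set.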
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/torus_cover4.pdf}
\includegraphics[width=0.49\linewidth]{figures/torus_cover4_pd.pdf}
\caption{
Cover of the flat torus based on the procedure in \autoref{sec:sparse_filt_cover} and the persistence diagram of $\calR(\bX, \calU; r)$. The nerve is contractible, and so we see just a single essential $H_0$ class. We also see two persistent $H_1$ classes and a persistent $H_2$ class.
}
\label{fig:torus_sparse}
\end{figure}
In \autoref{fig:torus_sparse} we construct a cover of the data using the procedure in \autoref{sec:sparse_filt_cover} with $c=1.0$ for maximum sparsity. While this low value of $c$ gives a very pessimistic interleaving bound, each set in the cover is quite small, averaging less than $20$ points, and we still see the homology of the torus reflected in the prominent homology classes of the persistence diagram of $\calR(\bX, \calU; r).$
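The landmark-based construction can be sketched as greedy (farthest-point) landmark selection followed by covering with metric balls. This is a simplified stand-in for the construction in \autoref{sec:sparse_filt_cover}: the ball-radius scaling by $c$ and the function names here are illustrative assumptions.

```python
import numpy as np

def greedy_landmarks(X, m):
    """Greedy (farthest-point) selection of m landmark indices from X."""
    landmarks = [0]                        # seed with the first point
    d = np.linalg.norm(X - X[0], axis=1)   # distance to nearest landmark
    for _ in range(m - 1):
        i = int(np.argmax(d))              # farthest point from all landmarks
        landmarks.append(i)
        d = np.minimum(d, np.linalg.norm(X - X[i], axis=1))
    return landmarks, d.max()              # landmarks and covering radius

def landmark_cover(X, m, c=1.0):
    """One cover set per landmark: all points within c * covering radius."""
    L, r = greedy_landmarks(X, m)
    return [set(np.where(np.linalg.norm(X - X[i], axis=1) <= c * r)[0])
            for i in L]
```

With $c = 1.0$, every point lies within the covering radius of its nearest landmark, so the sets form a genuine cover of the data.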
\subsection{$d$-Dimensional Klein bottles}
An interesting space motivated by data which admits a non-trivial fibration structure is a Klein bottle which lies near a high-density subset of high-contrast image patches \cite{CImgPatch}. The fibration map can be obtained using the Harris edge detector \cite{Harris88,pereaTexture} which sends an image patch to the direction of largest variation.
In \cite{nelson_parameterized_2020}, this model is generalized to higher-dimensional images to obtain a fibration over $\RP{d-1}$ for $d$-dimensional images. We will refer to this space as the $d$-dimensional Klein bottle, $\calK^d$, which was described independently in a different context by Davis \cite{davis_n-dimensional_2019}. The homology of this space can be computed using the Leray-Serre spectral sequence \cite{McClearySS} -- see \cite{nelson_parameterized_2020} for explicit computational details.
\begin{equation}\label{eq:harris_homology}
H_k(\calK^d) = \begin{cases}
\ZZ & k = 0\\
\ZZ_2 \oplus \ZZ_2 & 0 < k < d-1,~ k~\text{odd}\\
\ZZ & k = d,~ d~\text{odd}\\
\ZZ \oplus \ZZ_2 & k = d-1,~ d~\text{even}\\
0 &\text{otherwise}
\end{cases}
\end{equation}
Using the universal coefficient theorem (cf. \cite{HatcherAT}, 3A.3), we see different dimensions in homology when computing with fields of different characteristic, due to the 2-torsion in the integral homology of $\calK^d$:
\begin{equation}\label{eq:f2coeff}
H_k(\calK^d; \FF_2) = \begin{cases}
\FF_2 & k = 0\\
\FF_2 \oplus \FF_2 & 0 < k < d\\
\FF_2 & k = d\\
0 &\text{otherwise}
\end{cases}
\end{equation}
and for $\FF = \FF_p$, $p> 2$, or $\FF = \QQ$, we have
\begin{equation}\label{eq:f3coeff}
H_k(\calK^d; \FF) = \begin{cases}
\FF & k = 0\\
\FF & k = d-1, d~\text{even}\\
\FF & k = d, d~\text{odd}\\
0 &\text{otherwise}
\end{cases}
\end{equation}
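The dimension vectors used below can be tabulated directly from \autoref{eq:f2coeff} and \autoref{eq:f3coeff}. The following small helper (with an illustrative name) does exactly that.

```python
def klein_betti(d, p):
    """dim H_k(K^d; F_p) for k = 0,...,d, read off from the formulas
    above: p = 2 versus p odd (or rational) coefficients."""
    if p == 2:
        # F_2: dimension 1 at k = 0 and k = d, dimension 2 in between
        return [1] + [2] * (d - 1) + [1]
    betti = [0] * (d + 1)
    betti[0] = 1            # k = 0
    if d % 2 == 0:
        betti[d - 1] += 1   # k = d-1, d even
    else:
        betti[d] += 1       # k = d, d odd
    return betti
```

For example, this reproduces the dimension vectors $(1,2,1)$ and $(1,1,0)$ for $\calK^2$, and $(1,2,2,1)$ and $(1,0,0,1)$ for $\calK^3$, quoted in the experiments below.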
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/2d_kb_h3_f2.pdf}
\includegraphics[width=0.49\linewidth]{figures/2d_kb_h3_f3.pdf}
\caption{
Persistent homology of a 2-dimensional Klein bottle, $\calK^2$. Left: with $\FF_2$ field coefficients. Right: with $\FF_3$ field coefficients. There are two robust $H_1$ generators with $\FF_2$ coefficients at the location $(0.1,0.8)$.
}
\label{fig:k2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{figures/3d_kb_h3_f2.pdf}
\includegraphics[width=0.49\linewidth]{figures/3d_kb_h3_f3.pdf}
\caption{
Persistent homology of a 3-dimensional Klein bottle, $\calK^3$. Left: with $\FF_2$ field coefficients. Right: with $\FF_3$ field coefficients.
}
\label{fig:k3}
\end{figure}
We obtain a sample of $\calK^d$ by generalizing the model of Carlsson et al.\ \cite{CImgPatch}. Given a unit vector $\vphi \in \RR^d$ and an angle $\theta$, we define a patch
\begin{equation}
p(x; \theta, \vphi) = \cos(\theta) (x^T \vphi)^2 + \sin(\theta) (x^T \vphi)
\end{equation}
which can be evaluated as a pixelated image patch by evaluating $x\in \RR^d$ on a grid. In \autoref{fig:k2} we generate a Klein bottle $\calK^2$ on $3\times 3$ image patches by evaluating $x$ on the grid $\{-1,0,1\}^2$, using 20 equally spaced values of $\theta$ and $50$ equally spaced values of $\vphi$, for a total of 1000 points in $\RR^9$. We compute persistent homology of the cover complex filtration $\calR(\bX, \calU; r)$ using the landmark-based cover in \autoref{sec:sparse_filt_cover}, with $c=1.0$ for maximum sparsity. Using \autoref{eq:f2coeff}, the homology of $\calK^2$ with coefficients in $\FF_2$ has dimension vector $(1,2,1)$, which is clearly observed in the persistence diagram. By \autoref{eq:f3coeff}, with coefficients in $\FF_3$, the dimension vector becomes $(1,1,0)$, and we see one of the prominent $H_1$ classes shrink and the prominent $H_2$ class shift toward the diagonal in the corresponding persistence diagram.
In \autoref{fig:k3}, we generate $\calK^3$ on $5\times 5\times 5$ patches using $20$ equally-spaced values of $\theta$ and $150$ values of $\vphi$ chosen by greedily landmarking a larger set of $4000$. The total data set consists of 3000 points in $\RR^{125}$, and again we compute persistent homology of the cover complex filtration $\calR(\bX, \calU; r)$ using the landmark-based cover in \autoref{sec:sparse_filt_cover}, with $c=1.0$. Using \autoref{eq:f2coeff}, the homology dimension vector of this space over $\FF_2$ is $(1,2,2,1)$, and for $\FF_3$ coefficients, the dimension vector is $(1,0,0,1)$. In \autoref{fig:k3} we see both of these dimension vectors match the prominent homology classes in each dimension.
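The patch sampling for $\calK^2$ can be sketched as follows. The unit normalization of each patch and the exact parameter ranges are illustrative assumptions; $\vphi$ is sampled over half the circle since antipodal directions give projectively equivalent patches.

```python
import numpy as np

def patch(theta, phi, grid):
    """p(x; theta, phi) = cos(theta)(x.phi)^2 + sin(theta)(x.phi),
    evaluated at each grid point and normalized to unit length."""
    t = grid @ phi
    v = np.cos(theta) * t**2 + np.sin(theta) * t
    return v / np.linalg.norm(v)   # normalization is an assumption here

def klein2_sample(n_theta=20, n_phi=50):
    """Sample of K^2 as 3x3 image patches, i.e. points in R^9."""
    grid = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    angles = np.linspace(0, np.pi, n_phi, endpoint=False)  # directions in RP^1
    return np.array([patch(th, np.array([np.cos(a), np.sin(a)]), grid)
                     for th in thetas for a in angles])
```

With the defaults this yields the $20 \times 50 = 1000$ points in $\RR^9$ described above.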
\section{Conclusion}\label{sec:conclusion}
In this paper, we developed a filtered version of the acyclic carrier theorem, which allowed us to construct interleavings between different geometric constructions. We have presented several results for Vietoris-Rips cover complexes, and we anticipate that the use of filtered carriers has broad potential as a technique to construct interleavings in situations that we have not yet considered. We have focused on algebraic interleavings, and many of these results could potentially be extended to homotopy interleavings \cite{blumbergUniversalityHomotopyInterleaving2017} given additional care when constructing carriers of cell complexes. Another interesting line of future investigation would be to use the algorithmic construction of maps from carriers
in data analysis. This could potentially be used, for instance, in constructing low dimensional embeddings of data that minimize the interleaving distance between a filtration on the higher-dimensional point cloud and the embedded point cloud.
Another line of future work is to leverage cover complexes for distributed computation. A limited version of this was explored in \cite{yoon2018}, and our interleaving results expand the potential use of cover complexes to more general settings. We also believe that the interleaving bounds we derive are likely pessimistic in many situations where data has additional structure. Analyses of these situations may help tighten our bounds considerably.
\section*{Acknowledgements}
BJN was supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No.
HR00112190040. He thanks Gunnar Carlsson and Jonathan Taylor for discussions on an early version of this work.
\section{Cover Complexes}\label{sec:covers}
\subsection{Local Stability}\label{sec:cover_local_interleavings}
In classical topology, a situation of interest is to study spaces over a base space. In particular, we consider surjective maps $p: \calX \to \calB$, where $\calB$ is called the base space. Some problems of interest focus on maps over $\calB$.
\begin{equation}
\begin{tikzcd}
\calX\ar[rr,"f"]\ar[dr,"p"] && \calY\ar[dl,"q"]\\
& \calB&
\end{tikzcd}
\end{equation}
\begin{definition}\label{def:compatible_system_carriers}
Let $\X^S(\calU)$, $\Y^T(\calU)$ be cover complexes over a cover $\calU$. A system of carriers $\scrC(\calU):\X^S(\calU) \to \Y^T(\calU)$ consists of carriers $\scrC(U): \X^S(U) \to \Y^T(U)$ for each $U\in \calU$. We say the system of carriers is {\em compatible} if $\cap U_k \ne \emptyset$ implies $\scrC(U_i)\mid_{\cap U_k} = \scrC(U_j)\mid_{\cap U_k}$ for all $U_i, U_j \in \{U_k\}$.
\end{definition}
In general, $U\in \calU$ need not cover the same points in $X$ and $Y$ (denoting the vertex sets of $\X$, $\Y$ respectively). We can alternatively think of it as an identification of sets in covers of each vertex set, or a set in a cover of the disjoint union $X \sqcup Y$.
When the system of carriers is compatible, we can extend carriers defined on sets of $\calU$ to intersections via $\scrC(\cap U_k) = \scrC(U_i)\mid_{\cap U_k}$ for $U_i\in \{U_k\}$. We'll say a compatible system of carriers is $(\alpha,\beta)$-acyclic if $\scrC(\cap U_k)$ is $(\alpha,\beta)$-acyclic for all $\{U_k\} \subset \calU$ where $\cap U_k \ne \emptyset$.
We can define a carrier $\scrC_\calU:\X^S(\calU) \to \Y^T(\calU)$ from a compatible system of carriers via $\scrC_\calU(x) = \scrC(\cap \{ U \ni x\})(x)$. When a compatible system of carriers $\scrC(\calU)$ is $(\alpha,\beta)$-acyclic, $\scrC_\calU$ is also $(\alpha,\beta)$-acyclic by direct application of the definition. The advantage of using a compatible system of carriers $\scrC(\calU)$ instead of the global carrier $\scrC_\calU$ is that we only need to check conditions locally in the cover.
\begin{proposition}\label{prop:interleaved_cover_cpxs}
Let $\calU$ be a finite cover. Suppose $\scrC(\calU):\X^S(\calU) \to \Y^T(\calU)$ is an $\alpha$-acyclic compatible system of carriers, and $\scrD(\calU):\Y^T(\calU) \to \X^S(\calU)$ is a $\beta$-acyclic compatible system of carriers. Furthermore suppose that for each $V = \cap U_k \ne \emptyset$, that $\scrC(V) \circ \scrD(V)$ is $(\alpha\circ\beta)$-acyclic and carries the identity, and $\scrD(V) \circ \scrC(V)$ is $(\beta\circ\alpha)$-acyclic and carries the identity. Then there exists an $(\alpha,\beta)$-interleaving of $H_q(\X^S)$ and $H_q(\Y^T)$.
\end{proposition}
\begin{proof}
This follows by constructing the global carriers $\scrC_\calU:\X^S(\calU) \to \Y^T(\calU)$ and $\scrD_\calU:\Y^T(\calU)\to \X^S(\calU)$, and noting that because the composite $\scrC_\calU \circ \scrD_\calU$ is $(\alpha\circ\beta)$-acyclic locally and carries the identity locally, it satisfies these properties globally. Similarly, $\scrD_\calU \circ \scrC_\calU$ is $(\beta\circ\alpha)$-acyclic and carries the identity. We can then apply \autoref{prop:carrier_interleaving} to obtain the result.
\end{proof}
The utility of \autoref{prop:interleaved_cover_cpxs} is that if we can identify sub-complexes of $\X^S$ and $\Y^T$ in a consistent way using the cover, then we can interleave the homology of the two filtrations.
\subsection{Local Geometric Stability}
\Cref{prop:interleaved_cover_cpxs} can be used to extend standard geometric stability results as in \cite{geometric_stab2014}
to cover complexes. We will focus on how cover complexes behave with respect to perturbations of the data.
Several of our results will use the refinement of the cover $\calU$ by intersections of its sets,
\begin{equation}
\calUb = \big\{\bigcap_{i\in I} U_i \;\big|\; \{U_i\}_{i\in I} \subseteq \calU\big\}.
\end{equation}
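For a finite cover, the refinement $\calUb$ can be computed by closing the cover under nonempty intersections. The following brute-force sketch (with an illustrative function name) is exponential in $|\calU|$ in the worst case.

```python
from itertools import combinations

def refine_cover(cover):
    """Close a finite cover (iterable of sets) under nonempty
    intersections, returning the refinement as a set of frozensets."""
    cover = [frozenset(U) for U in cover]
    refined = set(cover)
    for r in range(2, len(cover) + 1):
        for subset in combinations(cover, r):
            V = frozenset.intersection(*subset)
            if V:                  # keep only nonempty intersections
                refined.add(V)
    return refined
```

For example, the three-set cover $\{1,2,3\}, \{2,3,4\}, \{3,4,5\}$ refines to six sets, adding $\{2,3\}$, $\{3,4\}$, and $\{3\}$.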
\begin{proposition}\label{prop:nerve_equivalence}
$H_\ast(\calN(\calU)) \simeq H_\ast(\calN(\calUb))$.
\end{proposition}
\begin{proof}
We define a carrier $\scrC:\calN(\calU)\to \calN(\calUb)$ via
\begin{equation}
\scrC(U_0,\dots,U_k) = \langle V = \cap \{U_i\} \mid \{U_i\} \subseteq \{U_0,\dots,U_k\} \rangle
\end{equation}
This carrier is acyclic because it forms a cone with the vertex $V = \cap_{i=0}^k U_i$.
We define a carrier $\scrD:\calN(\calUb)\to \calN(\calU)$ via
\begin{equation}
\scrD(V_0,\dots,V_k) = \langle U \mid U \supseteq V_i~\text{for some}~V_i \in \{V_0,\dots,V_k\}\rangle
\end{equation}
Note that if $(V_0,\dots,V_k)$ is a simplex in $\calN(\calUb)$, there is a smallest set $V_{i_0}$ in the simplex, and so $\scrD(V_0,\dots,V_k) = \scrD(V_{i_0})$. This also implies that $\scrD$ is simplicial, thus acyclic.
Now, we have that $\scrD \circ \scrC(U_0,\dots,U_k)$ is the simplex $(U_0,\dots,U_k,U'_0,\dots)$, where the extra vertices $U'_i$ are added if $\cap_{i=0}^k U_i \subseteq U'_i$, which can occur for degenerate $\calUb$. This carrier is simplicial, thus acyclic, and clearly carries the identity.
The composition $\scrC\circ \scrD$ is acyclic because $\scrC\circ \scrD(V_0,\dots,V_k)$ forms a cone with vertex the minimal element $V_{i_0}\in \{V_0,\dots,V_k\}$. This composite carrier also carries the identity map.
We can now apply \autoref{prop:carrier_interleaving} to trivial filtrations on the nerves to obtain the result.
\end{proof}
\begin{proposition}\label{cor:rips_cover_hausdorff}
Let $\bX, \bY$ be samples from a metric space $(X, d_X)$, and let $\calU$ be a cover of $\bX$ and $\calV$ be a cover of $\bY$. Suppose that for all $U\in \calUb$ there exists a $V\in \calVb$ such that $d_H(U, V)\le \epsilon$, and for all $V\in \calVb$ there exists a $U\in \calUb$ such that $d_H(U, V)\le \epsilon$.
Then $\calR(\bX, \calU; r)$ and $\calR(\bY, \calV; r)$ are $2\epsilon$-interleaved.
\end{proposition}
\begin{proof}
Let $U_x = \bigcap \{U\in \calU \mid U \ni x\}$. By assumption, there exists some $V_x\in \calVb$ such that $d_H(U_x, V_x) \le \epsilon$, meaning there must exist some $y\in V_x$ so that $d_X(x,y)\le \epsilon$. Let $\Omega\subseteq \bX\times \bY$ be the left-total relation $\Omega(x) = \{ y \in V_x \mid d_X(x,y) \le \epsilon\}$. Then the induced carrier $\scrC_\Omega:\calR(\bX, \calU; r) \to \calR(\bY, \calV; r)$ is $2\epsilon$-simplicial. Similarly, for $y\in \bY$, we take $V_y = \bigcap_{V\ni y} V$ and $U_y\in \calUb$ a set satisfying $d_H(U_y, V_y) \le \epsilon$. Using the right-total relation $\Psi\subseteq \bX\times \bY$, with $\Psi(y) = \{x \in U_y \mid d_X(x,y) \le \epsilon\}$, we obtain a $2\epsilon$-simplicial carrier $\scrD_\Psi:\calR(\bY, \calV; r) \to \calR(\bX, \calU; r)$.
Now, note that the composite carrier $\scrD_\Psi \circ \scrC_\Omega$ need not carry the identity, because $y\in V_x$ does not imply $x\in U_y$. However, $y\in V_x$ does imply that $V_y \subseteq V_x$, which combined with the Hausdorff distance bound implies there must exist some $x'\in V_y \cap \bX$ such that $d_X(x',y)\le \epsilon$, which implies $d_X(x',x) \le 2\epsilon$ by the triangle inequality. We can define a left-total relation $\Omega' \subseteq \bX\times \bX$, with $\Omega' = \{(x,x') \mid d_X(x,x') \le 2\epsilon, x'\in V_x\}$, which is nonempty, and $4\epsilon$-simplicial by the triangle inequality. Furthermore, the carrier $\scrA_{\Omega'}$ contains the composite $\scrD_\Psi \circ \scrC_\Omega$ and carries the identity. Similarly, we can define a relation $\Psi'\subseteq \bY \times \bY$ with $\Psi' = \{(y,y') \mid d_X(y,y') \le 2\epsilon\}$, which produces a $4\epsilon$-simplicial carrier $\scrB_{\Psi'}$ which contains the composite $\scrC_\Omega \circ \scrD_\Psi$ and carries the identity.
We can now apply \autoref{prop:carrier_interleaving} to obtain the result.
\end{proof}
\begin{corollary}\label{cor:rips_cover_hausdorff_joint}
Let $\bX, \bY$ be samples from a metric space $(X, d_X)$, and let $\calW$ be a cover of $\bX \sqcup \bY$ such that $d_H(W|_\bX, W|_\bY) \le \epsilon$ for all $W \in \calWb$. Then $\calR(\bX, \calW; r)$ and $\calR(\bY, \calW; r)$ are $2\epsilon$-interleaved.
\end{corollary}
\begin{proof}
We apply \autoref{cor:rips_cover_hausdorff} taking $\calU = \{ W \cap \bX \mid W\in \calW\}$ and $\calV = \{W \cap \bY \mid W\in \calW\}$.
\end{proof}
\autoref{cor:rips_cover_hausdorff_joint} specializes to the standard stability bound \cite{geometric_stab2014} when $\calU = \{\bX\}$.
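The Hausdorff condition in \autoref{cor:rips_cover_hausdorff} is easy to check for finite point sets; a brute-force sketch under the Euclidean metric:

```python
import numpy as np

def hausdorff(U, V):
    """Hausdorff distance between finite point sets U, V
    (arrays of shape (m, d) and (n, d)), Euclidean metric."""
    # D[i, j] = distance from U[i] to V[j]
    D = np.linalg.norm(U[:, None, :] - V[None, :, :], axis=-1)
    # max over each set of the distance to the nearest point of the other
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

In practice one would check this for every pair of sets in $\calUb \times \calVb$ to verify the hypothesis of the proposition.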
\begin{comment}
The conditions of \autoref{cor:rips_cover_hausdorff} imply that $\calN(\calU) \simeq \calN(\calV)$.
\end{comment}
\subsection{A Generalized Nerve Theorem}\label{sec:gen_nerve_theorem}
We'll now prove a version of the nerve theorem for cover complexes. This result can be viewed as a special case of the approximate nerve theorems in \cite{ApproximateNerveTheorem2017, cavannaGeneralizedPersistentNerve2018}. While our proof is narrower in scope than the aforementioned results, the use of carriers considerably simplifies the proof, compared to \cite{ApproximateNerveTheorem2017}, which used the Mayer-Vietoris spectral sequence, and \cite{cavannaGeneralizedPersistentNerve2018}, which used a construction based on the blowup complex.
\begin{theorem}\label{thm:nerve_theorem}
(Nerve Theorem \cite{borsukNerve}) Let $\calU$ be a cover of a paracompact space $X$, where if $\cap U_i\ne \emptyset$, then $\cap U_i$ is contractible. Then $\calN(\calU)\simeq X$.
\end{theorem}
A proof can be found in \cite{HatcherAT}.
\begin{theorem}\label{thm:acyclic_nerve_theorem}
(An $\alpha$-Acyclic Nerve Theorem) Let $\calU$ be a cover of a vertex set $X$, and let $\X^T(\calU)$ be a simplicial cover complex, with $T$ a strict order with initial object $0$. If $\X^T(V)$ is $\alpha$-acyclic for every $V\in \calUb$, then $H_k(\calN(\calU))$ and $H_k(\X^T(\calU))$ are $(\alpha^{k+1}, \id)$-interleaved.
\end{theorem}
\begin{proof}
We'll construct an interleaving with $\calN(\calUb)$, which has isomorphic homology to $\calN(\calU)$ by \autoref{prop:nerve_equivalence}.
We'll first define a carrier $\scrD:\calN(\calUb)\to \X^T$. We take $\scrD(V) = \X^T(V)$, and $\scrD(V_0,\dots,V_k) = \X^T(V_0\cup\dots\cup V_k) = \X^T(V_{i_k})$, where $V_{i_k}$ is the maximal set in $\{V_0,\dots,V_k\}$. This forms a $(0,\alpha)$-acyclic carrier by assumption, where $0$ denotes the map to the initial object of $T$.
Now, we define a carrier $\scrC:\X^T(\calU)\to \calN(\calUb)$. We take
\begin{equation}\label{eq:nerve_carrier_C}
\scrC(x_0,\dots,x_k) = \bigg\langle \bigg\{\bigcap_{V \supseteq S} V \bigg\}_{S\subseteq \{x_0,\dots,x_k\}}\bigg\rangle
\end{equation}
Let $V' = \bigcap_{V\supseteq \{x_0,\dots,x_k\}} V$.
The carrier above forms a cone with $V'$, so is acyclic.
$\scrD\circ \scrC$ carries the identity because \autoref{eq:nerve_carrier_C} ensures that some $V'$ for which $\{x_0,\dots,x_k\}\subseteq V'$ is included in $\scrC(x_0,\dots,x_k)$, and $\scrD(V) \ni (x_0,\dots,x_k)$ for that $V$. Because all other sets in \autoref{eq:nerve_carrier_C} are contained in $V'$, $\scrD\circ \scrC(x_0,\dots,x_k) = \scrD(V')$, which is $(0,\alpha)$-acyclic by assumption.
Any $(x_0,\dots,x_k)\in \scrD(V)$ satisfies $\{x_0,\dots,x_k\}\subseteq V$. Thus, every $V_i$ generating the carrier in \autoref{eq:nerve_carrier_C} satisfies $V_i \subseteq V$. We can define $\scrA(V)$ to be the star of $V$ inside $\calN(\calUb)$. This carrier is acyclic because it forms a cone with the vertex for $V$, and contains $\scrC\circ \scrD(V)$. For $(V_0,\dots,V_k)\in \calN(\calUb)$, we take $\scrA(V_0,\dots,V_k) = \scrA(V_{i_k})$, where $V_{i_k}$ is the maximal set in the simplex. Again, this carrier is acyclic and carries the identity.
We have now constructed carriers for maps in the following diagram
\begin{equation}
\begin{tikzcd}
\X^T(\calU) \ar[r, hookrightarrow]\ar[d,"\scrC"] &\X^T(\calU)\ar[d,"\scrC"]\\
\calN(\calUb) \ar[r,"\scrA"] \ar[ru,"\scrD"] &\calN(\calUb)
\end{tikzcd}
\end{equation}
We can now construct a map $P_\ast :C_\ast(\X^T(\calU))\to C_\ast(\calN(\calUb))$ carried by $\scrC$ by applying \autoref{prop:filtered_aug_preserving_exists}.
We can also construct maps $F_i^{\alpha^i}: C_i(\calN(\calUb))\to C_i(\X^{\alpha^{i}(0)})$ using \autoref{thm:filtered_acyclic_carrier}, where $\partial_i F_i^{\alpha^i} x = F_{i-1}^{\alpha^{i-1}} \partial_i x$, which we need to construct for $i=0,\dots,k+1$. Because $\scrD\circ \scrC$ carries the inclusion, we can construct a homotopy, but only after increasing the grade by an extra factor of $\alpha$ in each dimension $i$, $I^{\alpha} F_i^{\alpha^i}\circ P_i \simeq I^{\alpha^{i+1}}_i$. In order to compute induced maps on homology for $H_i$, we only need to extend the chain homotopy up to dimension $i$. On homology, we have $\tilde{I}^\alpha \tilde{F}^{\alpha^{k}}_k {\tilde{P}_k} \cong \tilde{I}^{\alpha^{k+1}}_k$.
Finally, because $\scrA$ is acyclic, carries $P_\ast \circ F_\ast$, and carries the inclusion, we have $\tilde{P}_k \circ I^{\alpha} \tilde{F}^{\alpha^k} \simeq I$, so we have constructed an $(\alpha^{k+1},\id)$-interleaving.
\end{proof}
Note that for Vietoris-Rips cover complexes, as well as other geometric complexes, there will be some parameter $t\in T$ at which $\X^T(V)$ will be acyclic for all $V$: namely, when $\X^T(V)$ forms the maximal simplex on its vertex set. At this point, the cover complex and nerve are homotopy equivalent by the standard nerve theorem (\autoref{thm:nerve_theorem}).
\begin{corollary}\label{cor:acyclic_nerve_cor}
Let $\calU$ be a cover of $X$, where $\X^T(\calU)$ satisfies the conditions of \autoref{thm:acyclic_nerve_theorem}. Then if $\calN(\calU)$ is acyclic, $H_k(\X^T(\calU))$ is $(\alpha^{k+1})$-acyclic.
\end{corollary}
\begin{proof}
Because $\calN(\calU)$ is acyclic, the $(\alpha^{k+1},\id)$-interleaving of \autoref{thm:acyclic_nerve_theorem} implies that $H_k(\X^T(\calU))$ is $\alpha^{k+1}$-acyclic.
\end{proof}
\section{Introduction}\label{sec:introduction}
A common task in computational topology is to construct a (filtered) geometric complex from a set of points $\bX$, possibly sampled from some larger space $X\supseteq \bX$, using a pairwise dissimilarity $d:\bX \times \bX \to \RR_+$ between points. Two major applications include statistical recovery of homological features of the larger space $X$ \cite{CImgPatch,carlsson_topological_2014}, perhaps in the process of exploratory data analysis, and generating features for machine learning tasks \cite{cang_topologynet:_2017,hiraoka_hierarchical_2016}.
One limitation of geometric constructions is that they can produce very large combinatorial representations of a space as simplicial complexes, typically growing in the number of points $n$ and maximal simplex dimension $q$ as $O(n^{q+1})$ total simplices. Another limitation is that one must consider the choice of dissimilarity $d$. In general, a dissimilarity may be trusted locally (for small values), but not globally (for large values) -- a key motivation for dimension reduction techniques such as locally linear embeddings \cite{roweis_nonlinear_2000} and ISOMAP \cite{tenenbaum_global_2000}.
For example, if the points $\bX$ are sampled near a low dimensional manifold embedded in Euclidean space, we may choose the metric $d$ to either be the Euclidean distance of the ambient space, or the intrinsic distance of the manifold, perhaps approximated from the sampling. At small distances, the choice of metric will not appear to matter much, but at large distances differences between the two metrics will become much more apparent. These two factors combine to make the calculation of persistent homology from samples difficult even in dimensions as small as 2 or 3 -- either a large number of samples are required to cover a space without growing distance too large, or we must use large non-local distances which are not trusted.
One way to make calculation of higher-dimensional homology of sampled point clouds tractable is to incorporate the additional structure of a map $f:X\to B$. In this setting, the space $X$ is said to be parameterized by $B$, which is called the base space. A variety of tools in continuous topology have been developed, both in the context of homotopy theory, which studies notions such as base-space preserving maps \cite{mayParametrizedHomotopyTheory2006} and fibrations \cite{serreHomologieSinguliereEspaces1951}, and in the context of homology, where the Leray and Leray-Serre spectral sequences can be used to ease calculation \cite{McClearySS}. Many ideas and results in the continuous setting rely on an analysis of {\em fibers} of the map, $f^{-1}(b)$, which poses a difficulty in the discrete setting, where fibers will generally be empty. In this paper, we consider an extension of parameterized spaces to the setting of filtered complexes based not on fibers but on inverse images of sets $f^{-1}(U)$. Generally, the map $f$ is not needed for the construction -- we can simply take any cover of the data (which corresponds to taking $B = X$ and $f$ the identity):
\begin{definition}\label{def:cover_system}
A {\em system of complexes} over a cover $\calU$ is a collection of (filtered) cell complexes $\{\X^T(U)\}_{U\in \calU}$ where $\X^T(U)$ has $U$ as its 0-skeleton, and the restrictions of complexes to intersections of sets in the cover are compatible:
\begin{equation}
\X^T(U_{i})|_{\cap U_k} = \X^T(U_j)|_{\cap U_k}
\end{equation}
for all $U_i, U_j\in \{U_k\} \subseteq \calU$.
\end{definition}
\begin{definition}\label{def:cover_complex}
A {\em cover complex} $\X^T(\calU)$ is the union of complexes in a system of complexes.
\begin{equation}
\X^T(\calU) = \bigcup_{U\in \calU} \X^T(U)
\end{equation}
\end{definition}
This definition of cover complex coincides with a similar definition which appeared in an early pre-print of \cite{ApproximateNerveTheorem2017}, but which was abandoned in subsequent versions. The goal of \cite{ApproximateNerveTheorem2017}, as well as associated literature \cite{chazalPersistencebasedReconstructionEuclidean2008, cavannaGeneralizedPersistentNerve2018}, is to understand when a filtered nerve can effectively be used to approximate a larger computation, a question which we will address for cover complexes in \autoref{sec:gen_nerve_theorem}. In contrast, we will seek to use the actual cover complex in computations in situations where the complex restricted to each set is not necessarily close to acyclic, which we will investigate in \autoref{sec:cover_local_interleavings} and \autoref{sec:cover_full_interleaving}. This has previously been investigated by Yoon \cite{yoon2018} in the calculation of persistent homology of Vietoris-Rips filtrations at small scales in the setting where the nerve of the cover is contractible. These complexes also contain similarities to the multiscale mapper construction \cite{deyMultiscaleMapperTopological2016}, which also uses inverse images of sets in covers, but applies this to simplicial complexes generated using the mapper algorithm \cite{mapper}, which contracts connected components in the inverse image of sets. We shall be interested in higher-dimensional homology as well.
\subsection{Geometric Complexes}\label{sec:geometric_complexes}
In applied topology, there are a variety of methods for constructing simplicial complexes from a data set $\bX$. These complexes allow for the approximation of a larger space from which the data was sampled. Common examples include the Vietoris-Rips complex, \v{C}ech complex, Witness complex, and others -- see \cite{geometric_stab2014} for a review of a variety of constructions.
In this paper, we will focus on Vietoris-Rips complexes which are attractive from a computational point of view because they allow for an easy combinatorial description in arbitrary dimensions (as opposed to \v{C}ech or $\alpha$-complexes), and do not require selection of landmarks as in Witness complexes. The Vietoris-Rips complex uses a dissimilarity $d:\bX \times \bX \to \RR$ to determine whether simplices should be included in the complex.
\begin{definition}\label{def:ripsd}
Let $(\bX, d)$ be a dissimilarity space. We extend the dissimilarity to tuples of points $x_0,\dots,x_k\subseteq \bX$ as
\begin{equation}
d(x_0,\dots,x_k) = \max_{0\le i < j \le k} d(x_i, x_j)
\end{equation}
\end{definition}
By convention, $d(x) = d(x,x) = 0$ for a single point.
\begin{definition}
Let $(\bX, d)$ be a dissimilarity space. The Vietoris-Rips complex $\calR(\bX; r)$ is the union of simplices
\begin{equation}
\calR(\bX; r) = \{(x_0,\dots,x_k) \mid x_0,\dots,x_k\in \bX, d(x_0,\dots,x_k) \le r\}.
\end{equation}
We can use the same notation to refer to a filtration by letting the $r$ parameter vary.
\end{definition}
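To make the construction above concrete, here is a minimal brute-force sketch in Python. The function names and the dense distance-matrix representation are our own illustrative choices, not part of the text: the code enumerates vertex tuples, evaluates the extended dissimilarity of \autoref{def:ripsd}, and keeps those simplices whose value is at most the scale parameter.

```python
from itertools import combinations

def diameter(d, simplex):
    """Extended dissimilarity d(x_0, ..., x_k): the max pairwise value."""
    if len(simplex) < 2:
        return 0.0
    return max(d[i][j] for i, j in combinations(simplex, 2))

def rips_complex(d, r, max_dim=2):
    """All simplices of R(X; r) up to dimension max_dim, each paired
    with the filtration value at which it appears."""
    n = len(d)
    simplices = []
    for k in range(max_dim + 1):
        for s in combinations(range(n), k + 1):
            t = diameter(d, s)
            if t <= r:
                simplices.append((s, t))
    return simplices

# four points on a line at positions 0, 1, 2, 3
d = [[abs(i - j) for j in range(4)] for i in range(4)]
cplx = rips_complex(d, r=1.0, max_dim=2)
# edges present: (0,1), (1,2), (2,3); no triangles at r = 1
```

This enumeration is exponential in the number of points and is intended only to mirror the definition; practical implementations exploit the flag property discussed next.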
Because the Rips filtration is a flag filtration, the simplex $(x_0,\dots,x_k)$ appears at parameter $d(x_0,\dots,x_k)$. We can restrict simplices of this full complex to sets in a cover to obtain an equivalent notion of cover complex:
\begin{definition}\label{def:cover_complex2}
Let $\X^T$ be a filtered cell complex over a poset $T$, with vertex set $\X^T_0 = X$, and let $\calU$ be a cover of $X$. We define the cover complex $\X^T(\calU)$ to be the restriction of $\X^T$ to cells whose 0-skeleton lies in some $U\in \calU$.
\end{definition}
This definition agrees with \autoref{def:cover_complex} where the system of complexes comes from the restriction of the full filtered complex $\X^T$ to sets in $\calU$. In \autoref{sec:application_to_rips} we will specifically consider Vietoris-Rips cover complexes, which we will denote $\calR(\bX, \calU; r)$.
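The restriction in \autoref{def:cover_complex2} amounts to a simple membership test: a simplex survives exactly when its vertex set lies inside a single cover set. A brute-force sketch (function name and input conventions are our own):

```python
from itertools import combinations

def rips_cover_complex(d, cover, r, max_dim=2):
    """Restrict the Rips complex R(X; r) to simplices whose 0-skeleton
    lies inside a single cover set, as in the cover-complex definition."""
    cover = [frozenset(U) for U in cover]
    n = len(d)
    simplices = []
    for k in range(max_dim + 1):
        for s in combinations(range(n), k + 1):
            # keep the simplex only if some cover set contains all vertices
            if not any(set(s) <= U for U in cover):
                continue
            t = 0.0 if k == 0 else max(d[i][j] for i, j in combinations(s, 2))
            if t <= r:
                simplices.append((s, t))
    return simplices

d = [[abs(i - j) for j in range(4)] for i in range(4)]
cover = [{0, 1, 2}, {2, 3}]          # an overlapping cover of {0,1,2,3}
cplx = rips_cover_complex(d, cover, r=2.0, max_dim=2)
# the edge (1,3) has d = 2 <= r but spans no single cover set, so it is dropped
```

Note how the cover complex can be a strict subcomplex of the full Rips complex at the same parameter, which is the gap the interleaving results below quantify.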
\subsection{Homology, Persistence, and Interleavings}\label{sec:persistent_homology}
We are primarily interested in obtaining the persistent homology of filtered complexes, which can be used to describe the robust topological features in a filtration. For additional background on homology, we recommend \cite{HatcherAT}, and for additional information on persistent homology and interleavings, we recommend \cite{Oudot}.
Given a filtration $\X^T$, the homology functor in dimension $q$ produces a persistence vector space $H_q(\X^T)$, where for every filtration value $t\in T$ the complex $\X^t$ has an associated vector space $H_q(\X^t)$, and the inclusion maps $\X^s \subseteq \X^t$ for $s\le t$ have associated linear maps $F_q^{s,t}:H_q(\X^s) \to H_q(\X^t)$, as illustrated by the diagram:
\begin{equation}
\begin{tikzcd}
\X^s\ar[d] \ar[r,hookrightarrow] &\X^t\ar[d]\\
H_q(\X^s) \ar[r,"F_q^{s,t}"] &H_q(\X^t)
\end{tikzcd}
\end{equation}
The dimension, $\dim H_q(\X^t)$, can generally be interpreted to count the number of $q$-dimensional ``holes'' in the space $\X^t$, and the induced maps describe how holes relate to one another throughout the filtration. We will generally consider our posets $T$ to be finite subsets of the real numbers $\RR$, for example, the critical values at which simplices appear in a Vietoris-Rips filtration. In this case, the persistence vector space $H_q(\X^T)$ is described up to isomorphism by a collection of interval indecomposables $\{(b_i,d_i)\}$, or persistence barcode, which track the appearance (birth) and disappearance (death) of new homological features throughout the filtration \cite{ZCComputingPH2005,ZZtheory2010}.
In the context of geometric filtrations, intervals with long lengths $|d_i - b_i|$ are typically considered robust topological features, and those with short lengths are typically considered topological noise.
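The barcode computation referenced above \cite{ZCComputingPH2005} can be carried out by the standard boundary-matrix column reduction over the field with two elements. The following is a minimal, unoptimized sketch; the function name and the list-of-tuples input format are our own, and essential classes are reported as bars with infinite death.

```python
def persistence_pairs(simplices):
    """Standard Z/2 column reduction.  `simplices` is a list of
    (vertex_tuple, filtration_value); returns (birth, death, dim) bars."""
    simplices = sorted(simplices, key=lambda st: (st[1], len(st[0])))
    index = {frozenset(s): i for i, (s, _) in enumerate(simplices)}
    # boundary columns: the codimension-1 faces of each simplex
    cols = [{index[frozenset(s) - {v}] for v in s} if len(s) > 1 else set()
            for s, _ in simplices]
    low_to_col, pairs = {}, []
    for j, col in enumerate(cols):
        # add earlier reduced columns until the pivot (low) is unique
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]
        if col:
            low_to_col[max(col)] = j
            pairs.append((max(col), j))       # i is born, j kills it
    deaths = {i for i, _ in pairs}
    bars = [(simplices[i][1], simplices[j][1], len(simplices[i][0]) - 1)
            for i, j in pairs]
    # unpaired positive simplices give essential (infinite) bars
    bars += [(simplices[i][1], float('inf'), len(simplices[i][0]) - 1)
             for i, c in enumerate(cols) if not c and i not in deaths]
    return bars

# filtration of a triangle boundary, filled in at a later parameter
S = [((0,), 0), ((1,), 0), ((2,), 0),
     ((0, 1), 1), ((1, 2), 1), ((0, 2), 1),
     ((0, 1, 2), 2)]
bars = persistence_pairs(S)
# -> one essential H0 bar (0, inf), two H0 bars (0, 1), one H1 bar (1, 2)
```

The $H_1$ bar $(1,2)$ records the loop created when the three edges appear and destroyed when the triangle is filled.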
We wish to be able to compare the persistent homology of different filtrations, which is accomplished through the use of interleavings \cite{chazalProximityPersistenceModules2009}. We can consider persistence vector spaces abstractly as quiver representations \cite{gabrielI,ZZtheory2010} over the poset $T$, which we denote $V^T$ (forgetting that the vector spaces and linear maps came from homology). In order to compare two different persistence vector spaces, we must first have a notion of map between them.
\begin{definition}\label{def:graded_map}
Let $V^S$ and $W^T$ be persistence vector spaces, and $\alpha:S\to T$ be a non-decreasing map. An $\alpha$-shift map is a collection of linear maps $F^{\alpha} = \{F^{s}: V^s \to W^{\alpha(s)}\}_{s\in S}$ which commute with the maps in $V^S$ and $W^T$
\begin{equation}
\begin{tikzcd}
V^r \ar[r]\ar[d,"F^{r}"] & V^s \ar[d,"F^{s}"]\\
W^{\alpha(r)} \ar[r] & W^{\alpha(s)}
\end{tikzcd}
\end{equation}
\end{definition}
We denote by $I^\alpha:V^S \to V^S$ the self-shift map that simply follows the internal maps of the persistence vector space, $I^\alpha:V^s \to V^{\alpha(s)}$.
An interleaving is a pair of shift maps between persistence vector spaces:
\begin{definition}\label{def:alpha_beta_interleaving}
An {\em $(\alpha,\beta)$-interleaving} between $V^S$ and $W^T$ is a pair of shift maps $F^\alpha: V^S\to W^T, G^\beta:W^T \to V^S$ so that $G^\beta \circ F^\alpha \cong I^{\beta \circ \alpha}$ and $F^\alpha \circ G^\beta \cong I^{\alpha \circ \beta}$.
\end{definition}
If two persistence vector spaces are $(\alpha,\beta)$-interleaved, then any vector $v\in V^s$ with non-zero image in $V^{\beta \circ \alpha(s)}$ must have a non-zero image in $W^{\alpha(s)}$. This provides a way to compare interval indecomposables in the context of persistent homology.
The interleaving distance \cite{chazalProximityPersistenceModules2009} is a distance on persistence vector spaces constructed by considering shift maps of the form $\epsilon: t \mapsto t + \epsilon$. It is the infimum over $\epsilon \ge 0$ admitting an $(\epsilon,\epsilon)$-interleaving of the two persistence vector spaces:
\begin{equation}
d_I(V^S, W^T) = \inf \{ \epsilon \ge 0 \mid \exists (\epsilon,\epsilon) \text{ interleaving of } V^S, W^T \}
\end{equation}
When the shift maps satisfy $\alpha(t) \le t + \epsilon$ and $\beta(t) \le t + \epsilon$, an $(\alpha,\beta)$-interleaving bounds the interleaving distance between the persistence vector spaces from above by $\epsilon$. In the case of single-parameter persistence, the interleaving distance is equivalent to the bottleneck distance on persistence diagrams \cite{lesnick_multid2015}.
Interleavings are often used to obtain stability results explaining how perturbations of an input can affect output persistence vector spaces. An early application of interleavings was to the Gromov-Hausdorff stability of the persistent homology of Vietoris-Rips filtrations.
\begin{theorem}\label{thm:gh_stability}
\cite{GHStable,geometric_stab2014}
Let $(\bX, d_X)$ and $(\bY, d_Y)$ be metric spaces with
$$d_{GH}((\bX, d_X), (\bY,d_Y)) \le \epsilon.$$
Then $H_q(\calR((\bX,d_X); r))$ and $H_q(\calR((\bY,d_Y); r))$ are $(\epsilon,\epsilon)$-interleaved.
\end{theorem}
\subsection{Outline/Contributions}\label{sec:outline}
In this paper, we develop the use of Vietoris-Rips cover complexes, $\calR(\bX, \calU; r)$, with an eye to understanding homological stability properties and their relationship to the full Vietoris-Rips construction. In \autoref{sec:filtered_carriers_and_interleavings} we develop a filtered version of the acyclic carrier theorem which can be used to construct interleavings from initial data. In \autoref{sec:covers}, we build up local-to-global results including Hausdorff stability of $H_q$ and a generalized Nerve theorem. In \autoref{sec:application_to_rips} we characterize the relationship between $H_q(\calR(\bX; r))$ and $H_q(\calR(\bX, \calU; r))$ in terms of interleavings. Finally, in \autoref{sec:computations} we demonstrate the use of Vietoris-Rips cover complexes over base spaces, and target the computation of high-dimensional homology groups of a fiber-bundle associated to high-dimensional image patches. Several of these results were presented in preliminary form in the dissertation of the author \cite{nelson_parameterized_2020}. The present paper includes a simplified and focused exposition, new results relating Vietoris-Rips cover complexes to sparse filtrations, and additional computational examples.
\section{Rips-Cover Constructions}\label{sec:application_to_rips}
We now focus on Vietoris-Rips cover complexes, which we denote as $\calR(\bX, \calU; r)$. We seek to answer the following questions:
\begin{enumerate}
\item For a fixed cover $\calU$, how sensitive is $\calR(\bX, \calU; r)$ to perturbations of the underlying data $\bX$?
\item For a fixed dataset $\bX$, how sensitive is $\calR(\bX, \calU; r)$ to the choice of cover $\calU$?
\item How does $\calR(\bX, \calU; r)$ relate to the full Vietoris-Rips complex $\calR(\bX;r)$?
\end{enumerate}
A related definition is the {\em Rips system} found in Yoon's 2018 dissertation \cite{yoon2018}, which is used for distributed computation of persistent homology of Rips complexes via cellular (co)-sheaves. Yoon shows that if the nerve is 1-dimensional and the system covers the full Rips complex, then the Rips system can be used to obtain the homology of the full complex, and develops a distributed algorithm for computation. We will consider more general coverings, and characterize regimes where the cover complex and full complex are interleaved, but not identical. Distribution schemes for computing persistent homology of cover complexes in their full generality are beyond the scope of this work.
\subsection{Interleavings for Arbitrary Covers}\label{sec:cover_full_interleaving}
We now turn to relating the persistent homology of $\calR(\bX, \calU; r)$ to the persistent homology of $\calR(\bX; r)$. At large $r$ parameters, Vietoris-Rips complexes become acyclic, so it follows from \autoref{thm:acyclic_nerve_theorem} that $PH_\ast(\calR(\bX, \calU; r))$ will eventually converge to $H_\ast(\calN(\calU))$. This means that unless $\calN(\calU)$ is acyclic, $PH_\ast(\calR(\bX, \calU; r))$ and $PH_\ast(\calR(\bX; r))$ cannot possibly interleave for sufficiently large $r$ parameters. However, in situations where sets in the cover have non-trivial structure, we would like to understand how this structure relates to the full filtration $\calR(\bX; r)$, particularly for small values of $r$.
Because there are inclusions $\calR(\bX, \calU; r) \hookrightarrow \calR(\bX; r)$, it suffices to study under what conditions we can extend a map $f^\alpha$ in the diagram
\begin{equation}\label{eq:cover_interleaving}
\begin{tikzcd}
\calR(\bX, \calU; r) \ar[r, hookrightarrow] \ar[d, hookrightarrow]& \calR(\bX, \calU; \alpha(r))\ar[d, hookrightarrow]\\
\calR(\bX; r) \ar[r, hookrightarrow]\ar[ur, "f^\alpha"] & \calR(\bX; \alpha(r))
\end{tikzcd}
\end{equation}
We focus on a carrier $\scrC: \calR(\bX; r) \to \calR(\bX, \calU; r)$ generated from witness sets
\begin{equation}
\bX(x_0,\dots,x_k) = \{ y\in \bX \mid d(y,x_i) \le d(x_0,\dots,x_k)\ \forall i=0,\dots,k\}
\end{equation}
and their union, denoted
\begin{equation}
\bar{\bX}(x_0,\dots,x_k) =\bigcup_{S \in \calP(\{x_0,\dots,x_k\})}\bX(S)
\end{equation}
where $\calP$ denotes the power set. We define the carrier $\scrC: \calR(\bX; r) \to \calR(\bX, \calU; r)$ via
\begin{equation}\label{eq:rips_cover_carrier}
\scrC: (x_0,\dots,x_k) \mapsto \langle\bar{\bX}(x_0,\dots,x_k) \rangle
\end{equation}
and let
\begin{equation}
\calUb(x_0,\dots,x_k) = \{V \cap \bar{\bX}(x_0,\dots,x_k) \mid V\in \calU, \bar{\bX}(x_0,\dots,x_k) \cap V \ne \emptyset \}
\end{equation}
which covers $\scrC(x_0,\dots,x_k)$.
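The witness sets $\bX(x_0,\dots,x_k)$ and their union $\bar{\bX}(x_0,\dots,x_k)$ admit a direct brute-force computation. In the sketch below (our own function names, points indexed into a dense distance matrix), the union ranges over non-empty sub-tuples, with singletons contributing only the point itself since $d(x)=0$:

```python
from itertools import combinations, chain

def witness_set(d, simplex):
    """X(x_0,...,x_k): points within the simplex diameter of every vertex."""
    r = max((d[i][j] for i, j in combinations(simplex, 2)), default=0)
    return {y for y in range(len(d))
            if all(d[y][x] <= r for x in simplex)}

def witness_union(d, simplex):
    """Xbar(x_0,...,x_k): the union of witness sets over non-empty sub-tuples."""
    subsets = chain.from_iterable(
        combinations(simplex, k) for k in range(1, len(simplex) + 1))
    out = set()
    for s in subsets:
        out |= witness_set(d, s)
    return out

d = [[abs(i - j) for j in range(4)] for i in range(4)]
# witness_set(d, (0, 2)): points within distance d(0,2) = 2 of both 0 and 2
```

The carrier $\scrC$ then sends a simplex to the subcomplex spanned by this union, intersected with the cover as above.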
\begin{definition}\label{def:rips_inter_thres}
We define three thresholds: $R_1\le R_2 \le R_3$ which describe different regimes of the non-decreasing map $\alpha$.
\begin{enumerate}
\item Let $R_1$ be the largest value so that if $d(x_0,\dots,x_k)\le R_1$ then there exists some $U\in \calU$ so that $x_0,\dots,x_k\in U$.
\item Let $R_2$ be the largest value so that if $d(x_0,\dots,x_k) \le R_2$ then $\calN(\calUb(x_0,\dots,x_k))$ is acyclic and $\bX(x_0,\dots,x_k)\cap V$ is non-empty for each $V\in \calUb(x_0,\dots,x_k)$.
\item Let $R_3$ be the largest value so that if $d(x_0,\dots,x_k) \le R_3$ then $\calN(\calUb(x_0,\dots,x_k))$ is acyclic.
\end{enumerate}
\end{definition}
If we impose the mild condition that for any points $x,y\in \bX$ with $d(x,y) = 0$ the set $U\cap \{x, y\}$ is either $\{x,y\}$ or empty for all $U\in \calU$, then $0\le R_1$.
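For a finite data set, $R_1$ can be estimated by brute force. The sketch below (our own, hypothetical helper) checks only pairs: it returns the smallest distance between two points that share no cover set. Since a tuple can fail to fit in a single set even when each of its pairs does, this gives only an upper bound on $R_1$.

```python
def r1_upper_bound(d, cover):
    """Upper bound on R_1 from pairs alone: the smallest distance between
    two points sharing no cover set.  Higher-dimensional tuples can still
    violate the R_1 condition below this value."""
    cover = [set(U) for U in cover]
    n = len(d)
    bad = [d[i][j] for i in range(n) for j in range(i + 1, n)
           if not any({i, j} <= U for U in cover)]
    return min(bad) if bad else float('inf')

d = [[abs(i - j) for j in range(4)] for i in range(4)]
cover = [{0, 1, 2}, {2, 3}]
# the closest pair split by the cover is (1, 3) at distance 2
```

If some cover set contains all points, no pair is ever split and the bound is infinite, consistent with the cover complex equaling the full Rips complex.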
\begin{theorem}\label{thm:rips_cover_interleaving}
$H_k(\calR(\bX, \calU; r))$ and $H_k(\calR(\bX; r))$ are $(\id,\alpha)$-interleaved, where $\alpha(r) = r$ for $r \le R_1$, $\alpha(r) \le 2r$ for $r\le R_2$ and $\alpha(r) \le 3r$ for $r\le R_3$. For $r > R_3$, an interleaving may not exist.
\end{theorem}
Proofs of each inequality are in \autoref{prop:cover_rips_identical}, \autoref{prop:cover_rips_inter_2r}, and \autoref{prop:cover_rips_inter_3r}.
\begin{proposition}\label{prop:cover_rips_identical}
$\calR(\bX, \calU; r) = \calR(\bX; r)$ for all $r\le R_1$.
\end{proposition}
\begin{proof}
This follows because if $(x_0,\dots,x_k)\in \calR(\bX; r)$, then $d(x_0,\dots,x_k)\le r \le R_1$, so $(x_0,\dots,x_k)\in \calR(\bX, U; r)\subseteq \calR(\bX, \calU; r)$ for some $U\in \calU$. Thus $\calR(\bX;r) \subseteq \calR(\bX, \calU; r)$, and we already know $\calR(\bX, \calU; r) \subseteq \calR(\bX; r)$, giving equality.
\end{proof}
This means that covers $\calU$ that encode some notion of locality produce cover complexes which are identical to the full Rips complex at the beginning of the filtration.
We now turn to the non-trivial interleavings. Let $\iota:\calR(\bX, \calU; r) \to \calR(\bX; r)$ denote the canonical inclusion, seen in \autoref{eq:cover_interleaving}. Clearly, $\scrC \circ \iota$ carries the inclusion $\calR(\bX, \calU; r) \to \calR(\bX, \calU; \alpha(r))$.
However, the carrier $\iota \circ \scrC$ does not carry the inclusion for any simplices in $\calR(\bX; r)$ that are not in the cover complex $\calR(\bX, \calU; r)$. We need to find another carrier which does carry the inclusion and which also contains this carrier. Consider $\scrD: \calR(\bX;r) \to \calR(\bX; r)$, defined as
\begin{equation}\label{eq:rips_cover_carrier2}
\scrD: (x_0,\dots,x_k) \mapsto \langle\bar{\bX}(x_0,\dots,x_k) \rangle.
\end{equation}
The difference between $\scrC$ and $\scrD$, despite the similarity of their definitions, is that they map to different complexes. $\scrC$ maps to subcomplexes of $\calR(\bX, \calU; r)$, and $\scrD$ maps to subcomplexes of $\calR(\bX; r)$. Note that $\scrD$ does carry $\iota \circ \scrC$.
If $\scrD$ is also $\alpha$-acyclic, we can apply \autoref{prop:carrier_interleaving} to construct the interleaving. The remainder of this section describes conditions that will allow us to bound the non-decreasing map $\alpha$.
\begin{lemma}
If $(\bX, d)$ is a metric space, then $\scrD$ is $\alpha$-acyclic for $\alpha: r\mapsto 2r$.
\end{lemma}
\begin{proof}
Consider $\scrD(x_0,\dots,x_k)$, and let $r = d_X(x_0,\dots,x_k)$. Without loss of generality, consider distances to $x_0$. Let $y\in \scrD(x_0,\dots,x_k)$. By definition of $\scrD$, either $d(y,x_0)\le r$, or $d(y,x_i) \le r$ for some $x_i \in \{x_1,\dots,x_k\}$. Because $d_X(x_0,x_i)\le r$, by the triangle inequality, $d(y,x_0)\le 2r$. Because the Vietoris-Rips complex is a flag complex, this implies $\scrD(x_0,\dots,x_k)$ forms a cone with $x_0$ at parameter $2r$ and so is acyclic.
\end{proof}
The more difficult carrier to analyze is $\scrC$. We will consider the restriction of the cover to the carrier. If $\calN(\calUb(x_0,\dots,x_k))$ is acyclic for each $(x_0,\dots,x_k)\in \calR(\bX; r)$, and each $\calR(V; r)$, $V\in \calUb(x_0,\dots,x_k)$, is $\alpha$-contractible, then $\scrC$ is $\alpha$-acyclic by the Nerve theorem.
\begin{lemma}\label{lem:restricted_cover_3r}
Let $r = d_X(x_0,\dots,x_k)$. For each $V\in \calUb(x_0,\dots,x_k)$, $\calR(V; 3r)$ is contractible.
\end{lemma}
\begin{proof}
Let $y,y'\in V$. Then there are some $x,x'\in \{x_0,\dots,x_k\}$ for which $d(y,x), d(y',x') \le r$. Because $d(x,x')\le r$, by triangle inequality $d(y,y')\le 3r$. Thus, $\calR(V;3r)$ forms a simplex, so is contractible.
\end{proof}
In general, the bound in \autoref{lem:restricted_cover_3r} can be pessimistic. For instance,
\begin{lemma}\label{lem:restricted_cover_2r}
Let $r \le R_2$, so that for $d_X(x_0,\dots,x_k) \le r$, $\bX(x_0,\dots,x_k)\cap V$ is non-empty for each $V\in \calUb(x_0,\dots,x_k)$. Then $\calR(V; 2r)$ is contractible.
\end{lemma}
\begin{proof}
Fix $V\in \calUb$. By assumption, there is some $y\in V$ so that $d(y,x_i) \le r$ for all $i=0,\dots,k$. For some other $y'\in V$, we have $d(y',x_i) \le r$ for some $i=0,\dots,k$. By the triangle inequality, $d(y,y')\le 2r$. Since this holds for all $y'\in V$, $\calR(V; 2r)$ forms a cone with $y$, and is thus contractible.
\end{proof}
We can now tie things together in the following propositions.
\begin{proposition}\label{prop:cover_rips_inter_2r}
$H_k(\calR(\bX, \calU; r))$ and $H_k(\calR(\bX; r))$ are $(\id, 2r)$-interleaved for $r \le R_2$.
\end{proposition}
\begin{proposition}\label{prop:cover_rips_inter_3r}
$H_k(\calR(\bX, \calU; r))$ and $H_k(\calR(\bX; r))$ are $(\id, 3r)$-interleaved for $r \le R_3$.
\end{proposition}
\begin{proof}
We use the approximate nerve theorem, \autoref{thm:acyclic_nerve_theorem}, to show that $\scrC$ is acyclic under the conditions of \autoref{prop:cover_rips_inter_2r} and \autoref{prop:cover_rips_inter_3r}.
By the definitions of $R_2$ and $R_3$, the nerve $\calN(\calUb(x_0,\dots,x_k))$ is acyclic in both propositions. \autoref{lem:restricted_cover_2r} or \autoref{lem:restricted_cover_3r} ensures that each set in the nerve is $2r$- or $3r$-acyclic respectively, so the whole carrier is $2r$- or $3r$-acyclic respectively.
\end{proof}
Note that the sets in the covers $\calU$ do not need to be acyclic at the levels prescribed, but rather their restriction to points within a certain distance of each simplex. This means there can be a variety of non-trivial structure in each set in the cover.
\subsection{Sparse Filtrations via Covers}\label{sec:sparse_filt_cover}
\autoref{thm:rips_cover_interleaving} can be applied to any cover $\calU$ of a data set $\bX$, and to a certain extent can guide the selection of a cover $\calU$ that increases $R_1, R_2$, and $R_3$ as much as possible:
\begin{enumerate}
\item $R_1$ is determined by the threshold at which for all $x\in \bX$, there exists some set $U\in\calU$ which contains all points within distance $R_1$ of $x$.
\item To maximize $R_2$, we want to ensure that the cover $\calU$ contains witnesses to simplices. This may require sets covering large distances in sparse regions.
\item To maximize $R_3$, we want to make $\calN(\calUb(x_0,\dots,x_k))$ acyclic for all simplices $(x_0,\dots, x_k)$. This requires sufficient overlap of sets in the cover.
\end{enumerate}
If the goal is to construct a cover that gives an interleaving for all filtration values, a practical approach is to construct a sparse filtration, as originally proposed by Sheehy \cite{sheehyLinearSizeApproximationsVietoris2013}. We consider a variant of this approach using a Vietoris-Rips cover complex which is amenable to a straightforward analysis. Another, more geometric, approach based on persistent nerves is studied in \cite{cavanna_geometric_2015}. The key differences between the approach here and \cite{sheehyLinearSizeApproximationsVietoris2013,cavanna_geometric_2015} are that the Vietoris-Rips cover complexes are not generally flag complexes, and that we do not consider re-weighting of edges to tighten the interleaving.
Consider a nested sequence of greedily chosen landmark sets $\bL_0 \subset \bL_1 \subset \dots \subset \bL_n = \bX$, so $\bL_i = \bL_{i-1} \cup \{x_i\}$ where $x_0$ can be chosen arbitrarily, and $x_i\in \bX$ is a point that realizes the Hausdorff distance $d_H(\bL_{i-1}, \bX)$. Let $\lambda_i = d_H(\bL_{i-1}, \bX)$, with $\lambda_0 = \infty$, and let $c > 1$ be a fixed constant. We construct a cover $\calU$ of the data set $\bX$ by associating a set $U_x$ to each element $x\in \bX$
\begin{equation}
U_x = \bigcup_{i=1}^n \{ \ell \in \bL_i \mid d(\ell, x) \le c \lambda_i\}.
\end{equation}
Each set is non-empty because $x\in \bL_n$ and $d(x,x) = 0$ implies $x\in U_x$. Furthermore, the nerve of the cover $\calN(\calU)$ is acyclic: since the single point $x_0$ contained in $\bL_0$ is contained in every set, every intersection of sets in $\calU$ is non-empty.
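The greedy (maxmin) landmark ordering and the resulting cover $U_x$ can be computed directly. The sketch below is our own; it assumes a dense distance matrix, distinct points, and the indexing convention that $\bL_{i-1}$ consists of the first $i$ landmarks, so that $\lambda_i = d_H(\bL_{i-1}, \bX)$:

```python
def greedy_landmarks(d, first=0):
    """Maxmin landmark ordering: each new landmark realizes the current
    Hausdorff distance d_H(L_{i-1}, X).  Returns the ordering and the
    insertion radii lambda_i = d_H(L_{i-1}, X), with lambda_0 = infinity."""
    n = len(d)
    order, lambdas = [first], [float('inf')]
    dist_to_L = list(d[first])
    while len(order) < n:
        x = max(range(n), key=lambda y: dist_to_L[y])
        lambdas.append(dist_to_L[x])
        order.append(x)
        dist_to_L = [min(dist_to_L[y], d[x][y]) for y in range(n)]
    return order, lambdas

def sparse_cover(d, order, lambdas, c=3.0):
    """The cover U_x = union_i { l in L_i : d(l, x) <= c * lambda_i }."""
    n = len(d)
    cover = {x: set() for x in range(n)}
    for i in range(1, n):
        lam, L_i = lambdas[i], order[:i + 1]
        for x in range(n):
            cover[x].update(l for l in L_i if d[l][x] <= c * lam)
    return cover

d = [[abs(i - j) for j in range(4)] for i in range(4)]
order, lambdas = greedy_landmarks(d)   # order = [0, 3, 1, 2]
cover = sparse_cover(d, order, lambdas, c=3.0)
```

Early landmarks enter many sets with a generous radius $c\lambda_i$, while later landmarks are admitted only locally, mirroring the sparse-filtration intuition that coarse scales need few witnesses.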
\begin{lemma}\label{lem:insertion_lb}
Let $x_0,\dots, x_q\in \bX$, with $d(x_0,\dots,x_q) = r$. Then for any $\lambda_i \ge \frac{r}{c-1}$, there exists some $\ell \in \bL_i$ so that $\ell \in \cap_{j=0}^q U_{x_j}$.
\end{lemma}
\begin{proof}
We consider a landmark set $\bL\subseteq \bX$ with $d_H(\bL, \bX) = \epsilon$. Without loss of generality, we take the point $x_0$, for which there must exist some $\ell\in \bL$ such that $d(x_0, \ell) \le \epsilon$ by the Hausdorff distance bound, so $\ell \in U_{x_0}$. In order to guarantee that $\ell$ is in $U_{x_i}, i=1,\dots,q$, we must satisfy
\begin{align*}
d(x_i,\ell) \le d(x_i,x_0) + d(x_0,\ell) &\le c \epsilon\\
r + \epsilon &\le c \epsilon\\
r &\le (c-1) \epsilon\\
\epsilon &\ge \frac{r}{c-1}
\end{align*}
Thus, for any $\bL_i$ with $\lambda_i = d_H(\bL_{i-1}, \bX)\ge \frac{r}{c-1}$, there exists such an $\ell$.
\end{proof}
\begin{lemma}\label{lem:witness_ub}
Let $x_0,\dots, x_q\in \bX$ with $d(x_0,\dots,x_q) = r$, and let $i$ be the index such that $\lambda_i \ge \frac{r}{c-1}$ and $\lambda_{i+1} < \frac{r}{c-1}$. Then there exist $\ell_j\in \bL_i$, $j=0,1,\dots,q$, such that $d(x_j, \ell_j) \le \lambda_{i+1} < \frac{r}{c-1}$.
\end{lemma}
\begin{proof}
We have that $\lambda_{i+1} = d_H(\bL_i, \bX) < \frac{r}{c-1}$, so for all $x\in \bX$, there is some $\ell\in \bL_i$ so that $d(x,\ell) \le \lambda_{i+1} < \frac{r}{c-1}$.
\end{proof}
\begin{proposition}\label{prop:sparse_witness}
Let $x_0,\dots, x_q\in \bX$, with $d(x_0,\dots,x_q) = r$. Then there exists an $\ell \in \cap_{j=0}^q U_{x_j}$ with $d(x_j, \ell) \le \frac{cr}{c-1}$ for all $j=0,\dots,q$.
\end{proposition}
\begin{proof}
Let $i$ be the index such that $\lambda_i \ge \frac{r}{c-1}$ and $\lambda_{i+1} < \frac{r}{c-1}$. From \autoref{lem:witness_ub}, there is an $\ell \in \bL_i$ with $d(\ell, x_0) < \frac{r}{c-1}$, and since $\lambda_i \ge \frac{r}{c-1}$, $\ell \in U_{x_0}$. Now, from \autoref{lem:insertion_lb}, such an $\ell$ is also in $U_{x_j}$ for $j=1,\dots,q$.
\end{proof}
This motivates the construction of a carrier $\scrC:\calR(\bX; r) \to \calR(\bX, \calU; r)$. Let
\begin{equation}\label{eq:sparse_carrier}
\scrC(x_0,\dots,x_q) = \bigg\langle \{\ell(x_0,\dots,x_q)\} \cup \bigcup_{\sigma \subsetneq \{x_0,\dots,x_q\}} \scrC(\sigma) \bigg\rangle
\end{equation}
where $\ell(x_0,\dots,x_q)$ is an arbitrary choice of $\ell$ which satisfies \autoref{prop:sparse_witness}.
\begin{proposition}\label{prop:sparse_carrier}
Let $d(x_0,\dots,x_q) = r$. Then the carrier in \autoref{eq:sparse_carrier} is acyclic at level $\frac{2c}{c-1} r$.
\end{proposition}
\begin{proof}
We consider when the carrier forms a cone with $\ell = \ell(x_0,\dots,x_q)$, which is in every set $U_{x_i}$. Let $y$ be a point in $\scrC(x_0,\dots,x_q)$. Either $y$ is one of $x_0,\dots,x_q$, or it was included as $\ell(\sigma)$ for some $\sigma\subset \{x_0,\dots,x_q\}$. Because $d(\sigma) \le d(x_0,\dots,x_q)$, this means that $d(y,x_j) \le \frac{c}{c-1}r$ for any $x_j \in \sigma$. We can then bound using the triangle inequality
\begin{align}
d(y,\ell) &\le d(y, x_j) + d(x_j,\ell)\\
&\le \frac{c}{c-1} r + \frac{c}{c-1} r\\
&\le \frac{2c}{c-1} r.
\end{align}
Because this holds for any point $y$ in the carrier, $\scrC(x_0,\dots,x_q)$ forms a cone by level $\frac{2c}{c-1} r$, so is acyclic.
\end{proof}
If we wish to obtain an $\alpha = 1+\epsilon$ interleaving, we can calculate that we must set $c = \frac{\epsilon+1}{\epsilon-1}$. In the limit of $c\to \infty$, $\epsilon\to 1$ from above, so we are limited to $\alpha > 2$ using this strategy. We can achieve $\alpha = 3$ by setting $c = 3$, which limits the size of the sets in the cover while achieving a relatively small multiplicative interleaving bound. A tighter bound can be achieved by re-weighting edges with distance $\ge \frac{c}{c-1} r$ in \autoref{prop:sparse_carrier} -- see \cite{sheehyLinearSizeApproximationsVietoris2013,cavanna_geometric_2015} for details.
\label{sec:INTRO}
Although in lattice QCD at maximal twist (Mtm-LQCD) O($a$) discretization
effects (actually all O($a^{2k+1}$), $k\geq 0$ effects) are absent
or easily eliminated~\cite{TM,FR1,FR2,FR4}, it turns out that correlators
are affected by dangerous artifacts of relative order $a^{2k}$, $k\geq 1$,
which are enhanced by inverse powers of the (squared) pion mass, as the latter
becomes small. In fact, when analyzed in terms of the Symanzik expansion,
lattice expectation values exhibit, as $m_\pi^2\to 0$, what we will call
``infrared (IR) divergent'' cutoff effects with a behaviour of the form
\begin{equation}
\<O\>\Big{|}^L_{m_q}=\<O\>\Big{|}^{\rm cont}_{m_q}
\Big{[}1+{\rm O}\Big{(}\frac{a^{2k}}{(m_\pi^2)^{h}}\Big{)}\Big{]}
\, ,\quad 2k\geq h\geq 1 \,\,(k,h \,\,{\rm integers})\, ,\label{ORD}\end{equation}
where we have assumed that the lattice correlator
admits a non-trivial continuum limit. Powers of $\Lambda_{\rm QCD}$ required
to match physical dimensions are often understood in the following.
We shall see that artifacts of the type~(\ref{ORD}) are reduced to terms
that are at worst of order $a^{2}(a^2/m_\pi^2)^{k-1}$, $k\geq 1$,
if the action is O($a$) improved {\it \`a la} Symanzik, or alternatively
the critical mass is chosen in some ``optimal'' way.
The idea that a suitable definition of critical mass exists which
can lead to a smoothing out of chirally enhanced lattice artifacts or
perhaps be of help in getting improvement was already put forward in the
context of chiral perturbation theory in refs.~\cite{SHWUNEW}
and~\cite{AB}, respectively.
An important consequence of our analysis is that the strong
(order of magnitude) inequality $m_q> a\Lambda^2_{\rm QCD}$,
invoked in ref.~\cite{FR1}
can be relaxed to the weaker relation $m_q> a^2\Lambda^3_{\rm QCD}$,
before large cutoff effects are possibly
met while lowering the quark mass at fixed $a$.
The works of refs.~\cite{SHWUNEW,AB}, and most recently
refs.~\cite{AB05,SH05}, which are based on lattice chiral
perturbation theory, lead to essentially equivalent conclusions about
cutoff effects in pion quantities in the parameter region
$m_q> a^2 \Lambda^3_{\rm QCD}$. They also yield interesting
predictions on the possible Wilson fermions phase scenarios~\cite{PS,SCO}
and results, when $m_q$ is of order $a^2$ or smaller.
A thorough discussion on the effectiveness of Mtm-LQCD in killing O($a$)
discretization errors and the ability of the optimal choice of the critical
mass in diminishing the magnitude of lattice artifacts at small quark mass
can be found in~\cite{SHI,PAP} and in the work of refs.~\cite{CAN,XLFNEW}.
As for Mtm-LQCD with clover-improved quark action, the promising quenched tests
presented some years ago in~\cite{DMetal} have been recently extended
in~\cite{Lub05} down to pion masses of 300~MeV or lower, confirming the
absence of large cutoff effects.
The outline of this presentation is as follows. In Section~\ref{sec:SEOLC} we
analyze the form of the Symanzik expansion of lattice correlators beyond O($a$)
and explain why and how ``IR divergent'' cutoff effects arise in this context.
In Section~\ref{sec:KLL} we discuss two ways of killing all the leading
``IR divergent'' cutoff effects and we describe the structure of the
left-over ``IR divergent'' terms. Finally in Section~\ref{sec:ART}
we collect some remarks on the peculiar structure of the lattice artifacts
affecting lattice hadronic energies and in particular pion masses.
Conclusions can be found in Section~\ref{sec:CONC}.
\section{Symanzik analysis of ``IR divergent'' cutoff artifacts}
\label{sec:SEOLC}
The study of cutoff artifacts affecting lattice correlators
in Mtm-LQCD can be elegantly made in the language of the Symanzik
expansion. A full analysis of cutoff effects beyond O($a$) is of
course extremely complicated. Fortunately it is not necessary,
if we limit the discussion to the terms that are enhanced as the
quark mass $m_q$ is decreased.
$\bullet$ {\it The Symanzik LEEA of Mtm-LQCD} - The
expression of the fermionic action of Mtm-LQCD in the
physical quark basis is given in~\cite{FR1,FR2}. The
low energy effective action (LEEA), $S_{\rm Sym}$, of
the theory can be conveniently written in the form
\begin{equation}
S_{\rm Sym}=\int\!d^4y\,\Big{[}{\cal L}_4(y)+
\sum_{k=0}^{\infty}a^{2k+1}\ell_{4+2k+1}(y)
+\sum_{k=1}^{\infty}a^{2k}\ell_{4+2k}(y)\Big{]}\, ,
\label{SLEEA}\end{equation}
where ${\cal L}_4=\frac{1}{2g_0^2}{\rm tr}(F\!\cdot\! F)+
\bar\psi(\gamma \!\cdot\! D + m_q)\,\psi$ is the target
continuum QCD Lagrangian. Based on the symmetries
of Mtm-LQCD a number of interesting properties
enjoyed by $S_{\rm Sym}$ can be proved which are summarized below.
1. Lagrangian densities of even dimension, $\ell_{2k}$, in
eq.~(\ref{SLEEA}) are parity-even, while terms of odd dimension,
$\ell_{2k+1}$, are parity-odd and twisted in iso-spin space.
Thus the latter have the quantum numbers of the neutral pion.
2. The term of order $a$ in eq.~(\ref{DEFEO}), $\ell_5$,
is given by the linear combination
\begin{eqnarray}
\hspace{-1.5cm}&&\ell_5=\delta_{5,SW}\,\ell_{5,SW}+\delta_{5,m^2}\,
\ell_{5,m^2}+\delta_{5,e}\,\ell_{5,e}\, ,\label{L5}\\
\hspace{-1.5cm}&&\ell_{5,SW}=
\frac{i}{4}\bar\psi[\sigma\cdot F]i\gamma_5\tau_3\psi \, ,
\!\!\quad\ell_{5,m^2} = m_q^2 \bar\psi i\gamma_5\tau_3\psi \, ,\!\!\quad
\ell_{5,e} = \Lambda_{\rm QCD}^2 \bar\psi i\gamma_5\tau_3\psi \, ,
\label{L51}
\end{eqnarray}
where the coefficients $\delta_{5,SW}$, $\delta_{5,m^2}$ and $\delta_{5,e}$
are dimensionless quantities, odd in $r$. The operator $\ell_{5,e}$ arises
from the need to describe order $a$ uncertainties entering any
non-perturbative determination of the critical mass and goes together
with $\ell_{5,SW}$. Both $\ell_{5,SW}$ and $\ell_{5,e}$ could be made to
disappear from~(\ref{SLEEA}) by introducing in the Mtm-LQCD action the
SW (clover)-term~\cite{SW} with the appropriate non-perturbatively
determined $c_{SW}$ coefficient~\cite{LU} and at the same time setting
the critical mass to its correspondingly O($a$) improved value.
3. Higher order ambiguities ($k\geq 1$) in the critical mass, which
will all contribute to ${\cal L}_{\rm odd}$, are described by terms
proportional to odd powers of $a$ of the kind
\begin{equation} a^{2k+1}\,\delta_{4+2k+1,e}\,\ell_{4+2k+1,e}=a^{2k+1}\,\delta_{4+2k+1,e}\,
(\Lambda_{\rm QCD})^{2k+2}\,\bar\psi i\gamma_5\tau_3\psi\, .\label{HK}\end{equation}
$\bullet$ {\it Describing Mtm-LQCD correlators beyond O($a$)} -
We are interested in the Symanzik description of the lattice artifacts
affecting connected expectation values of $n$-point, multi-local,
multiplicative renormalizable (m.r.) and gauge-invariant operators
$O(x_1,x_2,\ldots,x_n)=\prod_{j=1}^{n} O_j(x_j)\equiv O(x)$,
$x_1\neq x_2 \neq \ldots \neq x_n$, which we take to have continuum
vacuum quantum numbers, so as to yield a non-trivially
vanishing result as $a\to 0$. In order
to ensure automatic O($a$) improvement~\cite{FR1} we shall assume that $O$
is parity invariant in which case its Symanzik expansion
will contain only even powers of $a$. Schematically we write
\begin{eqnarray}
\hspace{-.3cm}&&\<O({x})\>\Big{|}_{m_q}^{L}\!=\!
\<[O({x})+\Delta_{\rm odd}O(x)+\Delta_{\rm even}O(x)]
e^{-\int\!d^4 y[{\cal L}_{\rm odd}(y)+{\cal L}_{\rm even}(y)]}\>
\Big{|}^{\rm cont}_{m_q}\, , \label{SEOP}\\
&&{\cal L}_{\rm odd}=\sum_{k=0}^{\infty}a^{2k+1}\ell_{4+2k+1}\, ,\qquad
{\cal L}_{\rm even}=\sum_{k=1}^{\infty}a^{2k}\ell_{4+2k}\, .
\label{DEFEO}
\end{eqnarray}
The operators $\Delta_{\rm odd}O$ ($\Delta_{\rm even}O$)
have an expansion in odd (even) powers of $a$. They can be viewed as the
$n$-point operators necessary for the on-shell improvement of
$O$~\cite{HMPRS,LU}.
$\bullet$ {\it Pion poles and ``IR divergent'' cutoff effects} -
Although a complete analysis of all the ``IR divergent'' cutoff effects
is very complicated, the structure of the leading ones ($h=2k$ in
eq.~(\ref{ORD})) is rather simple, as they only come from continuum
correlators where $2k$ factors $\int d^4y\,{\cal L}_{\rm odd}(y)$
are inserted. More precisely the leading ``IR divergent'' cutoff
effects are identified on the basis of the following result~\cite{FR4}.
In the Symanzik expansion of $\<O(x)\>|^L_{m_q}$ at order $a^{2k}$
($k\geq 1$) there appear terms with a $2k$-fold pion pole and residues
proportional to $|\<\Omega|{\cal L}_{\rm odd}|\pi^0({\bf 0})\>|^{2k}$,
where $\<\Omega|$ and $|\pi^0({\bf 0})\>$ denote the vacuum
and the one-$\pi^0$ state at zero three-momentum, respectively.
Putting different factors together, each one of these terms can be seen
to be schematically of the form (recall ${\cal L}_{\rm odd}={\rm O}(a)$)
\begin{eqnarray}
\hspace{-1.cm}
\Big{[}\Big(\frac{1}{m_\pi^2}\Big)^{2k}
(\xi_{\pi})^{2k}{\cal M}[O;\{\pi^0({\bf 0})\}_{2k}]
\Big{]}_{m_q}^{\rm cont}\, ,\qquad
\xi_{\pi}=\Big{|}\<\Omega|{\cal L}_{\rm odd}|
\pi^0({\bf 0})\>\Big{|}_{m_q}^{\rm cont}\, ,\label{METREO}
\end{eqnarray}
where we have generically denoted by ${\cal M}[O;\{\pi^0({\bf 0})\}_{2k}]$
the $2k$-particle matrix elements of $O$, with each of the $2k$ particles
being a zero three-momentum neutral pion.
Less ``IR divergent'' cutoff effects (those with $h$ strictly
smaller than $2k$ in eq.~(\ref{ORD})) come either from terms
with some extra $\int d^4y{\cal L}_{\rm even}(y)$ insertions or from
contributions of more complicated intermediate states other than
straight zero three-momentum pions or from both. In the first case
one gets extra $a$ powers (not all ``compensated'' by corresponding
pion poles), while in the second one loses some $1/m_\pi^2$ factor.
It is important to remark that the appearance of pion poles like the
ones in eq.~(\ref{METREO}) in no way means that the lattice correlators
diverge as $m_q\to 0$, but only that the Symanzik expansion we have
employed appears to have a finite radius of convergence (on this point
see the remarks of ref.~\cite{SH05}).
\section{Reducing ``IR divergent'' cutoff artifacts}
\label{sec:KLL}
Recalling that ${\cal{L}}_{\rm odd}=a\,\ell_5+{\rm O}(a^3)$, the previous
analysis shows that at leading order
in $a$ the residue of the most severe multiple pion poles is
proportional to $|\<\Omega|\ell_{5}|\pi^0({\bf 0})\>|^{2k}$. It is an
immediate conclusion then that the leading ``IR divergent'' cutoff
effects can all be eliminated from lattice data if we can either reduce $\ell_{5}$
to only ${\ell}_{5,m^2}$ in~(\ref{L5}) or set $\xi_\pi$ to zero.
$\bullet$ {\it Improving the Mtm-LQCD action by the SW-term} -
The obvious, field-theoretical way to eliminate $\ell_{5}$ from
the LEEA of Mtm-LQCD consists in making use of the O($a$) improved
action~\cite{SW,LU,HMPRS}.
In this case lattice correlation functions will admit a Symanzik
description in terms of a LEEA where the operators ${\ell}_{5,SW}$ and
${\ell}_{5,e}$ are absent, and ${\ell}_5$ is simply given by
${\ell}_{5,m^2}$. The left-over contributions
arising from the insertions of ${\ell}_{5,m^2}$ in $\<O\>|_{m_q}^{\rm cont}$
yield terms that are at most of order
$(am_q^2/m_\pi^2)^{2k}\simeq (a m_q)^{2k}$, hence negligible in the chiral
limit. It is instead the next odd operator in the
Symanzik expansion, $a^3\ell_7$, which comes into play.
A detailed combinatoric analysis based on the structure of the
non-leading ``IR divergent'' cutoff effects~\cite{FR4}
reveals that the worst lattice artifacts left behind in correlators
after the ``clover cure'' are of the kind $a^2(a^2/m_\pi^2)^{k-1}$, $k\geq 1$.
$\bullet$ {\it Optimal choice of the critical mass} -
The alternative strategy to kill the leading ``IR divergent'' cutoff
effects consists in leaving the Mtm-LQCD action unimproved,
but fixing the critical mass through the condition
\begin{equation}
\lim_{m_q \to 0^{+}} \xi_\pi(m_q)=\lim_{m_q \to 0^{+}} \;
\Big{|}\<\Omega|{\cal{L}}_{\rm odd}|\pi^0({\bf 0})\>
\Big{|}^{\rm cont}_{m_q}\; = \; 0 \, .
\label{EFFCOND}
\end{equation}
The meaning of~(\ref{EFFCOND}) is simple. It amounts
to fix, for $k\geq 0$, the order $a^{2k+1}$ contribution in the
counter-term, $M_{\rm cr}\bar\psi^L i\gamma_5\tau_3\psi^L$,
so that its vacuum to one-$\pi^0({\bf 0})$ matrix element
compensates, in the limit $m_q\to 0$, the similar matrix element
of the sum of all the other operators making up $\ell_{4+2k+1}$.
A concrete procedure designed to implement condition~(\ref{EFFCOND})
in actual simulations was discussed in ref.~\cite{FR4}. It consists in
determining the critical mass by requiring the lattice correlator
$a^3\sum_{\bf x}\;\<V_0^2(x)P^1(0)\>|^L_{m_q}$ ($x_0 \neq 0$)
to vanish in the chiral limit, where
$V_0^2=\bar\psi\gamma_0\frac{\tau_2}{2}\psi$ is the vector current
with iso-spin index 2 and $P^1=\bar\psi \gamma_5 \frac{\tau_1}{2}\psi$
the pseudo-scalar density with iso-spin index 1.
In the continuum this correlator is zero by parity for any value of $m_q$.
On the lattice the breaking of parity (and iso-spin) due to the twisting
of the Wilson term makes it non-vanishing by pure discretization artifacts,
which have the form of a power series expansion in $\xi_\pi/m_\pi^2$.
The important conclusion of the analysis presented in~\cite{FR4}
is that it is not necessary (nor possible) to really go
to $m_q\to 0$. It is enough to have the critical mass determined by the
vanishing of the above correlator
at the current simulation quark mass, provided we stay in the region
$m_q>a^2\Lambda^3_{\rm QCD}$. Under these conditions we will have
$\xi_\pi(m_q)={\rm O}(am_\pi^2)$ with all the leading ``IR divergent''
cutoff effects reduced to finite O($a^{2k}$) terms. As for the
subleading ones, a non-trivial diagrammatic analysis shows that the worst
of them, left behind after the ``optimal critical mass cure'',
are reduced to only $a^2(a^2/m_\pi^2)^{k-1}$, $k\geq 1$, effects, just like
in the case where the clover term is employed.
\section{Artifacts on hadronic energies and pion masses}
\label{sec:ART}
In the language of the Symanzik expansion discretization artifacts on
hadronic energies are described by a set of diagrams where at least one
among the inserted $\int {\cal L}_{\rm odd}$ factors gets necessarily absorbed
in a multi-particle matrix element, with the consequence that it is
not available for producing a pion pole.
As a consequence, at fixed order in $a$,
the most ``IR divergent'' lattice corrections to continuum hadronic energies
contain one overall factor $1/m_\pi^2$ less than the leading ``IR divergent''
cutoff effects generically affecting correlators.
For instance, to order $a^2$ the difference between lattice
and continuum energy of the hadron $\alpha_n$ reads~\cite{FR4}
\begin{eqnarray}
\hspace{-0.5cm}
\Delta E_{\alpha_n}({\bf q})\Big{|}_{a^2} \; \propto \; \left[
\frac{a^2}{m_\pi^2}
{\rm Re} \left(\frac{\<\Omega| \ell_5 | \pi^0({\bf 0}) \>
\<\pi^0({\bf 0})\alpha_n ({\bf q}) | \ell_5 |\alpha_n ({\bf q}) \>}
{2 E_{\alpha_n}({\bf q}) }\right)+ {\rm O}(a^2)
\right]_{m_q}^{\rm cont}\, ,
\label{DE2LEAD}
\end{eqnarray}
where ${\rm O}(a^2)$ denotes ``IR finite'' corrections.
It should be noted that this ``IR divergent'' lattice artifact
is reduced to an ``IR finite'' correction
after either of the two ``cures'' described in Sect.~\ref{sec:KLL}.
Specializing the formula~(\ref{DE2LEAD}) to the case of pions, one
obtains the interesting result that the difference between charged
and neutral pion (square) masses is a finite O($a^2$) quantity even
if the critical mass has not been set to its optimal value or the
clover term has not been introduced. The reason is that the
leading ``IR divergent'' contributions shown in~(\ref{DE2LEAD})
are equal for all pions (as one can prove by standard soft pion
theorems~\cite{SPT}), hence cancel in the (square) mass difference.
This conclusion is in agreement with detailed results from chiral
perturbation theory (see refs.~\cite{SCO} and~\cite{SHWUNEW}),
as well as with the first numerical estimates of the pion
mass splitting in Mtm-LQCD~\cite{LIV05}.
\section{Conclusions}
\label{sec:CONC}
When analyzed in terms of the Symanzik expansion, lattice correlators
in Mtm-LQCD show ``IR divergent'' cutoff effects
which tend to become large as the quark mass gets small. Extending
the works of refs.~\cite{AB,SHWUNEW}, we have shown
that such lattice artifacts are strongly reduced, to terms that are at worst
of the type $a^{2}(a^2/m_\pi^2)^{k-1}$, $k\geq 1$, both when the critical mass
is chosen in some ``optimal'' way and when the action is clover improved.
The latter result implies that the continuum extrapolation of lattice data is smooth
at least down to values of the quark mass satisfying the order of
magnitude inequality $m_q >a^2\Lambda^3_{\rm QCD}$.
\vskip .2cm
\noindent{\bf Acknowledgments - }
We thank the LOC for the exciting atmosphere of the
Conference. G.C.R. gratefully acknowledges financial support from
Humboldt Foundation and NIC (DESY - Zeuthen).
\section{Introduction}
Many models of systems consist of smaller sub-systems which interact with each other over a network. In particular, such models are often of a large scale, in the sense that they are described by a large number of state variables. There is a renewed interest in such systems due to applications in social dynamics, the power grid, neuroscience, and more.
In large-scale networked systems, it is desirable to design controllers that stabilize the system using local measurements only, and using as little control effort as possible. For example, in the context of pandemic control, this corresponds to minimizing the use of protective resources~\cite{Preciado2014optimal_resource}, or minimizing the negative effect of lockdowns on the economy~\cite{Ma2022optimal}. Naturally, the control design algorithm must also be computationally efficient.
One powerful approach for the analysis and control synthesis of large-scale
nonlinear systems is contraction
theory~\cite{LOHMILLER1998683,sontag_cotraction_tutorial}. Contractivity
implies a well-ordered asymptotic behaviour: if the system is
time-invariant and admits an equilibrium then the equilibrium is globally
exponentially stable (see, e.g.~\cite{sontag_cotraction_tutorial}). If the
system is time-varying and~$T$-periodic then contraction implies
entrainment, that is, every state variable in the network converges to a unique $T$-periodic solution (see,
e.g.~\cite{LOHMILLER1998683,entrain2011,RFM_entrain}). This property is
important in many natural and artificial systems ranging from power
electronics to systems biology. Furthermore, there exist easy to check
sufficient conditions for contraction of networked systems based on matrix
measures~\cite{Russo2013hier_contraction}.
The main contribution of this paper is a new approach for
the computationally efficient design of ``minimum-effort'' local controllers
guaranteeing that the closed-loop network system is
contracting with a specified contraction rate.
We demonstrate our approach by designing the control in a network of FitzHugh–Nagumo~(FHN) neurons, with a general interaction topology, so that the network is contractive. Our approach provides conditions guaranteeing that the local controllers make the closed-loop network contractive and thus guaranteeing entrainment to periodic excitations.
This property plays an important role in a multitude of sensory and cognitive processes~\cite{LAKATOS2019R890}.
Our approach brings together in a creative new way two recent results. The first is a sufficient condition for contraction of a nonlinear networked system, that appeared in~\cite{Davydov2021noneuclidean} in the context of contraction with respect to norms induced by weak pairings. This sufficient condition turns the question of contraction to that of checking whether a certain Metzler matrix is Hurwitz. The second result is a method for finding the minimal diagonal perturbation required to stabilize a Metzler matrix, presented in~\cite{Ma2022optimal} in the context of optimal lockdown design for controlling pandemics. This method is based on an elegant reduction of the optimization problem to a matrix balancing problem that can be solved using efficient algorithms. As this result is very recent, we provide here a self-contained review.
We use the following notation. $\R_{\ge0}^n$ [$\R_{>0}^n$] is the subset of vectors in~$\R^n$ with non-negative [positive] entries. For~$A \in \R^{n \times n}$, $\alpha(A)$ denotes the spectral abscissa of~$A$, i.e. the maximal real part of the eigenvalues of~$A$.
A matrix~$M\in\R^{n\times n}$ is called [marginally] Hurwitz if~$\alpha(M)<0$ [$\alpha(M)=0$].
A matrix~$M\in\R^{n\times n}$ is called Metzler if all its off-diagonal entries are non-negative. This is equivalent to the fact that the flow of~$\dot x=Mx$ maps~$\R_{\geq 0 }^n$ to itself, i.e., the linear system is positive.
Dynamical systems that admit an invariant cone are called positive or monotone, and it is well-known that for such systems stability analysis and control synthesis tend to scale well with the system dimension~\cite{RANTZER201572}.
For~$A,B\in\R^{n\times m}$ we write~$A \leq B$ if~$a_{ij} \leq b_{ij}$ for all~$i,j$. Let~$\mathbbm{1}_n \in\R^n$ denote the vector with~$n$ entries equal to~$1$.
\section{Problem formulation}
Consider a networked system consisting of $m$ time-varying subsystems
\begin{equation}\label{eq:network}
\dot x^i(t) = f^i(t,x^1(t),\dots,x^m(t)) - u_i(t) x^i(t),\quad i=1,\dots,m,
\end{equation}
where $x^i \in \Omega^i \subseteq \R^{n_i}$ and $u_i \in \R$. Note that~$u_ix^i$ may be interpreted as a local controller in subsystem~$i$, with a stabilizing effect when~$u_i$ is positive.
We assume that~$f^i$ is continuously differentiable, and that~$\Omega^i$ is convex for any~$i\in\{1,\dots,m\}$. Let $n := \sum_{i=1}^m n_i$, $\Omega := \Omega^1 \times \dots \times \Omega^m$, and
\[
x := \begin{bmatrix}
x^1 \\
\vdots \\
x^m
\end{bmatrix}.
\]
Then $x \in \Omega \subseteq \R^n$. Let $u := \begin{bmatrix}u_1 & \dots & u_m \end{bmatrix}^T $, and denote
$
\delta_{ij}:=\begin{cases} 1, &i=j,\\
0, & i\not =j.
\end{cases}
$
The derivative of the vector field
in~\eqref{eq:network} with respect to~$x^j$
is
\begin{equation}\label{eq:jij}
J^{ij}(t,x,u) := \frac{\partial f^i}{\partial x^j}(t,x) - \delta_{ij} u_i(t) I_{n_i}.
\end{equation}
Let
\begin{equation}\label{eq:net_jacobian}
J(t,x,u) := \begin{bmatrix}
J^{11}(t,x,u) & \cdots & J^{1m}(t,x,u) \\
\vdots & \ddots & \\
J^{m1}(t,x,u) & & J^{mm}(t,x,u)
\end{bmatrix}.
\end{equation}
Fix~$\eta>0$. It follows from~\eqref{eq:jij} that if
all the~$\frac{\partial f^i}{\partial x^j} $s are uniformly bounded
then the overall system can be made contracting with rate~$\eta$ by
setting~$u_i(t)\equiv c$, $i=1,\dots,m$, with~$c>0$ sufficiently large.
This naturally yields the question of how to find a ``minimum effort control''
that guarantees that the networked system is contracting with rate~$\eta$.
We formalize this question by posing
the following optimization problem.
Given~$\eta>0$, a weight vector~$w \in \R_{>0}^m$, and a matrix measure~$\mu:\R^{n\times n} \to \R$, consider the problem
\begin{equation}\label{eq:opt_contract}
\begin{aligned}
\min_{v \in \R_{>0}^m } \quad & w^T v, \\
\mathrm{s.t.} \quad & \mu(J(t,x,v)) \le -\eta \text{ for all } t \ge 0,\; x \in \Omega .
\end{aligned}
\end{equation}
In other words, the goal is to find constant controls~$u_i(t)\equiv v_i$, $i=1,\dots,m$,
guaranteeing that the network system is contractive with rate~$\eta$, while minimizing the ``total cost'' $w^T v$.
In particular, by setting~$w_i\gg w_j$ for all~$j \not =i$, we can try to find a solution that guarantees a small control
effort~$u_i(t)\equiv v_i$ in the~$i$th controller,
if such a solution exists.
The optimization problem~\eqref{eq:opt_contract}
is difficult to address directly because the constraint on~$\mu(J)$
has to hold everywhere in the state-space and for all time. Furthermore, the matrix measure~$\mu$ is itself a decision variable of the problem, and it is not clear how to choose a ``good''~$\mu$.
The approach we propose here
overcomes these difficulties by: (1)~replacing the constraint by a stronger condition which only requires that a certain \emph{constant}
Metzler matrix is (marginally) stable. This removes the need to study the Jacobian directly, and essentially makes the choice of matrix measure implicit; and (2) efficiently solving
the resulting optimization problem using matrix balancing.
The remainder of this note is organized as follows. The next section reviews several known definitions and results that are used later on. Section~\ref{sec:main}
describes our main results. Section~\ref{sec:appli} demonstrates an application of our theoretical results to a network of FHN neurons, and the final section concludes.
\section{Preliminaries}
We first review known results that will be used later on.
\subsection{Sufficient condition for contraction in networked systems}
We briefly review a result by Str{\"o}m~\cite{Strom1975} which gives an upper bound for the matrix measure of a block matrix~$A$ based on the matrix measure of a smaller matrix $B$, where each entry of $B$ corresponds to a single block of~$A$. Given~$x\in\R^n$, decompose it as
\begin{equation}
x = \begin{bmatrix}
x^1 \\
\vdots \\
x^m
\end{bmatrix},\;\;x^i \in \R^{n_i},\;\;\sum_{i=1}^m n_i = n.
\end{equation}
Let $|\cdot|_i$ denote a norm on $\R^{n_i}$, and let~$|\cdot|_0$ denote a \emph{monotonic} norm\footnote{A vector norm~$|\cdot|:\R^n\to\R_{\ge0}$ is called monotonic if~$|y_i| \leq |x_i|$ for all~$i = 1,\dots,n$ implies that~$|y| \leq |x|$; see~\cite{Bauer1961_mono_norms} for more details.} on~$\R^m$. Define a norm~$|\cdot|:\R^n \to \R_+$ by
\begin{equation}\label{eq:hier_norm}
|x| := \left|
\begin{bmatrix}
|x^1|_1 \\ \vdots \\ |x^m|_m
\end{bmatrix}
\right|_0.
\end{equation}
Given $A \in \R^{n \times n}$, partition it into blocks $A^{ij} \in \R^{n_i \times n_j}$, with~$i,j\in\{1,\dots,m\} $, and define their induced matrix norms by
$
\|A^{ij}\|_{ij} := \sup_{ z \in \R^{n_j} \setminus\{0\} } |A^{ij} z|_i / |z|_j.
$
\begin{Theorem}[\cite{Strom1975}] \label{thm:strom}
Let~$\mu$ denote the matrix measure induced by the norm~$|\cdot|$ defined in~\eqref{eq:hier_norm}.
Let~$\mu_i$ denote the matrix measure induced by $|\cdot|_i$, $i = 0,\dots,m$.
Define $B \in \R^{m \times m}$ by
\[
B_{ij} := \begin{cases}
\mu_i(A^{ii}), & i=j, \\
\|A^{ij}\|_{ij}, & i \neq j.
\end{cases}
\]
Then
$ \mu(A) \le \mu_0(B)$.
\end{Theorem}
Thus, if~$\mu_0(B)\leq -\eta<0$ then~$\mu(A)\leq -\eta<0$. Note that~$B$ is Metzler by construction.
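To make Theorem~\ref{thm:strom} concrete, the following snippet (an illustration of ours, not part of~\cite{Strom1975}) checks the bound numerically in the simplest case where every norm, including the monotonic norm~$|\cdot|_0$, is chosen as~$\ell_\infty$; then~$\mu_\infty(A)=\max_i\big(a_{ii}+\sum_{j\neq i}|a_{ij}|\big)$ and the induced matrix norm is the maximum absolute row sum. The test matrix is arbitrary.

```python
def mu_inf(A):
    # matrix measure induced by the l-infinity norm:
    # mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )
    n = len(A)
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

def norm_inf(A):
    # induced l-infinity matrix norm: maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

def block(A, rows, cols):
    return [[A[i][j] for j in cols] for i in rows]

# an arbitrary 4x4 test matrix, partitioned into four 2x2 blocks
A = [[-5.0,  1.0,  0.5, -0.2],
     [ 2.0, -6.0,  0.3,  0.1],
     [ 0.4, -0.1, -4.0,  1.5],
     [ 0.2,  0.3,  2.0, -7.0]]
parts = [range(0, 2), range(2, 4)]

# B_ij = mu_i(A^ii) on the diagonal, ||A^ij||_ij off the diagonal
B = [[mu_inf(block(A, parts[i], parts[j])) if i == j
      else norm_inf(block(A, parts[i], parts[j]))
      for j in range(2)] for i in range(2)]

assert mu_inf(A) <= mu_inf(B) + 1e-12   # the bound mu(A) <= mu_0(B)
assert mu_inf(B) < 0                    # hence A is contracting in this norm
```

For this particular matrix the bound happens to be tight; in general the theorem only guarantees~$\mu(A)\le\mu_0(B)$.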
\subsection{Matrix balancing}
A non-negative matrix $A \in \R^{n \times n}_{\geq 0}$ is called \emph{balanced} (some authors use the term \emph{sum-symmetric}~\cite{Bapat1997nonnegative}) if
$
A \mathbbm{1}_n = A^T \mathbbm{1}_n.
$
In other words, the sum of the entries in row~$i$ of~$A$ is
equal to the sum of entries in column~$i$ of~$A$, for all~$i=1,\dots,n$.
For example, every symmetric matrix is balanced. Also, every doubly stochastic matrix is balanced, as the sum of every row and every column is one.
A Metzler matrix~$A\in\R^{n\times n} $ is said to be \emph{balancable via diagonal similarity scaling} (BDSS) if there exists a diagonal matrix~$D \in \R^{n \times n}$, with positive diagonal entries, such that~$D^{-1} A D$ is balanced. The following result from~\cite{Kalantari1997balancing}
presents a sufficient condition for~BDSS, and shows that balancing is equivalent to solving an optimization problem.
Balancing is typically presented for non-negative matrices.
We state this result in the slightly more general setting of Metzler matrices. The proof is in the appendix.
\begin{Theorem}[Balancing Theorem]\label{thm:diag_balance} \cite{Kalantari1997balancing}
Let~$A \in \R^{n \times n}$ be Metzler and irreducible. Define~$f : \R_{>0}^n \to \R$ by
\begin{equation}\label{eq:def_fd}
f(d) := \mathbbm{1}_n^T (\operatorname{diag}(d))^{-1} A \operatorname{diag}(d) \mathbbm{1}_n ,
\end{equation}
and consider the optimization problem
\begin{equation}\label{eq:minfd}
\min_{d \in \R_{>0}^n} f(d).
\end{equation}
Then:
\begin{enumerate}
\item There exists a $d^* \in \R_{>0}^n$
that
is a solution of~\eqref{eq:minfd};
\item $A$ is~BDSS and in particular~$(\operatorname{diag}(d^*))^{-1} A \operatorname{diag}(d^*)$ is balanced; and
\item If $\bar d, d^* \in \R_{>0}^n$ are solutions of~\eqref{eq:minfd}, then $\bar d = c d^*$ for some~$c > 0$.
\end{enumerate}
\end{Theorem}
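For intuition, the balancing of Theorem~\ref{thm:diag_balance} can be computed by a simple coordinate-descent scheme: cyclically rescale~$d_i$ by~$\sqrt{r_i/c_i}$, where~$r_i$ and~$c_i$ are the off-diagonal $i$th row and column sums of~$(\operatorname{diag}(d))^{-1}A\operatorname{diag}(d)$, which exactly equalizes them at each step. The sketch below is ours (the near-linear-time algorithms of~\cite{cohen2017matrix} are far more sophisticated) and assumes enough sweeps for convergence on a small example.

```python
import math

def balance(A, sweeps=500):
    # cyclic rescaling d_i <- d_i * sqrt(r_i / c_i), where r_i (c_i) is the
    # off-diagonal i-th row (column) sum of D^{-1} A D
    n = len(A)
    d = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            r = sum(A[i][j] * d[j] / d[i] for j in range(n) if j != i)
            c = sum(A[j][i] * d[i] / d[j] for j in range(n) if j != i)
            d[i] *= math.sqrt(r / c)
    return d

# Metzler and irreducible; the diagonal plays no role in the balancing
A = [[-1.0, 2.0, 0.0],
     [ 1.0, -1.0, 3.0],
     [ 4.0, 0.0, -1.0]]
d = balance(A)
B = [[A[i][j] * d[j] / d[i] for j in range(3)] for i in range(3)]
rows = [sum(B[i][j] for j in range(3)) for i in range(3)]
cols = [sum(B[j][i] for j in range(3)) for i in range(3)]
assert all(abs(rows[i] - cols[i]) < 1e-8 for i in range(3))  # B 1 = B^T 1
```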
\begin{Remark}
A matrix is called \emph{completely reducible} if it
is permutation-similar to a block-diagonal matrix, where each block is \emph{irreducible}. Equivalently, the graph corresponding to a completely reducible matrix is a union of strongly connected graphs.
Several recent papers state that irreducibility is a necessary and sufficient condition for~BDSS. This is wrong. For example, the identity matrix is~BDSS, but not irreducible. The correct statement is:
a non-negative matrix is~BDSS if and only if it is completely reducible.
Many of the results in this note which assume irreducibility (including Prop.~\ref{prop:strom_contraction}, Thm.~\ref{thm:diag_balance}, Lemma~\ref{lem:olshevsky_margin_stab}, and Thm.~\ref{thm:olshevsky_stab_bal} below)
are easily extended to the more general case of complete reducibility.
\end{Remark}
\begin{Remark} \label{rem:diagonal-invariance}
The diagonal entries of~$A$
do not affect the balancing: if $D^{-1}AD$ is balanced, with~$D$ a positive diagonal matrix, then for any diagonal matrix~$P$, $D^{-1}(A+P)D$ is also balanced.
\end{Remark}
There exist efficient numerical algorithms for matrix balancing that, under certain conditions, run in nearly linear time in the number of non-zero entries of the matrix, see~\cite{cohen2017matrix}.
Matrix balancing is a useful preconditioning step in many matrix algorithms, and
procedures for matrix balancing are often included in numeric computing software (e.g., the procedure \texttt{balance} in MATLAB).
In some cases, there are closed-form expressions for the positive diagonal matrix~$D$ which balances~$A$. The following well-known result (see, e.g.~\cite[Ch.~0]{total_book}) gives such an expression for tridiagonal matrices. Note that a tridiagonal matrix is irreducible if and only if all entries on the super- and sub-diagonal are non-zero.
\begin{Proposition}\label{prop:tridiag_balance}
Let~$A \in \R^{n \times n}$ be Metzler, irreducible and tridiagonal. Define the positive diagonal matrix~$D \in \R^{n \times n}$ by~$d_{11} := 1$, and~$d_{ii} := \sqrt{\prod_{j=1}^{i-1}\frac{a_{j+1,j}}{a_{j,j+1}}}$ for~$i \geq 2$. Then~$D^{-1}AD$ is symmetric and thus balanced.
\end{Proposition}
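The closed-form balancing of Prop.~\ref{prop:tridiag_balance} is easy to verify numerically; the short script below (our illustration, on an arbitrary tridiagonal example) builds~$D$ from the stated formula and checks that~$D^{-1}AD$ is symmetric, with off-diagonal entries~$\sqrt{a_{i,i+1}a_{i+1,i}}$.

```python
import math

# tridiagonal, Metzler, irreducible (non-zero super- and sub-diagonals)
A = [[-2.0, 3.0, 0.0],
     [ 1.0, -2.0, 5.0],
     [ 0.0, 2.0, -3.0]]
n = len(A)

# d_11 = 1,  d_ii = sqrt( prod_{j < i} a_{j+1,j} / a_{j,j+1} )
d = [1.0]
for i in range(1, n):
    d.append(d[-1] * math.sqrt(A[i][i - 1] / A[i - 1][i]))

B = [[A[i][j] * d[j] / d[i] for j in range(n)] for i in range(n)]
# D^{-1} A D is symmetric, with off-diagonal entries sqrt(a_{i,i+1} a_{i+1,i})
assert all(abs(B[i][j] - B[j][i]) < 1e-12 for i in range(n) for j in range(n))
assert abs(B[0][1] - math.sqrt(3.0 * 1.0)) < 1e-12
```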
\subsection{Marginal stability of a Metzler matrix}
There are several well-known characterizations of when a Metzler matrix is Hurwitz~\cite{berman87,bullo_graph_metzler}. For our purposes, we need the following condition for marginal stability of a Metzler matrix. For the sake of completeness, we include the proof.
\begin{Lemma}\label{lem:olshevsky_margin_stab}
Let $A \in \R^{n \times n}$ be Metzler and irreducible. Then~$\alpha(A) \le 0$ iff there exists~$d \in \R_{>0}^n$ such that~$Ad \le 0$.
\end{Lemma}
\begin{proof}
Suppose that there exists $d \in \R_{>0}^n$ such that $Ad \le 0$. Then,
$
A \operatorname{diag}(d) \mathbbm{1} = Ad \le 0,
$
so the sum of every row of the matrix~$B:=(\operatorname{diag}(d))^{-1}A\operatorname{diag}(d)$ is non-positive. Since $A$ is Metzler, so is~$B$, and thus
$ b_{ii} + \sum_{\substack{j=1\\j\neq i}}^n |b_{ij}| = \sum_{j=1}^n b_{ij} \le 0,\quad i=1,\dots, n.
$
By Gershgorin's Theorem~\cite[Thm.~6.1.1]{Horn2013matanalysis}, all eigenvalues of~$B$ lie in the closed left half plane. This implies that the same holds for the eigenvalues of~$A$.
To prove the converse implication, assume that~$\alpha(A) \le 0$. Since $A$ is Metzler and irreducible, there exists $r \ge 0$ such that~$
S := A + r I
$
is irreducible and non-negative.
By the Perron-Frobenius Theorem~\cite[Thm.~8.4.4]{Horn2013matanalysis}, $S$ has a real eigenvalue $\lambda >0$ and corresponding eigenvector~$d\in \R_{>0}^n$. By the assumption,~$\lambda \le r$. Thus,~$ S d = \lambda d \le r d.
$ This gives~$A d \le 0$, and this completes the proof.
\end{proof}
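Lemma~\ref{lem:olshevsky_margin_stab} can be illustrated numerically: for a Metzler irreducible~$A$ with~$\alpha(A)\le 0$, the Perron eigenvector of the shifted non-negative matrix~$S=A+rI$, computed here by power iteration, furnishes the certificate~$d>0$ with~$Ad\le 0$. The example matrix is ours (every row sum is non-positive, so~$\alpha(A)\le0$ by Gershgorin).

```python
def perron(S, iters=1000):
    # power iteration for the Perron root and eigenvector of an
    # irreducible non-negative (here also primitive) matrix S
    n = len(S)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [wi / lam for wi in w]
    return lam, v

# Metzler, irreducible, with alpha(A) <= 0 (every row sum is <= 0)
A = [[-3.0, 1.0, 1.0],
     [ 2.0, -4.0, 0.5],
     [ 1.0, 1.0, -2.0]]
r = 4.0  # any shift making S = A + r I non-negative
S = [[A[i][j] + (r if i == j else 0.0) for j in range(3)] for i in range(3)]
lam, d = perron(S)

alpha = lam - r                           # spectral abscissa of A
Ad = [sum(A[i][j] * d[j] for j in range(3)) for i in range(3)]
assert alpha <= 0
assert all(x < 0 for x in Ad) and all(x > 0 for x in d)  # certificate A d <= 0
```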
\subsection{Minimal effort diagonal stabilization of Metzler matrices}
We now review a minimal effort controller design for an irreducible positive LTI system based on matrix balancing. To the best of our knowledge, this idea first appeared in~\cite{Ma2022optimal} in the context of optimal lockdown design for pandemic control. With respect to~\cite[Theorem~4.6 and its proof]{Ma2022optimal}, the following theorem statement is more general (e.g., it allows for general Metzler matrices, arbitrary target spectral abscissa,
and for the diagonal perturbation to take negative values), more explicit (e.g., an explicit formula for the diagonal perturbation is given as a function of the balancing diagonal matrix), and establishes additional properties of the transcription (e.g., the Perron eigenvector of the closed-loop system); additionally, the proof is more concise.
\begin{Theorem}\label{thm:olshevsky_stab_bal}
Let $A \in \R^{n \times n}$ be Metzler and irreducible.
Fix weights~$w \in \R_{>0}^n$ and target spectral abscissa~$\eta \in \R$.
Let~$d^* \in \R_{>0}^n$ be such that the matrix
\[
(\operatorname{diag}(d^*))^{-1} \operatorname{diag}(w) A \operatorname{diag}(d^*)
\]
is balanced and define
\begin{equation}\label{eq:opt_cont}
\ell^* := (\operatorname{diag}(d^*))^{-1} A \operatorname{diag}(d^*) \mathbbm{1}_n - \eta \mathbbm{1}_n.
\end{equation}
Then
\begin{enumerate}
\item the Metzler matrix $A - \operatorname{diag}(\ell^*)$ has spectral abscissa~$\eta$ and Perron eigenvector~$d^*$.
\item $\ell^*$ is the unique solution of the optimization problem
\begin{equation}\label{eq:optim_stab}
\begin{aligned}
\min_{\ell \in \R^n} \quad & w^T \ell, \\
\mathrm{s.t.} \quad & \alpha(A - \operatorname{diag}(\ell)) \le \eta.
\end{aligned}
\end{equation}
\item If~$A - \eta I \ge 0$ then $\ell^* \in \R_{>0}^n$.
\end{enumerate}
\end{Theorem}
For~$\eta\leq 0$ the goal of problem~\eqref{eq:optim_stab} is to guarantee that
the spectral abscissa of~$B:=A-\operatorname{diag}(\ell)$ is smaller than or equal to~$\eta $, so in particular the irreducible matrix~$B$ is (marginally) Hurwitz. This should be done with the ``smallest possible'' diagonal perturbation~$\ell$, in the sense that~$w^T\ell$ is minimized.
There is considerable literature
on finding the closest
Metzler and Hurwitz matrix to a given matrix (see~\cite{closest_metlzer} and the references therein), but the advantages of the formulation in~\eqref{eq:optim_stab}
are:
(1)~$\alpha(A-\operatorname{diag}(\ell)) $ is convex in~$\ell$~\cite{cohen1981convexity};
and
(2)~as we will see below, it can be naturally interpreted as finding ``minimal effort'' local controllers that render a network contractive.
Note that~$w$ does not appear explicitly in the formula for~$\ell^*$, but the vector~$d^*$ there does depend on~$w$.
\begin{proof}
Since $A$ is Metzler and irreducible and $w\in\R^n_{>0}$, $\operatorname{diag}(w)A$ is also Metzler and irreducible, and by Thm.~\ref{thm:diag_balance} it is~BDSS.
We now show that $\ell^*$ in~\eqref{eq:opt_cont}
is the optimal solution to~\eqref{eq:optim_stab}.
By Thm.~\ref{thm:diag_balance} and Remark~\ref{rem:diagonal-invariance},~$d^*$ in the theorem statement is a minimizer of
\begin{equation}\label{eq:optim_stab_ww}
\min_{d \in \R_{>0}^n} f(d),
\end{equation}
with
\begin{align*}
f(d) &:=
\mathbbm{1}_n^T (\operatorname{diag}(d))^{-1}\operatorname{diag}(w) (A - \eta I) \operatorname{diag}(d) \mathbbm{1}_n \nonumber \\
&= w^T (\operatorname{diag}(d))^{-1} (A - \eta I) \operatorname{diag}(d) \mathbbm{1}_n.
\end{align*}
Since~$w \in \R_{>0}^n$,~$f(d) \le w^T \ell$
for any~$\ell \in \R^n$ such that~$\ell \ge (\operatorname{diag}(d))^{-1} (A - \eta I) \operatorname{diag}(d)\mathbbm{1}_n$. Furthermore, as~$d \in \R_{>0}^n$,
\begin{align*}
& (\operatorname{diag}(d))^{-1} (A - \eta I) \operatorname{diag}(d) \mathbbm{1}_n \le \ell \\
& \iff (A - \eta I) d \le \operatorname{diag}(d)\ell = \operatorname{diag}(\ell) d \\
& \iff (A - \operatorname{diag}(\ell))d \le \eta d.
\end{align*}
We conclude that~\eqref{eq:optim_stab_ww} can be rewritten as
\begin{equation}\label{eq:optim_stab_aux2}
\begin{aligned}
\min_{\substack{\ell \in \R^n\\d \in \R_{>0}^n} } \quad & w^T \ell, \\
\mathrm{s.t.} \quad & (A - \operatorname{diag}(\ell))d \le \eta d,
\end{aligned}
\end{equation}
and optimal solutions to~\eqref{eq:optim_stab_aux2} must satisfy the equality~$(A - \operatorname{diag}(\ell))d = \eta d$. Then, by Thm.~\ref{thm:diag_balance},~$(\ell^*,d^*)$ is an optimal solution to~\eqref{eq:optim_stab_aux2}. Since~$A - \operatorname{diag}(\ell^*)$ is Metzler and irreducible,~$(A - \operatorname{diag}(\ell^*))d^* = \eta d^*$ implies that~$\eta$ is the spectral abscissa of~$A - \operatorname{diag}(\ell^*)$ and~$d^*$ is a Perron eigenvector. This proves statement~1).
By Lemma~\ref{lem:olshevsky_margin_stab},~\eqref{eq:optim_stab_aux2} is equivalent to~\eqref{eq:optim_stab}. Thus,~$\ell^*$ is an optimal solution to~\eqref{eq:optim_stab}, and it is unique by the third statement in Thm.~\ref{thm:diag_balance}. This proves statement~2).
Finally, statement~3) follows from the definition of~$\ell^*$ and the fact that $A - \eta I$ is non-negative and irreducible.
\end{proof}
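The entire construction of Theorem~\ref{thm:olshevsky_stab_bal} fits in a few lines of code: balance~$\operatorname{diag}(w)A$, read off~$\ell^*$ from~\eqref{eq:opt_cont}, and verify that~$(A-\operatorname{diag}(\ell^*))d^*=\eta d^*$ and that perturbing~$d^*$ cannot lower the cost~$w^T\ell$. The sketch below is ours, on an arbitrary small example, and uses a naive coordinate-descent balancing iteration rather than a state-of-the-art solver.

```python
import math

def balance(M, sweeps=500):
    # naive coordinate-descent balancing of a Metzler matrix
    n = len(M)
    d = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            r = sum(M[i][j] * d[j] / d[i] for j in range(n) if j != i)
            c = sum(M[j][i] * d[i] / d[j] for j in range(n) if j != i)
            d[i] *= math.sqrt(r / c)
    return d

# data: Metzler irreducible A, positive weights w, target abscissa eta
A = [[-1.0, 2.0, 1.0],
     [ 1.0,  0.0, 3.0],
     [ 2.0,  1.0, -2.0]]
w = [1.0, 2.0, 0.5]
eta = -0.5
n = len(A)

wA = [[w[i] * A[i][j] for j in range(n)] for i in range(n)]
d = balance(wA)                      # d* balancing diag(w) A

def ell_of(dv):
    # row sums of D^{-1} A D, minus eta (the theorem's formula for l*)
    return [sum(A[i][j] * dv[j] / dv[i] for j in range(n)) - eta
            for i in range(n)]

ell = ell_of(d)

# statement 1: (A - diag(l*)) d* = eta d*, so alpha = eta, eigenvector d*
res = [sum(A[i][j] * d[j] for j in range(n)) - ell[i] * d[i] - eta * d[i]
       for i in range(n)]
assert all(abs(x) < 1e-9 for x in res)

# statement 2: any other d yields a feasible l with no smaller cost w^T l
cost = sum(w[i] * ell[i] for i in range(n))
d2 = [d[0] * 1.3, d[1] * 0.8, d[2]]
cost2 = sum(w[i] * ell_of(d2)[i] for i in range(n))
assert cost <= cost2 + 1e-7
```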
\begin{Example}
Consider the controlled two-dimensional flow system
$
\dot x=A x -\operatorname{diag}(\ell_1,\ell_2) x,
$
where~$A:=\begin{bmatrix}
-1&1\\1&-1
\end{bmatrix} f$.
Here~$f>0$ models the flow rate between two nodes.
Since $A-\eta I_2\geq0$ holds for any~$\eta\leq -f$, we set~$\eta:=-(f+\varepsilon)$, with~$\varepsilon\geq 0$. Consider the optimization problem~\eqref{eq:optim_stab} with~$w:=\begin{bmatrix}
1&w_2
\end{bmatrix}^T$, where~$w_2>0$. Then~$\operatorname{diag}(w)A=\begin{bmatrix}
-1&1\\w_2&-w_2
\end{bmatrix}f$, and
$
(\operatorname{diag}(d))^{-1} \operatorname{diag}(w)A \operatorname{diag}(d)
$
is balanced for~$d=\begin{bmatrix}1&\sqrt{w_2}
\end{bmatrix}^T$, so~\eqref{eq:opt_cont} gives
\begin{equation}\label{eq:lopt}
\ell^* = \begin{bmatrix}
\sqrt{w_2} f+\varepsilon & \frac{1}{\sqrt{w_2}} f+\varepsilon
\end{bmatrix} ^T.
\end{equation}
The closed-loop system is then
$\dot x=A_c x$, with
\begin{align*}
A_c&:=A-\operatorname{diag}(\ell^*)\\
&=\begin{bmatrix}
-\varepsilon-(1+\sqrt{w_2} ) f & f\\
f& -\varepsilon-( 1+\frac{1}{\sqrt{w_2}} )f
\end{bmatrix}.
\end{align*}
The eigenvalues of~$A_c$ are~$-(f+\varepsilon)$ and~$-(f+\varepsilon) -(\sqrt{w_2}+\frac{1}{\sqrt{w_2}})f$, so~$\alpha(A_c)=\eta$.
Note that~\eqref{eq:lopt} implies the following. \begin{enumerate}
\item
If~$w_2\ll 1$
(i.e., the cost function is~$w^T \ell \approx \ell_1$)
then~$\ell^*\approx
\begin{bmatrix}
\varepsilon & \frac{1}{\sqrt{w_2}} f
\end{bmatrix}^T$.
\item If~$w_2=1$ (i.e., the cost function is~$w^T \ell = \ell_1+\ell_2$)
then~$\ell^*=
\begin{bmatrix}
f +\varepsilon& f+\varepsilon
\end{bmatrix}^T.$
\item
If~$w_2\gg 1 $
(i.e., the cost function is~$w^T \ell \approx w_2 \ell_2$)
then~$\ell^*\approx
\begin{bmatrix}
\sqrt{w_2} f & \varepsilon
\end{bmatrix}^T.$
\end{enumerate}
\end{Example}
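The eigenvalue computation in the example is easy to confirm numerically. The following Python sketch (with illustrative values for~$f$, $w_2$, and~$\varepsilon$; not part of the analysis above) builds the closed-loop matrix from the gains in~\eqref{eq:lopt} and checks that its spectral abscissa equals~$\eta$:

```python
import numpy as np

# Illustrative check of the two-node flow example; f, w2, eps are arbitrary.
f, w2, eps = 2.0, 4.0, 0.1
eta = -(f + eps)                        # target spectral abscissa
A = f * np.array([[-1.0, 1.0],
                  [1.0, -1.0]])
# optimal gains ell* from the closed-form expression (eq:lopt)
ell = np.array([np.sqrt(w2) * f + eps, f / np.sqrt(w2) + eps])
Ac = A - np.diag(ell)
alpha = max(np.linalg.eigvals(Ac).real)  # spectral abscissa of closed loop
```

Here \texttt{alpha} equals $\eta=-(f+\varepsilon)$ up to rounding, and the second eigenvalue equals $-(f+\varepsilon)-(\sqrt{w_2}+1/\sqrt{w_2})f$, matching the computation above.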
\section{Main results}\label{sec:main}
We now combine the ideas above to provide a novel, simple and efficient algorithm for
finding local controllers guaranteeing contraction in a networked system. We begin with several auxiliary results.
Ref.~\cite{Russo2013hier_contraction} used Thm.~\ref{thm:strom} to derive a hierarchical approach to contraction. This approach requires finding a monotonic norm under which a certain nonlinear system is contracting. This is hard to do in general, as the monotonic norm has to induce a matrix measure under which the Jacobian is negative at every point in the state space. This approach was further simplified in~\cite{Davydov2021noneuclidean}, which derived a stronger sufficient condition (i.e., one that is applicable for a smaller family of systems) which instead involves checking whether a certain \emph{constant}
Metzler matrix is Hurwitz. In~\cite{Davydov2021noneuclidean} this result was stated in terms of one-sided Lipschitz constants. Here we state and prove this result using matrix measures instead. The first step is to remove the dependency of~$B$ on~$t$ and~$x$ and replace it with a constant matrix. To do so, we will make use of the fact that~$B$ is Metzler by construction, and that $|\cdot|_0$ is a monotonic norm.
\begin{Proposition}\label{prop:measure_monotone}
Let~$|\cdot|_0:\R^n\to \R_+$ be a monotonic vector norm, and let~$||\cdot||_0:\R^{n\times n }\to \R_+$ and~$\mu_0:\R^{n\times n}\to \R$ denote the induced matrix norm and matrix measure.
If~$A,B \in \R^{n \times n}$ are Metzler
and~$A \le B$ then
$
\mu_0(A) \le \mu_0(B)
$.
\end{Proposition}
\begin{proof}
Since $A \le B$, we have
$
I + hA \le I + hB
$ for any~$h \geq 0$.
Furthermore, since~$A$ and~$B$ are Metzler, we have that~$I + hA$ and~$I + hB$ are non-negative matrices
for any~$h>0$ sufficiently
small.
By~\cite[Thm.~4]{Bauer1961_mono_norms},
$|| I + h A ||_0\leq || I + hB ||_0$, and using the definition of the matrix measure completes the proof.
\end{proof}
Consider now the networked system~\eqref{eq:network}. Construct $B$ from the blocks $J^{ij}$ of its Jacobian. Define a \emph{constant} matrix~$\hat{J} \in \R^{m\times m}$ by
\begin{equation}\label{eq:J_sup}
\hat{J}_{ij} := \begin{cases}
\sup_{\substack{x \in \Omega\\t \ge 0}} \mu_i(J^{ii}(t,x)), & i=j, \\
\sup_{\substack{x \in \Omega\\t \ge 0}} \|J^{ij}(t,x)\|_{ij}, & i \neq j.
\end{cases}
\end{equation}
By construction, $\hat{J}$ is Metzler and $B(t,x) \le \hat{J}$ for all $t \ge 0$ and $x \in \Omega$. This leads to the following result.
\begin{Proposition}\label{prop:strom_contraction}
Let $\mu$ denote the matrix measure induced by the norm defined in~\eqref{eq:hier_norm}.
Fix~$\varepsilon>0$. There exists a monotonic norm~$|\cdot|_0$ with induced matrix measure $\mu_0(\cdot)$ such that
\begin{equation}\label{eq:mono_norm_abscissa}
\mu(J(t,x)) \leq \mu_0(\hat{J}) \le \alpha(\hat{J}) + \varepsilon, \text{ for all } t\geq0,x\in\Omega.
\end{equation}
In particular, if $\hat{J}$ is Hurwitz then the network~\eqref{eq:network} is contracting with rate~$\alpha(\hat{J}) + \varepsilon$.
If in addition $\hat{J}$ is irreducible, then~\eqref{eq:mono_norm_abscissa}
holds with $\varepsilon=0$.
\end{Proposition}
\begin{proof}
Eq.~\eqref{eq:mono_norm_abscissa} follows from~\cite[Thm.~2]{Strom1975}. The fact that the system is contracting if $\hat{J}$ is Hurwitz then follows by Thm.~\ref{thm:strom}.
If~$\hat{J}$ is irreducible (and Metzler) the Perron-Frobenius Theorem~\cite[Thm.~8.4.4]{Horn2013matanalysis} implies that~$\alpha(\hat{J})$ is a simple eigenvalue of $\hat{J}$. Now~\cite[Thm.~3]{Strom1975} implies that~\eqref{eq:mono_norm_abscissa} holds also for~$\varepsilon=0$.
\end{proof}
We now apply Prop.~\ref{prop:strom_contraction} to obtain a sufficient condition for contraction in the networked system~\eqref{eq:network}. Since~$\mu(A + \alpha I) = \mu(A) + \alpha$ for any matrix measure and any~$\alpha \in \R$, Eq.~\eqref{eq:jij} gives
$
\mu(J^{ii}) = \mu(\frac{\partial f^i}{\partial x^i}) - u_i.
$
Therefore, a sufficient condition for~\eqref{eq:network} to be contracting is that the Metzler matrix $\hat{J} - \operatorname{diag}(u)$ is Hurwitz. Combining this with Thm.~\ref{thm:olshevsky_stab_bal} yields the following result for determining an upper bound on the effort required to guarantee that~\eqref{eq:network} is contracting.
\begin{Theorem}\label{thm:net_contract}
Consider the networked system~\eqref{eq:network} and define~$\hat J\in\R^{m\times m}$ as in~\eqref{eq:J_sup}. Suppose that~$\hat J$ is irreducible. Fix~$w \in \R_{>0}^m$ and $\eta > 0$ such that $\hat J + \eta I_m \ge 0$.
Then there exists a~$d \in \R_{>0}^m$ such that $(\operatorname{diag}(d))^{-1} \operatorname{diag}(w) \hat{J} \operatorname{diag}(d)$ is balanced, and
\begin{equation}\label{eq:opt_controller_contract}
v ^* := (\operatorname{diag}(d))^{-1} \hat{J} \operatorname{diag}(d) \mathbbm{1}_m + \eta \mathbbm{1}_m
\end{equation}
is the optimal solution to
\begin{equation}\label{eq:suboptimal_contract}
\begin{aligned}
\min_{v \in \R_{>0}^m} \quad & w^T v, \\
\mathrm{s.t.} \quad & \alpha(\hat{J} - \operatorname{diag}( v )) \le -\eta.
\end{aligned}
\end{equation}
Furthermore, $v ^*$ is a feasible solution of~\eqref{eq:opt_contract}.
\end{Theorem}
Thm.~\ref{thm:net_contract} shows that finding local controllers guaranteeing that~\eqref{eq:network} is contracting with rate~$\eta$ can be done by diagonally balancing an upper-bound of the reduced order Jacobian of the system. Since diagonal balancing can be solved efficiently, this approach is useful even in the case of large-scale systems.
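As an illustration of this step, the following Python sketch (not from the paper; the matrix, weights, and margin are arbitrary illustrative values) balances $\operatorname{diag}(w)\hat J$ by a cyclic Osborne-type iteration, which performs exact coordinate minimization of the convex balancing objective, and then forms~$v^*$ as in~\eqref{eq:opt_controller_contract}:

```python
import numpy as np

def balance(M, sweeps=200):
    # Cyclic Osborne-type iteration: exact coordinate minimization of the
    # convex objective sum_{i != j} M_ij d_j / d_i over d > 0.
    n = M.shape[0]
    d = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            R = sum(M[i, j] * d[j] for j in range(n) if j != i)
            C = sum(M[j, i] / d[j] for j in range(n) if j != i)
            d[i] = np.sqrt(R / C)   # equalizes row sum i and column sum i
    return d

# An arbitrary Metzler, irreducible example (illustrative values only).
J_hat = np.array([[-3.0, 1.0, 0.5],
                  [2.0, -4.0, 1.0],
                  [0.3, 0.7, -2.0]])
w = np.array([1.0, 2.0, 1.0])
eta = 4.5                           # chosen so that J_hat + eta*I >= 0
d = balance(np.diag(w) @ J_hat)
v_star = ((J_hat * d[None, :]) / d[:, None]).sum(axis=1) + eta
alpha = max(np.linalg.eigvals(J_hat - np.diag(v_star)).real)
```

By construction \texttt{alpha} equals $-\eta$: after the diagonal similarity, the closed-loop matrix has all row sums equal to~$-\eta$, so $-\eta$ is its Perron eigenvalue.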
In certain cases, Thm.~\ref{thm:net_contract} may in fact yield the optimal solution to~\eqref{eq:opt_contract}. Consider the networked system~\eqref{eq:network} and suppose that all subsystems are scalar i.e.,~$n_i = 1$ for all~$i$. If there exists~$x \in \Omega$ such that~$J(x) = \hat J$ and $\hat J$ is irreducible, then~$v^*$ in~\eqref{eq:opt_controller_contract} gives the minimal controller which guarantees contraction with rate $\eta$ under \emph{any} constant norm. Indeed, a necessary condition for contraction with respect to a constant norm in such systems is that~$\hat J - \operatorname{diag}(u)$ is Hurwitz. Thm.~\ref{thm:olshevsky_stab_bal} guarantees that the optimal controller stabilizing~$\hat J$ is~$ v^*$ in~\eqref{eq:opt_controller_contract}, and Prop.~\ref{prop:strom_contraction} guarantees that this controller also achieves contraction. Note that a special case of such a system is an irreducible positive~LTI system.
\begin{Example}
Consider the network system~\eqref{eq:network}, and suppose that $\frac{\partial f^i}{\partial x^j} \equiv 0$ for all $i,j$ such that $|i-j|>1$, so~$\hat J$ is a tridiagonal Metzler matrix. Assume in addition that $\hat J$ is irreducible, and let~$\eta > 0$ be such that $\hat J + \eta I_m \ge 0$. By Prop.~\ref{prop:tridiag_balance} and Thm.~\ref{thm:net_contract}, the optimal controller~$v^* $ for~\eqref{eq:suboptimal_contract} is
\[
v^*_i = \eta +\hat J_{ii}+ \begin{cases}
\sqrt{\hat J_{1,2} \hat J_{2,1}}, & i = 1, \\
\sqrt{\hat J_{i,i+1} \hat J_{i+1,i}} + \sqrt{\hat J_{i-1,i} \hat J_{i,i-1}}, & 1<i<m, \\
\sqrt{\hat J_{m-1,m} \hat J_{m,m-1}}, & i = m,
\end{cases}
\]
and~$v^* \in \R_{>0}^m$.
This can be explained as follows.
The optimal control~$v^*_i$ amounts to ``canceling'' the diagonal term~$\hat J_{ii}$, ``canceling'' the effect of its four ``neighbours''~$\hat J_{i,i+1}$, $\hat J_{i+1,i}$, $\hat J_{i-1,i}$, $\hat J_{i,i-1}$ via their geometric means, and then adding the contraction margin~$\eta$.
\end{Example}
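This closed-form controller is easy to check numerically. The following Python sketch (random tridiagonal data, chosen so that $\hat J + \eta I_m \ge 0$; illustrative and not part of the text) verifies that $\alpha(\hat J - \operatorname{diag}(v^*)) = -\eta$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, eta = 6, 1.0
# random irreducible tridiagonal Metzler matrix with J + eta*I >= 0
J = np.diag(-rng.uniform(0.5, 1.0, m))
J += np.diag(rng.uniform(0.1, 1.0, m - 1), 1)
J += np.diag(rng.uniform(0.1, 1.0, m - 1), -1)
# geometric means of the off-diagonal pairs
sq = np.sqrt(np.diag(J, 1) * np.diag(J, -1))
v = eta + np.diag(J)
v[:-1] += sq        # neighbour term sqrt(J_{i,i+1} J_{i+1,i})
v[1:] += sq         # neighbour term sqrt(J_{i-1,i} J_{i,i-1})
alpha = max(np.linalg.eigvals(J - np.diag(v)).real)
```

The similarity transform that balances~$J$ turns $J - \operatorname{diag}(v)$ into a Metzler matrix whose row sums all equal~$-\eta$, so \texttt{alpha} equals~$-\eta$ up to rounding.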
\section{An application}\label{sec:appli}
We apply our approach to design local controllers in a network of
FHN neurons that was studied using hierarchical contraction in~\cite{Russo2013hier_contraction}.
The FHN model is a simplified 2D version of the detailed Hodgkin–Huxley model
for the
activation and deactivation dynamics of a spiking neuron.
We derive sufficient conditions under which the network is contractive, and thus entrains to periodic inputs. This application in fact shows that our approach does not necessarily require
considering a Metzler matrix based on hierarchical contraction, but can also be applied in other cases.
The network consists of~$N$ neurons, each modeled according to the FHN model
\begin{equation}\label{eq:fi_neuron}
\begin{aligned}
\dot v_i &= c \left(v_i + w_i - \frac{1}{3}v_i^3 + r(t) \right) + h_i(v), \\
\dot w_i &= - (v_i - a + b w_i)/c,
\end{aligned}
\end{equation}
for $i = 1,\dots,N$, where $v_i$ denotes the membrane voltage, $w_i$ is a recovery variable, $r(t)$ is an external input current, and~$v:=\begin{bmatrix}
v_1&\dots&v_N\end{bmatrix}^T$. Here~$a,b\geq 0$ and~$c>0$. The function~$h_i(v)$ describes a connection term:
\begin{equation}\label{eq:hivterm}
h_i(v) = \gamma \sum_{j\in\mathcal{N}_i}(v_j - v_i) - \ell_i v_i,
\end{equation}
where~$\gamma>0$, $\mathcal{N}_i$ is the set of neighbours of neuron~$i$, and~$\ell_i>0$ is the gain of an additional local control term, which we will determine next such that the network is contractive.
Let~$x:=\begin{bmatrix}
v_1&\dots&v_N&w_1&\dots& w_N
\end{bmatrix}^T$. Then the
Jacobian of the dynamics
is
\begin{equation}
J(x) = \begin{bmatrix}
J^{11}(v) - \operatorname{diag}(\ell) & cI_N \\
- I_N /c & -bI_N/c
\end{bmatrix},
\end{equation}
where~$J^{11}(v) := c I_N - c (\operatorname{diag}(v))^2 - \gamma L$, and~$L\in\R^{N \times N}$ is the Laplacian of the graph describing the interactions between the neurons, that is,
\[
L_{ij} = \begin{cases}
|\mathcal{N}_i| , & i = j, \\
-1 , & i \neq j \text{ and } j\in \mathcal{N}_i, \\
0 , & \text{otherwise}.
\end{cases}
\]
The matrix~$J(x)$ is not Metzler, but rather than applying Thm.~\ref{thm:net_contract} at this point, we will use the fact that $J(x)$ can be transformed to a skew-symmetric form to guarantee contraction under a scaled~$L_2$ norm. Let
$
T := \begin{bmatrix}
I_N & 0 \\
0 & cI_N
\end{bmatrix},
$
and define a scaled~$L_2$ norm by:~$|x|_{2,T} := |Tx|$. Then,
\begin{align*}
\mu_{2,T}(J ) &= \mu_2(T J T^{-1}) \\
&= \mu_2\left(\begin{bmatrix}
J^{11}(v) - \operatorname{diag}(\ell) & I_N \\
-I_N & - {b} I_N/c
\end{bmatrix}\right) \\
&= \mu_2\left(\begin{bmatrix}
S(v) - \operatorname{diag}(\ell) & 0 \\
0 & - {b} I_N/c
\end{bmatrix}\right) \\
&= \max\{\mu_2(S(v)- \operatorname{diag}(\ell)), - {b}/ c \},
\end{align*}
where~$S(v):=(J^{11}(v)+(J^{11}(v))^T)/2$ is the symmetric part of~$J^{11}(v)$.
Since~$S(v)$ is Metzler for any~$v$, Prop.~\ref{prop:measure_monotone} gives
\[
\mu_2(S(v)) \le \mu_2(\hat J^{11}),\text{ for all } v,
\]
where~$\hat J^{11} := S(0)= cI_N - \gamma (L + L^T)/2$.
Therefore, a sufficient condition for contraction with rate~$\eta \in [0, {b}/{c}]$ w.r.t. the scaled~$L_2$ norm $|\cdot|_{2,T}$ is that $\mu_2(\hat J^{11} - \operatorname{diag}(\ell)) = -\eta$. Furthermore, since $\hat J^{11} - \operatorname{diag}(\ell)$ is symmetric, $\mu_2(\hat J^{11} - \operatorname{diag}(\ell)) = \alpha(\hat J^{11} - \operatorname{diag}(\ell))$. For any~$\eta \ge \gamma\max_i \{L_{ii}\} - c$, the matrix~$\hat J^{11} + \eta I_N$ is non-negative, so by Lemma~\ref{lem:olshevsky_margin_stab} and Thm.~\ref{thm:net_contract} the minimal~$\ell$ guaranteeing that~$\hat J^{11} - \operatorname{diag}(\ell)$
is Hurwitz with~$\alpha(\hat J^{11} - \operatorname{diag}(\ell)) = -\eta$ (and thus the network is contractive with rate~$\eta$) is
\begin{align}\label{eq:fhn_opt}
\ell^* &= (c+\eta) \mathbbm{1}_N - \frac{\gamma}{2} (L+L^T) \mathbbm{1}_N \nonumber \\
& = (c+ \eta) \mathbbm{1}_N - \frac{\gamma}{2} L^T \mathbbm{1}_N .
\end{align}
Note that for any~$i$ the required
control effort~$\ell_i^*$ decreases with~$\gamma \sum_{j\neq i} (L_{ji} - L_{ij})$ (i.e., when the connections are stronger or when neuron~$i$ is fed by more neurons or feeds fewer neurons).
The control effort increases with~$c$, as a larger~$c$ means that~\eqref{eq:fi_neuron} is less stable, and with the required rate of contraction~$\eta$. Also, if the interconnection is symmetric then
\begin{equation}\label{eq:lstar_homog}
\ell^* = (c + \eta)\mathbbm{1}_N,
\end{equation}
so the optimal controller is independent of the network topology. This is not surprising, as~$-L$ is marginally stable if the network is symmetric, so all that is needed to stabilize the system is to ``cancel'' the unstable effect of~$c$. When~\eqref{eq:lstar_homog}
holds, Eqs.~\eqref{eq:fi_neuron} and~\eqref{eq:hivterm}
imply that the diagonal set~$\{x: v_i=v_j , w_i=w_j \text{ for all } i,j\}$ is an invariant set of the closed-loop network.
Since the network is also contractive,
this implies that the neurons do not only entrain, but also synchronise, that is,
$
v(t) \to
\beta_1(t) \mathbbm{1}_N
$
and
$w(t)\to
\beta_2(t) \mathbbm{1}_N
$,
where every~$\beta_i(t)$ is a scalar~$T$-periodic function.
Fig.~\ref{fig:entrainment} depicts the membrane voltage of three of the neurons (to avoid cluttering) in the non-symmetric network of six neurons used in~\cite[Fig.~1]{Russo2013hier_contraction} with
the~$T$-periodic input~$r(t) = 4 + 4\sin(2\pi t)$, for~$T=1$, and the controller~$\ell^*$ in~\eqref{eq:fhn_opt}. It may be seen that all the neurons entrain to the periodic input.
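As a complement to the simulation, the gain formula~\eqref{eq:fhn_opt} can be checked numerically. The following Python sketch (using the adjacency matrix of the six-neuron network and the parameter values from the figure caption) verifies that $\alpha(\hat J^{11} - \operatorname{diag}(\ell^*)) = -\eta$:

```python
import numpy as np

# Six-neuron interconnection used in the entrainment example; parameters
# as in the figure caption (b=2, c=6, gamma=0.05, eta=0.05).
M = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 0]], dtype=float)
L = np.diag(M.sum(axis=1)) - M          # graph Laplacian
c, gamma, eta = 6.0, 0.05, 0.05
N = M.shape[0]
ell = (c + eta) * np.ones(N) - gamma / 2 * L.T @ np.ones(N)  # eq. (fhn_opt)
S = c * np.eye(N) - gamma / 2 * (L + L.T)                    # \hat J^{11}
alpha = max(np.linalg.eigvalsh(S - np.diag(ell)))
```

Since $L\mathbbm{1}_N = 0$, the matrix $S - \operatorname{diag}(\ell^*)$ has all row sums equal to $-\eta$, so its largest eigenvalue is exactly~$-\eta$ with eigenvector~$\mathbbm{1}_N$.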
\begin{figure}
\centering
\includegraphics[scale=0.8]{Figures/entrain1.pdf}
\caption{Entrainment in a network of FHN neurons with the
interconnection topology in~\cite{Russo2013hier_contraction}. The parameters are $a=0,b=2,c=6,\gamma=0.05,\eta=0.05$, and~$\ell^*$ as in~\eqref{eq:fhn_opt}.}
\label{fig:entrainment}
\end{figure}
\begin{comment}
% MATLAB script used to generate the simulations and Fig.~\ref{fig:entrainment}.
gamma = 0.05;
c = 6; a = 0; b = 2;
M = [0 1 1 0 0 0;
1 0 0 0 1 1;
0 1 0 1 0 0;
0 0 1 0 0 0;
0 1 0 0 0 0;
0 0 0 0 1 0];
L = diag(sum(M,2)) - M;
N = size(M,1);
w0 = rand(N,1)*8 - 4;
v0 = rand(N,1)*8 - 4;
eta = 0.05;
ell = c*ones(N,1) - gamma/2*L.'*ones(N,1) + eta*ones(N,1);
all(c*eye(N) - gamma/2*(L + L.') + eta*eye(N) >= 0, 'all')
max(real(eig(c*eye(N) - gamma/2*(L + L.') - diag(ell))))
max(real(eig([ c*eye(N) - gamma/2*(L + L.') - diag(ell), c*eye(N);
-1/c*eye(N), -b/c*eye(N)])))
r0 = @(t) 0;
[out.t,out.y] = ode45(@(t,y) FHN(t,y,N,gamma,a,b,c,L,r0,ell), [0 15], [v0;w0]);
out.v = out.y(:,1:N);
out.w = out.y(:,N+1:end);
T = [eye(N) , zeros(N,N);
zeros(N,N) , c*eye(N) ];
figure();
plot(out.t, vecnorm(T*[out.v,out.w].')); hold on;
plot(out.t, vecnorm(T*[out.v(1,:),out.w(1,:)].')*exp(-eta*out.t), ':k');
legend({'$|x|_{2,T}$', '$e^{-\eta t} |x(0)|_{2,T}$'},'Interpreter','latex');
ylabel('$|x(t)|_{2,T}$', 'Interpreter', 'latex');
xlabel('$t$', 'Interpreter', 'latex');
title('State contraction in scaled L2 norm');
rspike = @(t) interp1([0 0.2 0.3 0.5 1], [0 3 9 0 0], mod(t,1));
rsin = @(t) 4 + 4*sin(2*pi*t);
[out.t,out.y] = ode45(@(t,y) FHN(t,y,N,gamma,a,b,c,L,rsin,ell), [0 25], [v0;w0]);
out.v = out.y(:,1:N);
out.w = out.y(:,N+1:end);
out.r = rsin(out.t);
figure();
subplot(2,1,1);
[~,l] = min(v0);
[~,h] = max(v0);
[~,m] = min(abs(v0 - (v0(l) + v0(h))/2));
v = out.v(:,sort([l,h,m]));
writematrix([out.t, v], 'membrane_voltage.csv');
plot(out.t, v);
ylabel('$v(t)$', 'Interpreter', 'latex');
xlabel('$t$', 'Interpreter', 'latex');
legend(compose('v_%d', sort([l,h,m])), 'Interpreter', 'latex');
title('Membrane voltage in response to excitation');
subplot(2,1,2);
writematrix([out.t, out.r], 'excitation.csv');
plot(out.t, out.r);
ylabel('$r(t)$', 'Interpreter', 'latex');
xlabel('$t$', 'Interpreter', 'latex');
title('Excitation current');
function dydt = FHN(t,y,N,gamma,a,b,c,L,r,ell)
dydt = [c*(y(1:N) + y(N+1:end) - 1/3*y(1:N).^3 + r(t)) - gamma*L*y(1:N) - ell.*y(1:N);
-1/c*(y(1:N) - a + b*y(N+1:end))];
end
\end{comment}
\section{Conclusion}
We considered the problem of efficiently designing local controllers which guarantee that a large-scale network becomes contractive, while keeping the total control effort minimal.
We addressed this problem by first
attaining a constant Metzler matrix~$B$
such that making~$B$ Hurwitz implies contractivity of the network, and then
using an efficient algorithm, based on matrix balancing, for determining the minimal diagonal perturbation making~$B$ Hurwitz~\cite{Ma2022optimal}.
Matrix balancing is a well studied topic with many
generalizations~\cite{ideal2016,Eaves1985}. It may be interesting to use these generalizations to derive more
general versions of the optimization problem~\eqref{eq:optim_stab}.
Another direction for further research is
to consider generalized versions of~\eqref{eq:opt_contract} which require contraction with respect to a space- and time-dependent norm, rather than a constant norm.
\subsection*{Appendix: Proof of Thm.~\ref{thm:diag_balance}}
First,~$f(d)$ is homogeneous of degree zero, i.e., $f(c d) = f(d)$ for any $c > 0$, so we may restrict our attention to the set~$\mathring{\Delta} := \{d \in \R_{>0}^n \, | \, \sum_i d_i = 1\}$. Consider the optimization problem
\begin{equation}
\begin{aligned}
\min_{d \in \R_{>0}^n} \quad & f(d), \\
\mathrm{s.t.} \quad & \sum_i d_i = 1.
\end{aligned}
\end{equation}
Fix a vector~$\mathring{d}$ in the boundary of~$\mathring{\Delta}$, that is, $\mathring{d} \in \R_{\geq 0}^n$ with $\sum_i \mathring{d}_i = 1$ such that the set~$Z$ of indexes~$i$ with~$\mathring{d}_i=0$ is not empty. Then~$\bar Z :=\{1,\dots,n\} \setminus Z$ is also non-empty. Since~$A$ is irreducible, there exist~$i \in Z$ and~$j \in \bar Z$ such that~$a_{ij} > 0$. Then
$
f(d) \ge \operatorname{trace}(A) + a_{ij} {d_j} d_i^{-1},
$
so~$\lim_{d \to \mathring{d}} f(d) = \infty$. Since~$f$ is continuous in~$\mathring{\Delta}$, it attains a minimal value there. This proves the assertion in~1).
To prove the second assertion, note that since $d\in\R^n_{>0}$, we can define a vector~$g\in \R^n$ by~$g_i := \ln(d_i), i=1,\dots,n$. Then~\eqref{eq:minfd} can be rewritten as
\begin{equation}\label{eq:optim_balance_conv}
\min_{g \in \R^n} \tilde{f}(g),
\end{equation}
where
\begin{align*}
\tilde{f}(g): &= \mathbbm{1}_n^T \exp(-\operatorname{diag}(g)) A \exp(\operatorname{diag}(g)) \mathbbm{1}_n\\
& = \sum_{i,j} a_{ij} \exp(g_j-g_i).
\end{align*}
Since~$a_{ij} \ge 0$ for any~$i \neq j$, $\tilde f$ is a sum of convex functions, so it is convex. Therefore,~\eqref{eq:optim_balance_conv} is convex and unconstrained, so the minimum is achieved at any point~$g^*$ where the gradient~$\frac{\partial }{\partial g} \tilde f (g^*)$ vanishes. This is equivalent to~$\exp(-\operatorname{diag}(g^*)) A\exp( \operatorname{diag}(g^*))
=(\operatorname{diag}(d^*))^{-1} A \operatorname{diag}(d^*)
$ being a balanced matrix.
To prove the third assertion, let~$p,q\in\R^n$, with~$p \not = q$, be two minimizers of~\eqref{eq:optim_balance_conv}. Define~$v(\varepsilon):=\varepsilon p+(1-\varepsilon)q$. Then
\begin{align*}
\frac{ d^2 }{d \varepsilon^2}\tilde f(v(\varepsilon)) &= \sum_{i\neq j}
a_{ij} ( p_j-q_j+q_i-p_i )^2 \exp(
v_j(\varepsilon)-v_i(\varepsilon)) \\
&=\sum_{ i\neq j}
a_{ij} ( r_j-r_i )^2 \exp(
v_j(\varepsilon)-v_i(\varepsilon))\\
&\geq 0 ,
\end{align*}
where~$r:=p-q$.
If~$ \sum_{i \neq j}
a_{ij} ( r_j-r_i )^2>0$ then~$\frac{ d^2 }{d \varepsilon^2}\tilde f(v(\varepsilon))>0$ for all~$\varepsilon\in[0,1]$, so~$\tilde f(v(1/2))< \tilde f(v(0))$, which is a contradiction. We conclude that
\begin{equation}\label{eq:allzerp}
\sum_{i,j}
a_{ij} ( r_j-r_i )^2=0.
\end{equation}
Hence, by~\eqref{eq:allzerp}, $r_i = r_j$ whenever~$a_{ij}>0$ with~$i\neq j$; since~$A$ is irreducible, at least one such pair exists. Thus there exists a maximal set of indexes~$I\subseteq \{1,\dots,n\} $, with~$ |I|\geq 2$,
such that~$r_{i_1}=r_{i_2}$ for any~$i_1,i_2 \in I$, and~$r_{i}\not = r_{j} $ for any~$i\in I,j\in\bar I:=\{1,\dots,n\}\setminus I $.
Suppose that~$\bar I$ is not empty. Since~$A$ is irreducible and Metzler, there exist~$i\in I$ and~$j\in \bar I$ such that~$a_{ij}>0$, and this contradicts~\eqref{eq:allzerp}.
Thus,~$I=\{1,\dots,n\}$. Hence, if~$p\not = q$ are two minimizers of~$\tilde{ f}$ then~$p=q+c \mathbbm{1}_n$ for some~$c\not =0$.
This completes the proof of Thm.~\ref{thm:diag_balance}.
\subsection*{Acknowledgements} We thank Rami Katz and Chengshuai Wu for helpful comments.
\section{Introduction}
The complexity of a natural number $n$ is the least number of $1$'s needed to
write it using any combination of addition and multiplication, with the order of
the operations specified using parentheses grouped in any legal nesting. For
instance, $11$ has complexity of $8$, since it can be written using $8$ ones as
$(1+1+1)(1+1+1)+1+1$, but not with any fewer. This notion was introduced by
Kurt Mahler and Jan Popken in 1953 \cite{MP}. It was later circulated by
Richard Guy \cite{Guy}, who includes it as problem F26 in his \emph{Unsolved
Problems in Number Theory} \cite{UPINT}. It has since been studied by a number
of authors, e.g. Rawsthorne \cite{Raws} and especially Juan Arias de Reyna
\cite{Arias}.
Following Arias de Reyna \cite{Arias} we will
denote the complexity of $n$ by $\cpx{n}$. Notice that for any
natural numbers $n$ and $m$ we will have
\begin{equation*}
\cpx{1}=1, \quad
\cpx{n+m}\le \cpx{n}+\cpx{m},\quad
\cpx{nm}\le \cpx{n}+\cpx{m}.
\end{equation*}
More specifically, for any $n>1$, we have
\begin{displaymath}
\cpx{n}=\min_{\substack{a,b \in \mathbb{N},\ a,b<n \\ a+b=n\ \mathrm{or}\ ab=n}}
\cpx{a}+\cpx{b}.
\end{displaymath}
This fact together with $\cpx{1}=1$ allows one to compute $\cpx{n}$ recursively.
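This recursion can be implemented directly. The following Python sketch (illustrative, not from the paper) computes $\cpx{n}$ for all $n$ up to a bound by dynamic programming:

```python
def complexities(N):
    # cpx[n] = integer complexity ||n||, computed from the recursion
    # ||n|| = min over n = a+b or n = a*b of ||a|| + ||b||.
    cpx = [0] * (N + 1)
    cpx[1] = 1
    for n in range(2, N + 1):
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:               # try all factorizations n = d * (n/d)
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

cpx = complexities(200)
print(cpx[11])   # 8, e.g. 11 = (1+1+1)(1+1+1) + 1 + 1
```

For example, the largest $n \le 200$ with $\cpx{n}\le 6$ is $9 = E(6)$, matching the formulas of Selfridge recalled below.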
If the equality $\cpx{n} = \cpx{a}+ \cpx{b}$ holds, with either $n=a+b$ or
$n=ab$, then we will say $n$ can be written \emph{most-efficiently} as $a+b$ or
as $ab$, respectively.
Integer complexity is approximately logarithmic; it satisfies the bounds
\begin{equation*}
3\log_3 n\le \cpx{n} \le 3\log_2 n,\qquad n>1.
\end{equation*}
The upper bound can be obtained by writing $n$ in binary and finding a
representation using Horner's algorithm. The lower bound follows from results
described below. The lower bound is known to be attained infinitely often,
namely for all $n=3^k$. The constant in the upper bound above can be improved further \cite{upbds},
and it is an open problem to determine the true
asymptotic order of magnitude of the upper bound. At present even the
possibility that an asymptotic formula $\cpx{n} \sim 3 \log_3 n$ might hold has
not been ruled out.
Let $E(k)$ be the largest number writable with $k$ ones, i.e., with complexity
at most $k$.
John Selfridge (see \cite{Guy}) proved that $E(1) =1$, and the larger values
depend on the residue class of $k$ modulo $3$, namely for $k=3j +i \ge 2$,
\begin{eqnarray*}
E(3j) &=&3^j\\
E(3j+1) &=& 4 \cdot 3^{j-1} \\
E(3j+2) &= & 2 \cdot 3^j
\end{eqnarray*}
Observe that $E(k)\le 3^{k/3}$ in all cases, and that equality holds for
cases where $3$ divides $k$.
These formulas also show that
$E(k) > E(k-1)$, a fact that implies that the integer
$E(k)$ requires exactly
$k$ ones. This yields the following result:
\begin{thm} \label{th1}
For $a=0, 1,2$ and for all $k \ge 0$ with $a+k \ge 1$, one has
\begin{equation*}\label{3m}
\cpx{2^a \cdot3^k}=2a +3k.
\end{equation*}
\end{thm}
Further results are known on the largest possible integers having a given
complexity. We can generalize the notion of $E(k)$ with the following
definition:
\begin{defn}
Define $E_r(k)$ to be the $(r+1)$-th largest number writable using $k$ ones,
i.e.~
complexity at most $k$, so long as there are indeed $r+1$ or more distinct such
numbers. Thus $E_r(k)$ is defined only for $ k \ge k(r)$.
Here $E_0(k)=E(k)$.
\end{defn}
Daniel A. Rawsthorne \cite{Raws} determined a formula for $E_1(k)$, namely:
\begin{equation*}
E_1(k)=\frac{8}{9} E(k), \qquad k\ge 8
\end{equation*}
Direct computation establishes that
$E_1(k)\le(8/9)E(k)$ holds for all $2 \le k \le 7$ (note that $E_1(1)$ is not
defined). From this fact we deduce that, for $0\le a \le 5$ and all $k \ge 0$
with $a+k>0$,
$$
\cpx{2^a \cdot 3^k}=2a+3k.
$$
J. Iraids et al. \cite{data2}
have verified that $\cpx{2^a 3^k}=2a+3k$ for
$2 \le 2^a \cdot 3^k \le 10^{12}$, so in particular
$$\cpx{2^a}=2a, \quad \mbox{ for} \quad 1\le a\le 39.$$
These results together with results given later in this paper lend
support to the following conjecture, which was originally formulated
as a question in Guy \cite{Guy}.
\begin{conj}\label{cj11}
For all $a \ge 0$ and all $k \ge 0$ with $a+k \ge 1$ there holds
$$
|| 2^a \cdot 3^k || = 2a + 3k.
$$
\end{conj}
This conjecture is presented as a convenient form for summarizing existing
knowledge; there is limited evidence for its truth, and it may well be false.
Indeed its truth would imply $\cpx{2^a} = 2a$, for all $a$.
Selfridge raised this special case in a contrary form,
asking the question whether there is some $a$ for which $\cpx{2^a} < 2a$
(see \cite{Guy}).
In this paper, we will investigate these questions by looking at numbers $n$ for
which the difference $\delta(n):=\cpx{n}-3\log_3 n$ is less than a given
threshold;
these we may call numbers
with integer complexity close to the lower bound.
\subsection{Main Results}
The fundamental issues making the complexity of an integer a complicated
quantity are: (1) it assumes the same value for many integers, because it
is logarithmically small; (2) it is hard to determine lower bounds for a given
value $\cpx{n}$, since
the dynamic programming tree is exponentially large. Feature (1) implies
there can be many tied values in
going down the tree, requiring a very large search to determine any specific
complexity value.
We introduce a new invariant to study integer complexity.
\begin{defn}
The \emph{defect} of a natural number $n$ is given by
\begin{equation*}\label{defd}
\delta(n)=\cpx{n}-3\log_3 n
\end{equation*}
\end{defn}
The introduction of the defect simplifies things in that it provides
a more discriminating invariant: we show that $\delta(n) \ge 0$ and that
it separates integers into quite small equivalence
classes. In these equivalence classes powers of $3$ play a special role.
The following result establishes a conjecture of Arias de Reyna \cite[Conjecture 1]{Arias}.
\begin{thm}\label{power-of-3}
(1) For a given value $\delta$ of the defect, the set
$S(\delta) :=\{ m:~~\delta(m) = \delta\}$, is a
chain $\{ n\cdot 3^k: 0 \le k \le k(n)\}$ where $k(n)$ may be finite or
infinite.
The value $n$ is called the leader of the chain.
(2) The function $\delta( n \cdot 3^k)$ is non-increasing on the
sequence $\{ n \cdot 3^k : \, k\ge 0\}$.
This sequence has a finite number of leaders
culminating in a largest leader $n \cdot 3^L$, having the property that
$$
|| n \cdot 3^k|| = ||n \cdot 3^L|| + 3(k-L), ~~\mbox{for all}~~k \ge L.
$$
\end{thm}
\noindent
The set of integers $n \cdot 3^k$ for $k \ge L$ are termed {\em
stable integers},
because their representation using $1$'s stabilizes into a predictable form
for $k \ge L$. This result is proved in Section \ref{sec21}.
The main results of the paper concern classifying integers having
small values of the defect. The
defect is compatible with the multiplication aspect of the dynamic
programming definition of the integer complexity, but it does not
fully respect the addition aspect.
The main method underlying the results of this paper is given in
Theorem~\ref{themethod}, which provides strong constraints on the dynamic
programming
recursion for classifying numbers of small defect. It allows construction of
sets of integers including all integers of defect below a specified bound $r$,
which may however include some additional integers. The method contains
adjustable parameters, and with additional work they sometimes permit exact
determination of these sets.
This main method has several applications. First, we use it to explicitly
classify
all integers of defect below the bound $12 \delta(2)\approx 1.286$.
(Theorem~\ref{computeresult}). This requires pruning the sets found using
Theorem~\ref{themethod} to determine the sets below $k \delta(2)$
for $1\le k \le 12.$
Using this result we obtain an explicit classification of all integers having
defect at most $1$, as follows.
\begin{thm}
The numbers $n$ satisfying $0\le \delta(n)<1$ are precisely those that can be
written in one of the following forms, and have the following complexities:
\begin{enumerate}
\item $3^k$ for $k\ge 1$, of complexity $3k$
\item $2^a 3^k$ for $a\le 9$, of complexity $2a+3k$ (for $a$, $k$ not both
zero)
\item $5\cdot2^a 3^k$ for $a\le 3$, of complexity $5+2a+3k$
\item $7\cdot2^a 3^k$ for $a\le 2$, of complexity $6+2a+3k$
\item $19\cdot3^k$ of complexity $9+3k$
\item $13\cdot3^k$ of complexity $8+3k$
\item $(3^n+1)3^k$ of complexity $1+3n+3k$ (for $n\ne0$)
\end{enumerate}
Furthermore $n=1$ is the only number having defect exactly $1$.
\end{thm}
This result is established in Section \ref{sec61}.
Using a slightly more general result, which we present as Theorem \ref{computeresult},
one can obtain a generalization of Rawsthorne's results,
consisting of a description of all $E_r(k)$ for every finite $r \ge 0$,
valid for all sufficiently large $k$, depending on $r$.
This answer also depends on the congruence class of $k \pmod{3}$. For
example, one has $E_2(3k) = \frac{64}{81} E(3k)$,
$E_2(3k+1) =\frac{5}{6} E(3k+1)$ and $E_2(3k+2) = \frac{5}{6} E(3k+2)$, all
holding for $k \ge 4$.
For $E_5(k)$ all three residue classes have different formulas, valid for $k
\ge 5$. This generalization will be described elsewhere (\cite{seq3}).
Secondly, the result can be used to obtain lower bounds on complexity
of certain integers, by showing they are excluded from sets containing all
integers of complexity at most $r$.
This we use to prove Conjecture \ref{cj11} for $a \le 21$.
\begin{thm}\label{th11main}
For all $0\le a \le 21$ and any $k\ge 0$ having $a+k \ge 1$, there holds
$$
\cpx{2^a3^k}=2a+3k.
$$
\end{thm}
This result is established in Section \ref{sec62}.
It is possible to carry out computations establishing
the Conjecture \ref{cj11} for larger value of $a$,
as we shall describe in \cite{seq2}.
Thirdly, our main method can be used to estimate the magnitude of numbers below
$x$ having a given defect.
\begin{thm}
\label{indcount0}
For any $r >0$ the number $A_r(x)$ of elements $n$ smaller than $x$
which have defect $\delta(n) <r$ satisfies
an upper bound, valid for all $x \ge 2$,
$$
A_r(x) \le C_r (\log x)^{\lfloor r \rfloor+1},
$$
where $C_r >0$ is an effectively computable constant depending on $r$.
\end{thm}
This result is proved in Section \ref{sec63}. It implies that the set of
possible defect values is unbounded.
\subsection{Discussion}
We first remark on computing $\cpx{n}$. The recursive definition permits
computing $\cpx{n}$ by dynamic programming, but it requires knowing
$\{ \cpx{k} : 1 \le k \le n-1\}$, so takes exponential time in the input size
of $n$ measured in bits. In particular, a straightforward
approach to computing $\cpx{n}$
requires on the order of $n^2$ steps. Srinivas and Shankar \cite{waset}
obtained an improvement on this, running in time $O(n^{\log_2 3})$.
We make some further remarks on Conjecture \ref{cj11}.
Let us specialize to $k=0$ and
consider an analogous question for prime powers,
concerning $\cpx{p^m}$ as $m$ varies.
It is clear that $\cpx{p^m} \le m \cdot \cpx{p}$, since we can
concatenate by multiplication $m$ copies of a good representation of $p$.
For which primes $p$ is it true that
$\cpx{p^m} = m \cpx{p}$ holds for all $m \ge 1$?
This is verified for $p=3$ by $\cpx{3^m} = 3m,$
and the truth of Conjecture \ref{cj11} requires that it hold
for $p=2$, with $\cpx{2^m} = 2m$.
However this question has a negative answer for powers of $5$.
Here while $\cpx{5}=5$, one instead gets that $\cpx{5^6}=\cpx{15625}=29<6
\cdot\cpx{5}= 30$, as
\begin{eqnarray*}
15625 & = & 1+(1+1)(1+1)(1+1)(1+1+1)(1+1+1)\cdot \\
& & (1+(1+1)(1+1)(1+1)(1+1+1)(1+1+1)(1+1+1))
\end{eqnarray*}
This encodes the identity $5^6 = 1 + 72 \cdot 217$,
in which $72= 2^3 \cdot 3^2$ and $217= 1+ 2^3 \cdot 3^3$.
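The arithmetic behind this representation is easily checked; the following sketch (illustrative Python) verifies the identity and counts the $1$'s, confirming that the displayed expression uses exactly $29$ ones (the minimality of $29$ is of course a separate computation):

```python
# 72  = (1+1)(1+1)(1+1)(1+1+1)(1+1+1), and 217 = 1 + 72 * 3
assert 2 * 2 * 2 * 3 * 3 == 72
assert 1 + 2 * 2 * 2 * 3 * 3 * 3 == 217
assert 1 + 72 * 217 == 5 ** 6 == 15625

ones_72 = 2 + 2 + 2 + 3 + 3              # each factor k written as k ones
ones_217 = 1 + (2 + 2 + 2 + 3 + 3 + 3)
assert 1 + ones_72 + ones_217 == 29      # total ones, matching ||15625|| = 29
```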
This counterexample for powers of $5$ leaves open the possibility
that there might exist a (possibly far larger) counterexample for powers of $2$,
that has not yet been detected.
This discussion shows that Conjecture \ref{cj11}, if true,
implies a kind of very strong arithmetic independence of powers of $2$ and
powers of $3$. This would represent an important feature of the prime $2$
in integer complexity. Conjecture \ref{cj11} has implications about the
number of nonzero digits in the expansion of $2^n$ in base $3$ as a function of
$n$; namely, if there existed a
large power of $2$ with a huge number of zero digits in its base $3$ expansion,
then this would give a counterexample, achieving $\cpx{2^k}< 2k$.
Problems similar to this very special subproblem already appear difficult (see
Lagarias \cite{Lag09}). A result of C.~L.~Stewart \cite{Stewart} yields a lower
bound on the number of nonzero digits appearing in the base $3$ expansion of
$2^n$, but it is tiny, being only $\Omega(\frac{\log n}{\log \log n})$.
The truth of $\cpx{2^n}= 2n$ would also immediately imply the lower bound
$$
\limsup_{n\rightarrow\infty} \frac{\cpx{n}}{\log n}\ge \frac{2}{\log 2}.
$$
Computer experiments seem to agree with this prediction and even allow the
possibility of equality; see Iraids et al.\ \cite{data2}.
There remain many interesting open questions concerning the classification of
integers given by the defect. The first concerns the distribution of stable
and unstable integers. How many are there of each kind? A second question
concerns the function $M(n)$ that counts the number of distinct minimal
decompositions into $1$'s that a given integer $n$ has. How does this function
behave?
Finally we remark that the set $\mathscr{D} := \{ \delta(n): n \ge 1 \}$ of all
defect values turns out to be a highly
structured set. In a sequel \cite{seq1}, we shall show that it is
a well-ordered set, of order type $\omega^\omega$, a fact related to some
earlier conjectures of Juan Arias de Reyna \cite{Arias}.
\section{Properties of the defect}
\label{secdft}
The defect is the fundamental tool in this paper; let us begin by
noting some of its basic properties.
\begin{prop}
\label{multdft}
(1) For all integers $a \ge 1$,
\[ \delta(a) \ge 0.\]
Here equality holds precisely for $a= 3^k$, $k \ge 1$.
(2) One has
\[ \delta(ab)\le \delta(a)+\delta(b),\]
and equality holds if and only if
$\cpx{ab}=\cpx{a}+\cpx{b}$.
(3) For $k\ge 1$,
\[\delta(3^k \cdot n) \le \delta(n)\]
and equality holds
if and only if $\cpx{3^k \cdot n}=3k+\cpx{n}$.
\end{prop}
\begin{proof}
(1) This follows from the result of Selfridge. Since for $k\ge 1$,
$\cpx{3^k}=3k$, we have $\delta(3^k)=0$ for $k\ge 1$, while $\delta(1)=1$. For the
converse, note that $3\log_3 n$ is only an integer if $n$ is a power of $3$.
(2) This is a direct consequence of the definition.
(3) This follows from (2), from noting that $\delta(3^k)=0$ for $k\ge 1$.
\end{proof}
Because $\cpx{3^k}=3k$ for $k\ge 1$, one might hope that in general,
$\cpx{3n}=3+\cpx{n}$ for $n>1$. However, this is not so; for instance,
$\cpx{107}=16$, but $\cpx{321}=18$.
The defect measures how far a given integer $n$ falls below the upper bound
$E(\cpx{n})$, expressed in terms of the ratio $E(\cpx{n})/n$:
\begin{prop}
We have $\delta(1) =1$ and
\label{dRformulae}
\begin{displaymath}
\delta(n)=\left\{ \begin{array}{ll}
3\log_3 \frac{E(\cpx{n})}{n} & \mathrm{if}\quad \cpx{n}\equiv 0\pmod{3}, \\
3\log_3 \frac{E(\cpx{n})}{n} +2\,\delta(2)
& \mathrm{if}\quad \cpx{n}\equiv 1\pmod{3}, \,\, \mathrm{with} \; n > 1, \\
3\log_3 \frac{E(\cpx{n})}{n} +\delta(2)
& \mathrm{if}\quad \cpx{n}\equiv 2\pmod{3}.
\end{array} \right.
\end{displaymath}
In particular $E(\cpx{n})/n\ge 1$ for any $n \ge 1$.
\end{prop}
\begin{proof}
The proof is a straightforward computation using Selfridge's formulas for $E(k)$,
for $k = 3j+ i,$ $i=0,1,2$.
\end{proof}
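This computation can be confirmed numerically. The sketch below (illustrative Python; the helper names are ours) encodes Selfridge's formulas $E(1)=1$, $E(3j)=3^j$, $E(3j+1)=4\cdot 3^{j-1}$, $E(3j+2)=2\cdot 3^j$, recomputes $\cpx{n}$ naively, and checks the three case formulas for $2\le n\le 200$:

```python
import math

def complexity_table(limit):
    # comp[n] = ||n||, naive dynamic program over sums and products
    comp = [0] * (limit + 1)
    comp[1] = 1
    for n in range(2, limit + 1):
        best = min(comp[a] + comp[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, comp[d] + comp[n // d])
            d += 1
        comp[n] = best
    return comp

def E(k):
    # Selfridge: the largest integer of complexity k
    if k == 1:
        return 1
    j, r = divmod(k, 3)
    return {0: 3 ** j, 1: 4 * 3 ** (j - 1), 2: 2 * 3 ** j}[r]

comp = complexity_table(200)
d2 = 2 - 3 * math.log(2, 3)                       # delta(2)
for n in range(2, 201):
    assert E(comp[n]) >= n                        # E(||n||)/n >= 1
    defect = comp[n] - 3 * math.log(n, 3)
    base = 3 * math.log(E(comp[n]) / n, 3)
    shift = {0: 0.0, 1: 2 * d2, 2: d2}[comp[n] % 3]
    assert abs(defect - (base + shift)) < 1e-9
```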
\subsection{Stable Integers}\label{sec21}
The example above motivates the following definition.
\begin{defn}
A number $m$ is called \emph{stable} if $\cpx{3^k \cdot m}=3k+\cpx{m}$
holds for every $k \ge 1$.
Otherwise it is called \emph{unstable}.
\end{defn}
We have the following criterion for stability.
\begin{prop}
\label{stabdft}
The number $m$ is stable if and only if $\delta(3^k \cdot m)=\delta(m)$ for all $k\ge 0$.
\end{prop}
\begin{proof}
This is immediate from Proposition~\ref{multdft}(3).
\end{proof}
These results already suffice to prove the following
result, conjectured by Juan Arias de Reyna \cite{Arias}.
\begin{thm}
\label{cj1}
(1) For any $m \ge 1$, there exists a finite $K\ge 0$ such that
$3^K m$ is stable.
(2) If the defect $\delta(m)$ satisfies $0 \le \delta(m)<1$, then $m$ itself is
stable.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{cj1}]
(1) From Proposition~\ref{multdft}, we have that for any $n$,
$\delta(3n)\le
\delta(n)$, with equality if and only if $\cpx{3n}=\cpx{n}+3$. More generally,
$\delta(3n)=\delta(n)-(\cpx{n}+3-\cpx{3n})$, and so the difference
$\delta(n)-\delta(3n)$ is always an integer.
This means that the sequence
$\delta(m), \delta(3m), \delta(9m), \ldots$ is non-increasing, nonnegative, and
can only decrease in integral amounts;
hence it must eventually stabilize. Applying Proposition~\ref{stabdft} proves
the theorem.
(2) If $\delta(m)<1$, since all $\delta(n) \ge 0$ there is no room to remove
any integral
amount, so $m$ must be stable.
\end{proof}
Note that while this proof shows that for any $n$ there exists $K$ such that
$3^K n$ is stable, it yields no upper bound on such a $K$. We will give a more
constructive proof and show how to compute such a $K$ in \cite{seq2}.
The value of the defect separates the integers into small classes, whose members
differ only by powers of $3$.
\begin{prop}
\label{eqdefect}
Suppose that $m$ and $n$ are two positive integers, with $m>n$.
(1) If $q:= \delta(n)- \delta(m)$ is rational, then it is necessarily a nonnegative integer,
and furthermore $m=n \cdot 3^k$ for some $k \ge 1$.
(2) If $\delta(n) = \delta(m)$ then $m= n \cdot 3^k$ for some $k \ge 1$ and furthermore
\[ || n \cdot 3^j|| = 3j+ || n ||\qquad\mathrm{for}\ 0 \le j \le k.\]
In particular $\delta(n)= \delta(m)$ implies $\cpx{n}\equiv \cpx{m} \pmod{3}$.
\end{prop}
\begin{proof}
(1) If $q=\delta(n)-\delta(m)$ is rational, then $3\log_3(m/n) = q - \cpx{n} + \cpx{m}$
is rational;
since $m/n$ is rational, this can occur only if $\log_3(m/n)$ is an
integer $k$, in which case, since $m > n,$ $m = n \cdot 3^k$ with $k \ge 1$.
It then follows from the definition of defect that
$q=\cpx{n}+3k-\cpx{m}$.
(2) By (1) we know that $m=n \cdot 3^k$ for some $k \ge 1$. By
Proposition \ref{multdft} (3)
we have $\delta(n \cdot 3^j) \le \delta (n)$, for $j \ge 0$
and it also gives
$\delta(m)= \delta(n \cdot 3^k) \le \delta(n \cdot 3^j),$ for $0 \le j \le k$.
Since $\delta(m)=\delta(n)$ by hypothesis, this gives $\delta(n \cdot 3^j) =
\delta(n)$,
so that $||n \cdot 3^j|| = 3j+ ||n||$ for $0 \le j \le k$.
\end{proof}
The results so far suffice to prove Theorem~\ref{power-of-3}.
\begin{proof}[Proof of Theorem~\ref{power-of-3}]
(1) This follows from Proposition \ref{eqdefect}(2).
(2) The non-increasing assertion follows from Proposition \ref{multdft}(3).
The finiteness of the number of leaders in a sequence $3^k \cdot n$ follows from Theorem \ref{cj1} (1).
\end{proof}
\subsection{Leaders}\label{sec22}
Again because $\cpx{3n}$ is not always equal to $3+\cpx{n}$, it makes sense to
introduce the following definition:
\begin{defn}
We call a natural number $n$ a \emph{leader} if it cannot be written
most-efficiently as $3m$ for some $m$; i.e., if either $3\nmid n$, or, if $3\mid n$, then $\cpx{n}<3+\cpx{n/3}$.
\end{defn}
For example, $107$ is a leader since $3\nmid 107$, and $321$ is also a leader
since $\cpx{321}=18<3+16=3+\cpx{107}$. However, $963$ is not a leader, as
$\cpx{963}=21=3+\cpx{321}$.
Leaders can be stable or unstable. In this example $107$ is unstable, but by
Theorem \ref{cj1}
some multiple $3^K \cdot 107$ will be stable, and the smallest such multiple
will be a stable leader.
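Numerically, the defect sequence along $107,\,321,\,963$ behaves exactly as described: it drops by the integer $1$ at the first step and is then constant as far as this computation reaches (illustrative Python; a finite computation cannot by itself certify the stability of $3^K \cdot 107$):

```python
import math

def complexity_table(limit):
    # comp[n] = ||n||, naive dynamic program over sums and products
    comp = [0] * (limit + 1)
    comp[1] = 1
    for n in range(2, limit + 1):
        best = min(comp[a] + comp[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, comp[d] + comp[n // d])
            d += 1
        comp[n] = best
    return comp

comp = complexity_table(963)
delta = lambda n: comp[n] - 3 * math.log(n, 3)
assert comp[107] == 16 and comp[321] == 18 and comp[963] == 21
assert abs((delta(107) - delta(321)) - 1) < 1e-9   # one integral drop: 107 unstable
assert abs(delta(321) - delta(963)) < 1e-9          # no further drop at this step
```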
We have the following alternate characterization of leaders:
\begin{prop}
\label{1stofdft}
(1) A number $n$ is a leader if and only if it is the smallest number having its given defect value.
(2) For any natural number $m$, there is a unique leader $n\le m$ such that
$\delta(n)= \delta(m)$. For it $m=n \cdot 3^k$ for some $k \ge 0$.
\end{prop}
\begin{proof}
(1) If this were false, there would be a leader $n$ and some $n' < n$ with $\delta(n')=\delta(n)$.
By Proposition \ref{eqdefect} (2) $n = 3^k \cdot n'$ with $k \ge 1$ and
$||n' \cdot 3^j|| = 3j + ||n'||$ for $0 \le j \le k$. But then $n/3 = n' \cdot
3^{k-1}$ is an integer
and $||n/3|| = ||n'||+ 3k -3= ||n||-3$, which contradicts $n$ being a leader.
Conversely, if $n$ is the first number of its defect and is divisible by $3$,
then we cannot have $\cpx{n}=\cpx{n/3}+3$, or else by Proposition~\ref{multdft}
we would obtain $\delta(n)=\delta(n/3)$, contradicting minimality.
(2) Pick $n$ to be the smallest number such that $\delta(n)=\delta(m)$; this is
the unique leader satisfying $\delta(n)=\delta(m)$. Then $m=3^k n$ for some
$k\ge 0$ by Proposition~\ref{eqdefect}.
\end{proof}
To summarize, if $\delta$ occurs as a defect, then the set of integers
$$
N(\delta) := \{m:\, \delta(m)= \delta\},
$$
having a given defect value $\delta$
has a smallest element that is a leader. If this leader $n$ is unstable, then
$N(\delta) =\{ 3^j \cdot n: 0 \le j \le j(\delta)\}$ is a finite set, for some
finite $j(\delta) \ge 0$ depending on $\delta$. If this leader
is stable, then $N(\delta)= \{ 3^j \cdot n: \, j \ge 0\}$ is an infinite set.
Furthermore if $3 \nmid n$ then $n$ is a leader, and there is a unique $K= K(n)
\ge 0$
such that $n' = 3^K n$ is a stable leader.
\section{Good factorizations and solid numbers}
Given a natural number $n>1$, by the dynamic programming definition of
complexity there are two numbers $u$ and $v$, both smaller than $n$,
such that either $n=u\cdot v$ or $n=u+v$, and in either case
$\cpx{n}=\cpx{u}+\cpx{v}$. In the case $n=u+v$ with $\cpx{n}=\cpx{u}+\cpx{v}$
we say $n$ is {\em additively reducible}. In the case $n=u\cdot v$ with
$\cpx{n}=\cpx{u}+\cpx{v}$ we say
$n$ is {\em multiplicatively reducible}.
Some numbers $n$ are reducible in both senses. For instance, $10=9+1$ with
$\cpx{10}=\cpx{9}+\cpx{1}$, and $10=2\cdot 5$ with
$\cpx{10}=\cpx{2}+\cpx{5}$.
\subsection{Additive Irreducibility and Solid Numbers}
We introduce terminology for numbers
not being additively reducible.
\begin{defn}
We will say that a natural number $n$ is {\em additively irreducible} if it
cannot be written most-efficiently as a sum, i.e., for all $u$ and
$v$ such that $n=u+v$, we have $\cpx{n}<\cpx{u}+\cpx{v}$. We call such values of
$n$ {\em solid numbers}.
\end{defn}
The first few solid numbers are
\begin{align*}
\{1, 6, 8, 9, 12, 14, 15, 16, 18, 20, 21, 24, 26, 27, \ldots\}
\end{align*}
It can be shown that $3^n$ is a solid number for $n\ge 2$, and so there are
infinitely many solid numbers. Experimental evidence suggests that a positive
fraction of integers below $x$ are solid numbers, as $x \to \infty$.
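The initial list of solid numbers can be recomputed directly from the definition (illustrative Python; $n$ is solid when no split $n=a+b$ attains $\cpx{n}=\cpx{a}+\cpx{b}$):

```python
def complexity_table(limit):
    # comp[n] = ||n||, naive dynamic program over sums and products
    comp = [0] * (limit + 1)
    comp[1] = 1
    for n in range(2, limit + 1):
        best = min(comp[a] + comp[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, comp[d] + comp[n // d])
            d += 1
        comp[n] = best
    return comp

comp = complexity_table(27)
solid = [n for n in range(1, 28)
         if all(comp[n] < comp[a] + comp[n - a] for a in range(1, n // 2 + 1))]
assert solid == [1, 6, 8, 9, 12, 14, 15, 16, 18, 20, 21, 24, 26, 27]
```

(For $n=1$ the sum over splits is empty, so $1$ is solid by convention, matching the list.)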
\subsection{Multiplicative Irreducibility and Good Factorizations}
We introduce further terminology for factorizations that respect complexity.
\begin{defn}
A factorization $n=u_1\cdot u_2\cdots u_k$ is a \emph{good factorization} of $n$
if $n$ can be written most-efficiently as $u_1\cdot u_2\cdots u_k$, i.e., if the
following equality holds:
\begin{displaymath}
\cpx{n}=\cpx{u_1}+\cpx{u_2}+\ldots+\cpx{u_k}.
\end{displaymath}
The factorization containing only one factor is automatically good; this will be
called a \emph{trivial good factorization}.
\end{defn}
\begin{prop}
\label{goodfac}
If $n=n_1\cdot n_2\cdots n_k$ is a good factorization, then for any
nonempty subset $I\subseteq \{1,2,\dots,k\}$, setting $m=\prod_{j\in I} n_j$,
the factorization $m=\prod_{j\in I} n_j$ is a good factorization of $m$.
\end{prop}
\begin{proof}
If the factorization of $m$ were not good, then we would have
\begin{displaymath}
\cpx{m}<\sum_{j\in I} \cpx{n_j}
\end{displaymath}
But then
\begin{displaymath}
\cpx{n} = \Bigl \Vert m \prod_{j\notin I} n_j\Bigr \Vert
<\sum_{j\in I} \cpx{n_j}+\sum_{j\notin I} \cpx{n_j}
=\sum_{j=1}^k \cpx{n_j}
\end{displaymath}
and the given factorization of $n$ would not be a good factorization.
\end{proof}
\begin{prop}
\label{goodfacconcat}
(1) If $n=n_1\cdot n_2 \cdots n_k$ is a good factorization, and each
$n_i=n_{i,1} \cdots n_{i,l_i}$ is a good factorization, then so is $n=\prod_{i=1}^k
\prod_{j=1}^{l_i} n_{i,j}$.
(2) If $n=n_1\cdot n_2\cdot \ldots \cdot n_k$ is a good factorization, and $I_1,
I_2, \ldots, I_l$ is a partition of $\{1,\ldots,k\}$, then letting
$m_i=\prod_{j\in I_i} n_j$, we have that $n=\prod_{i=1}^l m_i$ is a good
factorization.
\end{prop}
\begin{proof}
(1) We have that $\cpx{n_i}=\sum_{j=1}^{l_i} \cpx{n_{i,j}}$ and
$ \cpx{n}=\sum_{i=1}^k \cpx{n_i}$, so
\[\cpx{n}=\sum_{i=1}^k \sum_{j=1}^{l_i}
\cpx{n_{i,j}}
\]
and we are done.
(2) This follows from Proposition~\ref{goodfac} together with (1).
\end{proof}
\begin{defn}
We will say that
a natural number $n$ is {\em multiplicatively irreducible}
(abbreviated \emph{$m$-irreducible}) if $n$ has no nontrivial good
factorizations.
\end{defn}
Proposition~\ref{goodfacconcat}(2) shows $n$ is $m$-irreducible if and only if
all nontrivial factorizations $n=uv$ have $\cpx{n}<\cpx{u}+\cpx{v}$.
Thus a prime number $p$ is automatically $m$-irreducible since the only
factorization is $p=p\cdot1$ and obviously we have
$\cpx{p}<\cpx{p}+1=\cpx{p}+\cpx{1}$. However, the converse does not hold.
For instance, $46$ is a composite number which is $m$-irreducible.
\begin{prop}
\label{facexists}
Any natural number has a good factorization into $m$-irreducibles.
\end{prop}
\begin{proof}
We may apply induction and assume that any $m<n$ has a factorization into
$m$-irreducibles. If $n$ is $m$-irreducible, we are done. Otherwise, $n$ has a
good factorization $n=uv$. Observe that $n=n\cdot 1$ is never a good
factorization, since $\cpx{1}=1$; hence, $u$, $v<n$. Then the induction
hypothesis implies that $u$ and $v$ have good factorizations into
$m$-irreducibles. Multiplying these factorizations together and applying
Proposition~\ref{goodfacconcat}, we obtain a good factorization of $n$ into
$m$-irreducibles.
\end{proof}
Good factorizations into $m$-irreducibles need not be unique. For
$4838 = 2 \cdot 41 \cdot 59$,
we find that $2\cdot(41\cdot59)$, $(2\cdot59)\cdot41$ and $(2\cdot41)\cdot59$
are all good factorizations, but the full factorization $2\cdot41\cdot59$ is
not a good factorization. (Thanks to Juan Arias de Reyna for this example.) This is deducible from the following data:
\begin{gather*}
\cpx{2\cdot41\cdot 59}=27,\\
\cpx{2}=2,\quad
\cpx{41}=12,\quad
\cpx{59}=14.\\
\cpx{2\cdot41}=13,\quad
\cpx{2\cdot59}=15,\quad
\cpx{41\cdot59}=25.
\end{gather*}
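These values, and the resulting good and non-good factorizations, can be confirmed with the naive dynamic program (illustrative Python; building the table up to $4838$ takes a few seconds):

```python
def complexity_table(limit):
    # comp[n] = ||n||, naive dynamic program over sums and products
    comp = [0] * (limit + 1)
    comp[1] = 1
    for n in range(2, limit + 1):
        best = min(comp[a] + comp[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, comp[d] + comp[n // d])
            d += 1
        comp[n] = best
    return comp

comp = complexity_table(4838)
assert (comp[2], comp[41], comp[59]) == (2, 12, 14)
assert (comp[82], comp[118], comp[2419]) == (13, 15, 25)   # 2*41, 2*59, 41*59
assert comp[4838] == 27
# the three good factorizations into two factors:
assert comp[4838] == comp[2] + comp[2419] == comp[82] + comp[59] == comp[118] + comp[41]
# but the full prime factorization is not good:
assert comp[4838] < comp[2] + comp[41] + comp[59]
```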
\subsection{Good factorizations and leaders}
The next two propositions show how the notion of good factorization interacts
with leaders and stability.
\begin{prop}
\label{leaderfac}
Let $n=n_1\cdot n_2\cdots n_r$ be a good factorization. If $n$ is a leader
then each of the factors $n_j$ is a leader.
\end{prop}
\begin{proof}
Suppose otherwise; without loss of generality, we may assume that $n_1$ is not a
leader, so $3\mid n_1$ and $\cpx{n_1}=3+\cpx{n_1/3}$. So $3\mid n$ and
\begin{multline*}
\cpx{n/3}=\cpx{(n_1/3)\cdot n_2\cdot\ldots\cdot n_r}\le
\cpx{n_1/3}+\sum_{j=2}^r \cpx{n_j}\\ =
\cpx{n_1}-3+\sum_{j=2}^r \cpx{n_j}=\cpx{n}-3.
\end{multline*}
Since $\cpx{n}\le 3+\cpx{n/3}$, we have $\Vert n\Vert= 3+\Vert n/3\Vert$,
and thus $n$ is not a leader.
\end{proof}
\begin{prop}
\label{stablefac}
Let $n=n_1\cdot n_2\cdots n_r$ be a good factorization. If $n$ is stable,
then each of its factors $n_j$ is stable.
\end{prop}
\begin{proof}
Suppose otherwise. Without loss of generality, we may assume that $n_1$ is
unstable; say $\cpx{3^k n_1}<\cpx{n_1}+3k$. So
\begin{multline*}
\cpx{3^k n}=\cpx{(3^k n_1)\cdot n_2\cdot\ldots\cdot n_r}\le
\cpx{3^k n_1}+\sum_{j=2}^r \cpx{n_j}\\<
\cpx{n_1}+3k+\sum_{j=2}^r \cpx{n_j}=\cpx{n}+3k.
\end{multline*}
and thus $n$ is not stable.
\end{proof}
Assembling all these results we deduce that being a leader and being stable are
both inherited properties for subfactorizations of good factorizations.
\begin{prop}
\label{lsfac2}
Let $n=n_1\cdot n_2\cdots n_r$ be a good factorization, and $I$ be a nonempty
subset of $\{1,\ldots,r\}$; let $m=\prod_{i\in I} n_i$. If $n$ is a leader,
then so is $m$. If $n$ is stable, then so is $m$.
\end{prop}
\begin{proof}
Immediate from Proposition~\ref{leaderfac}, Proposition~\ref{stablefac}, and
Proposition~\ref{goodfacconcat}(2).
\end{proof}
\section{The Classification Method}
\label{mainlem}
Here, we state and prove a result (Theorem \ref{themethod}) that will be our
primary tool for the rest of the paper. By applying it repeatedly, for any
$r>0$, we can put restrictions on what integers $n$ can satisfy $\delta(n)<r$.
\begin{defn}
(1) For any real $r\ge0$, define $A_r$ to be $\{n\in\mathbb{N}:\delta(n)<r\}$.
(2) Define
$B_r$ to be the set consisting of those elements of $A_r$ that are leaders.
\end{defn}
While $A_r$ is our main object of interest, it turns out to be easier
and more natural to deal with $B_r$.
Note that knowing $B_r$ is enough to determine $A_r$, as
expressed in the following proposition:
\begin{prop}
\[A_r=\{ 3^k n: n\in B_r, k\ge 0 \}.\]
\end{prop}
\begin{proof}
If $n\in B_r$, then $\delta(3^k n)\le \delta(n)<r$, so $3^k n\in A_r$.
Conversely, if $m\in A_r$, by Proposition~\ref{1stofdft}(2) we can take $n\ge 1$
and $k\ge 0$ such that $n$ is a leader, $m=3^k n$, and $\delta(m)=\delta(n)$;
then $n\in B_r$ and we are done.
\end{proof}
We now let $\alpha >0$ be a real parameter, specifiable in
advance. The main result puts constraints on the allowable forms of
the dynamic programming recursion (most efficient representations) to compute
integers in $B_{(k+1) \alpha}$
in terms of integers in $B_{j \alpha}$ for $1 \le j \le k$.
However there are some exceptional cases that must be considered separately in
the theorem;
fortunately, for any $\alpha<1$, there are only finitely many. We will
collect these into a set we call $T_\alpha$.
\begin{defn}
Define $T_\alpha$ to consist of $1$ together with those $m$-irreducible
numbers $n$ which satisfy
\[\frac{1}{n-1}>3^{\frac{1-\alpha}{3}}-1\]
and do not satisfy $\cpx{n}=\cpx{n-b}+\cpx{b}$ for any solid numbers $b$ with
$1<b\le n/2$.
\end{defn}
Observe that for $0< \alpha<1$, the above inequality is equivalent to
\[n<(3^{\frac{1-\alpha}{3}}-1)^{-1}+1\] and hence $T_\alpha$ is a finite set.
For $\alpha \ge 1$, the inequality is trivially satisfied and so
$T_\alpha= T_{1}$. We do not know whether $T_1$ is a finite or an infinite set.
However in our computations we will always choose values $0< \alpha <1.$
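For instance, for the parameter $\alpha=\delta(2)\approx 0.107$ used in our computations below, the inequality is very restrictive, as a quick numerical check shows (illustrative Python; this tests only the size bound, not the $m$-irreducibility and solid-summand conditions):

```python
import math

alpha = 2 - 3 * math.log(2, 3)                  # delta(2), about 0.1072
cutoff = 1 / (3 ** ((1 - alpha) / 3) - 1) + 1   # n must lie below this bound
assert 3 < cutoff < 4                           # so only n <= 3 can satisfy the inequality
```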
We can now state the main classification result, which puts strong
constraints on the form of most efficient decompositions on numbers in sets
$B_{(k+1)\alpha}$.
\begin{thm}
\label{themethod}
Suppose $0< \alpha <1$ and that $k\ge1$.
Then any $n\in B_{(k+1)\alpha}$ can be most-efficiently
represented in (at least) one of the following forms:
\begin{enumerate}
\item
For $k=1$,
there is either a good factorization $n=u\cdot v$ where
$u,v\in {B}_\alpha$, or a good factorization $n=u\cdot v\cdot w$ with
$u,v,w\in {B}_\alpha$; \\
For $k \ge 2$, there is a good factorization $n=u \cdot v$ where $u\in
B_{i\alpha}$,
$v\in B_{j\alpha}$ with $i+j=k+2$ and $2\le i, j\le k$.
\item $n=a+b$ with $\cpx{n}=\cpx{a}+\cpx{b}$, $a\in A_{k\alpha}$, $b\le a$ a
solid number and
\[\delta(a)+\cpx{b}<(k+1)\alpha+3\log_3 2.\]
\item There is a good factorization $n=(a+b)v$ with $v\in B_\alpha$ and $a$
and $b$ satisfying the conditions in the case (2) above.
\item $n\in T_\alpha$ (and thus in particular either $n=1$ or
$\cpx{n}=\cpx{n-1}+1$.)
\item There is a good factorization $n = u\cdot v$ with $u\in T_\alpha$ and
$v\in B_\alpha$.
\end{enumerate}
\end{thm}
We will prove Theorem~\ref{themethod} in Section \ref{sec43},
after establishing a preliminary combinatorial lemma in Section \ref{sec42}.
To apply Theorem \ref{themethod}, one recursively constructs
from given sets $B_{j\alpha}^{*}$, $A_{j\alpha}^{*}$ for $1 \le j \le k-1$
which contain $B_{j\alpha}, A_{j\alpha}$, respectively,
the set of all $n$ satisfying the relaxed conditions (1)--(5) obtained by replacing
$B_{j\alpha}$ by
$B_{j\alpha}^{\ast}$ and $A_{j \alpha}$ by $A_{j \alpha}^{\ast}$.
This new set $B_{(k+1)\alpha}^{\ast\ast}$ contains the set $B_{(k+1)\alpha}$
we want.
Sometimes we can, by other methods, prune some elements from
$B_{(k+1)\alpha}^{\ast\ast}$ that do not belong
to $B_{(k+1)\alpha}$, to obtain a new approximation $B_{(k+1)\alpha}^{\ast}$.
This then determines $A_{(k+1)\alpha}^{\ast} := \{ 3^j n: \, j \ge 0, n \in
B_{(k+1)\alpha}^{\ast}\}$,
permitting continuation to the next level $k+2$.
We will present two applications of this construction:
\begin{enumerate}
\item
To get an upper bound on the number of elements of $B_{(k+1)\alpha}$ below
a given bound $x$.
\item
To get a lower bound for the complexity $\cpx{n}$ of a number $n$ by showing it
does not belong to a given set $B_{k \alpha}^{*}$; this excludes it from $B_{k
\alpha}$, whence $\cpx{n} \ge 3 \log_3 n + k \alpha$.
\end{enumerate}
In some circumstances we can obtain the exact sets $B_{k \alpha}$ and $A_{k
\alpha}$ for $1 \le k \le k_0$, i.e. we recursively construct $B_{k
\alpha}^{\ast}$ so that $B_{k\alpha}^{\ast} = B_{k \alpha}$.
This requires a perfect pruning operation at each step. Here a good choice of
the parameter $\alpha$ is helpful.
In applications we will typically not use the full strength of Theorem
\ref{themethod}. Though the representations it yields are most efficient, the
proofs will typically not
use this fact. Also, in the addition case (2), the requirement that
$\delta(a)+\cpx{b}<(k+1)\alpha+3\log_3 2$ implies the weaker
requirement that just $\cpx{b}<(k+1)\alpha+3\log_3 2$.
The latter relaxed condition is easier to check, but it does enlarge the initial
set $B_{(k+1) \alpha}^{\ast\ast}$ to be pruned.
\subsection{A Combinatorial Lemma}\label{sec42}
We establish a combinatorial lemma regarding decomposing a sum of
real numbers into blocks.
\begin{lem}
\label{blocklem}
Let $x_1, x_2, \dots, x_r>0$ be real numbers such that $\sum_{i=1}^r x_i <
k+1$, where $k\ge1$ is a natural number.
(1) If $k \ge 2$ then either there is some $i$ with
$x_i\ge k$, or else we may find a partition $A\cup B$ of the set
$\{1,2,\dots, r\}$ such that
\[
\sum_{i\in A}x_i<k,\qquad\sum_{i\in B}x_i<k.
\]
(2) If $k=1$ then either there is some $i$ with $x_i\ge 1$, or else we may find
a partition
$A\cup B\cup C$ of the set $\{1,2,\dots, r\}$ such that
\[
\sum_{i\in A}x_i<1,\qquad\sum_{i\in B}x_i<1,\qquad\sum_{i\in C}x_i<1.
\]
\end{lem}
\begin{proof}
(1) Suppose $k \ge 2$.
Let us abbreviate $\sum_{i\in S}x_i$ by $\sum S$. Among all partitions $A\cup
B$ of $\{1,\ldots,r\}$, take one that minimizes $|\sum A - \sum B|$, with $\sum
A\ge\sum B$. Suppose that $\sum A\ge k$; then since $\sum A+\sum B<k+1$, we
have $\sum B<1$, and so $\sum A-\sum B>k-1$. So pick $x_i\in A$ and let
$A'=A\setminus\{i\}$, $B'=B\cup\{i\}$. If $\sum A'>\sum B'$, then
$|\sum A'-\sum B'|=\sum A-\sum B - 2x_i<\sum A-\sum B$, contradicting
minimality, so $\sum A'\le\sum B'$.
So $\sum B'-\sum A'\ge \sum A-\sum B$, i.e.,
\[
x_i\ge \sum A-\sum B>k-1.
\]
Now $i$ was an arbitrary element of $A$; this means that $A$ can
have at most one element, since otherwise, if $j\ne i\in A$, we would have $\sum
A\ge x_i + x_j$ and hence $x_j\le \sum A-x_i \le \sum B<1$, but also $x_j>k-1$,
contradicting $k\ge 2$. Thus $A=\{i\}$ and so $x_i\ge k$.
(2) Here $k=1$. Assume that $x_1\ge x_2\ge\cdots\ge x_r$. If $x_1\ge1$ we are
done. Otherwise,
if $r\le3$, we can partition $\{1,\ldots,r\}$ into singletons.
For $r\ge 4$, assume by induction the lemma is true for all sets of numbers with
strictly less than $r$ elements. Let $y=x_{r-1}+x_r$. We must have $y<1$
because otherwise $x_{r-3}+x_{r-2}\ge x_{r-1}+x_r\ge 1$ and we get $\sum_{i=1}^r
x_i\ge 2$ in contradiction to the hypothesis.
Hence, if we define $x'_1=x_1$, \dots, $x'_{r-2}=x_{r-2}$, $x'_{r-1}=y$, we have
$\sum_{i=1}^{r-1}x'_i=\sum_{i=1}^r x_i<2$, and $x'_i<1$ for all $i$. By the
inductive hypothesis, then, there exists a partition
$A'\cup B'\cup C'=\{1,\dots, r-1\}$ with
\[
\sum_{i\in A'}x'_i<1,\quad
\sum_{i\in B'}x'_i<1,\quad\sum_{i\in C'}x'_i<1.
\]
Replacing $x'_{r-1}$ with $x_{r-1}$ and $x_r$, we get the required partition of
$\{1,\ldots,r\}$.
\end{proof}
For $k=1$ the example taking $\{ x_1, x_2, x_3\} = \{ 3/5, 3/5, 3/5\}$ shows
that a partition into three sets is sometimes necessary.
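The lemma can be stress-tested by brute force over all assignments of the $x_i$ to two or three blocks (illustrative Python; random instances are scaled so that $\sum_i x_i < k+1$, and the helper name \texttt{splittable} is ours):

```python
import random
from itertools import product

def splittable(xs, k, parts):
    """True if xs can be split into `parts` blocks, each with sum < k."""
    for labels in product(range(parts), repeat=len(xs)):
        sums = [0.0] * parts
        for x, lab in zip(xs, labels):
            sums[lab] += x
        if all(s < k for s in sums):
            return True
    return False

random.seed(0)
for _ in range(300):
    k = random.choice([1, 2, 3])
    raw = [random.uniform(0.01, 1.0) for _ in range(random.randint(1, 6))]
    target = random.uniform(0.1, k + 0.99)          # total kept below k + 1
    s = sum(raw)
    xs = [x * target / s for x in raw]
    # the lemma: either some x_i >= k, or a 2-partition (k >= 2) / 3-partition (k = 1)
    assert any(x >= k for x in xs) or splittable(xs, k, 2 if k >= 2 else 3)

# the example {3/5, 3/5, 3/5} with k = 1: two blocks fail, three suffice
assert not splittable([0.6, 0.6, 0.6], 1, 2)
assert splittable([0.6, 0.6, 0.6], 1, 3)
```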
\subsection{Proof of the Classification Method} \label{sec43}
\begin{proof}[Proof of Theorem \ref{themethod}.]
Suppose $n\in B_{(k+1)\alpha}$; take a most-efficient representation of $n$,
which is either $ab$, $a+b$, or $1$. If $n=1$, then $n\in T_\alpha$ and we are
in case (4). So suppose $n>1$.
If $n$ is $m$-irreducible, we will pick a way of writing $n=a+b$ with
$\cpx{n}=\cpx{a}+\cpx{b}$, $a\ge b$, and $b$ is solid. There is necessarily a
way to do this, since one way to do so is to write $n=a+b$ with
$\cpx{n}=\cpx{a}+\cpx{b}$ and $b$ minimal. Since this is possible, then, if
there is a way to choose $a$ and $b$ to have $b>1$, do so; otherwise, we must
pick $b=1$. In either case,
\[\cpx{a}+\cpx{b}=\cpx{n}<3\log_3(a+b)+(k+1)\alpha\le3\log_3(2a)+(k+1)\alpha,\]
so $\delta(a)+\cpx{b}<(k+1)\alpha+3\log_3 2$.
If $a\in A_{k\alpha}$, we are in case (2). Otherwise, we have
\begin{eqnarray*}
3\log_3 a+k\alpha+\cpx{b}\le \cpx{a}+\cpx{b}=\cpx{n}<\\
3\log_3(a+b)+(k+1)\alpha \le 3\log_3(2a)+(k+1)\alpha,
\end{eqnarray*}
so $\cpx{b}<3\log_3 2 +\alpha$; since $\alpha<1$, we have $\cpx{b}\le2$ and
thus $b\le2$. Because $b$ is solid, we have $b=1$. By assumption, we only
picked $b=1$ if this choice was forced upon us, so in this case, we must have
that $n$ does not satisfy $\cpx{n}=\cpx{n-b}+\cpx{b}$ for any solid $b$ with
$1<b\le n/2$.
Since $b=\cpx{b}=1$ we have
$3\log_3 a+k\alpha+1<3\log_3(a+1)+(k+1)\alpha$; since $\alpha<1$, solving for
$a$, we find that
\[
\frac{1}{n-1}=\frac{1}{a}>3^{\frac{1-\alpha}{3}}-1.
\]
Thus, $n\in T_\alpha$ and we are in case (4).
Now we consider the case when $n$ is not $m$-irreducible. Choose a good
factorization of $n$ into $m$-irreducible numbers, $n=\prod_{i=1}^r m_i$; since
$n$ is not $m$-irreducible, we have $r \ge 2$. Then we have $\sum_{i=1}^r
\delta(m_i)=\delta(n)<(k+1)\alpha$. Note that since we assumed $n$ is a leader,
every product of a nonempty subset of the $m_i$ is also a leader by
Proposition~\ref{lsfac2}. We now have two cases.
{\em Case 1.} $k\ge2$.
Now by Lemma~\ref{blocklem}(1), either there
exists an $i$ with $\delta(m_i)\ge k\alpha$, or else we can partition the
$\delta(m_i)$ into two sets each with sum less than $k\alpha$.
In the latter case, we may also assume these sets are nonempty, as if one is
empty, this implies that $\delta(n)<k\alpha$, and hence any partition of the
$\delta(m_i)$ will work; since $r\ge 2$, we can take both these sets to be
nonempty. In this case, call the products of these two sets $u$ and $v$, so
that $n=uv$ is a good factorization of $n$. Then
$\delta(u)+\delta(v)<(k+1)\alpha$, so if we let $(i-1)\alpha$ be the largest
integral multiple of $\alpha$ which is at most $\delta(u)$, then letting
$j=k+2-i$, we have $\delta(v)<j\alpha$. So $i+j=k+2$; furthermore, since
$i\alpha$ is the smallest integral multiple of $\alpha$ which is greater than
$\delta(u)$, and $\delta(u)<k\alpha$, we have $i\le k$, so $j\ge
2$. If also $i\ge 2$ then $j\le k$, and so we are in case (1). If instead
$i=1$, then we have $u\in B_\alpha \subseteq B_{2\alpha}$, and $v\in
B_{k\alpha}$ (since $\delta(v)<k\alpha$), so we are again in case
(1) if we take $i=2$ and $j=k$.
If such a partition is not possible, then let $u$ be an $m_i$ with
$\delta(m_i)\ge k\alpha$, and let $v$ be the product of the other $m_i$, so that
once again $n=uv$ is a good factorization of $n$. Since
$\delta(u)+\delta(v)=\delta(n)$, we have $\delta(v)<\alpha$, and so $v\in
B_\alpha$. Finally, since $u$ is $m$-irreducible and an element of
$B_{(k+1)\alpha}$, it satisfies the conditions of either case (2) or case (4),
and so $n$ satisfies the conditions of either case (3) or case (5).
{\em Case 2.} $k=1$.
Now by Lemma~\ref{blocklem}(2), either there
exists an $i$ with $\delta(m_i)\ge \alpha$, or else we can partition the
$\delta(m_i)$ into three sets each with sum less than $\alpha$.
In the latter case, we may also assume at least two of these sets are nonempty,
as otherwise $\delta(n)<\alpha$, and hence any
partition of the $\delta(m_i)$ will work.
If there are two nonempty sets, call the products of these two sets $u$ and
$v$, so that $n=uv$ is a good factorization of $n$. If there are three
nonempty sets, call their products $u, v, w$, so that
$n=uvw$ is a good factorization of $n$. Thus we are in case (1) for $k=1$.
If such a partition is not possible, then we repeat the argument in Case 1
above, determining that $n$ satisfies one of the conditions of cases (3) or (5).
\end{proof}
\section{Determination of all elements of defect below a given bound $r$}
\label{sec5}
In this section we determine all elements of $A_{r}$ for certain small $r$,
using Theorem~\ref{themethod}
together with a pruning operation.
\subsection{Classification of numbers of small defect}\label{sec50}
We will now choose as our parameter
\[
\alpha := \delta(2) = 2 - 3 \log_{3} 2 \approx 0.107.
\]
The choice of this parameter is motivated by Theorem \ref{thm-delta2} below.
We use the above method to
inductively compute $A_{k\delta(2)}$
and $B_{k\delta(2)}$ for $0\le k \le 12$. Numerically,
$1.286<12\delta(2)<1.287$.
The following result classifies all integers in $A_{12 \delta(2)}$.
\begin{thm}
\label{computeresult} {\em (Classification Theorem)}
The numbers $n$ satisfying $\delta(n)<12\delta(2)$ are precisely those that can
be written in at least one of the following forms, which have the indicated
complexities:
\begin{enumerate}
\item $3^k$ of complexity $3k$ (for $k\ge 1$)
\item $2^a 3^k$ for $a\le11$, of complexity $2a+3k$ (for $a$, $k$ not both zero)
\item $5\cdot 2^a 3^k$ for $a\le6$, of complexity $5+2a+3k$
\item $7\cdot 2^a 3^k$ for $a\le5$, of complexity $6+2a+3k$
\item $19\cdot 2^a 3^k$ for $a\le 3$, of complexity $9+2a+3k$
\item $13\cdot 2^a 3^k$ for $a\le 2$, of complexity $8+2a+3k$
\item $2^a(2^b3^l+1)3^k$ for $a+b\le 2$, of complexity $2(a+b)+3(l+k)+1$ (for
$b$, $l$ not both zero).
\item $1$, of complexity $1$
\item $55\cdot 2^a 3^k$ for $a\le 2$, of complexity $12+2a+3k$
\item $37\cdot 2^a 3^k$ for $a\le 1$, of complexity $11+2a+3k$
\item $25\cdot3^k$ of complexity $10+3k$
\item $17\cdot3^k$ of complexity $9+3k$
\item $73\cdot3^k$ of complexity $13+3k$
\end{enumerate}
In particular, all numbers $n>1$ with $\delta(n)<12\delta(2)$ are stable.
\end{thm}
This list is redundant; for example list (7) with $a=0, b=1, l=1$ gives $7
\cdot 3^k$, which overlaps list (4) with $a=0$.
But the given form is convenient for later purposes.
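The complexities asserted in this list can be spot-checked by brute force for small $n$. The sketch below (illustrative only, not part of the proof) computes $\cpx{n}$ by the standard dynamic program, which is exact because every optimal expression for $n$ is, at the top level, either a sum or a product of two smaller expressions:

```python
def complexities(N):
    """cpx[n] = integer complexity ||n|| for 1 <= n <= N."""
    cpx = [0] * (N + 1)
    cpx[1] = 1
    for n in range(2, N + 1):
        # best additive split n = a + (n - a)
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        # best multiplicative split n = d * (n // d)
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

cpx = complexities(300)

# a few cases from the list above
assert all(cpx[2**a * 3**k] == 2*a + 3*k
           for a in range(6) for k in range(3) if a or k)
assert cpx[5] == 5 and cpx[7] == 6 and cpx[19] == 9 and cpx[13] == 8
assert cpx[55] == 12 and cpx[37] == 11 and cpx[25] == 10
assert cpx[17] == 9 and cpx[73] == 13
```

For example, $\cpx{73}=13$ and $\cpx{55}=12$, matching items (13) and (9).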
In the next section we will give several applications
of this result. They can be derived knowing only the statement of
this theorem, without its proof, though one will also require
Theorem~\ref{themethod}.
The detailed proof of this theorem is given in the rest of this section.
The proof recursively determines all the sets $A_{k \delta(2)}$ and $B_{k
\delta(2)}$ for $1 \le k \le 12.$
It is possible to extend this method to values $k \delta(2)$
with $k > 12$, but it becomes tedious.
In a sequel paper \cite{seq2}, we will present a method for automating these
computations.
\subsection{Base case}
The use of $\delta(2)$ may initially seem like an odd choice of
step size. Its significance is shown by the following base case, which
is proved using Rawsthorne's result that $E_1(k)\le(8/9)E(k)$
(with equality for $k \ge 8$).
\begin{thm}\label{thm-delta2}
If $\delta(n)\ne 0$, then $\delta(n)\ge \delta(2)$. Equivalently, if $n$ is not
a power of $3$, then $\delta(n)\ge \delta(2)$.
\end{thm}
\begin{proof}
We apply Proposition \ref{dRformulae}. There are four cases.
Case 1. If $n=1$, then $\delta(n)=1\ge \delta(2)$.
Case 2. If $\cpx{n}\equiv 2 \pmod{3}$, then
$$\delta(n)=\delta(2)+3\log_3 \frac{E(\cpx{n})}{n}\ge \delta(2).$$
Case 3. If $\cpx{n}\equiv 1 \pmod{3}$ and $n> 1$, then
$$\delta(n)=2\delta(2)+3\log_3 \frac{E(\cpx{n})}{n}\ge 2\delta(2)\ge \delta(2).$$
Case 4. If $\cpx{n}\equiv 0\pmod{3}$, then $\delta(n)=3\log_3 (E(\cpx{n})/n)$. We
know that in this case $n=E(\cpx{n})$ if and only if $n$ is a power of $3$ if
and only if $\delta(n)=0$. So if $\delta(n)\ne 0$, then $n\le E_1(\cpx{n})$. But
$E_1(\cpx{n})\le (8/9)E(\cpx{n})$, so $E(\cpx{n})/n\ge 9/8$, so $\delta(n)\ge
3\log_3 \frac{9}{8} = 3\delta(2)\ge \delta(2)$.
\end{proof}
The proof above also establishes:
\begin{prop}\label{basecase}
$B_0=\emptyset$, and $B_{\delta(2)}=\{3\}$.
\end{prop}
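Rawsthorne's inequality used in the proof of Theorem~\ref{thm-delta2} can also be confirmed numerically for small $k$. Here we assume the convention that $E(k)$ is the largest integer of complexity exactly $k$ and $E_1(k)$ the second largest; the sketch (illustrative only) verifies the equality case $E_1(k)=(8/9)E(k)$ for $8\le k\le 12$ by brute force:

```python
from collections import defaultdict

def complexities(N):
    # cpx[n] = integer complexity ||n||, computed by the exact
    # dynamic program over additive and multiplicative splits
    cpx = [0] * (N + 1)
    cpx[1] = 1
    for n in range(2, N + 1):
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

cpx = complexities(120)
by_cpx = defaultdict(list)
for n in range(1, 121):
    by_cpx[cpx[n]].append(n)

for k in range(8, 13):
    E, E1 = sorted(by_cpx[k])[-2:][::-1]   # largest and second largest
    assert 9 * E1 == 8 * E                 # equality case of Rawsthorne's bound
```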
To prove Theorem \ref{computeresult}
we will use Theorem \ref{themethod} for the ``inductive step''. However, while
Theorem \ref{themethod}
allows us to place restrictions on what $A_r$ can contain, if we
want to determine $A_r$ itself, we need a way to certify membership in it.
To certify inclusion in $A_r$ we need an upper bound on the defect, which
translates to an upper bound on complexity, which is relatively easy to do.
However, we also need to discard those $n$ that do not belong to $A_r$, i.e.,
to prune the set we are starting with.
This requires establishing lower bounds on their defects, certifying they
are $r$ or larger,
and for this we need lower bounds on their complexities.
\subsection{Two pruning lemmas}\label{sec52}
To find lower bounds on
complexities, we typically use the following technique. Say we want to
show that $\cpx{n}\ge l$ ($l\in\mathbb{N}$); since $\cpx{n}$ is always an
integer, it suffices to show $\cpx{n}>l-1$. We do this by using our current knowledge
of $A_r$ for various $r$: we show that if $\cpx{n}\le l-1$ held, it would put
$n$ in some $A_r$ that we have already determined and that we know does not contain $n$. The
following two lemmas, both examples of this principle, are useful for this
purpose.
\begin{lem}
\label{multlem}
If $\alpha\le 1/2$, $i+j=k+2$, and $a$ and $b$ are natural numbers, then
\[ a\in A_{i\alpha},\quad b\in A_{j\alpha},\quad ab\notin A_{k\alpha}
\quad \Longrightarrow \quad \cpx{ab}=\cpx{a}+\cpx{b}.\]
\end{lem}
\begin{proof}
Note
\[\cpx{ab}\ge3\log_3(ab)+k\alpha=3\log_3 a+3\log_3 b+(i+j-2)\alpha>
\cpx{a}+\cpx{b}-1\]
so $\cpx{ab}\ge \cpx{a}+\cpx{b}$.
\end{proof}
\begin{lem}
\label{addlem}
For natural numbers $a$, $k$, and $m\ge 0$ we have
\[ a\in A_{k\alpha}, \quad 3^m(a+1)\notin A_{k\alpha} \quad
\Longrightarrow \quad \cpx{3^m(a+1)}=\cpx{a}+3m+1.\]
\end{lem}
\begin{proof}
Note \[\cpx{3^m(a+1)}\ge3\log_3(a+1)+3m+k\alpha>\cpx{a}+3m\] so
$\cpx{3^m(a+1)}\ge3m+\cpx{a}+1$.
\end{proof}
In applying the lemmas to verify that a given $n$ does not lie in a given
$A_r$, one must check that $n$ is not in some other $A_s$. In our
applications, we will have $s<r$, and $A_s$ will already be known, allowing the
required check. In the following subsection we will typically not indicate
these checks explicitly,
using the fact that in our cases one can always check whether $n\in A_s$ by
looking at the base-$3$ expansion of $n$.
\subsection{Proof of Theorem~\ref{computeresult}: Inductive Steps}
\label{appcomp}
We prove Theorem~\ref{computeresult} by repeatedly
applying Theorem~\ref{themethod}, to go from $k$ to $k+1$ for $0 \le k \le 12$.
We will use a step size $\alpha=\delta(2)$, so let us first determine
$T_{\delta(2)}$. We compute that
$3<(3^{\frac{1-\delta(2)}{3}}-1)^{-1}+1<4$, and so $T_{\delta(2)}=\{1,2,3\}$.
We note that in all cases of attempting to determine
$B_{(k+1)\alpha}$ we are considering, we will have $(k+1)\alpha\le12\delta(2)$,
and so if $\cpx{b}<(k+1)\alpha+3\log_3 2$, then
\[\cpx{b}<12\delta(2)+3\log_3 2=3.179\ldots,\]
so $\cpx{b}\le 3$, which for $b$ solid implies $b=1$.
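Both numerical claims in this paragraph are quick to verify; an illustrative check:

```python
import math

d2 = 2 - 3 * math.log(2, 3)   # delta(2)

# bound defining T_{delta(2)} = {1, 2, 3}
t = (3 ** ((1 - d2) / 3) - 1) ** -1 + 1
assert 3 < t < 4

# the complexity bound used above for solid numbers b
assert 3.179 < 12 * d2 + 3 * math.log(2, 3) < 3.180
```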
The base cases $B_{0} = \emptyset$ and $B_{\delta(2)}=\{ 3\}$ were handled in
Proposition \ref{basecase}.
We now treat the $B_{k \delta(2)}$ in increasing order.
\begin{prop}
\[B_{2\delta(2)}=B_{\delta(2)}\cup\{2\},\]
and the elements of $A_{2\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem,
\begin{eqnarray*}
B_{2\delta(2)}\setminus B_{\delta(2)} & \subseteq & \{1,2,6,9,27\} \cup \\
& & \{3\cdot3^n+1 : n\ge 0\}\cup\{3(3\cdot3^n+1) : n\ge 0\}.
\end{eqnarray*}
We can exclude $1$ because $\delta(1)=1$, and we can exclude $6$, $9$, and
$27$ as they are not leaders. For $3^{n+1}+1$, Lemma~\ref{addlem} shows
$\cpx{3^{n+1}+1}=3(n+1)+1$, and thus $\delta(3^{n+1}+1)=1-3\log_3
(1+3^{-(n+1)})$, which allows us to check that none of these lie in
$A_{2\delta(2)}$. We can exclude $3(3^{n+1}+1)$ since
Lemma~\ref{addlem} shows it has the same defect as $3^{n+1}+1$ (and so
therefore also is not a leader). Finally, checking the complexity of $2\cdot
3^k$ can be done with Lemma~\ref{multlem}.
\end{proof}
To make later computations easier, let us observe here that
$\delta(3^1+1)=\delta(4)=2\delta(2)$;
$6\delta(2)<\delta(3^2+1)=\delta(10)<7\delta(2)$;
$8\delta(2)<\delta(3^3+1)=\delta(28)<9\delta(2)$; and that for $n\ge4$,
$9\delta(2)<\delta(3^n+1)<10\delta(2)$.
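These placements follow from the formula $\delta(3^{m}+1)=1-3\log_3(1+3^{-m})$ obtained in the proof above, and are easy to confirm numerically (an illustrative check only):

```python
import math

d2 = 2 - 3 * math.log(2, 3)                               # delta(2)
delta = lambda m: 1 - 3 * math.log(1 + 3.0 ** (-m), 3)    # defect of 3^m + 1

assert abs(delta(1) - 2 * d2) < 1e-12      # delta(4)  = 2*delta(2) exactly
assert 6 * d2 < delta(2) < 7 * d2          # delta(10)
assert 8 * d2 < delta(3) < 9 * d2          # delta(28)
assert all(9 * d2 < delta(m) < 10 * d2 for m in range(4, 40))
```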
In the above, for illustration, we explicitly considered and excluded $3$, $6$,
$9$, $27$, and $3(3^{n+1}+1)$, but henceforth we will simply not mention any
multiplications by $3$. If $n=3a$ is a good factorization, $n$ cannot be a
leader (by definition), and if it is not a good factorization, we can
ignore it by Theorem~\ref{themethod}.
\begin{prop}
\[B_{3\delta(2)}=B_{2\delta(2)}\cup\{4\},\]
and the elements of $A_{3\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem,
\begin{eqnarray*}
B_{3\delta(2)}\setminus B_{2\delta(2)} & \subseteq & \{1,4\} \cup \\
& & \{3\cdot3^n+1 : n\ge 0\}\cup\{2\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
Again, $\delta(1)=1$. By the above computation, the only number of the form
$3^{n+1}+1$ occurring in $A_{3\delta(2)}$ is $4$. Lemma~\ref{addlem}
shows that $\cpx{2\cdot3^n+1}=3+3n$ for $n>0$, and hence
$\delta(2\cdot3^n+1)=3-3\log_3(2+3^{-n})$, which allows us to check that none of
these lie in $A_{3\delta(2)}$. Finally, checking the complexity of
$4\cdot 3^k$ can be done with Lemma~\ref{multlem}.
\end{proof}
To make later computations easier, let us observe here that
$6\delta(2)<\delta(2\cdot3^1+1)=\delta(7)<7\delta(2)$;
$8\delta(2)<\delta(2\cdot3^2+1)=\delta(19)<9\delta(2)$;
$9\delta(2)<\delta(2\cdot3^3+1)=\delta(55)<10\delta(2)$; and that for $n\ge 4$,
$10\delta(2)<\delta(2\cdot3^n+1)<11\delta(2)$.
We will henceforth stop explicitly considering and then excluding $1$,
since we know that $9\delta(2)<\delta(1)=1<10\delta(2)$.
\begin{prop}
\[B_{4\delta(2)}=B_{3\delta(2)}\cup\{8\},\]
and the elements of $A_{4\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem,
\begin{eqnarray*}
B_{4\delta(2)}\setminus B_{3\delta(2)} & \subseteq & \{8\}\cup
\{3\cdot3^n+1 : n\ge 0\}\cup\\ & & \{2\cdot3^n+1 : n\ge 0\}\cup
\{4\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
By the above computation, no numbers of the form $3^{n+1}+1$ or $2\cdot3^n+1$
occur in $A_{4\delta(2)}\setminus A_{3\delta(2)}$. Lemma~\ref{addlem}
shows $\cpx{4\cdot3^n+1}=5+3n$ and hence
$\delta(4\cdot3^n+1)=5-3\log_3(4+3^{-n})$, which allows us to check that none of
these lie in $A_{4\delta(2)}$. Finally, checking the complexity of
$8\cdot 3^k$ can be done with Lemma~\ref{multlem}.
\end{proof}
To make later computations easier, let us observe here that
$5\delta(2)<\delta(4\cdot3^0+1)=\delta(5)<6\delta(2)$;
$9\delta(2)<\delta(4\cdot3^1+1)=\delta(13)<10\delta(2)$;
$10\delta(2)<\delta(4\cdot3^2+1)=\delta(37)<11\delta(2)$; and that for $n\ge 3$,
$11\delta(2)<\delta(4\cdot3^n+1)<12\delta(2)$.
\begin{prop}
\[B_{5\delta(2)}=B_{4\delta(2)}\cup\{16\},\]
and the elements of $A_{5\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem,
\begin{eqnarray*}
B_{5\delta(2)}\setminus B_{4\delta(2)} & \subseteq & \{16\}\cup
\{3\cdot3^n+1 : n\ge 0\}\cup\{2\cdot3^n+1 : n\ge 0\}\cup\\ & &
\{4\cdot3^n+1 : n\ge 0\}\cup\{8\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
By the above computation, no numbers of the form $3^{n+1}+1$, $2\cdot3^n+1$, or
$4\cdot3^n+1$ occur in $A_{5\delta(2)}\setminus A_{4\delta(2)}$. Lemma~\ref{addlem}
shows that $\cpx{8\cdot3^n+1}=7+3n$ for $n>0$, and hence
$\delta(8\cdot3^n+1)=7-3\log_3(8+3^{-n})$, which allows us to check that none of
these lie in $A_{5\delta(2)}$. Finally, checking the
complexity of $16\cdot 3^k$ can be done with Lemma~\ref{multlem}.
\end{proof}
To make later computations easier, let us observe here that
$11\delta(2)<\delta(8\cdot3^1+1)=\delta(25)<\delta(8\cdot3^2+1)=\delta(73)<12\delta(2)$,
and that for $n\ge3$, $\delta(8\cdot3^n+1)>12\delta(2)$.
\begin{prop}
\[B_{6\delta(2)}=B_{5\delta(2)}\cup\{32,5\},\]
and the elements of $A_{6\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem,
\begin{eqnarray*}
B_{6\delta(2)}\setminus B_{5\delta(2)} & \subseteq & \{32\}\cup
\{3\cdot3^n+1 : n\ge 0\}\cup\{2\cdot3^n+1 : n\ge 0\}\cup\\ & &
\{4\cdot3^n+1 : n\ge 0\}\cup\{8\cdot3^n+1 : n\ge 0\}\cup\\ & &
\{16\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
By the above computations, the only number of any of the forms $3^{n+1}+1$,
$2\cdot3^n+1$, $4\cdot3^n+1$, or $8\cdot3^n+1$ occurring in
$A_{6\delta(2)}\setminus A_{5\delta(2)}$ is $5=4\cdot3^0+1$.
Lemma~\ref{addlem} shows that $\cpx{16\cdot3^n+1}=9+3n$, and hence
$\delta(16\cdot3^n+1)=9-3\log_3(16+3^{-n})$, which allows us to check that none
of these lie in $A_{6\delta(2)}$. Finally, checking the
complexity of $32\cdot3^k$ can be done with Lemma~\ref{multlem}, and checking
the complexity of $5\cdot 3^k$ can be done with Lemma~\ref{addlem}.
\end{proof}
To make later computations easier, let us observe here that
$11\delta(2)<\delta(16\cdot3^0+1)=\delta(17)<12\delta(2)$, and that for $n\ge1$,
$\delta(16\cdot3^n+1)>12\delta(2)$.
In the above, for illustration, we explicitly considered and excluded numbers of
the form $3\cdot3^n+1$, $2\cdot3^n+1$, etc., for large $n$, despite having
already computed their complexities earlier. Henceforth, to save space, we will
simply not consider a number if we have already computed its defect and seen it
to be too high. E.g., in the above proof, we would have simply said, ``By the
main theorem and the above computations, $B_{6\delta(2)}\setminus B_{5\delta(2)}
\subseteq \{32,5\}\cup\{8\cdot3^n+1 : n\ge 0\}$''.
\begin{prop}
\[B_{7\delta(2)}=B_{6\delta(2)}\cup\{64,7,10\},\]
and the elements of $A_{7\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\[B_{7\delta(2)}\setminus B_{6\delta(2)} \subseteq \{64,7,10\}\cup
\{32\cdot3^n+1 : n\ge 0\}\cup\{5\cdot3^n+1 : n\ge 0\}.\]
Lemma~\ref{addlem} shows that $\cpx{32\cdot3^n+1}=11+3n$
and, for $n\ge 2$, $\cpx{5\cdot3^n+1}=6+3n$. Hence
$\delta(32\cdot3^n+1)=11-3\log_3(32+3^{-n})$, and, for $n\ge 2$,
$\delta(5\cdot3^n+1)=6-3\log_3(5+3^{-n})$ which allows us to check that none of
these lie in $A_{7\delta(2)}$.
Finally, checking the complexities of $64\cdot3^k$, $7\cdot3^k$, and
$10\cdot3^k$ can be done via Lemma~\ref{multlem} (for $64$ and $10$) and
Lemma~\ref{addlem} (for $7$ and $10$).
\end{proof}
To make later computations easier, let us observe here that
$\delta(32\cdot3^n+1)>12\delta(2)$ for all $n$, and that for $n\ge2$,
$\delta(5\cdot3^n+1)>12\delta(2)$ as well. Indeed, as we will see, from this
point on, no new examples of multiplying by a power of $3$ and then adding $1$
will ever have complexity less than $12\delta(2)$.
\begin{prop}
\[B_{8\delta(2)}=B_{7\delta(2)}\cup\{128,14,20\},\]
and the elements of $A_{8\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\begin{eqnarray*}
B_{8\delta(2)}\setminus B_{7\delta(2)} & \subseteq & \{128,14,20\}\cup
\{64\cdot3^n+1 : n\ge 0\}\cup\\ & & \{7\cdot3^n+1 : n\ge 0\}\cup
\{10\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
Lemma~\ref{addlem} shows that
$\cpx{64\cdot3^n+1}=13+3n$, $\cpx{10\cdot3^n+1}=8+3n$, and, for $n\ne 0, 2$,
$\cpx{7\cdot3^n+1}=7+3n$. Using this to check their defects, we see that none
of these lie in $A_{8\delta(2)}$, or even $A_{12\delta(2)}$.
Finally, checking the complexities of $128\cdot3^k$, $14\cdot3^k$, and
$20\cdot3^k$ can be done with Lemma~\ref{multlem}.
\end{proof}
\begin{prop}
\[B_{9\delta(2)}=B_{8\delta(2)}\cup\{256,28,40,19\},\]
and the elements of $A_{9\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\begin{eqnarray*}
B_{9\delta(2)}\setminus B_{8\delta(2)} & \subseteq & \{256,28,40,19\}\cup
\{128\cdot3^n+1 : n\ge 0\}\cup\\ & &\{14\cdot3^n+1 : n\ge 0\}\cup
\{20\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
Lemma~\ref{addlem} shows that
$\cpx{128\cdot3^n+1}=15+3n$, and for $n\ge 1$, $\cpx{14\cdot3^n+1}=9+3n$ and
$\cpx{20\cdot3^n+1}=10+3n$. Using this to check their defects, we see that none
of these lie in $A_{9\delta(2)}$, or even $A_{12\delta(2)}$.
Finally, checking the complexities of $256\cdot3^k$, $28\cdot3^k$,
$40\cdot3^k$, and $19\cdot3^k$ can be done via Lemma~\ref{multlem} (for $256$,
$28$, and $40$) and Lemma~\ref{addlem} (for $28$ and $19$).
\end{proof}
\begin{prop}\label{level10}
\[B_{10\delta(2)}=B_{9\delta(2)}\cup\{512,13,1,56,80,55,38\}\cup
\{3\cdot3^n+1:n\ge 3\},\]
and the elements of $A_{10\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\begin{eqnarray*}
B_{10\delta(2)}\setminus B_{9\delta(2)} & \subseteq & \{512,13,1,56,80,55,38\}
\cup \{3\cdot3^n+1:n\ge 3\}\cup \\ & &
\{256\cdot3^n+1 : n\ge 0\}\cup\{28\cdot3^n+1 : n\ge 0\}\cup\\ & &
\{40\cdot3^n+1 : n\ge 0\}\cup\{19\cdot3^n+1 : n\ge 0\}.
\end{eqnarray*}
We know $\delta(1)=1$. Lemma~\ref{addlem} shows that $\cpx{256\cdot3^n+1}=17+3n$,
$\cpx{28\cdot3^n+1}=11+3n$, $\cpx{40\cdot3^n+1}=12+3n$, and for $n\ge 1$,
$\cpx{19\cdot3^n+1}=10+3n$. Using this to check their defects, we see that none
of these lie in $A_{10\delta(2)}$, or even $A_{12\delta(2)}$. Finally, checking the
complexities of $512\cdot3^k$, $13\cdot3^k$, $56\cdot3^k$, $80\cdot3^k$,
$55\cdot3^k$, $38\cdot3^k$, and $(3^{n+1}+1)3^k$ can be done via
Lemma~\ref{multlem} (for $512$, $56$, $80$, and $38$) and Lemma~\ref{addlem}
(for $13$, $55$ and $3^{n+1}+1$).
\end{proof}
\begin{prop}
\begin{eqnarray*}
B_{11\delta(2)} & = & B_{10\delta(2)}\cup\{1024,26,112,37,160,110,76\}\cup\\
& &\{2(3\cdot3^n+1):n\ge 3\}\cup\{2\cdot3^n+1:n\ge 4\},
\end{eqnarray*}
and the elements of $A_{11\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\begin{eqnarray*}
B_{11\delta(2)}\setminus B_{10\delta(2)} & \subseteq &
\{1024,26,112,37,160,110,76,25\} \cup \\ & & \{2(3\cdot3^n+1):n\ge 3\}\cup
\{2\cdot3^n+1:n\ge4\} \cup \\ & & \{512\cdot3^n+1 : n\ge 0\}\cup
\{13\cdot3^n+1 : n\ge 0\} \cup \\ & & \{56\cdot3^n+1 : n\ge 0\}\cup
\{80\cdot3^n+1 : n\ge 0\} \cup \\ & & \{55\cdot3^n+1 : n\ge 0\}\cup
\{38\cdot3^n+1 : n\ge 0\}\cup \\ & & \{(3\cdot3^n+1)3^m+1: n\ge3, m\ge 0\}
\end{eqnarray*}
Lemma~\ref{addlem} shows that for $m\ge3$,
$\cpx{(3^{m+1}+1)3^n+1}=2+3(m+1)+3n$, and that for $n\ge 1$,
$\cpx{512\cdot3^n+1}=19+3n$, $\cpx{56\cdot3^n+1}=13+3n$,
$\cpx{80\cdot3^n+1}=14+3n$, $\cpx{55\cdot3^n+1}=13+3n$,
$\cpx{38\cdot3^n+1}=12+3n$, and that for $n\ge 2$, $\cpx{13\cdot3^n+1}=9+3n$.
Using this to check their defects, we see that none of these lie in
$A_{11\delta(2)}$, or even $A_{12\delta(2)}$.
We checked earlier that $\delta(25)>11\delta(2)$.
Finally, checking the complexities of $1024\cdot3^k$, $26\cdot3^k$,
$112\cdot3^k$, $37\cdot3^k$, $160\cdot3^k$, $110\cdot3^k$, $76\cdot3^k$,
$2(3^{n+1}+1)3^k$, and $(2\cdot3^n+1)3^k$ can be done via Lemma~\ref{multlem}
(for $1024$, $26$, $112$, $160$, $110$, $76$, and $2(3^{n+1}+1)$) and
Lemma~\ref{addlem} (for $37$ and $2\cdot3^n+1$).
\end{proof}
\begin{prop}
\begin{eqnarray*}
B_{12\delta(2)} & = & B_{11\delta(2)}\cup\{2048,25,52,224,74,320,17,220,152,73\}
\cup\\ & & \{4(3\cdot3^n+1):n\ge 3\}\cup\{2(2\cdot3^n+1):n\ge4\} \cup \\
& & \{4\cdot3^n+1:n\ge3\}
\end{eqnarray*}
and the elements of $A_{12\delta(2)}$ have the complexities listed in
Theorem~\ref{computeresult}.
\end{prop}
\begin{proof}
By the main theorem and the above computations,
\begin{eqnarray*}
B_{12\delta(2)}\setminus B_{11\delta(2)} & \subseteq &
\{2048,25,52,224,74,320,17,220,152,73,35\} \cup \\ & &
\{4(3\cdot3^n+1):n\ge 3\}\cup \{2(2\cdot3^n+1):n\ge4\}\cup
\\ & & \{4\cdot3^n+1:n\ge3\}
\cup\{1024\cdot3^n+1 : n\ge 0\}\cup\\ & &\{26\cdot3^n+1 : n\ge 0\}
\cup\{112\cdot3^n+1 : n\ge 0\}\cup \\ & & \{37\cdot3^n+1:n\ge 0\}
\cup\{160\cdot3^n+1 : n\ge 0\}\cup \\ & &
\{110\cdot3^n+1 : n\ge 0\}\cup \{76\cdot3^n+1 : n\ge 0\}\cup \\ & &
\{2(3\cdot3^n+1)3^m+1: n\ge3, m\ge 0\}\cup \\ & &
\{(2\cdot3^n+1)3^m+1: n\ge4, m\ge 0\}.
\end{eqnarray*}
Lemma~\ref{addlem} shows that for
$m\ge3$ and $n\ge1$, $\cpx{2(3^{m+1}+1)3^n+1}=4+3(m+1)+3n$, and that for $m\ge
4$ and $n\ge 1$, $\cpx{(2\cdot3^m+1)3^n+1}=4+3m+3n$, and that
$\cpx{1024\cdot3^n+1}=21+3n$, $\cpx{112\cdot3^n+1}=15+3n$,
$\cpx{160\cdot3^n+1}=16+3n$, $\cpx{76\cdot3^n+1}=14+3n$, and that for $n\ge 1$,
$\cpx{26\cdot3^n+1}=11+3n$, $\cpx{110\cdot3^n+1}=15+3n$, and that for $n\ge 2$,
$\cpx{37\cdot3^n+1}=12+3n$. Using this to check their defects, we see that none
of these lie in $A_{12\delta(2)}$.
We can then check that $\delta(35)>12\delta(2)$.
Finally, checking the complexities of $2048\cdot3^k$, $25\cdot3^k$,
$52\cdot3^k$, $224\cdot3^k$, $74\cdot3^k$, $320\cdot3^k$, $220\cdot3^k$,
$152\cdot3^k$, $73\cdot3^k$, $4(3^{n+1}+1)3^k$, $2(2\cdot3^n+1)3^k$, and
$(4\cdot3^n+1)3^k$ can be done via Lemma~\ref{multlem} (for $2048$, $25$, $52$,
$224$, $74$, $320$, $220$, $152$, $4(3^{n+1}+1)$, and $2(2\cdot3^n+1)$) and
Lemma~\ref{addlem} (for $25$, $17$, $73$, and $4\cdot3^n+1$).
\end{proof}
Combining all these propositions establishes Theorem~\ref{computeresult}.
\section{Applications}
\label{theory}
We now present several applications of the classification obtained
in Section~\ref{sec5}. These are:
(i) Stability of numbers $n>1$ of defect less than $12\delta(2)+1$;
(ii) Classification of all integers $n$ having defect $0 \le \delta(n) \le 1$
and finiteness of $B_{r}$ for all $r<1$;
(iii) Determination of complexities $\cpx{2^{a}\cdot 3^k}$ for $a \le 21$
and all $k$;
(iv) Upper bounds on the number of integers $n \le x$ having defect
$\delta(n) < r$, for any fixed $r>0$.
\subsection{Stability of numbers of low defect}
We have already noted in Theorem~\ref{computeresult} that numbers $n>1$ of
defect less than $12\delta(2)$ are stable. In fact, we can conclude something
stronger.
\begin{thm}
If $n>1$ and $\delta(n)<12\delta(2)+1=2.2865\ldots$, then $n$ is stable.
\end{thm}
\begin{proof}
From Theorem~\ref{computeresult}, we can check that if $\delta(3n)<12\delta(2)$,
then $\delta(n)<12\delta(2)$. So suppose the theorem were false, and we have
unstable $n>1$ with $\delta(n)<12\delta(2)+1$. Then for some $K$, $\delta(3^K n)\le
\delta(n)-1<12\delta(2)$. So by above, we have $\delta(n)<12\delta(2)$, and thus, as
noted in Theorem~\ref{computeresult}, $n$ is stable unless $n=1$.
\end{proof}
In fact, if $n>1$ and $\delta(n)<\delta(107)=3.2398\ldots$, then $n$ is stable,
as we will prove in \cite{seq2}.
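Recall that $n$ is stable when $\cpx{3^k n}=\cpx{n}+3k$ for all $k\ge 0$, i.e. when $\delta(3^k n)$ does not depend on $k$. This can be observed directly for small stable $n$ with a brute-force complexity computation (an illustrative check only):

```python
def complexities(N):
    # cpx[n] = integer complexity ||n||, by the exact dynamic program
    cpx = [0] * (N + 1)
    cpx[1] = 1
    for n in range(2, N + 1):
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

cpx = complexities(700)
for n in (2, 5, 7):               # small numbers of defect < 12*delta(2)
    for k in range(5):
        if n * 3**k <= 700:
            assert cpx[n * 3**k] == cpx[n] + 3 * k   # stability
```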
\subsection{Classifying the integers of Defect at most $1$}
\label{sec61}
Using Theorem~\ref{computeresult} we can classify all the numbers with defect
less than $1$, as follows:
\begin{thm}
The natural numbers $n$ satisfying $\delta(n)<1$ are precisely those that can be
written in one of the following forms, and have the following complexities:
\begin{enumerate}
\item $3^k$ for $k\ge 1$, of complexity $3k$
\item $2^a 3^k$ for $a\le 9$, of complexity $2a+3k$ (for $a$, $k$ not both
zero)
\item $5\cdot2^a 3^k$ for $a\le 3$, of complexity $5+2a+3k$
\item $7\cdot2^a 3^k$ for $a\le 2$, of complexity $6+2a+3k$
\item $19\cdot3^k$ of complexity $9+3k$
\item $13\cdot3^k$ of complexity $8+3k$
\item $(3^n+1)3^k$ of complexity $1+3n+3k$ (for $n\ne0$)
\end{enumerate}
Furthermore $n=1$ is the only number having defect exactly $1$.
\end{thm}
\begin{proof}
This list includes all numbers in $A_{9 \delta(2)}$,
and some numbers in
$A_{10\delta(2)}$. These in turn are determined by the
corresponding lists for $B_{9 \delta(2)}, B_{10 \delta(2)}$,
in the latter case (Proposition \ref{level10}) checking the complexities to
exclude the leaders
$\{56, 80, 55, 38\}$.
\end{proof}
Using this list one
may deduce the following important fact.
\begin{thm}
\label{finite}
For every $0< \alpha<1$, the set of leaders $B_\alpha$ is a finite set.
For every $\alpha \ge 1$, the set $B_{\alpha}$ is an infinite set.
\end{thm}
\begin{proof}
The first part follows from the fact that each of the categories above has a
finite set of leaders, and that the final list (7) has a finite number of
sublists
with defect smaller than $1- \epsilon$, for any $\epsilon>0$.
The defects
$$
\delta((3^n+1)3^k) =
(3n+1) - 3 \log_3(3^n+1)
= 1-3 \log_3(1+ \frac{1}{3^n})
$$
approach $1$
from below as $n$
approaches infinity. This also establishes that $B_{1}$ is an
infinite set, giving the second part.
\end{proof}
\subsection{The complexity of $2^m3^k$ for small $m$}
\label{sec62}
The determination of $A_r$ in
Theorem \ref{computeresult}
allows us to put lower bounds on the complexities of any
numbers not in it. Thus for instance we have the following result.
\begin{lem}
Let $n$ be a natural number and suppose that there is no $k$ such that
$2^{n+9}3^k\in A_{n\delta(2)}$. Then for any $m\le n+9$ and any $k$ (with $m$
and $k$ not both zero), $\cpx{2^m 3^k}=2m+3k$.
\end{lem}
\begin{proof}
It suffices to show that $\cpx{2^{n+9}3^k}>2n+3k+17$, but by assumption,
\[\cpx{2^{n+9}3^k}\ge(n+9)3\log_3 2+3k+n\delta(2)=2n+3k+27\log_3 2>2n+3k+17,\]
and we are done.
\end{proof}
This lemma immediately establishes Conjecture \ref{cj11} for $ a \le 21$.
\begin{proof}[Proof of Theorem \ref{th11main}.] From our classification, it is
straightforward to check that $2^{21} 3^k$
does not lie in $A_{12\delta(2)}$ for any $k$, so we can conclude that
for $m\le 21$ and any $k$, with $m$ and $k$ not both zero, $\cpx{2^m
3^k}=2m+3k$.
\end{proof}
\subsection{Counting the integers below $x$ having defect at most $r$}
\label{sec63}
In our computations in Section \ref{sec5}, we used a small step size
$\alpha=\delta(2)$, and kept our superset of $A_r$ small by using a pruning
step. In what follows, we will use a different trick to keep our supersets of
$A_r$ from getting too large. Instead of pruning, we will use step sizes
arbitrarily close to $1$.
\begin{prop}
\label{indcount}
Given any $0<\alpha<1$, and any $k\ge1$, we have that
$B_{k\alpha}(x)=O_{k\alpha}((\log x)^{k-1})$, and
$A_{k\alpha}(x)=O_{k\alpha}((\log x)^k)$.
\end{prop}
\begin{proof}
We induct on $k$. Suppose $k=1$; then by Theorem \ref{finite},
$B_{k\alpha}=B_\alpha$ is a finite set, so $B_{k\alpha}(x)=O_{k\alpha}(1)$.
Also, for any $r$, $A_r(x)\le B_r(x)(\log_3 x)$; in particular,
$A_{k\alpha}(x)=O_{k\alpha}(\log x)$.
So suppose it is true for $k$ and we want to prove it for $k+1$; we apply
Theorem~\ref{themethod} with step size $\alpha$. For convenience, let $S_r$
denote the set of solid numbers $b$ satisfying $\cpx{b}<r+3\log_3 2$, as
mentioned in the discussion after Theorem~\ref{themethod}; for any $r$, this is
a finite set.
In the case $k+1=2$,
\begin{eqnarray*}
B_{2\alpha}(x) & \le & B_\alpha(x)^3 +
(A_\alpha(x)|S_{2\alpha}| + |T_\alpha|)(|B_\alpha|+1) \\
& = & O_{\alpha}(1)^3 + O_\alpha(\log x) + O_\alpha(1) \\
& = & O_{(k+1)\alpha}(\log x).
\end{eqnarray*}
In the case $k+1>2$,
\small
\begin{eqnarray*}
B_{(k+1)\alpha}(x) & \le& \sum_{\substack{i+j=k+2 \\ i,j\ge2}} B_{i\alpha}(x)
B_{j\alpha}(x) + (A_{k\alpha}(x)|S_{(k+1)\alpha}| +
|T_\alpha|)(|B_\alpha|+1) \\
& =& \sum_{\substack{i+j=k+2 \\ i,j\ge2}} O_{i\alpha}((\log x)^{i-1})
O_{j\alpha}((\log x)^{j-1}) + O_{(k+1)\alpha}((\log x)^k) + O_\alpha(1) \\
& =& O_{(k+1)\alpha}((\log x)^k).
\end{eqnarray*}
\normalsize
In either case, we also have $A_{(k+1)\alpha}(x)=O_{(k+1)\alpha}((\log
x)^{k+1})$. This completes the proof.
\end{proof}
Using this result we conclude:
\begin{thm}\label{upperbound}
For any number $r>0$, $B_r(x)=\Theta_r((\log x)^{\lfloor r \rfloor})$, and
$A_r(x)=\Theta_r((\log x)^{\lfloor r \rfloor+1})$.
\end{thm}
\begin{proof}
For the upper bound, it suffices to note that
$r=(\lfloor r \rfloor +1 )\frac{r}{\lfloor r \rfloor+1}$, and that
$\frac{r}{\lfloor r \rfloor+1}<1$,
and apply Proposition~\ref{indcount}.
For the lower bound, let $k=\lfloor r \rfloor$, and consider numbers of the
form
\[N=((\cdots((3\cdot3^{n_k}+1)3^{n_{k-1}}+1)\cdots)3^{n_1}+1)3^{n_0}.\]
Then
\[\cpx{N}\le3(n_0+\cdots+n_k+1)+k\]
and since $\log_3 N\ge n_0+\cdots+n_k+1$, this means $\delta(N)\le k$.
Furthermore, if $n_0=0$ and $n_1>0$ then $N$ is not divisible by $3$ and so is a
leader. It is then easy to count that there are at least $\binom{\lfloor \log_3
x \rfloor}{k+1}\gtrsim \frac{1}{(k+1)!}(\log_3 x)^{k+1}$ such $N$ less than a
given $x$, and at least $\binom{\lfloor \log_3 x \rfloor}{k}\gtrsim
\frac{1}{k!}(\log_3 x)^k$ if we insist that $N$ be a leader.
\end{proof}
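The construction in the lower bound can be checked directly for small parameters. The sketch below (illustrative only) builds such $N$ in the case $k=1$ and verifies both $\cpx{N}\le 3(n_0+n_1+1)+1$ and $\delta(N)\le 1$:

```python
import math

def complexities(N):
    # cpx[n] = integer complexity ||n||, by the exact dynamic program
    cpx = [0] * (N + 1)
    cpx[1] = 1
    for n in range(2, N + 1):
        best = min(cpx[a] + cpx[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, cpx[d] + cpx[n // d])
            d += 1
        cpx[n] = best
    return cpx

cpx = complexities(1000)
for n1 in range(1, 4):
    for n0 in range(0, 3):
        N = (3 * 3**n1 + 1) * 3**n0            # the k = 1 case
        if N <= 1000:
            assert cpx[N] <= 3 * (n0 + n1 + 1) + 1
            defect = cpx[N] - 3 * math.log(N, 3)
            assert defect <= 1
```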
An immediate consequence of Theorem \ref{upperbound} is Theorem~\ref{indcount0}
in the introduction.
\begin{proof}[Proof of Theorem \ref{indcount0}.]
The existence of numbers of arbitrarily large defect follows from the fact
that the set of integers of defect $< r$ has density zero.
\end{proof}
This result is a long way from proving a bound of the type $\cpx{n}\nsim 3\log_3
n$.
\section{Acknowledgements}
The authors are indebted to
J\=anis Iraids and Karlis Podnieks for supplying a wealth of numerical data.
We thank Jeffrey Lagarias for looking over an early draft of this paper and
elucidating just what it was we were doing, as well as for other help with
editing, and Mike Zieve and David Rohrlich for providing assistance with
early drafts of the paper. We thank Paul Pollack and Mike Bennett for pointing
out the paper \cite{Stewart}.
Most of all we thank Juan Arias de Reyna for greatly clarifying much of our
work, suggesting improved notation, shortening some proofs, and helping
extensively with structuring and editing of this paper. We thank the reviewer
for very helpful comments.
The first order formalism of D'Eath and Payne was discussed in detail and generalised to higher $D$ in~\cite{Herdeiro:2011ck}. Two equal Aichelburg-Sexl shock waves collide head-on in $D$ dimensions. The inelasticity $\epsilon$ of the process can be expressed as (we refer to~\cite{Herdeiro:2011ck} for all details)
\begin{equation}
\epsilon_{\rm 1st \, order}= \frac{1}{8}\dfrac{D-2}{D-3}\lim_{\hat{\theta}\rightarrow 0,r\rightarrow \infty}\left(\int (r\rho^{\frac{D-4}{2}}E_{,v})^2 dt \right) \; , \label{rad1}
\end{equation}
where the limit selects a radiation extraction point far away from the collision ($r\rightarrow \infty$) and along the collision axis ($\hat{\theta}\rightarrow 0$); the wave form $E_{,v}$ at the spacetime point $\mathcal{P}$ with null coordinates $u,v$ and at a distance $\rho$ from the symmetry (collision) axis is
\begin{eqnarray}
E_{,v}&&(u,v,\rho)=-\dfrac{\sqrt{8}\Omega_{D-4}}{(2\pi u)^{\frac{D-2}{2}}} \int_{0}^{+\infty} \dfrac{d\rho'}{\rho'} \times \nonumber \\ &&
\int_{-1}^{1} dx \,\dfrac{d}{dx}\left[x(1-x^2)^{\frac{D-3}{2}}\right] \delta^{(\frac{D-4}{2})}\left(\Delta v\right) \label{rad2}
\; ,
\end{eqnarray}
where $\Delta v$ selects the events that support the radiation observed at $\mathcal{P}$.
\begin{figure*}
\includegraphics[scale=0.75,clip=true,trim= 0 0 0 0]{Red2rVarD5.eps}\hspace{3mm}
\includegraphics[scale=0.75,clip=true,trim= 0 0 0 0]{Red2rVarD7.eps}\hspace{3mm}
\includegraphics[scale=0.75,clip=true,trim= 0 0 0 0]{Red2rVarD9.eps}\hspace{3mm}
\caption{\label{fig:WaveForms}{\em Wave Forms for $D$ odd:} The panels contain wave form curves for the radiation signal seen by an observer close to the axis for various (large) $r$ as a function of time. The horizontal axis coordinate has been rescaled and shifted so that the times for the first and second optical rays coincide for the different curves.}
\end{figure*}
The main difference between the odd-$D$ and even-$D$ cases in Eq.~\eqref{rad2} is the fractional derivative of the delta function, denoted by its exponent. Fractional derivatives of delta functions have support not only at the zeros of their argument but also for positive argument. This is related to the well-known property that the retarded Green's function in odd dimensions has support not only on the past light cone, but also \textit{inside} the light cone \cite{friedlander}. In other words, odd-dimensional flat spacetime behaves like a dispersive medium.
In~\cite{Herdeiro:2011ck}, the procedure followed for even $D$ was to perform $(D-4)/2$ integrations by parts over $x$ in Eq.~\eqref{rad2}, so as to obtain a delta function and perform the $x$ integration completely. This procedure constrains the domain of integration of $\rho'$ to
\begin{equation}
\mathcal{D}:\; -1 \leq x_\star \equiv \dfrac{U\Phi\left(\rho'\right)+\rho'^2-UT}{2\rho \rho'} \leq 1\; ,
\label{eq:domain}
\end{equation}
where $U = \tau + 2r\sin^2(\theta/2)$, $T=\tau +2r\cos^2(\theta/2)-\rho^2/U$ and $\tau,r,\theta$ are the usual retarded time, radial and polar angle coordinates (see~\cite{Herdeiro:2011ck}). For odd $D$ the procedure is similar, except that after $M=[(D-4)/2]$ integrations by parts there is a fractional delta function of order $1/2$, $\delta^{(1/2)}$. For this case, the $x$-integration of Eq. (B.14) of~\cite{Herdeiro:2011ck} is in fact more intricate. A careful integration by parts shows that the result is
\begin{equation}
r\rho^{M+\frac{1}{2}}E_{,v}=\dfrac{(-1)^M4\Omega_{D-4}}{(2\pi)^{M+2}(D-1)}\frac{r}{\rho} \int_{\mathcal{D'}} \dfrac{d\rho'}{\rho'^{M+\frac{5}{2}}}\dfrac{P^{(M+2)}(x_\star)}{\sqrt{1-x_\star}}
\, , \label{odd}
\end{equation}
where now $\mathcal{D}'=\mathcal{D}\cup\mathcal{D}_{\rm extra}$, with $\mathcal{D}_{\rm extra}:\; x_\star \leq -1 \,$, and the polynomial $P^{(M+2)}(x_\star)$ factor in the domain $\mathcal{D}_{\rm extra}$ is replaced according to
\begin{equation}
\dfrac{P^{(M+2)}}{\sqrt{1-x_\star}}\equiv \sum_{k=0}^{M+2}\sum_{j=0}^k\dfrac{ c_k x_\star^{k-j}\left[\dfrac{(1-x_\star)^{j}}{\sqrt{1-x_\star}}-\dfrac{(-1-x_\star)^{j}}{\sqrt{-1-x_\star}}\right]}{(k-j)!j!(2j-1)} \; ,
\end{equation}
\begin{equation}
\dfrac{d^{M+2}}{dx^{M+2}}\left[(1-x^2)^{M+2}\right]\equiv \sum_{k=0}^{M+2}\dfrac{c_k}{k!} x^k \; .
\end{equation}
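As a cross-check of the coefficients $c_k$ defined above (not part of the original derivation), one can expand $(1-x^2)^{M+2}$, differentiate $M+2$ times and read off $c_k$ numerically. A minimal Python sketch (function name ours):

```python
from math import comb, factorial

def ck_coefficients(M):
    """Return {k: c_k} with d^{M+2}/dx^{M+2} (1-x^2)^{M+2} = sum_k (c_k/k!) x^k."""
    n = M + 2
    # binomial expansion of (1 - x^2)^n as a dense coefficient list
    poly = [0] * (2 * n + 1)
    for j in range(n + 1):
        poly[2 * j] = comb(n, j) * (-1) ** j
    # differentiate n = M + 2 times
    for _ in range(n):
        poly = [i * poly[i] for i in range(1, len(poly))]
    # read off c_k = k! * (coefficient of x^k)
    return {k: factorial(k) * poly[k] for k in range(len(poly))}
```

For $M=0$ ($D=5$) this reproduces $d^2/dx^2(1-x^2)^2 = -4 + 12x^2$, i.e. $c_0=-4$, $c_2=24$.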
This extra term turns out to be crucial to obtain the correct late time tail of the wave forms.
A selection of wave forms is presented in Fig.~\ref{fig:WaveForms}. These were generated using the same numerical strategy as in~\cite{Herdeiro:2011ck}, with the extra term. We represent wave forms which have been rescaled by the relevant time scale for the problem, such that they start at retarded time $\tau_1$ and peak at $\tau_2$. Such a time scale, $\Delta \tau=\tau_2-\tau_1$, is interpreted in the geometrical optics limit. For a (far away) observation point \textit{not} at the symmetry axis, a first ray arrives at the retarded time $\tau_1$ (corresponding to the beginning of the burst of radiation); then, a second ray arrives at $\tau_2$, corresponding to the optical path that crosses a caustic at the axis before entering the curved region and hitting the observer (cf. Fig. 4 in~\cite{Herdeiro:2011ck}).
Observing the curves in Fig.~\ref{fig:WaveForms} one finds some similarities and differences with the even $D$ case. As for the similarities, the number of oscillations in the wave forms increases with $D$, with one more zero for each (from left to right in Fig.~\ref{fig:WaveForms}). Concerning the differences, the peak of radiation corresponding to the second optical ray is no longer singular, albeit becoming more pronounced as we increase $r$. The tails to the right of this peak are non-zero but integrable, since they decay as a power law. We have checked that the integrable tails are obtained from a cancellation of a non-integrable contribution from $\mathcal{D}$ and the contribution from $\mathcal{D}_{\rm extra}$.
\begin{figure*}
\includegraphics[scale=0.73,clip=true,trim= 0 0 0 0]{IntThetaVarD5.eps}
\includegraphics[scale=0.73,clip=true,trim= 0 0 0 0]{IntThetaVarD7.eps}
\includegraphics[scale=0.73,clip=true,trim= 0 0 0 0]{IntThetaVarD9.eps}
\caption{\label{fig:ExtractRad}{\em Limiting fractions:} The panels contain, for each $D$, curves of $\epsilon(r)$ for some values of $\theta$ which can be used to extract the limiting $\epsilon_{\rm 1st \, order}$. Some asymptotic curves which fit the numerical data to a high accuracy are indicated in each panel. The best estimates (relative error less than $10^{-3}$) are indicated by the constant terms in the asymptotic fits of the red solid curves.}
\end{figure*}
The inelasticity factor is also extracted numerically, through the double limit in Eq.~\eqref{rad1}. For that purpose, we plotted the right hand side of Eq.~\eqref{rad1}, before taking the limit, as a function of $r$ in Fig.~\ref{fig:ExtractRad}, for several small $\hat\theta=\pi-\theta$ angles. The most precise fit is extracted with $\hat \theta=0.01$. Similarly to even $D$, the result is extracted with a relative error smaller than $10^{-3}$.
\noindent{\bf{\em III Discussion.}}
The method presented in \cite{Herdeiro:2011ck} is technically involved, both analytically and numerically. It is quite reassuring that one can obtain the results for odd $D$ by the same method, fitting appropriately in the window bracketed by the neighbouring even $D$ values, even though they are obtained from integrating very different polynomial functions. This strongly legitimates the method we have used.
\begin{figure}
\includegraphics[scale=0.8,clip=true,trim= 0 0 0 0]{Dots.eps}
\caption{\label{fig:ComparisonD} Apparent horizon bound (blue dashed line and points) compared with the first order result (red points) and our fit - Eq. (\ref{miracle}) - matching the result perfectly (red solid curve).}
\end{figure}
That the final result of this method fits, within numerical error (smaller than 0.1\%), with the simple formula given by Eq. (\ref{miracle}) is \textit{truly remarkable}. This agreement is exhibited in Fig.~\ref{fig:ComparisonD}, where the apparent horizon bound is also displayed. It suggests the existence of a simpler physical or mathematical argument to derive the inelasticity in this process. This certainly deserves further investigation and motivates the study of higher order perturbation theory for this type of processes. It is worth noting that, in second order perturbation theory, the matching conditions between the exact solution describing the two shock waves prior to the collision and the perturbative method are \textit{exact}. Moreover, in $D=4$, the second order result for the inelasticity ($16.3\%$ \cite{D'Eath:1992qu}) agrees with the value obtained in the high energy collision of two black holes in numerical relativity, within the numerical error ($14\pm3\%$ \cite{Sperhake:2008ga}). Another suggestive fact is the convergence of both our fit and the apparent horizon bound to $1/2$ as $D\rightarrow \infty$, but its significance, or if it will hold in higher order perturbation theory, is yet to be unveiled.
\noindent{\bf{\em Acknowledgements.}}
F.C. and M.S. are funded by FCT through the grants SFRH/BD/60272/2009 and SFRH/BPD/69971/2010. The work in this paper is also supported by the grants CERN/FP/116341/2010, PTDC/FIS/098962/2008, PTDC/FIS/098025/2008 and NRHEP--295189-FP7-PEOPLE-2011-IRSES.
\bibliographystyle{h-physrev4}
\section{Introduction}
The AdS/CFT correspondence~\cite{Maldacena, Polyakov, Witten} has proved very useful in providing novel tools to study strongly-coupled/correlated systems. It has been applied to, e.g. RHIC physics~\cite{Chernicoff:2012bu,Yee:2013qma,Wu:2013qja,v_2,DeWolfe:2013cua}, and recently to condensed matter phenomena~\cite{Herzog:2007ij,Hartnoll:2007ih,Hartnoll:2007ip,Hartnoll:2008hs,Minic:2008an,Sun:2013wpa,Sun:2013zga} (for a review, see e.g. Ref.~\cite{Hartnoll:2009sz}). A gravity model was proposed in Refs.~\cite{Gubser:2005ih,Gubser:2008px} in which a $U(1)$ symmetry is spontaneously broken by the existence of a black hole. This mechanism was recently incorporated in the model of superconductivity: critical temperature and magnetic field were observed~\cite{Hartnoll:2008vx,Nakano:2008xc,Albash:2008eh}\footnote{The issue of emergent dynamical gauge field in holographic superconductor is discussed in \cite{Domenech:2010nf}.}, and later non-Abelian gauge condensate~\cite{Gubser:2008zu} and condensate of higher spin~\cite{Gubser:2008wv,
Roberts:2008ns,Chen:2010mk}. Some interesting phenomena observed in the laboratory also appeared in the study of fermion
spectral functions~\cite{Chen:2009pt,Benini:2010qc,Chen:2011ny}.
Historically, Ginzburg-Landau theory has proved to be an extraordinarily valuable phenomenological tool in understanding single-component superconductors. Its generalization to the two-component Ginzburg-Landau model (TCGL) was constructed, and its applicability to two-band systems was studied in Refs.~\cite{Silaev:2012,Shanenko:2011,Vagov:2012}. Upon switching on the interband coupling between the two components, this model can describe the phenomenon of the two gaps in materials such as MgB$_{2}$ ($s_{++}$)\cite{Carlstrom:2010,Buzea:2001} and iron pnictides ($s_{+-}$)\footnote{Another mechanism, due to the shape resonance, has also been proposed to explain some cases of the iron pnictides \cite{Bianconi:2013}. We thank Prof. Antonio Bianconi for bringing this interesting reference to our attention.}\cite{iron-based1,iron-based2,iron-based3}. A holographic model with two order parameters was first studied in the probe limit~\cite{Basu:2010fa}, and recently with back-reaction~\cite{Cai:2013wma}, where phases with two condensates
coexisting and competing were observed. However, the absence of an interband (Josephson) coupling in those models makes it difficult to justify them as models of two-band superconductivity where the interaction between the two bands is crucial. A multi-band holographic model
for three coherent orders was discussed in Ref.~\cite{Huang:2011ac}. However, in their model the form and strength of the interband interaction is completely fixed by the built-in $SO(3)$ gauge symmetry in the bulk, and is not a parameter that can be tuned. A similar holographic model for the two-band case based on an $U(2)$ symmetry was also constructed~\cite{Krikun:2012yj} in which the two condensates can be of the same or opposite sign, i.e. zero or $\pi$ relative phase difference.
In this paper, we study the effects the interband coupling has on the superconducting condensates and the critical temperature of their formation in a holographic model adapted from that proposed in Ref.~\cite{Aprile:2009ai}, which has a tunable interband Josephson coupling. In the language of the TCGL model, for positive Josephson coupling the two-band superconductor is in the same-sign, $s_{++}$, state, while for negative Josephson coupling, it is in the opposite-sign, $s_{+-}$, state. A defining characteristic of the two-band superconductor is the existence of coherent orders in which the two orders have the same critical temperature. Here we look for this characteristic feature in our holographic two-band superconductor, and we study its electrical and thermal transport properties. The thermal conductivity, $\kappa$, is of particular interest as its low-temperature behavior provides a good experimental probe of the superconducting gap structure. The contribution to the thermal conductivity due to conduction electrons is expected to behave as $\sim T$ at low temperatures, while that due to phonons as $\sim T^3$. Thus a linear temperature dependence in $\kappa$ at low temperatures may be attributed to electron excitations. Then $\kappa \rightarrow 0$ as $T \rightarrow 0$ would point to a fully gapped superconductor, but a finite value can indicate either a nodal structure due to pairing symmetry, strong electron-electron interactions, or gapless behavior due to scattering.
The paper is organized as follows. We describe our holographic two-band model in Section~II. Results from our numerical study of the condensates and the electric and thermal conductivities are reported in Section~III. We derive analytical results in regimes where it is possible in Section~IV. We end with a summary and directions for the future in Section~V.
\section{The Model}
We start by putting a generalized two-component Ginzburg-Landau theory into the (3+1)-dimensional Einstein-Maxwell-Dilaton gravity:
\begin{equation}
2\kappa_G^2 (-g)^{-1/2}{\cal L} = R + \frac{6}{L^2} - \frac{G(\varphi_{1},\varphi_{2})}{4} F^{\mu\nu}F_{\mu\nu} -\frac{1}{2}|D_{\mu}\varphi_{1}|^2 - \frac{1}{2}|D_{\mu}\varphi_{2}|^2 - V(\varphi_{1},\varphi_{2}),
\end{equation}
where $G(\varphi_{1},\varphi_{2})=1+\kappa_{1}\varphi^{\ast}_{1}\varphi_{1}+\kappa_{2}\varphi^{\ast}_{2}\varphi_{2}$ is the non-minimal coupling between the charged scalars and the gauge field, and $\varphi_{1}$ and $\varphi_{2}$ are charged scalars. In addition to the mass terms for the two charged scalars, we also introduce interactions between them in the potential term
\begin{equation}
V(\varphi_{1},\varphi_{2}) = m^{2}_{1}\varphi^{\ast}_{1}\varphi_{1} + m^{2}_{2}\varphi^{\ast}_{2}\varphi_{2} + \epsilon (\varphi_{1}^{\ast}\varphi_{2} + \varphi_{1}\varphi_{2}^{\ast}) + \eta |\varphi_{1}|^2|\varphi_{2}|^2,
\end{equation}
where the $\epsilon$ term is the Josephson coupling introduced in the field theory literature, and the last term is the direct coupling~\cite{Basu:2010fa}. Since $\varphi_{1}$, $\varphi_{2}$ are complex scalars, we may parameterize them as $\varphi_{1}=\psi_{1}e^{i\theta_{1}}$, $\varphi_{2}=\psi_{2}e^{i\theta_{2}}$. Then the bulk action can be rewritten as
\begin{eqnarray}
S&=&\frac{1}{2\kappa^{2}_{G}}\int{d^{4}x}\sqrt{-g}[R+\frac{6}{L^{2}}-\frac{1}{4}G(\psi_{1},\psi_{2})F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}(\partial\psi_{1})^{2}-\frac{1}{2}\psi^{2}_{1}(\partial_{\mu}\theta_{1}-A_{\mu})^{2}\nonumber\\
&&-\frac{1}{2}(\partial\psi_{2})^{2}-\frac{1}{2}\psi^{2}_{2}(\partial_{\mu}\theta_{2}-A_{\mu})^{2}-V(\psi_{1},\psi_{2})], \nonumber
\end{eqnarray}
which is invariant under the gauge transformation
\begin{equation*}
A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\alpha, \quad \theta_{1}\rightarrow\theta_{1}+\alpha, \quad \theta_{2}\rightarrow\theta_{2}+\alpha. \nonumber
\end{equation*}
To preserve the gauge transformation, we can generalize the action as \cite{Aprile:2009ai}
\begin{eqnarray}
S&=&\frac{1}{2\kappa^{2}_{G}}\int{d^{4}x}\sqrt{-g}[R+\frac{6}{L^{2}}-\frac{1}{4}G(\psi_{1},\psi_{2})F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}(\partial\psi_{1})^{2}-\frac{1}{2}J_{1}(\psi_{1})(\partial_{\mu}\theta_{1}-A_{\mu})^{2}\nonumber\\
&&-\frac{1}{2}(\partial\psi_{2})^{2}-\frac{1}{2}J_{2}(\psi_{2})(\partial_{\mu}\theta_{2}-A_{\mu})^{2}-V(\psi_{1},\psi_{2})],
\end{eqnarray}
where $J_{1}(\psi_{1})$, $J_{2}(\psi_{2})$ are arbitrary functions of $\psi_{1}$, $\psi_{2}$, and
\begin{eqnarray}
&&G(\psi_{1},\psi_{2})=1+\kappa_{1}\psi_{1}^{2}+\kappa_{2}\psi_{2}^{2}, \nonumber\\
&&V(\psi_{1},\psi_{2})=m_{1}^{2}\psi_{1}^{2}+m_{2}^{2}\psi_{2}^{2}+2\epsilon\psi_{1}\psi_{2}+\eta\psi_{1}^{2}\psi_{2}^{2}.
\end{eqnarray}
In the following we only consider the minimal model which gives the phase-locking condition, i.e. $\theta_{1}=\theta_{2}\equiv\theta$~\cite{phase-locking}. Since we do not consider vortex solutions, we can consistently set $\theta$ to be any constant, say $\theta=0$ for simplicity.
The equations of motion are
\begin{eqnarray}
&&\nabla^{2}\psi_{1}-\frac{1}{4}\frac{\partial G(\psi_{1},\psi_{2})}{\partial\psi_{1}}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\frac{\partial V(\psi_{1},\psi_{2})}{\partial\psi_{1}}-\frac{1}{2}\frac{\partial J_{1}(\psi_{1})}{\partial\psi_{1}}A_{\mu}A^{\mu}=0, \nonumber\\
&&\nabla^{2}\psi_{2}-\frac{1}{4}\frac{\partial G(\psi_{1},\psi_{2})}{\partial\psi_{2}}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\frac{\partial V(\psi_{1},\psi_{2})}{\partial\psi_{2}}-\frac{1}{2}\frac{\partial J_{2}(\psi_{2})}{\partial\psi_{2}}A_{\mu}A^{\mu}=0, \nonumber\\
&&\nabla_{\mu}(G(\psi_{1},\psi_{2})F^{\mu\nu})-J_{1}(\psi_{1})A^{\nu}-J_{2}(\psi_{2})A^{\nu}=0, \nonumber \\
&&R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{1}{2}G(\psi_{1},\psi_{2})(F_{\mu\alpha}F_{\nu}^{\alpha}-\frac{1}{4}g_{\mu\nu}F^{2})+\frac{1}{2}J_{1}(\psi_{1})(A_{\mu}A_{\nu}-\frac{1}{2}g_{\mu\nu}A^{2})+\frac{1}{2}J_{2}(\psi_{2})\nonumber\\
&&(A_{\mu}A_{\nu}-\frac{1}{2}g_{\mu\nu}A^{2})+\frac{1}{2}(\partial_{\mu}\psi_{1}\partial_{\nu}\psi_{1}-\frac{1}{2}g_{\mu\nu}(\partial\psi_{1})^{2})+\frac{1}{2}(\partial_{\mu}\psi_{2}\partial_{\nu}\psi_{2}-\frac{1}{2}g_{\mu\nu}(\partial\psi_{2})^{2})\nonumber\\ &&-\frac{1}{2}g_{\mu\nu}V(\psi_{1},\psi_{2}).
\end{eqnarray}
We take the fully back-reacted ansatz as
\begin{equation}
ds^{2}=-g(r)e^{-\chi(r)}dt^{2}+r^{2}(dx_{1}^{2}+dx_{2}^{2})+\frac{dr^{2}}{g(r)}, \quad \psi_{1}=\psi_{1}(r), \quad \psi_{2}=\psi_{2}(r), \quad A=\phi(r)dt.
\end{equation}
With the choice of $J_{1}=q^{2}\psi_{1}^{2}$, $J_{2}=q^{2}\psi_{2}^{2}$, and minimal coupling $\kappa_{1}=\kappa_{2}=0$, the independent equations of motion are given by~\footnote{Note that due to gauge invariance, the two scalars have the same charge.}
\begin{eqnarray}
&&\psi''_{1}+\psi'_{1}(\frac{g'}{g}-\frac{\chi'}{2}+\frac{2}{r})+\frac{q^{2}e^{\chi}\phi^{2}}{g^{2}}\psi_{1}-\frac{1}{g}(m^{2}_{1}\psi_{1}+\epsilon\psi_{2}+\eta\psi_{1}\psi^{2}_{2})=0,\nonumber\\
&&\psi''_{2}+\psi'_{2}(\frac{g'}{g}-\frac{\chi'}{2}+\frac{2}{r})+\frac{q^{2}e^{\chi}\phi^{2}}{g^{2}}\psi_{2}-\frac{1}{g}(m^{2}_{2}\psi_{2}+\epsilon\psi_{1}+\eta\psi^{2}_{1}\psi_{2})=0, \nonumber\\
&&\phi''+\phi'(\frac{\chi'}{2}+\frac{2}{r})-\frac{q^{2}(\psi^{2}_{1}+\psi^{2}_{2})}{g}\phi=0, \nonumber\\
&&\chi'+r(\psi'^{2}_{1}+\psi'^{2}_{2})+\frac{q^{2}re^{\chi}\phi^{2}}{g^{2}}(\psi^{2}_{1}+\psi^{2}_{2})=0, \nonumber\\
&&2(\psi'^{2}_{1}+\psi'^{2}_{2})+\frac{e^{\chi}\phi'^{2}}{g}+\frac{4g'}{rg}+\frac{4}{r^{2}}+\frac{-12+2m^{2}_{1}\psi^{2}_{1}+2m^{2}_{2}\psi^{2}_{2}+4\epsilon\psi_{1}\psi_{2}+2\eta\psi^{2}_{1}\psi^{2}_{2}}{g}\nonumber\\
&&+2\frac{e^{\chi}q^{2}\phi^{2}}{g^{2}}(\psi^{2}_{1}+\psi^{2}_{2})=0,\label{EOM}
\end{eqnarray}
where a prime denotes the derivative with respect to $r$, and we work in units where the AdS radius is unity.
The Hawking temperature is given by \cite{Petersen:1999zh}
\begin{eqnarray}
&&T=\frac{\sqrt{g^{rr}}}{2\pi}\frac{d}{dr}\sqrt{-g_{tt}}|_{r=r_{+}}=\frac{g'_{+}e^{-\frac{\chi_{+}}{2}}}{4\pi}\nonumber\\
&&=\frac{r_{+}}{16\pi}[(12-2m^{2}_{1}\psi^{2}_{1+}-4\epsilon\psi_{1+}\psi_{2+}-2m^{2}_{2}\psi^{2}_{2+}-2\eta\psi^{2}_{1+}\psi^{2}_{2+})e^{\frac{-\chi_{+}}{2}}-E^{2}_{+}e^{\frac{\chi_{+}}{2}}],
\end{eqnarray}
where the horizon is located at $r=r_{+}$, $E_{+}=\phi'(r_{+})$ and the subscript $+$ denotes taking the value at the horizon.
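As a quick consistency check of this expression (ours, not part of the original text), setting $\psi_{1+}=\psi_{2+}=0$, $E_+=0$ and $\chi_+=0$ should recover the planar Schwarzschild--AdS$_4$ temperature $T=3r_+/(4\pi)$. A minimal Python sketch (function name and defaults ours):

```python
from math import pi, exp

def hawking_T(r_p, psi1, psi2, E, chi, m1sq=-2.0, m2sq=-1.0, eps=0.0, eta=0.0):
    """Hawking temperature in terms of horizon data, as quoted above."""
    bracket = (12 - 2*m1sq*psi1**2 - 4*eps*psi1*psi2
               - 2*m2sq*psi2**2 - 2*eta*psi1**2*psi2**2)
    return r_p / (16*pi) * (bracket * exp(-chi/2) - E**2 * exp(chi/2))

# vacuum limit (no hair, no gauge field): T = 3 r_+ / (4 pi)
```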
Near the boundary, the asymptotic behavior of scalar fields are in the form of
\begin{equation}
\psi_{i}=\Psi_{i}^{(1)}r^{-\Delta_{i}}+\Psi_{i}^{(2)}r^{\Delta_{i}-3}, \quad i=1,2,
\end{equation}
where the $\Psi_{i}^{(1)}$ ($\Psi_{i}^{(2)}$) term represents the source (expectation value) for the scalar field, and the scaling dimension of the scalar field $\Delta$ is given by
\begin{equation}
\Delta(\Delta-3)=m^{2}.
\end{equation}
In the rest of this paper, we choose $m_{1}^{2}=-2$, $m_{2}^{2}=-1$. With this choice of masses, both falloffs of the scalar fields near the boundary are normalizable, and one can impose the boundary condition that either one vanishes. For simplicity, we choose the $\Psi_{i}^{(1)}$ terms to vanish, and let $\Psi_{i}^{(2)}$ be the condensates of the two scalar fields~\footnote{More precisely, $\Psi_{i}^{(2)}$ corresponds to the expectation value of the scalar field operator, and the condensate is proportional to the expectation value up to some prefactor which we simply neglect.}. The condensates have the mass dimensions $\lambda_{i}=3-\Delta_{i}$, where $\lambda_{1}=2$, $\lambda_{2}=\frac{3+\sqrt{5}}{2}$.
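The quoted dimensions follow directly from the quadratic relation $\Delta(\Delta-3)=m^{2}$; a small Python sketch (helper name ours) solving it and taking the smaller root, as done in the text:

```python
from math import sqrt

def conformal_dimensions(m_sq):
    """Roots of Delta(Delta - 3) = m^2 for AdS4; returns (smaller, larger)."""
    disc = sqrt(9/4 + m_sq)
    return 3/2 - disc, 3/2 + disc

# with the smaller root Delta_i, the condensate dimension is lambda_i = 3 - Delta_i
d1, _ = conformal_dimensions(-2.0)   # lambda_1 = 3 - 1 = 2
d2, _ = conformal_dimensions(-1.0)   # lambda_2 = (3 + sqrt(5))/2
```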
\section{Numerical study}
To solve for all five independent functions ($\psi_{1}$, $\psi_{2}$, $\phi$, $g$, $\chi$), we have to impose appropriate boundary conditions at the boundary $r\rightarrow\infty$ and at the horizon $r=r_{h}$. At the horizon, regularity requires $\phi(r_{h})=0$; the other horizon conditions can be obtained from a Taylor expansion of the equations of motion near the horizon. This leaves five undetermined parameters ($\psi_{1}(r_{h})$, $\psi_{2}(r_{h})$, $\phi'(r_{h})$, $r_{h}$, $\chi(r_{h})$). At the boundary, the five functions should behave as
\begin{eqnarray}
&&\psi_{1}=\Psi_{1}^{(1)}r^{-\Delta_{1}}+\Psi_{1}^{(2)}r^{\Delta_{1}-3}, \quad \psi_{2}=\Psi_{2}^{(1)}r^{-\Delta_{2}}+\Psi_{2}^{(2)}r^{\Delta_{2}-3}, \quad \phi=\mu-\frac{\rho}{r}, \nonumber\\
&&g=r^{2}+..., \quad \chi=0+... \quad.
\end{eqnarray}
As discussed in the previous section, we impose the source-free condition $\Psi_{i}^{(1)}=0$ since we want the U(1) symmetry to be broken spontaneously. Also, according to the AdS/CFT dictionary, up to a normalization, the expansion coefficients $\rho$, $\mu$, $\Psi_{i}^{(2)}\equiv\langle\mathcal{O}_i\rangle$ are interpreted as the charge density, chemical potential and condensates in the dual field theory, respectively.
On the other hand, eqs.~(\ref{EOM}) have the scaling symmetries
\begin{eqnarray}
&&e^{-\chi}\rightarrow\alpha^{2}e^{-\chi}, \quad \phi\rightarrow\alpha\phi, \quad t\rightarrow\alpha t,\label{scaling1}\\
&&r\rightarrow\beta r, \quad (t,x_{1},x_{2})\rightarrow\beta^{-1}(t,x_{1},x_{2}), \quad g\rightarrow \beta^{2}g, \quad \phi\rightarrow\beta\phi,
\end{eqnarray}
and one can use these two scaling symmetries to set $r_{h}=1$ and $\chi(r_{h})=0$ for performing numerics. Then we choose two of the remaining three undetermined parameters as shooting parameters to match the source-free condition $\Psi_{i}^{(1)}=0$ and solve the coupled differential equations. After solving the coupled differential equations, we need to apply the first scaling symmetry, eq.(\ref{scaling1}), to set $\chi(\infty)=0$ such that the Hawking temperature can be interpreted as the temperature in the dual field theory~\cite{Hartnoll:2008kx}. Below, we fix $q=1$. We also set $\eta=0$ in our numerical calculations to focus on the effect of the Josephson coupling. We have checked that leaving the quartic scalar interaction turned on, viz. $\eta \neq 0$, we obtain similar solutions for the gauge and scalar fields, and the condensates and conductivities extracted exhibit similar behaviours, as in the $\eta = 0$ case. We leave the detailed study of the case where both the Josephson coupling and the
quartic scalar interaction are present to future work.
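The final rescaling step described above can be sketched explicitly: given a numerical solution with $\chi(\infty)=\chi_\infty\neq 0$, the first scaling symmetry with $\alpha=e^{\chi_\infty/2}$ sets $\chi(\infty)=0$ and rescales $\phi$ (hence $\mu$, $\rho$) and $T$ accordingly. A hedged Python sketch (function and array names ours; the ODE solver itself is assumed):

```python
from math import exp, log

def fix_boundary_gauge(chi, phi, T, chi_inf):
    """Apply e^{-chi} -> alpha^2 e^{-chi}, phi -> alpha*phi, t -> alpha*t
    with alpha = e^{chi_inf/2}, so that chi(infinity) = 0."""
    alpha = exp(chi_inf / 2)
    chi_new = [c - 2 * log(alpha) for c in chi]   # chi -> chi - 2 ln(alpha)
    phi_new = [alpha * p for p in phi]            # mu and rho rescale with phi
    return chi_new, phi_new, alpha * T            # T carries e^{-chi_+/2} -> alpha*T
```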
We emphasize here that the physical quantities of interest here are those associated with $\psi_{1,2}$. With $\eta = 0$, it is possible to go to a basis where the quadratic scalar potential becomes diagonal. However, this does not mean that the effect of the Josephson coupling is gone. The theory with respect to $\psi_{1,2}$ is not free. If one were to calculate quantities composed of $\psi_{1,2}$ (e.g. their correlation functions) in the new diagonal basis, the Josephson coupling will reappear.
\subsection{Condensates}
In Fig.~\ref{fig:vevi}, we show how the two condensates of the two charged scalar fields vary as a function of temperature. We plot the dimensionless scaling-invariant quantities $\langle\mathcal{O}_i\rangle^{1/\lambda_i}/\mu$, $i = 1,\,2$, as a function of $T/\mu$ for various values of Josephson coupling.
These scaling-invariant quantities are equivalent to those scaled to $\mu = 1$. We have checked that these hairy black hole solutions have lower free energy than the normal phase solutions without condensation, and are thus thermodynamically favored below the critical temperature.
Without the Josephson coupling, i.e. $\epsilon=0$, the critical temperatures of the two scalar fields are different, as found in \cite{Cai:2013wma}. When the Josephson coupling is turned on, the two scalar fields condense at the same critical temperature,
i.e. when one of the scalars condenses, it triggers the other to condense as well. This is a characteristic of two-band superconductors such as MgB$_{2}$ or Fe-based superconductors found in experiments \cite{twobandTc}. We see also that the critical temperature decreases as the strength of the interband coupling increases, as in the single-band case.
In the weakly-coupled (BCS) theory, the value of the condensate is proportional to the superconducting gap. If we n\"{a}ively extrapolate this to the strongly-coupled case here, we see interestingly from Fig.~\ref{fig:vevi} that with non-zero Josephson coupling, the ratio of the two superconducting gaps in the $\epsilon>0$ case ($s_{++}$ superconductor) is higher than that in the $\epsilon<0$ case ($s_{+-}$ superconductor). If our speculation can be confirmed, this would be a novel feature predicted by our holographic model and merits further investigation, both theoretical and experimental.
\begin{figure}[ht]
\centering
\subfloat[$\epsilon=0.1$]{
\label{fig:subfig:epp01}
\includegraphics[height=1.8in,width=2.5in]{epp01.eps}}
\hspace*{0.1in}
\subfloat[$\epsilon=-0.1$]{
\label{fig:subfig:epn01}
\includegraphics[height=1.8in,width=2.5in]{epn01.eps}}\\
\subfloat[$\epsilon=0.5$]{
\label{fig:subfig:epp05}
\includegraphics[height=1.8in,width=2.5in]{epp05.eps}}
\hspace*{0.1in}
\subfloat[$\epsilon=-0.5$]{
\label{fig:subfig:eppn05}
\includegraphics[height=1.8in,width=2.5in]{epn05.eps}}\\
\subfloat[$\epsilon=1$]{
\label{fig:subfig:epp1}
\includegraphics[height=1.8in,width=2.5in]{epp1.eps}}
\hspace*{0.1in}
\subfloat[$\epsilon=-1$]{
\label{fig:subfig:epn1}
\includegraphics[height=1.8in,width=2.5in]{epn1.eps}}
\caption{The two condensates, $\langle\mathcal{O}_1\rangle$ (blue) and $\langle\mathcal{O}_2\rangle$ (red), as functions of temperature for non-zero Josephson coupling, $\epsilon$. The mass dimensions of the two condensates are $\lambda_1 = 2$ and $\lambda_{2}=\frac{3+\sqrt{5}}{2}$ respectively.}
\label{fig:vevi}
\end{figure}
\clearpage
\subsection{Conductivity}
\subsubsection{Optical conductivity}
We are interested in the transport properties of the two-band superconductors, such as those encapsulated by the optical and thermal conductivities, which are important physical quantities measured in experiments. We compute the conductivities based on the linear response theory. Following the standard prescription in AdS/CFT correspondence, we turn on the fluctuations
$\delta A_{x}=a_{x}(r)e^{-i\omega t}$ and $\delta g_{tx}=h_{tx}(r)e^{-i\omega t}$. The fluctuation equations are given by
\begin{equation}
a''_{x}+a'_{x}(\frac{g'}{g}-\frac{\chi'}{2})+a_{x}(\frac{\omega^{2}e^{\chi}}{g^{2}}
-\frac{q^{2}(\psi^{2}_{1}+\psi^{2}_{2})}{g}) =
\frac{\phi'e^{\chi}}{g}(-h'_{tx}+\frac{2}{r}h_{tx}) \,,
\end{equation}
\begin{equation}
h'_{tx}-\frac{2}{r}h_{tx}+\phi'a_{x} = 0 \,,
\end{equation}
which can be combined into
\begin{equation}
a''_{x}+a'_{x}(\frac{g'}{g}-\frac{\chi'}{2})+[(\frac{\omega^{2}}{g^{2}}-\frac{\phi'^{2}}{g})e^{\chi}-\frac{q^{2}(\psi^{2}_{1}+\psi^{2}_{2})}{g}]a_{x}=0 \,.
\end{equation}
By solving this equation for $a_x$ with incoming wave boundary condition, the optical conductivity can be extracted from the asymptotic behavior of $a_x$ using the standard holographic prescription based on Ohm's law~\cite{Hartnoll:2009sz}:
\begin{equation}
a_x(r) = a_x^{(0)} + \frac{a_x^{(1)}}{r} + \cdots \qquad
\sigma(\omega) = \frac{J_x}{E_x} = \frac{1}{i\omega}\frac{a_x^{(1)}}{a_x^{(0)}}
\end{equation}
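In practice, $a_x^{(0)}$ and $a_x^{(1)}$ are read off from the large-$r$ behaviour of the numerical solution. A minimal sketch of the extraction (Python; two-point fit, function name ours, the bulk fluctuation solver is assumed):

```python
def optical_conductivity(ax1, ax2, r1, r2, omega):
    """Fit a_x(r) ~ a0 + a1/r through two large-r samples of the
    fluctuation solution and return sigma = a1 / (i omega a0)."""
    a1 = (ax1 - ax2) / (1.0/r1 - 1.0/r2)
    a0 = ax1 - a1 / r1
    return a1 / (1j * omega * a0)
```

Sampling at two radii deep in the asymptotic region (e.g. $r_1=50$, $r_2=100$ in horizon units) suffices when the subleading corrections have died off.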
As an example of the typical behavior of the optical conductivity, $\sigma(\omega)$, in our model, we show in Fig.~\ref{fig:sigACepn1} the real and imaginary part of $\sigma(\omega)$ when $\epsilon = -1$ for various values of $\mathcal{T}/\mathcal{T}_c$, where we define $\mathcal{T} \equiv T/\mu$. We normalize the real part by
$\sigma_\infty\equiv\lim_{\omega\rightarrow\infty}\mathrm{Re}\,\sigma(\omega)$ to better display its features.
\begin{figure}[htbp]
\centering
\subfloat[$\mathrm{Re}\,\sigma(\omega)/\sigma_\infty$]{
\label{fig:subfig:epnReAC}
\includegraphics[width=2.95in]{ResigACepn1.eps}}
\hspace*{0.1in}
\subfloat[$\mathrm{Im}\,\sigma(\omega)$]{
\label{fig:subfig:epnImAC}
\includegraphics[width=3.05in]{ImsigACepn1.eps}}
\caption{The AC conductivity in the case $\epsilon = -1$. The colored lines, blue, purple, brown, and green, correspond to $\mathcal{T}/\mathcal{T}_c = 0.92,\, 0.79,\,0.65,\,0.45$ respectively. In the same order,
$\sigma_\infty = 0.94,\,0.85,\,0.72,\,0.58$.}
\label{fig:sigACepn1}
\end{figure}
We see that the optical conductivity exhibits features typically seen in the one-band superconductor case. Notice the pole at $\omega = 0$ in the imaginary part of the optical conductivity. By the Kramers-Kronig relation this implies a delta function in the real part of the optical conductivity, with strength given by the coefficient of the pole. In our model, this coefficient does not vanish, and it approaches a constant as $T \rightarrow T_c$. This delta function at $T \geq T_c$ is due to translational invariance, and is only visible in systems with full backreaction~\cite{Hartnoll:2008kx}. By varying the strength and sign of the interband coupling $\epsilon$, the qualitative features do not change; only $\sigma_\infty$ changes.
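The coefficient of the $\omega=0$ pole (the superfluid density, up to normalization conventions) can be estimated from small-$\omega$ data as $n_s \simeq \lim_{\omega\to 0}\omega\,\mathrm{Im}\,\sigma(\omega)$. A sketch under that assumption (helper name ours):

```python
def pole_strength(omegas, im_sigma):
    """Estimate n_s = lim_{w->0} w * Im sigma(w) by linearly extrapolating
    w * Im sigma(w) from the two smallest sampled frequencies."""
    (w1, s1), (w2, s2) = sorted(zip(omegas, im_sigma))[:2]
    v1, v2 = w1 * s1, w2 * s2
    return v1 - w1 * (v2 - v1) / (w2 - w1)
```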
\subsubsection{Thermal conductivity}
The thermal conductivity is a useful probe of the nodal structure of superconductors\footnote{Other probes of the nodal structure used in experiments include the specific heat, the magnetic penetration length and the NMR spin-lattice relaxation time.}. In experiments, the low-temperature behavior of the thermal conductivity is well fitted by $\kappa/T=a+bT^{\gamma-1}$, where the constant part comes from the contribution of nodal excitations, while the $T^{\gamma}$ part can arise from effects that break the Cooper pairs, phonons (for $\gamma=3$) or gapped excitations at low temperature.
In the holographic model, the thermal conductivity, $\bar{\kappa}(\omega)$, is given by~\cite{Hartnoll:2009sz}
\begin{equation}
T\bar{\kappa}(\omega) = \frac{i\left(\epsilon + P - 2\mu\rho\right)}{\omega} + \mu^2\sigma(\omega) \,,
\end{equation}
We see that the real part of $\bar{\kappa}(\omega)$ is determined by that of the electric conductivity alone.
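This observation can be made explicit: assuming the combination $\epsilon+P-2\mu\rho$ is real, the $i/\omega$ term contributes only to the imaginary part, so $\mathrm{Re}\,\bar{\kappa}=\mu^{2}\,\mathrm{Re}\,\sigma/T$. A one-line Python sketch (function name ours):

```python
def re_thermal_conductivity(mu, T, sigma):
    """Re kappa_bar(omega): the i(eps + P - 2 mu rho)/omega piece is purely
    imaginary at real omega, so only mu^2 sigma feeds the real part."""
    return (mu**2 / T) * sigma.real
```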
In Figs.~\ref{fig:kap} we plot the behavior of
$\bar{\kappa}/T\equiv\lim_{\omega \rightarrow 0}\mathrm{Re}\,\bar{\kappa}(\omega)/T$. For convenience, we plot it as a function of
$\mathcal{T}/\mathcal{T}_c \propto T/T_c$ at various values of $\epsilon$.
\begin{figure}[htbp]
\centering
\subfloat[$\epsilon =$ 1 (blue), 0.5 (purple), 0.1 (brown)]{
\label{fig:subfig:eppkap}
\includegraphics[width=3in]{kapepp.eps}}
\hspace*{0.1in}
\subfloat[$\epsilon =$ -1 (blue), -0.5 (purple), -0.1 (brown)]{
\label{fig:subfig:epnkap}
\includegraphics[width=3in]{kapepn.eps}}
\caption{Thermal conductivity for various values of $\epsilon$ with both (a) positive and (b) negative signs.}
\label{fig:kap}
\end{figure}
At low temperature, we find that for $x\equiv\mathcal{T}/\mathcal{T}_c \lesssim 0.2$, $\bar{\kappa}/T$ can be well fitted by the form
$a x^b+c x^d$, so that $\bar{\kappa}/T \rightarrow 0$ as $T \rightarrow 0$, indicating the nodeless character of our model. We list the fitted values of the parameters in Table~\ref{tab:kapfit}.
\begin{table}[h]
\caption{Values of the low temperature fit $\bar{\kappa}/T = a x^b+ c x^d$ at various $\epsilon$.}
\centering
\begin{tabular}{c|c c c c}
$\epsilon$ & $a$ & $b$ & $c$ & $d$ \\
\hline
1 & -0.374 & 1.238 & 0.436 & 1.274 \\
0.5 & -1.886 & 1.224 & 2.16 & 1.26 \\
0.1 & -0.139 & 1.207 & 0.758 & 1.6 \\
-0.1 & 3.562 & 1.581 & 0 & 0 \\
-0.5 & 5.36 & 1.532 & 0 & 0 \\
-1 & 3.789 & 1.152 & 11.02 & 2.011\\
\hline
\end{tabular}
\label{tab:kapfit}
\end{table}
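The vanishing of $\bar{\kappa}/T$ as $T\to 0$ can be verified directly from the fit; the sketch below (Python) uses the $\epsilon=1$ row of the table as an illustrative default:

```python
def kappa_over_T_fit(x, a=-0.374, b=1.238, c=0.436, d=1.274):
    """Low-temperature fit kappa_bar/T = a x^b + c x^d, with x = T/T_c (scaled);
    default parameters are the epsilon = 1 row of the table."""
    return a * x**b + c * x**d
```

Evaluating at ever smaller $x$ shows the fitted $\bar{\kappa}/T$ vanishing as $T \rightarrow 0$.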
From our fits, both the $s_{++}$ and the $s_{+-}$ states seem to be nodeless ($\bar{\kappa} \rightarrow 0$ as $T \rightarrow 0$).
For the $s_{++}$ state, this is to be expected due to the existence of the superconducting gap, as is confirmed by experiments \cite{nodeless}. The situation for the $s_{+-}$ state is less clear experimentally. The $s_{+-}$ state is widely believed to appear in iron-based superconductors. While most families of iron-based $s_{+-}$ superconductors are found to be nodeless, not all of them are. Some families such as LaFePO are found to have a residual linear temperature dependence in $\kappa$ at low temperature, and there are nodal excitations in at least one of their bands \cite{iron-based3}.
Experimentally, the thermal conductivity of a fully gapped (and thus nodeless) superconductor is seen to have at least a power-law temperature dependence at low temperature with an exponent larger than three. In the case with nodes, however, the power-law exponent can be arbitrary, depending on how the Cooper pairs are broken. In our holographic model, we found the power-law exponent to be less than three for all the values of the Josephson coupling we looked at. This may be a feature of our holographic model, which requires further investigation beyond the scope of the present work. But given that confusion remains over under what circumstances $s_{+-}$ superconductors are nodeless experimentally, we caution against too literal a comparison with current experiments.
From Fig.~\ref{fig:kap}, we find that the temperature dependence of $\bar{\kappa}/T$ in the $s_{+-}$ and $s_{++}$ states is quite different near the critical temperature. We see that the thermal conductivity increases faster for a holographic $s_{++}$ superconductor than for an $s_{+-}$ one
as the temperature increases. This might explain the result that the critical temperature of the $s_{+-}$ state is generically higher than that of the $s_{++}$ state (for the normalized critical temperature $T^{nor}\equiv T_{c}/\mu_{c}$ at various $\epsilon$, see Table \ref{tab:Tc}): if the $s_{++}$ state is more susceptible to thermal excitations than the $s_{+-}$ state, the Cooper pairs in the $s_{++}$ state would be easier to break than in the $s_{+-}$ state as the temperature increases, resulting in an exit from superconductivity at a lower temperature.
\begin{table}[h]
\caption{The normalized critical temperature $T^{nor}_{c}\equiv T_{c}/\mu_{c}$ at various $\epsilon$.}
\centering
\begin{tabular}{c|c c|c c|c c}
$\epsilon$ & 1 & -1 & 0.5 & -0.5 & 0.1 & -0.1 \\
\hline
$T^{nor}_{c}$ & $9.35\times 10^{-5}$ & $1.38\times 10^{-3}$ & $6.44\times 10^{-4}$ & $4.25\times 10^{-2}$ & $1.29\times 10^{-3}$ & $2.14\times 10^{-2}$\\
\hline
\end{tabular}
\label{tab:Tc}
\end{table}
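As a brief, hedged illustration of how such a low-temperature power-law exponent is read off in practice (this is our sketch with synthetic data, not the paper's numerics), one can fit the slope of $\log(\bar{\kappa}/T)$ against $\log T$:

```python
import numpy as np

# Hypothetical low-temperature data: kappa/T ~ A * T^p with p = 2.4,
# mimicking an exponent below three as found for the holographic model.
T = np.linspace(0.01, 0.05, 20)
kappa_over_T = 0.7 * T**2.4

# The exponent is the slope of log(kappa/T) versus log(T).
p, logA = np.polyfit(np.log(T), np.log(kappa_over_T), 1)
print(f"fitted exponent p = {p:.3f}")  # recovers 2.4 by construction
```

An exponent fitted this way that is larger than three would be consistent with a fully gapped state, while a smaller value signals the anomalous behavior discussed above.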
\section{Analytic study}
\begin{figure}
\center{
\includegraphics[width=8cm]{condensate_analytic.eps}\hspace{0.2cm}
\caption{\label{fig:condensate_analytic} Analytic fit (curves) of the two condensates near and below the critical temperature agrees well with the numerical results (dots). We remark that to obtain the analytic result, we have adopted coupling $\epsilon=1$ and ${\cal O}_{12}=36$.}
}
\end{figure}
In previous sections, we have investigated the two-band model numerically. It would also be insightful to study the connection between the condensates and other variables in the model via some analytic method. Many analytic approaches have been proposed to address the universal properties of second order phase transitions in holographic superconductors~\cite{Ge:2010aa, Siopsis:2010uq, Zeng:2010zn,Li:2011xja,Cai:2011ky,Chen:2011en,Ge:2011cw,Momeni:2011iw}. In particular, it would be interesting to apply the variational method for the Sturm-Liouville eigenvalue problem in \cite{Siopsis:2010uq} to our two-band model. First we notice that near the critical temperature, where one can neglect the backreaction, $\Phi$ and $\Psi_i$ take the following forms:
\begin{eqnarray}
&&\Phi = \lambda r_+ (1-z),\nonumber\\
&&\Psi_i = \frac{<{\cal O}_i>}{\sqrt{2}r_+^{\Delta_i}}z^{\Delta_i}F_i(z),
\end{eqnarray}
where $\lambda = \frac{\rho}{r_{+c}^2}$ and $\Delta_i^{\pm}=\frac{3}{2}\pm \sqrt{\frac{9}{4}+m_i^2}$.
Applying the Sturm-Liouville theorem to the equations of the condensate fields, one can minimize the eigenvalue $\lambda^2$, given the coupling $\epsilon$ and the condensate ratio ${\cal O}_{12}\equiv \frac{<{\cal O}_1>}{<{\cal O}_{2}>}$ at the critical temperature. To be specific, we have to minimize
\begin{align}\label{analytic-eqn}
\lambda^2 &= \frac{1}{\int\!dz\left[W_1(z)F_1(z)^2+W_2(z)F_2(z)^2\right]}
\bigg\{\int\!dz\left[P_1(z)F'_1(z){}^2 + P_2(z)F'_2(z){}^2\right] \notag \\
&\quad +\int\!dz\left\{\left[Q_1(z)+R_1(z)\right]F_1(z)^2 + \left[Q_2(z)+R_2(z)\right]F_2(z)^2\right\}
\bigg\}
\end{align}
for trial functions $F_i=1-\alpha_i z^2$. The functions $P_i(z),Q_i(z),R_i(z)$ are derived in the Appendix. Then one can read off the critical temperature as a function of $\rho$:
\begin{equation}
T_c = \frac{3}{4\pi}\sqrt{\frac{\rho}{\lambda_m}}
\end{equation}
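A minimal numerical sketch of this variational step is shown below. The true weight functions $P_i$, $Q_i$, $R_i$ (and the weights $W_i$ in the denominator) are given in the paper's Appendix, which is not reproduced here, so the forms used in the code are placeholders chosen only to make the example self-contained; what is faithful to the text is the minimization over the trial parameters $(\alpha_1,\alpha_2)$ and the final relation $T_c = \frac{3}{4\pi}\sqrt{\rho/\lambda_m}$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Placeholder weight functions (the actual P_i, Q_i, R_i, W_i are in the
# paper's Appendix); they only make the sketch runnable.
P = lambda z: z**2 * (1 - z**3)
Q = lambda z: z**2
R = lambda z: 0.1 * z**3                 # stands in for the coupling term
W = lambda z: z**2 * (1 - z) / (1 - z**3 + 1e-12)

def lambda_sq(alphas):
    """Rayleigh quotient of Eq. (analytic-eqn) for trial F_i = 1 - a_i z^2."""
    a1, a2 = alphas
    F1, F2 = (lambda z: 1 - a1*z**2), (lambda z: 1 - a2*z**2)
    dF1, dF2 = (lambda z: -2*a1*z), (lambda z: -2*a2*z)
    num = quad(lambda z: P(z)*(dF1(z)**2 + dF2(z)**2)
                         + (Q(z) + R(z))*(F1(z)**2 + F2(z)**2), 0, 1)[0]
    den = quad(lambda z: W(z)*(F1(z)**2 + F2(z)**2), 0, 1)[0]
    return num / den

res = minimize(lambda_sq, x0=[0.5, 0.5], method="Nelder-Mead")
lam2_min = res.fun
lam_m = np.sqrt(lam2_min)                # minimized eigenvalue lambda
rho = 1.0                                # illustrative charge density
Tc = 3.0 / (4.0 * np.pi) * np.sqrt(rho / lam_m)
print(f"lambda^2_min = {lam2_min:.4f}, Tc = {Tc:.4f}")
```

With the Appendix's actual weight functions substituted in, the same two-parameter minimization reproduces the critical temperatures quoted in Table \ref{tab:Tc}.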
To compare with our numerical results, here we focus on the same choice for the conformal dimensions of condensates. Following similar derivation in \cite{Siopsis:2010uq}, one can express the condensates near and below the critical temperature as follows:
\begin{eqnarray}\label{eq:condensate_fit}
&&<{\cal O}_1> \simeq (1-\frac{T}{T_c})^{1/2}\gamma_1 T_c^{\Delta_1} (1+(\frac{\gamma_1}{\gamma_2})^2 T_c^{2(\Delta_1-\Delta_2)}{\cal O}^{-1}_{12})^{-1/2},\nonumber\\
&&<{\cal O}_2>\simeq (1-\frac{T}{T_c})^{1/2}\gamma_2 T_c^{\Delta_2} (1+(\frac{\gamma_2}{\gamma_1})^2 T_c^{2(\Delta_2-\Delta_1)}{\cal O}_{12})^{-1/2},
\end{eqnarray}
where $\gamma_i \equiv \frac{2}{\sqrt{C_i}}(\frac{4\pi}{3})^{\Delta_i}$ and $C_i=\int{\lambda m_i^2 \frac{z^{2(\Delta_i-1)}(1-z)}{1-z^3}}F_i^2 dz$. In Fig.~\ref{fig:condensate_analytic}, we show that the analytic approximation (\ref{eq:condensate_fit}) agrees very well with the numerical results near the critical point.
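The structure of Eq. (\ref{eq:condensate_fit}) can be checked directly: both condensates vanish with the mean-field exponent $(1-T/T_c)^{1/2}$, with relative amplitudes fixed by $\gamma_i$ and ${\cal O}_{12}$. In the sketch below the numerical values of $T_c$, $\gamma_i$, $\Delta_i$, and ${\cal O}_{12}$ are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

# Hypothetical parameters standing in for the paper's fitted values.
Tc, O12 = 1.0, 36.0
Delta1, Delta2 = 2.0, 2.0      # equal conformal dimensions, as in the text
gamma1, gamma2 = 8.0, 1.5

def condensates(T):
    """Evaluate Eq. (condensate_fit) for <O_1> and <O_2> near T_c."""
    t = np.sqrt(1.0 - T / Tc)
    O1 = t * gamma1 * Tc**Delta1 / np.sqrt(
        1 + (gamma1/gamma2)**2 * Tc**(2*(Delta1-Delta2)) / O12)
    O2 = t * gamma2 * Tc**Delta2 / np.sqrt(
        1 + (gamma2/gamma1)**2 * Tc**(2*(Delta2-Delta1)) * O12)
    return O1, O2

# Square-root onset: halving (1 - T/Tc) scales both condensates by 1/sqrt(2).
O1a, O2a = condensates(0.90 * Tc)
O1b, O2b = condensates(0.95 * Tc)
print(O1a / O1b, O2a / O2b)    # both equal sqrt(2)
```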
The analytic method also has the advantage of easily revealing the connections between model parameters. In Fig.~\ref{fig:parameter_Tc}, we plot the fitted curves of the eigenvalue $\lambda^2$ as a function of $\epsilon$ and the condensate ratio ${\cal O}_{12}$. We conclude that $T_c$ slightly decreases (increases) for positive (negative) coupling and remains nearly the same for different condensate ratios at small coupling.
\begin{figure}
\center{
\includegraphics[width=8 cm]{Oratio_Tc.eps}\hspace{0.2cm}
\caption{\label{fig:parameter_Tc} The eigenvalue $\lambda^2$ (vertical axis) is plotted against the condensate ratio (horizontal axis) at $T_c$. Different curves, from top to bottom, correspond to $\epsilon = 1,0.1, 0.05, 0, -0.05, -0.1, -1$.}
}
\end{figure}
\section{Summary and Outlooks}
We have constructed a fully back-reacted holographic model of a two-band superconductor with an explicit interband coupling between the two charged scalars. The sign of the interband coupling indicates whether the pairings of the two bands are in phase or out of phase. We have studied its effects on the two condensates and the critical temperature. We have shown that in the presence of the interband coupling, when one scalar field condenses, it will induce the other scalar to condense at the same critical temperature, and the critical temperature decreases as the strength of the interband coupling increases. The ratio of the two gaps in the $s_{++}$ state is larger than in the $s_{+-}$ state, but the critical temperature of $s_{+-}$ is generically higher~\footnote{The critical temperature of the $s_{+-}$ state being larger than that of the $s_{++}$ state is consistent with earlier studies~\cite{Ummarino:2009,Krikun:2012yj}.}.
We have also studied the transport properties of the holographic two-band superconductor, calculating its optical and thermal conductivities. The optical conductivity is qualitatively similar to that of the single band superconductor, while the thermal conductivity seems to indicate that our model has no nodal excitations. Our study is primarily a numerical one, but in regimes where the Sturm-Liouville method is applicable, analytic results can be obtained, and they are fully consistent with our numerical results.
There are many directions for future work. One is to see whether, in the higher frequency region of the optical conductivity, there exists a mid-infrared peak when the interband coupling is large. In this work, we worked with a translationally invariant system with no impurities. It would be interesting to introduce impurities in our model, as the mid-infrared peak in the optical conductivity is expected when the scattering between the impurities and charge carriers is large. Furthermore, it would be interesting to study the impurity induced $s_{+-} \rightarrow s_{++}$ transition as discussed in Ref.~\cite{Efremov:2012} in our model.
To be completely sure of the nodal structure, the strict zero temperature limit should be taken in our model. As was shown in the
single band case~\cite{Horowitz:2009ij}, the bulk geometry could be quite different when $T$ is strictly zero from when $T$ is small but nevertheless finite. It is reasonable to expect this applies to the two-band case as well, and new solutions for the strict $T = 0$ case have to be found. Another way to probe the gap structure of the superconductor complementary to the thermal conductivity is to study the specific heat. A generalization of the discussions in Ref.~\cite{Hartnoll:2012pp} to the two-band case would be an immediate next step.
The strongest indication for an $s_{+-}$ superconductor is in the neutron spin measurement, where there is a resonance peak in the dynamical spin susceptibility at $\omega \sim 2\Delta$~\cite{spinresonance1,spinresonance2}~\footnote{Note such spin resonances are only found in unconventional superconductors such as the $s_{+-}$ or the d-wave superconductors.}. If we can see this feature in our model, we can be sure that at negative Josephson coupling we are indeed modeling the $s_{+-}$ superconductor.
The response of the two-band superconductor to an external magnetic field presents many very interesting questions. For one, magnetic field can significantly change the temperature dependence of the thermal conductivity, and it would be very interesting to see what it would be in our model.
It has been argued that the $s_{++}$ superconductor could be the so-called ``type-1.5'' superconductor~\cite{phase-locking,type 1.5}, which has the unusual properties that the intervortex interaction is attractive at long range and repulsive at short range \cite{Carlstrom:2010,semi-Meissner1,Geurts:2010,Chave:2011,Silaev:2011}, with vortex clusters coexisting with Meissner domains at intermediate field strength, forming the so-called ``semi-Meissner'' state~\cite{semi-Meissner1,semi-Meissner2}. More technically, for a two-band type-1.5 superconductor, the coherence lengths for the two bands, $\xi_{1}$ and $\xi_{2}$, and the magnetic penetration length, $\lambda$, satisfy the relation $\xi_{1}<\sqrt{2}\lambda<\xi_{2}$ \cite{Carlstrom:2010,Silaev:2011,semi-Meissner1,Komendova:2011,Komendova:2012}.
As steps to confirm whether the type-1.5 state truly exists, it would be very interesting to verify this relation, and to look for a first order phase transition between the Meissner and the semi-Meissner state.
The existence of a crossover from BCS superconductivity to Bose Einstein (BE)
condensation of pre-formed pairs has received increasing
attention in the literature over the last few years. Originally discussed by
Eagles \cite{Eagles} within the context of pairing in thin film
semi-conductors, and considerably expanded upon by the works of Leggett
\cite{Leggett} and Nozi\`{e}res and Schmitt-Rink \cite{Nozieres3}, the latest
resurgence in interest has been due to its possible application to the
understanding of the phase diagram of high temperature cuprate
superconductors.
In
particular, the appearance of a pseudo-gap in both charge and spin
excitations
of the normal state \cite{Cooper,Batlogg,Takigawa,Marshall,Loeser}
has led to a number of suggestions that pairing correlations well
above
$T_{c}$ may occur in these systems. Whether these correlations
are in the bosonic form of pre-formed pairs
\cite{Ranninger,Alexandrov}, or pair
resonances originating from the intermediate regime of a BCS-BE crossover~\cite{Janko,Trivedi}, or arise simply from classical phase
fluctuations~\cite{Kivelson}, is, however, still a controversial issue.
Furthermore, now that there
is a large body of evidence to suggest that pairing in the cuprates is
predominantly $\dxy$ in character \cite{Ding,Tsuei,Annett}, there is
a clear need to understand the properties of the BCS-BE crossover
within the context of this type of pairing symmetry.
Although there have been some
attempts to discuss the effect of $\dxy$ pairing on pseudo-gap formation above $T_{c}$ in the cuprates\cite{Engelbrecht}, there has been little discussion on the systematic
groundstate properties of the BCS-BE crossover in the $\dxy$ channel.
This is an important issue, since pairs with
this symmetry cannot contract to point-like bosons, and accordingly one expects severe
consequences for the properties of the groundstate of this type of system. This is of
direct importance in terms of the validity of the crossover scenario for
the cuprates, whilst also being of general interest in understanding
macroscopic pairing in higher angular momentum channels.
In this paper, we consider the BCS-BE crossover at zero temperature as a
function of both coupling strength and carrier density,
within a two-dimensional toy model which has
a $\dxy$ pairing instability. The study at zero temperature is well
controlled by use of the BCS variational
wavefunction (which contains the BE limit \cite{Leggett}) and allows us to
establish at the two body level the qualitative groundstate properties of the system
upon which future studies may be based. In particular we show that the
groundstate
properties of the system are severely modified from the $s$-wave picture,
where a smooth crossover exists for all densities \cite{Leggett,Nozieres3}.
For the $\dxy$ case
the effect of the exclusion
principle as the density of carriers is increased results in a suppression
of the emergence of bosonic degrees of freedom for moderately
large densities and strong coupling, where the system instead remains
fermionic (BCS like). We find that only in the dilute limit is a crossover
possible, and that when this occurs
\begin{center}
\begin{figure}
\includegraphics[width=8.0cm]{bec-phase.eps}
\caption{The crossover between fermionic and bosonic degrees
of freedom as a function of carrier density and coupling strength (as defined
in text). The left boundary is determined by the onset of a finite gap amplitude, and the right boundary by when the chemical potential falls below the band minimum. The solid lines are a guide for the eye only.}
\label{bec-phase}
\end{figure}
\end{center}
the single particle distribution function undergoes a radical change.
We summarize most of our results in Fig. \ref{bec-phase}, which shows when bosonic degrees of freedom can emerge as a function
of both density and coupling strength.
\section{Toy Model}
We introduce as our
toy model a `reduced' Hamiltonian in the BCS sense which
describes an
effective two-particle interaction in real space and
in the singlet pairing channel \cite{note2}. Specifically it is
\begin{eqnarray}
\label{hamil4}
H&=&\sum_{<ij>\sigma}-t(c^{{\sss \dag}}_{i\sigma}c^{}_{j\sigma} + h.c.) -\mu^{*}\sum_{i\sigma} n_{i\sigma}
\nonumber \\
&+&
W\sum_{i} c^{{\scriptscriptstyle \dag}}_{i\uparrow}c^{{\scriptscriptstyle \dag}}_{i\downarrow}
c^{}_{i\downarrow}c^{}_{i\uparrow}
-
V \sum_{<ij>}c^{{\scriptscriptstyle \dag}}_{i\uparrow}c^{{\scriptscriptstyle
\dag}}_{j\downarrow}c^{}_{j\downarrow}c^{}_{i\uparrow} ,
\end{eqnarray}
where first and second terms describe nearest neighbor hopping on a
two-dimensional square lattice with chemical potential $\mu^{*}$, and the
third and fourth terms
describe an effective two-body pairing interaction in real space. In
particular,
the third term
represents the repulsive part of the effective interaction while
the last term provides an attractive
interaction for nearest neighbor particles\cite{note10}.
By expressing the superconducting gap in terms of its various symmetry
components and
introducing the BCS variational wavefunction
$|\Psi\rangle=\prod_{{\bf k}}(u_{{\bf k}} +
v_{{\bf k}}c^{\scriptscriptstyle \dag}_{{\bf k}\uparrow}c^{\scriptscriptstyle \dag}_{-{\bf k}\downarrow})|0\rangle$, the
zero temperature gap equation in the $d_{x^{2}-y^{2}}$ channel is given by,
\begin{equation}
\label{gap1}
1=\frac{1}{2M}\sum_{{\bf k}}\frac{V(\cos k_{x} - \cos
k_{y})^{2}}{\left[(\xi({\bf k}) - \mu)^{2} +
\Delta^{2} (\cos k_{x} - \cos k_{y})^{2}\right]^{1/2}} ,
\end{equation}
where $\xi({\bf k})=-2t\eta_{k}$ is the nearest
neighbor tight-binding energy dispersion, the geometric factor
$\eta_{{\bf k}}=\cos k_{x} + \cos k_{y}$ and the effective chemical potential
$\mu=\mu^{*} -n(W/2 -2V)$. The amplitude of the d-wave gap is
denoted by $\Delta$.
As Eagles first pointed out \cite{Eagles}, any deviation from weak
coupling requires a self consistent solution of both gap and number
equations, since the BCS approximation of the chemical potential being equal
to its value in the normal state can no longer be reasonably justified.
Specifically, the number equation
for the chemical potential $\mu$, which defines the particle density
$n=N/M$, is
\begin{equation}
\label{num1}
n-1=\frac{1}{M}\sum_{{\bf k}}\frac{-(\xi({\bf k}) - \mu)}
{\left[(\xi({\bf k})-\mu)^{2} +
\Delta^{2}(\cos k_{x} - \cos k_{y})^{2}\right]^{1/2}} .
\end{equation}
We have solved Eqns. (\ref{gap1}) and (\ref{num1}) self consistently
at a given density by numerical integration.
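For concreteness, a minimal sketch of such a self-consistent solution is given below: it discretizes the BZ on a finite grid and solves the gap and number equations above for $(\Delta,\mu)$ with a standard root finder. The grid size, density, and coupling strength are illustrative choices of ours, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

t, L = 1.0, 64                         # hopping and linear grid size
k = 2*np.pi*(np.arange(L) - L/2)/L     # momenta in (-pi, pi]
kx, ky = np.meshgrid(k, k)
xi = -2*t*(np.cos(kx) + np.cos(ky))    # tight-binding dispersion
c = np.cos(kx) - np.cos(ky)            # d-wave form factor
M = L*L                                # number of lattice sites / k-points

def residuals(p, n, V):
    """Residuals of the gap equation and the number equation."""
    Delta, mu = p
    E = np.sqrt((xi - mu)**2 + Delta**2 * c**2)
    gap = 1.0 - (V/(2*M)) * np.sum(c**2 / E)          # gap equation
    num = (n - 1.0) + (1.0/M) * np.sum((xi - mu)/E)   # number equation
    return [gap, num]

n, V = 0.5, 8.0                        # V/4t = 2, moderately strong coupling
Delta, mu = fsolve(residuals, x0=[2.0, -2.0], args=(n, V))
print(f"Delta/4t = {abs(Delta)/(4*t):.3f}, mu/4t = {mu/(4*t):.3f}")
```

Sweeping $n$ and $V$ in this way and recording where $\Delta$ first becomes nonzero, and where $\mu$ drops below the band minimum $-4t$, traces out the two boundaries of Fig. 1.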
In this theory, pairing takes place over
the entire Brillouin zone (BZ). The zone edge acts
as a natural boundary or momentum cut-off which avoids the renormalization
methods used to remove ultraviolet divergences in
continuum model treatments of strong coupling within the
BCS framework \cite{Randeria4}.
\section{Crossover and Analysis}
The inset of Fig. \ref{gapdata}
shows the gap parameter
$\Delta$ for densities ranging from
the dilute limit to the almost half-filled case.
Taking the onset of
a finite gap parameter as the signal for the
manifestation of the superconducting state,
an
increase in the particle density clearly favors the emergence of
a $\dxy$ paired groundstate.
On the other hand, as the system becomes more
dilute, there is a
need for stronger coupling to induce pairing.
\begin{center}
\begin{figure}
\includegraphics[height=6.5cm,width=8.5cm]{chem-pot-gap2.eps}
\caption{The dependence of $\mu$ on
coupling strength for various densities.
{\it Inset:} The gap $\Delta$ as a function of coupling
strength (dashed
lines and data points). The solid
lines are the asymptotic behavior derived in the
text.}
\label{gapdata}
\end{figure}
\end{center}
In the dilute limit (of relevance to the underdoped cuprates, where the
density of carriers is proportional to the doped hole concentration $x$),
the occurrence of $\dxy$
bound states for the two-particle problem on an empty
lattice is for our system, equivalent to that problem for the $t-J$
model. Kagan and Rice \cite{Kagan} have shown that a $\dxy$
bound state will not occur unless the coupling $J$ (equivalent to $V$ in our
case) is greater than $V_{c}/4t \sim 1.8$.
Also, Randeria \mbox{\it et al.} \cite{Randeria6} have shown that in two dimensions, a
necessary and sufficient condition for a dilute many-body $s$-wave Cooper
instability to occur is that an $s$-wave
bound
state exists for the corresponding two-body problem on the empty
lattice. Importantly, in the context of $\dxy$ symmetry,
they have also shown that such
a condition does not exist for higher angular momentum pairing.
In our study, we find that the onset of pairing occurs for progressively
weaker coupling as the density is increased, contrary to an $s$-wave system.
In the $\dxy$ channel at a coupling strength less than $V_{c}/4t$, the dilute
system gains more kinetic energy compared to pairing energy and thus has a
total condensation energy which is positive, whereas for the more dense system
the kinetic energy of the carriers is on average less affected and a pairing
instability occurs. This steady evolution from the
extreme dilute limit where pairing does not occur until $V/4t \geq
1.8$, to the near half filled case where superconductivity can
manifest itself at a much weaker coupling than that required for a two body
bound state, is indicated by the boundary on the left in Fig \ref{bec-phase}.
The $n=0$ point corresponds to the critical coupling strength $V_{c}$
discussed above and calculated initially by Kagan and Rice\cite{Kagan}.
In Fig. \ref{gapdata} we show $\mu$ as found in
conjunction with the
solutions for the gap $\Delta$ at various densities and as a function of the
coupling
strength. The horizontal line at $\mu/4t=-1.0$ represents the bottom of the tight
binding band. For weak coupling, it is well known that $\mu$ in the
superconducting phase is given roughly by the Fermi energy of the normal
state and this can be seen in the figure. However, at large doping,
$\mu$ shows little deviation from its
normal state value over a large variation in the coupling strength.
This is to be contrasted with the low density results, which show a relatively
rapid deviation from weak coupling behavior as $V$ is increased.
In general, bosonic degrees of freedom can be expected to emerge once
the chemical potential of the many-body groundstate slips below the band
minimum in a tight binding system, or below zero in a continuum model. For an
$s$-wave system, Nozi\`{e}res and Schmitt-Rink \cite{Nozieres3} were able to
show that a crossover from fermionic superconductivity to bosonic degrees of freedom can occur for
all densities as the coupling strength is increased. For the $d$-wave system
considered here, this is not the case. Bosonic degrees of freedom can only
emerge in the dilute regime, while for large densities, the system behaves more
like a weak coupling superconductor with a value of the chemical potential
comparable to that of the normal state. These results are expressed by the
boundary on the right in Fig. \ref{bec-phase}. Given that the BCS wavefunction only takes two-body correlations into account, we expect that the suppression of bosonic degrees of freedom would be increased in a more sophisticated treatment which would include a repulsive interaction between fermion pairs\cite{Haussmann}.
Further insight into this feature of a $d$-wave system can be gained by
examining the limit of infinite coupling strength.
In the case $V\rightarrow\infty$, the kinetic term (and thus any Fermi surface geometry) becomes negligible and
the asymptotic behavior of
the gap and chemical potential can be shown from Eqns. (\ref{gap1}) and (\ref{num1}) to have the form
$\Delta\rightarrow\gamma V/2$
and
$\mu\rightarrow\gamma V (n-1)/2\alpha$, implying that in the infinite coupling
limit, $\mu/\Delta\rightarrow(n-1)/\alpha$. Here the parameter $\alpha$ is
given by the solution to
\begin{equation}
\alpha=\frac{1}{M}\sum_{{\bf k}}\frac{1}{\left[ (\cos k_{x} - \cos
k_{y})^{2} +
([n-1]/\alpha)^{2} \right]^{1/2}} ,
\end{equation}
while $\gamma$ is defined by
\begin{equation}
\label{gammastrong}
\gamma=\frac{1}{M}\sum_{{\bf k}}\frac{(\cos k_{x} - \cos
k_{y})^{2}}{\left[ (\cos k_{x} - \cos k_{y})^{2} + ([n-1]/\alpha)^{2}
\right]^{1/2}} .
\end{equation}
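The self-consistent equation for $\alpha$ is easily solved numerically, after which $\gamma$ and the limiting ratio $\mu/\Delta \rightarrow (n-1)/\alpha$ follow directly. The sketch below, with an illustrative density and a bracketing interval we chose by inspection, is ours rather than the paper's code.

```python
import numpy as np
from scipy.optimize import brentq

L = 128
k = 2*np.pi*(np.arange(L) - L/2)/L
kx, ky = np.meshgrid(k, k)
c2 = (np.cos(kx) - np.cos(ky))**2      # d-wave form factor squared

def alpha_eq(alpha, n):
    """Self-consistency condition: alpha = <1/sqrt(c^2 + ((n-1)/alpha)^2)>."""
    r2 = ((n - 1.0)/alpha)**2
    return alpha - np.mean(1.0/np.sqrt(c2 + r2))

n = 0.3
alpha = brentq(alpha_eq, 0.2, 5.0, args=(n,))
gamma = np.mean(c2/np.sqrt(c2 + ((n - 1)/alpha)**2))
print(f"alpha = {alpha:.4f}, gamma = {gamma:.4f}, "
      f"mu/Delta -> {(n - 1)/alpha:.4f}")
```

Repeating this for a range of densities shows how the infinite-coupling ratio $\mu/\Delta$ interpolates between the dilute and half-filled limits discussed next.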
We have indicated the asymptotic behavior of $\Delta$ at various densities in
the inset of
Fig. \ref{gapdata} by the solid lines. As one increases the
density, the
convergence to the asymptotic behavior is poor for large densities, indicating
that $\mu$ is still of the order of the band energy.
In the strong coupling regime, if
true bosonic characteristics emerge then one expects that $\mu$
should simply reduce to the binding energy per
particle for
the `diatomic' electron molecule. It can readily be shown that only in the
dilute limit does the ratio
$\gamma/\alpha$ approach unity. This indeed leads to the result $\mu=-V/2$,
exactly one half of the binding energy for the two body problem in the strong
coupling
limit \cite{note3}.
On the other hand,
at half filling ($n=1$) $\mu$
clearly remains zero even in the limit of infinite coupling. Remarkably, the
groundstate remains a fermionic superconductor for all coupling strengths and
has
characteristics equivalent to a weak coupling BCS system. For
densities in
between these two limits, an increase in coupling will eventually
reduce the
chemical potential below the bottom of the band, however it will
always be
somewhat greater than the binding energy per particle of the
two-body case.
This general dependence on the density for the BCS-BE crossover in $d$-wave systems
can be viewed as a manifestation of the effect of the exclusion principle.
Due to the symmetry of the pairs, they cannot contract in real space to point
bosons,
but must always retain a finite spatial extent. As the density
increases, the overlap of the pair wavefunctions exerts its influence
through the exclusion principle contributing a positive energy to the
system.
At half filling, this overlap prevents the system from crossing over
to a
system displaying bosonic qualities even in the infinite coupling
limit.
This interpretation can be further supported by a calculation of the
average pair
coherence length. We can define this length $\xi_{0}$ through the
expectation value of the quantity
$
\xi_{0}^{2}=\langle F_{{\bf k}}|-\nabla^{2}_{{\bf k}}|F_{{\bf k}}\rangle/\langle F_{{\bf k}}|F_{{\bf k}}\rangle$
where $F_{{\bf k}}=u_{{\bf k}}^{*}v_{{\bf k}}$ plays the role of the pair wavefunction \cite{Leggett2}.
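A hedged numerical transcription of this definition: with $F_{\bf k}=u^{*}_{\bf k}v_{\bf k}=\Delta(\cos k_x-\cos k_y)/2E_{\bf k}$, the momentum gradient can be taken by finite differences on the BZ grid. The values of $\Delta$ and $\mu$ below are illustrative rather than self-consistent solutions.

```python
import numpy as np

t, L = 1.0, 128
k = 2*np.pi*(np.arange(L) - L/2)/L
dk = k[1] - k[0]
kx, ky = np.meshgrid(k, k, indexing="ij")
xi = -2*t*(np.cos(kx) + np.cos(ky))
c = np.cos(kx) - np.cos(ky)

Delta, mu = 2.0, -1.0                     # illustrative, not self-consistent
E = np.sqrt((xi - mu)**2 + Delta**2 * c**2)
F = Delta * c / (2*E)                     # pair wavefunction u_k* v_k

# xi_0^2 = <F|-grad_k^2|F> / <F|F> = sum |grad_k F|^2 / sum |F|^2
dFx, dFy = np.gradient(F, dk)
xi0 = np.sqrt(np.sum(dFx**2 + dFy**2) / np.sum(F**2))
print(f"xi0 = {xi0:.3f} lattice spacings")
```

Feeding in the self-consistent $(\Delta,\mu)$ at increasing coupling and varying density reproduces the trend of Fig. \ref{corrlength}.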
\begin{center}
\begin{figure}
\includegraphics[width=8cm]{coherence3.eps}
\caption{The coherence length in the strong coupling limit as
a function of density. The lattice spacing has been
set to unity.}
\label{corrlength}
\end{figure}
\end{center}
In Fig. \ref{corrlength} we show the behavior of $\xi_{0}$ in
the
infinite coupling limit as a function of density. It is clear
that a shrinking of the pair size in real space to a compactified
boson spread out
over nearest neighbor sites can only
occur in the dilute limit, where the exclusion principle
becomes irrelevant. For higher density, the overlap of the pair wavefunctions
has the effect of increasing the correlation between particles, which then
increases the average
pair size in order that the total energy of the system can be minimized.
Lastly, we examine the single particle distribution function
$n_{{\bf k}\sigma}$
as a function of the coupling strength. We find that in weak coupling,
the
system behaves as a typical BCS system, with $n_{{\bf k}\sigma}$
resembling
a step function up to a point close to the normal state Fermi
surface, whereupon it becomes smeared over a small region about $\mu=E_{F}$. The
degree of
smearing increases with the coupling until the chemical potential
drops below the bottom of the band. At this point, the behavior of
the single
particle distribution function radically changes.
In Fig. \ref{occupation}, we
compare
contour plots of $n_{{\bf k}\sigma}$ for $n=0.3$ throughout the
BZ for the weak coupling case $V/4t=1$, where $\mu$ is well
approximated by its normal state value, and the stronger coupling case $V/4t=4$
where the
chemical potential falls just below the bottom of the band.
\begin{center}
\begin{figure}
\includegraphics[height=5cm]{n-0.3-V-1.eps}
\includegraphics[height=5cm]{n-0.3-V-4.eps}
\caption{Contour plots within the first BZ of $n_{{\bf k}\sigma}$ for
$V/4t=1$ (top) and $V/4t=4$ (bottom) and $n=0.3$. The brighter the region the larger the value of $n_{{\bf k}\sigma}$. }
\label{occupation}
\end{figure}
\end{center}
In the
weak
coupling case, the Fermi surface of a tight binding band with a small density
per unit volume can
be clearly seen by the bright region in the middle of the plot.
On the other hand, when $\mu$ drops below the bottom of the band and bosonic
degrees of freedom emerge, $n_{{\bf k}\sigma}$ undergoes a redistribution
within the BZ. For $V/4t=4$, the probability for
occupation of
highest momentum states is now found in the regions around $(\pm \pi,0)$
and $(0,\pm\pi)$.
For $\dxy$ pairing, this change in behavior of the single particle
distribution function $n_{{\bf k}\sigma}$ as the chemical potential falls below the minimum of the tight binding band is an interesting feature.
For $s$-wave systems, $n_{{\bf k}\sigma}$ becomes a constant for strong coupling,
representing the Fourier transform of a point internal wavefunction.
Accordingly we can interpret the new structure for $n_{{\bf k}\sigma}$ as the
$d$-wave version of a local pair.
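The redistribution described above follows directly from $n_{{\bf k}\sigma}=v_{\bf k}^{2}=\frac{1}{2}\left[1-(\xi({\bf k})-\mu)/E_{\bf k}\right]$: once the chemical potential is pushed below the band minimum, the occupation peaks near $(\pm\pi,0)$ and $(0,\pm\pi)$, where the $d$-wave form factor is largest. The parameter values in the sketch below are illustrative, not the self-consistent ones.

```python
import numpy as np

t, L = 1.0, 129                       # odd grid so (0,0) and (pi,0) lie on it
k = np.linspace(-np.pi, np.pi, L)
kx, ky = np.meshgrid(k, k, indexing="ij")
xi = -2*t*(np.cos(kx) + np.cos(ky))
c = np.cos(kx) - np.cos(ky)

Delta, mu = 4.0, -5.0                 # mu below the band minimum -4t
E = np.sqrt((xi - mu)**2 + Delta**2 * c**2)
nk = 0.5*(1.0 - (xi - mu)/E)          # single-particle distribution v_k^2

i0 = L//2                             # index of k = 0
ipi = L - 1                           # index of k = pi
print("n(pi,0)  =", nk[ipi, i0])      # largest: d-wave form factor maximal
print("n(0,0)   =", nk[i0, i0])       # vanishes: c = 0 at the zone centre
print("n(pi,pi) =", nk[ipi, ipi])     # vanishes: c = 0 on the zone diagonal
```

With weak coupling ($\mu$ inside the band, small $\Delta$) the same expression instead gives the filled Fermi sea of the top panel of Fig. \ref{occupation}.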
One
may speculate what the effect of this new structure
for $n_{{\bf k}\sigma}$ may be at finite temperatures. Above $T_{c}$, if
a pseudo-gap in the normal state excitation spectrum can arise from
pairing fluctuations in the crossover regime \cite{Trivedi},
then one would expect these fluctuations to
occur predominantly in the region of the BZ which has the largest
probability of occupation by pairs.
On the one hand, it is interesting to note from the
bottom plot in Fig. \ref{occupation} that these regions correspond to the
angular dependence of the pseudo-gap in the underdoped cuprates \cite{Marshall,Loeser}. However, it is still unknown whether these materials correspond
to such a regime.
\section{Summary}
In summary, we find that for $\dxy$ pairing,
only in the dilute
limit is it likely that a BCS-BE crossover can occur, while it is possible
at any
density for $s$-wave systems. If bosonic behavior does emerge, the $\dxy$ symmetry causes the single
particle distribution function to undergo a radical redistribution.
\section{Acknowledgments}
BCdH would like to thank A.J. Berlinsky, M.P. Das, M.J.P. Gingras and
C. Kallin for valuable discussions and comments.
This work was partially funded by the
Australian Commonwealth Government.
\section{Introduction}
\label{sec:intro}
Since the time that Moggi first connected them to effectful
computation~\cite{moggi89computational}, \emph{monads} have proven to be a
surprisingly versatile computational structure. Perhaps best known as
the foundation of Haskell's support for state, I/O, and other effects,
monads have also been used to structure APIs for libraries that
implement a wide range of programming tasks, including
parsers~\cite{hutton98monadic}, probabilistic
computations~\cite{ramsey02stochastic}, and functional
reactivity~\cite{elliot97functional,cooper06father}.
Monads (and morphisms between them) are not a panacea, however, and
so researchers have proposed various extensions. Examples include
Wadler and Thiemann's~\cite{wadler:2003} indexed monad for typing
effectful computations; Filli{\^a}tre's
generalized monads~\cite{filliatre99atheory}; Atkey's parameterized
monad~\cite{atkey09}, which has been used to
encode disciplines like regions~\cite{kiselyov2008lightweight} and
session types~\cite{pucella2008haskell}; Devriese and
Piessens'~\cite{devriese2011information} monad-like encodings for
information flow controls;
and many others. Oftentimes these extensions are needed to prove
stronger properties about computations, for instance to prove the
absence of information leaks or memory errors.
Unfortunately, these extensions do not enjoy the same status as monads
in terms of language support. For example, the conveniences that
Haskell provides for monadic programs (e.g., the \sfont{do} notation
combined with type-class inference) do not apply to these extensions.
One might imagine adding specialized support for each of these
extensions on a case-by-case basis, but a unifying construction into
which all of them, including normal monads, fit is clearly preferable.
This paper proposes just such a unifying construction, making several
contributions. Our first contribution is the
definition of a \emph{polymonad}, a new way to structure effectful
computations. Polymonads give the familiar monadic bind (having type
\ls!forall $a,b$. M $a$ -> ($a$ -> M $b$) -> M $b$!) the more general
type \ls!forall $a,b$. L $a$ -> ($a$ -> M $b$) -> N $b$!. That is, a
polymonadic bind can compose computations with three different types
to a monadic bind's one. Section~\ref{sec:polymonads-alt} defines
polymonads formally, along with the \emph{polymonad laws}, which we
prove are a generalization of the monad and morphism laws. To precisely
characterize their expressiveness, we prove that polymonads correspond
to Tate's \emph{productoids}~\cite{tate12productors} (Theorem~\ref{thm:productoid}), a recent
semantic model general enough to capture most known effect systems,
including all the constructions listed above.\footnote{We
discovered the same model concurrently with Tate and independently
of him, though we have additionally developed supporting algorithms
for (principal) type inference, (provably coherent) elaboration, and
(generality-preserving) simplification. Nevertheless, our presentation here has benefited from
conversations with him.}
Whereas Tate's interest is in semantically modeling sequential
compositions of effectful computations, our interest is in supporting
practical programming in a higher-order language. Our second
contribution is the definition of \lang{}
(Section~\ref{sec:programming}), an ML-like programming language
well-suited to programming with polymonads. We work out several
examples in \lang, including novel polymonadic constructions for
stateful information flow tracking, contextual type and effect
systems~\cite{neamtiu08context}, and session types.
Our examples are made practical by \lang's support for type inference
and elaboration, which allows programs to be written in a familiar
ML-like notation while making no mention of the bind
operators. Enabling this feature, our third contribution
(Section~\ref{sec:syntactic}) is an instantiation of Jones' theory of
qualified types~\cite{jones1992theory} to \lang. In a manner similar
to Haskell's type class inference, we show that type inference for
\lang{} computes \emph{principal types} (Theorem~\ref{thm:oml}).
Our inference algorithm is equipped with an elaboration phase, which
translates source terms by inserting binds where needed.
We prove that elaboration is \emph{coherent}
(Theorem~\ref{thm:coherence}), meaning that when inference produces
constraints that admit several solutions, applying any of these
solutions to the elaborated term yields results with equivalent
semantics, thanks to the polymonad laws. This property allows us to do
better than Haskell, which does not take such laws into account, and
so needlessly rejects programs it thinks might be ambiguous. Moreover,
as we show in Section~\ref{sec:solve}, the polymonad laws allow us to
dramatically simplify types, making them far easier to read without
compromising their generality. A prototype implementation of \lang{}
is available from the first author's web page and has been used to
check all the examples in the paper.
Put together, our work lays the foundation for providing practical
support for advanced monadic programming idioms in typed, functional
languages.
\section{Polymonads}
\label{sec:polymonads-alt}
We begin by defining polymonads formally.
We prove that a polymonad
generalizes a collection of monads and morphisms among those
monads. We also establish a correspondence between polymonads and
productoids, placing our work on a semantic foundation that is known
to be extremely general.
\begin{definition}
\label{def:polymonad}
A \textbf{polymonad} $(\mconstrs, \Sigma)$ consists of (1) a
collection $\mconstrs$ of unary type constructors, with a
distinguished element $\tfont{Bot} \in \mconstrs$, such that
$\kw{Id}~\tau=\tau$, and (2) a collection, $\Sigma$, of $\mybind$
operators such that the laws below hold,
where $\bind{(M,N)}{P} \triangleq$\ls!forall a b. M a -> (a -> N b) -> P b!.
\\[-1ex]
\begin{small}
\[\begin{array}{ll}
\multicolumn{2}{l}{$For all$~\kw{M}, \kw{N}, \kw{P}, \kw{Q}, \kw{R}, \kw{S}, \kw{T}, \kw{U} \in \mconstrs.} \\
$\textbf{(Functor)}$ & \exists \kw{b}. \kw{b}@\bind{(M,\tfont{Bot})}{M} \in \Sigma ~$and$~\kw{b}~\mbox{\ls!m!}~(\lambda \kw{y}.\kw{y}) = \mbox{\ls!m!} \\[1ex]
$\textbf{(Paired morphisms)}$ & \exists \kw{b}_1@\bind{(M,\tfont{Bot})}{N} \in \Sigma \iff \exists \kw{b}_2@\bind{(\tfont{Bot}, M)}{N} \in \Sigma~\mbox{\emph{and}} \\
& \forall \kw{b}_1@\bind{(M,\tfont{Bot})}{N}, \kw{b}_2@\bind{(\tfont{Bot}, M)}{N}. \kw{b}_1\, \mbox{\ls!(f v)!}~(\lambda \kw{y}.\kw{y}) = \kw{b}_{2}~\mbox{\ls!v f!} \\[1ex]
$\textbf{(Diamond)}$ & \exists \kw{P},\kw{b}_1,\kw{b}_2. \aset{\kw{b}_1@\bind{(M,N)}{P}, \kw{b}_2@\bind{(P,R)}{T}} \subseteq \Sigma \; \iff \\
& \exists \kw{S},\kw{b}_3,\kw{b}_4. \aset{\kw{b}_3@\bind{(N,R)}{S}, \kw{b}_4@\bind{(M,S)}{T}} \subseteq \Sigma \\[1ex]
$\textbf{(Associativity)}$ & \forall \kw{b}_1,\kw{b}_2,\kw{b}_3,\kw{b}_4. $If$~\\
& \aset{\kw{b}_1@\bind{(M,N)}{P}, \kw{b}_2@\bind{(P,R)}{T}, \kw{b}_3@\bind{(N,R)}{S}, \kw{b}_4@\bind{(M,S)}{T}}\subseteq\Sigma\\
& $then$~\kw{b}_2~(\kw{b}_1~m~f)~g = \kw{b}_4~m~(\lambda x. \kw{b}_3~(f~x)~g) \\[1ex]
$\textbf{(Closure)}$ & $If$~\exists \kw{b}_1,\kw{b}_2,\kw{b}_3,\kw{b}_4. \\
& \aset{\kw{b}_1@\bind{(\kw{M}, \kw{N})}{\kw{P}},
\kw{b}_2@\bind{\kw{(S,Id)}}{\kw{M}},
\kw{b}_3@\bind{\kw{(T,Id)}}{\kw{N}},
\kw{b}_4@\bind{\kw{(P,Id)}}{\kw{U}}} \subseteq \Sigma \\
& $then$~\exists \kw{b}. \kw{b}@\bind{(\kw{S}, \kw{T})}{\kw{U}} \in \Sigma
\end{array}\]
\end{small}
\end{definition}
Definition~\ref{def:polymonad} may look a little austere, but there is
a simple refactoring that recovers the structure of functors and monad
morphisms from a polymonad.\footnote{An online version of this paper
provides an equivalent formulation of Definition~\ref{def:polymonad}
in terms of join operators instead of binds. It can be found here:
\url{http://research.microsoft.com/en-us/um/people/nswamy/papers/polymonads.pdf}.
The join-based definition is perhaps more natural for a
reader with some familiarity with category theory; the bind-based
version shown here is perhaps more familiar for a functional
programmer.} Given $(\mathcal{M},\Sigma)$, we can easily
construct the following sets:
\begin{small}
\[\begin{array}{llcl}
$(Maps)$ & M & = & \aset{(\lambda f m. \mybind~m~f)\colon \kw{(a -> b) -> M a -> M b} \mid \mybind\colon\bind{(\mconst,\tfont{Bot})}{\mconst} \in \Sigma}\\
$(Units)$ & U & = & \aset{(\lambda x. \mybind~x~(\lambda y.y))\colon \kw{a -> M a} \mid \mybind\colon\bind{(\tfont{Bot},\tfont{Bot})}{M} \in \Sigma}\\
$(Lifts)$ & L & = & \aset{(\lambda x. \mybind~x~(\lambda y.y))\colon \kw{M a -> N a} \mid \mybind\colon\bind{(\mconst,\tfont{Bot})}{\mconst[N]} \in \Sigma}\\
\end{array}\]
\end{small}
It is fairly easy to show that the above structure satisfies
generalizations of the familiar laws for monads and monad
morphisms. For example, one can prove $\mybind~(\myunit~e)~ f = f~e$,
and $\mylift~(\myunit_1~e) = \ensuremath{\mathsf{unit}}_2~e$ for all suitably typed
$\myunit_1,\myunit_2 \in U$, $\mylift \in L$ and $\mybind \in
\Sigma$.
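The construction above can be made executable. The following Python sketch (ours, purely illustrative) instantiates \textbf{(Maps)}, \textbf{(Units)}, and \textbf{(Lifts)} for binds over two constructors, modeling \ls$Maybe$ as \code{None} or a tagged pair and lists as Python lists:

```python
# Deriving map, unit, and lift from binds, following the (Maps), (Units),
# and (Lifts) construction (a sketch with M = Maybe, N = list; names ours).
# Maybe values are None (Nothing) or ("Just", v); Bot t = t is implicit.

def bind_mb(x, f):              # (Bot, Bot)   |> Maybe
    return ("Just", f(x))

def bind_lb(x, f):              # (Bot, Bot)   |> list
    return [f(x)]

def bind_mm(m, f):              # (Maybe, Bot) |> Maybe
    return None if m is None else ("Just", f(m[1]))

def bind_ml(m, f):              # (Maybe, Bot) |> list: a morphism Maybe |> list
    return [] if m is None else [f(m[1])]

# (Maps):  map f m = bind m f;  (Units): unit x = bind x id;
# (Lifts): lift m = bind m id.
identity = lambda y: y

def map_maybe(f, m):
    return bind_mm(m, f)

def unit_maybe(x):
    return bind_mb(x, identity)

def unit_list(x):
    return bind_lb(x, identity)

def lift_ml(m):
    return bind_ml(m, identity)
```

On these definitions, the morphism law mentioned above, $\mylift~(\myunit_1~e) = \myunit_2~e$, becomes the checkable equation \code{lift\_ml(unit\_maybe(e)) == unit\_list(e)}.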
With these intuitions in mind, one can see that the \textbf{Functor}
law ensures that each $\mconst \in \mconstrs$ has a \ls$map$ in $M$, as
expected for monads.
From the construction of $L$, one can see that a bind
$\bind{(M,\tfont{Bot})}{N}$ is just a morphism from $\mconst$ to
\ls$N$. Since this comes up quite often, we write
$\morph{\mconst}{\kw{N}}$ as a shorthand for $\bind{(M,\tfont{Bot})}{N}$.
The \textbf{Paired morphisms} law amounts to a coherence condition
that all morphisms can be re-expressed as binds.
The \textbf{Associativity} law is the familiar associativity law
for monads generalized for both our more liberal typing for bind
operators and for the fact that we have a \emph{collection} of binds
rather than a single bind. The \textbf{Diamond} law
essentially guarantees a coherence property for associativity, namely
that it is always possible to complete an application of
\textbf{Associativity}.
The \textbf{Closure} law ensures closure under composition of monad morphisms
with binds, also for coherence.
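Concretely, a smallest nontrivial example is a polymonad over \ls$Id$ and \ls$Maybe$. The following Python sketch (ours, not part of the formal development; \ls$Maybe$ values are \code{None} or a tagged pair, and $\kw{Id}~t = t$ is left implicit) lists one bind of each required shape and checks an instance of \textbf{Associativity}:

```python
# A concrete polymonad sketch: constructors Id and Maybe. Each bind has
# the shape  (M, N) |> P  ==  M a -> (a -> N b) -> P b.

def just(x):
    return ("Just", x)

def b_id(m, f):                 # (Id, Id)       |> Id
    return f(m)

def unit_maybe(m, f):           # (Id, Id)       |> Maybe
    return just(f(m))

def map_maybe(m, f):            # (Maybe, Id)    |> Maybe
    return None if m is None else just(f(m[1]))

def app_maybe(m, f):            # (Id, Maybe)    |> Maybe
    return f(m)

def bind_maybe(m, f):           # (Maybe, Maybe) |> Maybe
    return None if m is None else f(m[1])

def assoc_holds(m, f, g):
    # One Associativity instance, with b1 = b2 = b3 = b4 = bind_maybe:
    #   b2 (b1 m f) g == b4 m (\x -> b3 (f x) g)
    return bind_maybe(bind_maybe(m, f), g) == \
           bind_maybe(m, lambda x: bind_maybe(f(x), g))
```

Here \code{assoc\_holds} tests one shape of the law on sample values; the \textbf{Diamond} law guarantees that such a re-association is always available.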
It is easy to prove that every collection of monads and monad
morphisms is also a polymonad. In fact, in
Appendix~\ref{sec:productoids}, we prove a stronger result
that relates polymonads to Tate's \emph{productoids}~\cite{tate12productors}.
\iffull
\begin{lemma}
\label{lemma:monad-is-polymonad}
If $(\kw{M}, \mymap, \myunit, \mybind)$ is a monad then $(\{\kw{M},
\kw{Id}\}, \{b_1, b_2, b_3, b_4\})$ is a polymonad where
$b_1=\lambda x\colon\kw{M a}.\lambda f\colon\kw{a->Id b}. \kw{map}~f~x$,
$b_2=\lambda x\colon\kw{Id a}.\lambda f\colon\kw{a -> M b}. f~x$,
$b_3=\mybind$,
$b_4=\lambda x\colon\kw{Id a}.\lambda f\colon\kw{a -> Id b}. \kw{unit}~(f~x)$.
\end{lemma}
\fi
\begin{theorem}
\label{thm:productoid}
Every polymonad gives rise to a productoid, and every productoid that
contains an \ls$Id$ element and whose joins are closed with respect to
the lifts, is a polymonad.
\end{theorem}
Tate developed productoids as a categorical
foundation for effectful computation. He
demonstrates the expressive power of productoids by showing how they
subsume other proposed extensions to
monads~\cite{wadler:2003,filinski1999representing,atkey09}. This
theorem shows polymonads can be soundly interpreted using
productoids. Strictly speaking, productoids are more expressive than
polymonads, since they do not, in general, need to have an \ls$Id$
element, and only satisfy a slightly weaker form of our
\textbf{Closure} condition. However, these restrictions are mild, and
certainly in categories that are Cartesian closed, these conditions
are trivially met for all productoids. Thus, for programming purposes,
polymonads and productoids have exactly the same expressive power.
The development of the rest of this paper shows, for the first time,
how to harness this expressive power in a higher-order programming
language, tackling the problem of type inference, elaborating a
program while inserting binds, and proving elaboration coherent.
\section{Programming with polymonads}
\label{sec:programming}
\begin{figure}[t]
\[\begin{array}{lllcl}
$\textit{Signatures}$ (\mconstrs,\Sigma):
&k$-ary constructors$ & \mconstrs & ::= & \cdot \mid M/k, \mconstrs\\
&$ground constructor$ & \gm & ::= & M~\overline{\tau} \\
&$bind set$ & \Sigma & ::= & \cdot \mid \sfont{b}@s, \Sigma \\
&$bind specifications$ & s & ::= & \forall\bar{a}. \Phi \Rightarrow \bind{(\gm_1,\gm_2)}{\gm_3} \\
&$theory constraints $ & \Phi & \\[2ex]
$\textit{Terms:}$ & $values$ & v & ::= & x \mid c \mid \slam{x}{e} \\
& $expressions$ & e & ::= & v \mid \sapp{e_1}{e_2} \mid \slet{x}{e_1}{e_2} \\
& & & \mid & \sif{e}{e_1}{e_2} \mid \sletrec{f}{v}{e} \\[2ex]
$\textit{Types:}$ & $monadic types$ & m & ::= & \gm \mid \mvar\\
& $value types$ & \tau & ::= & a \mid T\, \overline{\tau} \mid \tfun{\tau_1}{\tapp{m}{\tau_2}} \\
& $type schemes$ & \sigma & ::= & \forall \bar{a}\bar\mvar. \Binds => \tau \\
& $bag of binds$ & \Binds & ::= & \cdot \mid \pi, \Binds \\
& $bind type$ & \pi & ::= & \tbind{(m_1,m_2)}{m_3}
\end{array}\]
\caption{\lang: Syntax for signatures, types, and terms}
\label{fig:lang}
\end{figure}
\newcommand{\theorysays}{\ensuremath{\vDash}}
\newcommand\wadler{\ensuremath{W}}
\renewcommand\mconst[1][M]{\ensuremath{\text{#1}}}
This section presents \lang, an ML-like language for programming with
polymonads. We also present several examples that provide a flavor of
programming in \lang. As such, we aim to keep our examples as simple
as possible while still showcasing the broad applicability of
polymonads. For a formal characterization of the expressiveness of
polymonads, we appeal to Theorem~\ref{thm:productoid}.
\paragraph{Polymonadic signatures.} A \lang{} \emph{polymonadic
signature} $(\mathcal{M}, \Sigma)$ (Figure~\ref{fig:lang}) amends
Definition~\ref{def:polymonad} in two ways. Firstly, each element
$M$ of $\mathcal{M}$ may be \emph{type-indexed}---we write
$M/k$ to indicate that $M$ is a $(k+1)$-ary type
constructor (we sometimes omit $k$ for brevity). For example,
constructor $\wadler/1$ could represent an effectful computation so
that $\wadler\;\epsilon\;\tau$ characterizes computations of type $\tau$
that have effect $\epsilon$. Type indexed constructors (rather than
large enumerations of non-indexed constructors) are critical for
writing reusable code, e.g., so we can write functions
like $\sfont{app}: \forall a,b,\epsilon. (a \rightarrow
\wadler\;\epsilon\;b) \rightarrow a \rightarrow \wadler\;\epsilon\;b$.
We write $\gm$ to denote \emph{ground
constructors}, which are monadic constructors applied to all their
type indexes; e.g., $\wadler\;\epsilon$ is ground. Secondly, a bind
set $\Sigma$ is not specified intensionally as a set, but rather
extensionally using a language of \emph{theory constraints} $\Phi$. In
particular, $\Sigma$ is a list of mappings $\sfont{b}@s$ where $s$
contains a triple $\bind{(\gm_1,\gm_2)}{\gm_3}$ along with constraints
$\Phi$, which determine how the triple's constructors may be instantiated. For
example, a mapping $\sfont{sube}: \forall \varepsilon_1,
\varepsilon_2.\, \varepsilon_1 \subseteq \varepsilon_2 \Rightarrow
\tbind{(\wadler\;\varepsilon_1, \tfont{Bot}) }{\wadler\,\varepsilon_2}$
specifies the set of binds involving type indexes $\varepsilon_1,
\varepsilon_2$ such that the theory constraint $\varepsilon_1
\subseteq \varepsilon_2$ is satisfied.
\lang's type system is parametric in the
choice of theory constraints $\Phi$, which allows us to encode a
variety of prior monad-like systems with \lang.
To interpret a particular set of constraints, \lang{} requires a theory
entailment relation \theorysays. Elements of this relation, written
$\Sigma \theorysays \pi \leadsto \sfont{b}; \theta$, state that there
exists $\sfont{b}@\forall\bar{a}. \Phi \Rightarrow
\bind{(\gm_1,\gm_2)}{\gm_3}$ in $\Sigma$ and a substitution $\theta'$
such that $\theta\pi = \theta'\bind{(\gm_1,\gm_2)}{\gm_3}$, and the
constraints $\theta'\Phi$ are satisfiable.
Here, $\theta$ is a substitution for the free
(non-constant) variables in $\pi$, while $\theta'$ is an instantiation
of the abstracted variables in the bind specification. Thus, the
interpretation of $\Sigma$ is the following set of binds:
$\aset{\sfont{b}@\pi \mid \Sigma \theorysays \pi \leadsto \sfont{b};
\cdot}$. Signature $(\mathcal{M}, \Sigma)$ is a polymonad if this
set satisfies the polymonad laws (where each ground constructor is
treated distinctly).
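For instance, instantiating the $\sfont{sube}$ specification above at concrete effect-set indices reduces entailment to a subset check. The following Python sketch (ours; the paper's entailment relation $\theorysays$ is a declarative judgment, not this function) makes that reading concrete:

```python
# Deciding membership of a ground bind in the interpretation of Sigma,
# for the single specification
#   sube : forall e1 e2. e1 <= e2  =>  (W e1, Bot) |> W e2
# (a sketch; effect indices are modeled as Python sets of atomic names).

def sube_matches(e1, e2):
    """The ground triple (W e1, Bot) |> W e2 is in the interpretation of
    Sigma exactly when the instantiated constraint e1 <= e2 holds."""
    return frozenset(e1) <= frozenset(e2)
```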
Our intention is that type indices are \emph{phantom}, meaning that
they are used as a type-level representation of some property of the
polymonad's current state, but a polymonadic bind's implementation
does not depend on them. For example, we would expect that binds
treat objects of type $\wadler\,\varepsilon\,\tau$ uniformly, for all
$\varepsilon$; different values of $\varepsilon$ could statically
prevent unsafe operations like double-frees or dangling pointer
dereferences. Of course, a polymonad may include other constructors
distinct from $\wadler$ whose bind operators could have a completely
different semantics. For example, if an object has different states
that would affect the semantics of binds, or if other effectful
features like exceptions were to be modeled, the programmer can use a
different constructor $M$ for each such feature. As such, our
requirement that the type indices are phantom does not curtail
expressiveness.
\paragraph{Terms and types.}
\lang's term language is standard. \lang{} programs do not explicitly
reference binds, but are written in \emph{direct style}, with implicit
conversions between computations of type $m\;\tau$ and their
$\tau$-typed results. Type inference determines the bind
operations to insert (or abstract) to type check a program.
To make inference feasible, we rely crucially on \lang's call-by-value
structure. Following our prior work on monadic programming for
ML~\cite{swamy11monadICFP}, we
restrict the shape of types assignable to a \lang{} program by
separating value types $\tau$ from the types of polymonadic
computations $m~\tau$. Here, metavariable $m$ may be either a ground
constructor $\gm$ or a polymonadic type variable $\mvar$. The co-domain of
every function is required to be a computation type $m~\tau$, although
pure functions can be typed $\tau -> \tau'$, which is a synonym for $\tau
-> \tfont{Bot}~\tau'$. We also include types $T~\bar\tau$ for fully applied
type constructors, e.g., $\sfont{list}~\tfont{int}$.
Programs can also be given type schemes $\sigma$ that are polymorphic
in their polymonads, e.g., $\forall a,b,\mvar.$ $(a -> \mvar\,b) -> a ->
\mvar\,b$. Here, the variable $a$ ranges over value types $\tau$,
while $\mvar$ ranges over ground constructors $\gm$. Type schemes may also
be qualified by a bag $\Binds$ of bind constraints $\pi$. For example,
$\forall \mvar. \bind{(\mvar,\tfont{Bot})}{\gm} \Rightarrow (\tfont{int} ->
\mvar~\tfont{int}) -> \gm~\tfont{int}$ is the type of a function that
abstracts over a bind having shape $\bind{(\mvar,\tfont{Bot})}{\gm}$.
Notice that $\pi$ triples may contain polymonadic type variables
$\mvar$ while specification triples $s \in \Sigma$ may not. Moreover,
$\Phi$ constraints never appear in $\sigma$, which is thus entirely
independent of the choice of the theory.
\subsection{Polymonadic information flow controls}
\label{sec:ist-example}
Polymonads are appealing because they can
express many interesting constructions as we now show.
\newcommand{\entSec}{\Vdash}
\newcommand\intref{\ensuremath{\mathit{intref}}}
Figure~\ref{fig:ist} presents a polymonad $\ensuremath{\mbox{\textit{IST}}}$, which implements
\emph{stateful} information flow
tracking~\cite{devriese2011information,russo08lightweight,li2006encoding,crary2005monadic,abadi1999dcc}.
The idea is that some program values are secret and some are public,
and no information about the former should be learned by observing the
latter---a property called
noninterference~\cite{goguen1982security}. In the setting of $\ensuremath{\mbox{\textit{IST}}}$,
we are worried about leaks via the heap.
Heap-resident storage cells are given type $\intref\;l$ where $l$ is
the secrecy label of the referenced cell. Labels $l \in
\aset{\tfont{L},\tfont{H}}$ form a lattice with order $\tfont{L} \sqsubset \tfont{H}$. A program
is acceptable if data labeled $\tfont{H}$ cannot flow, directly or indirectly,
to computations or storage cells labeled $\tfont{L}$. In our polymonad
implementation, $\tfont{L}$ and $\tfont{H}$ are just types $T$ (but only ever
serve as indexes), and the lattice ordering is implemented by theory
constraints $l_1 \sqsubseteq l_2$ for $l_1,l_2 \in \aset{\tfont{L},\tfont{H}}$.
\newcommand{\lfont}[1]{{\tiny \mathit{#1}}}
\begin{figure}[t]
\small
\centering
\begin{tabular}{ll}
\begin{minipage}{3.1in}
\noindent
$\ensuremath{\!\!\!\!}\begin{array}{l@{~}c@{~}ll}
\multicolumn{4}{l}{\!\!$\textit{Signature:}$} \\
\mconstrs & = & \ensuremath{\mbox{\textit{IST}}}/2 \\
\Phi & ::= & \multicolumn{2}{l}{l_1 \sqsubseteq l_2 \mid \Phi_1,\Phi_2} \\
\Sigma & = & \sfont{bId} : & \morph{\tfont{Bot}}{\tfont{Bot}}, \\
& & \sfont{unitIST}: & \forall p,l.\,\morph{\tfont{Bot}}{\ensuremath{\mbox{\textit{IST}}}\;p\;l},\\
& & \sfont{mapIST}: &\forall p_1,l_1,p_2,l_2.\, p_2
\sqsubseteq p_1, l_1\sqsubseteq l_2 \Rightarrow \\
& & & \morph{\ensuremath{\mbox{\textit{IST}}}\;p_1\;l_1}{\ensuremath{\mbox{\textit{IST}}}\;p_2\;l_2},\\
& & \sfont{appIST}: & \forall p_1,l_1,p_2,l_2.\, p_2
\sqsubseteq p_1, l_1\sqsubseteq l_2 \Rightarrow \\
& & & \bind{(\tfont{Bot},\ensuremath{\mbox{\textit{IST}}}\;p_1\;l_1)}{\ensuremath{\mbox{\textit{IST}}}\;p_2\;l_2}, \\
& & \sfont{bIST}: &\forall p_1,l_1,p_2,l_2,p_3,l_3. \\
& & & l_1 \sqsubseteq p_2, l_1 \sqsubseteq l_3, l_2
\sqsubseteq l_3, \\
& & & p_3 \sqsubseteq p_1, p_3 \sqsubseteq p_2
\Rightarrow \\
& & & \bind{(\ensuremath{\mbox{\textit{IST}}}\; p_1\; l_1,\ensuremath{\mbox{\textit{IST}}}\; p_2\; l_2)}{\ensuremath{\mbox{\textit{IST}}}\; p_3\; l_3}
\end{array}$
\end{minipage}
&
\begin{minipage}{2.6in}
\begin{tabular}{l}
$\ensuremath{\!\!\!\!}\begin{array}{ll}
\multicolumn{2}{l}{\!\!$\textit{Types and auxiliary functions:}$} \\
\tau : & ... \mid \intref~\tau \mid \tfont{L} \mid \tfont{H} \\
\code{read} : & \forall l.\, \intref~l \rightarrow \ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\; l\; \tfont{int} \\
\code{write} : & \forall l.\, \intref~l \rightarrow \tfont{int} \rightarrow \ensuremath{\mbox{\textit{IST}}}\; l\; \tfont{L}\; ()
\end{array}$\\
~\\
\noindent
$\ensuremath{\!\!\!\!}\begin{array}{ll}
\multicolumn{2}{l}{\!\!$\textit{Example program:}$} \\
\multicolumn{2}{l}{\kw{let add_interest = lam savings. lam interest.}} \\
& \kw{let currinterest = read interest in} \\
& \kw{if currinterest > 0 then} \\
& \quad\kw{let currbalance = read savings in}\\
& \quad\kw{let newbalance =}\\
& \quad \quad \kw{currbalance + currinterest in}\\
& \quad\kw{write savings newbalance} \\
& \kw{else ()}
\end{array}$
\end{tabular}
\end{minipage}
\end{tabular}
\caption{Polymonad $\ensuremath{\mbox{\textit{IST}}}$, implementing stateful
information flow control}
\label{fig:ist}
\end{figure}
The polymonadic constructor $\ensuremath{\mbox{\textit{IST}}}/2$ uses secrecy labels for its type
indexes. A computation with type $\ensuremath{\mbox{\textit{IST}}}\;p\;l\;\tau$ potentially writes
to references labeled $p$ and returns a $\tau$-result that has security
label $l$; we call $p$ the \emph{write label} and $l$ the \emph{output
label}. Function \ls$read$ reads a storage cell, producing a
$\ensuremath{\mbox{\textit{IST}}}\; H\; l\; \tfont{int}$ computation---the second type index $l$
matches that of $l$-labeled storage cell. Function \ls$write$ writes
a storage cell, producing a $\ensuremath{\mbox{\textit{IST}}}\; l\; L\; ()$ computation---the
first type index $l$ matches the label of the written-to storage
cell. $\tfont{H}$ is the most permissive write label and so is used for the
first index of \ls$read$, while $\tfont{L}$ is the most permissive output
label and so is used for the second index of \ls$write$.
Aside from the identity bind $\sfont{bId}$, implemented as reverse
apply, there are four kinds of binds. Unit $\sfont{unitIST}\;p\;l$
lifts a normal term into an $\ensuremath{\mbox{\textit{IST}}}$ computation. Bind
$\sfont{mapIST}\;p\;l$ lifts a computation into a more permissive
context (i.e., $p_2$ and $l_2$ are at least as permissive as $p_1$ and
$l_1$), and $\sfont{appIST}\;p\;l$ does likewise; the latter is
implemented using $\sfont{mapIST}$ as follows: $\sfont{appIST}\;p\;l =
\lambda x. \lambda f. \sfont{mapIST}\;p\;l\; (f\;x)\;(\lambda
x.x)$. Finally, bind $\sfont{bIST}$ composes a computation
$\ensuremath{\mbox{\textit{IST}}}\;p_1\;l_1~\alpha$ with a function $\alpha ->
\ensuremath{\mbox{\textit{IST}}}\;p_2\;l_2~\beta$. The constraints ensure safe information flow:
$l_1 \sqsubseteq p_2$ prevents the second computation from leaking
information about its $l_1$-secure $\alpha$-typed argument into a
reference cell that is less than $l_1$-secure. Dually, the constraints
$l_1 \sqsubseteq l_3$ and $l_2 \sqsubseteq l_3$ ensure that the
$\beta$-typed result of the composed computation is at least as secure
as the results of each component. The final constraints $p_3
\sqsubseteq p_1$ and $p_3 \sqsubseteq p_2$ ensure that the write label
of the composed computation is a lower bound of the labels of each
component.
Proving $(\mathcal{M},\Sigma)$ satisfies the polymonad laws is
straightforward. The functor and paired morphism laws are
easy to prove. The diamond law is more tedious: we
must consider all possible pairs of binds that
compose. This reasoning involves consideration of the theory
constraints as implementing a lattice, and so would work for any
lattice of labels, not just $\tfont{H}$ and $\tfont{L}$. In all, there were ten
cases to consider. We prove the associativity law for the same ten
cases. This proof is straightforward as the implementation of $\ensuremath{\mbox{\textit{IST}}}$
ignores the indexes: \code{read}, \code{write} and various binds are
just as in a normal state monad, while the indexes serve only to
prevent illegal flows. Finally, proving closure is relatively
straightforward---we start with each possible bind shape and then
consider correctly-shaped flows into its components; in all there were
eleven cases.
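The lattice reasoning behind the \textbf{Diamond} case can also be checked mechanically. The following Python sketch (ours) enumerates all $\sfont{bIST}$ shapes over the two-point lattice and confirms that every composable pair of binds can be re-associated:

```python
# Exhaustively checking the Diamond law for bIST over the lattice {L, H}
# (a sketch; label indices are enumerated rather than reasoned about).
from itertools import product

L, H = 0, 1                       # the order L below H, i.e. numeric <=

def leq(a, b):
    return a <= b

def b_ist(m, n, p):
    # bIST : (IST p1 l1, IST p2 l2) |> IST p3 l3 exists iff these hold:
    (p1, l1), (p2, l2), (p3, l3) = m, n, p
    return (leq(l1, p2) and leq(l1, l3) and leq(l2, l3)
            and leq(p3, p1) and leq(p3, p2))

IDX = list(product([L, H], repeat=2))   # all (write label, output label) pairs

def diamond_ok():
    # Diamond, restricted to bIST/bIST compositions: whenever (M,N)|>P and
    # (P,R)|>T exist, some S gives (N,R)|>S and (M,S)|>T.
    for m, n, p, r, t in product(IDX, repeat=5):
        if b_ist(m, n, p) and b_ist(p, r, t):
            if not any(b_ist(n, r, s) and b_ist(m, s, t) for s in IDX):
                return False
    return True
```

The same enumeration also confirms the intended flow restriction: no bind shape composes a computation with output label $\tfont{H}$ with one whose write label is $\tfont{L}$.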
\paragraph{Example.}
The lower right of Figure~\ref{fig:ist} shows an example use of
$\ensuremath{\mbox{\textit{IST}}}$. The
$\kw{add_interest}$ function takes two reference cells, $\kw{savings}$
and $\kw{interest}$, and modifies the former by adding to it the
latter if it is positive.\footnote{For ease of presentation, the program
in Figure~\ref{fig:ist} uses \ls$let$ to sequence computations. This is not
essential, e.g., we need not have \ls$let$-bound \ls$currbalance$.} Notice that
expressions of type $\ensuremath{\mbox{\textit{IST}}}\;p\;l\;\tau$ are used as if they merely had
type $\tau$---see the branch on \ls|currinterest|, for
example. The program is rewritten during type inference to insert, or
abstract, the necessary binds so that the program type checks. This
process results in the following type for
\ls$add_interest$:\footnote{This and other example types were
generated by our prototype implementation.}
\[\small\begin{array}{l@{~}l}
\multicolumn{2}{l}{\forall \mvar_6,\mvar_{27}, a_1, a_2. \Binds =>
\lfont{\intref\;a_1 \rightarrow \intref\;a_2 \rightarrow
\mvar_{27}\;()}} \\
\text{where } \Binds = &
\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{6}}, \bind{(\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\; a_1, \ensuremath{\mbox{\textit{IST}}}\; a_1\; \tfont{L})}{\mvar_{6}}, \bind{(\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\; a_2,\mvar_{6})}{\mvar_{27}}
\end{array}
\]
The rewritten version of \ls$add_interest$ starts with a sequence of
$\lambda$ abstractions, one for each of the bind constraints in
$\Binds$. If we imagine these are numbered $\sfont{b1}$
... $\sfont{b3}$, e.g., where $\sfont{b1}$ is a bind with type
$\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{6}}$, then the term looks as follows
(notation $\kw{...}$ denotes code elided for simplicity):
\begin{lstlisting}
lam savings. lam interest. b3 (read interest)
(lam currinterest. if currinterest > 0 then (b2 ...) else (b1 () (lam z. z)))
\end{lstlisting}
In a program that calls \ls|add_interest|, the bind constraints will
be solved, and actual implementations of these binds will be passed in
for each of $\sfont{b}_i$ (using a kind of dictionary-passing style as
with Haskell type classes).
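To illustrate the dictionary-passing analogy, the following Python sketch (ours, written by hand rather than produced by the prototype) renders the elaborated \ls|add_interest| with the three binds as explicit parameters; $\ensuremath{\mbox{\textit{IST}}}$ is modeled as a state function over a two-cell heap, with the label indices erased since they are phantom:

```python
# Hand-elaborated add_interest in dictionary-passing style (a sketch).
# IST p l a is modeled, indices erased, as heap -> (result, heap), where
# the heap is the pair (savings, interest).

def read_savings(heap):
    return heap[0], heap

def read_interest(heap):
    return heap[1], heap

def write_savings(v):
    return lambda heap: (None, (v, heap[1]))

def add_interest(b1, b2, b3):
    # The three bind constraints of the inferred type become parameters:
    #   b1 : (Bot, Bot) |> mu6,  b2 : (IST H a1, IST a1 L) |> mu6,
    #   b3 : (IST H a2, mu6) |> mu27
    def body(ci):
        if ci > 0:
            return b2(read_savings, lambda cb: write_savings(cb + ci))
        return b1(None, lambda z: z)
    return b3(read_interest, body)

# A call site solves the constraints; with indices erased, b2 and b3 are
# the state monad's bind and b1 is a unit.
def bind_ist(m, f):
    def run(heap):
        x, heap2 = m(heap)
        return f(x)(heap2)
    return run

def unit_ist(x, f):
    return lambda heap: (f(x), heap)
```

Running \code{add\_interest(unit\_ist, bind\_ist, bind\_ist)} on the heap $(100, 5)$ yields the heap $(105, 5)$; with a non-positive interest cell the heap is unchanged.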
Looking at the type of \ls|add_interest| we can see how the
constraints prevent improper information flows. In particular, if we
tried to call \kw{add_interest} with $a_1 = \tfont{L}$ and $a_2 = \tfont{H}$, then
the last two constraints become $\bind{(\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\; \tfont{L}, \ensuremath{\mbox{\textit{IST}}}\;
\tfont{L}\; \tfont{L})}{\mvar_{6}}, \bind{(\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\;
\tfont{H},\mvar_{6})}{\mvar_{27}}$, and so we must instantiate $\mvar_6$
and $\mvar_{27}$ in a way allowed by the signature in
Figure~\ref{fig:ist}. While we can legally instantiate $\mvar_6 =
\ensuremath{\mbox{\textit{IST}}}\;\tfont{L}\;l_3$ for any $l_3$ to solve the second constraint, there
is then no possible instantiation of $\mvar_{27}$ that can solve the
third constraint. After substituting for $\mvar_6$, this constraint
has the form $\tbind{(\ensuremath{\mbox{\textit{IST}}}\;\tfont{H}\; \tfont{H}, \ensuremath{\mbox{\textit{IST}}}\;\tfont{L}\;l_3)}{\mvar_{27}}$,
but this form is unacceptable because the $\tfont{H}$ output of the first
computation could be leaked by the $\tfont{L}$ side effect of the second
computation. On the other hand, all other instantiations of $a_1$ and
$a_2$ (e.g., $a_1 = \tfont{H}$ and $a_2 = \tfont{L}$ to correspond to a secret
savings account but a public interest rate) do have solutions and do
not leak information.
Having just discussed the latter two constraints, consider the
first, $\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{6}}$. This constraint is
important because it says that $\mvar_6$ must have a unit, which is
needed to properly type the else branch; units are not required of a
polymonad in general.
The type given above for \ls|add_interest| is not its principal type, but an
\emph{improved} one. As it turns out, the principal type is
basically unreadable, with 19 bind constraints! Fortunately,
Section~\ref{sec:solve} shows how some basic rules can greatly
simplify types without reducing their applicability, as has been done
above. Moreover, our coherence result (given in the next section)
assures that the corresponding changes to the elaborated term do not
depend on the particular simplifications: the polymonad laws ensure
all such elaborations will have the same semantics.
\subsection{Contextual type and effect systems}
\newcommand{\CE}{\ensuremath{\mbox{\textit{CE}}}}
Wadler and Thiemann~\cite{wadler:2003} showed how a monadic-style
construct can be used to model type and effect systems. Polymonads
can model standard effect systems, but more interestingly can be used
to model \emph{contextual effects}~\cite{neamtiu08context}, which
augment traditional effects with the notion of \emph{prior}
and \emph{future} effects of an expression within a broader
context. As an example, suppose we are using a language that
partitions memory into \emph{regions} $R_1, ..., R_n$ and reads/writes
of references into region $R$ have effect $\aset{R}$. Then in the
context of the program $\kw{read}\;r_1; \kw{read}\;r_2$, where $r_1$
points into region $R_1$ and $r_2$ points into region $R_2$, the
contextual effect of the subexpression $\kw{read}\;r_1$ would be the
triple $[ \emptyset; \aset{R_1}; \aset{R_2} ]$: the prior effect is
empty, the present effect is $\aset{R_1}$, and the future effect is
$\aset{R_2}$.
\begin{figure}[t]
\hspace*{-.1in}
\begin{tabular}{ll}
\begin{minipage}{2.9in}
\noindent
$\begin{array}{lcl}
\mconstrs & = & \CE/3 \\
\Sigma & = & \sfont{bId} : \tbind{(\tfont{Bot},\tfont{Bot})}{\tfont{Bot}}, \\
&&\sfont{unitce}: \tbind{(\tfont{Bot},\tfont{Bot})}
{\CE\,\top\,\emptyset\,\top}\\
&& \sfont{appce}: \forall
\alpha_1,\alpha_2,\epsilon_1,\epsilon_2,\omega_1,\omega_2. \\
&& \quad (\alpha_2 \subseteq \alpha_1, \epsilon_1 \subseteq \epsilon_2, \omega_2 \subseteq \omega_1) \Rightarrow\\
&&\quad \tbind{(\tfont{Bot},\CE\;\alpha_1\;\epsilon_1\,\omega_1)
}{\CE\,\alpha_2\;\epsilon_2\,
\omega_2} \\
&& \sfont{mapce}: \forall
\alpha_1,\alpha_2,\epsilon_1,\epsilon_2,\omega_1,\omega_2. \\
&& \quad (\alpha_2 \subseteq \alpha_1, \epsilon_1 \subseteq \epsilon_2, \omega_2 \subseteq \omega_1) \Rightarrow\\
&&\quad \tbind{(\CE\;\alpha_1\;\epsilon_1\,\omega_1, \tfont{Bot})
}{\CE\,\alpha_2\;\epsilon_2\,
\omega_2} \\
&&\sfont{bindce}: \forall
\alpha_1,\epsilon_1,\omega_1,\alpha_2,\epsilon_2,\omega_2,\epsilon_3. \\
&& \quad (\epsilon_2 \cup \omega_2 = \omega_1, \epsilon_1\cup \alpha_1 = \alpha_2, \epsilon_1 \cup \epsilon_2 = \epsilon_3) \Rightarrow \\
&&\quad \tbind{(\CE\;\alpha_1\;\epsilon_1\,\omega_1,
\CE\, \alpha_2\;\epsilon_2\,\omega_2)}{\CE\,\alpha_1\;\epsilon_3\,\omega_2}
\end{array}$
\end{minipage}
&
\begin{minipage}{3in}
\noindent
$\begin{array}{ll}
\multicolumn{2}{l}{\!\!$\textit{Types and theory constraints:}$} \\
\tau & ::= ... \mid \{A_1\} ... \{A_n\} \mid \emptyset \mid \top \mid \tau_1 \cup \tau_2 \\
\Phi & ::= \tau \subseteq \tau' \mid \tau = \tau' \mid \Phi,\Phi \\
\\
\multicolumn{2}{l}{\!\!$\textit{Auxiliary functions:}$} \\
\code{read} : & \forall \alpha,\omega,r.\, \intref~r \rightarrow \CE\; \alpha\; r\; \omega\; \tfont{int} \\
\code{write} : & \forall \alpha,\omega,r.\, \intref~r \rightarrow \tfont{int} \rightarrow \CE\; \alpha\;r\; \omega\; ()\\
\\
\\
\\
\\
\\
\end{array}$
\end{minipage}
\end{tabular}
\caption{Polymonad expressing contextual type and effect systems}
\label{fig:ctxeff}
\end{figure}
Figure~\ref{fig:ctxeff} models contextual effects as the polymonad
$\CE~\alpha~\epsilon~\omega~\tau$, for the type of a computation with
prior, present, and future effects $\alpha$, $\epsilon$, and $\omega$,
respectively. Indices are sets of atomic effects $\{A_1\}
... \{A_n\}$, with $\emptyset$ the empty effect, $\top$ the effect set
that includes all other effects, and $\cup$ the union of two
effects. We also introduce theory constraints for subset relations and
extensional equality on sets, with the obvious interpretation. As an
example source of effects, we include \code{read} and \code{write}
functions on references into region sets $r$. The bind $\sfont{unitce}$
ascribes a pure computation as having an empty effect and
any prior and future effects. The binds $\sfont{appce}$ and
$\sfont{mapce}$ express that it is safe to consider an additional
effect for the current computation (the $\epsilon$s are covariant),
and fewer effects for the prior and future computations ($\alpha$s and
$\omega$s are contravariant). Finally, $\sfont{bindce}$ composes two
computations such that the future effect of the first computation
includes the effect of the second one, provided that the prior effect
of the second computation includes the effect of the first; the effect
of the composition includes both effects, while the prior effect is
the same as before the first computation, and the future effect is the
same as after the second computation.
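The composition performed by $\sfont{bindce}$ can be illustrated directly. The following Python sketch (ours) checks the side conditions and computes the composite triple, modeling effect sets as Python sets:

```python
# Composing contextual-effect triples [alpha; epsilon; omega] per bindce
# (a sketch; effects are frozensets of atomic effect names).

def compose_ce(c1, c2):
    """Side conditions: e2 | w2 == w1 and e1 | a1 == a2; the composite
    triple is [a1; e1 | e2; w2]. Returns None if the conditions fail."""
    a1, e1, w1 = c1
    a2, e2, w2 = c2
    if e2 | w2 != w1 or e1 | a1 != a2:
        return None
    return (a1, e1 | e2, w2)
```

For the $\kw{read}\;r_1; \kw{read}\;r_2$ example above, composing $[\emptyset; \aset{R_1}; \aset{R_2}]$ with $[\aset{R_1}; \aset{R_2}; \emptyset]$ yields $[\emptyset; \aset{R_1,R_2}; \emptyset]$.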
\subsection{Parameterized monads, and session types}
\newcommand\atkey{\ensuremath{A}}
Finally, we show $\langext$ can express Atkey's parameterized
monad~\cite{atkey09}, which has been used to
encode disciplines like regions~\cite{kiselyov2008lightweight} and
session types~\cite{pucella2008haskell}. The type constructor
$\atkey~p~q~\tau$ can be thought of (informally) as the type of a
computation producing a $\tau$-typed result, with a pre-condition $p$
and a post-condition $q$.
\begin{figure}[t]
\begin{tabular}{ll}
\begin{minipage}{2.9in}
\noindent
$\begin{array}{lcl}
\mconstrs & = & \tfont{Bot},\atkey/2 \\
\Sigma & = & \sfont{bId} : \tbind{(\tfont{Bot},\tfont{Bot})}{\tfont{Bot}}, \\
& & \sfont{mapA}: \forall p,r.\; \tbind{(\atkey\;p\;r,\tfont{Bot})}{\atkey\;p\;r}, \\
& & \sfont{appA}: \forall p,r.\; \tbind{(\tfont{Bot},\atkey\;p\;r)}{\atkey\;p\;r}, \\
& & \sfont{unitA}: \forall p.\; \tbind{(\tfont{Bot},\tfont{Bot})}{\atkey\;p\;p}, \\
& & \sfont{bindA}:\forall p,q,r.\; \tbind{(\atkey\,p\,q,\; \atkey\,q\,r)}{\atkey\,p\,r} \\
\end{array}$
\end{minipage}
&
\begin{minipage}{3.1in}
\begin{tabular}{l}
$\ensuremath{\!\!\!\!}\begin{array}{ll}
\multicolumn{2}{l}{\!\!$\textit{Types:}$} \\
\multicolumn{2}{l}{\tau ::= \dots \mid \tsend{\tau_1}{\tau_2} \mid \trecv{\tau_1}{\tau_2} \mid \tdone} \\
\\
\multicolumn{2}{l}{\!\!$\textit{Auxiliary functions:}$} \\
\sfont{send}\, : & \forall a,q.\,a\rightarrow \atkey\, (\tsend{a}{q})\,q\, () \\
\sfont{recv}\, : & \forall a,q.\,()\rightarrow \atkey\, (\trecv{a}{q})\,q\, a
\end{array}$
\end{tabular}
\end{minipage}
\end{tabular}
\caption{Parameterized monad for session types, expressed as a
polymonad}
\label{fig:session}
\end{figure}
As a concrete example, Figure~\ref{fig:session} gives a polymonadic
expression of Pucella and Tov's notion of session
types~\cite{pucella2008haskell}. The type $\atkey\,{p}\,{q}\, \tau$
represents a computation involved in a two-party session which starts
in protocol state $p$ and completes in state $q$, returning a value of
type $\tau$. The key element of the signature $\Sigma$ is the
$\sfont{bindA}$, which permits composing two computations where the
first's post-condition matches the second's precondition. We use the
type index $\tsend{\tau}{q}$ to denote a protocol state that requires a
message of type $\tau$ to be sent, and then transitions to
$q$. Similarly, the type index $\trecv{\tau}{r}$ denotes the protocol
state in which once a message of type $\tau$ is received, the protocol
transitions to $r$. We also use the index $\tdone$ to denote the
protocol end state. The signatures of two primitive operations for
sending and receiving messages capture this behavior.
As an example, the following \langext{} program
implements one side of a simple protocol that sends a message
\ls$x$, waits for an integer reply \ls$y$, and returns \ls$y+1$.
\[
\begin{array}{c}
\kw{let go = lam x. let _ = send x in incr (recv ())} \\
\textrm{Simplified type: }\forall a,b,q,\mvar.\,
\tbind{(\atkey\,(\tsend\,a\,b)\, b,\;\atkey\,(\trecv\,\tfont{int}\;q)\, q)}{\mvar}
\Rightarrow \,(a\rightarrow\mvar\;\tfont{int}) \\
\end{array}
\]
There are no specific theory constraints for session types:
constraints simply arise by unification and are solved as usual when
instantiating the final program (e.g., to call \ls$go 0$).
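To see the intended semantics of the protocol operationally, here is a dynamic analogue of the example above: the static indices $\tsend{}{}$, $\trecv{}{}$, and $\tdone$ become run-time checks on a channel. The class and function names are ours, purely for illustration; the static discipline in the text rules such violations out at compile time.

```python
# Dynamic analogue of the session example: a channel object checks at
# run time that sends and receives follow the scripted protocol.
# (The static discipline rules such violations out at compile time;
# this sketch only illustrates the intended semantics.)

class Chan:
    def __init__(self, script):
        # script: a list of ("send",) and ("recv", value) steps
        self.script = list(script)
        self.sent = []

    def send(self, x):
        step = self.script.pop(0)
        assert step[0] == "send", "protocol violation"
        self.sent.append(x)

    def recv(self):
        step = self.script.pop(0)
        assert step[0] == "recv", "protocol violation"
        return step[1]

def go(chan, x):
    """Mirrors: let go = lam x. let _ = send x in incr (recv ())"""
    chan.send(x)     # protocol state: Send a q  ~>  q
    y = chan.recv()  # protocol state: Recv int q  ~>  q
    return y + 1

chan = Chan([("send",), ("recv", 41)])
assert go(chan, 0) == 42
assert chan.sent == [0]
```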
\section{Coherent type inference for \lang}
\label{sec:syntactic}
\renewcommand\mconst[1][M]{\ensuremath{\sfont{#1}}}
\newcommand{\evidence}[1]{\mathsf{app}(#1)}
\newcommand{\evlift}[2]{\mathsf{b}_{#1,#2}\,}
\newcommand{\evbind}[3]{\mathsf{b}_{#1,#2,#3}\,}
\begin{figure*}[tH!]
{\begin{small}\[
\begin{array}{l}
\fbox{$\Binds |= \Binds'$} \qquad
\inference{\forall \pi \in \Binds'. \pi \in \Binds \vee \pi \in \Sigma}
{\Binds |= \Binds'}[(TS-Entail)]
\\\\
\fbox{$\Binds |= \sigma \geqslant \tau\;\leadsto\mathsf{f}$} \qquad
\inference{
\theta = [\bar \tau/\bar{a}][\bar{m}/\bar{\mvar}] & \Binds |= \theta\Binds_1}
{\Binds |= (\tscheme{\bar{a}\bar{\mvar}}{\Binds_1}{\tau}) \,\geqslant\,
{\theta\tau} \;\leadsto \evidence{\theta\Binds_1}}[(TS-Inst)]
\\\\
\fbox{$\prefix\Binds\Gamma v : \tau \;\leadsto\mathsf{e}$} \qquad
\inference{v\in\aset{x,c} & \Binds |= \Gamma(v) \geqslant \tau \;\leadsto \mathsf{f}}
{\prefix\Binds\Gamma v : \tau \;\leadsto \mathsf{f}\,v}[(TS-XC)]
\\\\
\inference{\prefix\Binds{\Gamma,x@\tau_1} e : \tapp{m}{\tau_2} \;\leadsto \mathsf{e}}
{\prefix\Binds\Gamma \slam{x}{e} : \tfun{\tau_1}{\tapp{m}{\tau_2}} \;\leadsto \slam{x}{\mathsf{e}}}[(TS-Lam)]
\\\\
\fbox{$\prefix\Binds\Gamma e : \tapp{m}\tau \;\leadsto\mathsf{e}$}\quad
\inference{\prefix\Binds\Gamma v : \tau \;\leadsto \mathsf{e}}
{\prefix{\Binds,\morph{\mconst[Id]}{m}}\Gamma v : \tapp{m}{\tau}
\;\leadsto \evbind{\mconst[Id]}{\mconst[Id]}{m}{\mathsf{e}}\;(\lambda x.x)}[(TS-V)]
\\\\
\inference{\prefix{\Binds_1}{\Gamma,x@\tau} v : \tau \;\leadsto \mathsf{e}_1 &
(\sigma,\mathsf{e_2}) = \Gen{\Gamma}{\Binds_1 => \tau,\,\mathsf{e_1}} \\
\prefix\Binds{\Gamma,x@\sigma} e : \tapp{m}{\tau'} \;\leadsto \mathsf{e}_3}
{\prefix\Binds\Gamma \sletrec{x}{v}{e} : \tapp{m}{\tau'}
\;\leadsto \sletrec{x}{\mathsf{e}_2}{\mathsf{e}_3}
}[(TS-Rec)]
\\\\
\inference{\prefix{\Binds_1}\Gamma v : \tau \;\leadsto \mathsf{e}_1 &
(\sigma,\mathsf{e}_2) = \Gen{\Gamma}{\Binds_1 => \tau,\,\mathsf{e}_1} \\
\prefix\Binds{\Gamma,x@\sigma} e : \tapp{m}\tau' \;\leadsto\mathsf{e}_3}
{\prefix\Binds\Gamma \slet{x}{v}{e} : \tapp{m}\tau'
\;\leadsto \slet{x}{\mathsf{e}_2}{\mathsf{e}_3}
}[(TS-Let)]
\\\\
\inference{ \prefix\Binds\Gamma e_1 : \tapp{m_1}{\tau_1} \;\leadsto \mathsf{e}_1 &
\prefix\Binds{\Gamma,x@\tau_1} e_2 : \tapp{m_2}{\tau_2} \;\leadsto \mathsf{e}_2 \\
e_1 \neq v &\Binds |= (m_1,m_2) \rhd {m_3}}
{\prefix\Binds\Gamma \slet{x}{e_1}{e_2} : \tapp{m_3}{\tau_2}
\;\leadsto \evbind{m_1}{m_2}{m_3}{\mathsf{e}_1}\,{(\lambda x.\,\mathsf{e}_2)}
}[(TS-Do)]
\\\\
\inference{\prefix\Binds\Gamma e_1 : \tapp{m_1}{(\tfun{\tau_2}{\tapp{m_3}{\tau}})} \;\leadsto \mathsf{e}_1 &
\prefix\Binds\Gamma e_2 : \tapp{m_2}{\tau_2} \;\leadsto \mathsf{e}_2 \\
\Binds |= {(m_1,m_4)}\rhd{m_5} &
\Binds |= {(m_2,m_3)}\rhd{m_4} }
{\prefix\Binds\Gamma \eapp{e_1}{e_2} : \tapp{m_5}{\tau}
\;\leadsto \evbind{m_1}{m_4}{m_5}{\mathsf{e}_1}\;{(\evbind{m_2}{m_3}{m_4}{\mathsf{e}_2}})}[(TS-App)]
\\\\
\inference{\prefix\Binds\Gamma e_1 : \tapp{m_1}\kw{bool} \;\leadsto \mathsf{e}_1 &
\prefix\Binds\Gamma e_2 : \tapp{m_2}{\tau} \;\leadsto \mathsf{e}_2 \\
\prefix\Binds\Gamma e_3 : \tapp{m_3}{\tau} \;\leadsto \mathsf{e}_3 &
\Binds |= \morph{m_2}{m}, \morph{m_3}{m}, {(m_1,m)}\rhd{m'}}
{\prefix\Binds\Gamma \sif{e_1}{e_2}{e_3} : \tapp{m'}{\tau}
}[(TS-If)]
\\{ \;\leadsto \evbind{m_1}{m}{m'}{\mathsf{e}_1}\,{(\lambda b.\,\mathsf{if}\;b\;\mathsf{then}\;\evbind{m_2}{\mconst[Id]}{m}{\mathsf{e}_2}\;(\lambda x.x)\;\mathsf{else}\; \evbind{m_3}{\mconst[Id]}{m}{\mathsf{e}_3}\; (\lambda x. x))}
}
\iffull
\else
\\\\
\begin{array}{ll}
\Gen{\Gamma}{\Binds => \tau, \mathsf{e}}
& = (\forall (\ftv{\Binds => \tau} \setminus \ftv{\Gamma}). \Binds => \tau,\;\mathsf{abs}(\Binds,\mathsf{e}))\\
\mathsf{abs}((\tbind{(m_1,m_2)}{m_3},P),\mathsf{e}) &= \lambda\evbind{m_1}{m_2}{m_3}.\,\mathsf{abs}(P,\mathsf{e})\\
\mathsf{abs}(\cdot,\mathsf{e}) &= \mathsf{e} \\
\evidence{P,\tbind{(m_1,m_2)}{m_3}} &= \lambda f.\,\evidence{P}\,(f\;\evbind{m_1}{m_2}{m_3})\\
\evidence{\cdot} &= \lambda x.\,x
\end{array}
\fi
\end{array}\]\end{small}}
\caption{Syntax-directed type rules for \lang{}, where $\Sigma$
is an implicit parameter. \iffull See Figure~\ref{fig:extraops} for the definitions
of \textit{Gen}, \textsf{app}, and \textsf{abs}. \fi}
\label{fig:ssyntaxrules}
\end{figure*}
\iffull
\begin{figure*}
\begin{small}
\[
\begin{array}{ll}
\Gen{\Gamma}{\Binds => \tau, \mathsf{e}}
& = (\forall (\ftv{\Binds => \tau} \setminus \ftv{\Gamma}). \Binds => \tau,\;\mathsf{abs}(\Binds,\mathsf{e}))\\
\\
\mathsf{abs}((\tbind{(m_1,m_2)}{m_3},P),\mathsf{e}) &= \lambda\evbind{m_1}{m_2}{m_3}.\,\mathsf{abs}(P,\mathsf{e})\\
\mathsf{abs}(\cdot,\mathsf{e}) &= \mathsf{e} \\
\\
\evidence{P,\tbind{(m_1,m_2)}{m_3}} &= \lambda f.\,\evidence{P}\,(f\;\evbind{m_1}{m_2}{m_3})\\
\evidence{\cdot} &= \lambda x.\,x\\
\end{array}
\]
\end{small}
\caption{Extra operations for the syntax-directed type rules defined in Figure~\ref{fig:ssyntaxrules}.
\textit{Gen} returns both a generalized type, and a new expression (using \textsf{abs}) that takes the newly
abstracted evidence as arguments. Dually, the \textsf{app} operation returns a function that
applies evidence for an instantiated expression. }
\label{fig:extraops}
\end{figure*}
\fi
This section defines our declarative type system for \lang{} and
proves that type inference produces principal types, and that
elaborated programs are coherent.
Figure~\ref{fig:ssyntaxrules} gives
\iffull
and Figure~\ref{fig:extraops} give
\fi
a syntax-directed type system,
organized into two main judgments. The value-typing judgment
$\prefix{\Binds}{\Gamma} v : \tau \;\leadsto\mathsf{e}$ types a value $v$ in an environment
$\Gamma$ (binding variables $x$ and constants $c$ to type schemes) at
the type $\tau$, provided the constraints $\Binds$ are satisfiable.
Moreover, it \emph{elaborates} the value $v$ into a lambda term $\mathsf{e}$
that explicitly contains binds, lifts, and evidence passing (as shown
in Section~\ref{sec:ist-example}). Note, however, that the elaboration is
independent of the typing: one can read just the typing rules by ignoring the elaborated terms.
The expression-typing judgment $\prefix{\Binds}{\Gamma} e :
\tapp{m}{\tau}\;\leadsto\mathsf{e}$
is similar, except that it yields a computation type. Constraint
satisfiability $\Binds |= \Binds'$, defined in the figure, states that
$\Binds'$ is satisfiable under the hypothesis $\Binds$ if $\Binds'
\subseteq \Binds \cup \Sigma$ where we consider
$\pi \in \Sigma$ if and only if $\Sigma
\theorysays \pi \leadsto \sfont{b}; \cdot$ (for
some $\sfont{b}$).
The rule (TS-XC) types a variable or constant at an instance of its
type scheme in the environment. The instance relation for type schemes
$\Binds |= \sigma \geqslant \tau\;\leadsto\mathsf{f}$ is standard: it instantiates the
bound variables, and checks that the abstracted constraints are
entailed by the hypothesis $\Binds$. The elaborated $\mathsf{f}$ term
supplies the instantiated evidence using the \textsf{app} form. The rule (TS-Lam) is
straightforward where the bound variable is given a value
type and the body a computation type.
The rule (TS-V) allows a value $v:\tau$ to be used as an expression
by lifting it to a computation type $\tapp{m}{\tau}$, so long
as there exists a morphism (or unit) from the \ls$Id$ functor to
$m$. The elaborated term uses $\evbind{\mconst[Id]}{\mconst[Id]}{m}$ to lift
explicitly to monad $m$. Note that for evidence we make up names
($\evbind{\mconst[Id]}{\mconst[Id]}{m}$) based on the constraint ($\morph{\mconst[Id]}{m}$).
This simplifies our presentation but an implementation would
name each constraint explicitly \cite{jones1994improvement}.
We use the name $\evbind{m_1}{\mconst[Id]}{m_2}$ for morphism constraints $\morph{m_1}{m_2}$,
and use $\evbind{m_1}{m_2}{m_3}$ for general bind constraints ${(m_1,m_2)}\rhd{m_3}$.
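Concretely, this elaboration behaves much like dictionary passing for type classes: the elaborated term receives the bind operations it needs as ordinary function arguments. The Python sketch below (all names ours, purely illustrative) shows a (TS-Do)-style composition parameterized by its bind evidence, instantiated at two different functors.

```python
# Evidence passing, sketched: an elaborated term takes the bind
# operations it needs as explicit arguments, much like dictionary
# passing for type classes. Names below are illustrative only.

def list_bind(m, f):
    """A bind for the list functor: (List, List) |> List."""
    return [y for x in m for y in f(x)]

def opt_bind(m, f):
    """A bind for an option functor, encoded as None-or-value."""
    return None if m is None else f(m)

def do_let(bind_ev, e1, e2):
    """Elaboration of `let x = e1 in e2` (TS-Do): the chosen
    bind b_{m1,m2,m3} is supplied as the evidence bind_ev."""
    return bind_ev(e1, e2)

# Instantiating the evidence parameter picks the concrete functor:
assert do_let(list_bind, [1, 2], lambda x: [x, x * 10]) == [1, 10, 2, 20]
assert do_let(opt_bind, 5, lambda x: x + 1) == 6
assert do_let(opt_bind, None, lambda x: x + 1) is None
```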
(TS-Rec) types a recursive let-binding by typing the definition $v$ at
the same (mono-)type as the \ls$letrec$-bound variable $x$. When
typing the body $e$, we generalize the type of $x$ using a standard
generalization function $\Gen{\Gamma}{\Binds => \tau,\;\mathsf{e}}$, which closes
the type relative to $\Gamma$ by generalizing over its free type
variables. However, in contrast to regular generalization, we return
both a generalized type, as well as an elaboration of $\mathsf{e}$ that
takes all generalized constraints as explicit evidence parameters (as
defined by rule $\mathsf{abs}$).
(TS-Let) is similar, although somewhat simpler since there is no
recursion involved.
(TS-Do) is best understood by looking at its elaboration: since we are in a call-by-value setting,
we interpret a \ls$let$-binding as forcing and sequencing two
computations using a single bind where $e_1$ is typed monomorphically.
(TS-App) is similar to (TS-Do), where, again, since we use
call-by-value, in the elaboration we sequence the function and its
argument using two bind operators, and then apply the function.
(TS-If) is also similar, since we sequence the expression $e$ in the
guard with the branches. As usual, we require the
branches to have the same type. This is achieved by generating
morphism constraints, $\morph{m_2}{m}$ and $\morph{m_3}{m}$ to coerce
the type of each branch to a functor $m$ before sequencing it
with the guard expression.
\subsection{Principal types}
\newcommand{\el}[1]{\llbracket #1\rrbracket}
\newcommand{\retTrans}{\textsf{ret}}
\newcommand{\bindTrans}{\textsf{do}}
\newcommand{\appTrans}{\textsf{app}}
\newcommand{\ifTrans}{\textsf{cond}}
\newcommand{\recTrans}{\textsf{rec}}
\newcommand{\prefixOML}[2]{\prefix{#1}{#2}_{\textsc{\tiny OML}}}
\begin{figure}[t]
\[
\begin{array}{ll}
\el{x}^\star &= x \\
\el{c}^\star &= c \\
\el{\lambda x.e}^\star &= \lambda x. \el{e} \\
\\
\el{v} &= \mathtt{ret}\;\el{v}^\star \\
\el{e_1\; e_2} &= \mathtt{app}\; \el{e_1}\; \el{e_2} \\
\el{\slet{x}{v}{e}} &= \slet{x}{\el{v}^\star}{\el{e}} \\
\el{\slet{x}{e_1}{e_2}} &= \mathtt{do}\;\el{e_1}\; \el{\slam{x}{e_2}}^\star \qquad\textrm{(when $e_1 \neq v$)}\\
\el{\sif{e_1}{e_2}{e_3}} &= \mathtt{cond}\; \el{e_1}\; \slam{()}{\el{e_2}} \; \slam{()}{\el{e_3}} \\
\el{\sletrec{f}{v}{e}} &= \mathtt{letrec}\; {f = \el{v}^\star}~\mathtt{in}~{\el{e}}
\end{array}
\]
\[
\begin{array}{ll}
\retTrans &: \forall\alpha \mvar.\,(\morph{\tfont{Bot}}{\mvar}) => \tfun{\alpha}{\tapp{\mvar}{\alpha}} \\
\bindTrans &: \forall\alpha \beta \mvar_1 \mvar_2 \mvar.\,(\bind{(\mvar_1,\mvar_2)}{\mvar}) => \tfun{\tapp{\mvar_1}{\alpha}}{(\tfun{\tfun{\alpha}{\tapp{\mvar_2}{\beta}})}{\tapp{\mvar}{\beta}}} \\
\appTrans &: \forall\alpha \beta \mvar_1 \mvar_2 \mvar_3 \mvar_4 \mvar.\, (\bind{(\mvar_1,\mvar_4)}{\mvar}, \bind{(\mvar_2,\mvar_3)}{\mvar_4}) => \tfun{\tapp{\mvar_1}{(\tfun{\alpha}{\tapp{\mvar_3}{\beta}})}}{\tfun{\tapp{\mvar_2}{\alpha}}{\tapp{\mvar}{\beta}}} \\
\ifTrans &: \forall\alpha\mvar_1\mvar_2\mvar_3\mvar\mvar'.\, (\morph{\mvar_2}{\mvar}, \morph{\mvar_3}{\mvar}, \bind{(\mvar_1,\mvar)}{\mvar'}) \\
& \qquad=> \tfun{\tapp{\mvar_1}{\kw{bool}}}
{\tfun{(\tfun{()}{\tapp{\mvar_2}{\alpha}})}
{\tfun{(\tfun{()}{\tapp{\mvar_3}{\alpha}})}
{\tapp{\mvar'}{\alpha}}}}
\end{array}\]
\caption{Type inference for \lang{} via elaboration to OML}
\label{fig:xlate-oml}
\end{figure}
The type rules admit principal types, and there exists an efficient
type inference algorithm that finds such types. We show this by
translating polymonadic terms (and types) to terms (and types) in
Overloaded ML (OML)~\cite{jones1992theory}, and proving the
translation sound and complete: a polymonadic term is well-typed if
and only if its translated OML term has an equivalent type.
OML's type inference algorithm is
known to enjoy principal types, so a corollary of our translation is
that principal types exist for our system too.
We encode terms in our language into OML as shown in
Figure~\ref{fig:xlate-oml}. We rely on four primitive OML terms that
force the typing of the terms to generate the same constraints as our
type system does: $\retTrans$ for lifting a pure term,
$\bindTrans$ for typing a do-binding,
$\appTrans$ for typing an application,
and $\ifTrans$ for conditionals. Using these
primitives, we encode values and expressions of our system into OML.
We write $\prefixOML\Binds\Gamma e : \tau$ for a derivation in the
syntax directed inference system of OML (cf. Jones~\cite{jones1992theory},
Fig. 4).
\begin{theorem}[Encoding to OML is sound and complete]
\label{thm:oml}
\strut\\\textbf{Soundness}: Whenever $\prefix{\Binds}{\Gamma} v : \tau$
we have $\prefixOML{\Binds}{\Gamma} \el{v}^\star : \tau$.
Similarly, whenever $\prefix\Binds\Gamma e : \tapp{m}{\tau}$
then we have $\prefixOML\Binds\Gamma \el{e} : \tapp{m}{\tau}$.
\noindent\textbf{Completeness}: Whenever
$\prefixOML\Binds\Gamma \el{v}^\star : \tau$, then we have
$\prefix\Binds\Gamma v : \tau$. Similarly, whenever
$\prefixOML\Binds\Gamma \el{e} : \tapp{m}{\tau}$, then we have
$\prefix\Binds\Gamma e : \tapp{m}{\tau}$.
\end{theorem}
\noindent The proof is by straightforward induction on the typing derivation of
the term. It is important to note that our system uses the same
instantiation and generalization relations as OML which is required
for the induction argument. Moreover, the constraint entailment over
bind constraints also satisfies the monotonicity, transitivity and
closure under substitution properties required by OML.
As a corollary of the above properties, our system admits
principal types via the general-purpose OML type inference algorithm.
\subsection{Ambiguity}
Given the preceding OML translation, one might think we could
translate our programs directly into Haskell, since Haskell uses
OML-style type inference.
Unfortunately, in practice, Haskell would
reject many useful programs. In particular, Haskell
rejects as ambiguous any term whose type $\forall
\bar{\alpha}. \Binds => \tau$ includes a variable $\alpha$ that
occurs free in $\Binds$
but not in $\tau$;\footnote{The
actual ambiguity rule in Haskell is more involved due to functional dependencies
and type families but that does not affect our results.} we
call such type variables \emph{open}.
Haskell, in its generality, must reject such terms since the
instantiation of an open variable can have operational effect,
while at the same time, since the variable does not appear in $\tau$, the
instantiation for it can never be uniquely determined by the context
in which the term is used. A common example is the term
\lstinline|show . read| with the type \lstinline|(Show a, Read a) => String -> String|,
where \lstinline|a| is open. Depending on the instantiation of \lstinline|a|,
the term may parse and show integers, or doubles, etc.
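The same phenomenon is easy to observe dynamically. The fragment below (our own illustration, not from the paper) is a run-time analogue of \lstinline|show . read|: the composite maps strings to strings, yet the choice of the open intermediate type visibly changes its behavior.

```python
# A run-time analogue of `show . read`: the composite has type
# String -> String, but the instantiation of the open intermediate
# type is observable in the output.

def show_read(parse, s):
    return str(parse(s))   # "read" then "show"

assert show_read(int, "42") == "42"      # intermediate type: int
assert show_read(float, "42") == "42.0"  # intermediate type: float
# Same input, different instantiations of the open variable, different
# outputs -- which is why Haskell must reject such terms as ambiguous.
```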
Rejecting all types that contain open variables works well for type
classes, but it would be unacceptable for \lang. Many simple terms
have principal types with open variables. For example, the term
$\slam{f}{\slam{x}{\sapp{f}{x}}}$ has type $\forall a b \mvar_1
\mvar_2 \mvar_3.$ $((\mathsf{Id},\mvar_1)\rhd\;\mvar_2,
(\mathsf{Id},\mvar_2)\rhd\;\mvar_3)$ $\Rightarrow\;(a \rightarrow
\mvar_1\;b) \rightarrow a \rightarrow \mvar_3\;b$ where type
variable $\mvar_2$ is open.
In the special case where there is only one polymonadic constructor
available when typing the program, the coherence problem is moot,
e.g., say, if the whole program were typed using only the
\ensuremath{\mbox{\textit{IST}}}{} polymonad of Section~\ref{sec:ist-example}. However, recall
that polymonads generalize monads and morphisms, for which there can
be coherence issues (as is well known), so polymonads must address
them. As an example, imagine combining our $\ensuremath{\mbox{\textit{IST}}}$ polymonad (which
generalizes the state monad) with an exception monad $\sfont{Exn}$,
resulting in an $\sfont{ISTExn}$ polymonad. Then, an improperly coded
bind that composed $\ensuremath{\mbox{\textit{IST}}}$ with $\sfont{Exn}$ could sometimes reset
the heap, and sometimes not (a similar example is provided by
Filinski~\cite{filinski94representing}).
A major contribution of this paper is that for binds that satisfy the
polymonad laws, we need not reject all types with open
variables. In particular, by appealing to the polymonadic laws, we can
prove that programs with open type variables in bind constraints are
indeed unambiguous. Even if there are many possible instantiations,
the semantics of each instantiation is equivalent, enabling us to
solve polymonadic constraints much more aggressively. This
coherence result is at the essence of making programming with
polymonads practical.
\subsection{Coherence}
\label{sec:coherence}
The main result of this section (Theorem~\ref{thm:coherence})
establishes that for a certain class of polymonads, the ambiguity
check of OML can be weakened to accept more programs while still
ensuring that programs are coherent. Thus, for this class of
polymonads, programmers can reliably view our syntax-directed system
as a specification without being concerned with the details of how the
type inference algorithm is implemented or how programs are
elaborated.
The proof of Theorem~\ref{thm:coherence} is a little technical---the
following roadmap summarizes the structure of the development.
\newcommand\unamb[3]{\ensuremath{\mathsf{unambiguous}(#1,#2,#3)}}
\begin{itemize}
\item We define the class of \emph{principal} polymonads for which
unambiguous typing derivations are coherent. All polymonads that we
know of are principal.
\item Given $\prefix{\Binds}{\Gamma} e : t \leadsto \tgtfont{e}$ (with $t \in
\aset{\tau, \tapp{m}{\tau}}$), the predicate
$\unamb{\Binds}{\Gamma}{t}$ characterizes when the derivation is
unambiguous. This notion requires interpreting $\Binds$ as a graph
$G_\Binds$, and ensuring (roughly) that all open variables in
$\Binds$ have non-zero in/out-degree in $G_\Binds$.
\item A \emph{solution} $S$ to a constraint graph with respect to a
polymonad $(\mconstrs, \Sigma)$ is an assignment of ground polymonad
constructors $\mconst\in\mconstrs$ to the variables in the graph such that
each instantiated constraint is present in $\Sigma$. We give an
equivalence relation on solutions such that $S_1 \cong S_2$ if they
differ only on the assignment to open variables in a manner where
the composition of binds still computes the same function according
to the polymonad laws.
\item Finally, given $\prefix{\Binds}{\Gamma} e : t \leadsto \tgtfont{e}$
and $\unamb{\Binds}{\Gamma}{t}$, we prove that all solutions
to $\Binds$ that agree on the free variables of $\Gamma$ and $t$ are
in the same equivalence class.
\end{itemize}
While Theorem~\ref{thm:coherence} enables our type system to be used
in practice, this result is not the most powerful theorem one can
imagine. Ideally, one might like a theorem of the form
$\prefix{\Binds}{\Gamma} e : t \leadsto \tgtfont{e}$ and
$\prefix{\Binds'}{\Gamma} e : t \leadsto \tgtfont{e}'$ implies $\tgtfont{e}$ is
extensionally equal to $\tgtfont{e}'$, given that both $\Binds$ and
$\Binds'$ are satisfiable. While we conjecture that this result is
true, a proof of this property is out of our reach at present. There are
at least two difficulties. First, a coherence result of this form is
unknown for qualified type systems in a call-by-value setting. In an
unpublished paper, Jones~\cite{jones93coherencefor} proves a coherence
result for OML, but his technique only applies to call-by-name
programs. Jones also does not consider reasoning about coherence based
on an equational theory for the evidence functions (these functions
correspond to our binds).
So, proving the ideal coherence theorem would require both
generalizing Jones' approach to call-by-value and then extending it
with support for equational reasoning about evidence. In the meantime,
Theorem~\ref{thm:coherence} provides good assurance and lays the
foundation for future work in this direction.
\paragraph*{Defining and analyzing principality.} We introduce a notion of
principal polymonads that corresponds to Tate's ``principalled
productoids.'' Informally, in a principal polymonad, if there is more
than one way to combine pairs of computations in the set $F$ (e.g.,
$\bind{(M,M')}{M_1}$ and $\bind{(M,M')}{M_2}$), then there must be a
``best'' way to combine them. This best way is called the principal
join of $F$, and all other ways to combine the functors are related
to the principal join by morphisms. All the polymonadic libraries we
have encountered so far are principal polymonads. It is worth
emphasizing that principality does not correspond to functional
dependency---it is perfectly reasonable to combine $\mconst$ and
$\mconst'$ in multiple ways, and indeed, for applications like
sub-effecting, this expressiveness is important. We only require that
there be an ordering among the choices. In the definition below, we
take $\downarrow\!\!\mathcal{M}$ to be the set of ground instances of all
constructors in $\mathcal{M}$.
\begin{definition}[Principal polymonad]
A polymonad $(\mathcal{M}, \Sigma)$ is a \emph{principal polymonad}
if and only if for any set $F \subseteq \downarrow\!\!\mathcal{M}^2$, and any
$\aset{\mconst_1, \mconst_2}\subseteq\downarrow\!\!\mathcal{M}$ such that
$\aset{\tbind{(\mconst, \mconst')}{\mconst_1} \mid (\mconst,\mconst') \in F} \subseteq
\Sigma$ and $\aset{\tbind{(\mconst, \mconst')}{\mconst_2} \mid (\mconst,\mconst') \in F}
\subseteq \Sigma$, there exists $\hat\mconst \in \downarrow\!\!\mathcal{M}$ such
that $\aset{\morph{\hat\mconst}{\mconst_1}, \morph{\hat\mconst}{\mconst_2}}
\subseteq \Sigma$, and $\aset{\tbind{(\mconst,\mconst')}{\hat\mconst} \mid (\mconst,\mconst') \in
F} \subseteq \Sigma$. We call $\hat\mconst$ the principal join of $F$,
and write it as $\bigsqcup F$.
\end{definition}
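The definition can be checked mechanically for a small ground signature. In the sketch below (our own encoding: a bind $\tbind{(m_1,m_2)}{m_3}$ is the triple $(m_1,m_2,m_3)$, and a morphism $\morph{m_1}{m_2}$ is encoded as the bind $(m_1,\sfont{Id})\rhd m_2$), we search for the principal join of a set of pairs.

```python
# Checking the principal-join condition on a small ground signature.
# A bind (m1,m2) |> m3 is the triple (m1, m2, m3); a morphism
# m1 <= m2 is encoded as the bind (m1, "Id", m2).

def morph(sigma, a, b):
    return a == b or (a, "Id", b) in sigma

def principal_join(sigma, constructors, F):
    """Return the principal join of the pair-set F, if any: a joiner
    of every pair in F that maps into every other joiner."""
    joiners = [m for m in constructors
               if all((m1, m2, m) in sigma for (m1, m2) in F)]
    for j in joiners:
        if all(morph(sigma, j, m) for m in joiners):
            return j
    return None

# A two-element signature: Id below a single monad M.
SIGMA = {("Id", "Id", "Id"), ("Id", "Id", "M"), ("Id", "M", "M"),
         ("M", "Id", "M"), ("M", "M", "M")}
CONSTRUCTORS = ["Id", "M"]

# (Id,Id) can be joined at Id or at M; Id is principal since Id <= M.
assert principal_join(SIGMA, CONSTRUCTORS, {("Id", "Id")}) == "Id"
assert principal_join(SIGMA, CONSTRUCTORS, {("M", "M")}) == "M"
```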
\newcommand\subscript[1]{{\mbox{\textit{\tiny #1}}}}
\begin{definition}[Graph-view of a constraint-bag $\Binds$]
A graph-view $G_\Binds=(V,A,E_\rhd,E_{eq})$ of a constraint-bag
$\Binds$ is a graph consisting of a set of vertices $V$, a vertex
assignment $A: V -> m$, a set of directed edges $E_\rhd$, and a
set of undirected edges $E_{eq}$, where:
\begin{itemize}
\item $V = \aset{\pi.0, \pi.1, \pi.2 \mid \pi \in \Binds}$, i.e.,
each constraint contributes three vertices.
\item $A(\pi.i) = m_i$ when $\pi = \tbind{(m_0,m_1)}{m_2}$, for all $\pi.i \in V$
\item $E_\rhd = \aset{(\pi.0,\pi.2), (\pi.1, \pi.2) \mid \pi \in \Binds}$
\item $E_{eq} = \aset{(v,v') \mid v,v' \in V ~\wedge~v\neq v'\wedge \exists \mvar.\mvar=A(v)=A(v')}$
\end{itemize}
\end{definition}
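For concreteness, the graph view can be computed directly from a constraint bag. In the sketch below (our encoding: constraints as triples, lowercase entries standing for unification variables), $E_{eq}$ contains ordered pairs of distinct vertices, matching the definition.

```python
# Building the graph view G_P of a constraint bag P. A constraint
# (m0,m1) |> m2 is the triple (m0, m1, m2); lowercase names are
# unification variables, capitalized names are ground constructors.

def graph_view(P):
    V = [(i, j) for i in range(len(P)) for j in range(3)]
    A = {(i, j): P[i][j] for (i, j) in V}
    E_rhd = [((i, 0), (i, 2)) for i in range(len(P))] + \
            [((i, 1), (i, 2)) for i in range(len(P))]
    is_var = lambda m: m.islower()
    E_eq = [(v, w) for v in V for w in V
            if v != w and is_var(A[v]) and A[v] == A[w]]
    return V, A, E_rhd, E_eq

# The example from the text: (M1, u) |> u' and (M2, u') |> u
# contributes four unification edges, two for u and two for u'.
P = [("M1", "u", "u'"), ("M2", "u'", "u")]
V, A, E_rhd, E_eq = graph_view(P)
assert len(V) == 6 and len(E_rhd) == 4
assert len(E_eq) == 4
```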
\noindent\textbf{Notation} We use $v$ in this section to stand for a
graph vertex, rather than a value in a program. We also make use of a
pictorial notation for graph views, distinguishing the two flavors of
edges in a graph. Each constraint $\pi \in \Binds$ induces two edges
in $E_\rhd$. These edges are drawn with solid lines, with a triangle
for orientation. Unification constraints arise from correlated
variable occurrences in multiple constraints---we
\begin{wrapfigure}{r}{3.5cm}
\vspace{-5ex}
\begin{tiny}
\[\nquad\nquad
\xymatrix@C=1em@R=0.5em{
m_1 \ar@{-}[dr] & & & m_2\ar@{-}[dr] \\
\mvar \ar@{-}[r] & \rhd\ar@{-}[r] & \mvar'\ar@{:}[r] & \mvar'\ar@{-}[r] & \rhd\ar@{-}[r] & \mvar\ar@/^/@{:}[lllll]
}
\]
\end{tiny}
\vspace{-7ex}
\end{wrapfigure}
depict these with double dotted lines. For
example, the pair of constraints $\tbind{(m_1, \mvar)}{\mvar'},
\tbind{(m_2,\mvar')}{\mvar}$ contributes four unification edges, two
for $\mvar$ and two for $\mvar'$. We show its graph view alongside.

Unification constraints reflect the dataflow in a program. Referring
back to Figure~\ref{fig:ssyntaxrules}, in a principal derivation using
(TS-App), correlated occurrences of unification variables for $m_4$ in
the constraints indicate how the two bind operators compose. The
following definition captures this dataflow and shows how to interpret
the composition of bind constraints using unification edges as a
lambda term (in the expected way).\footnote{Note, for the purposes of
our coherence argument, unification constraints between value-type
variables $a$ are irrelevant. Such variables may occur in two kinds
of contexts. First, they may constrain some value type in the
program, but these do not depend on the solutions to polymonadic
constraints. Second, they may constrain some index of a polymonadic
constructor; but, as mentioned previously, these indices are phantom
and do not influence the semantics of elaborated terms.}
\begin{definition}[Functional view of a flow edge]
Given a constraint graph $G = (V, A, E_\rhd, E_{eq})$, an edge
$\eta=(\pi.2, \pi'.i) \in E_{eq}$, where $i \in \aset{0,1}$ and
$\pi\neq\pi'$ is called a \emph{flow edge}. The flow edge $\eta$ has a
functional interpretation $F_G(\eta)$ defined as follows:\\[-2ex]
\[\begin{array}{lcl}
\textrm{If}~~i=0,~~
F_G(\eta) & = & \lambda (x@A(\pi.0)~a)~(y@a->A(\pi.1)~b)~(z@b->A(\pi'.1)~c).\\
& & ~~\sfont{bind}_{A(\pi'.0),A(\pi'.1),A(\pi'.2)}(\sfont{bind}_{A(\pi.0),A(\pi.1),A(\pi.2)}~x~y)~z\\
\textrm{If}~~i=1,~~
F_G(\eta) & = & \lambda (x@A(\pi'.0)~a)~(y@a -> A(\pi.0)~b)~(z@b->A(\pi.1)~c).\\
& & ~~\sfont{bind}_{A(\pi'.0),A(\pi'.1),A(\pi'.2)}~x~(\lambda a.\sfont{bind}_{A(\pi.0),A(\pi.1),A(\pi.2)}~(y~a)~z)
\end{array}\]
\end{definition}
We can now define our ambiguity check---a graph is unambiguous if it
contains a sub-graph that has no cyclic dataflows, and where open
variables only occur as intermediate variables in a sequence of binds.
\begin{definition}[Unambiguous constraints]
\label{def:unambiguous}
Given $G_\Binds=(V,A,E_\rhd,E_{eq})$, the predicate $\unamb{\Binds}{\Gamma}{t}$
holds if and only if there exists $E_{eq}' \subseteq
E_{eq}$, such that in the graph $G'=(V,A,E_\rhd,E_{eq}')$ all of the
following are true.
\begin{enumerate}
\item For all $\pi \in \Binds$, there is no path from $\pi.2$ to
$\pi.0$ or $\pi.1$.
\item For all $v \in V$, if $A(v)\in \ftv{\Binds} \setminus
\ftv{\Gamma,t}$, then there exists a flow edge that connects to
$v$.
\end{enumerate}
\noindent We call $G'$ a \emph{core} of $G_\Binds$.
\end{definition}
\begin{definition}[Solution to a constraint graph]
For a polymonadic signature $(\mathcal{M}, \Sigma)$, a solution
to a constraint graph $G=(V, A, E_\rhd, E_{eq})$, is a vertex assignment
$S : V -> \mathcal{M}$ such that all of the following are true.
\begin{enumerate}
\item For all $v \in V$, if $A(v) \in \mathcal{M}$ then $S(v)=A(v)$
\item For all $(v_1,v_2) \in E_{eq}$, $S(v_1) = S(v_2)$.
\item For all $\aset{(\pi.0,\pi.2), (\pi.1,\pi.2)} \subseteq E_\rhd$,
$\tbind{(S(\pi.0),S(\pi.1))}{S(\pi.2)} \in \Sigma$.
\end{enumerate}
\noindent We say that two solutions $S_1$ and $S_2$ to $G$ \emph{agree
on} $\mvar$ if for all vertices $v \in V$ such that $A(v) = \mvar$,
$S_1(v) = S_2(v)$.
\end{definition}
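The three conditions translate directly into a checker. In this sketch (our encoding, as before: binds as triples, lowercase names for unification variables), a candidate vertex assignment is validated against a signature.

```python
# Checking that a vertex assignment S solves the constraint graph of a
# bag P under signature sigma. Constraints are triples (m0, m1, m2);
# lowercase entries are unification variables.

def is_solution(P, sigma, S):
    occurrences = {}
    for i, pi in enumerate(P):
        for j in range(3):
            if pi[j].islower():          # a variable occurrence
                occurrences.setdefault(pi[j], []).append((i, j))
            elif S[(i, j)] != pi[j]:     # (1) ground vertices are fixed
                return False
    for vs in occurrences.values():      # (2) unification edges respected
        if len({S[v] for v in vs}) > 1:
            return False
    # (3) every instantiated constraint is in the signature
    return all((S[(i, 0)], S[(i, 1)], S[(i, 2)]) in sigma
               for i in range(len(P)))

SIGMA = {("Id", "Id", "Id"), ("Id", "Id", "M"), ("Id", "M", "M"),
         ("M", "Id", "M"), ("M", "M", "M")}
P = [("Id", "u", "u")]
assert is_solution(P, SIGMA, {(0, 0): "Id", (0, 1): "M", (0, 2): "M"})
assert not is_solution(P, SIGMA, {(0, 0): "Id", (0, 1): "M", (0, 2): "Id"})
```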
Now we define $\cong_R$, a notion of equivalence on solutions. It
captures the idea that two solutions may differ only on internal open
variables, and only in ways that do not affect the overall function
computed by the binds in a constraint. It is easy to check that
$\cong_R$ is an equivalence relation.
\begin{definition}[Equivalence of solutions]
Given a polymonad $(\mathcal{M},\Sigma)$ and constraint
graph $G=(V,A,$ $E_\rhd,E_{eq})$, two solutions $S_1$ and $S_2$ to $G$
are equivalent with respect to a set of variables $R$ (denoted $S_1
\cong_R S_2$) if and only if $S_1$ and $S_2$ agree on all $\mvar
\in R$, and for each vertex $v \in V$ such that $S_1(v) \neq S_2(v)$
and for every flow edge $\eta$ incident on $v$, we have
$F_{G_1}(\eta) = F_{G_2}(\eta)$, where $G_i=(V, S_i, E_\rhd, E_{eq})$.
\end{definition}
\begin{theorem}[Coherence]
\label{thm:coherence}
For all principal polymonads, derivations $\prefix{\Binds}{\Gamma} e : t
\leadsto \tgtfont{e}$ such that \\ $\unamb{\Binds}{\Gamma}{t}$, and for any two
solutions $S$ and $S'$ to $G_\Binds$ that agree on
$R=\ftv{\Gamma,t}$, we have $S \cong_R S'$.
\end{theorem}
\noindent \emph{Proof sketch} (full version in the appendix). The main idea is to show that all
solutions in the core of $G_\Binds$ are in the same equivalence class
(the solutions to the core include $S$ and $S'$). The proof proceeds
by induction on the number of vertices at which $S$ and $S'$
differ. For the main induction step, we take vertices in
\begin{wrapfigure}{r}{5cm}
\[\begin{tiny}\ensuremath{\!\!\!\!}
\begin{array}{c}
\underline{S/S'} \\
\xymatrix@C=.1em@R=0.75em{
& \mconst_1/\mconst_1' & \ldots & \mconst_2/\mconst_2' \\
& \tup\ar[u] & \ldots\tup\ar[u]\ar@{-}[d]\ldots &\tup\ar[u] \\
\mconst_3/\mconst_3'\ar@{-}[ru] & \mconst[A]/\mconst[B]\ar@{-}[u]\ar@{:}[r]\ar@{:}[dr] & \ldots & \ar@{:}[dl]\ar@{:}[l]\mconst[A]/\mconst[B]\ar@{-}[u] & \mconst_4/\mconst_4'\ar@{-}[ul] \\
& \eta_1\ar@{:}[u] & \ldots\eta\ldots & \eta_k\ar@{:}[u] \\
& \mconst[A]/\mconst[B]\ar@{:}[u]\ar@{:}[r]\ar@{:}[ur] & \ldots & \ar@{:}[ul]\ar@{:}[l]\mconst[A]/\mconst[B]\ar@{:}[u] \\
& \tup\ar[u] & \ldots\tup\ar[u]\ar@{-}[d]\ldots & \tup\ar[u] \\
\mconst_5\ar@{-}[ur] & \mconst_6\ar@{-}[u] & \ldots & \mconst_7\ar@{-}[u] & \mconst_8\ar@{-}[ul] \\
}
\end{array}
\end{tiny}\]
\end{wrapfigure}
topological order, considering the least (in the order) set of vertices $Q$, all
related by unification constraints, and whose assignment in $S$ is
$\mconst[A]$ and in $S'$ is $\mconst[B]$, for some
$\mconst[A]\neq\mconst[B]$. The vertices in $Q$ are shown in the graph
alongside, all connected to each other by double dotted lines
(unification constraints), and their neighborhood is shown as
well. Since vertices are considered in topological order, all the
vertices below $Q$ in the graph have the same assignment in $S$ and in
$S'$. We build solutions $S_1$ and $S_1'$ from $S$ and $S'$
respectively, that instead assign the principal join $\mconst[J]=\bigsqcup \aset{(\mconst_5,\mconst_6),\ldots,(\mconst_7,\mconst_8)}$ to
the vertices in $Q$, where $S_1 \cong_R S_1'$ by the induction
hypothesis. Finally, we prove $S \cong_R S_1$ and $S' \cong_R S_1'$
by showing that the functional interpretation of each of the flow
edges $\eta_i$ are equal according to the polymonad laws, and conclude
$S \cong_R S'$ by transitivity.
\section{Simplification and solving} \label{sec:solve}
\renewcommand{\dom}[1]{\textsf{dom}(#1)}
\newcommand{\transupperbounds}[2]{\mbox{\textit{trans-up-bnd}}_{#1}(#2)}
Before running a program, we must solve the constraints produced
during type inference, and apply the appropriate evidence for these
constraints in the elaborated program. We also perform
\emph{simplification} on constraints prior to generalization to make
types easier to read, but without compromising their utility.
A simple syntactic transformation on constraints can make inferred
types easier to read. For example, we can hide duplicate constraints,
identity morphisms (which are trivially satisfiable), and constraints
that are entailed by the signature.
\iffull
Formally, we can define the function \hide{P} to do this, as follows:
\begin{small}
\[\ensuremath{\!\!\!\!}\begin{array}{lclr}
\hide{P,\pi,P'} & = & \hide{P,P'} & \quad\mbox{if}~\pi\in P,P'~\vee~\pi=\morph{m}{m}~\vee~|= \pi\\
\hide{P} & = & P & \quad\mbox{otherwise}
\end{array}\]
\end{small}
Syntactically, given a scheme $\forall \bar\nu.\Binds => \tau$, we can
simply show the type $\forall\bar\nu. \hide{\Binds} => \tau$ to the
programmer. Formally, however, the type scheme is unchanged since
simply removing constraints from the type scheme changes our
evidence-passing elaboration.
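The displayed definition of \hide{P} can be prototyped in a few lines. The sketch below (Python, purely illustrative: the tuple encoding of binds, the encoding of an identity morphism as a bind with an Id argument, and the externally supplied `entailed` set are assumptions, not the paper's implementation) keeps one copy of each constraint and drops the trivially satisfiable ones:

```python
ID = "Id"

def is_identity_morphism(pi):
    # a morphism m ~> m, encoded as a bind with an Id argument
    ((m1, m2), m3) = pi
    return (m2 == ID and m1 == m3) or (m1 == ID and m2 == m3)

def hide(P, entailed=frozenset()):
    """Keep one copy of each constraint, dropping identity morphisms
    and anything in the (externally supplied) entailed set."""
    out, seen = [], set()
    for pi in P:
        if pi in seen or is_identity_morphism(pi) or pi in entailed:
            continue
        seen.add(pi)
        out.append(pi)
    return out

# Duplicates and the identity morphism M ~> M disappear from the display:
shown = hide([(("M", "Id"), "M"), (("A", "B"), "C"), (("A", "B"), "C")])
```

As in the text, this is only a display transformation: the type scheme itself keeps the full constraint set.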
\fi
More substantially, we can find instantiations for open variables in a
constraint set before generalizing a type (and at the top-level,
before running a program). To do this, we introduce below a modified
version of (TS-Let) (from Figure~\ref{fig:ssyntaxrules}); a similar
modification is possible for (TS-Rec).
\begin{small}
\[
\inference{\prefix{\Binds_1}\Gamma v : \tau \leadsto \tgtfont{e}_1 &
\bar\mvar,\bar{a} = \ftv{\Binds_1 => \tau} \setminus \ftv{\Gamma} \\
\Binds_1 \simp[\bar\mvar \setminus \ftv{\tau}] \theta
&
(\sigma,\tgtfont{e}_2) = \Gen{\Gamma}{\theta\Binds_1 =>
\tau, \tgtfont{e}_1} &
\prefix{\Binds}{\Gamma,x@\sigma} e : \tapp{m}\tau' \leadsto \tgtfont{e}_3}
{\prefix{\Binds}\Gamma \slet{x}{v}{e} : \tapp{m}\tau' \leadsto \slet{x}{\tgtfont{e}_2}{\tgtfont{e}_3}}
\]
\end{small}
This rule employs the judgment $\Binds \simp \theta$, defined in
Figure~\ref{fig:decl-solving}, to simplify constraints by eliminating
some open variables in $\Binds$ (via the substitution $\theta$) before
type generalization. There are three main rules in the judgment,
(S-$\Uparrow$), (S-$\Downarrow$) and (S-$\sqcup$), while the last two
simply take the transitive closure.
\begin{figure}[t!]
\[\small
\begin{array}{c}
\inference[S-$\Uparrow$]
{\pi = \bind{(\mconst[Id],m)}{\mvar} \;\vee\; \pi =
\bind{(m,\mconst[Id])}{\mvar} \\ \mvar \in \bar\mvar &
\flowsFrom{\mvar}{\Binds,\Binds'} \neq \{\} \\
\flowsTo{\mvar}{\Binds,\Binds'} =\{\} }
{\Binds,\pi,\Binds' \simp \mvar \mapsto m}
\qquad
\inference[S-$\Downarrow$]
{\pi = \bind{(\mconst[Id],\mvar)}{m} \;\vee\; \pi =
\bind{(\mvar,\mconst[Id])}{m} \\ \mvar \in \bar\mvar &
\flowsFrom{\mvar}{\Binds,\Binds'} = \{\} \\ \flowsTo{\mvar}{\Binds,\Binds'} \neq \{\} }
{\Binds,\pi,\Binds' \simp \mvar \mapsto m}
\\\\
\inference[S-$\sqcup$]
{ F = \flowsTo{\mvar}{\Binds} \\ m \in F \Rightarrow m =
\gm\\ \text{for some $\gm$} }
{\Binds \simp \mvar \mapsto \bigsqcup F}
\qquad
\inference{\Binds \simp \theta \\ \theta\Binds \simp \theta'}
{\Binds \simp \theta'\theta}
\qquad
\inference{} {\Binds \simp \cdot}
\end{array}\]
\[\small
\text{where~~}
\begin{array}{lcl}
\flowsTo{\mvar}{\Binds} & = & \aset{\,(m_1,m_2) \mid
\bind{(m_1,m_2)}{\mvar} \in \Binds \,} \\
\flowsFrom{\mvar}{\Binds} & = & \aset{\,m \mid \exists m'.\;~ \pi \in
\Binds~ \wedge~ (\pi = \bind{(\mvar,m')}{m}~ \vee~ \pi = \bind{(m',\mvar)}{m})\,} \\
\end{array}
\]
\caption{Eliminating open variables in constraints}
\label{fig:decl-solving}
\end{figure}
Rule (S-$\Uparrow$) solves monad variable $\mvar$ with monad $m$ if we
have a constraint $\pi =\bind{(\mconst[Id], m)}{\mvar}$, where the
only edges directed inwards to $\mvar$ are from $\mconst[Id]$ and $m$,
although there may be many out-edges from $\mvar$. (The case where
$\pi=\bind{(m,\mconst[Id])}{\mvar}$ is symmetric.) Such a constraint
can always be solved without loss of generality using an identity
morphism, which, by the polymonad laws is guaranteed to
exist. Moreover, by the closure law, any solution that chooses
$\mvar=m'$, for some $m'\neq m$ could just as well have chosen
$\mvar=m$. Thus, this rule does not impact solvability of the
constraints. Rule (S-$\Downarrow$) follows similar reasoning in the
reverse direction.
Finally, the rule (S-$\sqcup$) exploits the properties
of a principal polymonad. Here we have a variable $\mvar$ such that
all its in-edges are from pairs of ground constructors $\gm_i$, so we
can simply apply the join function to compute a solution for
$\mvar$. For a principal polymonad, if such a
solution exists, this simplification does not impact solvability of
the rest of the constraint graph.
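To make the procedure concrete, the simplification loop can be prototyped directly. In the sketch below (Python, illustrative only: the nested-tuple encoding of binds, the `?`-prefix convention for open variables, and the externally supplied `join` function are assumptions, not the paper's implementation), `step` applies (S-$\Uparrow$), (S-$\Downarrow$) and (S-$\sqcup$) until no open variable can be eliminated; for the worked example at the end, `join` is hardwired to the answer for that polymonad, purely for illustration:

```python
ID = "Id"

def is_var(m):
    return isinstance(m, str) and m.startswith("?")

def flows_to(v, binds):
    """In-edges of v: pairs (m1, m2) with a bind (m1, m2) ~> v."""
    return {(m1, m2) for ((m1, m2), m3) in binds if m3 == v}

def flows_from(v, binds):
    """Out-edges of v: results of binds that mention v on the left."""
    return {m3 for ((m1, m2), m3) in binds if v in (m1, m2)}

def step(binds, keep, join):
    """Find one open variable solvable by (S-Up), (S-Down) or (S-Join)."""
    for pi in binds:
        ((m1, m2), m3) = pi
        if ID not in (m1, m2):
            continue
        other = m2 if m1 == ID else m1
        rest = binds - {pi}
        # (S-Up): (Id, m) ~> v, with pi the only in-edge of v
        if is_var(m3) and m3 not in keep and m3 != other \
                and flows_from(m3, binds) and not flows_to(m3, rest):
            return m3, other
        # (S-Down): (Id, v) ~> m, with pi the only out-edge of v
        if is_var(other) and other not in keep and other != m3 \
                and flows_to(other, rest) and not flows_from(other, rest):
            return other, m3
    # (S-Join): every in-edge of v is a ground pair -> assign their join
    for (_pair, m3) in binds:
        v = m3
        if not is_var(v) or v in keep:
            continue
        pairs = flows_to(v, binds)
        if pairs and all(not (is_var(a) or is_var(b)) for (a, b) in pairs):
            return v, join(pairs)
    return None

def simplify(binds, keep, join):
    """Iterate `step`, accumulating the substitution theta."""
    binds, theta = set(binds), {}
    while (hit := step(binds, keep, join)) is not None:
        v, m = hit
        theta = {k: (m if w == v else w) for k, w in theta.items()}
        theta[v] = m
        sub = lambda x, v=v, m=m: m if x == v else x
        binds = {((sub(a), sub(b)), sub(c)) for ((a, b), c) in binds}
    return theta, binds

# The worked example discussed next, with a1 = H and a2 = L fixed;
# ?m27 is protected because it appears in the result type:
P0 = {(("IST_HL", "?m6"), "?m27"),
      (("Bot", "Bot"), "?m6"),
      (("IST_HH", "IST_HL"), "?m6")}
theta, residue = simplify(P0, keep={"?m27"}, join=lambda F: "IST_HH")
```

The `keep` set plays the role of the variables free in $\Gamma$ and $\tau$, which must survive generalization.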
\paragraph*{Example.} Recall the information flow example we gave in
Section~\ref{sec:ist-example}, in Figure~\ref{fig:ist}. Its principal type is the following,
which is hardly readable:
\[\small\begin{array}{l@{~}l}
\multicolumn{2}{l}{\forall \bar\mvar_i, a_1, a_2. \Binds_0 =>
\lfont{\intref\;a_1 \rightarrow \intref\;a_2 \rightarrow
\mvar_{27}\;()}} \\
\text{where } \Binds_0 = &
\bind{(\tfont{Bot},\mvar_{3})}{\mvar_{2}}, \bind{(\tfont{Bot},\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\; a_2)}{\mvar_{3}}, \bind{(\mvar_{26},\tfont{Bot})}{\mvar_{4}},
\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{4}}, \\
& \bind{(\mvar_{8},\mvar_{4})}{\mvar_{6}}, \bind{(\tfont{Bot},\mvar_{9})}{\mvar_{8}}, \bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{9}},
\bind{(\mvar_{11},\mvar_{25})}{\mvar_{26}}, \\
& \bind{(\tfont{Bot},\mvar_{12})}{\mvar_{11}}, \bind{(\tfont{Bot},\ensuremath{\mbox{\textit{IST}}}\; \tfont{H}\;
a_1)}{\mvar_{12}}, \bind{(\mvar_{17},\mvar_{23})}{\mvar_{25}},
\bind{(\mvar_{14},\mvar_{18})}{\mvar_{17}}, \\
& \bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{18}}, \bind{(\tfont{Bot},\mvar_{15})}{\mvar_{14}},
\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{15}},
\bind{(\mvar_{20},\mvar_{24})}{\mvar_{23}}, \\
& \bind{(\tfont{Bot},\ensuremath{\mbox{\textit{IST}}}\; a_1\; \tfont{L})}{\mvar_{24}}, \bind{(\tfont{Bot},\mvar_{21})}{\mvar_{20}},
\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_{21}}.
\end{array}\]
After applying (S-$\Uparrow$) and (S-$\Downarrow$) several times, and
then hiding redundant constraints, we simplify $\Binds_0$ to $\Binds$
which contains only three constraints. If we had fixed $a_1$ and $a_2$
(the labels of the function parameters) to $\tfont{H}$ and $\tfont{L}$,
respectively, we could do even better. The three constraints would be
$\bind{(\ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{L},\mvar_{6})}{\mvar_{27}},
\bind{(\tfont{Bot},\tfont{Bot})}{\mvar_6},\bind{(\ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{H},\ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{L})}{\mvar_{6}}$.
Then, applying (S-$\sqcup$) to $\mvar_6$ we would get $\mvar_{6}
\mapsto \ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{H}$, which when applied to the other constraints
leaves only $\bind{(\ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{L},\ensuremath{\mbox{\textit{IST}}}\,\tfont{H}\,\tfont{H})}{\mvar_{27}}$,
which cannot be simplified further, since $\mvar_{27}$ appears in the
result type.
Pleasingly, this process yields a simpler type that can be used in the
same contexts as the original principal type, so we are not
compromising the generality of the code by simplifying its type.
\begin{lemma}[Simplification improves types]
\label{lem:simplification}
For a principal polymonad, let $\sigma$ be $\forall
\bar{a}\bar{\mvar}. \Binds => \tau$ and let $\sigma'$ be an \emph{improvement} of
$\sigma$, of the form $\forall \bar{a'}\bar{\mvar'}. \theta\Binds => \tau$ where $\Binds
\simp \theta$ and $\bar{a'}\bar{\mvar'} = (\bar{a}\bar{\mvar}) - dom(\theta)$. Then
for all $\Binds'', \Gamma, x, e, m, \tau$, if
$\prefix{\Binds''}{\Gamma,x@\sigma} e : m\,\tau$ such that $|=
\Binds''$ then there exists some $\Binds'''$ such that
$\prefix{\Binds'''}{\Gamma,x@\sigma'} e : m\,\tau$ and $|= \Binds'''$.
\end{lemma}
\iffull
\begin{proof}
The proof is by induction on the derivation
$\prefix{\Binds''}{\Gamma,x@\sigma} e : m\,\tau$. Most cases are by
assumption or induction, with the interesting one being (TS-Var)
where the variable in question is $x$, and we know that all of the
constraints are solvable according to the reasoning we used to
justify the simplifications, above.
\end{proof}
\fi
Note that our $\simp$ relation is non-deterministic in the way it
picks constraints to analyze, and also in the order in which rules are
applied. In practice, for an acyclic constraint graph, one could
consider nodes in the graph in topological order and, say,
apply (S-$\sqcup$) first, since, if it succeeds, it eliminates a
variable. For principal polymonads and acyclic constraint
graphs, this process would always terminate.
However, if unification constraints induce cycles in the
constraint graph, simply computing joins as solutions to internal
variables may not work. This should
not come as a surprise. In general, finding solutions to arbitrary
polymonadic constraints is undecidable, since, in the limit, they can
be used to encode the correctness of programs with general
recursion. Nevertheless, simple heuristics such as unrolling cycles in
the constraint graph a few times may provide good mileage, as would
the use of domain-specific solvers for particular polymonads, and such
approaches are justified by our coherence proof.
\section{Related work and conclusions}
This paper has presented \emph{polymonads}, a generalization of monads
and morphisms, which, by virtue of their relationship to Tate's
\emph{productoids}, are extremely powerful, subsuming monads,
parameterized monads, and several other interesting
constructions. Thanks to supporting algorithms for (principal) type
inference, (provably coherent) elaboration, and
(generality-preserving) simplification (none of which Tate considers),
this power comes with strong support for the programmer. Like monads
before them, we believe polymonads can become a useful and important
element in the functional programmer's toolkit.
Constructions resembling polymonads have already begun to creep into
languages like Haskell. Notably, Kmett's
\texttt{Control.Monad.Parameterized} Haskell package~\cite{kmett}
provides a type class for bind-like operators that have a signature
resembling our $\tbind{(m_1,m_2)}{m_3}$. One key limitation is that
Kmett's binds must be \emph{functionally dependent}: $m_3$ must be
functionally determined from $m_1$ and $m_2$. As such, it is not possible to program
morphisms between different constructors, i.e., the pair of
binds $\tbind{(m_1,\tfont{Bot})}{m_2}$ and $\tbind{(m_1,\tfont{Bot})}{m_3}$ would be
forbidden, so there would be no way to convert from $m_1$ to $m_2$ and
from $m_1$ to $m_3$ in the same program. Kmett also requires units
into $\tfont{Bot}$, which may later be
lifted, but such lifting only works for first-order code before running afoul
of Haskell's ambiguity restriction. Polymonads do not have either
limitation. Kmett does not discuss laws that should govern the proper
use of non-uniform binds. As such, our work provides the formal basis
to design and reason about libraries that functional
programmers have already begun developing.
While polymonads subsume a wide range of prior monad-like
constructions, and indeed can express any system of \emph{producer
effects}~\cite{tate12productors}, as might be expected, other researchers have explored
generalizing monadic effects along other dimensions that are
incomparable to polymonads. For example, Altenkirch et
al.~\cite{Altenkirch10relative} consider \emph{relative monads} that
are not endofunctors; each polymonad constructor must be an
endofunctor. Uustalu and Vene~\cite{Uustalu08comonad} suggest
structuring computations comonadically, particularly to work with
context-dependent computations. This suggests a loose connection with
our encoding of contextual effects as a polymonad, and raises the
possibility of a ``co-polymonad'', something we leave for the
future. Still other generalizations include reasoning about effects
equationally using Lawvere theories~\cite{plotkin01semantic} or with
arrows~\cite{Hughes00arrows}---while each of these generalize monadic
constructions, they appear incomparable in expressiveness to
polymonads. A common framework to unify all these treatments of
effects remains an active area of research---polymonads are a useful
addition to the discourse, covering at least one large area of the
vast design space.
\let\oldthebibliography=\thebibliography
\let\endoldthebibliography=\endthebibliography
\renewenvironment{thebibliography}[1]
{\begin{oldthebibliography}{#1}
\vskip -20mm
\setlength{\parskip}{0ex}
\setlength{\itemsep}{.25ex}}
{\end{oldthebibliography}}
\begin{small}
\bibliographystyle{eptcs}
\section{Introduction}
Over the last fifteen years, considerable effort has gone into trying
to analyse nuclear forces using the systematic tools of effective field
theory (EFT).\footnote{For reviews, from various points of view,
see Refs.~\cite{border,bvkrev,eprev}.} The starting point was Weinberg's
original proposal \cite{wein} that these forces could be described within
the framework of chiral perturbation theory (ChPT). This approach
organises the terms in the effective Lagrangian or Hamiltonian
according to powers of low-energy scales they contain. These scales,
generically denoted $Q$, include momenta and the pion mass. In this
``Weinberg" power counting, the leading terms of the nucleon-nucleon
potential are one-pion exchange (OPE) and an energy-independent contact
interaction, both of which are of order $Q^0$.
Weinberg also noted the enhancement of the nonrelativistic two-nucleon
propagator near threshold and proposed that the leading terms in the
potential should be iterated to all orders in order to generate
nonperturbative effects, such as the deuteron bound state. This approach,
referred to here as the ``Weinberg--van-Kolck" (WvK) scheme, has been
widely applied by van Kolck and collaborators and by many
others.\footnote{Examples of successful applications can be found in the
reviews cited in footnote 1.} However, even with this enhancement,
nucleon-nucleon loop integrals are of order $Q$. Although this is lower
than the order $Q^2$ expected in a relativistic theory, it means that
each iteration of the leading potential in a scattering equation raises
the order by one power of $Q$. Hence the resulting amplitude should still
be perturbatively expandable in powers of the scales $Q$.
In order to justify treating the leading terms in the potential
nonperturbatively, a further IR enhancement is needed, to promote them
to order $Q^{-1}$. Such a promotion is only possible within a consistent
power counting if we can identify additional low-energy scales in the
nucleon-nucleon system. In the case of $S$-wave scattering, the large
scattering lengths provide such scales, and these lead to an EFT
in which the leading, energy-independent contact terms are treated
nonperturbatively \cite{bvk,vk,ksw}. At low energies, where the
finite-range of OPE is not resolved, the resulting expansion of the
potential is simply the effective-range expansion \cite{bethe,newton}.
Although a similar systematic justification for the iteration of OPE was
not provided, the WvK scheme has been successfully used to describe a
variety of few-nucleon systems and their interactions. Nonetheless,
its validity in the $^3S_1$--$^3D_1$ channel has been questioned
\cite{bbsvk} and, more recently, several groups have observed that
Weinberg power counting can break down for nucleon-nucleon scattering in
spin-triplet channels with nonzero orbital angular momentum
\cite{ntvk,birse,em}.\footnote{Closely related observations can be
found in the work of Pav\'on Valderrama and Ruiz Arriola \cite{pvra}.}
In particular, Nogga, Timmermans and van Kolck \cite{ntvk} find that the
leading contact interactions can be substantially promoted in channels
where tensor OPE is attractive, although this conclusion does depend on
the choice of cut-off, as stressed by Epelbaum and Meissner \cite{em}.
To establish a quantitative form for this new power counting, we
need first to identify a low-energy scale that would justify iterating OPE,
and then to analyse the scale dependence of the associated short-range
interactions. The renormalisation group (RG) \cite{wrg} provides the
natural tool for such an analysis. In its Wilsonian version, it has been
applied to two-body scattering by short-range forces, showing that the
effective-range expansion is based on a nontrivial fixed point of the
RG flow \cite{bmr}. Distorted-wave methods have been used to extend the
approach to systems with known long-range forces \cite{bb1,bb2}.
Once a factor of $1/M_{\scriptscriptstyle N}$ has been divided out of the
Hamiltonian, the strength of the OPE potential can be expressed in terms
of the momentum scale
\begin{equation}
\lambda_\pi=\frac{m_\pi^2}{f_{\pi{\scriptscriptstyle NN}}^2
M_{\scriptscriptstyle N}}\simeq 290\;\mbox{MeV},
\label{eq:lambdapi}
\end{equation}
where $f_{\pi{\scriptscriptstyle NN}}$ is the pseudovector $\pi$N coupling
constant. In the chiral limit, it can be written
\begin{equation}
\lambda_\pi=\frac{16\pi F_\pi^2}{g_{\scriptscriptstyle A}^2
M_{\scriptscriptstyle N}},
\end{equation}
where $M_{\scriptscriptstyle N}$ is the nucleon mass, $g_A$ is the axial
coupling of the nucleon and $F_\pi$ the pion decay constant. In strict
ChPT, $\lambda_\pi$ is therefore a high-energy scale, built out of
$4\pi F_\pi$ and $M_{\scriptscriptstyle N}$. Nonetheless, its numerical value
is small, only about twice $m_\pi$. As a result, perturbative treatments of
OPE (as advocated by Kaplan, Savage and Wise \cite{ksw}) fail to converge or
converge only slowly \cite{fms,bbsvk}. This suggests that we should explore
the consequences of identifying $\lambda_\pi$ as a low-energy scale, counting
it as of order $Q$. Since $\lambda_\pi$ is proportional to
$1/M_{\scriptscriptstyle N}$, this can be thought of as a concrete version of
Weinberg's suggestion that $1/M_{\scriptscriptstyle N}$ should be treated as
if it were of order $Q$ \cite{wein}.
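As a quick numerical cross-check of the quoted value (a sketch; the inputs $F_\pi\simeq 92.4$~MeV, $g_{\scriptscriptstyle A}\simeq 1.26$ and $M_{\scriptscriptstyle N}\simeq 938.9$~MeV are standard values assumed here, not taken from the text), the chiral-limit expression indeed gives a scale close to 290~MeV:

```python
import math

# Assumed standard values (MeV where dimensionful):
F_pi = 92.4   # pion decay constant
g_A = 1.26    # nucleon axial coupling
M_N = 938.9   # nucleon mass

# lambda_pi = 16 pi F_pi^2 / (g_A^2 M_N), chiral-limit form from the text
lam_pi = 16 * math.pi * F_pi ** 2 / (g_A ** 2 * M_N)  # roughly 288 MeV
```

The result is indeed only about twice $m_\pi$, which is what motivates counting it as a low-energy scale.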
In Ref.~\cite{birse}, I applied a renormalisation group analysis to the
short-range potential in the presence of tensor OPE. This made use of the
distorted waves (DW's) of a $1/r^3$ potential (the chiral limit of tensor
OPE). In the resulting power counting, the leading short-range potential is
of order $Q^{-1/2}$, independently of the orbital angular momentum. This is
quite different from Weinberg counting, where the leading term in the
$L$-th partial wave is of order $Q^{2L}$. Subleading terms containing
powers of the energy appear at orders $Q^{3/2}$, $Q^{7/2}$, and so on. This
promotion of short-range terms confirms the numerical observations of Nogga,
Timmermans and van Kolck \cite{ntvk} and makes quantitative the new counting
proposed there. The terms in the resulting potential can be directly
related to a DW Born expansion, similar to that in Refs.~\cite{bb1,bb2}.
The validity of this counting does depend on the energies considered, since
in each channel there is a critical momentum above which waves penetrate
the centrifugal barrier and reach the region where the $1/r^3$ singularity
dominates. The analyses of Refs.~\cite{ntvk,birse} show that a nonperturbative
treatment of OPE, and hence the new counting, is needed in the $S$, $P$
and $D$ waves for momenta of order $m_\pi$. In contrast, waves with
$L\geq 3$ do not probe the singularity until momenta of $\sim 2$~GeV are
reached. In these higher partial waves OPE can be treated as a perturbation
and short-distance interactions can be organised according to the usual
Weinberg power counting.
The results of the analysis of Ref.~\cite{birse} were purely formal,
leading to the power counting that governs the importance of the terms in the
expansion of the short-range potential. In the present paper, I explore
its practical consequences by analysing nucleon-nucleon scattering in
spin-triplet channels, with an extension of the method applied to
singlet channels in Ref.~\cite{bmcg}. For simplicity I consider only
the uncoupled waves: $^3P_{0,1}$, $^3D_2$, $^3F_3$ and $^3G_4$. The
extension of the method to coupled waves such as $^3S_1$--$^3D_1$ or
$^3P_2$--$^3F_2$ is very similar in principle, but is technically more
complicated because of the matrix nature of the equations.
The RG analysis relies on the forms of the DW's at small radii, where they
tend to asymptotic forms that are independent of energy. These waves are
obtained by solving the Schr\"odinger equation, as described in Sec.~II. At
small enough radii they show nonperturbative behaviour controlled by the
$1/r^3$ singularity of the tensor potential. In the case of waves with
$L\leq 2$, this region extends out to about 1~fm. For lab kinetic energies
up to 300~MeV the waves reach their asymptotic forms only for radii less
than about 0.6~fm, and there they are dominated by the $1/r^3$ potential.
This nonperturbative behaviour is present in waves with $L\geq 3$, but only
for radii less than about 0.2~fm. In the range 0.2--0.6~fm they
have the normal power-law forms associated with the centrifugal barrier. This
confirms the expectations in Refs.~\cite{ntvk,birse} that low partial waves
need the new power counting for energies in this range, whereas the higher
waves can still be described perturbatively using Weinberg counting.
I then use DW methods to ``deconstruct" empirical scattering amplitudes
by removing the effects of known long-range forces. The residual
amplitude can then be interpreted directly in terms of an effective
short-range potential. This technique can provide a better indication
of how well the known forces are able to describe the scattering, compared
to simply plotting phase shifts. Such plots can be misleading since they
tend to hide small differences in peripheral waves at low energies,
which is just where the long-range forces should dominate. If the resulting
potential still shows strongly nonlinear energy dependence at low energies,
then this implies that long-range forces are still making important
contributions. Short-range forces lead to a smooth energy dependence
that can be expanded as a power series.
As in the similar treatment of singlet channels \cite{bmcg}, I take
several Nijmegen PWA's or potentials \cite{nijnn}, to give an indication
of the uncertainties involved in these analyses of the data. In Sec.~III,
I use them to construct scattering amplitudes between the DW's of the
OPE potential and from these I extract short-range potentials in the
uncoupled spin-triplet channels. The resulting potentials show rapid
energy dependence at low energies, indicating that important long-range
physics is still present.
The most obvious long-range forces that need to be removed next are
two-pion exchange (TPE) and relativistic corrections to OPE. These appear
at orders $Q^2$ and $Q^3$ (in Weinberg counting). I use here the forms
of the TPE potentials given in Refs.~\cite{kbw,nij99} and the corresponding
order-$Q^2$ correction to OPE \cite{friar}. At this order, there is
also a $\gamma\pi$-exchange potential, calculated in Ref.~\cite{fvkpc}.
These potentials can all be subtracted perturbatively using the DWBA.
The residual short-distance interactions shown in
Sec.~IV are consistent with the new power counting in the $^3P_{0,1}$ and
$^3D_2$ waves. In higher waves, $^3F_3$ and $^3G_4$, the uncertainties in
the Nijmegen PWA's make it hard to draw very strong conclusions but the
residual short-range potentials are smaller after removal of TPE, and
similar to those in the singlet channels \cite{bmcg}.
\section{Distorted waves}
The radial Schr\"odinger equation that describes the relative motion of two
nucleons interacting through the long-range OPE potential is
\begin{equation}
-\,\frac{1}{M_{\scriptscriptstyle N}}\left[\frac{{\rm d}^2}{{\rm d}r^2}
+\frac{2}{r}\,\frac{{\rm d}}{{\rm d}r}-\frac{L(L+1)}{r^2}\right]\psi(r)
+\Bigl[V_{\pi {\scriptscriptstyle C}}(r)
+V_{\pi {\scriptscriptstyle T}}(r)\Bigr]
\,\psi(r)=\frac{p^2}{M_{\scriptscriptstyle N}}\psi(r),
\label{eq:se}
\end{equation}
where the central piece of the lowest-order potential is
\begin{equation}
V_{\pi {\scriptscriptstyle C}}(r)=\frac{1}{3}\,f_{\pi{\scriptscriptstyle NN}}^2
\frac{e^{-m_\pi r}}{r}\, ({\boldsymbol\sigma}_1\cdot{\boldsymbol\sigma}_2)
({\boldsymbol\tau}_1\cdot{\boldsymbol\tau}_2),
\end{equation}
and its tensor piece is
\begin{equation}
V_{\pi {\scriptscriptstyle T}}(r)=\frac{1}{3}\,
\frac{f_{\pi{\scriptscriptstyle NN}}^2}{m_\pi^2}
\left(3+3m_\pi r+m_\pi^2 r^2\right)
\frac{e^{-m_\pi r}}{r^3}\, S_{12} ({\boldsymbol\tau}_1
\cdot{\boldsymbol\tau}_2).
\end{equation}
Here the tensor operator, $S_{12}=3({\boldsymbol\sigma}_1\cdot\hat{\bf r})
({\boldsymbol\sigma}_2\cdot\hat{\bf r})
-{\boldsymbol\sigma}_1\cdot{\boldsymbol\sigma}_2$, takes the value
$+2$ in the uncoupled $^3P_1$, $^3D_2$, \dots~channels, and $-4$ in the
$^3P_0$ channel. The isospin factor,
${\boldsymbol\tau}_1\cdot{\boldsymbol\tau}_2$, is $+1$ for channels with
odd $L$ and $-3$ for even $L$.
The on-shell momentum in the centre-of-mass frame, denoted
by $p$, is related to the lab kinetic energy, $T$, by
$T=2p^2/M_{\scriptscriptstyle N}$.
At small enough radii, all the solutions in a given channel tend to a common,
energy-independent form which is determined by the $1/r^3$ term of the tensor
potential and the $1/r^2$ centrifugal barrier. This form can be found by
solving Eq.~(\ref{eq:se}) at zero energy in the chiral limit ($m_\pi=0$) where
it can be written
\begin{equation}
\left[\frac{{\rm d}^2}{{\rm d}r^2}
+\frac{2}{r}\,\frac{{\rm d}}{{\rm d}r}-\frac{L(L+1)}{r^2}
-\frac{\beta_{LJ}}{r^3}\right]\psi_0(r)=0.
\label{eq:zese}
\end{equation}
Here I have specialised to the case of uncoupled triplet channels and I have
introduced the length scale
\begin{equation}
\beta_{LJ}=\left\{\begin{array}{rl} -4/\lambda_\pi, &\quad L=1,\ J=0,\cr
\noalign{\vspace{5pt}}
+2/\lambda_\pi, &\quad L=J\ \mbox{odd,}\cr
\noalign{\vspace{5pt}}
-6/\lambda_\pi, &\quad L=J\ \mbox{even.}\end{array}\right.
\end{equation}
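These three cases are just the products of the tensor and isospin eigenvalues quoted above, since in the chiral limit $\beta_{LJ}\lambda_\pi$ reduces to $S_{12}\,({\boldsymbol\tau}_1\cdot{\boldsymbol\tau}_2)$; a minimal check (Python, for illustration only):

```python
# beta_LJ * lambda_pi = S12 * (tau1 . tau2) for the uncoupled triplet waves
def beta_numerator(L, J):
    s12 = -4 if (L, J) == (1, 0) else 2   # tensor eigenvalue: -4 in 3P0, +2 for L = J
    tau = 1 if L % 2 else -3              # isospin factor: +1 for odd L, -3 for even L
    return s12 * tau
```

Negative values ($^3P_0$ and the even-$J$ waves) correspond to an attractive $1/r^3$ tail, positive ones to a repulsive tail.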
The solutions in this limit can be expressed in terms of Bessel functions
of order $2L+1$. This can be seen by defining the variable
$x=\sqrt{|\beta_{LJ}|/r}$ and the function
$\phi(x)=x^{-1}\psi_0(|\beta_{LJ}|/x^2)$, so that the equation becomes
\begin{equation}
\left[\frac{{\rm d}^2}{{\rm d}x^2}
+\frac{1}{x}\,\frac{{\rm d}}{{\rm d}x}-\frac{(2L+1)^2}{x^2}
\pm 4\right]\phi(x)=0,
\label{eq:zese2}
\end{equation}
where the plus sign applies to channels with even $J$ (where the
tensor potential is attractive) and the minus sign to odd $J$.
The solutions in the attractive channels are oscillatory:
\begin{equation}
\psi_0(r)=A\sqrt{\frac{|\beta_{LJ}|}{r}}\,\left[\sin\alpha\,
J_{2L+1}\!\left(2\sqrt{\frac{|\beta_{LJ}|}{r}}\right)
+\cos\alpha\,
Y_{2L+1}\!\left(2\sqrt{\frac{|\beta_{LJ}|}{r}}\right)\right],
\label{eq:zeatt}
\end{equation}
where $J_{2L+1}$ and $Y_{2L+1}$ denote the regular and irregular Bessel
functions. In the limit $r\rightarrow 0$, these solutions tend to the WKB
form of a sinusoidal function of $2\sqrt{|\beta_{LJ}|/r}$ times $r^{-1/4}$.
They depend on a free parameter $\alpha$. This angle fixes the phase
of the small-$r$ oscillations or, equivalently, it specifies a self-adjoint
extension of the original Hamiltonian (see Refs.~\cite{bb2,birse} for
further references). This is necessary since both Bessel functions
give acceptable solutions for small $r$. There is a redundancy between
$\alpha$ and the leading term in the effective short-range potential since
both have the effect of fixing the phase of the wave function for small $r$.
In waves where the scattering is weak, it is simplest to set $\alpha=0$
and use the potential to represent short-distance physics. This leads to
solutions to Eq.~(\ref{eq:zese}) that grow like $r^L$ for large $r$. In
channels where the OPE potential can be treated perturbatively,
this allows the waves to match on to their usual short-distance forms at
larger radii where the centrifugal barrier dominates over the tensor
potential.
The special choice $\alpha=\pi/2$ gives a solution to Eq.~(\ref{eq:zese})
that decays like $r^{-(L+1)}$ for large $r$. Imposing this boundary
condition on the full OPE problem would lead to a wave that was very large
inside the attractive well of the $1/r^3$ potential. This
would correspond to a system with a low-energy bound state or resonance.
Since none of the NN channels with $L>0$ has such a low-energy state,
values of $\alpha$ close to $\pi/2$ should be avoided. In practice,
$\alpha=0$ is a good choice for all but one of the waves studied here.
In the repulsive channels, the solutions are given by modified Bessel
functions, and the regular one has the form
\begin{equation}
\psi_0(r)=A\sqrt{\frac{|\beta_{LJ}|}{r}}\,
K_{2L+1}\!\left(2\sqrt{\frac{|\beta_{LJ}|}{r}}\right).
\label{eq:zerep}
\end{equation}
This vanishes exponentially with $2\sqrt{|\beta_{LJ}|/r}$ as
$r\rightarrow 0$ but, like Eq.~(\ref{eq:zeatt}), grows as $r^L$ for large $r$.
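The large-$r$ growth of this regular repulsive-channel solution can be checked by direct integration. The sketch below (Python; the starting radius, step size and WKB-motivated initial condition are choices of convenience, not taken from the paper) integrates the reduced equation $u''=[L(L+1)/r^2+\beta/r^3]u$ for $u=r\psi$ outward for a $^3P_1$-like case and extracts the local power law of $\psi_0=u/r$, which approaches $r^L$ once $r\gg\beta$:

```python
import math

def psi_outward(L, b, radii, r0=0.05, h=5e-4):
    """RK4 integration of u'' = [L(L+1)/r^2 + b/r^3] u (u = r*psi, b > 0),
    started deep inside the repulsive 1/r^3 region with a WKB-like form."""
    u = math.exp(-2.0 * math.sqrt(b / r0))       # ~ exp(-2 sqrt(b/r)) near r = 0
    up = u * math.sqrt(b) / r0 ** 1.5            # its derivative at r0
    f = lambda r, u: (L * (L + 1) / r ** 2 + b / r ** 3) * u
    r, out, targets = r0, [], sorted(radii)
    while targets:
        if r >= targets[0]:
            out.append(u / r)                    # psi = u / r
            targets.pop(0)
            continue
        k1u, k1p = up, f(r, u)
        k2u, k2p = up + 0.5 * h * k1p, f(r + 0.5 * h, u + 0.5 * h * k1u)
        k3u, k3p = up + 0.5 * h * k2p, f(r + 0.5 * h, u + 0.5 * h * k2u)
        k4u, k4p = up + h * k3p, f(r + h, u + h * k3u)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        up += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        r += h
    return out

# A 3P1-like case: L = 1, beta = +2/lambda_pi with lambda_pi ~ 1.47 fm^-1
psi5, psi10 = psi_outward(1, 2.0 / 1.47, [5.0, 10.0])
slope = math.log(psi10 / psi5) / math.log(2.0)   # local power law; tends to L
```

Between 5 and 10~fm the extracted exponent is already close to $L=1$, with a small residual correction from the $1/r^3$ tail.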
It is convenient to normalise these solutions so that, as $r$ increases,
they match on to the expected short-distance behaviour of the free solutions,
$j_L(pr)/p^L$. (Since the asymptotic solutions are defined at zero energy,
I have divided out the $p^L$ energy dependence from the spherical Bessel
functions.) With this normalisation,
\begin{equation}
\psi_0(r)\sim \frac{r^L}{(2L+1)!!},\qquad\mbox{as}\ r\rightarrow\infty,
\label{eq:sdsphbes}
\end{equation}
and the asymptotic solutions become
\begin{equation}
\psi_0(r)=\left\{\begin{array}{ll}
-\,{\displaystyle\frac{\pi|\beta_{LJ}|^L}{(2L)!(2L+1)!!\cos\alpha}}\,
\sqrt{{\displaystyle\frac{|\beta_{LJ}|}{r}}}\,\left[\sin\alpha\,
J_{2L+1}\!\left(2\sqrt{{\displaystyle\frac{|\beta_{LJ}|}{r}}}\right)
+\cos\alpha\,
Y_{2L+1}\!\left(2\sqrt{{\displaystyle\frac{|\beta_{LJ}|}{r}}}\right)\right]
&\quad J\ \mbox{even,}\cr
\noalign{\vspace{5pt}}
{\displaystyle\frac{2|\beta_{LJ}|^L}{(2L)!(2L+1)!!}}\,
\sqrt{{\displaystyle\frac{|\beta_{LJ}|}{r}}}\,
K_{2L+1}\!\left(2\sqrt{{\displaystyle\frac{|\beta_{LJ}|}{r}}}\right)
&\quad J\ \mbox{odd.}\end{array}\right.
\label{eq:zesolns}
\end{equation}
Fig.~\ref{fig:wfns} shows the solutions to the full Schr\"odinger equation
(\ref{eq:se}) for the channels $^3P_0$, $^3P_1$, $^3D_2$ and $^3G_4$ at
lab kinetic energies of 5 and 300~MeV, and compares them to the
zero-energy, chiral-limit solutions of Eq.~(\ref{eq:zesolns}). To make this
comparison easier, I have also divided out the $p^L$ energy dependence
from the full solutions. The waves shown for the attractive channels all
have the short-distance phase $\alpha=0$.
\begin{figure}[tb]
\includegraphics[width=17.5cm,keepaspectratio,clip]{wfplots.eps}
\caption{\label{fig:wfns} Plots of the wave functions $\psi(r)/p^L$ (in
arbitrary units) against $r$ (in fm) for the channels (a) $^3P_0$, (b) $^3P_1$,
(c) $^3D_2$, and (d) $^3G_4$. Short-dashed lines: $T=5$~MeV;
long-dashed lines: $T=300$~MeV; solid lines: energy-independent
asymptotic form from Eq.~(\ref{eq:zese}).}
\end{figure}
Although there is still some energy dependence in their normalisations,
the shapes of the solutions in these channels reach their energy-independent
forms for radii smaller than about $0.8$~fm. The waves in channels where the
tensor potential is attractive all oscillate for small enough radii.
In the lowest waves, $^3P_0$ and $^3D_2$, this behaviour covers the whole
energy-independent region. Indeed the first node of the $^3P_0$ wave
function appears at the edge of or, depending on the choice of $\alpha$,
within the domain of the low-energy EFT. Moreover the normalisation of the
short-distance wave functions shows significant energy-dependence, beyond
the $p^L$ expected from the free solutions.
In contrast, the $^3G_4$ wave becomes oscillatory only for radii smaller than
about 0.25~fm, beyond the scope of our EFT. In the rest of its energy-independent
region it has power-law behaviour as expected from the centrifugal potential.
More importantly from the EFT point of view, the energy dependence of the
short-distance normalisation is almost entirely given by the $p^L$ factor
expected for a free solution. These results are consistent with the
estimates in Ref.~\cite{birse} based on the chiral limit of the tensor
potential. There, the critical momentum scales for the breakdown of
perturbation theory were found to be of the order of $m_\pi$ or
$\lambda_\pi$ in the $S$, $P$ and $D$ waves, and much larger ($\sim 2$~GeV)
in the $F$ waves and above.
The $^3D_2$ and $^3G_4$ waves have low-energy bound states or narrow
resonances if $\alpha$ is taken to be close to $\pi/2$. The
phase shifts can depend significantly on the choice of $\alpha$ in
this region which, in the $^3D_2$ case, is roughly
$\pi/4\lesssim\alpha\lesssim 3\pi/4$. The high centrifugal barrier in
the $^3G_4$ wave means its phase shift is very weakly dependent on $\alpha$,
outside a very narrow band around $\pi/2$. In contrast the $^3P_0$ phase
shift shows a strong dependence on $\alpha$ over the whole range
$0\leq\alpha<\pi$.
The wave functions in the repulsive channels $^3P_1$ and $^3F_3$ both have
qualitatively similar forms, and so only one is shown in Fig.~\ref{fig:wfns}.
They switch from power-law behaviour at larger radii, where the
centrifugal barrier dominates, to exponential at small radii, where the
$1/r^3$ potential wins out. In both cases the energy dependence of the
short-distance normalisation is quite well described by the free $p^L$ form,
at least for energies below about 300 MeV. At higher energies than
considered here, 500 MeV or above, the $^3P_1$ normalisation does develop
a much stronger energy dependence (whereas the $^3F_3$ does not). These
numerical observations indicate that the effect of the finite
pion mass has been to shift the critical momentum scale in the $^3P_1$ channel
to a somewhat higher value than the estimate in Ref.~\cite{birse}.
From the wave functions alone, it is thus not clear whether tensor
OPE is better treated nonperturbatively in this channel.
\section{One-pion-exchange effective potential}
The RG method developed in Refs.~\cite{bb1} shows that the terms in
the short-range effective potential are directly related to an expansion of
the DW $K$-matrix in powers of the energy and other low-energy scales.
A DW approach like this is obviously needed in channels where OPE must
be treated nonperturbatively. I will also use it in the channels where a
perturbative treatment of OPE would be valid, since it provides a convenient
alternative to fourth-order perturbation theory. The scattering between DW's
of the long-range potential can be described by a reactance matrix
$\widetilde K$. In the uncoupled channels, its on-shell matrix element
is related to the difference between the observed and OPE phase shifts by
\begin{equation}
\widetilde K(p)=-\,\frac{4\pi}{Mp}\,
\tan\Bigl(\delta_{\scriptscriptstyle\rm PWA}(p)
-\delta_{\scriptscriptstyle\rm OPE}(p)\Bigr),
\end{equation}
where $\delta_{\scriptscriptstyle\rm PWA}(p)$ is the empirical phase shift
(from the Nijmegen group's PWA or one of their potentials that have
been fitted directly to data) and
$\delta_{\scriptscriptstyle\rm OPE}(p)$ is that obtained from
the solutions of Eq.~(\ref{eq:se}).
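As a minimal numerical sketch of this relation (illustrative only: the nucleon mass value is an assumption and the phase shifts are placeholders, with a factor of $(\hbar c)^2$ inserted to express $\widetilde K$ in fm$^2$):

```python
import numpy as np

HBARC = 197.327  # MeV fm
M_N = 938.92     # average nucleon mass in MeV (assumed value)

def k_tilde(p_mev, delta_pwa_deg, delta_ope_deg):
    """DW reactance-matrix element (fm^2) from the difference between the
    empirical (PWA) and OPE phase shifts, given in degrees; p in MeV."""
    d = np.radians(delta_pwa_deg - delta_ope_deg)
    # 4 pi / (M p) carries units 1/MeV^2; (hbar c)^2 converts to fm^2
    return -4.0 * np.pi / (M_N * p_mev) * np.tan(d) * HBARC**2
```

A residual phase shift above the OPE one thus corresponds to a negative (attractive, in this sign convention) $\widetilde K$.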
None of the channels examined here contains a low-energy bound state or
resonance, and so the residual scattering can be represented by an EFT
expanded around a trivial fixed point. The amplitude $\widetilde K(p)$ is
then given by the distorted-wave Born approximation (DWBA): the matrix
element of the short-range interaction between the DW's of the long-range
potential.
Since these DW's are either vanishing or singular as $r\rightarrow 0$, an
ordinary $\delta$-function at the origin cannot be used to represent the
potential. Instead, I take a $\delta$-shell potential, as in
Refs.~\cite{bb1,bmcg,birse}. If the radius of this, $R_0$, is chosen to
be smaller than about 0.7~fm, in the region where the waves have reached
their common energy-independent forms, then the extracted potential
will be independent of $R_0$ to a good approximation.\footnote{Obviously,
points where the wave functions vanish in the attractive channels should be
avoided for numerical reasons. Similarly very small values of $R_0$,
$\sim 0.01$~fm or less, should not be used in the repulsive channels.}
The results shown below are for $R_0=0.1$~fm, the same radial cut-off
as used in Ref.~\cite{bmcg}, but I return to the question of the choice of
cut-off at the end of Sec.~IV, after examining the subtraction of TPE.
To remove the dependence on the arbitrary radius $R_0$,
I divide out the square of the asymptotic radial function,
Eqs.~(\ref{eq:zeatt}) or (\ref{eq:zerep}), from the strength of the
$\delta$-shell potential. The resulting potential is defined by
\begin{equation}\label{eq:vshort}
V_S(p,r)=\frac{1}{4\pi R_0^2|\psi_0(R_0)|^2}\,\widetilde V(p)\,\delta(r-R_0),
\end{equation}
as in the RG analysis of Refs.~\cite{bb1,birse}.
Equating $\widetilde K(p)$ to the DWBA matrix element of this potential,
we can deduce the strength of the potential directly from the residual
scattering amplitude, as in Ref.~\cite{bmcg}:
\begin{equation}\label{eq:vtilde}
\widetilde V(p)=\frac{|\psi_0(R_0)|^2}{|\psi(p,R_0)|^2}\,\widetilde K(p),
\end{equation}
where $\psi(p,R_0)$ is the solution to the Schr\"odinger equation
with the known long-range potential, and $\psi_0(R_0)$ is its
energy-independent short-distance form.
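The extraction itself is just a rescaling of the residual amplitude by the wave functions at the shell radius; a minimal sketch (hypothetical function name, wave-function values assumed to be supplied from a Schr\"odinger-equation solver):

```python
def v_tilde(k_tilde_p, psi_p_R0, psi0_R0):
    """Strength of the delta-shell potential, Eq. (vtilde):
    rescale the DW K-matrix by the ratio of the energy-independent
    short-distance wave function to the full one, both at r = R0."""
    return (abs(psi0_R0) / abs(psi_p_R0)) ** 2 * k_tilde_p
```

Because the ratio involves only values at $R_0$, any choice of $R_0$ inside the energy-independent region gives (to a good approximation) the same $\widetilde V(p)$.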
The leading term in the short-distance interaction represents only
short-distance physics, and so it should be independent of any low-energy
(long-distance) scales such as $p$, $m_\pi$ or $\lambda_\pi$. To ensure this,
any dependence on these scales should be factored out of the normalisation
of the asymptotic solutions in Eqs.~(\ref{eq:zeatt}) and (\ref{eq:zerep}).
In waves where the OPE potential can be treated perturbatively, this means
normalising these functions by dividing out the energy-dependent factor of
$p^L$, as in Eq.~(\ref{eq:zesolns}). The resulting potentials in the
$^3F_3$ and $^3G_4$ channels are defined in the same way as the ones
used to analyse the spin-singlet channels in Ref.~\cite{bmcg}.
At LO their matrix elements are proportional to $p^{2L}$,
showing that they are equivalent to $2L$-th derivatives of
$\delta$-functions (the more conventional representations for
the effective interactions in these partial waves).
As discussed above, $\lambda_\pi$ should be regarded as an additional
low-energy scale in the channels where OPE must be treated
nonperturbatively. Since $\beta_{LJ}\propto 1/\lambda_\pi$, the wave
functions normalised as in Eq.~(\ref{eq:zesolns}) still contain powers of
$\lambda_\pi$. This can be removed by dividing out the factors of
$|\beta_{LJ}|^{L+1/4}$ from the zero-energy solutions. Using solutions
with this normalisation has the effect of multiplying $\widetilde V(p)$ by
$\lambda_\pi^{2L+1/2}$. This obviously changes the magnitudes of the
short-range interactions, but it does not affect their energy
dependences. In the plots below, I just show results for $\widetilde V(p)$
defined using the forms given in Eq.~(\ref{eq:zesolns}), but the extra
factors would be needed if one wanted to estimate the momentum scales
in this potential.
Once the effects of leading OPE have been removed, the residual potential
starts at order $Q^2$ in the standard power-counting notation.
Contributions at this order include the leading TPE potential
\cite{kbw,nij99}, and a relativistic correction to OPE \cite{friar}.
Since the power counting for the long-range forces is not affected by
the renormalisation of the short-range ones, I denote this potential
by $\widetilde V^{(2)}(p)$. In Weinberg's counting for a partial wave
with angular momentum $L$, the leading short-range term appears at the
usual order, $Q^{2L}$. However, if OPE is treated nonperturbatively,
we need to take account of the fact that this term also contains
a factor of $\lambda_\pi^{-(2L+1/2)}$. Since $\lambda_\pi$
should now be regarded as a low-energy scale, we see that the
net order of such a term is $Q^{-1/2}$, for any $L$. This half-integer
power agrees with the RG analysis in Ref.~\cite{birse}, which shows
that the leading term has an RG eigenvalue of $1/2$. After extracting
the effects of tensor OPE we are therefore left with an interaction
that starts at order $Q^{-1/2}$. This also contains a term proportional
to energy ($p^2$) at order $Q^{3/2}$.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{pspl1234.eps}
\caption{\label{fig:ps1234} Plots of the short-distance interaction
$\widetilde V^{(2)}(p)$, in fm$^{2L+2}$, against lab kinetic energy $T$,
in MeV. These have been extracted from Nijmegen PWA's or potentials
using Eq.~(\ref{eq:vtilde}) with $R_0=0.1$~fm, for the $np$ channels
(a) $^3P_1$, (b) $^3D_2$, (c) $^3F_3$, and (d) $^3G_4$.}
\end{figure}
Fig.~\ref{fig:ps1234} shows the short-distance interactions
$\widetilde V^{(2)}(p)$ extracted directly from various Nijmegen
analyses \cite{nijnn}, using Eq.~(\ref{eq:vtilde}) with $R_0=0.1$~fm.
The partial-wave analysis, PWA93, and three potentials, NijmegenI,
NijmegenII and Reid93, all fit the $np$ data with similarly good values
of $\chi^2$. They can thus be regarded as alternative partial-wave
analyses. Using results from all of them gives an indication of the
systematic uncertainties associated with the different choices of
parametrisation. This is particularly important in higher partial waves
where each fit contains only a small number of parameters. In the
$^3D_2$ and $^3G_4$ channels where the tensor OPE is attractive, I have
taken the phase $\alpha=0$ since, as already noted, their OPE phase shifts
depend only weakly on $\alpha$, provided the region around $\pi/2$ is
avoided.
One immediate observation is that the residual interactions all show
rapid, nonlinear dependences on energy below about 150~MeV. This is
similar to what was found for the corresponding spin-singlet channels
in Ref.~\cite{bmcg}. Experience with those channels suggests that TPE
is responsible for much of this rapid energy dependence. Therefore
these forces need to be subtracted before any conclusions can be drawn
about short-range ones.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{pspl3p0.eps}
\caption{\label{fig:ps3p0} Plots of the $^3P_0$ short-distance
interaction $\widetilde V^{(2)}(p)$, in fm$^{4}$, against lab kinetic
energy $T$, in MeV. Results for two choices of short-distance
phase are shown: (a) $\alpha=0$, (b) $\alpha=0.54$.}
\end{figure}
Finally I turn to the $^3P_0$ wave, shown in Fig.~\ref{fig:ps3p0}.
The left-hand plot was obtained using $\alpha=0$, as in the $^3D_2$
and $^3G_4$ cases. As in the attractive $^3D_2$ channel, this shows
a rapid, non-monotonic energy dependence.
In addition, its overall strength is much larger than in the $^3P_1$
channel, by a factor of 30 or more. This implies that the leading
(order-$Q^{-1/2}$) interaction in this channel is too strong to be
treated in the DWBA. It should either be iterated to higher
orders or, possibly, treated nonperturbatively.
In practice, it is most convenient to include this term to all
orders using the equivalent short-distance parameter $\alpha$.
As discussed above, this defines a self-adjoint extension of
the OPE Hamiltonian in Eq.~(\ref{eq:se}) by fixing the phase of
the oscillations of the wave functions as $r\rightarrow 0$.
In the right-hand plot, I show the residual interaction
for $\alpha=0.54$. This value was chosen so that the
interaction vanishes at zero energy, within the uncertainties of the
PWA's. Removing the effects of the energy-independent term
in this way leaves a residual interaction that is consistent with
a monotonic piece plus one linear in the energy.
Again, more definite conclusions require subtraction of the effects
of other long-range forces, which I now turn to.
\section{Two-pion exchange}
The DW method described above could also be used to separate
off the scattering produced by other known long-range forces,
most importantly those arising from two-pion exchange. However,
since such forces start at order $Q^2$, their effects can just
be subtracted perturbatively from the residual interactions
left after removal of OPE \cite{bmcg}. Although iterations of TPE
will contribute at higher orders, this DWBA treatment is adequate
up to order $Q^4$.
The relevant TPE potentials have been calculated at orders $Q^2$ and
$Q^3$ and their forms can be found in Refs.~\cite{kbw,nij99}.
At order $Q^2$, the long-range interactions also include a term
from expanding the relativistic correction to OPE \cite{friar},
\begin{equation}\label{eq:ope2}
V^{(2)}_{1\pi}(r)=-\frac{p^2}{2M^2}\left[V_{\pi {\scriptscriptstyle C}}(r)
+V_{\pi {\scriptscriptstyle T}}(r)\right].
\end{equation}
Lastly, there is an electromagnetic interaction, generated by $\pi\gamma$
exchange, whose form is derived in \cite{fvkpc}.
If all of these are subtracted from $\widetilde K(p)$ using the DWBA, then
any residual long-ranged contributions to the scattering start at order
$Q^4$. The resulting potential is
\begin{equation}\label{eq:vt4}
\widetilde V^{(4)}(p)=\frac{|\psi_0(R_0)|^2}{|\psi(p,R_0)|^2}
\left(\widetilde K(p)-\langle\psi(p)|V_{1\pi}^{(2)}
+V_{2\pi}^{(2,3)}+V_{\pi\gamma}|\psi(p)\rangle\right).
\end{equation}
Note that although this potential is denoted $\widetilde V^{(4)}(p)$
because it contains long-range contributions of order $Q^4$ and higher,
it may contain short-range terms of lower order. Such terms appear
first at order $Q^2$ for $P$-waves in the usual power counting,
and at order $Q^{-1/2}$ for waves where tensor OPE is treated
nonperturbatively.
The long-range potentials are singular and so their matrix elements
can be divergent. I therefore cut the integrals off, using the same radius,
$R_0$, as in the definition of the short-range interaction. For example,
the most divergent part of the order-$Q^3$ TPE potential at small radii
is proportional to $1/r^6$ \cite{kbw,nij99}. In a perturbative treatment
of OPE, the wave functions behave like $r^L$ as $r\rightarrow 0$. Together
these lead to a $1/R_0$ divergence in the $P$-wave matrix elements
which can be renormalised by introducing the order-$Q^2$ counterterm
just mentioned. Other partial waves do not give rise to divergences to
this order.
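Schematically, such cut-off matrix elements are radial integrals taken from $R_0$ outwards. The sketch below is illustrative only (the reduced radial wave function and potential passed in are placeholders, not the chiral TPE forms); it also exhibits the growth of the integral as $R_0$ is reduced for a singular potential:

```python
import numpy as np

def dwba_me(psi, V, R0, r_max=25.0, n=5001):
    """Radial DWBA matrix element <psi|V|psi>, cut off below r = R0.
    psi is the reduced radial wave function, so no extra r^2 weight;
    psi and V are callables of r (in fm)."""
    r = np.linspace(R0, r_max, n)
    f = psi(r) ** 2 * V(r)
    # trapezoidal rule, written out to stay NumPy-version independent
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))
```

For example, with $\psi\sim r$ ($P$ wave, perturbative OPE) and $V\sim 1/r^6$, the integral grows like $1/R_0^3$ as the cut-off radius shrinks, which is the divergence renormalised by the order-$Q^2$ counterterm.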
When OPE is treated nonperturbatively the small-$r$ forms of the wave
functions are quite different, as described above. The $r^{-1/4}$
behaviour of these leads to a $1/R_0^{7/2}$ divergence in
the matrix element of the order-$Q^3$ potential, at
least in the channels where tensor OPE is attractive. This can be
renormalised by the leading short-distance potential,
of order $Q^{-1/2}$. In addition, there are other, weaker
divergences which appear multiplied by powers of $\lambda_\pi$ or
$m_\pi$ and which can be renormalised by higher-order counterterms.
However these energy-independent terms can not be disentangled
phenomenologically from the leading one. The relativistic
correction to OPE, Eq.~(\ref{eq:ope2}), also leads to a
divergence and this can be renormalised by an order-$Q^{3/2}$ term,
proportional to $p^2$. The complete set of terms with orders
below $Q^4$ also includes one proportional to the square of the
energy, $p^4$. This term is of order $Q^{7/2}$ and it has a finite
coefficient at the current level of approximation. However, when
long-range forces of order $Q^4$ are included, it will be needed to
renormalise divergences from, for example, the next relativistic
correction to OPE.
The short-range terms up to order $Q^{7/2}$ provide a smooth,
quadratic energy-dependence. Subtracting them
should leave a residual interaction consisting only of terms of
order $Q^4$ or higher. I do this by fitting a quadratic form,
\begin{equation}
\widetilde V^{(7/2)}(p)=C_0+C_2p^2+C_4p^4,
\end{equation}
to $\widetilde V^{(4)}(p)$ in the range $T=100$ to 200~MeV.
The lower end of this range is chosen to lie above the region where OPE
and TPE can lead to rapid energy dependence. The upper limit corresponds
to a relative momentum $p\simeq 300$~MeV. Above this point, higher
powers of the energy could start to become noticeable, representing
physics that has been integrated out, such as excitation of the $\Delta$
resonance.
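The fitting step can be sketched as a linear least-squares problem in powers of $p^2$. The Python below is illustrative; the nonrelativistic relation $p^2 = MT_{\rm lab}/2$ and the nucleon mass value are assumptions, used only because they reproduce the quoted $p\simeq 300$~MeV at $T=200$~MeV.

```python
import numpy as np

M_N = 938.92  # average nucleon mass in MeV (assumed value)

def p_rel(T_lab):
    """Nonrelativistic relative momentum (MeV) for lab kinetic energy T (MeV)."""
    return np.sqrt(M_N * np.asarray(T_lab, dtype=float) / 2.0)

def fit_short_range(T_vals, v_vals):
    """Least-squares fit of C0 + C2 p^2 + C4 p^4 to extracted V^(4) values
    over the chosen fitting window (here intended as T = 100-200 MeV)."""
    p2 = p_rel(T_vals) ** 2
    A = np.vander(p2, 3, increasing=True)  # columns: 1, p^2, p^4
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(v_vals, dtype=float), rcond=None)
    return coeffs  # C0, C2, C4
```

Subtracting the fitted quadratic form from $\widetilde V^{(4)}(p)$ then leaves only contributions of order $Q^4$ and higher.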
Let me start with the two peripheral waves, where the breakdown scale
for perturbation theory is so high that there is no question about
the validity of the perturbative treatment of OPE or the standard
power counting. As in Ref.~\cite{bmcg}, I use the full DW solutions
for these waves, although this is not strictly necessary, to avoid the
complications of fourth-order perturbation theory. The residual
interactions $\widetilde V^{(4)}(p)$ for these are shown in
Fig.~\ref{fig:v34}. In both cases, $\widetilde V^{(4)}(p)$ was extracted at
$R_0=0.1$~fm, which is small enough that the wave functions have attained
their energy-independent forms and that the $^3F_3$ radial integral has
converged. The $^3G_4$ integral has reached a plateau here; the divergences
associated with the oscillatory region do not appear until $R_0$ is less
than about 0.03~fm.
As in the corresponding spin-singlet waves \cite{bmcg}, the uncertainties
in the PWA's make it hard to draw strong conclusions. The $^3G_4$ residual
interactions, $\widetilde V^{(2)}(p)$ and $\widetilde V^{(4)}(p)$, are both
consistent with zero, given the spread of the PWA's. In the $^3F_3$
channel, the energy dependence in $\widetilde V^{(2)}(p)$ below
100~MeV is somewhat reduced by subtraction of the order-$Q^{2,3}$
long-range forces.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{v4pl34.eps}
\caption{\label{fig:v34} Plots of the short-distance
interaction $\widetilde V^{(4)}(p)$, in fm$^{2L+2}$, against lab kinetic
energy $T$, in MeV, for the channels: (a) $^3F_3$, (b) $^3G_4$.}
\end{figure}
The interaction $\widetilde V^{(4)}(p)$ in the $^3P_0$ channel is shown in
Fig.~\ref{fig:v0}. The unsubtracted interaction, on the left, is very large,
with a strong linear energy dependence, as a result of the divergences in the
integrals of the order-$Q^{2,3}$ potentials. The residual interaction after
subtracting a quadratic fit is shown on the right.\footnote{In this channel, I
have omitted the Reid93 results from the fit since they lie systematically
below the ones from the other three Nijmegen analyses. Including this potential
would simply shift the fitted constant, and significantly increase the
uncertainty associated with the PWA's.} This shows that the quadratic
form can provide a good account of the energy dependence of
$\widetilde V^{(4)}(p)$ in the $^3P_0$ channel over the whole range of $T$
up to 300~MeV. Any contributions from forces of order $Q^4$ and higher lie
within the uncertainties of the PWA's.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{v4pl0.eps}
\caption{\label{fig:v0} Plots of the short-distance
interaction $\widetilde V^{(4)}(p)$, in fm$^{4}$, against lab kinetic
energy $T$, in MeV, for the channel $^3P_0$ with $\alpha=0.54$, (a)
unsubtracted, (b) quadratic fit to $T=100-200$~MeV subtracted.}
\end{figure}
Fig.~\ref{fig:v2} shows the results of a similar analysis of scattering
in the $^3D_2$ channel. Here, the quadratic fit can account well for the
energy dependence from $T\simeq 80$~MeV to about 250~MeV. Below 80~MeV,
the PWA's require a significant extra attractive interaction with a long
range in view of its rapid energy dependence. It is not obvious what could
be responsible for this, since no appropriate piece seems to
be present in the order-$Q^4$ pion-exchange potentials \cite{kaiser,machl}.
However it should be noted that its size is comparable to the uncertainties
in the PWA's in this region.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{v4pl2.eps}
\caption{\label{fig:v2} Plots of the short-distance
interaction $\widetilde V^{(4)}(p)$, in fm$^{6}$, against lab kinetic
energy $T$, in MeV, for the channel $^3D_2$ with $\alpha=0$, (a)
unsubtracted, (b) quadratic fit to $T=100-200$~MeV subtracted.}
\end{figure}
Finally, $\widetilde V^{(4)}(p)$ for the $^3P_1$ channel has a smooth,
approximately linear energy dependence, as can be seen in Fig.~\ref{fig:v1}.
The value at zero energy increases as the cut-off radius $R_0$ is decreased
from 0.6~fm to about 0.2~fm. This is consistent with the standard power
counting, where a divergence is present at order $Q^2$. In that counting,
this is the only divergence below order $Q^4$. However the coefficient of
the linear energy dependence in the results here also increases significantly
over this range of $R_0$. This suggests that the $^3P_1$ channel may in fact
be better analysed using the new power counting, a conclusion that is
supported by the low breakdown scale for the perturbative treatment of OPE
found in Ref.~\cite{birse}.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{v4pl1.eps}
\caption{\label{fig:v1} Plots of the short-distance
interaction $\widetilde V^{(4)}(p)$, in fm$^{4}$, against lab kinetic
energy $T$, in MeV, for the channel $^3P_1$, (a) unsubtracted,
(b) quadratic fit to $T=100-200$~MeV subtracted.}
\end{figure}
The results shown so far are all obtained with a radial cut-off
$R_0=0.1$~fm. This is well inside the region where the dependence on
$R_0$ is very small for the range of energies considered but it does
correspond to momentum scales greater than 1~GeV,
far beyond the range of validity of our EFT. This small radial cut-off
was used in Ref.~\cite{bmcg} to avoid introducing artefacts proportional
to positive powers of $R_0$. The coefficients in the residual potential
could then be related directly to scales of the underlying physics. The
connection is more complicated here, as one would first need to renormalise
the coefficients to remove the divergences proportional to powers of
$1/R_0$ (and there are several of these in the energy-independent term,
involving different powers of $m_\pi$ and $\lambda_\pi$). As a result,
I do not attempt to make such an interpretation of the coefficients here,
and hence the choice of such a small radius is not crucial.
In this case, one can ask what happens if a larger cut-off radius
is chosen. The question is pertinent since Epelbaum and Meissner
\cite{em} have shown that a single extra term, as required by the new
counting, can give a fairly good description of $P$- and $D$-wave phase
shifts for momentum cut-offs as low as $\sim 3$~fm$^{-1}$. Such values
would also avoid regions where TPE might need to be treated
nonperturbatively. I have therefore repeated these analyses with larger
cut-off radii. The results are essentially indistinguishable from the
ones shown above for radii up to about 0.6~fm (except in regions around
zeros of the wave functions). Beyond that point, the wave functions do
show noticeable energy dependence over the range up to $T=300$~MeV, as
already noted from Fig.~\ref{fig:wfns}. This leads to a dependence on
$R_0$ of the residual interactions, especially at higher energies.
If these artefacts of a finite radial cut-off are expanded as a power
series in $R_0^2p^2$, the first three terms (up to order $p^4$) can be
absorbed in the coefficients of the quadratic fit. They are therefore
removed when this fitted potential is subtracted as described above.
However, they will generate terms in the renormalised coefficients of
the short-range potential where the momentum scale is set by $1/R_0$
and not by the underlying physics. This would make it difficult to
interpret these coefficients in terms of physical scales.
This cut-off dependence increases with energy and so it is most
prominent at higher energies. A measure of how much the shape of the
waves changes is provided by the ratio of ratios,
\begin{equation}
\rho=\left|\frac{\psi(p_{\rm max},R_0)\psi(0,R_1)}
{\psi(p_{\rm max},R_1)\psi(0,R_0)}\right|^2,
\end{equation}
in which the energy-dependent normalisation of the short distance
wave functions cancels out. For $R_0=0.6$~fm, $R_1=0.1$~fm and
$p_{\rm max}=375$~MeV (corresponding to $T=300$~MeV), $\rho$ is
greater than 0.8 for most of the waves considered here. The $\sim 20\%$
changes that this induces in the residual interactions can
be removed by the subtraction of the quadratic fit. The resulting
subtracted interactions are almost indistinguishable from the ones
shown in Figs.~\ref{fig:v2} (b) and \ref{fig:v1} (b),
except for the $^3P_0$ which shows larger effects as a result of
a nearby zero in its wave functions.
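This quantity needs only the wave functions at the two radii; a minimal sketch (hypothetical argument names) which, by construction, returns exactly 1 for any wave that factorises into an energy-dependent normalisation times a fixed radial shape:

```python
def shape_ratio(psi_pmax_R0, psi0_R1, psi_pmax_R1, psi0_R0):
    """Ratio of ratios rho of Sec. IV: the overall energy-dependent
    normalisations of the short-distance waves cancel between numerator
    and denominator, leaving a measure of the change in shape."""
    return abs((psi_pmax_R0 * psi0_R1) / (psi_pmax_R1 * psi0_R0)) ** 2
```

Values of $\rho$ close to 1 thus signal that the shape of the wave between $R_1$ and $R_0$ is essentially energy independent.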
For $R_0=1.0$~fm, the ratio $\rho$ is around 0.6 for these waves.
The residual interaction $\widetilde V^{(4)}(p)$ for the $^3D_2$
channel is shown in Fig.~\ref{fig:v2n}. The divergent terms are much
smaller and so the overall magnitude of the potential is greatly
reduced compared to the one shown in Fig.~\ref{fig:v2} (a). However
the bulk of the change lies in the terms up to order $p^4$. After
these have been subtracted, the differences between Figs.~\ref{fig:v2}
(b) and \ref{fig:v2n} (b) are small, at least for energies below about
250~MeV. The pattern in the $^3P_1$ channel is similar. The $^3P_0$ is
complicated by the fact that the wave functions pass through zero near
0.8~fm. Cut-off-independent results for the subtracted interaction can
be obtained, but only for $R_0\lesssim 0.5$~fm or $R_0\simeq 1.2$~fm.
In the higher partial waves, $^3F_3$ and $^3G_4$, the unsubtracted
residual interactions show very little cut-off dependence over the
range $0.02\ {\rm fm}\lesssim R_0\lesssim 1$~fm.
\begin{figure}[h]
\includegraphics[width=17.5cm,keepaspectratio,clip]{v4pl2n.eps}
\caption{\label{fig:v2n} Plots of the short-distance
interaction $\widetilde V^{(4)}(p)$ for the channel $^3D_2$ with cut-off
radius of 1.0~fm. Other details are the same as in Fig.~\ref{fig:v2}.}
\end{figure}
\section{Conclusions}
Nogga, Timmermans and van Kolck \cite{ntvk} have found that a new
power counting is needed to organise the EFT describing
nucleon-nucleon scattering in spin-triplet channels. In particular,
the leading short-distance terms in $P$ and $D$ waves are significantly
promoted compared to the perturbative (Weinberg) counting. In
Ref.~\cite{birse} I obtained a quantitative statement of this new
counting by identifying the scale of the OPE potential, $\lambda_\pi$,
as an additional low-energy scale, and then treating OPE nonperturbatively.
This leads to an expansion of the short-range interaction
describing scattering between the DW's of the OPE potential.
Its terms correspond to those of an expansion of the DW Born
amplitude in powers of the energy. The leading term is of
order $Q^{-1/2}$, for any orbital angular momentum.
The forms of the DW's show that a nonperturbative treatment of tensor OPE,
and hence the new power counting, is required for energies that are large
enough for the waves to probe the region where the $1/r^3$ core of the
potential dominates over the centrifugal barrier. Otherwise the
short-distance wave functions have the normal $r^L$ behaviour and
perturbative counting remains valid. The analytic estimates
in Ref.~\cite{birse} and the numerical wave functions in Sec.~II both
indicate that the nonperturbative approach is required in waves with
$L\leq 2$ for momenta of the order of $m_\pi$ or larger. In $F$ waves and
above, a perturbative treatment is expected to remain valid
up to energies well beyond the validity of the EFT.
Here I have ``deconstructed" empirical phase shifts by using DW methods to
remove the effects of long-range pion-exchange forces. This generates a
residual short-range interaction directly from the observed phase shifts.
Unlike comparisons of phase-shift plots, this approach emphasises
the low-energy region, where the EFT description ought to work
best. The use of several Nijmegen partial wave analyses \cite{nijnn}
allows estimates of the uncertainties in these fits to the data. These
can be large at low energies, implying that it may be misleading to
fit the coefficients of an EFT to very low-energy data.
Removing only the effects of OPE leaves residual interactions with
strong energy dependences at low energies. This suggests that
higher-order long-range interactions are also important. After
removing the effects of OPE, I therefore use the DWBA to subtract the
contributions of other long-range potentials up to order $Q^3$.
These include TPE and a relativistic correction to OPE, all of which
are highly singular at the origin. Their DWBA matrix elements diverge
as inverse powers of the cut-off radius but these divergences can be
cancelled using counterterms at the orders required by the new power
counting. In contrast, Weinberg counting at order $Q^3$ would provide
only one, energy-independent counterterm in each $P$ wave, and none in
any higher wave.
TPE and other long-range forces up to order $Q^3$ are able to account
for much of the rapid energy dependence seen below 100~MeV in the $P$ and
$D$ waves. When these are subtracted the residual scattering
amplitudes can, in general, be well fitted by three contact terms
up to order $Q^{7/2}$ in the new counting. The only exception is
the $^3D_2$ channel, which seems to require an additional long-range
attraction. It is not clear where this could arise, given the forms of
the order-$Q^4$ chiral potentials \cite{kaiser,machl}, but one should
note that the uncertainties in the PWA's for this channel are
significant below about 70~MeV. Otherwise, TPE and the short-range
forces up to order $Q^{7/2}$ are able to give a good description of
the scattering in these triplet waves up to energies of about 250~MeV.
In the more peripheral $F$ and $G$ waves, the arguments of
Ref.~\cite{birse} suggest that the standard Weinberg counting should be
adequate for the energies considered here. The numerical wave functions
support this, their short-range forms having the expected $p^L$
dependence driven by the centrifugal barrier. Subtraction of the
long-range forces leaves small residual interactions, as in the
corresponding singlet channels studied in Ref.~\cite{bmcg}. Provided
the cut-off radius is larger than about 0.1~fm, there is no sign of
any divergences whose renormalisation would require the new power
counting.
These results indicate that the spin-triplet waves with
$L\leq 2$ can be analysed consistently using the nonperturbative power
counting developed in Refs.~\cite{ntvk,birse}. In addition, the results
show that deconstructing scattering amplitudes, using the approach of
Ref.~\cite{bmcg}, can provide a very useful tool for determining
effective potentials directly from empirical phase shifts. It should
be straightforward to extend the method to coupled channels, such as
$^3S_1$--$^3D_1$, despite the more complicated matrix equations involved.
It would also be interesting to use it to subtract two- and three-pion
exchanges at order $Q^4$ \cite{kaiser,machl}. However the approach also
demonstrates that there are significant uncertainties in the currently
available Nijmegen PWA's, particularly for energies below about 80 MeV
where the scattering is most sensitive to long-range forces. As a result,
attempts to study the importance of higher-order forces may require the
newer PWA's of Refs.~\cite{nij99,nij03}, when these become available.
\section*{Acknowledgments}
I am grateful to E. Epelbaum, H. Griesshammer, J. McGovern, D. Phillips and
U. van Kolck for helpful discussions.
\section{Introduction}
Martin-Löf's 1966 paper~\cite{martin1966definition} put the notion of an
individual random sequence on a sound mathematical footing. He gave a rigorous
definition of what it means for an infinite binary sequence (which we also refer
to as a \emph{real}) to be random with respect to a Bernoulli measure. Zvonkin
and Levin~\cite{zvonkin1970complexity} extended the definition to computable
measures on $2^\N$ and showed that every non-computable real $X \in 2^\N$ that
is random with respect to computable probability measure is Turing equivalent to
a sequence random with respect to Lebesgue measure on $2^\N$, the measure
induced by a fair coin toss on $\{0,1\}$. This marked one of the first results
connecting randomness and the degrees of unsolvability. Over the following
decades, our understanding of how randomness (in the sense of Martin-Löf and
related, algorithmically based notions) and computability interact has grown
tremendously. Two recent monographs attest to
this~\cite{Nies:2009a,downey2010algorithmic}. However, most investigations
focused on the computational properties of sequences that \emph{are} random with
respect to some kind of measure: Lebesgue measure (the vast majority of
results), but also other computable probability measures and Hausdorff measures.
This leaves open the question of whether we can characterize, in terms of computability
theory, the reals which do not exhibit any random behavior \emph{at all}. The
notion of ``being far from random'' has so far mostly been studied from the
point of view of \emph{triviality} and \emph{lowness}, which characterize reals
by having low initial-segment Kolmogorov complexity or by having little
derandomization power as oracles, respectively. We again refer to the
monographs~\cite{Nies:2009a,downey2010algorithmic} for an overview of a large
number of results in this direction.
This paper focuses on a different kind of question: {\em Given a real $X \in
2^\N$, and a family of probability measures $\mathcal{M}$, is $X$ random with
respect to a measure in $\mathcal{M}$, and if not, what is the computational
power of $X$?}
Levin~\cite{levin1976uniform} was the first to define Martin-Löf randomness for
arbitrary probability measures. Levin defined \emph{uniform tests} of
randomness. Such a test is a left-enumerable function $t$ that maps pairs of
measures and reals to non-negative real numbers or infinity such that for any
probability measure $\mu$ on $2^\N$, $\int t(\mu,X) d\mu(X) \leq 1$. A sequence
$X$ is random for $\mu$ if $t(\mu,X) < \infty$ for every uniform test $t$. A
different approach to randomness with respect to arbitrary measures was given by
Reimann and Slaman~\cite{reimann2015measures}. Their approach represents
measures as reals and makes these available as oracles in relativized Martin-Löf
tests. We will present more details on this approach in Section~\ref{sec:2}. Day
and Miller~\cite{day2013randomness} showed that the two approaches are
equivalent, that is, they define the same set of random reals.
It is a trivial fact that any real $X$ that is an \emph{atom} of a measure
$\mu$, i.e., $\mu\{X\} >0$, is random with respect to $\mu$. Reimann and
Slaman~\cite{reimann2015measures} showed that a real $X$ is non-trivially random
with respect to some probability measure $\mu$ if and only if $X$ is
non-computable. In other words, if we do not further restrict the family of
probability measures, a real has \emph{some} non-trivial random content if and
only if it is not computable. Day and Miller~\cite{day2013randomness} gave an
alternative proof of this result using Levin's neutral measures (a single measure
relative to which \emph{every} sequence is random).
A more intricate structure emerges when we ask which sequences are random with
respect to a \emph{continuous}, i.e.\ non-atomic, probability measure. Reimann
and Slaman~\cite{reimann2015measures} showed that if a sequence $X \in 2^\N$ is
not $\Delta^1_1$, it is random with respect to a continuous measure. We use the
term \emph{NCR} to denote those reals which are not random with respect to any
continuous measure. Kjos-Hanssen and Montalb\'{a}n~\cite{Montalban:2005a} showed
that every member of a countable $\Pi^0_1$ set of sequences is NCR. Cenzer, Clote,
Smith, Soare, and Wainer~\cite{cenzer1986members} showed that members of
countable $\Pi^0_1$ sets of reals exist in every Turing degree
$\mathbf{0}^{(\alpha)}$, where $\alpha$ is any computable ordinal. Therefore,
the Kjos-Hanssen--Montalb\'{a}n result implies that the set of NCR reals is
cofinal in the $\Delta^1_1$ Turing degrees.
On the other hand, Barmpalias, Greenberg, Montalb\'{a}n and
Slaman~\cite{barmpalias2012k} connected computational lowness with NCR by
showing that any real Turing below an incomplete r.e.\ degree is NCR. In
particular, every K-trivial is NCR. Their result makes use of a close
connection between the granularity function of a continuous measure (introduced
in the next section) and the settling time of a $\Delta^0_2$ real, which was
first observed by Reimann and Slaman~\cite{reimann-slaman:unpub}. The
granularity function (along with its ``companion'', the dissipation function of
a measure), will also play a central role in this paper.
The previous results suggest an attempt to classify the $\Delta^1_1$ Turing
degrees along the following lines:
\begin{enumerate}[(1)]
\item Which Turing degrees consist entirely of NCR reals?
\item Which Turing degrees do not contain any NCR reals?
\item Which Turing degrees contain NCR reals?
\end{enumerate}
Haken~\cite{haken2014randomizing} studied these questions with respect to
stronger randomness tests for arbitrary (not necessarily continuous) measures,
in particular difference and weak-$n$-randomness for $n \geq 2$. He also linked
continuous randomness to higher randomness by showing that NCR reals are not
3-randomizable, i.e., for any (possibly atomic) measure $\mu$ and any
representation $R_\mu$ of $\mu$, NCR reals are not $\mu$-random with respect to
any Martin-L\"{o}f $\mu$-test relative to $R_\mu''$.
Regarding Question (2), Reimann and Slaman~\cite{reimann2018effective} showed
that every real Turing below a (Lebesgue) $3$-random real and not recursive in
$0'$ is random with respect to a continuous measure.
In this paper, we mainly focus on Question (3). We construct NCR reals in
certain families of Turing degrees. Our main technique is to recursively
approximate non-random reals using other non-random reals which are, in a
certain sense, even ``more non-random''. For this purpose, we quantify
non-randomness with respect to a given measure. We introduce a new randomness
test parameterized by a natural number $n$ which corresponds to the level of
non-randomness. We should point out that the level $n$ of non-randomness we
define in this paper is not related to the notion of Martin-L\"{o}f
$n$-randomness.
This paper is organized as follows. In Section~\ref{sec:2}, we introduce the new randomness test which quantifies the level of non-randomness and prove some basic facts about it which we will need later. In Sections~\ref{sec:3} and~\ref{sec:4}, respectively, we present two constructions of reals based on levels of non-randomness, one for reals r.e.\ above (REA) a given real, the other one for reals with a self-modulus. Finally, in Section~\ref{sec:5}, we infer the existence of NCR reals in certain Turing degrees using the construction in Sections~\ref{sec:3} and~\ref{sec:4}. In particular, our constructions can be used to prove the following theorem.
\bigskip
\begin{restatable}{Thm}{mainone}~
\label{Thm:main1}
\begin{enumerate}[(a)]
\item Any $n$-REA Turing degree contains an NCR real.
\item Any self-modulus degree contains an NCR real.
\end{enumerate}
\end{restatable}
The theorem in particular implies
\begin{restatable}{Corollary}{maintwo}
\label{Thm:main2}
Every $\Delta^0_2$ degree contains an NCR element.
\end{restatable}
\medskip
\subsection*{Acknowledgments}
We would like to thank Ted Slaman for many helpful discussions, and for first
observing the relation between the granularity function of a measure and the
settling time of a real. This crucial initial insight inspired much of the work
presented here.
\subsection*{Notation}
In the following, we list the notation used in this paper. The reader can refer to \cite{soare2016turing} for more details.
\begin{itemize}
\item We use $\log$ to denote the binary logarithm.
\item Lower case Roman letters denote natural numbers,
except $f,g,h$ (and sometimes $s,t$), which denote functions.
\item We use capital Roman letters $X,Y,Z,A,B,C,R$ to denote sets of natural numbers as well as infinite binary strings (reals).
\item We use Greek letters $\sigma$,$\tau$ to denote finite binary strings. The length of a string $\sigma$ will be denoted by $|\sigma|$. We use $\Cyl{\sigma}$ to denote the set of all infinite binary strings extending $\sigma$.
\item We use $\operatorname{dom}(f)$ to denote the domain of a partial recursive function $f$.
\item We fix an effective enumeration $\{\Phi_i\}$ of all oracle Turing machines.
\item We use $\Phi^A_e$ to denote the machine with oracle $A$ and G{\"o}del number $e$. We write $\Phi^A_e(x) = y$ if the machine halts on input $x$ and outputs $y$. If $\Phi^A_e(x)$ does not halt, we write $\Phi^A_e(x) = \uparrow$. Finally, we let $W^A_e = \operatorname{dom}(\Phi^A_e)$.
\item We use $\Phi^A_{e,k}(x)$ to denote the $e$-th machine with oracle $A$ running for $k$ steps. Without loss of generality, $\Phi^A_{e,k}(x)=\uparrow$ when $x>k$. We put $W^A_{e,s} =\operatorname{dom}(\Phi^A_{e,s})\upharpoonright_s$.
\item We use $\sigma \ensuremath{\,\mbox{}^\frown\,} \tau$ to denote the concatenation of strings $\sigma$ and $\tau$.
\end{itemize}
\section{Quantifying non-randomness}
\label{sec:2}
In this section, we first briefly review the definition of randomness with respect to arbitrary measures given in \cite{reimann2015measures}. We refer the reader to \cite{reimann2015measures} and \cite{day2013randomness} for more details.
First of all, we define a metric structure on the set of all probability measures on $2^\omega$.
\begin{Def}
For any probability measures $\mu$ and $\nu$ on $2^\omega$, define the \textit{distance function} $d(\mu,\nu)$ as
$$d(\mu,\nu)=\sum_{\sigma\in 2^{<\omega}} 2^{-|\sigma|}|\mu\Cyl{\sigma}-\nu\Cyl{\sigma}|.$$
\end{Def}
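To make the metric concrete, here is a small Python sketch (an illustration of ours, not part of the formal development): a measure is represented, by assumption, as a function returning the mass of each cylinder, and the infinite sum defining $d(\mu,\nu)$ is truncated at a finite depth, with tail error at most $2^{1-\text{depth}}$.

```python
from itertools import product

def lebesgue(sigma):
    # Fair-coin measure: the cylinder [sigma] has mass 2^(-|sigma|).
    return 2.0 ** (-len(sigma))

def dirac_zero(sigma):
    # Point mass on 000...: [sigma] has mass 1 iff sigma consists of zeros.
    return 1.0 if all(b == "0" for b in sigma) else 0.0

def distance(mu, nu, depth):
    # Truncation of d(mu, nu) = sum_sigma 2^(-|sigma|) |mu[sigma] - nu[sigma]|
    # over all strings of length <= depth; the tail is at most 2^(1-depth).
    total = 0.0
    for l in range(depth + 1):
        for bits in product("01", repeat=l):
            sigma = "".join(bits)
            total += 2.0 ** (-l) * abs(mu(sigma) - nu(sigma))
    return total
```

For the fair-coin measure against the point mass on $000\ldots$, the truncated values converge to $4/3$.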
Let ${\mathcal{P}}(2^\omega)$ be the set of all probability measures on
$2^\omega$, and let $\mu_\sigma$ be the point mass concentrated on the real
$\sigma\ensuremath{\,\mbox{}^\frown\,} 0^\omega$, that is, for any
$H\subset 2^\omega$,
\begin{equation*}
\mu_\sigma(H) = \begin{cases}
1 &\text{if $\sigma\ensuremath{\,\mbox{}^\frown\,} 0^\omega \in H$,}\\
0 &\text{if $\sigma\ensuremath{\,\mbox{}^\frown\,} 0^\omega \notin H$. }
\end{cases}
\end{equation*}
The following properties hold.
\begin{Proposition}~
\begin{enumerate}[(1)]
\item $d(\mu,\nu)$ is a metric on ${\mathcal{P}}(2^\omega)$.
\item ${\mathcal{P}}(2^\omega)$ with the topology generated by $d(\mu,\nu)$ is a Polish space.
\item The closure of all $\mu_\sigma$ under binary average forms a countable dense subset of $({\mathcal{P}}(2^\omega),d)$.
\end{enumerate}
\end{Proposition}
For the proof, refer to \cite{reimann2015measures} or \cite{day2013randomness}.
The proposition allows for representing any element of ${\mathcal{P}}(2^\omega)$ by a Cauchy sequence of elements of the set in (3). Let us assume $\{\mu_0,\mu_1,\mu_2,\ldots\}$ is a fixed effective enumeration of the set in (3). Any sequence of measures in (3) can then be represented by its sequence of indices in $\{\mu_0,\mu_1,\mu_2,\ldots\}$. If one develops this correspondence carefully, it is possible to prove the following~\cite{day2013randomness}.
\begin{Proposition}
\label{Pro:repre}
There exists a Turing functional $\Gamma$, such that for any real $X$ and any natural number $n$, $\Gamma^X(n)\downarrow$, and the following hold.
\begin{enumerate}
\item $d(\mu_{\Gamma^X(n)},\mu_{\Gamma^X(n+1)})\leq 2^{-n}$;
\item the function $\rho: 2^\omega\rightarrow {\mathcal{P}}(2^\omega)$ defined as
$$\rho(X)=\lim_n \mu_{\Gamma^X(n)}$$
is a continuous surjection.
\item for any $X$, $\rho^{-1}(\{\rho(X)\})$ is $\Pi^0_1(X)$.
\end{enumerate}
\end{Proposition}
From now on, we fix a mapping $\rho$ as given by Proposition~\ref{Pro:repre}.
\begin{Def}
A \textit{representation} of a probability measure $\mu$ is a real $R$ such that $\rho(R)=\mu$.
\end{Def}
Note that for a given probability measure $\mu$, its representation might not be unique. However, any representation of $\mu$ can compute a two-sided effective approximation to $\mu\Cyl{\sigma}$, for any given $\sigma$.
Using representations as oracles, one can define randomness tests and computability relative to a given probability measure.
\begin{Def}
A \textit{Martin-L\"of-$\mu$-test relative to a representation $R_\mu$ (or simply
Martin-L\"of-$R_\mu$-test)} is a sequence of uniformly $\Sigma^0_1(R_\mu)$ sets
$(V_n)_{n\in \mathbb{N}}$ such that for all $n$, $\mu(V_n)\leq2^{-n}$. \\
$X\in 2^\omega$ \textit{passes} a Martin-L\"of-$R_\mu$-test if $X\notin\bigcap_{n\in \omega} V_n$.\\
For any probability measure $\mu$ on $2^\omega$ and a representation $R_\mu$ of $\mu$, $X\in 2^\omega$ is \textit{$R_\mu$-$\mu$-random} if $X$ passes every Martin-L\"{o}f-$\mu$ test relative to $R_\mu$.
\end{Def}
\begin{Def}
A set or function is \textit{$\mu$-computable ($\mu$-c.e.)} if it is computable (computably enumerable) in any representation of $\mu$.
\end{Def}
Finally, we can formally introduce the property NCR (not random w.r.t.\ any continuous measure).
\begin{Def}
A measure $\mu$ is \textit{continuous} if every singleton has $\mu$-measure $0$.
$X\in 2^\omega$ is \textit{NCR} if and only if $X$ is not
\textit{$R_\mu$-$\mu$-random} w.r.t.\ any continuous probability measure $\mu$
and any representation $R_\mu$ of $\mu$.
\end{Def}
Next, we introduce a new family of randomness tests. We will need two functions
for this, the dissipation function $g$ and the granularity $h$ of a measure.
\begin{Def}
For any continuous probability measure $\mu$, define the \emph{granularity
function} $g_\mu(n):=\min\{l: \forall |\sigma|=l, \mu\Cyl{\sigma}<2^{-n}\}$,
and define the \emph{dissipation function} $h_\mu(l):=\max\{n:\forall
|\sigma|=l, \mu\Cyl{\sigma}<2^{-n+1}\}$.
\end{Def}
We simply write $g(n)$ or $h(n)$ when the underlying measure is clear. The
function $g$ is well-defined by compactness of $2^\omega$. For any natural
number $n$, $g(n)$ gives a length $l$ by which the measure of any cylinder set
of length $l$ is less than $2^{-n}$. Given a length $l$, the dissipation
function $h(l)$ gives the best dyadic upper bound on the measure of cylinder
sets of length $l$: every such cylinder has measure less than $2^{-h(l)+1}$.
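For intuition, both functions can be computed by brute force when cylinder masses are available. The following Python sketch (our illustration; the two example measures are assumptions) does this for the fair-coin measure, where $h(l)=l$ and $g(n)=n+1$, and for a Bernoulli measure, where $h$ grows more slowly.

```python
from itertools import product

def lebesgue(sigma):
    # Fair-coin measure: every cylinder of length l has mass 2^(-l).
    return 2.0 ** (-len(sigma))

def bernoulli34(sigma):
    # Bernoulli measure giving bit 0 probability 3/4 and bit 1 probability 1/4.
    mass = 1.0
    for b in sigma:
        mass *= 0.75 if b == "0" else 0.25
    return mass

def max_mass(mu, l):
    # Largest mass of a cylinder [sigma] with |sigma| = l.
    return max(mu("".join(bits)) for bits in product("01", repeat=l))

def granularity(mu, n, l_max=64):
    # g(n): least l such that every length-l cylinder has mass < 2^(-n).
    for l in range(l_max + 1):
        if max_mass(mu, l) < 2.0 ** (-n):
            return l
    raise ValueError("no suitable length <= l_max; increase l_max")

def dissipation(mu, l):
    # h(l): largest n such that every length-l cylinder has mass < 2^(-n+1).
    m, n = max_mass(mu, l), 0
    while m < 2.0 ** (-n):
        n += 1
    return n
```

One can check numerically that $h(g(n))=n+1$, as asserted in the facts below.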
\begin{Fact} Here are some easy facts about $g$ and $h$.
\label{Fact:gh}
\begin{enumerate}[(1)]
\item $\forall n, n<g(n)<g(n+1)<g(g(n+1))$
\item $\forall l, h(l)\leq h(l+1)\leq h(l)+1\leq l+1$
\item $\forall n, h(g(n))=n+1$
\item $\lim_{l\rightarrow \infty}h(l)=\infty$
\item $g \equiv_T h$
\end{enumerate}
\end{Fact}
\begin{proof}
Properties (1)-(4) follow directly from the definition or via an easy induction.
For (5), $h(l)$ equals the largest $n$ such that $g(n-1)\leq l$, and $g(n)$ is equal to the least $l$ such that $h(l)=n+1$, so $g \equiv_T h$.
\end{proof}
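The two conversions used in the proof of (5) can be written out explicitly. The following sketch (an illustration of ours, assuming $g$ and $h$ are given as total functions with the properties listed in the fact above) recovers each function from the other; the Lebesgue-measure instances named below are our own choices.

```python
def g_from_h(h, n):
    # g(n) is the least l with h(l) = n + 1; since h is nondecreasing,
    # increases in steps of at most 1, and is unbounded, the scan terminates.
    l = 0
    while h(l) < n + 1:
        l += 1
    return l

def h_from_g(g, l):
    # h(l) is the largest n such that g(n - 1) <= l.
    n = 0
    while g(n) <= l:
        n += 1
    return n

h_leb = lambda l: l       # dissipation function of Lebesgue measure
g_leb = lambda n: n + 1   # granularity function of Lebesgue measure
```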
Notice that $g_\mu$ and $h_\mu$ are in general only $\mu$-c.e. But we have the following lemma, which will be useful in Section~\ref{sec:4}.
\begin{Lemma}
\label{Lemma:g*}
For any continuous measure $\mu$, there are $\mu$-computable, non-decreasing functions
$\hat{h}_\mu(n),\hat{g}_\mu(n)$ such that for all $n$,
\begin{gather*}
h_\mu(n)\leq \hat{h}_\mu(n)\leq \min\{n, h_\mu(n)+1\}, \\
g_\mu(n)\leq \hat{g}_\mu(n)\leq g_\mu(n+1).
\end{gather*}
\end{Lemma}
\begin{proof}
To define $\hat{h}_\mu$, note that any representation of $\mu$ can effectively find
an $n$ such that $2^{-n}<\mu([\sigma])<2^{-n+2}$, uniformly for any $\sigma$. Let
$\hat{h}_\mu(l)$ be the maximum such $n\leq l$ for all $\sigma$ with length $l$.
Now let $\hat{g}_\mu(n)$ be the minimum $l$ such that $\hat{h}_\mu(l)=n+2$. Since
$\hat{h}_\mu \geq h_\mu$, it follows from the observation in the proof of
Fact~\ref{Fact:gh}(5) that $\hat{g}_\mu(n) \leq g_\mu(n+1)$.
On the other hand, since $\hat{h}_\mu(l)\leq h_\mu(l)+1$ for all $l$, the equality $\hat{h}_\mu(\hat{g}_\mu(n))=n+2$ gives
$$h_\mu(\hat{g}_\mu(n))\geq n+1=h_\mu(g_\mu(n)).$$
Since $g_\mu(n)$ is the least $l$ with $h_\mu(l)=n+1$ and $h_\mu$ is monotonic, it follows that $\hat{g}_\mu(n)\geq g_\mu(n)$.
\end{proof}
A straightforward induction yields the following.
\begin{Corollary}
\label{Cor:h-0}
For the function $\hat{h}_\mu$ from Lemma~\ref{Lemma:g*}, we have that for all $l,n\in \mathbb{N}$, $$h^{(n)}_\mu(l)\leq \hat{h}_\mu^{(n)}(l)\leq h^{(n)}_\mu(l)+n.$$
\end{Corollary}
\medskip
We will now define a new randomness test. The reader should keep in mind our
main aim is to study not the random reals for a measure, but the non-random
reals. In particular, we want to devise a quantitative measure of \emph{how
non-random} a real is.
The main difference between our test and a regular Martin-L\"{o}f test is how we
weigh cylinders. In Martin-L\"{o}f tests, we set upper bounds on the measure of
a union of cylinders. Thus, for any finite string $\sigma$, its weight is
$\mu\Cyl{\sigma}$ under measure $\mu$. When $\mu$ is Lebesgue measure, strings
with the same length would have the same weight, but this is not generally true
for other measures. However, in our new test, we assign the same weight to
strings with the same length. This means we assign a measure $\mu$ a
corresponding \emph{Hausdorff measure}. The weight of each cylinder is
determined by the dissipation function $h_\mu$. To obtain the desired
stratification, we consider iterates of $h_\mu$. The more we iterate $h_\mu$,
the slower the induced function goes to infinity, and the harder it will be to
cover reals. For technical reasons, we need to multiply by a coefficient that is
also completely determined by $h_\mu$ and the level of iteration. As mentioned
before, we will write $h$ and $\hat{h}$ for $h_\mu$ and $\hat{h}_\mu$, respectively, if
the underlying measure $\mu$ is clear.
\begin{Def}
\label{Def:level}
For any continuous measure $\mu$, a \textit{level-$n$ Solovay test for $\mu$}
is a $\mu$-c.e. sequence $T_n$ of finite binary strings such
that
$$\sum_{\sigma \in T_n} (h^{(n)}(|\sigma|))^{\log n}2^{-h^{(n)}(|\sigma|)}<\infty.$$
We say $A \in 2^\N$ \emph{fails} $T_n$ if $A\in \Cyl{\sigma}$ for infinitely
many $\sigma \in T_n$. We say $A$ is \textit{non-$\mu$-random of level $n$} if
it fails some level-$n$ Solovay test for $\mu$, and we say $A$ is
\textit{non-$\mu$-random of level $\omega$} if it is non-$\mu$-random of level $n$ for all
natural numbers $n$.
\end{Def}
Note that the level of a test as defined above has nothing to do with what is sometimes called the level of a Martin-L\"{o}f test (i.e., the $n$-th uniformly c.e.\ set in a Martin-L\"{o}f test). In our definition, it is a parameter used to measure how non-random a real is with respect to a specific continuous measure. In the following, we assume, without loss of generality, that all tests are infinite.
If $\mu$ is Lebesgue measure, we have $h_\mu(n) = n$ and thus,
$$\sum_{\sigma \in T_n} (h^{(n)}(|\sigma|))^{\log
n}2^{-h^{(n)}(|\sigma|)}=\sum_{\sigma \in T_n}|\sigma|^{\log n}2^{-|\sigma|},$$
so in this case a level-$1$ Solovay test coincides with the standard notion of a Solovay test \cite[6.2.7]{downey2010algorithmic}.
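The weight assigned to a single string can be computed directly from the dissipation function. The following Python sketch (our illustration; the sample dissipation functions are assumptions) shows that for Lebesgue measure the level-$1$ weight of a string of length $l$ is exactly $2^{-l}$, while a slower-growing $h$ inflates the weights and thus makes the convergence condition harder to satisfy.

```python
from math import log2

def iterate(h, n, l):
    # The n-fold iterate h^(n)(l) = h(h(... h(l) ...)).
    for _ in range(n):
        l = h(l)
    return l

def level_weight(h, n, l):
    # Weight (h^(n)(l))^(log n) * 2^(-h^(n)(l)) contributed by one string of
    # length l to the convergence condition of a level-n Solovay test.
    v = iterate(h, n, l)
    return v ** log2(n) * 2.0 ** (-v)

h_lebesgue = lambda l: l   # dissipation of Lebesgue measure: h(l) = l
h_slow = lambda l: l // 2  # a hypothetical slower-growing dissipation function
```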
\medskip
We next establish some basic properties of the new test notion.
The following lemma follows by computing the derivative: $f'(x)=x^{\log n-1}\,2^{-x}\,(\log n-x\ln 2)$, which is negative exactly when $x>\log n/\ln 2$.
\begin{Lemma}
\label{Le:mono}
The function $f(x):=x^{\log n}2^{-x}$ is strictly decreasing to $0$ from above for $x>\log n/\ln 2$.
\end{Lemma}
We first show that $\mu$-computable reals are non-$\mu$ random of level $\omega$.
\begin{Proposition} \label{prop:computable_omega}
If a real $A$ is computable in $\mu$, then $A$ is non-$\mu$ random of level $\omega$ for all continuous measures $\mu$.
\end{Proposition}
\begin{proof}
If $A$ is a $\mu$-computable real, then we can compute arbitrarily long initial segments of $A$ from any representation of $\mu$. By Fact \ref{Fact:gh}, the $\mu$-computable function $\hat{h}(l)$ is non-decreasing, $h(l)\leq \hat{h}(l)\leq h(l)+1$, and $\lim_{l\rightarrow \infty}\hat{h}(l)$ and $\lim_{l\rightarrow \infty}h(l)$ are both infinite. Then for any natural number $n$, if $\sigma$ is an initial segment of $A$ and $\hat{h}^{(n)}(|\sigma|)$ is greater than $n+\log n/\ln 2$, by Lemma \ref{Le:mono} and Corollary \ref{Cor:h-0}, we have the following inequality:
$$(h^{(n)}(|\sigma|))^{\log n}2^{-h^{(n)}(|\sigma|)}\leq (\hat{h}^{(n)}(|\sigma|)-n)^{\log n}2^{-\hat{h}^{(n)}(|\sigma|)+n}.$$
So, for fixed $n$, let $\{\sigma_i\}$ be a $\mu$-computable sequence of initial segments of $A$ such that the following two inequalities are satisfied, for all $i \in \omega$:
\begin{gather*}
\hat{h}^{(n)}(|\sigma_i|)>n+\log n/\ln 2, \\
(\hat{h}^{(n)}(|\sigma_i|)-n)^{\log n}2^{-\hat{h}^{(n)}(|\sigma_i|)+n}<2^{-i}.
\end{gather*}
Then $\{\sigma_i\}_{i \in \mathbb{N}}$ is a level-$n$ test which covers $A$. Therefore, $A$ is non-$\mu$ random of level $\omega$.
\end{proof}
The next proposition shows the relation between level tests and Martin-L\"{o}f tests.
\begin{Proposition}
If a real $A$ is non-$\mu$-random of level $1$, then $A$ is not $\mu$-Martin-L\"{o}f random.
\end{Proposition}
\begin{proof}
If $n=1$, the sum in Definition~\ref{Def:level} becomes
\[
\sum_{\sigma\in T_1} 2^{-h(|\sigma|)}.
\]
By the definition of $h$, we have $\mu\Cyl{\sigma} < 2^{-h(|\sigma|)+1}$, thus any level-$1$ test is a standard Solovay test.
Moreover, for a probability measure, any real covered by a Solovay test is also
covered by a Martin-Löf test, see for example~\cite[Theorem~6.2.8]{downey2010algorithmic}.
\end{proof}
Next, we show that the level tests are indeed nested.
\begin{Proposition}
Every level-$n$ test is also a level-$(n-1)$ test.
\end{Proposition}
\begin{proof}
Assume $\{\sigma_i\}_{i\in \mathbb{N}}$ is a level-$n$ test. By Fact \ref{Fact:gh}, for all but finitely many $i$,
$$h^{(n-1)}(|\sigma_i|) \geq h^{(n)}(|\sigma_i|)>\log (n-1)/\ln 2.$$
By Lemma \ref{Le:mono} and the inequality above, for all but finitely many $i$, the following holds: $$(h^{(n-1)}(|\sigma_i|))^{\log (n-1)}2^{-h^{(n-1)}(|\sigma_i|)}<(h^{(n)}(|\sigma_i|))^{\log (n-1)}2^{-h^{(n)}(|\sigma_i|)}.$$
Furthermore, we know that $h^{(n)}(|\sigma_i|)\geq 1$ for all but finitely many $i$ and that $\log (n-1)<\log n$, so we have
$$(h^{(n)}(|\sigma_i|))^{\log (n-1)}2^{-h^{(n)}(|\sigma_i|)}\leq(h^{(n)}(|\sigma_i|))^{\log n}2^{-h^{(n)}(|\sigma_i|)}.$$
Finally, since $\{\sigma_i\}_{i\in \mathbb{N}}$ is a level-$n$ test, up to finitely many terms we have
\[
\sum_{i\in \mathbb{N}} (h^{(n-1)}(|\sigma_i|))^{\log (n-1)}2^{-h^{(n-1)}(|\sigma_i|)}\leq\sum_{i\in \mathbb{N}} (h^{(n)}(|\sigma_i|))^{\log n}2^{-h^{(n)}(|\sigma_i|)}<\infty.
\]
So $\{\sigma_i\}_{i\in \mathbb{N}}$ is also a level-$(n-1)$ test.
\end{proof}
The previous results justify thinking of level tests as a hierarchy of
non-randomness for continuous measures.
In particular, we have
\begin{center}
X is non-$\mu$ random of level $\omega$\\[1ex]
$\big\Downarrow$ \\[1ex]
X is non-$\mu$ random of level $n+1$\\[1ex]
$\big\Downarrow$ \\[1ex]
X is non-$\mu$ random of level $n$\\[1ex]
$\big\Downarrow$ \\[1ex]
X is not $\mu$-random.
\end{center}
\bigskip
It is not too hard to construct a measure for which this hierarchy is
proper (see~\cite{li:thesis}), while for other measures (such as Lebesgue
measure on $2^\N$) it collapses.
One can define a similar hierarchy for NCR instead of for individual measures,
saying that a real $X\in 2^\omega$ is \emph{NCR of level $n$ ($\omega$)} if and only
if $X$ is non-$\mu$ random of level $n$ ($\omega$) for every continuous probability
measure $\mu$. Interestingly, this hierarchy for NCR overall collapses, mostly
due to the correspondence between continuous measures and Hausdorff measures
established by \emph{Frostman's Lemma} (see \cite{reimann2008effectively}).
This is shown in a different paper by the authors~\cite{li-reimann:inprep},
in which the generalized Solovay tests introduced in Definition~\ref{Def:level} play a central role.
\section{Constructing non-random r.e.a.\ reals}~
\label{sec:3}
The goal of this section is to construct level-$n$ non-random reals that are recursively enumerable above (\emph{r.e.a.}) a given level-$2n$ non-random real $A$. In fact, we can construct such a real in any Turing degree r.e.a.\ $A$.
To this end, we first introduce a general construction technique which builds a real $C$ r.e.a.\ a given real $A$.
The basic idea is to insert a large number of ``1''s between the bits of $B$, where the number of ``1''s is still computable from $B$.
\begin{Construction}
\label{Cons:1}
Assume for a given $A$ and a real $B$ r.e.\ above $A$, we have $W^A_e=B$ for some $e$. Without loss of generality, we may assume that the first bit of $B$ is ``1''
and that it takes $\Phi^A_e$ only one step to halt on input ``0'', with no use of the oracle. We also assume that $B$ is infinite.
Denote the $i$-th bit of $A$ by $a_i$ and the $i$-th bit of $B$ by $b_i$. By our assumption, $b_0=1$.
Let
\[
m_i = \min\{j>i \colon \Phi^A_{e}(j)\downarrow\},
\]
that is, $m_i$ is the least element of $B$ which is greater than $i$. Define the function $f: \mathbb{N}\rightarrow\mathbb{N}$ as
\begin{equation*}
f(i) = \begin{cases}
\min\{s \mid \forall j\leq m_i\,(\Phi^A_{e}(j)\downarrow\implies \Phi^A_{e,s}(j)\downarrow)\} & \text{ if } i \in B, \\
1 &\text{ if } i\notin B.
\end{cases}
\end{equation*}
When $i\in B$, $f(i)$ is the minimum number such that for all $j\leq m_i$ and
$j\in B$, $\Phi^A_e(j)$ halts within $f(i)$ many steps. Since $A \leq_T B$, $f$ is
$B$-computable. Define a sequence of finite binary strings $C_i$ as follows:
$$C_i =b^{f(0)}_0 \ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(1)}_1\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(2)}_2\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} \ldots \ensuremath{\,\mbox{}^\frown\,} b^{f(i)}_i.$$
Let $C=\lim_i C_i$. Since $b_i$ and $f(i)$ are $B$-computable, so is $C$. On
the other hand, the first $i$ bits of $B$ are coded in $C_i$: Each block of ones
corresponds to exactly one element in $B$ less than $i$. Therefore, $C\equiv_T B$.
\end{Construction}
We illustrate Construction \ref{Cons:1} with an example. Let $A$ be a real and
$B=W^A_e$ as in Construction \ref{Cons:1} and let $s_A(n)$ be the settling time of
$\Phi^A_e(n)$. Assume the first few values of $B$ and $s_A$ are as given in the
following table.
\bigskip
\begin{center}
\begin{tabular}{ c || c | c | c | c | c | c }
$n$ & $0$ & $1$ & $2$ & $3$ & $4$ &\mbox{ } \dots \mbox{ } \\
\hline
$\Phi^A_e$ & $\Phi^A_e(0)\downarrow$ & $\Phi^A_e(1)\downarrow$ & $\Phi^A_e(2)\uparrow$ & $\Phi^A_e(3)\downarrow$ & $\Phi^A_e(4)\downarrow$ & \mbox{ } \dots \mbox{ } \\
$s_A$ & 1 & 37 & $\infty$ & 134 & 28 & \mbox{ } \dots \mbox{ } \\
\end{tabular}
\end{center}
\bigskip
Following Construction \ref{Cons:1}, we obtain the first few bits of $C$ as follows.
\bigskip
\begin{center}
\begin{tabular}{ c || c | c | c | c | c | c }
$n$ & $0$ & $1$ & $2$ & $3$ & $4$ & \mbox{ } \dots \mbox{ }\\
\hline
B& 1$^\frown$ & 1$^\frown$ & 0$^\frown$ & 1$^\frown$ & 1$^\frown$& \mbox{ } \dots \mbox{ }\\
f & 37 & 134 & 1 & 134 & \mbox{ } \dots \mbox{ }& \mbox{ } \dots \mbox{ } \\
&$\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$ & $\Downarrow$\\
C & $\underbrace{1...1}_\text{37}0^\frown$ & $\underbrace{1...1}_\text{134}0^\frown$ & $00^\frown$ & $\underbrace{1...1}_\text{134}0^\frown$ & $1 \dots$ & \mbox{ } \dots \mbox{ } \\
\end{tabular}
\end{center}
\bigskip
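The worked example can be reproduced mechanically. The following Python sketch (an illustration of ours, with the settling times taken from the table above) assembles the corresponding prefix of $C$; since $B$ is infinite in the construction but our table is finite, the demo stops once the next element $m_i$ lies outside the table.

```python
# Settling times of Phi^A_e(j) from the example table; None marks divergence.
settling = {0: 1, 1: 37, 2: None, 3: 134, 4: 28}

def construct_C_prefix(settling):
    # Assemble the prefix b_0^f(0) 0 b_1^f(1) 0 b_2^f(2) 0 ... of C that is
    # determined by the finite settling-time table.
    idx = sorted(settling)
    B = [i for i in idx if settling[i] is not None]
    blocks = []
    for i in idx:
        later = [j for j in B if j > i]
        if not later:
            break  # f(i) needs the next element of B beyond the table
        if settling[i] is not None:
            m_i = later[0]
            f_i = max(settling[j] for j in B if j <= m_i)
            blocks.append("1" * f_i)
        else:
            blocks.append("0")  # when i is not in B, f(i) = 1
    return "0".join(blocks) + "0"
```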
We now show that non-randomness properties of $A$ carry over to $C$. Intuitively,
if we know that $\sigma$ is an initial segment of $A$, we can use it to
``approximate'' an initial segment of $B$ by waiting for the computations
$\Phi^\sigma_e(j)$ to converge, for as long as the use does not exceed $|\sigma|$.
But we cannot effectively obtain any initial segment of $B$ in this way, as we
have no upper bound on the settling time of $\Phi^\sigma_e$; therefore we cannot
find an effective cover of $B$ by using this approximation.
We address this problem in the construction of $C$ by adding long series of ones, thereby decreasing the cost in measure of adding an incorrect string to a test. Consider the case when we use a long enough initial segment of
$A$ to approximate the first $n$ bits of $B$ for $s$ steps, but the
approximation $\tau$ we got for $B$ turns out to be wrong. Let $m$ be the index of the first incorrect bit. Then the settling time of $\Phi^\sigma_e(m)$ must be greater than $s$. By
Construction \ref{Cons:1}, an initial segment of $C$ is of the form
$$b^{f(0)}_0 \ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(1)}_1\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(2)}_2\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,}
\ldots \ensuremath{\,\mbox{}^\frown\,} \underbrace{111\ldots 1}_{\text{more than $s$}}.$$
By picking a large $s$,
the total measure of all possible strings of the above form is small. Eventually,
we can effectively find a cover of $C$ from any initial segment of $A$.
\begin{Thm}
\label{Thm:2n-n}
For any continuous measure $\mu$, if $A$ is non-$\mu$ random of level $2n$, $B$ is r.e.a.\ $A$, and $C$ is obtained from $B$ via Construction~\ref{Cons:1}, then $C$ is non-$\mu$ random of level $n$.
\end{Thm}
\begin{proof}
We define an auxiliary function $t$ from $2^{<\omega}\times \mathbb{N}$ to finite subsets of $2^{<\omega}$:
\begin{equation*}
t(\sigma,n) := \begin{cases}
\{\sigma\} &\text{if $|\sigma|<n$;}\\
\{\sigma\upharpoonright_n\}\cup \bigcup_{i=0}^{n} \{\sigma\!\upharpoonright_i \!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma|-i}\} &\text{if $|\sigma|\geq n$.}
\end{cases}
\end{equation*}
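A direct Python transcription of $t$ (our illustration) makes it apparent that $t(\sigma,n)$ contains at most $n+2$ strings:

```python
def t(sigma, n):
    # t(sigma, n) = {sigma} if |sigma| < n; otherwise the restriction of sigma
    # to length n together with every string sigma|i followed by (|sigma| - i)
    # ones, for i = 0, ..., n.  At most n + 2 strings in total.
    if len(sigma) < n:
        return {sigma}
    result = {sigma[:n]}
    for i in range(n + 1):
        result.add(sigma[:i] + "1" * (len(sigma) - i))
    return result
```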
\begin{Lemma}
\label{Lemma:2n-n}
If $\{\sigma_i\}_{i\in \mathbb{N}}$ is a level-$2n$ Solovay test for $\mu$, then $$\bigcup_{i\in \mathbb{N}} t(\sigma_i,\hat{h}^{(n)}(|\sigma_i|))$$ is a level-$n$ Solovay test for $\mu$.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{Lemma:2n-n}]
By Fact \ref{Fact:gh} (4) and Lemma~\ref{Lemma:g*}, we have $$\lim_{l\rightarrow\infty} \hat{h}(l)= \infty.$$ Hence, for fixed $n$ it holds that for all but finitely many $i$, $$\hat{h}^{(2n)}(|\sigma_i|)> 2n+\log (2n)/\ln 2.$$
Fact \ref{Fact:gh} and Lemma~\ref{Lemma:g*} also imply that
$$\hat{h}^{(n)}(|\sigma_i|)\leq |\sigma_i|.$$
Therefore, for all $i$,
$$t(\sigma_i,\hat{h}^{(n)}(|\sigma_i|))=\{\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}\}\cup \bigcup_{j=0}^{\hat{h}^{(n)}(|\sigma_i|)} \{\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}\}.$$
The contribution of $\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}$ to a level-$n$ test is
\begin{equation*}
\begin{split}
& (h^{(n)}(|\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}|))^{\log n}2^{-h^{(n)}(|\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}|)}\\
& \quad = (h^{(n)}({\hat{h}^{(n)}(|\sigma_i|)}))^{\log n}2^{-(h^{(n)}({\hat{h}^{(n)}(|\sigma_i|)}))}.
\end{split}
\end{equation*}
By Lemma \ref{Le:mono}, for all but finitely many $i$,
\begin{equation}
\begin{split}
& (h^{(n)}({\hat{h}^{(n)}(|\sigma_i|)}))^{\log n}2^{-(h^{(n)}({\hat{h}^{(n)}(|\sigma_i|)}))} \\
& \quad \leq \; (h^{(2n)}(|\sigma_i|))^{\log n}2^{-h^{(2n)}(|\sigma_i|)}\\
& \quad \leq \; (h^{(2n)}(|\sigma_i|))^{\log 2n}2^{-h^{(2n)}(|\sigma_i|)}.
\end{split}
\tag{*} \label{equ1}
\end{equation}
Moreover, the contribution of $\bigcup_{j=0}^{\hat{h}^{(n)}(|\sigma_i|)} \{\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}\}$ to a level-$n$ test is
\begin{equation*}
\begin{split}
& \sum_{j=0}^{\hat{h}^{(n)}(|\sigma_i|)}(h^{(n)}(|\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}|))^{\log n}2^{-h^{(n)}(|\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}|)}\\
& \quad =\;(\hat{h}^{(n)}(|\sigma_i|)+1)((h^{(n)}(|\sigma_i|))^{\log n}2^{-h^{(n)}(|\sigma_i|)})\\
\end{split}
\end{equation*}
By Corollary~\ref{Cor:h-0}, for all but finitely many $i$, we have $$(\hat{h}^{(n)}(|\sigma_i|)+1)<2\cdot h^{(n)}(|\sigma_i|).$$
Therefore
\begin{equation*}
\begin{split}
&(\hat{h}^{(n)}(|\sigma_i|)+1)((h^{(n)}(|\sigma_i|))^{\log n}2^{-h^{(n)}(|\sigma_i|)})\\
& \quad \leq \; 2\cdot h^{(n)}(|\sigma_i|)((h^{(n)}(|\sigma_i|))^{\log n}2^{-h^{(n)}(|\sigma_i|)})\\
& \quad = \;2\cdot (h^{(n)}(|\sigma_i|))^{\log 2n}2^{-h^{(n)}(|\sigma_i|)}.
\end{split}
\end{equation*}
By Fact \ref{Fact:gh}, $h^{(n)}(|\sigma_i|)\geq h^{(2n)}(|\sigma_i|)$ and $\lim_i h(i)=\infty$. Together with Lemma \ref{Le:mono}, for all but finitely many $\sigma_i$, we have the following upper bound.
\begin{equation}
\begin{split}
& 2\cdot (h^{(n)}(|\sigma_i|))^{\log 2n}2^{-h^{(n)}(|\sigma_i|)}\\
& \quad \leq \; 2\cdot (h^{(2n)}(|\sigma_i|))^{\log 2n}2^{-h^{(2n)}(|\sigma_i|)}.
\end{split}
\tag{**} \label{equ2}
\end{equation}
Together, equations \eqref{equ1} and \eqref{equ2} yield the following upper
bound for the contribution of $t(\sigma_i,\hat{h}^{(n)}(|\sigma_i|))$ to a
level-$n$ test:
\begin{equation*}
\begin{split}
&(h^{(n)}(|\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}|))^{\log n}2^{-h^{(n)}(|\sigma_i\upharpoonright_{\hat{h}^{(n)}(|\sigma_i|)}|)} \\
& \qquad \qquad + \; \sum_{j=0}^{\hat{h}^{(n)}(|\sigma_i|)}(h^{(n)}(|\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}|))^{\log n}2^{-h^{(n)}(|\sigma_i\upharpoonright_j\!\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-j}|)}
\\
& \quad \leq \; (h^{(2n)}(|\sigma_i|))^{\log 2n}2^{-h^{(2n)}(|\sigma_i|)} + 2\cdot (h^{(2n)}(|\sigma_i|))^{\log 2n}2^{-h^{(2n)}(|\sigma_i|)}\\
& \quad \leq \; 3\cdot(h^{(2n)}(|\sigma_i|))^{\log 2n}2^{-h^{(2n)}(|\sigma_i|)}.
\end{split}
\end{equation*}
Hence if $\{\sigma_i\}_{i\in \mathbb{N}}$ is a level-$2n$ test, $\bigcup_{i\in \mathbb{N}} t(\sigma_i,\hat{h}^{(n)}(|\sigma_i|))$ is a level-$n$ test.
\end{proof}
\medskip
We continue the proof of Theorem~\ref{Thm:2n-n}.
Assume $\{\sigma_i\}_{i\in \mathbb{N}}$ is a level-$2n$ test that $A$ fails. For each $i$, consider the set $W^{\sigma_i}_{e,|\sigma_i|}$.
Write the characteristic sequence of $W^{\sigma_i}_{e,|\sigma_i|}$ as $b_{i,0}\, b_{i,1}\, b_{i,2}\, ...\, b_{i,|\sigma_i|}$, and put $b_{i,|\sigma_i|+1}=1$ for convenience. For $k\leq |\sigma_i|$, define $m_{i,k}:=\min\{j>k| b_{i,j}=1\}$, and define the function $f_i:\{0,1,2,\dots,|\sigma_i|\}\rightarrow\mathbb{N}$ as
\begin{equation*}
f_i(k) = \begin{cases}
1 &\text{if $b_{i,k}=0$;}\\
\min\{l\,|\,\forall j\leq m_{i,k}(b_{i,j}=1\implies W^{\sigma_i}_{e,l}(j)=1)\} &\text{if $(b_{i,k}=1) \land (m_{i,k}\neq |\sigma_i|+1)$;}\\
|\sigma_i| &\text{if $(b_{i,k}=1) \land (m_{i,k}=|\sigma_i|+1)$.}
\end{cases}
\end{equation*}
Lastly, define $$\tau_i =b_{i,0}^{f_i(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{i,1}^{f_i(1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{i,2}^{f_i(2)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} \ldots \ensuremath{\,\mbox{}^\frown\,} b_{i,|\sigma_i|}^{f_i(|\sigma_i|)}\upharpoonright_{|\sigma_i|}.$$
Since $|\tau_i|=|\sigma_i|$, $\{\tau_i\}_{i\in \mathbb{N}}$ is also a level-$2n$ test. By Lemma~\ref{Lemma:2n-n}, $\bigcup_{i\in \mathbb{N}} t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$ is a level-$n$ test.
\medskip
\textbf{Claim:} $C$ fails the test $\bigcup_{i\in \mathbb{N}} t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$.
\medskip
We will show that if $\sigma_i\sqsubset A$, $t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$ contains an initial segment of $C$.
By the assumption on $B$ in Construction~\ref{Cons:1}, we have $b_{i,0}=1$ for all $i$. Since we assume $\sigma_i\sqsubset A$, it follows that for any $a\leq |\sigma_i|$, $b_{i,a}=1$ implies $b_a=1$.
If $\tau_i\upharpoonright_{\hat{h}^{(n)}(|\tau_i|)}$ is an initial segment of $C$, then the claim is immediate, since $\tau_i\upharpoonright_{\hat{h}^{(n)}(|\tau_i|)}\in t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$ by the definition of $t$.
So let us assume $\tau_i\upharpoonright_{\hat{h}^{(n)}(|\tau_i|)}$ is not an initial segment of $C$. Define
$$k_i=\max\{l|\forall j< l(b_{i,j}=b_j)\land (b_{i,l}=1)\}.$$
Thus, $k_i$ is the maximal index such that $b_{i,k_i}=1$ and
$$b_0b_1b_2...b_{k_i-1}=b_{i,0}b_{i,1}b_{i,2}...b_{i,k_i-1}.$$
Then for any $k<k_i$, by the definition of $f_i$, we have $f_i(k)=f(k)$. As we assumed $\tau_i\upharpoonright_{\hat{h}^{(n)}(|\tau_i|)}$ is not an initial segment of $C$, by comparing lengths, we know that
$$k_i<\hat{h}^{(n)}(|\tau_i|).$$
Let $j$ be the minimum number such that $b_j\neq b_{i,j}$, thus $b_j=1$, $b_{i,j}=0$ and $k_i<j<\hat{h}^{(n)}(|\tau_i|)$. We have that
$$\Phi^{A}_{e,|\sigma_i|}(j)=\Phi^{\sigma_i}_{e,|\sigma_i|}(j)=b_{i,j}=0$$ and $$\Phi^{A}_{e,f(k_i)}(j)=b_{j}=1.$$
This means $f(k_i)\geq |\sigma_i|$, so we can find an element of $t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$ which is also an initial segment of $C$ as follows.
\begin{equation*}
\begin{split}
& \tau_i\upharpoonright_{\Sigma_{t=0}^{k_i-1}(f_i(t)+1)}\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-\Sigma_{t=0}^{k_i-1}(f_i(t)+1)}\\
& \quad =b_{i,0}^{f_i(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{i,1}^{f_i(1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} \ldots \ensuremath{\,\mbox{}^\frown\,} b_{i,k_i-1}^{f_i(k_i-1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|-\Sigma_{t=0}^{k_i-1}(f_i(t)+1)}\\
& \quad \sqsubset b_{i,0}^{f_i(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{i,1}^{f_i(1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} \ldots \ensuremath{\,\mbox{}^\frown\,} b_{i,k_i-1}^{f_i(k_i-1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} 1^{|\sigma_i|}\\
& \quad \sqsubset b_{i,0}^{f_i(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{i,1}^{f_i(1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} \ldots \ensuremath{\,\mbox{}^\frown\,} b_{i,k_i-1}^{f_i(k_i-1)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} 1^{f(k_i)}\\
& \quad = b^{f(0)}_0\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(1)}_1\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b^{f(2)}_2\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,}\ldots \ensuremath{\,\mbox{}^\frown\,} b^{f(k_i-1)}_{k_i-1}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} b_{k_i}^{f(k_i)}\\
&\quad \sqsubset C.
\end{split}
\end{equation*}
It follows that $C$ is covered by the level-$n$ test $\bigcup_{i\in \mathbb{N}} t(\tau_i,\hat{h}^{(n)}(|\tau_i|))$ and therefore non-$\mu$-random of level $n$. This completes the proof of Theorem~\ref{Thm:2n-n}.
\end{proof}
\section{Constructing non-random reals using a self-modulus}~
\label{sec:4}
We begin this section by reviewing the concepts of modulus and self-modulus.
\begin{Def}
For any function $f,g:\mathbb{N}\rightarrow\mathbb{N}$, we say $f$ \textit{dominates} $g$ if $f(n)>g(n)$ for all but finitely many $n\in \mathbb{N}$. For any real $A$, we say a function $f$ is a \textit{modulus (of computation)} for $A$ if every function dominating $f$ can compute $A$. We say A has a \textit{self-modulus} if there is a modulus $f_A$ of $A$ such that $f_A\equiv_T A$.
\end{Def}
Arguably the best-known class of reals with a self-modulus is $\Delta^0_2$, see,
for example,~\cite[Theorem~5.6.6]{soare2016turing}.
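To illustrate the notion, let $A$ be a $\Delta^0_2$ real with a computable approximation $\{A_s\}_{s\in\mathbb{N}}$ such that $A=\lim_s A_s$, and consider the least modulus of convergence
$$f(n):=\min\{s\mid \forall t\geq s\ (A_t\upharpoonright_n=A\upharpoonright_n)\}.$$
If $g$ dominates $f$, then $A\upharpoonright_n=A_{g(n)}\upharpoonright_n$ for all but finitely many $n$, so $g$ computes $A$; hence $f$ is a modulus of $A$. Arranging, in addition, that the modulus be Turing equivalent to $A$ requires a more careful argument; see~\cite[Theorem~5.6.6]{soare2016turing}.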
\medskip
Our second construction method will take real $A$ with a self-modulus $f_A$ and define another real $B\equiv_T A$.
\begin{Construction}~
\label{Cons:2}
Assume $A=a_0 \, a_1 \, a_2 \, a_3 \dots$ and $f_A\equiv_T A$ is a self-modulus of $A$. Without loss of generality, we can assume $f_A(n)$ is increasing.
We define our first string $B_0$ as
$$B_0 =1^{f_A(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_0,$$
and inductively put
$$B_{n+1} =B_n\ensuremath{\,\mbox{}^\frown\,} 1^{f_A(|B_n|)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_{n+1}.$$
Let
$$B =\lim_{i\rightarrow \infty} B_i.$$
In the following, $l_n$ will denote the length of $B_n$.
\bigskip
As each $a_i$ is coded into $B_i$ immediately following a block of the form $1^{f_A(|B_i|)}\ensuremath{\,\mbox{}^\frown\,} 0$, it follows that $A\leq_T B$. Since the $B_i$ are uniformly computable in $A$, $B\leq_T A$. Therefore, $B\equiv_T A$.
\end{Construction}
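As a purely illustrative example, suppose $f_A(0)=2$ and $f_A(4)=6$ (hypothetical values). Then
$$B_0=1^2\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_0, \quad l_0=4, \qquad B_1=B_0\ensuremath{\,\mbox{}^\frown\,} 1^{6}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_1, \quad l_1=12.$$
In general, the lengths obey the recursion
$$l_{n+1}=l_n+f_A(l_n)+2,$$
which will be used in the proof of Lemma~\ref{Le:moduleDominite}.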
We have the following property of Construction \ref{Cons:2}.
\begin{Thm}
\label{Thm:modulelevelinfty}
If $A$ has a self-modulus $f_A$ and $B$ is defined from $A$ and $f_A$ as in
Construction~\ref{Cons:2}, then $B$ is non-$\mu$ random of level $\omega$ for
any continuous $\mu$.
\end{Thm}
\begin{proof}
Let $\mu$ be a continuous measure. If there is a $\mu$-computable function
dominating $f_A$, then $\mu$ can compute $B$ as well as $A$, so $B$ is not
$\mu$-random of level $\omega$. Therefore, let us assume there is no
$\mu$-computable function dominating $f_A$. As before, we write $g$ and $h$ to
denote the granularity and dissipation function $g_\mu$ and $h_\mu$,
respectively.
\begin{Lemma}
\label{Le:moduleDominite}
If there is no $\mu$-computable function dominating $f_A$, then for any $k\in
\mathbb{N}$, there are infinitely many $n$ such that $\hat{g}^{(k)}(2l_n+1)<f_A(l_n)$,
where $l_n$ is the length of $B_n$ as defined in Construction~\ref{Cons:2} and
$\hat{g}$ is as defined in Lemma \ref{Lemma:g*}.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{Le:moduleDominite}]
Suppose for a contradiction that there is an $n_0$ such that for any $m\geq n_0$, it holds that
$$\hat{g}^{(k)}(2l_m+1)\geq f_A(l_m).$$
Define a function $G$ as follows. Put
$G(0) ={\hat{g}}^{(k)}(2l_{n_0}+1)$ and inductively define
$G(i+1) =G(i)+{\hat{g}}^{(k)}(2G(i)+1)+2$. Since $\hat{g}$
is computable in $\mu$, $G\leq_T \mu$.
We claim that $G(i) \geq l_i$ for $i\geq n_0$.
For $i=n_0$, since $G$ is increasing, $$G(n_0)\geq G(0)=\hat{g}^{(k)}(2l_{n_0}+1)\geq l_{n_0}.$$
For the induction step, assume $G(i)\geq l_i$ for some $i\geq n_0$. Then
\[
\begin{split}
G(i+1) & =G(i)+\hat{g}^{(k)}(2G(i)+1)+2 \\
& \geq l_i+\hat{g}^{(k)}(2l_i+1)+2\geq l_i+f_A(l_i)+2=l_{i+1}.
\end{split}
\]
So $G(i)\geq l_i$ for all $i\geq n_0$.
Moreover, by the definition of $B_i$, $l_i > f_A(i)$ for all $i$.
Combining the previous two facts, we see that the $\mu$-computable function $G$ satisfies $G(i)\geq l_i> f_A(i)$ for all $i\geq n_0$, so $G$ dominates $f_A$, a contradiction. Hence there are infinitely many $n$ such that $${\hat{g}}^{(k)}(2l_n+1)<f_A(l_n).$$
\end{proof}
To complete the proof of Theorem~\ref{Thm:modulelevelinfty}, for any $k\in\mathbb{N}$, we define the following set of strings:
$$T_k =\{\sigma\ensuremath{\,\mbox{}^\frown\,} 1^{\hat{g}^{(k)}(2|\sigma|)}|\sigma\in 2^{<\omega}\}.$$
Then
\begin{equation*}
\begin{split}
& \sum_{\tau\in T_k} (h^{(k)}(|\tau|))^{\log k}2^{-h^{(k)}(|\tau|)} \\
& \quad =\sum_{i=0}^\infty 2^i(h^{(k)}(i+\hat{g}^{(k)}(2i)))^{\log k}2^{-h^{(k)}(i+\hat{g}^{(k)}(2i))} \\
& \quad =\sum_{i> \log k} 2^i(h^{(k)}(i+\hat{g}^{(k)}(2i)))^{\log k}2^{-h^{(k)}(i+\hat{g}^{(k)}(2i))} + \gamma_k,
\end{split}
\end{equation*}
where
\[
\gamma_k = \sum_{i \leq \log k} 2^i(h^{(k)}(i+\hat{g}^{(k)}(2i)))^{\log k}2^{-h^{(k)}(i+\hat{g}^{(k)}(2i))} < \infty.
\]
Moreover, by Fact \ref{Fact:gh} and Lemma \ref{Lemma:g*},
$$h^{(k)}(i+\hat{g}^{(k)}(2i)) \geq h^{(k)}(\hat{g}^{(k)}(2i))\geq h^{(k)}(g^{(k)}(2i))\geq 2i.$$
By Lemma \ref{Le:mono}, we have
\begin{align*}
\begin{split}
&\sum_{i> \log k} 2^i(h^{(k)}(i+\hat{g}^{(k)}(2i)))^{\log k}2^{-h^{(k)}(i+\hat{g}^{(k)}(2i))} + \gamma_k \\
& \quad \leq \sum_{i> \log k} 2^i(2i)^{\log k}2^{-2i}+ \gamma_k\\
& \quad = \sum_{i> \log k} (2i)^{\log k} 2^{-i}+ \gamma_k <\infty.\\
\end{split}
\end{align*}
Thus, $T_k$ is a level-$k$ test. Finally, when $\hat{g}^{(k)}(2l_n+1)<f_A(l_n)$, we have
$$B_n\ensuremath{\,\mbox{}^\frown\,} 1^{\hat{g}^{(k)}(2l_n)}\sqsubset B_n\ensuremath{\,\mbox{}^\frown\,} 1^{f_A(l_n)}\sqsubset B.$$
By the definition of $T_k$, any string of the form $B_n\ensuremath{\,\mbox{}^\frown\,}
1^{\hat{g}^{(k)}(2l_n)}$ is in $T_k$. By Lemma~\ref{Le:moduleDominite}, for any
$k$, $\hat{g}^{(k)}(2l_n+1)<f_A(l_n)$ is true for infinitely many $n$.
Therefore, $B$ fails $T_k$. Since $k$ was arbitrary, $B$ is non-$\mu$-random of level $\omega$.
\end{proof}
\section{Turing degrees of NCR Reals}~
\label{sec:5}
Using the constructions presented in the previous two sections, we exhibit a large class of Turing degrees that contain NCR elements, as formulated in the Introduction.
\begin{Def}
A real is \textit{$1$-REA} if it is recursively enumerable. A real is \textit{$(n+1)$-REA} if it is r.e.\ in and above some $n$-REA real. A Turing degree is \textit{$n$-REA} if it contains an $n$-REA real.
\end{Def}
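For example, the iterates of the Turing jump witness this hierarchy: $0'$ is $1$-REA, and since $0^{(n+1)}$ is r.e.\ in and Turing above $0^{(n)}$, each $0^{(n+1)}$ is $(n+1)$-REA.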
\mainone*
\begin{proof}
By Proposition~\ref{prop:computable_omega} and Theorem~\ref{Thm:2n-n}, every $1$-REA degree contains an NCR real. Part (a) now follows inductively using Theorem~\ref{Thm:2n-n}. Part (b) follows from Theorem~\ref{Thm:modulelevelinfty}.
\end{proof}
The result actually holds in a slightly stronger form in that both kinds of
degrees contain NCR reals \emph{of level $\omega$}, that is, reals that are
non-$\mu$-random of level $\omega$ for every continuous measure $\mu$ (see
~\cite{li:thesis}). However, for our main applications the form stated here is
quite sufficient.
\medskip
Since every $\Delta^0_2$ degree has a self-modulus, we obtain
\maintwo*
Furthermore, if a real $B$ has a self-modulus, then by using the relativized version of Shoenfield's Limit Lemma, we can prove that the above result also holds for any $\Delta^0_2(B)$ degree above $B$, so we have the following.
\begin{Corollary} \label{cor:delta02-NCR}
If a real $B$ has a self-modulus, then every $\Delta^0_2(B)$ degree above $B$ contains an NCR element.
\end{Corollary}
We can also apply our techniques to prove the existence of weakly $1$-generic reals in NCR.
\begin{Thm}\label{thm:ncr_1generic}
Every self-modulus degree above $0'$ contains a weakly $1$-generic NCR real.
\end{Thm}
\begin{proof}
Assume $A=a_0 \, a_1 \, a_2 \, a_3 \dots$ and $f_A\equiv_T A$ is a self-modulus of $A$. Without loss of generality, we can assume $f_A(n)$ is increasing. Let $W_n$ be the $n$-th $\Sigma^0_1$ set of binary strings.
We define our first string $B_0$ as
$$B_0 =1^{f_A(0)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_0,$$
and define $\sigma_i$ and $B_{i+1}$ inductively by
\begin{equation*}
\sigma_i := \begin{cases}
\text{the smallest such } \tau &\text{if $\exists \tau \in W_i(B_i\ensuremath{\,\mbox{}^\frown\,} 1 \sqsubset \tau)$;}\\
B_i\ensuremath{\,\mbox{}^\frown\,} 0 &\text{otherwise.}
\end{cases}
\end{equation*}
$$B_{i+1}:=\sigma_i\ensuremath{\,\mbox{}^\frown\,} 1^{f_A(|\sigma_i|)}\ensuremath{\,\mbox{}^\frown\,} 0\ensuremath{\,\mbox{}^\frown\,} a_{i+1}.$$
Finally define $B$ as
$$B:=\lim_{i\rightarrow \infty} B_i.$$
Since $A\geq_T 0'$, $A$ computes each $\sigma_i$ and hence computes $B$. Conversely, $B$ effectively recovers each $B_i$, so $B$ also computes $A$; thus $A\equiv_T B$.
Moreover, the proof of Theorem \ref{Thm:modulelevelinfty} can also be applied to the real $B$ constructed here, so $B$ is NCR.
Lastly, we show that $B$ is weakly $1$-generic. If $W_i$ is a dense $\Sigma^0_1$ set, then $\sigma_i\in W_i$ and $\sigma_i$ is an initial segment of $B$, so $B$ is weakly $1$-generic.
\end{proof}
Using similar ideas, one can construct 1-generic NCR reals. It is also possible, albeit more complicated, to construct an NCR real of minimal Turing degree. These constructions are given in~\cite{li:thesis}.
\section{Further applications and open questions}
\label{sec:6}
We can apply the techniques introduced in this paper to address a question
asked by Adam Day and Andrew Marks (private communication).
\begin{Def}
Two reals $X_1,X_2 \in 2^\N$ are \emph{simultaneously continuously random}
if there exists a real $Z$ and a measure $\mu$ such that $Z$ computes
$\mu$ and both $X_1$ and $X_2$ are $\mu$-random relative to $Z$. If such $Z$ and
$\mu$ do not exist, $X_1,X_2$ are called \emph{never simultaneously
continuously random} (NSCR).
\end{Def}
Day and Marks conjectured that $X_1$ and $X_2$ are NSCR if and only if at least one
of them is in NCR. We refute this conjecture by constructing two reals $X_1$ and $X_2$ that are each random with respect to some continuous measure,
but such that for every continuous measure $\mu$ for which $X_2$ is random, any representation of $\mu$ computes $X_1$.
Let $f(n)$ be a self-modulus of $0'$ and let $X_1$ be a $\lambda$-random $\Delta^0_2$ real, where $\lambda$ is the Lebesgue measure. It suffices to
find a real $X_2$ which is random for some continuous $\mu$ and such that every representation of a continuous measure
$\nu$ for which $X_2$ is random computes a function dominating $f$.
We define
$$S_0:=\{1^{f(0)}\ensuremath{\,\mbox{}^\frown\,} 0 \ensuremath{\,\mbox{}^\frown\,} x \colon x \in \{0,1\} \}.$$
and
$$S_{n+1}:=\{\sigma \ensuremath{\,\mbox{}^\frown\,} 1^{f(|\sigma|)}\ensuremath{\,\mbox{}^\frown\,} 0 \ensuremath{\,\mbox{}^\frown\,} x \colon \sigma \in S_n, x \in \{0,1\} \}.$$
Finally define
$$S:=\{Y \in 2^\N \colon \forall n \exists \sigma_n\in S_n(\sigma_n\sqsubset Y)\}.$$
Suppose $\mu$ is a continuous measure with a representation $R_\mu$ that does
not compute any function dominating $f$. An argument similar to the proof of
Theorem~\ref{Thm:modulelevelinfty} yields that the set $T_k$ defined there is a
level-$k$ test. Moreover, by the definition of $S$, every real in $S$ is
covered by $T_k$. Therefore, any element in $S$ can only be random for a
measure all of whose representations compute a function dominating $f$. It
follows that any element of $S$ is NSCR with $X_1$.
It remains to show that there is an element of $S$ which is random with respect
to a continuous measure. This follows easily from the fact that NCR is
countable (see~\cite{reimann2015measures}), but we can also give a direct argument
as follows.
It follows from the construction of $S$ that $S$ is a perfect subset of
$2^\N$. By distributing a unit mass uniformly along $S$, we obtain a
continuous measure whose support is $S$. Choosing any real that is
random with respect to this measure, we obtain the following.
\begin{Corollary}
There are non-NCR reals $X_1$ and $X_2$ which are NSCR.
\end{Corollary}
\bigskip
The exact distribution of NCR reals in $\Delta^1_1$ remains unknown. Taking
into account the results of this paper, the following questions seem
particularly interesting.
Following the results of Section~5, we can ask how strong the relation is
between $\Delta^1_1$ degrees containing NCR reals and degrees with a
self-modulus. In particular, does the following hold:
\begin{quote}
\em If $\mathcal{D}$ contains an NCR real, must $\mathcal{D}$ have a
self-modulus?
\end{quote}
If the answer to this question is negative, then we can ask a weaker one:
\begin{quote}
\em If $\mathcal{D}$ contains a real that is NCR of level $\omega$, must
$\mathcal{D}$ have a self-modulus?
\end{quote}
On the other hand, our results only concern the \emph{existence} of \emph{some}
NCR elements in Turing degrees, while \cite{barmpalias2012k} shows that
\emph{all} reals in an incomplete r.e.\ degree are NCR. Thus, we may also ask:
\begin{quote}
\em Are there any Turing degrees, other than those below incomplete r.e.\ degrees, in
which every real is in NCR?
\end{quote}
\bigskip
As NCR is a $\Pi^1_1$ set of reals, it has a $\Pi^1_1$ rank function (see for
example~\cite{Kechris:1995a}). It is an open problem to find a ``natural'' rank
function for NCR which reflects the stratified complexities of elements in NCR
in a more informative way. Such a rank function is arguably needed to shed more
light on the structure of NCR in the Turing degrees.
Theorem~\ref{thm:ncr_1generic} immediately implies that a rank based on the
Cantor-Bendixson derivative will not work -- NCR is a proper superset of the
members of countable $\Pi^0_1$ classes. (This follows also from the
Barmpalias-Greenberg-Montalbán-Slaman result~\cite{barmpalias2012k}, of
course.)
Restricted to $\Delta^0_2$, the picture is a little clearer. We now know that
every
$\Delta^0_2$ Turing degree contains an NCR real (Corollary~\ref{cor:delta02-NCR}),
and every degree below an incomplete r.e.\ degree is completely
NCR~\cite{barmpalias2012k}. Moreover, using the connection between the
granularity function and the settling function, it is possible to show that
$\operatorname{NCR}\cap \Delta^0_2$ is an arithmetic set of
reals~\cite{reimann-slaman:unpub}\footnote{A full proof of this result will
appear in~\cite{li:thesis}.}. Unfortunately, few of the techniques developed so
far (including the ones developed in this paper) seem to extend easily higher
up the arithmetic hierarchy. The question whether, for example,
$\operatorname{NCR}\cap \Delta^0_3$ is arithmetic remains open.
\bibliographystyle{abbrv}
An {\em acyclic $k$-coloring} of a graph $G$ is a proper $k$-coloring of $G$ with no 2-colored cycles. Confirming a conjecture of Gr{\"u}nbaum~\cite{Gruenbaum1973}, Borodin~\cite{Borodin1979} proved that every planar graph has an acyclic 5-coloring. This celebrated result is best possible as there are planar graphs that are not acyclic 4-colorable (e.g.\ the octahedron). Acyclic coloring has been studied extensively for several decades and applied to solve other problems on graph coloring and partitioning. We refer to~\cite{Borodin2013} for a comprehensive survey on this subject.
This paper studies defective acyclic $k$-coloring of planar graphs mainly for $k=3,4$.
In other words, we study $k$-colorings of planar graphs for which the condition of being an acyclic coloring is not completely satisfied; however, we want to limit the violation of the acyclicity rules. We consider two variants of defective acyclic coloring.
\begin{definition}
\label{def-transversal}
Given a graph $G$ and a proper coloring $\varphi$ of $G$, a {\em $2$-colored cycle transversal} ($2$CC transversal) with respect to $\varphi$ is a subset $E'$ of $E(G)$ that intersects all 2-colored cycles. In other words, $G-E'$ contains no 2-colored cycles.
\end{definition}
\begin{definition}
Let $G$ be a graph and $k$ be a positive integer. We define two parameters $m_k(G)$ and $m'_k(G)$ as follows:
\begin{itemize}
\item $m_k(G):=\min_{E' \subseteq E(G)}\{|E'|: \text{$E'$ is a 2CC transversal with respect to a proper $k$-coloring}\}.$
\item $m'_k(G):=\min_{E' \subseteq E(G)}\{|E'|: \text{$G-E'$ has an acyclic $k$-coloring}\}.$
\end{itemize}
\end{definition}
Note that $m_k(G)=m'_k(G) =0$ if and only if $G$ is acyclic $k$-colorable. If $G$ has no proper $k$-coloring, then $m_k(G)$ is not defined. In this case, we let $m_k(G) := \infty$. It follows from the definition that for any graph $G$ and integer $k$, $m_k(G) \ge m'_k(G)$.
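For a small example, consider the $4$-cycle $C_4$. Under its proper $2$-coloring, $C_4$ itself is a $2$-colored cycle, so $m_2(C_4)=m'_2(C_4)=1$: deleting any single edge leaves a path, which has no cycles at all. On the other hand, coloring the vertices $1,2,1,3$ in cyclic order is a proper $3$-coloring in which the only cycle of $C_4$ receives three colors, so $m_3(C_4)=0$.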
We are interested in the case that $G$ is a planar graph and $k=3,4$, as Borodin's theorem asserts that $m_5(G) = 0$. To obtain an upper bound for $m_k(G)$, we need to construct a proper $k$-coloring $\varphi$ of $G$ and find a 2CC transversal $E'$. One immediate difficulty is that, for $k=4$, the existence of a proper $4$-coloring of a planar graph is guaranteed only by the Four Color Theorem, while for $k=3$, it is NP-complete to decide whether a planar graph $G$ is 3-colorable, and hence there is no easy way to construct a proper 3-coloring of $G$. Fortunately, it turns out that tight upper bounds for $m_4(G)$ and $m_3(G)$ for the whole family of planar graphs and the whole family of 3-colorable planar graphs do not depend on a particular proper coloring of $G$.
For any proper coloring $\varphi$ of a graph $G$, define \begin{align*}
m(G, \varphi) := \min_{E' \subseteq E(G)}\{|E'|:\text{$E'$ is a 2CC transversal with respect to $\varphi$} \}.
\end{align*}
We prove in Section~\ref{sec:subgraph} that for any planar graph $G$ on $n$ vertices and any proper coloring $\varphi$ of $G$, $m(G, \varphi) \le n - |\varphi(V(G))|$, where $|\varphi(V(G))|$ denotes the number of colors used in $\varphi$. To this end, we study the case when $G$ is a plane triangulation in Section~\ref{sec:m}. Moreover, we show that if $n \ge 5$, then there is a 4-coloring $\varphi$ of $G$ with $m(G, \varphi) \le n - 5$. We apply these results to prove that for every planar graph $G$, $m_4(G) \le n-5$ provided that $n\ge 5$, and $m_3(G) \le n-3$ provided that $G$ is 3-colorable. These two bounds are tight as there are infinitely many 3-colorable planar graphs $G$ with $m_3(G)=n-3$ and infinitely many planar graphs $G$ with $m_4(G)=n-5$.
In addition, we show in Section~\ref{sec:subgraph} that for any proper coloring $\varphi$ of a planar graph $G$, we can find a 2CC transversal $E'$ with $|E'| = m(G, \varphi)$ that induces a forest.
In Section~\ref{sec:m_k} we study the parameter $m'_k(G)$. We show that $m'_3(G) \le (13n - 42) / 10$ and $m'_4(G) \le (3n - 12) / 5$.
We shall mention an application of our results on acyclic colorings of subdivisions. For a graph $G$ and a positive integer $k$, define $m''_k(G)$ to be the minimum size of an edge set $E' \subseteq E(G)$ such that the graph obtained from $G$ by subdividing each edge in $E'$ by one vertex is acyclically $k$-colorable. It is easy to observe that $m_k(G) \ge m''_k(G) \ge m'_k(G)$. It was shown in \cite{MNRW2013} that for any $n$-vertex planar graph $G$, $m''_4(G) \le n - 3$. Our upper bound for $m_4(G)$ immediately improves it to $m''_4(G) \le n - 5$ for $n \ge 5$.
All graphs considered in this paper are finite and simple. We denote by $V(G)$ and $E(G)$ the vertex set and the edge set of $G$, respectively. For $v \in V(G)$, denote by $N_G(v)$ the set of vertices adjacent to $v$ and by $d_G(v)$ the degree of $v$. For a positive integer $k$, denote $[k] := \{1, \dots, k\}$. A \emph{$k$-coloring} $\varphi$ of $G$ is a function which assigns a color $\varphi(v) \in [k]$ to each vertex $v \in V(G)$. We say a coloring $\varphi$ is \emph{proper} if $\varphi(u) \neq \varphi(v)$ for any $uv \in E(G)$. In fact, we always consider proper colorings unless specified otherwise. Given a $k$-coloring $\varphi$ of $G$, we define the color classes by $\varphi^{-1}(i) := \{v \in V(G) : \varphi(v) = i\}$ for any $i \in [k]$.
For any distinct $i, j \in [k]$, define $G_{ij}$ to be the subgraph of $G$ induced by $\varphi^{-1}(i) \cup \varphi^{-1}(j)$.
\section{Upper bounds for $m(G,\varphi)$} \label{sec:m}
In this section we prove upper bounds on the parameter $m(G, \varphi)$ for planar graphs. We first present several lemmas for plane triangulations.
\begin{definition}
Let $G$ be a plane triangulation on at least 4 vertices. Denote by $\mathcal{E}_G$ the set of separating triangles of $G$, and by $\mathcal{V}_G$ the set of maximal connected subgraphs of $G$ without separating triangles. The graph $\mathcal{T}_G$ is defined to be the graph on $\mathcal{V}_G$ with edge set $\mathcal{E}_G$ such that $G_1, G_2 \in \mathcal{V}_G$ are joined by $T \in \mathcal{E}_G$ if and only if both $G_1$ and $G_2$ contain $T$.
\end{definition}
It is easy to see that $\mathcal{V}_G$ is a family of 4-connected plane triangulations and $\mathcal{T}_G$ is a tree. Let $\mathcal{V}_G := \{G_1, \dots, G_t\}$ and $\mathcal{E}_G := \{T_1, \dots, T_{t-1}\}$. The graph $G$ can be retrieved from the vertex-disjoint union of $G_1, \dots, G_t$ by identifying the copies of triangle $T$ in $G_i, G_j$ for each $T = G_i G_j \in \mathcal{E}_G$. Hence $\sum_{i \in [t]} |V(G_i)| = |V(G)| + 3(t - 1)$.
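For instance, let $G$ be the $5$-vertex plane triangulation obtained from $K_4$ by placing a new vertex inside one face and joining it to the three vertices of that face. The triangle bounding that face is a separating triangle, $\mathcal{V}_G$ consists of two copies of $K_4$, and $\mathcal{T}_G$ is a single edge; accordingly, $\sum_{i\in[2]}|V(G_i)|=4+4=5+3(2-1)$, as claimed.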
\begin{lemma} \label{lem:A}
Let $G$ be a graph and $\varphi$ be a proper coloring of $G$. If $A$ is an edge set of $G$ such that $A \cap E(G_{ij})$ is an acyclic edge set for any distinct $i, j \in [k]$, then there exists $E' \subseteq E(G) \setminus A$ satisfying that $|E'| = m(G, \varphi)$ and $\varphi$ is an acyclic coloring of $G - E'$.
\end{lemma}
\begin{proof}
Let $E' \subseteq E(G)$ be such that $|E'| = m(G, \varphi)$, $\varphi$ is an acyclic coloring of $G - E'$ and, subject to this, $|E' \cap A|$ is minimum. Suppose there exists $uv \in E' \cap A$. By the minimality of $|E'|$, the graph $G_{\varphi(u)\varphi(v)} - E'$ is a forest and $uv$ joins two vertices of one of its components (otherwise $E' - uv$ would be a smaller 2CC transversal), so there is precisely one cycle $C$ in $G_{\varphi(u)\varphi(v)} - (E' - uv)$, and $C$ contains $uv$. As $A \cap E(G_{\varphi(u)\varphi(v)})$ is acyclic, there exists $e' \in E(C) \setminus A$. Then $G_{\varphi(u)\varphi(v)} - (E' - uv + e')$ is acyclic, $|E' - uv + e'| = |E'| = m(G, \varphi)$ and $|(E' - uv + e') \cap A| < |E' \cap A|$, contradicting our choice of $E'$. Hence $E' \subseteq E(G) \setminus A$ as desired.
\end{proof}
\begin{lemma} \label{lem:trisep}
Let $G$ be a plane graph, $T$ be a separating triangle of $G$ and $\varphi$ be a proper coloring of $G$. Let $A_1$ and $A_2$ be the components of $G - T$, and for $i \in [2]$, $G^i$ be the subgraph of $G$ induced by $V(A_i) \cup V(T)$. Then $m(G, \varphi) = m(G^1, \varphi^1) + m(G^2, \varphi^2)$, where $\varphi^i$ denotes the restriction of $\varphi$ on $V(G^i)$.
\end{lemma}
\begin{proof}
Without loss of generality, we let $V(T) = \{v_1,v_2,v_3\}$ with $\varphi(v_i) = i$ for $i \in [3]$. By Lemma~\ref{lem:A}, there exists $E' \subseteq E(G) \setminus E(T)$ such that $|E'| = m(G, \varphi)$ and $\varphi$ is an acyclic coloring of $G - E'$. As $G^i - (E' \cap E(G^i))$ is acyclically colored by $\varphi^i$ ($i \in [2]$), we have $m(G, \varphi) = |E'| = |E' \cap E(G^1)|+|E' \cap E(G^2)| \ge m(G^1, \varphi^1) + m(G^2, \varphi^2)$.
Similarly, by Lemma~\ref{lem:A}, let $E_i' \subseteq E(G^i) \setminus E(T)$ be such that $|E_i'| = m(G^i, \varphi^i)$ and $G^i - E_i'$ is acyclically colored by $\varphi^i$. Let $E' := E_1' \cup E_2'$. Observe that if there is a cycle $C$ which is colored by only two colors in $G - E'$, then $C$ must contain two vertices of $T$, say $v_1, v_2$, and $C + v_1v_2$ contains some cycle in $G^1 - E_1'$ or $G^2 - E_2'$ which uses only two colors as well, a contradiction. Hence $G - E'$ is acyclically colored and $m(G, \varphi) \le |E'| = |E_1'| + |E_2'| = m(G^1, \varphi^1) + m(G^2, \varphi^2)$.
\end{proof}
\begin{lemma} \label{lem:tritree}
Let $G$ be a plane triangulation on at least $4$ vertices and $\varphi$ be a proper coloring of $G$. Let $\mathcal{V}_G := \{G_1, \dots, G_t\}$. We have $m(G, \varphi) = \sum_{i \in [t]} m(G_i, \varphi_i)$, where $\varphi_i$ denotes the restriction of $\varphi$ on $V(G_i)$.
\end{lemma}
\begin{proof}
We prove by induction on $|\mathcal{V}_G|$. It trivially holds when $|\mathcal{V}_G| = 1$.
Suppose $|\mathcal{V}_G| > 1$. Let $T \in \mathcal{E}_G$, $A_1$ and $A_2$ be the components of $G - T$, and for $i \in [2]$, $G^i$ be the subgraph of $G$ induced by $V(A_i) \cup V(T)$. We may assume $G_1,\dots,G_{t'} \subseteq G^1$ and $G_{t'+1},\dots,G_t \subseteq G^2$ for some $1 \le t' < t$. Then, by Lemma~\ref{lem:trisep} and the induction hypothesis, $m(G, \varphi) = m(G^1, \varphi^1) + m(G^2, \varphi^2) = \sum_{i \in [t']} m(G_i, \varphi_i) + \sum_{i \in [t] \setminus [t']} m(G_i, \varphi_i) = \sum_{i \in [t]} m(G_i, \varphi_i)$.
\end{proof}
\begin{lemma} \label{lem:n-3}
Let $G$ be a $3$-colorable plane triangulation on $n$ vertices and $\varphi$ be the unique proper $3$-coloring of $G$. For any distinct $i, j \in [3]$, $G_{ij}$ is connected. Moreover, if $n > 3$, $G_{ij}$ is $2$-connected.
\end{lemma}
\begin{proof}
We prove by induction on $n$. The triangulations of order at most 6 are listed in Figure~\ref{fig:smalltri}. Among these graphs, only the triangle and the octahedron are 3-colorable. It is not hard to verify that the claims hold for these two graphs. From now on we assume that $n > 6$.
As $G$ is a 3-colorable triangulation, every vertex of $G$ has an even degree, and hence there exists $v \in V(G)$ with $d_G(v) = 4$. Let $v_1 v_2 v_3 v_4 v_1$ be the cycle induced by $N_G(v)$. We have $\varphi(v_i) = \varphi(v_{i + 2})$ for each $i \in [2]$.
Suppose there exists $i \in [2]$ such that $v_i$ and $v_{i + 2}$ have no common neighbor other than $v, v_{i + 1}, v_{i + 3}$, where $v_5 := v_1$. We contract $v_i v v_{i + 2}$ to obtain $G'$ and call the new vertex $v'$. Let $\varphi': V(G') \rightarrow [3]$ be such that $\varphi'(v') = \varphi(v_i)$ and $\varphi'(u) = \varphi(u)$ for $u \in V(G') \setminus \{v'\}$. It is clear that $\varphi'$ is the unique proper 3-coloring of the triangulation $G'$. By the induction hypothesis, $G_{ij}'$ is 2-connected for any distinct $i, j \in [3]$. Then, one can easily prove by the construction that $G_{ij}$ is 2-connected for any distinct $i, j \in [3]$.
Suppose for every $i \in [2]$, $v_i$ and $v_{i + 2}$ have some common neighbor other than $v, v_{i + 1}, v_{i + 3}$. Since $G$ is not the octahedron, it has some separating triangle $T$. Let $A_1, A_2$ be the components of $G - T$. We consider the subgraphs $G^i$ of $G$ induced by $V(A_i) \cup V(T)$ ($i \in [2]$). Let $\varphi_i$ be the restriction of $\varphi$ to $V(G^i)$. As $|V(G^i)| > 3$, it follows from the induction hypothesis that $G_{jk}^i$ is 2-connected for any distinct $j, k \in [3]$ ($i \in [2]$), from which it immediately follows that $G_{jk}$ is 2-connected for any distinct $j, k \in [3]$.
\end{proof}
\begin{figure}[!ht]
\centering
\includegraphics[scale=1.2]{smalltri}
\caption{The triangulations of order at most 6.}
\label{fig:smalltri}
\end{figure}
Let $G$ be a graph with a proper $k$-coloring $\varphi$. Denote by $c_{ij}$ the number of connected components of $G_{ij}$. The number of edges we need to remove from $G_{ij}$ to destroy all of its cycles is $|E(G_{ij})|-|V(G_{ij})|+c_{ij}$. As the edge sets $E(G_{ij})$ are pairwise disjoint for distinct pairs $\{i,j\}$, and each vertex $v$ of $G$ is contained in $k-1$ subgraphs $G_{ij}$, we know that \begin{align*}
m(G, \varphi) = \sum_{1 \le i < j \le k} (|E(G_{ij})|-|V(G_{ij})|+c_{ij}) = |E(G)|-(k-1)|V(G)|+\sum_{1 \le i < j \le k}c_{ij}.
\end{align*} We obtain the following result by this observation.
\begin{theorem} \label{thm:n-3}
Assume $G$ is a $3$-colorable plane triangulation on $n$ vertices and $\varphi$ is the unique proper $3$-coloring of $G$. Then $m(G, \varphi) = n - 3$. For $v \in V(G)$, let $\varphi_v$ be the $4$-coloring of $G$ defined as $\varphi_v(v) = 4$ and $\varphi_v(u) = \varphi(u)$ for all $u \in V(G) \setminus \{v\}$. If $n > 3$, we have $m(G, \varphi_v) \le n - 5$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:n-3}, $G_{ij}$ is connected for any distinct $i, j \in [3]$. Hence \begin{align*}
m(G, \varphi) &= \sum_{1 \le i < j \le 3} (|E(G_{ij})| - |V(G_{ij})| + 1) = |E(G)| - 2|V(G)| + 3 = (3n - 6) - 2n + 3 = n - 3.
\end{align*}
For the second statement, we fix $v \in V(G)$ and focus on the coloring $\varphi_v$. Without loss of generality, assume $\varphi(v) = 3$. By Lemma~\ref{lem:n-3}, $G_{12}$ (with respect to the coloring $\varphi_v$) is 2-connected. Moreover, for $i \in [2]$, the subgraph induced by $\varphi_v^{-1}(i) \cup \varphi_v^{-1}(3) \cup \{v\} = \varphi^{-1}(i) \cup \varphi^{-1}(3)$ is 2-connected and hence $G_{i3}$ (with respect to the coloring $\varphi_v$) is connected. It is also obvious that $G_{i4}$ is a forest for every $i \in [3]$. As $d_G(v) \ge 4$, we have that \begin{align*}
m(G, \varphi_v) &= \sum_{1 \le i < j \le 3} (|E(G_{ij})| - |V(G_{ij})| + 1) = (|E(G)| - d_G(v)) - 2(|V(G)| - 1) + 3 \le n - 5. \qedhere
\end{align*}
\end{proof}
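As a quick illustration of Theorem~\ref{thm:n-3} (a sanity check that is not needed in what follows), consider the octahedron $K_{2,2,2}$ with its unique proper $3$-coloring, whose color classes are the three antipodal pairs. Here each $G_{ij}$ is a $4$-cycle, so \begin{align*}
m(G, \varphi) = \sum_{1 \le i < j \le 3} (4 - 4 + 1) = 3 = n - 3.
\end{align*}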
We are now ready to prove the main result of this section.
\begin{theorem} \label{thm:n-4}
Assume $G$ is a plane triangulation on $n$ vertices and $\varphi$ is a proper coloring of $G$. Let $k := |\varphi(V(G))|$. Then $m(G, \varphi) \le n - k$.
If, in addition, $k = 4$, $n \ge 5$ and $G$ is $4$-connected, then $m(G, \varphi) \le n - 5$.
\end{theorem}
\begin{proof}
We prove both statements by induction on $n$. It is easy to check that they hold for $n \le \max\{6, k\}$, thus we assume $n > \max\{6, k\}$.
We first consider, for the first statement, that $G$ is not 4-connected, i.e.\ $G$ has some separating triangle $T$. Let $A_1, A_2$ be the components of $G - T$. Let $G_i$ be the subgraphs of $G$ induced by $V(A_i) \cup V(T)$ ($i \in [2]$).
Denote by $\varphi_i$ the restriction of $\varphi$ on $V(G_i)$.
Write $n_i := |V(G_i)|$ and $k_i := |\varphi_i(V(G_i))|$. Note that $n_1 + n_2 = n + 3$ and $k_1 + k_2 \ge k + 3$.
By the induction hypothesis and Lemma~\ref{lem:A}, for each $i \in [2]$, there exists $E_i' \subseteq E(G_i) \setminus E(T)$ such that $|E_i'| \le n_i - k_i$ and $G_i - E_i'$ is acyclically colored by $\varphi_i$. Let $E' := E_1' \cup E_2'$. It is easy to prove that $G - E'$ is acyclically colored by $\varphi$ and
$|E'| = |E_1'| + |E_2'| \le (n_1 - k_1) + (n_2 - k_2) \le n - k$.
Henceforth, we assume that $G$ has no separating triangle, and thus $\delta(G) \in \{4, 5\}$. Fix $v \in V(G)$ such that $d_G(v) = \delta(G)$.
Depending on the value of $\delta(G)$, we consider two cases.
\smallskip
{\bf Case 1:} $d_G(v) = \delta(G) = 4$.
Let $v_1 v_2 v_3 v_4 v_1$ be the cycle induced by $N_G(v)$. Since $n > 6$ and $G$ has no separating triangle, we can assume that $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$.
If $\varphi(v_1) \neq \varphi(v_3)$, we obtain $G'$ from $G$ by deleting $v$ and adding the edge $v_1 v_3$. Let $\varphi'$ be the restriction of $\varphi$ on $V(G')$. Denote $n' := |V(G')|$ and $k' := |\varphi'(V(G'))|$. Note that $G'$ is 4-connected, $n' = n - 1 \ge 6$ and $k' = k$ or $k - 1$. Moreover, if $k' = k - 1$, then $v$ is the only vertex that is colored by $\varphi(v)$ and hence no 2-colored cycle in $G$ contains $v$. By the induction hypothesis, there exists $E'' \subseteq E(G')$ such that $G' - E''$ is acyclically colored by $\varphi'$ and $|E''| = m(G', \varphi') \le n' - k'$. Define $S := \{v v_2\}$ if $k' = k$, and $S := \emptyset$ if $k' = k - 1$. Set $E' := (E'' \setminus \{v_1 v_3\}) \cup S$. One can readily show that $G - E'$ is acyclically colored by $\varphi$ and $|E'| \le n - k$. If $k = k' = 4$, we additionally require from the induction hypothesis that $|E''| \le n' - 5$, which yields in this case that $|E'| \le n - 5$. If $k = 4$ and $k' = k - 1$, then, assuming without loss of generality that $\varphi(V(G)) = [4]$ and $\varphi(v) = 4$, one can deduce from Lemma~\ref{lem:n-3} that $G_{ij}$ is connected for all distinct $i, j \in [3]$, and hence prove in a similar way as in the proof of Theorem~\ref{thm:n-3} that $m(G, \varphi) = n - 5$.
Assume $\varphi(v_1) = \varphi(v_3)$. First we prove that $m(G, \varphi) \le n - k$. Let $G'$ be obtained from $G$ by contracting $v_1 v v_3$ to a new vertex $v'$ and denote by $\varphi'$ the coloring induced from $\varphi$, so that $\varphi'(v') = \varphi(v_1)$. Denote $n' := |V(G')|$ and $k' := |\varphi'(V(G'))|$. We have $n' = n - 2 \ge 5$ and $k' = k$ or $k - 1$. By the induction hypothesis and Lemma~\ref{lem:A}, there exists $E'' \subseteq E(G') \setminus \{v'v_2, v'v_4\}$ such that $G' - E''$ is acyclically colored by $\varphi'$ and $|E''| = m(G', \varphi') \le n' - k'$. Note that any path joining $v_1, v_3$ in $G - \{v, v_2, v_4\}$ corresponds to a cycle containing $v'$ in $G'$ as $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$. Define $S := \{v v_2\}$ if $k' = k$, and $S := \emptyset$ if $k' = k - 1$. Let $E' := E'' \cup \{v_1v_2\} \cup S$. It is clear that $|E'| \le n - k$ and $G - E'$ is acyclically colored by $\varphi$ as $v_1 v_2 v_3 v_4 v_1$ is the only cycle that is possibly 2-colored in $G - E'' - v$.
It remains to show that if $k=4$, then $m(G, \varphi) \le n - 5$.
If $\varphi(v_2) \neq \varphi(v_4)$, we take $E' := E''$ with $|E'| \le n' - 4 = n - 6$ and it is easy to show that $G - E'$ is acyclically colored by $\varphi$. So we assume that $\varphi(v_2) = \varphi(v_4)$. If $k' = 3$, then it follows from Theorem~\ref{thm:n-3} that $m(G, \varphi) \le n - 5$. So we assume $k' = 4$; in particular, $|E''| \le n' - 4$.
If $|E''| = m(G', \varphi') \le n' - 5$, we take $E' := E'' \cup \{vv_2, v_1v_2\}$, so $|E'| = |E''| + 2 \le n - 5$ and $G - E'$ is acyclically colored by $\varphi$. This yields that $m(G, \varphi) \le |E'| \le n - 5$.
Assume $m(G', \varphi') =|V(G')| - 4$. As $|V(G')| > 4$, by the induction hypothesis, $G'$ is not 4-connected, and hence contains separating triangles. As $G$ is 4-connected, it follows that
each separating triangle of $G'$ contains $v'$ and separates $v_2$ and $v_4$; an example is given in Figure~\ref{fig:BC}.
This implies that
$\mathcal{T}_{G'}$ is a path
$G_1' \dots G_t'$ ($t \ge 2$), with end-vertex $G'_1$ containing $v_2$, and the other end-vertex $G'_t$ containing $v_4$.
Denote by $\varphi'_i$ the restriction of $\varphi'$ on $V(G_i')$. By Lemma~\ref{lem:tritree} and Theorem~\ref{thm:n-3}, precisely one graph $G_i'$ from $\mathcal{V}_{G'}$ has $|\varphi'_i(V(G'_i))|=4$ and $m(G'_i, \varphi'_i)= |V(G'_i)| - 4$, while $|\varphi'_j(V(G'_j))| = 3$ for all $j \in [t] \setminus \{i\}$.
By the induction hypothesis, we know that $|V(G'_i)| \le 4$ and hence $G'_i$ is isomorphic to $K_4$.
Note that $G_i'$ is not a leaf of $\mathcal{T}_{G'}$, for otherwise, say $i = 1$, then $|\varphi'(V(G') \setminus \{v_2\})| = 3$. This implies that $\varphi(v_2) \neq \varphi(v_4)$, contradicting the above assumption.
Thus $|\varphi'(V(G_j'))| = 3$ and $\{\varphi'(v'), \varphi'(v_2)\} \subset \varphi'(V(G_{j}'))$ for $j \in \{1,t\}$. As $G_i'$ is an internal vertex of $\mathcal{T}_{G'}$, we have $\varphi'(V(G_1')) \neq \varphi'(V(G_t'))$. Without loss of generality, we may assume that $\varphi'(V(G_1')) = [4] \setminus \{\varphi(v)\}$ and $\varphi'(V(G_t')) = \{\varphi(v), \varphi'(v'), \varphi'(v_2)\}$. Let $T$ be the separating triangle of $G'$ that is contained in $G_1'$. Write $V(T) := \{v', u, w\}$ such that $\varphi'(u) = \varphi'(v_2)$. Note that $\varphi'(w) \neq \varphi(v)$. Let $C$ be the cycle induced by the neighbors of $u$ in $G_1'$ (see Figure~\ref{fig:BC}(b) for an example) and $e_C$ be an arbitrary edge of $C$. By Lemma~\ref{lem:A}, we may require $E'' \subseteq E(G') \setminus (\{v'v_2, v'v_4\} \cup (E(C) \setminus \{e_C\}))$ as $\varphi'(v_2) = \varphi'(v_4) = \varphi'(u) \notin \varphi'(V(C))$, and hence $e_C \in E''$. Let $E' := (E'' \setminus \{e_C\}) \cup \{vv_2, v_1v_2\}$. We have $|E'| = |E''| + 1 \le n - 5$. It remains to show that $G - E'$ is acyclically colored by $\varphi$. Again, it is easy to show that $G - E' - e_C$ is acyclically colored by $\varphi$. Hence, any cycle $K$ which uses only two colors in $G - E'$ contains $e_C$ and the two colors
used in $K$ are $\varphi'(w), \varphi'(v')$. So $K$
does not contain $v, v_2, v_4$. If $\{v_1, v_3\} \subset V(K)$, then after contracting the path $v_1 v v_3$, $K$ becomes the union of two edge-disjoint cycles in $(G'_{\varphi'(v')\varphi'(w)} - E'') + e_C$ (as $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$), a contradiction. If $|\{v_1, v_3\} \cap V(K)| \le 1$, then $K$ corresponds to $C$. Since $C$ is a cycle separating $v_2$ and $v_4$ in $G'$, $K$ is a cycle separating $v_2$ and $v_4$ in $G$, which is however impossible since $v_2 v v_4$ is a path in $G$ not intersecting $K$.
\smallskip
{\bf Case 2:} $d_G(v) = \delta(G) = 5$.
Let $v_1v_2v_3v_4v_5v_1$ be the induced cycle on $N_G(v)$.
If $|\varphi(N_G(v))| = 3$, we may assume that $\varphi(v_1) = \varphi(v_3)$ and $\varphi(v_2) = \varphi(v_4)$. As $G$ is 4-connected and $\delta(G) = 5$, we may assume that $v_1, v_3$ have no common neighbor other than $v, v_2$. Let $G'$ be obtained from $G$ by contracting $v_1 v v_3$ to a new vertex $v'$. We do not distinguish edges from $E(G') \setminus \{v' v_2\}$ from their corresponding edges in $G$. Set $\varphi'(v') := \varphi(v_1)$ and $\varphi'(u) := \varphi(u)$ for all $u \in V(G') \setminus \{v'\}$. Denote $n' := |V(G')|$ and $k' := |\varphi'(V(G'))|$. We have $n' = n - 2$ and $k' = k$ or $k - 1$. By Lemma~\ref{lem:A} and the induction hypothesis, there exists $E'' \subseteq E(G') \setminus \{v'v_2\}$ such that $\varphi'$ is an acyclic coloring of $G' - E''$ and $|E''| = m(G', \varphi') \le n' - k'$. Set $S := \{v v_2\}$ if $k' = k$ and $S := \emptyset$ if $k' = k - 1$. Define $E' := E'' \cup S$. It is easy to show that $|E'| \le n - k - 1$ and $\varphi$ is an acyclic coloring of $G - E'$.
If $|\varphi(N_G(v))| \ge 4$, we may assume that $\varphi(v_i) = i$ for each $i \in [4]$. Obtain $G'$ from $G$ by deleting $v$ and adding edges $v_1v_3, v_1v_4$. Let $\varphi'$ be the restriction of $\varphi$ on $V(G) \setminus \{v\}$. Denote $n' := |V(G')|$ and $k' := |\varphi'(V(G'))|$. We have $n' = n - 1$ and $k' = k$ or $k - 1$. By Lemma~\ref{lem:A} and the induction hypothesis, there exists $E'' \subseteq E(G') \setminus \{v_1v_3, v_1v_4\}$ such that $\varphi'$ is an acyclic coloring of $G' - E''$ and $|E''| = m(G', \varphi') \le n' - k'$. Set $S := \{v v_5\}$ if $k' = k$ and $S := \emptyset$ if $k' = k - 1$. Define $E' := E'' \cup S$. It is easy to show that $|E'| \le n - k$ and $\varphi$ is an acyclic coloring of $G - E'$. We remark that in this case we have $k > 4$, and thus we do not need to consider the second statement.
\end{proof}
\begin{figure} [!ht]
\centering
\subfigure[]{\includegraphics[scale = 1.5]{B.pdf}} \label{subfig:B}
\hfil
\subfigure[]{\includegraphics[scale = 1.5]{C.pdf}} \label{subfig:C}\\
\caption{(a) A 4-colored plane triangulation $G$. (b) The plane triangulation $G'$ obtained from $G$ by contracting the path $v_1 v v_3$. The cycle $C$ consists of the thick edges.}
\label{fig:BC}
\end{figure}
The following corollary characterizes the plane triangulations $G$ and colorings $\varphi$ that satisfy the equalities $m(G, \varphi) = n - 3$ and $m(G, \varphi) = n - 4$, respectively.
\begin{corollary} \label{cor:n-3,n-4}
Let $G$ be a plane triangulation on $n$ vertices and $\varphi$ be a coloring of $G$. Let $\mathcal{V}_{G} := \{G_1, \dots, G_t\}$ and $\varphi_i$ be the restriction of $\varphi$ on $V(G_i)$ for $i \in [t]$. We have that $m(G, \varphi) = n - 3$ if and only if $|\varphi(V(G))| = 3$; and $m(G, \varphi) = n - 4$ if and only if there exists $i \in [t]$ such that $G_i$ is isomorphic to $K_4$ and $|\varphi_j(V(G_j))| = 3$ for all $j \in [t] \setminus \{i\}$.
\end{corollary}
\section{Acyclic 2CC transversal and upper bounds for $m_k(G)$}
\label{sec:subgraph}
Let $G$ be a graph and $\varphi$ a coloring of $G$. We have shown upper bounds on $m(G, \varphi)$ when $G$ is a plane triangulation. In this section, we show that the 2CC transversal $E'$ can be chosen so that it induces a forest, and we extend the results to general planar graphs.
\begin{definition}
Let $G$ be a graph and $U \subseteq V(G)$. An edge set $E' \subseteq E(G)$ is \emph{$U$-acyclic} if the graph induced by $E'$ is a forest and contains no path joining two distinct vertices of $U$. By abuse of notation, we say an edge set is $H$-acyclic instead of $V(H)$-acyclic for any subgraph $H$ of $G$, and if $H$ is the subgraph induced by a single edge $e$, we write $e$-acyclic instead of $H$-acyclic.
\end{definition}
\begin{proposition} \label{pro:forest}
Let $G$ be a plane triangulation and $\varphi$ be a proper coloring of $G$. For any facial cycle $F$ of $G$, there exists an $F$-acyclic $2$CC transversal $E_F$ with respect to $\varphi$.
\end{proposition}
\begin{proof}
We prove by induction on $|V(G)|$. We shall assume $|V(G)| > \max\{6, |\varphi(V(G))|\}$ as the small cases can be readily verified.
Suppose $G$ has some separating triangle $T$. Let $A_1$ and $A_2$ be the components of $G - T$, and for $i \in [2]$, $G_i$ be the subgraph of $G$ induced by $V(A_i) \cup V(T)$. Without loss of generality, assume that $F$ is a facial cycle of $G_1$. By the induction hypothesis, we have an $F$-acyclic 2CC transversal $E_F^1 \subseteq E(G_1)$ of $G_1$ and a $T$-acyclic 2CC transversal $E_T^2 \subseteq E(G_2)$ of $G_2$. It is easy to see that the edge set $E_F := E_F^1 \cup E_T^2$ is an $F$-acyclic 2CC transversal of $G$.
Henceforth, we assume that $G$ has no separating triangle and thus $\delta(G) \ge 4$. Fix $v \in V(G) \setminus V(F)$ such that $d_G(v)=\delta(G) \le 5$. We consider two cases, depending on whether $d_G(v) = 4$ or $5$.
\smallskip
{\bf Case 1:} $d_G(v) = 4$.
Let $v_1 v_2 v_3 v_4 v_1$ be the cycle induced by $N_G(v)$. Since $|V(G)| > 6$ and $G$ has no separating triangle, we can assume that $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$. If $\varphi(v_1) \neq \varphi(v_3)$, we obtain $G'$ from $G$ by deleting $v$ and adding the edge $v_1 v_3$, and color it with the coloring $\varphi'$ induced from $\varphi$. Clearly, $F$ remains a facial cycle of $G'$. By the induction hypothesis, there exists an $F$-acyclic 2CC transversal $E_F' \subseteq E(G')$ of $G'$. Set $E_F := (E_F' \setminus \{v_1 v_3\}) \cup \{v v_2\}$. One can readily check that $E_F$ is an $F$-acyclic 2CC transversal of $G$.
If $\varphi(v_1) = \varphi(v_3)$, obtain $G'$ from $G$ by contracting $v_1 v v_3$ to a new vertex $v'$ and denote by $\varphi'$ the coloring induced from $\varphi$, so that $\varphi'(v') = \varphi(v_1)$. By the induction hypothesis, let $E_F' \subseteq E(G')$ be an $F$-acyclic 2CC transversal of $G'$. Recall that $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$, and hence any path joining $v_1, v_3$ in $G - \{v, v_2, v_4\}$ corresponds to a cycle containing $v'$ in $G'$. We construct $E_F$ as follows. \begin{itemize}
\item If $E_F' \cap \{v'v_2, v'v_4\} = \emptyset$, then $v_1 v_2 v_3 v_4 v_1$ is the only cycle in $G - (E_F' \cup \{v v_2\})$ that possibly uses only two colors. We claim that there exists $j \in \{1, 3\}$ such that $E_F := E_F' \cup \{v v_2, v_j v_2\}$ induces a forest not connecting any distinct vertices from $V(F)$. Suppose this does not hold; then for each $j \in \{1, 3\}$, the graph induced by $E_F'$ in $G$ contains some path joining $v_j$ and $v_2$, or contains two disjoint paths each joining one vertex of $V(F)$ to one of $v_j, v_2$. In either case, the graph induced by $E_F'$ in $G'$ contains some path joining two vertices from $V(F)$ or some cycle, a contradiction. As $G - E_F$ is acyclically colored by $\varphi$, $E_F$ is the desired edge set.
\item If $E_F' \cap \{v'v_2, v'v_4\} = \{v'v_i\}$ for some $i \in \{2, 4\}$, set $E_F := (E_F' \setminus \{v'v_i\}) \cup \{vv_2, v_1v_i, v_3v_i\}$. Similarly to the previous case, it can be shown that $G - E_F$ is acyclically colored by $\varphi$ and the subgraph induced by $E_F$ has no cycle and no path joining distinct vertices from $V(F)$.
\item If $\{v'v_2, v'v_4\} \subseteq E_F'$, then there is a unique path $P$ in $G'-E_F'$ joining $v'$ and $v_2$ using only colors $\varphi(v_1)$ and $\varphi(v_2)$. Therefore $P$ can be viewed as a path in $G - ((E_F' \setminus \{v' v_2, v' v_4\}) \cup E(v_1 v_2 v_3 v_4 v_1))$ connecting $v_2$ and $v_j$ for some $j \in \{1, 3\}$. Since $v_1, v_3$ have no common neighbor other than $v, v_2, v_4$ and the neighbor of $v'$ in $P$ is not $v_4$, the index $j$ is unique. Set $E_F := (E_F' \setminus \{v' v_2, v' v_4\}) \cup \{v v_2, v_j v_2, v_1 v_4, v_3 v_4\}$. Similarly to the previous cases, it is easy to show that $E_F$ is $F$-acyclic. It is left to show that $\varphi$ is an acyclic coloring of $G - E_F$. Suppose to the contrary that there is some 2-colored cycle $C$ in $G - E_F$. It is not hard to see that $C$ contains $v_{4 - j}v_2$ but not $v_j$. Then $C - v_{4 - j} v_2$ is a path in $G' - E_F'$ connecting $v'$ and $v_2$ yet different from $P$, a contradiction.
\end{itemize}
{\bf Case 2:} $d_G(v) = 5$.
Let $v_1 v_2 v_3 v_4 v_5 v_1$ be the induced cycle on $N_G(v)$. If $|\varphi(N_G(v))| = 3$, we may assume that $\varphi(v_1) = \varphi(v_3)$ and $\varphi(v_2) = \varphi(v_4)$. Suppose $v_1, v_3$ have a common neighbor $u$ other than $v, v_2$ and $v_2, v_4$ have a common neighbor $u'$ other than $v, v_3$. Since $G$ has no separating triangle, $u = u'$ and $d_G(v_2) = d_G(v_3) = 4$. If $v_2$ or $v_3$ is not incident to $F$, we may revise our choice of $v$ so that $d_G(v) = 4$. Otherwise, $F$ is the cycle $u v_2 v_3 u$ and since $d_G(v) = 5$, there exists some vertex $w \in V(G) \setminus \{v, v_1, v_2, v_3, v_4, u\}$ such that $d_G(w) \le 5$; we may replace $v$ by $w$. Therefore, without loss of generality, we may assume that $v_1, v_3$ have no common neighbor other than $v, v_2$.
Obtain $G'$ from $G$ by contracting $v_1 v v_3$ to a new vertex $v'$ and denote by $\varphi'$ the coloring induced from $\varphi$, so that $\varphi'(v') = \varphi(v_1)$. It is clear that $F$ remains a facial cycle of $G'$. Let $E_F' \subseteq E(G')$ be an $F$-acyclic 2CC transversal of $G'$. We construct $E_F$ as follows. \begin{itemize}
\item If $v' v_2 \in E_F'$, set $E_F := (E_F' \setminus \{v' v_2\}) \cup \{v v_2, v_1 v_2, v_2 v_3\}$.
\item If $v' v_2 \notin E_F'$, set $E_F := E_F' \cup \{v v_2\}$.
\end{itemize} In both cases it is easy to show that $E_F$ is an $F$-acyclic 2CC transversal of $G$.
If $|\varphi(N_G(v))| > 3$, we may assume that $\varphi(v_i) = i$ for each $i \in [4]$. Let $G'$ be the graph obtained from $G$ by deleting $v$ and adding edges $v_1v_3, v_1v_4$. Let $\varphi'$ be the restriction of $\varphi$ to $V(G) \setminus \{v\}$. By the induction hypothesis, let $E_F'$ be an $F$-acyclic 2CC transversal of $G'$. One can easily show that $E_F := (E_F' \setminus \{v_1v_3, v_1v_4\}) \cup \{vv_5\}$ is an $F$-acyclic 2CC transversal of $G$.
\end{proof}
We remark that the $F$-acyclic 2CC transversal $E_F$ found in Proposition~\ref{pro:forest} induces a forest of at least $|V(F)| = 3$ components and hence has size at most $|V(G)| - 3$. In fact, an $F$-acyclic 2CC transversal of the optimal size $m(G, \varphi)$ does exist due to the following observation.
Note that for any edge set $E' \subseteq E(G)$, $G - E'$ is acyclically colored by a proper $k$-coloring $\varphi$ of $G$ if and only if $E(G) \setminus E'$ is an independent set of the direct sum of the graphic matroids of $G_{ij}$ ($i, j \in [k]$). This yields the following corollary.
\begin{corollary} \label{cor:forest}
Let $G$ be a plane triangulation, $\varphi$ be a proper coloring of $G$ and $F$ be a facial cycle of $G$. There exists an $F$-acyclic $2$CC transversal $E' \subseteq E(G)$ with $|E'| = m(G, \varphi)$.
\end{corollary}
Next, we generalize the results to planar graphs.
\begin{theorem}
Assume $G$ is a planar graph on $n$ vertices and $\varphi$ is a proper coloring of $G$ with $|\varphi(V(G))|=k$. Let $U \subseteq V(G)$ be a set that induces a clique of size $|U| \le 3$. There exists a $U$-acyclic $2$CC transversal $E_U \subseteq E(G)$ with $|E_U| = m(G, \varphi) \le n - k$.
\end{theorem}
\begin{proof}
We prove by induction on $n$. It clearly holds when $n \le k$. From now on we consider $n > k$.
If $G$ has some separator $W \subset V(G)$ such that $|W| \le 3$ and $W$ induces a clique, let $A_1$ be a component of $G - W$ and $A_2$ the union of all other components. Denote by $G_i$ the subgraph of $G$ induced by $V(A_i) \cup W$ and by $\varphi_i$ the restriction of $\varphi$ on $V(G_i)$ ($i \in [2]$). Write $n_i := |V(G_i)|$ and $k_i := |\varphi_i(V(G_i))|$. We have $n_1 + n_2 = n + |W|$ and $k_1 + k_2 \ge k + |W|$. Without loss of generality, we require that $U \subseteq V(G_1)$. By the induction hypothesis, there exist a $U$-acyclic 2CC transversal $E_U'$ of $G_1$ with $|E_U'| \le n_1 - k_1$ and a $W$-acyclic 2CC transversal $E_W'$ of $G_2$ with $|E_W'| \le n_2 - k_2$. It is easy to show that $E_U := E_U' \cup E_W'$ is a $U$-acyclic 2CC transversal with $|E_U| \le n - k$.
We assume that $G$ has no separator $W \subset V(G)$ such that $|W| \le 3$ and $W$ induces a clique. In particular, $G$ is 2-connected and every facial boundary of $G$ is a cycle. We add to $G$ as many edges as possible such that $\varphi$ remains a proper coloring and $G$ remains a plane graph. By abuse of notation, we call the new graph $G$. It suffices to prove the statement for the new graph $G$.
If $G$ is a triangulation, we apply Theorem~\ref{thm:n-4} and Corollary~\ref{cor:forest} to conclude that $G$ has some $U$-acyclic 2CC transversal $E_U$ with $|E_U| = m(G, \varphi) \le n - k$.
Note that no facial cycle of $G$ has a chord, for otherwise the end-vertices of the chord would form a separator of $G$ inducing a clique, contradicting our assumption. Hence every facial cycle of $G$ is an induced cycle.
Assume $G$ is not a plane triangulation. As each facial cycle is an induced cycle, and any two non-adjacent vertices on a common face receive the same color (otherwise an edge could be added inside the face, contradicting the maximality of $G$), there exists a facial cycle $v_1v_2v_3v_4v_1$ in $G$ such that $\varphi(v_1) = \varphi(v_3)$ and $\varphi(v_2) = \varphi(v_4)$. If $v_1, v_3$ have 3 common neighbors and $v_2, v_4$ have 3 common neighbors, then $G$ must be isomorphic to the plane graph obtained from the octahedron by deleting one vertex since we assume that $G$ has no separating triangle. One can easily verify that the statement holds for this graph. Thus, without loss of generality, we assume that $v_1, v_3$ have no common neighbor other than $v_2, v_4$. Let $G'$ be obtained from $G$ by identifying $v_1$ and $v_3$ as a new vertex $v'$ and $\varphi'$ be the coloring of $G'$ induced from $\varphi$. Denote $n' := |V(G')|$ and $k' := |\varphi'(V(G'))|$. We have $n' = n - 1$ and $k' = k$. Moreover, we can view $U$ as a vertex set of $G'$ since $U$ contains at most one of $v_1, v_3$. By the induction hypothesis, we have a $U$-acyclic 2CC transversal $E_U'$ of $G'$ with $|E_U'| = m(G', \varphi') \le n' - k'$. We construct $E_U$ as follows. Since the approach is similar to that in the proof of Proposition~\ref{pro:forest}, some details will be omitted.
\begin{itemize}
\item If $E_U' \cap \{v'v_2, v'v_4\} = \emptyset$, then there exists $j \in \{1, 3\}$ such that $E_U := E_U' \cup \{v_j v_2\}$ is $U$-acyclic.
\item If $E_U' \cap \{v'v_2, v'v_4\} = \{v'v_i\}$ for some $i \in \{2, 4\}$, set $E_U := (E_U' \setminus \{v'v_i\}) \cup \{v_1v_i, v_3v_i\}$.
\item If $\{v'v_2, v'v_4\} \subseteq E_U'$, then there is a unique path $P$ in $G' - E_U'$ joining $v'$ and $v_2$ using only colors $\varphi(v_1)$ and $\varphi(v_2)$. We can view $P$ as a path in $G - ((E_U' \setminus \{v' v_2, v' v_4\}) \cup E(v_1 v_2 v_3 v_4 v_1))$ connecting $v_2$ and $v_j$ for some unique $j \in \{1, 3\}$. Set $E_U := (E_U' \setminus \{v' v_2, v' v_4\}) \cup \{v_j v_2, v_1 v_4, v_3 v_4\}$.
\end{itemize} It is not hard to verify that the edge set $E_U$ constructed above is a $U$-acyclic 2CC transversal with $|E_U| \le n - k$. This completes the proof.
\end{proof}
\begin{corollary}
Let $G$ be a planar graph on $n$ vertices. If $n \ge 5$, then $m_4(G) \le n-5$. If $G$ is $3$-colorable, then $m_3(G) \le n-3$.
\end{corollary}
\begin{theorem}
There are infinitely many $4$-connected planar graphs $G$ with $m_4(G)=|V(G)|-5$, and infinitely many $3$-colorable planar graphs with $m_3(G)=|V(G)|-3$.
\end{theorem}
\begin{proof}
It follows from Corollary \ref{cor:n-3,n-4} that for any 3-colorable plane triangulation $G$, $m_3(G)=|V(G)|-3$.
Let $G$ be the 4-connected plane triangulation obtained by joining two independent vertices $u, v$ to every vertex of a cycle $C$ on $n - 2$ vertices, where $n \ge 7$ is odd. It is obvious that $G$ is not 3-colorable. Let $\varphi$ be any 4-coloring of $G$. Then, without loss of generality, $\varphi(V(C)) = [3]$ and $\varphi(u) = \varphi(v) = 4$. For any $i \in [3]$, $G_{i4}$ is a connected plane graph with $|\varphi^{-1}(i)|$ faces, and for distinct $i,j \in [3]$, $G_{ij}$ is acyclic. Therefore $m(G, \varphi) = \sum_{i \in [3]} (|\varphi^{-1}(i)| - 1) = n - 5$.
\end{proof}
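For example, in the smallest case $n = 7$ of the above construction, $C$ is a $5$-cycle whose vertices may be colored $1, 2, 1, 2, 3$ in cyclic order, so that $m(G, \varphi) = (2 - 1) + (2 - 1) + (1 - 1) = 2 = n - 5$.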
\section{Upper bounds for $m'_k(G)$} \label{sec:m_k}
In this section we study the problem of how many edges we need to remove from a planar graph in order to make it acyclic $k$-colorable for $k = 3, 4$.
\begin{theorem}
Let $G$ be a planar graph on $n$ vertices. We have $m_3(G) \le (13n - 42) / 10$ and $m_4(G) \le (3n - 12) / 5$.
\end{theorem}
\begin{proof}
We first prove that $m_4(G) \le (3n - 12) / 5$. As every plane graph is a spanning subgraph of some plane triangulation, we may assume that $G$ is a plane triangulation on $n$ vertices. Let $\varphi: V(G) \rightarrow [5]$ be an acyclic 5-coloring of $G$. Without loss of generality, assume that \begin{align*}
\sum_{v \in \varphi^{-1}(5)} (d_G(v) - 3) \le \frac15 \sum_{v \in V(G)} (d_G(v) - 3) = \frac{3n - 12}{5}.
\end{align*} Let $v$ be any vertex in $\varphi^{-1}(5)$. Since the neighbors of $v$ span some cycle and $\varphi$ is acyclic, there exist $v_1, v_2, v_3 \in N_G(v)$ whose colors are pairwise distinct. Define $E_v$ to be the set of edges incident to $v$ other than $v v_1, v v_2$ and $v v_3$, and set $\varphi'(v)$ to be the color from $[4]$ other than $\varphi(v_1), \varphi(v_2), \varphi(v_3)$. To complete the construction, we set $E' := \bigcup_{v \in \varphi^{-1}(5)} E_v$ and set $\varphi'(u) := \varphi(u)$ for all $u \in \bigcup_{i \in [4]} \varphi^{-1}(i)$. It is routine to verify that $\varphi'$ is a proper 4-coloring of $G' := G - E'$ and $|E'| = \sum_{v \in \varphi^{-1}(5)} (d_G(v) - 3) \le \frac{3n - 12}{5}$. Suppose $\varphi'$ is not an acyclic coloring of $G'$; then there is a cycle $C$ contained in $\varphi'^{-1}(i) \cup \varphi'^{-1}(j)$ for some distinct $i, j \in [4]$. Note that $C$ cannot contain any $v \in \varphi^{-1}(5)$ since $v$ has precisely three neighbors of three different colors in $G'$. Therefore $C$ is contained in $G'[(\varphi'^{-1}(i) \cup \varphi'^{-1}(j)) \setminus \varphi^{-1}(5)] = G[\varphi^{-1}(i) \cup \varphi^{-1}(j)]$, a contradiction.
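We note that the equality in the displayed bound above is an instance of the handshake lemma together with Euler's formula for a plane triangulation: \begin{align*}
\sum_{v \in V(G)} (d_G(v) - 3) = 2|E(G)| - 3n = 2(3n - 6) - 3n = 3n - 12.
\end{align*}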
This approach can be repeated to show that $m_3(G) \le (13n - 42) / 10$. More precisely, we may assume that \begin{align*}
\sum_{v \in \varphi'^{-1}(4)} (d_{G'}(v) - 2) \le \frac14 \sum_{v \in V(G')} (d_{G'}(v) - 2) = \frac{4n - 12 - 2|E'|}{4}.
\end{align*} It is not hard to see that for any $v \in V(G')$, $|\varphi'(N_{G'}(v))| \ge 2$. Let $v \in \varphi'^{-1}(4)$ and $v_1, v_2 \in N_{G'}(v)$ be of different colors. Define $E_v'$ to be the set of edges incident to $v$ other than $v v_1$ and $v v_2$, and set $\varphi''(v)$ to be the color from $[3]$ other than $\varphi'(v_1), \varphi'(v_2)$. Set $E'' := E' \cup \bigcup_{v \in \varphi'^{-1}(4)} E_v'$ and set $\varphi''(u) := \varphi'(u)$ for all $u \in \bigcup_{i \in [3]} \varphi'^{-1}(i)$. Again, it is routine to verify that $\varphi''$ is a proper 3-coloring of $G'' := G - E''$ and \begin{align*}
|E''| = |E'| + \sum_{v \in \varphi'^{-1}(4)} (d_{G'}(v) - 2) \le \frac{13n - 42}{10}.
\end{align*} Similarly as before, one can show that $\varphi''$ is an acyclic 3-coloring of $G''$ and hence the result follows.
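Here the last estimate follows from $|E'| \le (3n - 12)/5$: \begin{align*}
|E''| \le |E'| + \frac{4n - 12 - 2|E'|}{4} = n - 3 + \frac{|E'|}{2} \le n - 3 + \frac{3n - 12}{10} = \frac{13n - 42}{10}.
\end{align*}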
\end{proof}
We remark that there exist infinitely many planar graphs $G$ on $n$ vertices such that $G - E'$ is not acyclically $4$-colorable for any $E' \subseteq E(G)$ with $|E'| < (n - 2) / 4$. Let $H$ be a 2-face-colorable plane triangulation and $\mathcal{T}$ be a family of $|E(H)| / 3$ edge-disjoint facial triangles of $H$. Let $G$ be obtained from $H$ by replacing each triangle from $\mathcal{T}$ by an octahedron. Then $E(G)$ is partitioned into the edge sets of $|E(H)| / 3$ octahedra, and $n = |V(H)| + |E(H)| = 4|V(H)| - 6$. As the octahedron is not acyclically 4-colorable, any $E' \subseteq E(G)$ satisfying that $G - E'$ is acyclically 4-colorable has size at least $|E(H)| / 3 = \frac{n - 2}{4}$.
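The final count uses Euler's formula: $|E(H)| = 3|V(H)| - 6$ for the plane triangulation $H$, hence \begin{align*}
\frac{|E(H)|}{3} = |V(H)| - 2 = \frac{n + 6}{4} - 2 = \frac{n - 2}{4},
\end{align*} where $|V(H)| = (n + 6)/4$ follows from $n = 4|V(H)| - 6$.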
\section*{Acknowledgments}
The research of On-Hei Solomon Lo was supported by a Postdoctoral Fellowship of Japan Society for the Promotion of Science and by Natural Sciences and Engineering
Research Council of Canada.
The research of Ben Seamone was supported by Natural Sciences and Engineering
Research Council of Canada.
The research of Xuding Zhu was supported by National Natural Science Foundation of China grant NSFC 11971438 and U20A2068.
\bibliographystyle{abbrv}
Let $E$ be a smooth non-negative function on a Riemannian manifold $X$.
Let $\lambda$ be a positive number and
consider a weighted probability measure
$d\nu^{\lambda}(x)=Z_{\lambda}^{-1}e^{-\lambda E(x)}dx$ on $X$,
where $Z_{\lambda}$ denotes the normalized constant
and $dx$ denotes the Riemannian volume.
We consider a Dirichlet form on $L^2(X,d\nu^{\lambda})$
such that
$$
{\mathcal E}^{\lambda}(F,F)=\int_{X}|\nabla F(x)|^2d\nu^{\lambda}(x),
$$
where $\nabla$ denotes the Levi-Civita covariant derivative.
Under mild assumptions on $E$ and the Riemannian metric,
$1\in {\rm D}({\mathcal E}^{\lambda})$ and
the corresponding lowest eigenvalue $e^{\lambda}_1$
of the generator of the Dirichlet form
is $0$.
The spectral gap $e^{\lambda}_2$ of ${\mathcal E}^{\lambda}$ is defined by
\begin{align}
e_2^{\lambda}&=\inf\Bigl\{{\mathcal E}^{\lambda}(F,F)~\Big |~\|F\|_{L^2(\nu^{\lambda})}=1,
\int_XF(x)d\nu^{\lambda}(x)=0\Bigr\}.
\end{align}
The study on the estimate and the asymptotic behavior of
$e^{\lambda}_2$
as $\lambda\to\infty$ is an interesting and important subject.
In this problem,
one of the simplest cases is the following:
\begin{enumerate}
\item[(i)] $E$ has a unique minimum point $c_0$ and
there are no critical points other than $c_0$,
\item[(ii)] the Hessian of $E$ at $c_0$ is non-degenerate.
\end{enumerate}
In this case, under some additional technical assumptions,
it holds that
$\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=\sigma_1$,
where $\sigma_1$ is the lowest eigenvalue of the
Hessian of $E$ at $c_0$.
When $X={\mathbb R}^N$ and $E(x)=\frac{|x|^2}{2}$,
the generator of the Dirichlet form is called
the Ornstein-Uhlenbeck operator (OU operator for short) and the spectral set is
completely known.
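In the one-dimensional case this fact reads as follows (a standard computation which we record here for later comparison): the generator acts by $-L_{\lambda}F=-F''+\lambda xF'$, and the scaled Hermite polynomials diagonalize it,
\begin{align*}
-L_{\lambda}\bigl(h_k(\sqrt{\lambda}\,\cdot)\bigr)(x)
=\lambda k\,h_k(\sqrt{\lambda}x),\qquad k=0,1,2,\ldots,
\end{align*}
where $h_k$ denotes the $k$-th Hermite polynomial with respect to the standard Gaussian weight.
In particular, $e_2^{\lambda}=\lambda$.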
We are interested in the case where $X$ is
an ``infinite dimensional Riemannian manifold''
and $\nu^{\lambda}$ is a probability measure on it.
Let us explain our model.
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold.
Let $x_0, y_0\in M$ and consider
a space of continuous paths
$P_{x_0}(M)=C([0,1]\to M~|~\gamma(0)=x_0)$
and its subset
$P_{x_0,y_0}(M)=\{\gamma\in P_{x_0}(M)~|~\gamma(1)=y_0\}$.
Our $X$ is $P_{x_0}(M)$ or $P_{x_0,y_0}(M)$ and
$\nu^{\lambda}$ is the (pinned) Brownian motion
measure.
The transition probability of the Brownian motion is given by
$p(t/\lambda,x,y)$, where $p(t,x,y)$
denotes the heat kernel of the diffusion semigroup
$e^{t\Delta/2}$ and
$\Delta$ is the Laplace-Beltrami operator.
In many problems, we use the following
heuristically
appealing path integral expression,
$$
d\nu^{\lambda}(\gamma)=\frac{1}{Z_{\lambda}}
\exp\left(-\lambda E(\gamma)\right)
d\gamma,
$$
where $E(\gamma)$ is the energy of
path $\gamma$
and $d\gamma$ is the ``infinite dimensional Riemannian measure''.
Of course, the energy function cannot be defined on the continuous
path spaces on which the (pinned) Brownian motion measures exist
and there do not exist the ``Riemannian measures'' on
the infinite dimensional spaces.
We refer the reader to \cite{andersson-driver, lim, laetsch} for some
rigorous study of the path integral.
On the other hand, by using an $H$-derivative $D$ on $X$
(see the definition in Section~3),
we can define a Dirichlet form ${\mathcal E}^{\lambda}$ on $L^2(X,d\nu^{\lambda})$.
Our interest is in the study of the spectral gap of
${\mathcal E}^{\lambda}$.
Since
the triple $(X,\nu^{\lambda}, {\mathcal E}^{\lambda})$ is formally
an infinite dimensional analogue of the finite dimensional one,
we may conjecture some results on the asymptotics of the spectral gap.
In the case where $X=P_{x_0}(M)$, the critical point of
$E$ on the subset of $H^1$ paths is just a
constant path and this problem corresponds to
the simplest case which we explained.
Fang~\cite{fang} proved the existence of the
spectral gap by establishing the COH (Clark-Ocone-Haussmann) formula
for functions on $X=P_{x_0}(M)$.
Also it is not difficult to prove that
$\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=1$
by using the COH formula.
We prove this in Section 3.
Here note that the Hessian of $E$
at the constant path is identity.
On the other hand, if $X$ is the pinned space $P_{x_0,y_0}(M)$,
the set of critical points of the functional
$E$
on the set of $H^1$ paths of $P_{x_0,y_0}$ is
the set of geodesics.
Therefore,
by an analogy of finite dimensional cases,
one may expect that the asymptotic behavior of the low-lying spectrum
of the generator of ${\mathcal E}_{\lambda}$ is related to the set of
the geodesics in this case.
However, it is not even easy to find examples of Riemannian manifolds
on which loop spaces the spectral gaps exist.
In fact, Eberle~\cite{eberle1} gave an example of a Riemannian manifold,
diffeomorphic to
a sphere, whose loop space has no spectral gap.
At the moment, there are no examples of loop spaces over
simply connected compact Riemannian
manifolds for which the spectral gap exists.
If $M$ is a Riemannian manifold with a pole $y_0$,
the situation is simpler.
In this case, the function $E$ defined on the
$H^1$ subset of $P_{x_0,y_0}(M)$
satisfies the above mentioned
assumptions (i) and (ii).
The author proved the existence of the spectral
gap in that case under additional strong assumptions on
the Riemannian metric in \cite{aida-precise}.
Unfortunately, the assumption is not valid for
hyperbolic spaces.
The existence of the spectral
gap on loop spaces over hyperbolic spaces
was proved by Chen-Li-Wu~\cite{clw1} for the first time
(see \cite{clw11} also).
They used results in \cite{aida-coh, cgg}.
We give an alternative proof of their result
and prove that $\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=\sigma_1$,
where $\sigma_1$ is the spectrum bottom of the Hessian of $E$ at the
unique geodesic for a certain class of Riemannian manifolds.
Now let us recall a rough idea how to prove the asymptotic behavior of
$e_2^{\lambda}$ under the assumptions (i), (ii) when
$X$ is a finite dimensional space.
By the unitary transformation $M_{\lambda} : F(\in L^2(d\nu^{\lambda}))\mapsto
F\left(Z_{\lambda}^{-1}e^{-\lambda E}\right)^{1/2}(\in L^2(dx))$, the problem
is changed to determine the limit of the gap of spectrum
of a Schr\"odinger operator.
In this context, $\lambda\to\infty$ corresponds to
the semi-classical limit of a physical system.
In a small neighborhood of $c_0$, the Schr\"odinger operator
can be approximated by a harmonic oscillator and we obtain
the main term of the divergence of $e_2^{\lambda}$.
As for outside the neighborhood,
the potential function is very large and it has nothing to do with
low energy
part of the operator.
In the present infinite dimensional problems,
we cannot use the unitary transformation
since there does not exist Riemannian volume measure and
the function $E$ cannot be defined on the whole space $X$.
Moreover, there are difficulties in
each part of the proof: (a) the local estimate in a neighborhood
$U(c_0)$ of the minimizer,
and (b) the estimate outside $U(c_0)$.
In the problem (a), one may think that
the problem can be reduced to
a Gaussian measure case by a certain
``local diffeomorphism''.
A natural candidate of the local diffeomorphism is
an It\^o map.
Certainly, the mapping is measure preserving
but the derivative of the mapping does not behave well
because of the irregularity of the Brownian paths
~\cite{driver1, cruzeiro-malliavin, elworthy-li1}.
In problem (b), it is not clear how to use ``the potential function is
big''
outside $U(c_0)$.
To solve these problems, we use COH formula and
a logarithmic Sobolev inequality on $X$.
Clearly, it is more interesting to
consider the cases where there are two or more local minimum points of
$E$.
We refer the reader to \cite{hks, hf} and references therein
for finite dimensional cases.
Also we note that Eberle~\cite{eberle3}
studied such a problem on certain approximate spaces
of loop spaces.
The paper is organized as follows.
We already explained a rough idea of a proof of the
asymptotic behavior
of $e^{\lambda}_2$.
In Section 2, we give a different proof
based on a log-Sobolev inequality.
Our proof for loop spaces is a modification of the proof.
Also we explain the difficulty of the proof
in the case of loop spaces.
In Section 3,
we prepare necessary definitions and
lemmas and explain our main theorems for
$P_{x_0,y_0}(M)$.
In this case, the minimizer $c_0$ is the minimal geodesic
$c_{x_0,y_0}$ between $x_0$ and $y_0$.
As we explained, we need local analysis in a neighborhood of
$c_{x_0,y_0}$ of
the generators of Dirichlet forms.
Thus we consider an OU operator
with Dirichlet boundary condition on
a small neighborhood ${\cal D}$ of the minimal geodesic in a loop space
over a Riemannian manifold.
We define the
generalized second lowest eigenvalue
$e^{\lambda}_{Dir,2,{\mathcal D}}$ of the Dirichlet
Laplacian and determine the
asymptotic behavior of
$e^{\lambda}_{Dir,2,{\mathcal D}}$
in our first main theorem (Theorem~\ref{main theorem 1}).
In the second main theorem (Theorem~\ref{main theorem 2}), we consider
a rotationally symmetric
Riemannian manifold $M$ with a pole $y_0$
and a loop space $P_{x_0,y_0}(M)$, where
$x_0$ is an arbitrary point of $M$.
Under certain assumptions on the Riemannian metric,
we prove the existence of the spectral gap and
determine the asymptotic behavior of
$e^{\lambda}_2$.
The class of Riemannian manifolds includes the hyperbolic spaces.
Actually, the same result as in the second main theorem
holds true under the validity of
a certain log-Sobolev inequality and a tail estimate of
a certain random variable describing the size of
$\gamma$.
The log-Sobolev inequality can be proved by a
COH-formula on $P_{x_0,y_0}(M)$.
The diffusion coefficient of the Dirichlet form in the
log-Sobolev inequality is unbounded and it is still an
open problem whether a log-Sobolev inequality with
a bounded coefficient
holds on a loop space over a hyperbolic space.
In this paper, the COH formula plays a crucial role.
Let us recall what COH formula is.
Let $F$ be an $L^2$ random variable on $P_{x_0}(M)$.
By the It\^o theorem,
$F-E^{\nu^{\lambda}}[F]$ can be represented as a
stochastic integral with respect to
the Brownian motion $b$ which is obtained as an
anti-stochastic development of $\gamma$ to
${\mathbb R}^n$ (\cite{hsu}).
The COH formula gives an explicit form of the integrand
as a conditional expectation
of the $H$-derivative $DF$.
As we noted, Fang proved the COH formula on
$P_{x_0}(M)$ when $M$ is a compact Riemannian manifold.
But it is not difficult to prove the same formula
for more general Riemannian manifold
(see Lemma~\ref{fang COH formula}).
In the case of $P_{x_0,y_0}(M)$, it is necessary
to consider a Brownian motion $w$ under the
pinned measure which is obtained by adding a singular drift
to $b$.
The singular drift
is defined by a logarithmic derivative
of $p(t,y_0,z)$.
For this, see Lemma~\ref{coh formula} and \cite{aida-coh, aida-coh2}.
In both cases of $P_{x_0}(M)$ and $P_{x_0,y_0}(M)$,
the integrand in the COH formula is the conditional expectation of
the quantity $A(\gamma)_{\lambda}(DF')$, where $A(\gamma)_{\lambda}$ is a
certain bounded linear operator depending on
the path $\gamma$ and $\lambda$.
$A(\gamma)_{\lambda}$ for $P_{x_0}(M)$
is defined by the Ricci curvature and
the operator norm is uniformly bounded for large
$\lambda$.
On the other hand,
in the case of $P_{x_0,y_0}(M)$,
the definition of
$A(\gamma)_{\lambda}$ contains the Hessian of the heat kernel,
$\nabla_z^2\log p(t/\lambda,y_0,z)$ $(0<t\le 1)$
because the stochastic differential equation of $\gamma$
contains the singular drift term of the logarithmic derivative of
the heat kernel.
To control
this term,
we need results on the short time behavior of
$\nabla_z^2\log p(t,x,z)$ as $t\to 0$, which was studied for the first
time by Malliavin and Stroock~\cite{ms}
(see (\ref{loghessian 0}) and Lemma~\ref{gong-ma}).
In view of this, it is easier to study
the spectral gap for $P_{x_0}(M)$ than that for $P_{x_0,y_0}(M)$.
In the final part of this section, we prove
$\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=1$ for $P_{x_0}(M)$.
In order to show the precise asymptotics of
$e^{\lambda}_{Dir,2,{\mathcal D}}$ and $e^{\lambda}_2$, we need to
identify $A(c_{x_0,y_0})_{\infty}=\lim_{\lambda\to\infty}A(c_{x_0,y_0})_{\lambda}$.
This is necessary for local analysis near $c_{x_0,y_0}$.
In Section 4, first we formally show that
$A(c_{x_0,y_0})_{\infty}$
is an operator which is defined by
the Hessian of the square of the distance function
$k(z)=\frac{d(z,y_0)^2}{2}$.
After that we prove a key relation between
the Hessian of the energy function $E$ at $c_{x_0,y_0}$ and
$A(c_{x_0,y_0})_{\infty}$.
In that proof, Jacobi fields along the geodesic
play an important role.
In Section 5, we prove Theorem~\ref{main theorem 1}.
The proof of ${\rm LHS}\le {\rm RHS}$ in $(\ref{main theorem 1 identity})$
relies on an explicit representation $(\ref{representation of e2})$ of
$e^{\lambda}_{Dir,2,{\mathcal D}}$ by the unique eigenfunction (ground state)
$\Psi_{\lambda}$
associated with the first eigenvalue of the Dirichlet Laplacian.
By using this representation and a trial function,
we prove the upper bound.
The trial function is closely related to
``eigenfunctions'' associated with the bottom of the spectrum
of the Hessian of the energy function $E$ at $c_{x_0,y_0}$.
As already mentioned, we need to study
$A(c_{x_0,y_0})_{\infty}$.
In addition,
we need to show that $A(\gamma)_{\lambda}$ can be approximated by
$A(c_{x_0,y_0})_{\infty}$ when $\gamma$ is close to $c_{x_0,y_0}$ and
$\lambda$ is large.
This is correct but not trivial because
$A(\gamma)_{\lambda}$ is defined by solutions of It\^o's
stochastic differential equations driven by $b$
and the solution mappings are not continuous
in the usual topologies, such as the uniform convergence topology.
Actually the solution mappings are continuous
in the topology of rough paths.
Thus, we need to
apply rough path analysis to our problem.
Note that the law of $b$ under the pinned measure
is singular with respect to the Brownian motion measure.
However, the probability distribution of $b$ does not charge
the slim sets in the sense of Malliavin.
Hence, we need to consider Brownian rough paths
for all Brownian paths except a slim set (\cite{aida-loop group}).
After preparation of necessary estimates from rough paths
(see Lemma~\ref{lemma from rough path}), we prove Theorem~\ref{main
theorem 1}.
In Section 6, we prove
the existence of the spectral gap in
a certain general setting as in \cite{clw1}.
This third main theorem (Theorem~\ref{main theorem 3})
implies the first half of the statement in
Theorem~\ref{main theorem 2}.
In Section 7, we complete the proof of
Theorem~\ref{main theorem 2}.
\section{A proof in ${\mathbb R}^N$ and some remarks}
In this section, we show a proof of the asymptotics
$\lim_{\lambda\to\infty}\frac{e^{\lambda}_2}{\lambda}=\sigma_1$
on ${\mathbb R}^N$
under the validity of a log-Sobolev inequality.
Our proof for $P_{x_0,y_0}(M)$ is
a suitable modification of this proof.
In this section, $D$ stands for the usual Fr\'echet derivative on ${\mathbb R}^N$.
Let $E$ be a non-negative $C^{\infty}$ function on ${\mathbb R}^N$
and suppose the following (1), (2), (3), (4).
\begin{enumerate}
\item[(1)] $E(0)=0$ and $0$ is the unique minimum point and
$D^2E(0)>0$.
Further $\liminf_{|x|\to\infty}E(x)>0$.
\item[(2)] Let $\lambda>0$.
Suppose that $e^{-\lambda E(x)}$ is an integrable function and
define a probability measure,
\begin{align}
\nu^{\lambda}(dx)=Z_{\lambda}^{-1}e^{-\lambda E(x)}dx,
\end{align}
where $Z_{\lambda}=\int_{{\mathbb R}^N}e^{-\lambda E(x)}dx$.
\item[(3)]
Let ${\mathcal E}^{\lambda}(F,F)=\int_{{\mathbb R}^N}|DF(x)|^2d\nu^{\lambda}(x)$,
where $F\in C^{\infty}_0({\mathbb R}^N)$.
Also let ${\mathcal E}^{\lambda}$ denote the Dirichlet form which
is the closure of the closable form.
It holds that $|x|, 1\in {\rm D}({\mathcal E}^{\lambda})$
and ${\mathcal E}^{\lambda}(1,1)=0$ for all $\lambda>0$.
The notation $|\cdot|$ denotes the usual Euclidean norm.
\item[(4)]
There exists a constant $C>0$ such that the following
log-Sobolev inequality holds:
\begin{align}
\int_{{\mathbb R}^N}
F(x)^2\log \left(F(x)^2/\|F\|_{L^2(\nu^{\lambda})}^2\right)
d\nu^{\lambda}(x)
&\le
\frac{C}{\lambda}{\mathcal E}^{\lambda}(F,F),\quad
F\in {\rm D}({\mathcal E}^{\lambda}).
\label{LSI RN}
\end{align}
\end{enumerate}
Clearly the spectral bottom $e_1^{\lambda}$ of the Dirichlet form
${\mathcal E}^{\lambda}$ is $0$.
Under the above assumptions, we prove that
\begin{thm}\label{RN}
Let $e_2^{\lambda}$ be the spectral gap of ${\mathcal E}^{\lambda}$.
Then
\begin{align}
\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=\sigma_1,
\end{align}
where $\sigma_1$ denotes the smallest eigenvalue of
the matrix $D^2E(0)$.
\end{thm}
The log-Sobolev inequality (\ref{LSI RN})
implies the bound
$e_2^{\lambda}\ge 2\lambda/C$ for all $\lambda$.
So it holds that $C\sigma_1\ge 2$.
Note that the above assumptions are very strong and
we cannot say the result is ``nice''.
\begin{proof}
We prove the lower bound estimate
$\liminf_{\lambda\to\infty}\frac{e^{\lambda}_2}{\lambda}\ge \sigma_1$.
By the assumptions (1) and (2), for any $r>0$ there exist positive constants $K_{r}$ and
$M_{r}$ such that
\begin{align}
\nu^{\lambda}\left(|x|\ge r\right)\le
K_re^{-\lambda M_r}\qquad \mbox{for all $\lambda\ge 1$}\label{tail estimate0}
\end{align}
and
\begin{align}
\lim_{\lambda\to\infty} \left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}
=\det\left(D^2E(0)\right)^{-1/2}.\label{laplace}
\end{align}
The estimate (\ref{laplace}) can be proved by Laplace's method.
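As an illustration, (\ref{laplace}) is easy to check numerically in dimension $N=1$. The following sketch (the choice $E(x)=\cosh x-1$ is ours; it has a unique nondegenerate minimum at $0$ with $D^2E(0)=1$) evaluates $\left(\frac{\lambda}{2\pi}\right)^{1/2}Z_{\lambda}$ by a Riemann sum:

```python
import numpy as np

# Laplace's method check in dimension N = 1 for E(x) = cosh(x) - 1,
# which has a unique nondegenerate minimum at 0 with E''(0) = 1, so the
# limit in (laplace) is det(D^2 E(0))^{-1/2} = 1.
lam = 200.0
x = np.linspace(-6.0, 6.0, 400001)
E = np.cosh(x) - 1.0
Z = np.sum(np.exp(-lam * E)) * (x[1] - x[0])   # Riemann sum for Z_lambda
val = np.sqrt(lam / (2.0 * np.pi)) * Z
print(val)   # close to 1; the first Laplace correction is of order 1/lam
```

For $\lambda=200$ the computed value is $1+O(\lambda^{-1})$, consistent with (\ref{laplace}).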
From now on, we always assume $\lambda\ge 1$.
The log-Sobolev inequality (\ref{LSI RN}) implies that
for any bounded measurable function $V$, it holds that
\begin{align}
{\mathcal E}^{\lambda}(F,F)+\int_{{\mathbb R}^N}V(x)F(x)^2d\nu^{\lambda}(x)
&\ge
-\frac{\lambda}{C}\log\left(\int_{{\mathbb R}^N}e^{-\frac{C}{\lambda}V}d\nu^{\lambda}\right)
\|F\|_{L^2(\nu^{\lambda})}^2,\label{GNS0}
\end{align}
where the constant $C$ is the same number as in
(\ref{LSI RN}).
We refer the reader to \cite{gross} for this estimate.
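For completeness, we recall the argument: by the variational formula for relative entropy, for any bounded measurable $\phi$ and $g\ge 0$ with $\int_{{\mathbb R}^N}g\,d\nu^{\lambda}=1$,
\begin{align*}
\int_{{\mathbb R}^N}\phi g\,d\nu^{\lambda}
\le \log\left(\int_{{\mathbb R}^N}e^{\phi}d\nu^{\lambda}\right)
+\int_{{\mathbb R}^N}g\log g\,d\nu^{\lambda}.
\end{align*}
Taking $\phi=-\frac{C}{\lambda}V$ and $g=F^2/\|F\|_{L^2(\nu^{\lambda})}^2$, multiplying by $\frac{\lambda}{C}\|F\|_{L^2(\nu^{\lambda})}^2$ and applying (\ref{LSI RN}) to the entropy term yields (\ref{GNS0}).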
Let $\chi_0$ be a smooth function
with $\chi_0(u)=1$ for $|u|\le 1$
and $\chi_0(u)=0$ for $|u|\ge 2$.
Let $\kappa>0$ be a small number and set
$\chi_{0,\kappa}(x)=\chi_0(\kappa^{-1}|x|)$
and
$\chi_{1,\kappa}(x)=\sqrt{1-\chi_{0,\kappa}^2(x)}$.
Let $F\in {\rm D}({\mathcal E}^{\lambda})$ and assume
$\|F\|_{L^2(\nu^{\lambda})}=1$ and
$\int_{{\mathbb R}^N}F(x)d\nu^{\lambda}(x)=0$.
By an elementary calculation,
\begin{align}
{\mathcal E}^{\lambda}(F,F)&={\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})
+{\mathcal E}^{\lambda}(F\chi_{1,\kappa},F\chi_{1,\kappa})\nonumber\\
&\quad
-\int_{{\mathbb R}^N}\left(|D\chi_{0,\kappa}|^2+|D\chi_{1,\kappa}|^2\right)F(x)^2
d\nu^{\lambda}(x).\label{finite dim 1}
\end{align}
This identity is called the IMS localization formula
(\cite{simon}).
We have
$|D\chi_{0,\kappa}|^2+|D\chi_{1,\kappa}|^2\le C'\kappa^{-2}$.
By applying (\ref{GNS0}),
\begin{align}
{\mathcal E}^{\lambda}(F\chi_{1,\kappa},F\chi_{1,\kappa})&=
{\mathcal E}^{\lambda}(F\chi_{1,\kappa},F\chi_{1,\kappa})-\int_{{\mathbb R}^N}
\delta\lambda^2(F\chi_{1,\kappa})^21_{|x|\ge \kappa}d\nu^{\lambda}\nonumber\\
&\quad+
\int_{{\mathbb R}^N}
\delta\lambda^2(F\chi_{1,\kappa})^21_{|x|\ge \kappa}d\nu^{\lambda}\nonumber\\
&\ge -\frac{\lambda}{C}\log\left(\int_{{\mathbb R}^N}
e^{\delta C\lambda 1_{|x|\ge \kappa}}d\nu^{\lambda}
\right)\|F\chi_{1,\kappa}\|_{L^2(\nu^{\lambda})}^2
+\delta\lambda^2\|F\chi_{1,\kappa}\|_{L^2(\nu^{\lambda})}^2\nonumber\\
&\ge \left\{-\frac{\lambda}{C}
\log\left(1+K_{\kappa}e^{\delta C\lambda-M_{\kappa}\lambda}\right)
+\delta\lambda^2
\right\}\|F\chi_{1,\kappa}\|_{L^2(\nu^{\lambda})}^2\nonumber\\
&\ge \left\{-\frac{\lambda}{C}K_{\kappa}e^{(\delta C-M_{\kappa})\lambda}
+\delta\lambda^2\right\}\|F\chi_{1,\kappa}\|_{L^2(\nu^{\lambda})}^2,
\end{align}
where we have used
(\ref{tail estimate0}).
Thus, by choosing $\delta$ so that $\delta C<M_{\kappa}$, there exists
$\delta'>0$
such that for large $\lambda$,
\begin{align}
{\mathcal E}^{\lambda}(F\chi_{1,\kappa},F\chi_{1,\kappa})
&\ge \delta'\lambda^2\|F\chi_{1,\kappa}\|_{L^2(\nu^{\lambda})}^2.
\label{finite dim 2}
\end{align}
We estimate ${\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})$.
Note that the support of $F\chi_{0,\kappa}$ is
included in $\{x~|~|x|\le 2\kappa\}$.
Let $V=\{x~|~|x|< 3\kappa\}$.
For small $\kappa$,
by the Morse lemma, there exist an open neighborhood $U$ of $0$
and a
$C^{\infty}$-diffeomorphism
$\Phi$ : $y(\in U)\mapsto x(\in V)$ such that $\Phi(0)=0$ and
$E\left(\Phi(y)\right)=\frac{1}{2}|y|^2$ for all $y\in U$.
We write
$m^{\lambda}(dy)=\left(\frac{\lambda}{2\pi}\right)^{N/2}e^{-\lambda|y|^2/2}dy$.
By using this coordinate, we have
\begin{align}
{\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})&=
\int_V|D(F\chi_{0,\kappa})(x)|^2e^{-\lambda E(x)}Z_{\lambda}^{-1}dx\nonumber\\
&=\int_U|D(F\chi_{0,\kappa})\left(\Phi(y)\right)|^2
e^{-\frac{\lambda}{2}|y|^2}Z_{\lambda}^{-1}|\det(D\Phi(y))|dy\nonumber\\
&=\int_U
|\left\{(D\Phi(y))^{\ast}\right\}^{-1}
D\left\{\left(F\chi_{0,\kappa}\right)(\Phi(y))\right\}|^2
e^{-\frac{\lambda}{2}|y|^2}Z_{\lambda}^{-1}|\det(D\Phi(y))|dy.
\end{align}
We may assume that the mappings $y\mapsto \left\{(D\Phi(y))^{\ast}\right\}^{-1}$
and $y\mapsto |\det(D\Phi(y))|$
are Lipschitz continuous.
Let
$\tilde{\sigma}_1$
be the smallest eigenvalue of
$(D\Phi(0))^{-1}\{(D\Phi(0))^{\ast}\}^{-1}$.
Then
there exists a positive function $\varepsilon(\kappa)$ satisfying
$\lim_{\kappa\to 0}\varepsilon(\kappa)=0$ such that
\begin{align}
{\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})
&\ge
(1-\varepsilon(\kappa))\tilde{\sigma}_1
|\det
D\Phi(0)|Z_{\lambda}^{-1}\left(\frac{\lambda}{2\pi}\right)^{-N/2}
\int_U|D\left\{\left(F\chi_{0,\kappa}\right)(\Phi(y))\right\}|^2
dm^{\lambda}(y)\nonumber\\
&\ge
(1-\varepsilon(\kappa))
\tilde{\sigma}_1
|\det
D\Phi(0)|Z_{\lambda}^{-1}\left(\frac{\lambda}{2\pi}\right)^{-N/2}\nonumber\\
&\quad \times
\lambda\left\{
\int_{{\mathbb R}^N}(F\chi_{0,\kappa})^2(\Phi(y))dm^{\lambda}(y)
-\left(\int_{{\mathbb R}^N}(F\chi_{0,\kappa})(\Phi(y))dm^{\lambda}(y)\right)^2
\right\},
\end{align}
where we have used the fact that the spectral gap
of the generator of the Dirichlet form
$\int_{{\mathbb R}^N}|DF(y)|^2dm^{\lambda}(y)$ is $\lambda$.
We have
\begin{align}
\lefteqn{|\det
D\Phi(0)|Z_{\lambda}^{-1}\left(\frac{\lambda}{2\pi}\right)^{-N/2}
\int_{{\mathbb R}^N}
(F\chi_{0,\kappa})^2(\Phi(y))dm^{\lambda}(y)}\nonumber\\
&=
|\det
D\Phi(0)|Z_{\lambda}^{-1}\left(\frac{\lambda}{2\pi}\right)^{-N/2}
\int_U(F\chi_{0,\kappa})^2(\Phi(y))dm^{\lambda}(y)\nonumber\\
&=|\det
D\Phi(0)|Z_{\lambda}^{-1}\left(\frac{\lambda}{2\pi}\right)^{-N/2}
\int_V(F\chi_{0,\kappa})^2(x)
e^{-\lambda E(x)}\left(\frac{\lambda}{2\pi}\right)^{N/2}
\left|\det(D(\Phi^{-1})(x))\right|dx\nonumber\\
&\ge(1-\varepsilon(\kappa))
Z_{\lambda}^{-1}
\int_V(F\chi_{0,\kappa})^2(x)
e^{-\lambda E(x)}dx\nonumber\\
&=(1-\varepsilon(\kappa))
Z_{\lambda}^{-1}
\int_{{\mathbb R}^N}(F\chi_{0,\kappa})^2(x)
e^{-\lambda E(x)}dx
\end{align}
and
\begin{align}
&\int_{{\mathbb R}^N}
\left(F\chi_{0,\kappa}\right)(\Phi(y))dm^{\lambda}(y)\nonumber\\
&\quad=
\int_U(F\chi_{0,\kappa})(\Phi(y))dm^{\lambda}(y)\nonumber\\
&\quad=
\int_V(F\chi_{0,\kappa})(x)|\det(D(\Phi^{-1})(x))|
\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}d\nu^{\lambda}(x)\nonumber\\
&\quad=
\int_V(F\chi_{0,\kappa})(x)
\left(|\det(D(\Phi^{-1})(x))|-|\det(D(\Phi^{-1})(0))|\right)
\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}d\nu^{\lambda}(x)\nonumber\\
&\qquad\quad+|\det(D(\Phi^{-1})(0))|
\int_V(F\chi_{0,\kappa})(x)
\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}d\nu^{\lambda}(x)\nonumber\\
&\quad=:I_1+I_2.
\end{align}
Here
\begin{align}
|I_1|&\le
\varepsilon(\kappa)\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}
\|F\chi_{0,\kappa}\|_{L^2(\nu^{\lambda})}
\end{align}
and by the Schwarz inequality,
\begin{align}
&|I_2|\nonumber\\
&\le |\det(D(\Phi^{-1})(0))|
\left\{
\left|\int_{{\mathbb R}^N}F(x)d\nu^{\lambda}(x)\right|+
\left|\int_{{\mathbb R}^N}
F(x)\left(\chi_{0,\kappa}(x)-1\right)d\nu^{\lambda}(x)\right|
\right\}\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}
\nonumber\\
&\le |\det(D(\Phi^{-1})(0))|\,
\nu^{\lambda}\left(|x|\ge
\kappa\right)^{1/2}\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}\nonumber\\
&\le |\det(D(\Phi^{-1})(0))|\,
\sqrt{K_{\kappa}}e^{-\lambda M_{\kappa}/2}\left(\frac{\lambda}{2\pi}\right)^{N/2}Z_{\lambda}.
\end{align}
By the definition of $\Phi$, we have
$D^2E(0)=\{(D\Phi(0))^{\ast}\}^{-1}(D\Phi(0))^{-1}$.
Since the set of eigenvalues of
$(D\Phi(0))^{-1}\{(D\Phi(0))^{\ast}\}^{-1}$
and $\{(D\Phi(0))^{\ast}\}^{-1}(D\Phi(0))^{-1}$
are the same, we obtain $\tilde{\sigma}_1=\sigma_1$.
Thus, we get
\begin{align}
\lefteqn{{\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})}\nonumber\\
&\ge
\lambda(1-\varepsilon(\kappa))\sigma_1\|F\chi_{0,\kappa}\|_{L^2(\nu^{\lambda})}^2\nonumber\\
&\quad-\lambda Z_{\lambda}^2\left(\frac{\lambda}{2\pi}\right)^{N}
\left\{
\varepsilon(\kappa)
\|F\chi_{0,\kappa}\|_{L^2(\nu^{\lambda})}+
|\det(D(\Phi^{-1})(0))|\,
\sqrt{K_{\kappa}}e^{-\lambda M_{\kappa}/2}
\right\}^2.
\label{finite dim 3}
\end{align}
By (\ref{finite dim 1}), (\ref{finite dim 2}), (\ref{finite dim 3})
and $\chi_{0,\kappa}^2(x)+\chi_{1,\kappa}^2(x)=1$ for all
$x$, we complete the proof of the lower bound.
The upper bound $\limsup_{\lambda\to\infty}\frac{e^{\lambda}_2}{\lambda}\le
\sigma_1$ can be proved in a standard way.
Let $v$ be a unit eigenvector such that $D^2E(0)v=\sigma_1 v$.
For this $v$, let
$F^{\lambda}(x)=\sqrt{\lambda \sigma_1} (x,v)$.
Then we have
$\lim_{\lambda\to\infty}\frac{{\mathcal E}^{\lambda}(F^{\lambda},F^{\lambda})}{\lambda}=\sigma_1$,
$\lim_{\lambda\to\infty}\int_{{\mathbb R}^N}F^{\lambda}(x)d\nu^{\lambda}(x)=0$
and $\lim_{\lambda\to\infty}\|F^{\lambda}\|_{L^2(\nu^{\lambda})}=1$
which imply the upper bound.
\end{proof}
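Theorem~\ref{RN} can also be illustrated numerically in dimension $N=1$. The sketch below (the example $E(x)=x^2+x^4$, for which $\sigma_1=E''(0)=2$, is ours) uses the unitary transformation $M_{\lambda}$ from the introduction: the generator becomes the Schr\"odinger operator $H=-\frac{d^2}{dx^2}+\frac{\lambda^2E'(x)^2}{4}-\frac{\lambda E''(x)}{2}$ on $L^2(dx)$, whose lowest eigenvalue is $0$ and whose spectral gap equals $e_2^{\lambda}$; we discretize $H$ by finite differences:

```python
import numpy as np

def gap(lam, box=1.0, n=1500):
    """Two lowest eigenvalues of H = -d^2/dx^2 + lam^2 E'(x)^2/4 - lam E''(x)/2
    for E(x) = x^2 + x^4, discretized by central finite differences with
    Dirichlet boundary conditions on [-box, box]."""
    x = np.linspace(-box, box, n)
    h = x[1] - x[0]
    dE = 2.0 * x + 4.0 * x**3          # E'(x)
    d2E = 2.0 + 12.0 * x**2            # E''(x)
    V = lam**2 * dE**2 / 4.0 - lam * d2E / 2.0
    H = (np.diag(2.0 / h**2 + V)
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:2]

lam = 200.0
e1, e2 = gap(lam)
print(e1, e2 / lam)   # e1 is close to 0 and e2/lam is close to sigma_1 = 2
```

For $\lambda=200$ one finds $e_2^{\lambda}/\lambda\approx 2+O(\lambda^{-1})$, matching $\sigma_1=2$.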
\begin{rem}\label{first remark}
{\rm
(1)~In the estimate of
${\mathcal E}^{\lambda}(F\chi_{0,\kappa},F\chi_{0,\kappa})$,
we reduce the problem to Gaussian case
with the help of the Morse lemma.
The It\^o map is a measure preserving map between
$P_{x_0}(M)$ with the Brownian motion measure and
the Wiener space.
However, the derivative of the
It\^o map is not a bounded linear operator
between two tangent spaces
(\cite{driver1, cruzeiro-malliavin, elworthy-li1}).
In the study of the asymptotic behavior of the lowest eigenvalue
of a Schr\"odinger operator on $P_{x_0}(M)$ in \cite{aida-semiclassical},
the author reduced the local analysis
to the analysis in Wiener spaces by using the It\^o map
and a ground state transformation.
At the moment, it is not clear that similar consideration
can be applied to the local analysis in the present problem.
In this paper, instead, we use the COH formula
in Lemma~\ref{coh formula}.
\noindent
(2)
~Let us consider a Dirichlet form
\begin{align}
{\mathcal E}^{A,\lambda}(F,F)&=\int_{{\mathbb R}^N}|A(x)DF(x)|^2d\nu^{\lambda}(x),
\end{align}
where $A(x)$ is an $N\times N$ regular matrix-valued continuous
mapping
on ${\mathbb R}^N$ satisfying that
there exists a positive number $C>1$ such that
$C^{-1}|\xi|^2\le (A(x)\xi,\xi)\le C|\xi|^2$
for all $x,\xi$.
Suppose ${\mathcal E}^{A,\lambda}$ satisfies the above assumption
(3) and (4).
Then, for the asymptotic behavior of the spectral gap of
${\mathcal E}^{A,\lambda}$, the same result as in Theorem~\ref{RN} holds
replacing $\sigma_1$ by the lowest eigenvalue of
the Hessian of $E$ with respect to the Riemannian metric
defined by $g_{A}(x)(\xi,\xi)=|A(x)^{-1}\xi|^2$.
In that proof, we use the continuity of the map
$x\mapsto A(x)$.
In the case of $P_{x_0,y_0}(M)$,
a local Poincar\'e inequality (\ref{poincare1})
and a log-Sobolev inequality (\ref{LSI loop}) hold.
However the mapping
$\gamma\mapsto A(\gamma)_{\lambda}$
is not a continuous mapping in the uniform convergence topology
but only a continuous
mapping in the topology of rough paths.
In this sense, we need the result in rough paths.
Moreover, in that case, the operator norm of
$A(\gamma)_{\lambda}$ is not uniformly bounded in $\gamma$.
Hence the argument is not as simple as in the above case.
Note that $A(\gamma)_{\lambda}$ depends on $\lambda$.
Hence we need to estimate $A(\gamma)_{\lambda}$ for large
$\lambda$.
In this calculation, we use the short time behavior of
the Hessian of the logarithm of the heat kernel.
}
\end{rem}
\section{Preliminary and Statement of results}\label{statement}
Let $(M,g)$ be an $n$-dimensional complete
Riemannian manifold.
Let $d(x,y)$ denote the Riemannian distance between
$x$ and $y$.
Let $p(t,x,y)$
be the heat kernel of the diffusion semigroup
$e^{t\Delta/2}$ defined by the Laplace-Beltrami operator $\Delta$.
We refer the reader to \cite{hsu, ikeda-watanabe}
for stochastic analysis on manifolds.
The following assumption is natural for analysis on
Riemannian manifolds.
\begin{assumption}\label{assumption A}
\noindent
$(1)$
~There exist positive constants $C, C'$
such that for any $0<t\le 1$, $x,y\in M$,
\begin{align}
p(t,x,y)&\le Ct^{-n/2}e^{-C'd(x,y)^2/t}.
\end{align}
\noindent
$(2)$ The Ricci curvature of $M$ is bounded,
{\it i.e.}, $\|{\rm Ric}\|_{\infty}<\infty$.
\end{assumption}
The condition (2) implies that $\int_Mp(t,x,y)dy=1$ holds for all
$t>0$ and $x\in M$,
where $dy$ denotes the Riemannian volume.
In the second main theorem (Theorem~\ref{main theorem 2}),
we consider rotationally symmetric
Riemannian metrics.
We prove that the above assumption holds true in such a case
by using the following observation (see Lemma~\ref{gong-ma}).
Assumption~\ref{assumption A} (1) holds true
if the Ricci curvature is bounded from below and
the volumes of small balls have a uniform lower bound (\cite{li-yau}).
That is,
there exist $C>0$ and $l_0>0$ such that
${\rm vol}(B_l(x))\ge C l^{n}$ for all $0<l<l_0$ and any
$x\in M$.
Here ${\rm vol}(B_l(x))$ denotes the volume
of the open metric ball $B_l(x)$ centered at $x$ with radius $l$.
In order to define (pinned) Brownian motion measure,
we assume $M$ satisfies Assumption~\ref{assumption A}.
Let $x_0\in M$.
The probability measure $\nu^{\lambda}_{x_0}$
on $P_{x_0}(M)$ satisfying the following is called
the Brownian motion measure starting at $x_0$:
\noindent
For any Borel measurable subsets $A_k\subset M$~$(1\le k\le m)$
and
$0=t_0<t_1<\cdots<t_{m}\le 1$,
\begin{align}
&\nu^{\lambda}_{x_0}
\left(\{\gamma~|~
\gamma(t_1)\in A_1,\ldots,\gamma(t_m)\in A_m\}\right)
\nonumber\\
&\quad =\int_{M^m}
\prod_{k=1}^{m}p\left((t_{k}-t_{k-1})/\lambda,x_{k-1},x_k\right)
1_{A_k}(x_k)dx_1\cdots
dx_{m}.
\end{align}
The process $\gamma(t)$ under $\nu^{\lambda}_{x_0}$ is a semimartingale.
When $M={\mathbb R}^n$, $\gamma(t)$ is the ordinary Brownian
motion whose covariance matrix is
equal to
$tI/\lambda$.
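As a quick sanity check in the flat one-dimensional case, the following sketch (the parameters are our illustrative choice) samples discretized paths with independent Gaussian increments of variance $\Delta t/\lambda$ and verifies that ${\rm Var}(\gamma(t))=t/\lambda$:

```python
import numpy as np

# Monte Carlo check in the Euclidean case: under nu^lam the path gamma
# has independent Gaussian increments of variance dt / lam, hence
# Var(gamma(t)) = t / lam.
rng = np.random.default_rng(0)
lam, t, n_steps, n_paths = 4.0, 1.0, 100, 200000
dW = rng.normal(0.0, np.sqrt((t / n_steps) / lam), size=(n_paths, n_steps))
gamma_t = dW.sum(axis=1)
print(gamma_t.var())   # -> approximately t / lam = 0.25
```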
Let $\pi : O(M)\to M$ be the orthonormal frame bundle
with the Levi-Civita connection.
We fix a frame $u_0=\{\varepsilon_i\}_{i=1}^n\in \pi^{-1}(x_0)$.
By the mapping $u_0 :{\mathbb R}^n\to T_{x_0}M$,
we identify ${\mathbb R}^n$ with $T_{x_0}M$.
Let $\tau(\gamma)_t : T_{x_0}M\to T_{\gamma(t)}M$
denote the stochastic parallel translation along
$\gamma$.
For
a smooth cylindrical function
$F(\gamma)=f(\gamma(t_1),\ldots,\gamma(t_m))
\in {\cal FC}^{\infty}_b(P_{x_0}(M))$
$(0<t_1<\cdots<t_m\le 1)$,
the $H$-derivative $DF(\gamma)$
is defined by
\begin{align}
DF(\gamma)_t=
\sum_{i=1}^mu_0^{-1}\tau(\gamma)_{t_i}^{-1}(\nabla_i f)(\gamma(t_1),\ldots,\gamma(t_m))
t\wedge t_i,
\end{align}
where
$\nabla_i f$ denotes the derivative of $f$ with respect
to the $i$-th variable.
Note that $DF(\gamma)\in \H:=H^1([0,1]\to {\mathbb R}^n~|~h(0)=0)$.
Under Assumption~\ref{assumption A},
the symmetric form
\begin{align}
{\mathcal E}^{\lambda}(F,F) =
\int_{P_{x_0}(M)}|DF(\gamma)|_{\H}^2d\nu^{\lambda}_{x_0}(\gamma),
\qquad F\in {\cal FC}^{\infty}_b(P_{x_0}(M))
\end{align}
is closable.
We refer the reader to \cite{driver0, hsu, hsu1} for the closability.
The Dirichlet form of the smallest closed extension is denoted by the same
notation and the
generator $-L_{\lambda}$ is
a natural generalization
of OU operators in Gaussian cases.
We now consider the pinned case.
It is an elementary fact that the regular conditional probability
(pinned Brownian motion measure)
$\nu^{\lambda}_{x_0,y}(\cdot)=\nu^{\lambda}_{x_0}(\cdot~|~\gamma(1)=y)$
exists on $P_{x_0,y}(M)$ for
$p(1,x_0,y)dy$-almost all
$y$.
However, it is necessary for us to define
$\nu^{\lambda}_{x_0,y}$ for all
$y\in M$.
Actually, under Assumption~\ref{assumption A} (1) and (2), one can prove
that
the regular conditional probability
$\nu^{\lambda}_{x_0,y}$
on $P_{x_0,y}(M)$
exists for all $y\in M$.
This can be checked by using the volume comparison theorem and
the Kolmogorov criterion
(see \cite{aida-coh, hsu, driver01}).
Moreover, the pinned Brownian motion measure is equivalent to
the Brownian motion measure up to any time $t<1$ with respect to
the natural $\sigma$-field generated by the paths.
This implies that the pinned Brownian motion is a semimartingale
for $t<1$.
Hence the stochastic parallel translation is well defined
and one can define the $H$-derivative
of a smooth $F(\gamma)=f(\gamma(t_1),\ldots,\gamma(t_m))\in
{\cal FC}^{\infty}_b(P_{x_0,y_0}(M))$ ($t_m<1$) by
$
D_0F(\gamma)_t=
P_0\left(DF(\gamma)\right)_t,
$
where $P_0$ is the orthogonal projection from
$\H$ onto
the subspace ${\rm H}_0:=\{h\in {\rm H}~|~h(1)=0\}$.
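Recall the inner product
$\langle h,k\rangle_{\H}=\int_0^1\langle h'(t),k'(t)\rangle\,dt$ on $\H$.
With respect to this inner product, $P_0$ has the explicit form
\begin{align*}
(P_0h)(t)=h(t)-t\,h(1),\qquad h\in \H,
\end{align*}
since $h-P_0h$ is the linear path $t\mapsto t\,h(1)$ and
$\int_0^1\langle h(1),k'(t)\rangle\,dt=\langle h(1),k(1)\rangle=0$
for every $k\in {\rm H}_0$.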
Using $D_0$ on ${\cal FC}^{\infty}_b(P_{x_0,y_0}(M))$, we can define
a symmetric bilinear form ${\mathcal E}^{\lambda}$
similarly to the non-pinned case.
However, we need an additional assumption on the Riemannian manifold
$M$ to prove the closability since $M$ may be non-compact.
Hence we consider the following assumption.
\begin{assumption}\label{assumption B}
$({\mathcal E}^{\lambda},{\cal FC}^{\infty}_b(P_{x_0,y_0}(M)))$ is closable.
\end{assumption}
We explain the reason why we need an additional assumption.
Let
$b(t)=\int_0^tu_0^{-1}\tau(\gamma)_s^{-1}\circ d\gamma(s)$
$(0\le t\le 1)$, where $\circ d$ means Stratonovich integral.
The process $b(t)$ is the stochastic anti-development of $\gamma(t)$.
Under the law $\nu^{\lambda}_{x_0}$, $b(t)$ is the ordinary Brownian motion
with variance $1/\lambda$.
We will discuss $b(t)$ later again in the explanation of the COH formula.
Note that the law of $\{b(t)\}_{0\le t\le 1}$ is singular with respect
to the Brownian motion measure
under $\nu^{\lambda}_{x_0,y_0}$.
This is related to the singularity of the pinned Brownian motion itself.
The closability of ${\mathcal E}^{\lambda}$ can be proved by using the integration
by parts (IBP) formula for $D$ and $D_0$.
The formula contains
stochastic integrals with respect to
$b(t)$, and the integrability of these stochastic integrals
as $t$ tends to $1$ is the main issue in establishing
the formula for the pinned measure.
See \cite{driver01, aida-coh, enchev-stroock1, enchev-stroock2, hsu, gordina}
for this problem.
If either
(i) $M$ is compact, or
(ii) $M$ is diffeomorphic to ${\mathbb R}^n$ and the metric is flat outside a
certain bounded set,
then, by applying Malliavin's quasi-sure analysis,
we can prove the integrability of the stochastic integrals
and obtain the IBP formula and the closability.
Also, under the condition,
\begin{align}
& \mbox{There exists a positive constant $C$ such that for any $0<t\le 1$ and
$z\in M$},\nonumber\\
&
|\nabla_z\log p(t,y_0,z)|\le C\frac{d(y_0,z)}{t}+\frac{C}{\sqrt{t}},
\end{align}
the IBP formula and the closability hold.
This inequality holds for any compact Riemannian manifold
(\cite{hsu}).
For rotationally symmetric Riemannian manifolds,
we will give a sufficient condition for this.
See Assumption~\ref{assumption C} and
Lemma~\ref{gong-ma} (2).
We now define
a Dirichlet Laplacian
on a certain domain ${\cal D}$
in $P_{x_0,y_0}(M)$.
\begin{dfi}
Let $l$ be a positive number with $l>d(x_0,y_0)$
and let $B_l(y_0)$ denote the open ball centered at $y_0$
with radius $l$.
Define
\begin{align}
{\cal D}_l&=\{\gamma\in P_{x_0,y_0}(M)~|~
\gamma(t)\in B_l(y_0)~~\mbox{for all $0\le t\le1$}
\}.
\end{align}
For $l=+\infty$, we set
${\cal D}_{\infty}=P_{x_0,y_0}(M)$.
\end{dfi}
We may omit the subscript $l$ for simplicity.
In order to define the $H^1$-Sobolev spaces,
we assume Assumption~\ref{assumption B} for the moment.
Let $H^{1,2}(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})$ denote the $H^1$-Sobolev space
which is the closure of ${\cal FC}^{\infty}_b(P_{x_0,y_0}(M))$ with respect to
the norm
$\|F\|_{H^1}^2=\|F\|^2_{L^2(\nu^{\lambda}_{x_0,y_0})}+{\mathcal E}^{\lambda}(F,F)$.
Let
\begin{align}
H^{1,2}_0({\cal D},\nu^{\lambda}_{x_0,y_0})=
\left\{F\in H^{1,2}(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})~|~
\mbox{$F=0$~$\nu^{\lambda}_{x_0,y_0}$-a.s. outside ${\cal D}$}
\right\}
\end{align}
which is a closed linear subspace of
$H^{1,2}(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})$.
The non-positive generator $L_{\lambda}$ corresponding to the
densely defined closed form
$$
{\mathcal E}^{\lambda}(F,F),~~F\in H^{1,2}_0({\mathcal D},{\nu}^{\lambda}_{x_0,y_0})
$$
in the Hilbert space
$L^2({\mathcal D},{\nu}^{\lambda}_{x_0,y_0})$
is the Dirichlet Laplacian on
${\mathcal D}$.
Let
\begin{align}
e^{\lambda}_{Dir,1,{\mathcal D}}&=
\inf_{F(\ne 0)\in H^{1,2}_0\left({\mathcal D}\right)}
\frac{\int_{{\mathcal D}}|D_0F|^2d{\nu}^{\lambda}_{x_0,y_0}}
{\|F\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2}.
\end{align}
This is equal to $\inf\sigma(-L_{\lambda})$, where
$\sigma(-L_{\lambda})$ denotes the spectral set of $-L_{\lambda}$.
We next introduce
\begin{align}
&e^{\lambda}_{Dir,2,{\mathcal D}}\nonumber\\
&=
\sup_{G(\ne 0)\in L^2(\nu^{\lambda}_{x_0,y_0})}
\inf\Biggl\{
\frac{\int_{{\mathcal D}}|D_0F|^2d{\nu}^{\lambda}_{x_0,y_0}}
{\|F\|_{L^2({\nu}^{\lambda}_{x_0,y_0})}^2}~\Bigg |~
F\in H^{1,2}_0\left({\mathcal D}\right),
\quad (F,G)_{L^2({\nu}^{\lambda}_{x_0,y_0})}=0\Biggr\}.
\end{align}
This is the generalized second lowest eigenvalue of
$-L_{\lambda}$.
When $l=+\infty$,
$e^{\lambda}_{Dir,1,{\mathcal D}}=0$ and
$e^{\lambda}_{Dir,2,{\mathcal D}}$ is equal to the spectral gap
of $-L_{\lambda}$ on the whole space
$P_{x_0,y_0}(M)$.
We use the notations
$e^{\lambda}_{1}$ and $e^{\lambda}_2$
instead of $e^{\lambda}_{Dir,1,{\mathcal D}}$ and $e^{\lambda}_{Dir,2,{\mathcal D}}$
respectively in this case.
To state our first main theorem,
let us define the energy of an $H^1$ path $\gamma$ belonging
to
$P_{x_0,y_0}(M)$,
\begin{align}
E(\gamma)=\frac{1}{2}\int_0^1|\gamma'(t)|^2_{T_{\gamma(t)}M}\,dt.
\label{energy function}
\end{align}
We use the same notation $D_0$ for the derivative
of smooth functions on the Hilbert manifold consisting of the
$H^1$ paths in $P_{x_0,y_0}(M)$.
Note that $D^2_0E(c_{x_0,y_0})$ is a symmetric bounded linear operator
on ${\rm H}_0$.
See Lemma~\ref{S and T1} for the explicit form.
The following is our first main theorem.
\begin{thm}\label{main theorem 1}
Assume $M$ satisfies Assumptions~{\rm \ref{assumption A}},
{\rm \ref{assumption B}}.
Let $0<l<\infty$.
Assume that $l$ satisfies the
following.
\noindent
$(1)$ $l$ is smaller than the injectivity radius
at $y_0$.
In particular,
the closure of
$B_l(y_0)$ does not intersect ${\rm Cut}(y_0)$,
where ${\rm Cut}(y_0)$ denotes the cut-locus
of $y_0$.
\noindent
$(2)$
The Hessian of $k(z)=\frac{1}{2}d(z,y_0)^2$
satisfies that
$
\inf_{z\in B_{l}(y_0)}\nabla^2k(z)>1/2.
$
Then we have
\begin{align}
\lim_{\lambda\to\infty}
\frac{e^{\lambda}_{Dir,2,{\mathcal D}}}{\lambda}=\sigma_1,\label{main theorem 1 identity}
\end{align}
where
$\sigma_1=\inf\sigma((D_0^2 E)(c_{x_0,y_0}))$.
\end{thm}
Since $\nabla_z^2k(z)|_{z=y_0}=I_{T_{y_0}M}$,
the above conditions (1) and (2) hold true for small
$l$.
Also, if $M$ is a negatively curved manifold,
condition (2) holds for all $l$.
We need condition (2)
to prove
a COH formula by applying Lemma 3.1 in \cite{aida-coh},
although this may be just a technical condition.
Under the above condition, clearly
the minimal geodesic $c_{x_0,y_0}=c_{x_0,y_0}(t)$~
$(0\le t\le 1)$~$(c_{x_0,y_0}(0)=x_0, c_{x_0,y_0}(1)=y_0)$
belongs to ${\mathcal D}$.
Further,
$\lim_{\lambda\to\infty}\nu^{\lambda}_{x_0,y_0}({\cal D})=1$
holds true by a large deviation result
(see Section 5).
For a certain class of Riemannian manifolds $M$,
the same result holds for $P_{x_0,y_0}(M)$.
This is our second main theorem.
Let $M$ be a Riemannian manifold with a pole $y_0$.
That is, the exponential map
$\exp_{y_0} : T_{y_0}M\to M$ is a diffeomorphism.
We pick an
orthonormal frame $\tilde{u}_0$ of
$T_{y_0}M$.
Let $S^{n-1}$ be the unit sphere centered at the origin
in ${\mathbb R}^n$.
We identify ${\mathbb R}^n\setminus \{0\}$ with $(0,+\infty)\times S^{n-1}$ by
$(r,\Theta)(\in (0,+\infty)\times S^{n-1})
\mapsto r\Theta\in ({\mathbb R}^n\setminus \{0\})$.
Let us define $\Psi : (0,+\infty)\times S^{n-1}\to M$
by
$x=\Psi(r,\Theta)=\exp_{y_0}\left(\tilde{u}_0(r\Theta)\right)$.
Then $r=d(y_0,x)$ holds.
The Riemannian metric $g$ is called rotationally symmetric at $y_0$
if the pull back of $g$ by $\Psi$ can be expressed as
\begin{align}
\Psi^{\ast}g=
dr^2+f(r)^2d\Theta^2, \label{rs metric}
\end{align}
where $d\Theta^2$ denotes the standard Riemannian metric on the sphere.
Note that if $g$ is a smooth Riemannian metric on
$M$, $f(r)$ is a $C^{\infty}$ function on $[0,\infty)$
satisfying $f(0)=0$ and $f'(0)=1$.
We consider the following assumption on $f$.
\begin{assumption}\label{assumption C}
Let $\varphi(r)=\log \frac{f(r)}{r}$.
The function $\varphi$ satisfies the following.
\noindent
$(1)$~$\varphi$ is a $C^{\infty}$ function on $[0,\infty)$.
The $k$-th derivative $\varphi^{(k)}(r)$ is a
bounded function on $[0,\infty)$ for all $1\le k\le 4$.
\noindent
$(2)$~
There exists a $C^{\infty}$ function $\phi$ on $[0,\infty)$
such that
$\varphi(r)=\phi(r^2)$.
\noindent
$(3)$~$\inf_{r>0}r\varphi'(r)>-\frac{1}{2}$.
\end{assumption}
By Lemma A.2 in \cite{chow},
it is easy to deduce that
for any smooth function $f$ on $[0,\infty)$
satisfying $f(0)=0, f'(0)=1$ and
Assumption~\ref{assumption C} (2),
the Riemannian metric $dr^2+f(r)^2d\Theta^2$ on
${\mathbb R}^n\setminus \{0\}$ can be extended
to a smooth Riemannian metric on ${\mathbb R}^n$.
The above condition on $\varphi$ appeared in \cite{aida-precise},
where we assumed that all derivatives
$\varphi^{(k)}$ are bounded.
However, by checking the calculations there, we see that
it is enough to assume the boundedness for $1\le k\le 4$.
We give examples of $\varphi$ which satisfy the above assumption.
\begin{exm}{\rm
For the hyperbolic space with the sectional curvature
$K=-a$,
$\varphi_a(r)=\log\frac{\sinh \sqrt{a}r}{\sqrt{a}r}$.
This
satisfies Assumption~$\ref{assumption C}$.
Actually $\varphi_a'(r)\ge 0$ for all $r$.
Clearly, small perturbations of $\varphi_a(r)$
satisfy the assumption.
Also if $\varphi_i$~$(1\le i\le n)$ satisfy
the assumption, then
so does the function $\sum_{i=1}^np_i\varphi_i$
for any positive numbers
$\{p_i\}$ with $\sum_{i=1}^np_i=1$.}
\end{exm}
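The inequality $\varphi_a'(r)\ge 0$ in the example can be checked directly:
\begin{align*}
\varphi_a'(r)=\sqrt{a}\coth(\sqrt{a}r)-\frac{1}{r}\ge 0,
\end{align*}
since $\coth x\ge 1/x$ for $x>0$, which is equivalent to
$x\cosh x\ge \sinh x$.
In particular, $r\varphi_a'(r)\ge 0>-\frac{1}{2}$, so
Assumption~\ref{assumption C} (3) holds.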
The function $f$ satisfies the Jacobi equation
$f''(r)+K(r)f(r)=0$, where $K$ is the radial curvature
function.
It is natural to put the assumptions on $K$ instead of $f$.
In fact, it is proved in \cite{sasamori}
that all estimates necessary for the validity of
our second main theorem (Theorem~\ref{main theorem 2})
hold true under some assumptions on $K$.
Further related work is in progress.
The quantity $r\varphi'(r)$ is related to
the second derivative of the squared distance function
as in the following lemma
(\cite{greene-wu, aida-precise}).
\begin{lem}\label{hessian of d2}
For $r=d(y_0,z)$,
we have
\begin{align}
\nabla^2_z\left(\frac{r^2}{2}\right)
&=I_{T_zM}+r\varphi'(r)P_z^{\perp},
\end{align}
where
$v_z\in T_zM$ is the element such that
$\exp_z(v_z)=y_0$ and
$P_z^{\perp}$ denotes the orthogonal projection
onto the orthogonal complement of the $1$-dimensional
subspace spanned by $v_z\in T_zM$.
\end{lem}
By this lemma, we see that Assumption~\ref{assumption C} (3)
implies the condition (2) in Theorem~\ref{main theorem 1}
with $l=+\infty$.
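This implication can be seen directly from Lemma~\ref{hessian of d2}:
the eigenvalues of $\nabla^2_z\left(\frac{r^2}{2}\right)$ are $1$
in the direction of $v_z$ and $1+r\varphi'(r)$ on its orthogonal
complement, so
\begin{align*}
\inf_{z\in M}\nabla^2_z\left(\frac{r^2}{2}\right)
\ge \min\left\{1,\,1+\inf_{r>0}r\varphi'(r)\right\}>\frac{1}{2}
\end{align*}
by Assumption~\ref{assumption C} (3).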
\begin{lem}\label{sufficient condition for assumption A}
Suppose $f$ satisfies Assumption~{\rm \ref{assumption C}} $(1), (2)$
and $\inf_{r>1}f(r)>0$.
Then
Assumptions~{\rm \ref{assumption A}}, {\rm \ref{assumption
B}}
hold.
\end{lem}
\begin{proof}
By Lemma 1.21 in \cite{chow} (see also Proposition 9.106 in
\cite{besse}),
it is easy to see the boundedness of the Ricci curvature
under Assumption~\ref{assumption C} (1) and (2).
To prove the Gaussian upper bound in Assumption~\ref{assumption A} (1),
it suffices to prove that there exists $C>0$ such that
$\inf_{x\in M}{\rm vol}(B_l(x))\ge Cl^n$ for small $l>0$
because the Ricci curvature is bounded.
Also, under the assumption $\|\varphi'\|_{\infty}<\infty$,
we see that for any $\varepsilon>0$ and $R>0$ there exist positive
constants $C(\varepsilon,R)$ and $c(\varepsilon,R)$ such that
\begin{align}
c(\varepsilon,R)\le \frac{f(r')}{f(r)}\le
C(\varepsilon,R)
\qquad \mbox{for any $r, r'$ with
$r,r'\ge R, |r-r'|\le \varepsilon$}
\end{align}
and
$\lim_{\varepsilon\to 0}c(\varepsilon,R)=\lim_{\varepsilon\to 0}C(\varepsilon,R)=1$.
By using this estimate and $\inf_{r\ge 1}f(r)>0$,
it is not difficult to show the uniform lower boundedness of the
volume.
Assumption~\ref{assumption B} follows from
the estimate of $\nabla_z\log p(t,y_0,z)$
in (\ref{loggrad}).
\end{proof}
Actually, (1.58) in \cite{chow} implies that
the sectional curvature is bounded
under Assumption~\ref{assumption C}.
Hence, we may use a comparison theorem for heat kernels
to prove the Gaussian upper bound.
We refer the reader to \cite{hsu} for the comparison theorem.
Also we note that $\inf_{r>0} r\varphi'(r)>-1$ implies
$\inf_{r>1}f(r)>0$.
The following is
our second main theorem.
We prove the positivity of $e^{\lambda}_2$ in
a more general setting in
Theorem~\ref{main theorem 3}.
\begin{thm}\label{main theorem 2}
Let $M$ be a rotationally symmetric Riemannian manifold
with a pole $y_0$.
Suppose $f$ in $(\ref{rs metric})$ satisfies
Assumption~{\rm \ref{assumption C}}.
Then $e_2^{\lambda}>0$ holds for all $\lambda>0$ and
\begin{align}
\lim_{\lambda\to\infty}\frac{e_2^{\lambda}}{\lambda}=\sigma_1,\label{asymptotics of ela2}
\end{align}
where $\sigma_1$ is the same number as in
Theorem~$\ref{main theorem 1}$.
\end{thm}
We make remarks on Theorem~\ref{main theorem 1}
and Theorem~\ref{main theorem 2}.
\begin{rem}
{\rm
\noindent
$(1)$
It is not clear whether the same result as in Theorem~\ref{main theorem 2}
holds or not for $P_{x_0,y}(M)$ ($y\ne y_0$)
under Assumption~\ref{assumption C}.
It is also interesting to study general non-rotationally symmetric cases.
\noindent
$(2)$~By checking the proof, the same results as
in Theorem~\ref{main theorem 2} hold if the following
are satisfied:
\begin{itemize}
\item[(i)] $d(x_0,y_0)$ is smaller than $l$ which satisfies
Theorem~\ref{main theorem 1} (1) and (2),
\item[(ii)] the log-Sobolev inequality (\ref{LSI loop}) holds,
\item[(iii)] the tail estimate (\ref{tail estimate}) holds.
\end{itemize}
\noindent
$(3)$
If the sectional curvature along
the geodesic $c_{x_0,y_0}$ is positive, then
$\inf\sigma(D_0^2 E(c_{x_0,y_0}))<1$
and the bottom of the spectrum is an eigenvalue
of $D_0^2E(c_{x_0,y_0})$ and
does not belong to the essential spectrum.
On the other hand, if the curvature is strictly negative, then
$\inf\sigma(D_0^2 E(c_{x_0,y_0}))=1$,
and $1$ is not an eigenvalue but belongs to the essential spectrum.
This suggests that the second lowest eigenvalue, or more generally,
some low-lying spectrum of the OU operator (with Dirichlet boundary
condition)
on ${\mathcal D}$ or $P_{x_0,y_0}(M)$
over a positively curved manifold belongs to the discrete spectrum,
while the second lowest eigenvalue is embedded in the
essential spectrum in the case of negatively curved manifolds.
In fact, in the proof of the upper bound in the main theorems,
we use ``approximate second eigenfunctions,'' which
are constructed from functions approximately attaining the value
$\inf\sigma(D_0^2 E(c_{x_0,y_0}))$.
If some isometry group acts on $M$ with fixed points
$x_0$ and $y_0$, we may expect the discrete spectrum to have some
multiplicity.
We will show results of this kind in the case where $M$ is a compact
Lie group in a forthcoming paper.
}
\end{rem}
As mentioned in the Introduction,
the spectral gap $e_2^{\lambda}$ for $P_{x_0}(M)$
is defined similarly
and $e_2^{\lambda}>0$ for all $\lambda$.
This is due to Fang, who established a
COH formula and proved the existence of the
spectral gap
in the case where $M$ is compact and $\lambda=1$.
However, it is obvious that
the same result holds true
on a complete Riemannian manifold with bounded
Ricci curvature for all $\lambda>0$.
See also \cite{chl, gong-ma, aida-semiclassical0, aida-gradient}.
A variant of the
COH formula on the loop space is also important in our case.
To explain the COH formula,
we need some preparations.
Let ${\mathfrak F}_t=\sigma\left(\gamma(s), 0\le s\le t\right)
\vee {\cal N}$,
where ${\cal N}$ is the set of all null sets
with respect to $\nu^{\lambda}_{x_0}$.
Then $b(t)=\int_0^tu_0^{-1}\tau(\gamma)_s^{-1}\circ d\gamma(s)$
is an ${\mathfrak F}_t$-Brownian motion with the covariance
$E^{\nu^{\lambda}_{x_0}}[(b(t),u)(b(s),v)]=(u,v)\frac{t\wedge s}{\lambda}$
~$(u,v\in {\mathbb R}^n)$ on ${\mathbb R}^n$ under $\nu^{\lambda}_{x_0}$.
We simply say $b(t)$ is a Brownian motion with variance $1/\lambda$
in this paper.
We recall the notion of the trivialization.
Let $T\in \Gamma(TM\otimes T^{\ast}M)$ be a
$(1,1)$-tensor on $M$,
that is, $T$ is a linear transformation on
each tangent space.
We write
\begin{align}
\overline{T(\gamma)}_t=u_0^{-1}\tau(\gamma)_t^{-1}T(\gamma(t))\tau(\gamma)_tu_0
\in L({\mathbb R}^n,{\mathbb R}^n).
\end{align}
The definition for general $T\in \Gamma\left((\otimes^pTM)\otimes
(\otimes^q T^{\ast}M)\right)$ is similar.
We now state the COH formula on
$P_{x_0}(M)$.
Below, we use the notation
${\rm L^2}:=L^2([0,1]\to {\mathbb R}^n, dt)$.
\begin{lem}\label{fang COH formula}
Assume $\|{\rm Ric}\|_{\infty}<\infty$.
Let $F\in H^1(P_{x_0}(M),\nu^{\lambda}_{x_0})$.
Then
\begin{align}
F(\gamma)-E^{\nu^{\lambda}_{x_0}}[F]&=
\int_0^1
\left(E\left[\left\{\left((I+R_{0,\lambda}(\gamma))^{-1}\right)^{\ast}
(DF)(\gamma)'\right\}_t~|~{\mathfrak F}_t\right],db(t)\right),
\end{align}
where $\left(R_{0,\lambda}(\gamma)\varphi\right)(t)=\frac{1}{2\lambda}
\overline{{\rm Ric}(\gamma)}_t\int_0^t\varphi(s)ds$,
$\ast$ indicates the adjoint operator
on ${\rm L^2}$ and
$DF(\gamma)_t'=\frac{d}{dt}DF(\gamma)_t$.
Also $I$ denotes the identity operator on ${\rm L^2}$.
\end{lem}
The second derivative of $\log p(t,x,y)$ is related
to the COH formula on $P_{x_0,y_0}(M)$.
Under Assumption~\ref{assumption C},
we have a good estimate on the first and second derivatives
of $\log p(t,y_0,z)$ with respect to $z$.
Similar estimates of
the heat kernel hold on a
compact set outside the cut-locus
when $M$ is a compact Riemannian manifold.
This is studied by Malliavin and Stroock~\cite{ms}
and Gong-Ma~\cite{gong-ma}.
Their results can clearly be extended to
the non-compact case of ${\mathbb R}^n$ with a nice Riemannian metric which
coincides with the Euclidean metric
outside a bounded set.
The estimates are as follows.
\begin{assumption}\label{assumption D}
For any compact subset $F\subset {\rm Cut}(y_0)^c$ and
$0<t\le 1$ there exists $C_F>0$ such that
\begin{align}
\sup_{z\in F}
\left|t\nabla^2_z\log p(t,y_0,z)+
\nabla^2_z\left(\frac{1}{2}
d(y_0,z)^2\right)\right|\le C_Ft^{1/2}.
\label{loghessian 0}
\end{align}
\end{assumption}
The following (1) and (2) can be found in \cite{aida-precise}
and \cite{gong-ma}
respectively.
\begin{lem}\label{gong-ma}
$(1)$
Let $M$ be a compact Riemannian manifold
or ${\mathbb R}^n$ with a Riemannian metric which
coincides with the Euclidean metric
outside a bounded set.
Then Assumption~{\rm \ref{assumption D}}
is satisfied.
\noindent
$(2)$~Suppose Assumption~{\rm \ref{assumption C}} $(1)$ and $(2)$.
Then Assumption~{\rm \ref{assumption D}} is satisfied.
Actually the following stronger
inequalities are valid:
Let $T>0$.
There exist positive constants $C_1, C_2$
which may depend on $T$ such that for all $0<t\le T$,
\begin{align}
&\sup_{z\in M}\left|t\nabla_z\log p(t,y_0,z)-v_z\right|
\le C_1t,\label{loggrad}\\
&\sup_{z\in M}\left|
t\nabla_z^2\log p(t,y_0,z)+
I_{T_zM}+d(y_0,z)\varphi'(d(y_0,z))P_z^{\perp}
\right|\le C_2t,\label{loghessian}
\end{align}
where $v_z$ and $P_z^{\perp}$
are defined in Lemma~$\ref{hessian of d2}$.
\end{lem}
The important point in the estimate (\ref{loghessian})
is that the norm of the second derivative of
$t\log p(t,y_0,z)$ is bounded from above by a linear function
of $d(y_0,z)$.
Probably, the estimates (\ref{loggrad}) and (\ref{loghessian})
hold under weaker assumptions on $\varphi$.
It is natural and interesting to study
general non-rotationally symmetric cases.
Our Dirichlet Laplacian is defined
on the set of paths which are confined to
a small ball.
Therefore, even if we vary the Riemannian metric outside the
ball, the spectral properties of the operator
should not change.
We explain this reasoning more precisely.
Let $(M,g)$ and $(M',g')$ be Riemannian manifolds satisfying
Assumption~\ref{assumption B}.
Let $y_0\in M, y_0'\in M'$ and
$B_l(y_0)\subset M, B_l(y_0')\subset M'$ be open metric
balls. Let $x_0\in B_l(y_0)$.
Let $l_{\ast}>l$.
Assume that $l_{\ast}$ is smaller than the injectivity radius at
$y_0$.
We assume that there exists a
Riemannian isometry $\Phi : B_{l_{\ast}}(y_0)\to B_{l_{\ast}}(y_0')$.
Then $\Phi(B_l(y_0))=B_l(y_0')$.
Let $x_0'=\Phi(x_0)$.
Let $\nu^{\lambda}_{M,x_0,y_0}$ and $\nu^{\lambda}_{M',x_0',y_0'}$
denote the pinned measures on each manifold.
We write
\begin{align*}
{\cal D}&=\{\gamma\in P_{x_0,y_0}(M)~|~\gamma(t)\in B_l(y_0)
~\mbox{for all $0\le t\le 1$}\},\\
{\cal D}'&=\{\gamma\in P_{x_0',y_0'}(M')~|~\gamma(t)\in B_l(y_0')
~\mbox{for all $0\le t\le 1$}\}.
\end{align*}
Let $A\subset {\cal D}$ be a Borel measurable subset.
Define $\Phi : {\cal D}\to {\cal D}'$ by
$\Phi(\gamma)(t)=\Phi(\gamma(t))$.
$p^M(t,x,y)$ and $p^{M'}(t,x',y')$ denote the heat kernels on
$M$ and $M'$.
Note that, in general, $p^M(t,x,y)\ne p^{M'}(t,\Phi(x),\Phi(y))$ for $x,y\in
B_{l_{\ast}}(y_0)$.
However, by the uniqueness of the solution of
stochastic differential equations,
we have
\begin{align}
\frac{\nu^{\lambda}_{M,x_0,y_0}(A)}{\nu^{\lambda}_{M,x_0,y_0}({\cal D})}
&=
\frac{\nu^{\lambda}_{M',x_0',y_0'}(\Phi(A))}{\nu^{\lambda}_{M',x_0',y_0'}({\cal D}')}.
\end{align}
By this, for any bounded Borel measurable function
$F$ on ${\cal D}'$,
\begin{align}
\int_{{\cal D}}F(\Phi(\gamma))
\frac{d\nu^{\lambda}_{M,x_0,y_0}(\gamma)}{\nu^{\lambda}_{M,x_0,y_0}({\cal
D})}
&=
\int_{{\cal D}'}F(\gamma)
\frac{d\nu^{\lambda}_{M',x_0',y_0'}(\gamma)}
{\nu^{\lambda}_{M',x_0',y_0'}({\cal D}')}.
\end{align}
Let $F\in H^{1,2}(P_{x'_0,y'_0}(M'))$.
If $F\in H^{1,2}_0({\cal D}',\nu^{\lambda}_{x_0',y_0'})$, then
$$
\tilde{F}(\gamma):=
F\left(\Phi(\gamma)\right) \chi\left(\sup_{0\le t\le 1}
d'(\Phi(\gamma)(t),y_0')\right)
\in H^{1,2}_0({\cal D},\nu^{\lambda}_{x_0,y_0}),
$$
where $\chi=\chi(t)$ is a non-negative smooth function such that
$\chi(t)=1$ for $t\le \frac{l+l_{\ast}}{2}$ and
$\chi(t)=0$ for $t\ge \frac{l+2l_{\ast}}{3}$.
Moreover
$\|D_0F\|_{L^2(\nu^{\lambda}_{M',x'_0,y'_0}/\nu^{\lambda}_{M',x_0',y_0'}({\cal D}'))}
=\|D_0\tilde{F}\|_{L^2(\nu^{\lambda}_{M,x_0,y_0}/\nu^{\lambda}_{M,x_0,y_0}({\cal
D}))}$.
To prove these results, we need the fact that
$\sup_{0\le t\le 1}d(\gamma(t),\Phi^{-1}(y_0'))\in
H^{1,2}(P_{x_0,y_0}(M))$,
which can be found in Lemma 2.2 and Remark 2.4 in
\cite{aida-coh}.
The above argument implies that
$$
e^{\lambda}_{Dir,2,{\cal D}}=e^{\lambda}_{Dir,2,{\cal D}'}.
$$
Hence, in the proof of Theorem~\ref{main theorem 1},
we may assume that
$M$ is diffeomorphic to ${\mathbb R}^n$,
that the Riemannian metric is flat outside a certain bounded
subset, and that Assumption~\ref{assumption D} is satisfied.
The key ingredient of the proof of Theorem~\ref{main theorem 1}
is a version of the COH formula in
\cite{aida-coh2} which can be extended to the above non-compact
${\mathbb R}^n$ case with a nice Riemannian metric.
Since the COH formula is strongly related to the heat kernel $p(t,x,y)$
on $M$ itself, the above observation is important.
We explain the COH formula on $P_{x_0,y_0}(M)$.
Let
$V_{y_0}^{\lambda}(t,z)=\text{\rm grad}_z\log p\left(\frac{1-t}{\lambda},y_0,z\right)$
~$(0\le t<1)$.
We write
$$
\overline{V_{y_0}^{\lambda}(t,\gamma)}_t=
u_0^{-1}\tau(\gamma)_t^{-1}V_{y_0}^{\lambda}(t,\gamma(t))
\in {\mathbb R}^n.
$$
Also
$\overline{\nabla V_{y_0}^{\la}(t,\gamma)}_t$
denotes an $n\times n$ matrix.
More explicitly,
\begin{equation}
\overline{\nabla V_{y_0}^{\la}(t,\gamma)}_t=
u_0^{-1}\tau(\gamma)_t^{-1}\nabla_z\text{\rm grad}_z \log p\left(\frac{1-t}{\lambda},y_0,z\right)
\Bigg |_{z=\gamma(t)}\tau(\gamma)_tu_0.
\end{equation}
Let
$w(t)=b(t)-\frac{1}{\lambda}\int_0^t\overline{V_{y_0}^{\lambda}(s,\gamma)}_sds$.
This process is defined for $t<1$ and it is not difficult to check that
it can be extended continuously up to $t=1$.
Let $\mathcal{N}^{x_0,y_0,t}$ be the set of all null sets of
$\nu^{\lambda}_{x_0,y_0}|_{{\mathfrak F}_t}$ and set
${\mathfrak G}_t={\mathfrak F}_t\vee \mathcal{N}^{x_0,y_0,1}$.
Then $w$ is a ${\mathfrak G}_t$-adapted Brownian
motion for $0\le t\le 1$ such that
$E^{\nu^{\lambda}_{x_0,y_0}}
[\left(w(t),u\right)\left(w(s),v\right)]
=\frac{t\wedge s}{\lambda}(u,v)$ for any
$u,v\in {\mathbb R}^n$.
Let
\begin{align}
K(\gamma)_{\lambda,t}=
-\frac{1}{2\lambda}\overline{\text{\rm Ric}(\gamma)}_t
+\frac{1}{\lambda}\overline{\nabla V_{y_0}^{\la}(t,\gamma)}_t.
\end{align}
Let $M(\gamma)_{\lambda,t}$ be the linear mapping on ${\mathbb R}^n$
satisfying the differential equation:
\begin{align}
M(\gamma)_{\lambda,t}'&=K(\gamma)_{\lambda,t}M(\gamma)_{\lambda,t}
\quad 0\le t<1,\\
M(\gamma)_{\lambda,0}&=I.
\end{align}
Using $M$ and $K$, we define for a bounded measurable function
$\varphi$ with ${\rm supp}\, \varphi\subset [0,1)$,
\begin{align}
J(\gamma)_{\lambda}\varphi(t)&=
(M(\gamma)_{\lambda,t}^{\ast})^{-1}\int_t^1
M(\gamma)_{\lambda,s}^{\ast}K(\gamma)_{\lambda,s}\varphi(s)ds.
\end{align}
The operator $\left((I+R_{0,\lambda}(\gamma))^{-1}\right)^{\ast}$
in the COH formula in Lemma~\ref{fang COH formula}
coincides with $I+J(\gamma)_{\lambda}$, where $J(\gamma)_{\lambda}$ is
obtained by setting
$K(\gamma)_{\lambda,t}=-\frac{1}{2\lambda}\overline{{\rm Ric}(\gamma)}_t$
in the above.
Also let
\begin{equation}
A(\gamma)_{\lambda}=I+J(\gamma)_{\lambda}.
\end{equation}
We are ready to state our COH formula for functions on
$P_{x_0,y_0}(M)$
and its immediate consequences.
\begin{lem}\label{coh formula}~
\noindent
$(1)$ Assume $M$ is diffeomorphic to ${\mathbb R}^n$ and
the Riemannian metric is flat outside a bounded subset.
Let $0<l<\infty$.
Suppose ${\cal D}(={\cal D}_l)$
satisfies conditions $(1), (2)$ in Theorem~$\ref{main theorem 1}$.
Let $F\in H^{1,2}_0({\mathcal D})$.
\begin{enumerate}
\item[{\rm (i)}] It holds that
$D_0F(\gamma)=0$ for $\nu^{\lambda}_{x_0,y_0}$-almost all
$\gamma\in {\mathcal D}^c$.
\item[{\rm (ii)}]
There exists $\lambda_{\ast}>0$ such that
$A(\gamma)_{\lambda}$ can be extended to a
bounded linear operator on ${\rm L^2}$ for each $\gamma$
for all $\lambda\ge\lambda_{\ast}$.
Let $a(\lambda)={\rm esssup}\left\{\|A(\gamma)_{\lambda}\|_{op}^2~|~\gamma\in
{\cal D}\right\}$.
Here $\|\cdot\|_{op}$ denotes the operator norm.
Then $\displaystyle{\sup_{\lambda\ge \lambda_{\ast}}a(\lambda)<\infty}$ holds,
and for $\lambda\ge \lambda_{\ast}$
the following COH formula holds:
\begin{align}
E^{\nu^{\lambda}_{x_0,y_0}}[F|{\mathfrak G}_t]=
E^{\nu^{\lambda}_{x_0,y_0}}[F]+
\int_0^t\left(H(s,\gamma),dw(s)\right),\quad 0\le t\le 1,
\label{COH loop 1}
\end{align}
where
\begin{align}
H(s,\gamma)=
E^{\nu^{\lambda}_{x_0,y_0}}\left[
A(\gamma)_{\lambda}(D_0F(\gamma)')(s)~|~{\mathfrak G}_s\right]
\label{COH loop 2}
\end{align}
and $D_0F(\gamma)'_t=\frac{d}{dt}(D_0F)(\gamma)_t$.
Moreover
the following inequalities hold for $\lambda\ge \lambda_{\ast}$.
\begin{align}
E^{{\nu}^{\lambda}_{x_0,y_0}}\left[
F^2\log \left(F^2/\|F\|^2_{L^2({\nu}_{x_0,y_0}^{\lambda})}\right)
\right]
&\le
\frac{2a(\lambda)^2}{\lambda}E^{{\nu}^{\lambda}_{x_0,y_0}}\left[
|D_0F|^2\right],\label{lsi1}\\
\lambda
E^{\nu^{\lambda}_{x_0,y_0}}\left[\left(F-E^{{\nu}^{\lambda}_{x_0,y_0}}[F]\right)^2
\right]
&\le E^{\nu^{\lambda}_{x_0,y_0}}\left[
|A(\gamma)_{\lambda}D_0F|^2\right].\label{poincare1}
\end{align}
\end{enumerate}
\noindent
$(2)$~Assume $M$ is a rotationally symmetric Riemannian manifold
with a pole $y_0$. Suppose Assumption~{\rm \ref{assumption C}}.
\begin{itemize}
\item[{\rm (i)}] The operator $A(\gamma)_{\lambda}$ can be extended to
a bounded linear operator on ${\rm L^2}$ for each $\gamma$ for all $\lambda>0$.
Moreover for each $\lambda_0>0$, there exists a positive constant $C_0$
which depends only on $\varphi$ and $\lambda_0$ such that
for all $\lambda\ge \lambda_0$,
\begin{align}
\|A(\gamma)_{\lambda}\|_{op}&\le
C_0\rho_{y_0}(\gamma)\qquad \mbox{for any $\gamma$},
\end{align}
where $\rho_{y_0}(\gamma)=1+\max_{0\le t\le 1}d(y_0,\gamma(t))$.
\item[{\rm (ii)}] For $F\in H^{1,2}(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})$,
the COH formula~$(\ref{COH loop 1}), (\ref{COH loop 2})$ hold.
\item[{\rm (iii)}] For each $\lambda_0>0$, there exists a positive constant $C_1$
which depends only on $\varphi$ and $\lambda_0$ such that
for any $\lambda\ge \lambda_0$ and $F\in {\cal FC}^{\infty}_b(P_{x_0,y_0}(M))$,
\begin{align}
&\int_{P_{x_0,y_0}(M)}
F^2(\gamma)\log\left(F(\gamma)^2/\|F\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2\right)
d\nu^{\lambda}_{x_0,y_0}(\gamma)\nonumber\\
&\le \int_{P_{x_0,y_0}(M)}
\frac{C_1}{\lambda}\rho_{y_0}(\gamma)^2
|D_0F(\gamma)|^2d\nu^{\lambda}_{x_0,y_0}(\gamma).\label{LSI loop}
\end{align}
\end{itemize}
\end{lem}
\begin{proof}
The proof of $(1)$ is similar to that in \cite{aida-coh2}.
$(2)$ follows from
Lemma 3.2 and Theorem 3.3 in \cite{aida-coh}
and Lemma 2.3 in \cite{aida-coh2}.
In the present case, we have
\begin{align}
K(\gamma)_{\lambda,t}=
\frac{1}{1-t}\left(-\alpha+C_1(t)\right)+C_2(t),
\end{align}
where
\begin{align*}
\alpha&=1+\inf_{r>0}r\varphi'(r)>1/2,\\
C_1(t)&=\left\{\left(\inf_{r>0}r\varphi'(r)\right)-d(y_0,\gamma(t))
\varphi'\Bigl(d(y_0,\gamma(t))\Bigr)\right\}
\overline{P^{\perp}(\gamma)}_{\gamma(t)}\\
&\qquad\quad +
\left(\inf_{r>0} r\varphi'(r)\right)
\overline{P(\gamma)}_{\gamma(t)},\\
|C_2(t)|&\le \frac{C}{\lambda}
\end{align*}
and $C$ is a positive constant.
The case $\lambda=1$ is considered in \cite{aida-coh}
and the estimate in the hyperbolic space case with general $\lambda$
can be found in Remark 2.4 in \cite{aida-coh2}.
The proofs in the general case are similar to these.
\end{proof}
Under the assumption in the lemma above,
$A(\gamma)_{\lambda}$ is a bounded linear operator on
${\rm L^2}$ for $\nu^{\lambda}_{x_0,y_0}$-almost all $\gamma$.
However, we cannot expect the usual continuity property
of the mapping $\gamma\mapsto A(\gamma)_{\lambda}$
because it is defined by using It\^o stochastic integrals.
The inequality (\ref{poincare1}) implies that
$\liminf_{\lambda\to\infty}\frac{e^{\lambda}_{Dir,2,{\mathcal D}}}{\lambda}>0$.
On the other hand, we cannot conclude $e_2^{\lambda}>0$ for
$P_{x_0,y_0}(M)$ from the
log-Sobolev inequality (\ref{LSI loop}) because
the operator norm of
$A(\gamma)_{\lambda}$ is not uniformly bounded.
As mentioned in the Introduction,
the same result as in Theorem~\ref{main theorem 2}
holds for $P_{x_0}(M)$.
We prove it
as a warm up before proving our main theorems.
For simplicity, we assume $M$ is compact.
After the proof, we explain the points where the proof differs
in the loop space case.
\begin{thm}\label{e2la on Px0}
Let $M$ be a compact Riemannian manifold.
Let $e_2^{\lambda}$ be the spectral gap
of the Dirichlet form ${\mathcal E}^{\lambda}$
on $P_{x_0}(M)$ with $\nu^{\lambda}_{x_0}$.
Then $e_2^{\lambda}>0$ for all $\lambda>0$
and
\begin{align}
\lim_{\lambda\to\infty}
\frac{e_2^{\lambda}}{\lambda}=1.\label{e2 asymptotics 2}
\end{align}
\end{thm}
\begin{proof}
We use the COH formula in
Lemma~\ref{fang COH formula}.
By using
\begin{align}
\|(I+R_{0,\lambda}(\gamma))^{-1}\|_{op}\le 1+\frac{C}{\lambda}
\quad \mbox{for any $\lambda\ge \lambda_0>0$},\label{coefficient operator 1}
\end{align}
we get
\begin{align}
E^{\nu^{\lambda}_{x_0}}
\left[(F-E^{\nu^{\lambda}_{x_0}}[F])^2\right]
&\le
\frac{1}{\lambda}\left(1+\frac{C}{\lambda}\right)^2E\left[|DF(\gamma)|^2\right].
\end{align}
Here $C$ depends on $\lambda_0$.
Since $e^{\lambda}_1=0$ and the corresponding eigenfunction is
a constant function,
we have $e_2^{\lambda}\ge \lambda(1+\frac{C}{\lambda})^{-2}$ which proves
that $\liminf_{\lambda\to\infty}\frac{e^{\lambda}_2}{\lambda}\ge 1$.
We prove the converse estimate.
To this end, we consider a candidate for an
approximate second (generalized) eigenfunction.
Let $\varphi\in {\rm L^2}$ and assume
$\|\varphi\|_{{\rm L^2}}=1$.
Let $F(\gamma)=\sqrt{\lambda}\int_0^1\left(\varphi(t),db(t)\right)$.
Then $E^{\nu^{\lambda}_{x_0}}[F]=0$ and
$E^{\nu^{\lambda}_{x_0}}[F^2]=1$.
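Indeed, since $b(t)$ is a Brownian motion with variance $1/\lambda$,
It\^o's isometry gives
\begin{align*}
E^{\nu^{\lambda}_{x_0}}[F^2]
=\lambda\,E^{\nu^{\lambda}_{x_0}}\left[\left(\int_0^1\left(\varphi(t),db(t)\right)\right)^2\right]
=\lambda\cdot\frac{1}{\lambda}\int_0^1|\varphi(t)|^2dt=1,
\end{align*}
and $E^{\nu^{\lambda}_{x_0}}[F]=0$ because the It\^o integral is a
martingale starting at $0$.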
We have $F\in {\rm D}({\mathcal E})$ and
\begin{align*}
D_{h}
\int_0^1\left(\varphi(t),db(t)\right)&=
\int_0^1\left(\varphi(t),h'(t)\right)dt
+\int_0^1
\Big\langle\varphi(t),
\int_0^t\left(\overline{R(\gamma)}_s(h(s),\circ db(s))(\circ db(t))\right)
\Big\rangle,
\end{align*}
where $\overline{R(\gamma)}_t$ is the trivialization
of the Riemannian curvature tensor
and $\langle\cdot,\cdot\rangle$ also denotes the inner
product in ${\mathbb R}^n$.
The readers are referred to
\cite{cruzeiro-malliavin, aida-irred} for this formula.
See \cite{driver0, leandre1} also.
Since
\begin{align}
&\int_0^1
\Big\langle \varphi(t),
\int_0^t\left(\overline{R(\gamma)}_s(h(s),\circ db(s))(\circ db(t))\right)
\Big\rangle\nonumber\\
&=\int_0^1\varphi^j(t)
\Big\langle
\varepsilon_j,\int_0^t\left(\overline{R(\gamma)}_s(h(s),\circ db(s))(\circ db(t))\right)
\Big\rangle\nonumber\\
&=\int_0^1
\int_0^t\Big\langle \overline{R(\gamma)}_s(h(s),\circ db(s))(\varepsilon_j),
\circ d\int_t^1\varphi^j(s)db(s)\Big\rangle\nonumber\\
&=\int_0^1
\Big\langle\overline{R(\gamma)}_t(\circ db(t),h(t))(\varepsilon_j),
\int_t^1\varphi^j(s)db(s)\Big\rangle\nonumber\\
&=\int_0^1\Big\langle\overline{R(\gamma)}_t(\varepsilon_j,\int_t^1\varphi^j(s)db(s))(\circ
db(t)), h(t)
\Big\rangle\nonumber\\
&=\int_0^1\Big\langle\overline{R(\gamma)}_t(\int_t^1\varphi(s)db^i(s),\varepsilon_i)
(\circ db(t)), h(t)\Big\rangle\nonumber\\
&=\int_0^1
\Big\langle\int_t^1\overline{R(\gamma)}_u\left(\int_u^1\varphi(s)db^i(s),
\varepsilon_i\right)(\circ db(u)), h'(t)\Big\rangle dt,
\end{align}
we obtain
\begin{align}
DF(\gamma)'_t&=\sqrt{\lambda}\varphi(t)+
\sqrt{\lambda}\int_t^1\overline{R(\gamma)}_u\left(\int_u^1\varphi(s)db^i(s),
\varepsilon_i\right)(\circ db(u))\nonumber\\
&=\sqrt{\lambda}\varphi(t)+\sqrt{\lambda}\int_t^1\overline{R(\gamma)}_u\left(\varepsilon_j,
\varepsilon_i\right)(\circ db(u))\int_0^1\varphi^j(s)db^i(s)\nonumber\\
&\quad -\sqrt{\lambda}\int_t^1\overline{R(\gamma)}_u\left(\int_0^u\varphi(s)db^i(s),
\varepsilon_i\right)(\circ db(u)).
\end{align}
By a standard calculation, we have
\begin{align}
\int_0^1 E\left[|DF(\gamma)'_t|^2\right]dt&\le
\lambda+C.
\end{align}
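For the reader's convenience, we sketch this standard calculation
(the constants below are not optimal and depend on curvature bounds
of the compact manifold $M$).
The first term of $DF(\gamma)'_t$ contributes $\lambda|\varphi(t)|^2$.
Since $b$ has variance $1/\lambda$, each stochastic integral with respect
to $db$ carries a factor of order $\lambda^{-1/2}$ in ${\rm L^2}$, so the second and
third terms of $DF(\gamma)'_t$ have ${\rm L^2}$-norms of order
$\sqrt{\lambda}\cdot\lambda^{-1}=\lambda^{-1/2}$.
Hence
\begin{align*}
E\left[|DF(\gamma)'_t|^2\right]\le
\lambda|\varphi(t)|^2+C\sqrt{\lambda}\,|\varphi(t)|\cdot\lambda^{-1/2}+C\lambda^{-1}
\le \lambda|\varphi(t)|^2+C|\varphi(t)|+C,
\end{align*}
and integrating in $t$, using $\|\varphi\|_{{\rm L^2}}=1$ and the
Cauchy--Schwarz inequality, yields the bound $\lambda+C$.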
This implies (\ref{e2 asymptotics 2}).
\end{proof}
As in the proof above,
the COH formula and the estimate
$\lim_{\lambda\to\infty}\|I+R_{0,\lambda}(\gamma)\|_{op}=1$
immediately imply the lower bound for the
limit.
In the loop space case, $A(\gamma)_{\lambda}$ is not uniformly
bounded in $\gamma$ and the existence of the spectral gap is
not obvious.
This difficulty can be resolved by using the
log-Sobolev inequality (\ref{LSI loop}).
In order to obtain the precise asymptotics of the spectral gap,
we need
a continuity theorem from rough path analysis.
For this purpose, we need to consider
the operator $A(c_{x_0,y_0})_{\infty}=\lim_{\lambda\to\infty}A(c_{x_0,y_0})_{\lambda}$.
In the next section, we study some relations between
$A(c_{x_0,y_0})_{\infty}$
and the Hessian of the energy function $E$ at $c_{x_0,y_0}$.
\section{Square root of Hessian of the energy function and
Jacobi fields}
In this section, we assume
$d(x_0,y_0)$ is smaller than the injectivity radius at
$y_0$.
We begin by determining
$\lim_{\lambda\to\infty}K(c_{x_0,y_0})_{\lambda,t}$.
By using (\ref{loghessian 0}), we have
\begin{align}
\lim_{\lambda\to\infty}K(c_{x_0,y_0})_{\lambda,t}&=
-\lim_{\lambda\to\infty}\frac{1}{2\lambda}\overline{R(c_{x_0,y_0})_t}
+\lim_{\lambda\to\infty}\frac{1}{\lambda}\overline{V^{\lambda}_{y_0}(t,c_{x_0,y_0})_t}
\nonumber\\
&=\frac{1}{1-t}\lim_{\lambda\to\infty}
\frac{1-t}{\lambda}\overline{V^{\lambda}_{y_0}(t,c_{x_0,y_0})_t}\nonumber\\
&=-\frac{1}{1-t}\overline{\nabla^2k(c_{x_0,y_0})_t}.
\end{align}
We write
\begin{align}
K(t)&=-\frac{1} {1-t}\overline{\nabla^2k(c_{x_0,y_0})_t}.
\end{align}
It is natural to conjecture that $A(c_{x_0,y_0})_{\infty}$ is equal to
the operator in ${\rm L^2}$ given by
\begin{align}
\varphi(t)\mapsto
\varphi(t)+M(t)^{\ast}\int_t^1M(s)^{\ast}K(s)\varphi(s)ds,
\label{J}
\end{align}
where
$M(t)$ is the solution to
\begin{align}
M(t)'&=K(t)M(t)\quad \qquad 0\le t<1,\label{M and K}\\
M(0)&=I.
\end{align}
In fact, this is true and we prove it later in a more general form
in Lemma~\ref{perturbation of M}.
We study the relation between the operator in (\ref{J})
and $D^2E(c_{x_0,y_0})$.
First, recall that we fix a frame $u_0\in O(M)$ at $x_0$.
Let us choose $\xi\in {\mathbb R}^n$ so that
$\exp_{x_0}(tu_0(\xi))=c_{x_0,y_0}(t)$~$(0\le t\le 1)$, where
$\exp_{x_0}$ stands for the exponential mapping at $x_0$.
Clearly it holds that $d(x_0,y_0)=|\xi|$.
Let $c_{y_0,x_0}(t)=c_{x_0,y_0}(1-t)$ denote the reverse geodesic path from
$y_0$ to $x_0$.
In order to see the explicit expression of
the Hessian of
$k(z)$~$(z\in c_{x_0,y_0})$, we recall the notion of
Jacobi fields.
Let $R$ be the curvature tensor and define $R(t)=
\overline{R(c_{x_0,y_0})}_t(\cdot,\xi)(\xi)$
which is a linear mapping on ${\mathbb R}^n$.
Also we define $R^{\leftarrow}(t)=R(1-t)$.
Let $v\in {\mathbb R}^n$ and
$W(t,v)$ be the solution to the following ODE:
\begin{equation}
W''(t,v)+R^{\leftarrow}(t)W(t,v)=0~~0\le t\le 1,
~~
W(0,v)=0,~ W'(0,v)=v.
\end{equation}
Since $v\mapsto W(t,v)$ is linear,
we let $W(t)$ denote the corresponding
$n\times n$ matrix.
Of course, $W(0)=0, W'(0)=I$.
Since ${\rm Cut}(y_0)\cap\{c_{y_0,x_0}(t)~|~0\le t\le 1\}=\emptyset$,
$W(t)$ is an invertible linear mapping for all $0<t\le 1$ and
$\tilde{W}(t,v)=W(t)W(1)^{-1}v$ is the solution to
$$
\tilde{W}''(t,v)+R^{\leftarrow}(t)\tilde{W}(t,v)=0,~~
\tilde{W}(0,v)=0,~\tilde{W}(1,v)=v
$$
and
$(\nabla^2k)(c_{y_0,x_0}(1))(u_0v,u_0v)=(\tilde{W}'(1,v),\tilde{W}(1,v))
=(W'(1)W(1)^{-1}v,v)$.
This result can be found in many standard books
on differential geometry, {\it e.g.} \cite{jost}.
Let $0<T\le 1$.
We can obtain an explicit form of the Jacobi field along
$c_{y_0,x_0}(t)$~$(0\le t\le T)$ with a given terminal value at $T$ by using
$W$.
Let $\tilde{W}_T(t,v)=W(Tt)W(T)^{-1}v$.
Then $\tilde{W}_T(t,v)$~$(0\le t\le 1)$ satisfies
the Jacobi equation
\begin{align}
\tilde{W}_T''(t,v)+R^{\leftarrow}(tT)T^2\tilde{W}_T(t,v)=0,~~
\tilde{W}_T(0,v)=0,~ \tilde{W}_T(1,v)=v.
\end{align}
Hence
$\nabla^2k(c_{y_0,x_0}(t))
\left(\tau(c_{x_0,y_0})_{1-t}u_0v,\tau(c_{x_0,y_0})_{1-t}u_0v\right)
=t\left(W'(t)W(t)^{-1}v,v\right)$.
Next we prove that
$
A(t):=tW'(t)W(t)^{-1}
$
is a symmetric matrix for $0<t\le 1$.
This can be checked by the following argument.
Note that
$W(t)=tI+\int_0^t\!\int_0^s\!\int_0^r W'''(u)\,du\,dr\,ds$.
This follows from the equation of $W$.
By this observation,
if we extend $A=A(t)$ by setting $A(0)=I$,
then $A(t)$ is continuously differentiable on
$[0,1]$ and $A'(0)=0$.
We have
\begin{align}
A'(t)&=W'(t)W(t)^{-1}+tW''(t)W(t)^{-1}
-tW'(t)W(t)^{-1}W'(t)W(t)^{-1}\nonumber\\
&=-tR^{\leftarrow}(t)-\frac{A(t)^2}{t}+\frac{A(t)}{t}.\label{eq for A}
\end{align}
Let $B(t)=A(t)-A(t)^{\ast}$, where
$A(t)^{\ast}$ denotes the transposed matrix.
Since $R^{\leftarrow}(t)$ is a symmetric matrix,
(\ref{eq for A}) implies
\begin{align}
B(t)&=\frac{1}{t}\int_0^t(I-A(s)^{\ast})B(s)ds+
\frac{1}{t}\int_0^tB(s)(I-A(s))ds,
\qquad 0<t\le 1.
\end{align}
Noting
\begin{align}
\lefteqn{
\frac{1}{t}\int_0^t(I-A(s)^{\ast})B(s)ds}\nonumber\\
&=
\frac{I-A(t)^{\ast}}{t}\int_0^tB(s)ds
+\frac{1}{t}\int_0^t\left(A(s)^{\ast}\right)'\left(\int_0^sB(r)dr\right)ds
\end{align}
and using Gronwall's inequality, we obtain
$B(t)=0$ for all $t$, which implies the desired result.
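One direct way to carry out the Gronwall step is the following sketch.
Since $A$ is $C^1$ on $[0,1]$ with $A(0)=I$, we have
$\|I-A(s)\|\le Cs$ for some constant $C$, and $\|I-A(s)^{\ast}\|=\|I-A(s)\|$.
Hence the integral equation for $B$ yields
\begin{align*}
\|B(t)\|\le \frac{2C}{t}\int_0^t s\|B(s)\|ds\le 2C\int_0^t\|B(s)\|ds,
\end{align*}
and since $t\mapsto\|B(t)\|$ is continuous, Gronwall's inequality
gives $B(t)=0$ for all $t$.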
Let $f(t)=W(1-t)$.
Then $f$ satisfies
\begin{align}
f''(t)+R(t)f(t)=0,\qquad 0\le t\le 1,\qquad f(1)=0,~~ f'(1)=-I.
\end{align}
Since $f'(t)f(t)^{-1}$ is a symmetric matrix,
we have the following key relations:
\begin{align}
& \overline{\nabla^2k(c_{x_0,y_0})}_t=
-(1-t)f'(t)f(t)^{-1}\\
& K(t)=-\frac{1}{1-t}\overline{\nabla^2k(c_{x_0,y_0})}_t=
f'(t)f(t)^{-1}.
\end{align}
Let
\begin{align}
\tilde{K}(t)=K(t)+\frac{1}{1-t}.\label{decomposition of K}
\end{align}
Since
$\tilde{K}(t)=\frac{I-A(1-t)}{1-t}$,
we see that $\tilde{K}(t)$
$(0\le t\le 1)$ is a matrix-valued continuous mapping.
Let $N(t)$ be the solution to
$$
N'(t)=\tilde{K}(t)N(t),~N(0)=I.
$$
Then $\sup_t(\|N(t)\|_{op}+\|N^{-1}(t)\|_{op})<\infty$
and $M(t)=(1-t)N(t)$,
where $M(t)$ is the solution to
(\ref{M and K}).
Also we have
$M(t)=f(t)f(0)^{-1}$.
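As a quick consistency check (an illustration only, not used in the proofs),
consider the flat case $M={\mathbb R}^n$, where $R=0$ and we assume
$\nabla^2k\equiv I$, as for half of the squared distance to $y_0$.
Then
\begin{align*}
W(t)=tI,\quad f(t)=(1-t)I,\quad K(t)=-\frac{1}{1-t}I,\quad
\tilde{K}(t)=0,\quad N(t)=I,\quad M(t)=(1-t)I,
\end{align*}
which is consistent with
$\overline{\nabla^2k(c_{x_0,y_0})}_t=-(1-t)f'(t)f(t)^{-1}=I$
and $M(t)=f(t)f(0)^{-1}$.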
We write ${\rm L^2_0}=\{\varphi\in {\rm L^2}~|~\int_0^1\varphi(t)dt=0\}$.
Then
$
\left(U\varphi\right)(t)=\int_0^t\varphi(s)ds
$
is a bijective linear isometry from
${\rm L^2_0}$ to ${\rm H}_0$.
Also $(U^{-1}h)(t)=\dot{h}(t)$.
Let us introduce an operator
\begin{align}
\left(S\varphi\right)(t)&=\varphi(t)-f'(t)f(t)^{-1}\int_0^t\varphi(s)ds,\\
{\rm D}(S)&={\rm L^2_0}.
\end{align}
Since, by Hardy's inequality,
\begin{align}
\int_0^1\left|\frac{1}{1-t}\int_t^1\varphi(s)ds\right|^2dt
\le 4\int_0^1|\varphi(s)|^2ds
\qquad \mbox{for any $\varphi\in {\rm L^2}$},
\end{align}
we see that $S$ is a bounded linear operator from
${\rm L^2_0}$ to ${\rm L^2}$.
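Indeed, here is a sketch: for $\varphi\in {\rm L^2_0}$ we may write
$\int_0^t\varphi(s)ds=-\int_t^1\varphi(s)ds$ and
$f'(t)f(t)^{-1}=-\frac{A(1-t)}{1-t}$ with $A$ continuous on $[0,1]$,
so that
\begin{align*}
\|S\varphi-\varphi\|_{{\rm L^2}}
\le \sup_{0\le s\le 1}\|A(s)\|
\left(\int_0^1\Big|\frac{1}{1-t}\int_t^1\varphi(s)ds\Big|^2dt\right)^{1/2}
\le 2\sup_{0\le s\le 1}\|A(s)\|\,\|\varphi\|_{{\rm L^2}}.
\end{align*}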
The following lemma shows that
$S$ is a square root of the Hessian of
the energy function $E$.
This relation is the key to identifying the limit
of $e^{\lambda}_{Dir,2,{\mathcal D}}$.
\begin{lem}\label{S and T1}
Let $T$ be the bounded linear operator on
${\rm L^2_0}$ such that
\begin{align}
(T\varphi)(t)=-\int_t^1R(s)\left(\int_0^s\varphi(u)du\right)ds+
\int_0^1\left(\int_t^1R(s)\Bigl(\int_0^s\varphi(u)du\Bigr)ds\right)dt.
\end{align}
Then $T$ is a symmetric operator and for any $\varphi\in {\rm L^2_0}$,
\begin{equation}
\|S\varphi\|^2=
((I+T)\varphi,\varphi),\label{S and T}
\end{equation}
where $I$ denotes the identity operator on ${\rm L^2_0}$.
Moreover,
\begin{equation}
(D_0^2E)(c_{x_0,y_0})=U\left(I+T\right)U^{-1},
\end{equation}
where $E$ is the energy function of the path~$(\ref{energy function})$.
\end{lem}
\begin{proof}
The symmetry of $T$ follows from a direct calculation.
Using
\begin{align*}
\lim_{t\to 1}\frac{1}{1-t}\left|\int_t^1\varphi(s)ds\right|^2=0,
\quad &
f''(t)=-R(t)f(t),
\quad
\mbox{$f'(t)f(t)^{-1}$ is symmetric},\\
(f(t)^{-1})'&=-f(t)^{-1}f'(t)f(t)^{-1},
\end{align*}
we have
\begin{align}
\|S\varphi\|^2&=\|\varphi\|^2-2\int_0^1\left(
f'(t)f(t)^{-1}\int_0^t\varphi(s)ds,\varphi(t)\right)dt\nonumber\\
& +\int_0^1\left|f'(t)f(t)^{-1}\int_0^t\varphi(s)ds\right|^2dt\nonumber\\
&=\|\varphi\|^2+\int_0^1\left(
\left(f'(t)f(t)^{-1}\right)'\int_0^t\varphi(s)ds,
\int_0^t\varphi(s)ds\right)dt\nonumber\\
&+\int_0^1\left|f'(t)f(t)^{-1}\int_0^t\varphi(s)ds\right|^2dt\nonumber\\
&=
\|\varphi\|^2-\int_0^1
\left(R(t)\int_0^t\varphi(s)ds,\int_0^t\varphi(s)ds\right)
dt\nonumber\\
&=\left((I+T)\varphi,\varphi\right).\label{hessian and R}
\end{align}
By the second variation formula of the energy function
along geodesics (\cite{jost}),
we have
\begin{align*}
(D_0^2E)(c_{x_0,y_0})(U\varphi,U\varphi)=\left((I+T)\varphi,\varphi\right).
\end{align*}
Thus the proof is completed.
\end{proof}
Let
\begin{equation}
(S_2\varphi)(t)=\varphi(t)+f'(t)\int_0^tf(s)^{-1}\varphi(s)ds.
\end{equation}
Then, again by Hardy's inequality,
$S_2$ is a bounded linear operator on ${\rm L^2}$.
Moreover, it is easy to see that
$\Im(S_2)\subset {\rm L^2_0}$,
$SS_2=I_{{\rm L^2}}$ and $S_2S=I_{{\rm L^2_0}}$.
Therefore, $S_2=S^{-1}$ and
$\Im(S)={\rm L^2}$.
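For example, $SS_2=I_{{\rm L^2}}$ can be checked as follows (a sketch).
Integrating by parts,
\begin{align*}
\int_0^t(S_2\varphi)(s)ds
=\int_0^t\varphi(s)ds+\int_0^tf'(s)\left(\int_0^sf(r)^{-1}\varphi(r)dr\right)ds
=f(t)\int_0^tf(s)^{-1}\varphi(s)ds,
\end{align*}
so that
$(SS_2\varphi)(t)=(S_2\varphi)(t)
-f'(t)f(t)^{-1}\cdot f(t)\int_0^tf(s)^{-1}\varphi(s)ds=\varphi(t)$.
Since $f(1)=W(0)=0$ and, by the Cauchy--Schwarz inequality and the bounds on
$N$ and $N^{-1}$,
$\left|f(t)\int_0^tf(s)^{-1}\varphi(s)ds\right|\le C(1-t)^{1/2}\|\varphi\|_{{\rm L^2}}$,
letting $t\to1$ also shows $\int_0^1(S_2\varphi)(s)ds=0$, that is,
$\Im(S_2)\subset {\rm L^2_0}$.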
Moreover we have
$S^{\ast}S=I+T$ on ${\rm L^2_0}$ by (\ref{S and T}).
Note that by identifying the dual space of a Hilbert space
with the Hilbert space itself using Riesz's theorem,
we view $S^{\ast} : ({\rm L^2})^{\ast} \to ({\rm L^2_0})^{\ast}$ as
the operator from ${\rm L^2}$ to ${\rm L^2_0}$.
We have the following explicit expressions
for $S^{-1}$, $S^{\ast}$ and $(S^{-1})^{\ast}$.
\begin{lem}\label{S explicit form}
$(1)$~$S^{-1} : {\rm L^2}\to {\rm L^2_0}$,
$S^{\ast} : {\rm L^2}\to {\rm L^2_0}$ are bijective linear maps and
we have for any $\varphi\in {\rm L^2}$,
\begin{align}
\left(S^{-1}\varphi\right)(t)&=
\varphi(t)+f'(t)\int_0^tf(s)^{-1}\varphi(s)ds\\
\left(S^{\ast}\varphi\right)(t)&=
\varphi(t)-\int_0^1\varphi(t)dt+\int_0^tf'(s)f(s)^{-1}\varphi(s)ds
\nonumber\\
&-\int_0^1\left(\int_0^tf'(s)f(s)^{-1}\varphi(s)ds\right)dt.
\end{align}
\noindent
$(2)$ $(S^{-1})^{\ast}$ is a bijective linear map
from ${\rm L^2_0}$ to ${\rm L^2}$.
If we define $(S^{-1})^{\ast}$ to be $0$ on the subspace
of constant functions, then for any
$\varphi\in {\rm L^2}$,
\begin{align}
\left((S^{-1})^{\ast}\varphi\right)(t)&=
\varphi(t)+
\left(f(t)^{\ast}\right)^{-1}\int_t^1
f(s)^{\ast}f'(s)f(s)^{-1}\varphi(s)ds.\label{Sinverseast}
\end{align}
Also $(S^{-1})^{\ast}\varphi$ can be written using $M(t)$ and $K(t)$
as
\begin{equation}
\left((S^{-1})^{\ast}\varphi\right)(t)=
\varphi(t)+
(M(t)^{\ast})^{-1}\int_t^1M(s)^{\ast}K(s)\varphi(s)ds.
\end{equation}
\end{lem}
\begin{proof}
All the calculations are similar, so
we only show how to calculate $(S^{-1})^{\ast}$.
Using $(f'(t)f(t)^{-1})^{\ast}=f'(t)f(t)^{-1}$,
we have for $\varphi\in {\rm L^2}$
and $\psi\in {\rm L^2}$,
\begin{align}
& \left(S^{-1}\varphi,\psi\right)_{{\rm L^2}}\nonumber\\
& \quad=(\varphi,\psi)-\int_0^1
\Bigg\langle\int_0^tf(s)^{-1}\varphi(s)ds,
\left(
\int_t^1f(s)^{\ast}f'(s)f(s)^{-1}\psi(s)ds\right)'
\Bigg\rangle dt\nonumber\\
&\quad =(\varphi,\psi)+
\int_0^1\Big\langle
\varphi(t), \left(f(t)^{-1}\right)^{\ast}
\int_t^1f(s)^{\ast}f'(s)f(s)^{-1}\psi(s)ds
\Big\rangle dt.
\end{align}
This shows (\ref{Sinverseast})
and $(S^{-1})^{\ast}\mbox{const}=0$.
\end{proof}
We summarize the relation between $S$ and $T$
in the proposition below.
\begin{pro}\label{S and T2}
$(1)$~
We have
\begin{align*}
I+T=S^{\ast}S,\quad
(S^{-1})^{\ast}(I+T)=S,\quad
(I+T)^{-1}=S^{-1}(S^{-1})^{\ast}.
\end{align*}
\noindent
$(2)$~The following identities hold.
\begin{align}
\inf\sigma(I+T)=\inf
\left\{\|S\varphi\|^2~|~\|\varphi\|_{{\rm L^2}}=1,
\varphi\in {\rm L^2_0}\right\}=
\frac{1}{\|(S^{-1})^{\ast}\|_{op}^2}.
\end{align}
\end{pro}
\begin{proof}
$I+T=S^{\ast}S$ follows from Lemma~\ref{S and T1}.
$(I+T)^{-1}=S^{-1}(S^{-1})^{\ast}$ follows from
$(S^{-1})^{\ast}=(S^{\ast})^{-1}$.
(2) follows from (1).
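In more detail, here is a sketch of (2): by (1),
$\inf\sigma(I+T)=\inf\left\{\|S\varphi\|^2~|~\|\varphi\|_{{\rm L^2}}=1,
\varphi\in {\rm L^2_0}\right\}$,
and since $S : {\rm L^2_0}\to {\rm L^2}$ is a bounded bijection,
\begin{align*}
\inf\left\{\|S\varphi\|~\Big|~\|\varphi\|_{{\rm L^2}}=1,~\varphi\in {\rm L^2_0}\right\}
=\frac{1}{\|S^{-1}\|_{op}}=\frac{1}{\|(S^{-1})^{\ast}\|_{op}}.
\end{align*}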
\end{proof}
The identity
$\|S\varphi\|^2=\left((I+T)\varphi,\varphi\right)$
~$(\varphi\in {{\rm L^2_0}})$
is used to prove the upper bound estimate, while
the inequality $\inf\sigma(I+T)\le \frac{1}{\|(S^{-1})^{\ast}\|_{op}^2}$
is used for the proof of the lower bound estimate
in Theorem~\ref{main theorem 1}.
See (\ref{varphiep}) and (\ref{lbd of F}).
\section{Proof of Theorem~\ref{main theorem 1}}\label{proof of main
theorem 1}
We prove Theorem~\ref{main theorem 1}.
Throughout this section, we assume that ${\cal D}$ satisfies
conditions (1) and (2) in the theorem.
Furthermore, as already explained,
we may assume that $M$ is diffeomorphic to
${\mathbb R}^n$ and that the Riemannian metric is flat outside a compact set.
Therefore, Assumptions~\ref{assumption A}, \ref{assumption B},
\ref{assumption D} are satisfied.
We consider the ground state function of $L_{\lambda}$.
Let $\tilde{\chi}_{\delta}(\gamma)=
\chi_{\delta}\Bigl(\max_{0\le t\le 1}d(\gamma(t),c_{x_0,y_0}(t))\Bigr)$,
where $\chi_{\delta}$ is a non-negative smooth function such that
$\chi_{\delta}(u)=1$ for $|u|\le \delta$ and $\chi_{\delta}(u)=0$
for $|u|\ge 2\delta$.
Here $\delta$ is a sufficiently small positive number.
Note that there exists $C_{\delta}>0$ such that
$\nu^{\lambda}_{x_0,y_0}\left(\max_{0\le t\le 1}d(\gamma(t),c_{x_0,y_0}(t))\ge
\delta\right)
\le e^{-\lambda C_{\delta}}$.
This can be proved by a large deviation result for
solutions of SDEs.
Since the proof is similar to that of
(\ref{rough path exponential decay}),
we omit the proof.
Thus $\|\tilde{\chi}_{\delta}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}\ge
1-Ce^{-C'\lambda}$.
Also we have
$
\|D_0\tilde{\chi}_{\delta}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}
\le Ce^{-C'\lambda}.
$
Here we have used that the function
$q(\gamma)=\max_{0\le t\le 1}d(\gamma(t),c_{x_0,y_0}(t))$
belongs to ${\rm D}({\mathcal E}^{\lambda})$ and
$|D_0q(\gamma)|\le 1$ $\nu^{\lambda}_{x_0,y_0}$-a.s. $\gamma$.
This is proved in a similar way to Lemma 2.2 (2) in \cite{aida-coh}.
Hence
\begin{align}
e^{\lambda}_{Dir,1,{\mathcal D}}\le
Ce^{-\lambda C'}.\label{estimate on e1}
\end{align}
On the other hand, it is proved in \cite{aida-coh2} that
$\liminf_{\lambda\to\infty}\frac{e^{\lambda}_{Dir,2,{\cal D}}}{\lambda}>0$.
In \cite{aida-coh2}, we studied the case of compact manifolds.
However, the proof works in the present case as well, by the
assumption on $M$.
These estimates imply
that $e^{\lambda}_{Dir,1,{\cal D}}$ is a simple eigenvalue
for sufficiently large $\lambda$.
Let $\Psi_{\lambda}$ denote the normalized non-negative
eigenfunction (ground state function).
It is clear that
$\Psi_{\lambda}\in H^{1,2}_0\left({\mathcal
D},{\nu}^{\lambda}_{x_0,y_0}\right)$.
From (\ref{estimate on e1}), we obtain
$\|D_0\Psi_{\lambda}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}\le Ce^{-C'\lambda}$.
It is plausible that $\Psi_{\lambda}$ is strictly positive for
$\nu^{\lambda}_{x_0,y_0}$-almost all $\gamma$;
this would follow from the
positivity improving property of the corresponding
$L^2$-semigroup.
However, we do not need such a property in this paper
and do not pursue this problem.
We use the following representation of $e^{\lambda}_{Dir,2,{\mathcal D}}$
to prove ${\rm LHS}\le {\rm RHS}$ in (\ref{main theorem 1 identity}) in
Theorem~\ref{main theorem 1}.
\begin{align}
e^{\lambda}_{Dir,2,{\mathcal D}}&=
\inf\Biggl\{
\frac{\int_{{\mathcal D}}
|D_0(F-(\Psi_{\lambda},F)\Psi_{\lambda})|^2d\nu^{\lambda}_{x_0,y_0}}
{\|F-(\Psi_{\lambda},F)\Psi_{\lambda}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2}
~\Bigg |
~F\in H^{1,2}_0({\mathcal D})~\nonumber\\
& \qquad\mbox{and}~
\|F-(\Psi_{\lambda},F)\Psi_{\lambda}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}\ne 0
\Biggr\}.\label{representation of e2}
\end{align}
The following estimate is necessary for the proof of Theorem~\ref{main theorem 1}.
\begin{lem}\label{ground state}
We have
\begin{equation}
\|\Psi_{\lambda}-1\|_{L^2(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})}\le C e^{-C'\lambda},
\end{equation}
where $C,C'$ are positive constants.
\end{lem}
\begin{proof}
By the COH formula,
$$
\|\Psi_{\lambda}-\left(\Psi_{\lambda},1\right)
_{L^2({\nu}^{\lambda}_{x_0,y_0})}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}
\le Ce^{-C'\lambda}.
$$
This implies
$$
1-\left(\Psi_{\lambda},1\right)_{L^2({\nu}^{\lambda}_{x_0,y_0})}^2
=\left(\Psi_{\lambda},\Psi_{\lambda}-\left(\Psi_{\lambda},1\right)_{L^2({\nu}^{\lambda}_{x_0,y_0})}\right)
_{L^2(\nu^{\lambda}_{x_0,y_0})}
\le Ce^{-C'\lambda}
$$
which shows
$\|\Psi_{\lambda}-1\|_{L^2(P_{x_0,y_0}(M),\nu^{\lambda}_{x_0,y_0})}^2\le
2Ce^{-C'\lambda}$.
\end{proof}
We need the following lemma to prove that
$A(\gamma)_{\lambda}$ can be approximated by $A(c_{x_0,y_0})_{\infty}(=(S^{-1})^{\ast})$
when $\gamma$ is close to $c_{x_0,y_0}$ and $\lambda$ is large.
\begin{lem}\label{perturbation of M}
Recall that we have defined
\begin{align}
K(t)=-\frac{\overline{\nabla^2k(c_{x_0,y_0})}_t}{1-t}.
\end{align}
We consider a perturbation
of $K(t)$ such that
$$
K_{\varepsilon}(t)=K(t)+\frac{C_{\varepsilon}(t)}{(1-t)^{\delta}},
$$
where $0<\delta<1$ is a constant and
$C_{\varepsilon}(t)$~$(0\le \varepsilon\le 1)$ is a symmetric matrix-valued
continuous function satisfying $\sup_{t}\|C_{\varepsilon}(t)\|\le \varepsilon$.
Let
$M_{\varepsilon}(t)$ be the solution to
\begin{align}
M_{\varepsilon}'(t)&=K_{\varepsilon}(t)M_{\varepsilon}(t)\quad 0\le t<1,\\
M_{\varepsilon}(0)&=I.
\end{align}
Define
\begin{align}
(J_{\varepsilon}\varphi)(t)&=
(M_{\varepsilon}(t)^{\ast})^{-1}\int_t^1
M_{\varepsilon}(s)^{\ast}K_{\varepsilon}(s)\varphi(s)ds.
\end{align}
Then for sufficiently small $\varepsilon$,
there exists a positive constant $C$ which is independent of
$\varepsilon$ such that
\begin{align}
\|J_{\varepsilon}-J_0\|_{op}\le C\varepsilon.\label{difference of J}
\end{align}
\end{lem}
By Lemma~\ref{S explicit form}, we see that
$(S^{-1})^{\ast}=I+J_0$ holds.
\begin{proof}
As already mentioned,
$\tilde{K}(t)=\frac{1}{1-t}+K(t)$ is
a matrix-valued continuous mapping for $0\le t\le 1$.
Taking this into account, we rewrite
$$
K_{\varepsilon}(t)=-\frac{1}{1-t}+\tilde{K}_{\varepsilon}(t),
$$
where $\tilde{K}_{\varepsilon}(t)=\tilde{K}(t)+\frac{C_{\varepsilon}(t)}{(1-t)^{\delta}}$.
Let $N_{\varepsilon}(t)$ be the solution to
\begin{align}
N_{\varepsilon}(t)'=\tilde{K}_{\varepsilon}(t)N_{\varepsilon}(t)\quad 0\le t<1,\quad
N_{\varepsilon}(0)=I.
\label{Nept}
\end{align}
Clearly, the solution to this equation exists.
Moreover, $\lim_{t\to 1}N_{\varepsilon}(t)$ exists
and $\sup_{0\le t<1}\|N_{\varepsilon}(t)\|<\infty$.
To see this, we prove the continuity of
$N_{\varepsilon}(t)$ with respect to $t$.
Note that for $0\le s\le t<1$,
\begin{align}
\|N_{\varepsilon}(t)-N_{\varepsilon}(s)\|&\le
\int_s^tC\left(1+\frac{1}{(1-u)^{\delta}}\right)\|N_{\varepsilon}(u)\|du
\nonumber\\
&\le \|N_{\varepsilon}(s)\|C\left((t-s)+
\frac{(1-s)^{1-\delta}-(1-t)^{1-\delta}}{1-\delta}\right)\nonumber\\
&\quad
+\int_s^tC\left(1+\frac{1}{(1-u)^{\delta}}\right)\|N_{\varepsilon}(u)-N_{\varepsilon}(s)\|
du.
\end{align}
Hence by the Gronwall inequality, we have
\begin{align}
\|N_{\varepsilon}(t)-N_{\varepsilon}(s)\|&\le
\|N_{\varepsilon}(s)\|C\left((t-s)+
\frac{(1-s)^{1-\delta}-(1-t)^{1-\delta}}{1-\delta}\right)
\nonumber\\
&\qquad
\times \exp\left\{
C
\left((t-s)+
\frac{(1-s)^{1-\delta}-(1-t)^{1-\delta}}{1-\delta}\right)
\right\}
\end{align}
which implies the desired result.
Note that $\tilde{K}_0(t)=\tilde{K}(t)$ and
$N_{0}(t)=N(t)$.
Then $M_{\varepsilon}(t)=(1-t)N_{\varepsilon}(t)$.
Also, $N_{\varepsilon}(t)$~$(0\le t<1)$ is invertible
and
\begin{align*}
N_{\varepsilon}(s)N_{\varepsilon}(t)^{-1}&=N_{\varepsilon}^t(s-t) \qquad 0\le t\le s<1,
\end{align*}
where $N_{\varepsilon}^t(u)$~$(0\le u<1-t)$
is the solution to the equation
\begin{align*}
\partial_uN^t_{\varepsilon}(u)=\tilde{K}_{\varepsilon}(t+u) N^t_{\varepsilon}(u)
\quad 0\le u<1-t, \qquad N^t_{\varepsilon}(0)=I.
\end{align*}
By a similar calculation to $N_{\varepsilon}$, we have
$\sup_{\varepsilon, t, 0\le u<1-t}\|N^{t}_{\varepsilon}(u)\|<\infty$.
By the definition of $J_{\varepsilon}$, we have
\begin{align}
\left(J_{\varepsilon}\varphi\right)(t)=
\frac{1}{1-t}\int_t^1(1-s)N_{\varepsilon}^t(s-t)^{\ast}K_{\varepsilon}(s)\varphi(s)ds.
\end{align}
Hence by Hardy's inequality,
in order to estimate $J_{\varepsilon}-J_0$, it suffices to estimate
$N_{\varepsilon}^t-N_0^t$.
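In more detail, here is a sketch: writing
$(1-s)K_{\varepsilon}(s)=-I+(1-s)\tilde{K}_{\varepsilon}(s)$,
which is bounded uniformly in $\varepsilon$ and $s$, we have
\begin{align*}
\left((J_{\varepsilon}-J_0)\varphi\right)(t)
&=\frac{1}{1-t}\int_t^1\left(N_{\varepsilon}^t(s-t)-N_0^t(s-t)\right)^{\ast}
(1-s)K_{\varepsilon}(s)\varphi(s)ds\\
&\quad+\frac{1}{1-t}\int_t^1N_0^t(s-t)^{\ast}(1-s)
\left(K_{\varepsilon}(s)-K_0(s)\right)\varphi(s)ds,
\end{align*}
where $(1-s)\left(K_{\varepsilon}(s)-K_0(s)\right)
=(1-s)^{1-\delta}C_{\varepsilon}(s)$ is of order $\varepsilon$;
both terms are then handled by Hardy's inequality.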
Note that for $0\le u<1-t$,
\begin{align*}
N_{\varepsilon}^t(u)=N_0^t(u)\left(I+\int_0^u N_0^t(\tau)^{-1}
\frac{C_{\varepsilon}(t+\tau)}{(1-(t+\tau))^{\delta}}N_{\varepsilon}^t(\tau)d\tau\right).
\end{align*}
This and the estimate for $C_{\varepsilon}$ and $N^{t}_{\varepsilon}(u)$ imply
\begin{align*}
\sup_t|N_{\varepsilon}^t(u)-N_0^t(u)|\le C\varepsilon,
\end{align*}
which completes the
proof of (\ref{difference of J}).
\end{proof}
Let us apply the lemma above in the case where
$K_{\varepsilon}(t)=K(\gamma)_{\lambda,t}$.
We have
\begin{align}
K(\gamma)_{\lambda,t}
&=K(t)+
\frac{1}{1-t}
\left(\frac{1-t}{\lambda}\overline{\nabla^2\log
p\left(\frac{1-t}{\lambda},y_0,\gamma\right)}_t+
\overline{\nabla^2k(c_{x_0,y_0})}_t\right)-
\frac{1}{2\lambda}\overline{\text{\rm Ric}(\gamma)}_t\nonumber\\
&=K(t)+
\frac{1}{1-t}\left(\frac{1-t}{\lambda}\overline{\nabla^2\log
p\left(\frac{1-t}{\lambda},y_0,\gamma\right)}_t+
\overline{\nabla^2k(\gamma)}_t\right)\nonumber\\
& \quad+\frac{1}{1-t}\left(\overline{\nabla^2k(c_{x_0,y_0})}_t-
\overline{\nabla^2k(\gamma)}_t\right)
-\frac{1}{2\lambda}\overline{\text{\rm Ric}(\gamma)}_t.
\end{align}
Therefore,
\begin{align}
C_{\varepsilon}(t)&=
\frac{1}{(1-t)^{1-\delta}}
\left(
\frac{1-t}{\lambda}\overline{\nabla^2\log
p\left(\frac{1-t}{\lambda},y_0,\gamma\right)}_t
+\overline{\nabla^2k(\gamma)}_t\right)\nonumber\\
&\quad +\frac{1}{(1-t)^{1-\delta}}
\left(\overline{\nabla^2k(c_{x_0,y_0})}_t-\overline{\nabla^2k(\gamma)}_t\right)
\nonumber\\
&\quad -\frac{(1-t)^{\delta}}{2\lambda}\overline{\text{\rm Ric}(\gamma)}_t.\label{Cept}
\end{align}
We need to show that if $\gamma$ and $c_{x_0,y_0}$
are close enough and $\lambda$ is large, then
$C_{\varepsilon}(t)$ is small.
Then by Lemma~\ref{perturbation of M},
we obtain that $\|J(\gamma)_{\lambda}-(S^{-1})^{\ast}\|_{op}$
is small.
Let us check each term of
$C_{\varepsilon}(t)$.
If $\gamma(t)\in B_l(y_0)$ for all $0\le t\le 1$,
the first term converges to $0$ by Lemma~\ref{gong-ma} (1)
as $\lambda\to \infty$ for $\delta>1/2$.
It is trivial to see that the third term goes to $0$.
Hence, it suffices to prove that
if $\gamma$ and $c_{x_0,y_0}$ are close enough, then
the difference
$\overline{\nabla^2k(c_{x_0,y_0})}_t-\overline{\nabla^2k(\gamma)}_t$
is small.
To this end, we use results from rough path analysis, which we now summarize.
The readers are referred to
\cite{lyons98, lq, lcl, friz-victoir, friz-hairer} for rough path analysis.
In Section~\ref{statement},
we define a Brownian motion $b$
with variance $1/\lambda$ on ${\mathbb R}^n$ by using the stochastic parallel translation
along $\gamma$; thus $b$ is a functional of
$\gamma$.
Conversely, $\gamma$ can be obtained by solving a stochastic
differential equation driven by a Brownian motion $b(t)$.
We may use the notation $b_t$ instead of $b(t)$.
From now on, $\mu^{\lambda}$ denotes the Brownian motion measure with variance
$1/\lambda$.
We use the notation $\mu$ when $\lambda=1$.
Let $\{L_i\}_{i=1}^n$ be the canonical horizontal vector fields
and consider an SDE on $O(M)$:
\begin{align}
dr(t,u,b)&=\sum_{i=1}^nL_i(r(t,u,b))\circ db^i(t) \label{horizontal sde}\\
r(0,u,b)&=u\in O(M).
\end{align}
Let $X(t,b)=\pi(r(t,u_0,b))$.
Then the law of $X(\cdot,b)$ coincides with
$\nu^{\lambda}_{x_0}$.
Also it holds that
\begin{align}
\overline{\nabla^2 k(X(b))}_t&=
r(t,u_0,b)^{-1}
(\nabla^2k)(X(t,b))r(t,u_0,b)\qquad \mbox{$\mu^{\lambda}$-a.s. $b$}.
\label{nabla2 k}
\end{align}
Note that if $b$ is the anti-stochastic development of
the Brownian motion $\gamma(t)$ on $M$,
then it holds that
$\tau(\gamma)_t=r(t,u_0,b)u_0^{-1}$ $\nu^{\lambda}_{x_0}$-a.s. $\gamma$.
Since we assume $M$ is diffeomorphic to
${\mathbb R}^n$, we have a global coordinate
$x=(x^i)\in {\mathbb R}^n$
and the Riemannian metric
$g(x)=(g_{ij}(x))$
on the tangent space $T_xM$ which can be identified with
${\mathbb R}^n$.
Then the SDE of
$r(t,u_0,b)=(X^i(t,b), e^{k}_l(t,b))$~($e(t,b)=(e^k_l(t,b))\in GL(n,{\mathbb R})$)
can be written down explicitly (see \cite{ikeda-watanabe, hsu}) as
\begin{align}
dX^i(t)&=e^i_j(t)\circ db^j(t) \label{frame bundle sde}\\
de^i_j(t)&=-\sum_{k,l}\Gamma^i_{kl}(X(t))e^{l}_j(t)\circ dX^k(t)
\label{frame bundle sde2}.
\end{align}
Moreover, the coefficients of the
SDE are $C^{\infty}_b$ because
the Riemannian metric is flat outside a certain compact subset.
Therefore we can apply rough path analysis and Malliavin calculus
to the solution of
the SDE.
Now let us recall the definition of the Brownian rough path.
Let $b(N)$ be the dyadic polygonal approximation of $b$
such that $b(N)_{k2^{-N}}=b_{k2^{-N}}$
and $b(N)_t$ is linear for $k2^{-N}\le t \le (k+1)2^{-N}$
with $0\le k\le 2^{N}-1$.
Define
$b(N)^1_{s,t}=b(N)_t-b(N)_s$,
$b(N)^2_{s,t}=\int_s^t\left(b(N)_u-b(N)_s\right)\otimes db(N)_u$
for $0\le s\le t\le 1$.
Let $\Omega$ be the set of all elements $b$ of the Wiener space
$W^n$ such that
$b(N)^1_{s,t}$ and $b(N)^2_{s,t}$ converge in the
Besov-type norms $\|\cdot\|_{4m,\theta/2}$ and $\|\cdot\|_{2m,\theta}$
respectively (\cite{aida-loop group}).
Here $2/3<\theta<1$ and
$m$ is a sufficiently large positive number.
It is proved in \cite{aida-loop group} that
$\Omega^c$ is a slim set in the sense of Malliavin
with respect to the Brownian motion measure
$\mu$.
However, it is easy to check that
the same result holds for
the Brownian motion measure $\mu^{\lambda}$
with variance $1/\lambda$ for any $\lambda>0$.
Moreover, if $b\in \Omega$, then
$b+h\in \Omega$ for any element $h\in \H$.
For $b\in \Omega$, we define $b^1_{s,t}=\lim_{N\to\infty}b(N)^1_{s,t}$
and $b^2_{s,t}=\lim_{N\to\infty}b(N)^2_{s,t}$.
The triple $(1,b^1_{s,t},b^2_{s,t})$ is a
$p$-rough path $(2<p=\frac{2}{\theta}<3)$ and its control function
is given by $\omega(s,t)=C(b)|t-s|$,
where $C(b)$ depends on the Besov norms of $b^1$ and $b^2$.
For $h\in \H$,
we have, $(b+h)^1_{s,t}=b^1_{s,t}+h^1_{s,t}$
and
$$
(b+h)^2_{s,t}=b^2_{s,t}+h^2_{s,t}+\int_s^t(b_u-b_s)\otimes dh_u+
\int_s^t(h_u-h_s)\otimes db_u.
$$
Note that solutions of rough differential equations driven by
geometric rough paths are smooth.
See Definition 7.1.1 and Corollary 7.1.1 in \cite{lq}.
Therefore, considering the composition of the two maps,
$b(\in \Omega)\mapsto (1,b^1_{s,t},b^2_{s,t})$
and the solution map between geometric rough paths,
we obtain a smooth version $r(t,u_0,b)$ of the solution to
(\ref{frame bundle sde}) and (\ref{frame bundle sde2}).
Here smooth means
\begin{enumerate}
\item the mapping $b(\in \Omega)\mapsto r(t,u_0,b)$
is differentiable in the
$H$-direction and smooth in the sense of Malliavin,
\item the mapping $b(\in \Omega)\mapsto r(t,u_0,b)$ is
$\infty$-quasi-continuous
(See Theorem 3.2 in \cite{aida-loop group}).
\end{enumerate}
In the terminology of Malliavin calculus,
$r(t,u_0,b)$ is a redefinition of the solution
to (\ref{horizontal sde}).
By the uniform ellipticity of (\ref{frame bundle sde}),
we have the following estimate for the Malliavin covariance matrix.
For $p\ge 1$, there exists $p'>0$ such that for large $\lambda$,
\begin{align}
E[\left\{\det \left(DX(1,b)DX(1,b)^{\ast}\right)\right\}^{-p}]
\le C\lambda^{p'}.
\end{align}
Thus the probability measure
$
d\mu^{\lambda}_{x_0,y_0}=\frac{\delta_{y_0}(X(1,b))d\mu^{\lambda}}
{c(y_0)p(1/\lambda,x_0,y_0)}
$
is well-defined,
where $c(y_0)=\sqrt{\det(g_{ij}(y_0))}$ and
$\delta_{y_0}$ denotes Dirac's delta function
on ${\mathbb R}^n$ and
$\delta_{y_0}(X(1,b))d\mu^{\lambda}$ is a generalized Wiener functional
(\cite{watanabe}).
Note that $\mu^{\lambda}_{x_0,y_0}$ does not charge the slim sets.
Thus the image measure
$X_{\ast}\mu^{\lambda}_{x_0,y_0}$ is well-defined for
the smooth version $X(b)$.
Moreover, we have
\begin{align}
\mbox{The joint law of
$(b, \gamma)$ under
$\nu^{\lambda}_{x_0,y_0}$}
=\mbox{The joint law of $(b,X(b))$ under
$\mu^{\lambda}_{x_0,y_0}$}.\label{joint law}
\end{align}
This observation implies that one can use estimates on integration
with respect to (Brownian) rough paths to study the estimate on the
stochastic integrals for the pinned Brownian motion.
In the proof in Section 2, we use cut-off functions
$\chi_{1,\kappa}, \chi_{2,\kappa}$.
In our problem, the existence of such cut-off functions
is not trivial.
The existence of such appropriate cut-off functions
is proved in
\cite{aida-semiclassical}.
We use the following result from rough path theory.
Below, $r(t,u_0,b)$ may be denoted by $r(t,b)$ for simplicity.
\begin{lem}\label{lemma from rough path}
\noindent
$(1)$~In this statement, we consider the
smooth version $r(t,b)$ for $b\in \Omega$.
By adopting this version,
a version of $\overline{\nabla^2 k(X(b))}_t$ can be defined
as $r(t,u_0,b)^{-1}
(\nabla^2k)(X(t,b))r(t,u_0,b)$
which is smooth in the above sense.
Let $l_{\xi}(t)=t\xi$, where
$\xi$ is chosen as
$\exp_{x_0}\left(u_0\xi\right)=y_0$.
Let us define
\begin{align}
\Xi(b)&=\|b^1\|_{4m,\theta/2}^{4m}+
\|b^2\|_{2m,\theta}^{2m}\qquad b\in \Omega.
\end{align}
Then for any $\varepsilon>0$, there exists $\varepsilon'>0$
such that if
$\Xi(b-l_{\xi})\le \varepsilon'$
and $X(1,b)=y_0$, then
\begin{align}
\left|X(t,b)-c_{x_0,y_0}(t)\right|&\le \varepsilon t^{\theta/2} \quad 0\le t\le 1,
\label{rough path estimate1}\\
\left|
\overline{\nabla^2 k(X(b))}_t-
\overline{\nabla^2 k(X(l_{\xi}))}_t
\right|&\le
\varepsilon (1-t)^{\theta/2}\quad 0\le t\le 1, \label{rough path estimate2}\\
\sup_{0\le t\le 1}\left|I^2_{t,1}(b)-I^2_{t,1}(l_{\xi})\right|
&\le
\|\varphi'\|_{\infty}\varepsilon,\label{rough path estimate3}
\end{align}
where
\begin{align}
I^2_{t,1}(b)=
\int_t^1\overline{R(X(b))}_s\left(\int_s^1\varphi(r)db^i(r),\varepsilon_i\right)
\circ db(s)
\end{align}
and $\varphi\in C^1([0,1],{\mathbb R}^n)$.
The integral is defined in the sense of rough paths.
\noindent
$(2)$ In this statement, let $b$ be the
Brownian motion which is obtained by the anti-stochastic development
of the pinned Brownian motion $\gamma$.
Let $\eta$ be a $C^1_b$ function with compact support
on ${\mathbb R}$.
Let
$\tilde{\eta}(\gamma)=\eta\left(\Xi(b-l_{\xi})\right)$.
Then there exists a constant $C>0$ such that for all $\lambda\ge 1$
\begin{align}
|D_0\tilde{\eta}(\gamma)|_{{\rm H}_0}\le C
\quad \mbox{for $\nu^{\lambda}_{x_0,y_0}$-almost all $\gamma$}.
\end{align}
\end{lem}
\begin{proof}
(1)~
(\ref{rough path estimate1}) and (\ref{rough path estimate3})
follow from the fact that
$c_{x_0,y_0}(t)=X(t,l_{\xi})$ and the continuity theorem
for $p$-rough paths $(2<p=\frac{2}{\theta}<3)$.
We prove (\ref{rough path estimate2}).
We have
\begin{align}
\lefteqn{\overline{\nabla^2 k(X(b))}_t-
\overline{\nabla^2 k(X(l_{\xi}))}_t}\nonumber\\
&=\left\{
\overline{\nabla^2 k(X(b))}_t-
\overline{\nabla^2 k(X(b))}_1
\right\}-
\left\{
\overline{\nabla^2 k(X(l_{\xi}))}_t
-\overline{\nabla^2 k(X(l_{\xi}))}_1
\right\}\nonumber\\
&\quad
+\overline{\nabla^2 k(X(b))}_1-
\overline{\nabla^2 k(X(l_{\xi}))}_1\nonumber\\
&=\left\{
\overline{\nabla^2 k(X(b))}_t-
\overline{\nabla^2 k(X(b))}_1
\right\}-
\left\{
\overline{\nabla^2 k(X(l_{\xi}))}_t
-\overline{\nabla^2 k(X(l_{\xi}))}_1
\right\},
\end{align}
where we have used $(\nabla^2k)(y_0)=I_{T_{y_0}M}$
and $X(1,b)=c_{x_0,y_0}(1)=y_0$.
Hence it suffices to apply the continuity theorem
for $p$-rough paths $(2<p=\frac{2}{\theta}<3)$.
\noindent
(2)
In the case of the derivative $D$, this immediately follows from
Lemma 7.11 in \cite{aida-semiclassical}.
The proof for $D_0$ is the same.
Here we give a sketch of the proof.
Recall that
\begin{align}
(D_0)_{h}b(t)=
h(t)+\int_0^t\int_0^s\overline{R(\gamma)}_u(h(u),\circ db(u))(\circ
db(s)).
\label{D0b}
\end{align}
We already used this formula
in the proof of Theorem~\ref{e2la on Px0} for the derivative
$D$.
From this formula, we see that
$D_0\left(\Xi(b-l_{\xi})\right)$
is given by iterated stochastic integrals of
$b$ and $\gamma$.
By (\ref{joint law}),
we can apply estimates for integration
with respect to the Brownian rough path for $b\in \Omega$.
Thus, the iterated integrals of solutions of rough differential equations
can be estimated by the control function of the
Brownian rough path.
Since the support of $\eta$ is compact, this implies the desired estimate.
\end{proof}
Now, we are ready to prove our first main theorem.
\begin{proof}[Proof of Theorem~$\ref{main theorem 1}$]
First we prove the upper bound estimate.
This will be done by using (\ref{representation of e2})
and choosing appropriate functions $F$ below.
For that purpose, we prepare a large deviation estimate.
Below, several constants depending on parameters
$\kappa, \varepsilon$ appear.
We use the notation $M(x)$ to denote
positive functions of $x$ which may diverge as
$x\to 0$.
On the other hand, we use the notation $C(x)$ to
denote positive functions of $x$ which converge to $0$
as $x\to 0$.
$M(x)$ and $C(x)$ may change line by line.
Let $\eta$ be a non-negative smooth function such that
$\eta(u)=1$ for $u\le 1$ and
$\eta(u)=0$ for $u\ge 2$.
Let $0<\kappa<1$ and set
\begin{align}
{\eta}_{1,\kappa}(\gamma)&=
\eta\left(\kappa^{-1}\Xi(b-l_{\xi})\right),\quad
{\eta}_{2,\kappa}(\gamma)
=\left\{1-{\eta}_{1,\kappa}(\gamma)^2\right\}^{1/2}.
\end{align}
By (\ref{D0b}) and Lemma~\ref{lemma from rough path} (2),
there exists a positive constant $M(\kappa)$ such that
\begin{align}
|D_0\eta_{1,\kappa}(\gamma)|+
|D_0\eta_{2,\kappa}(\gamma)|\le M(\kappa)
\qquad \mbox{for $\nu_{x_0,y_0}^{\lambda}$-a.s.\ $\gamma$}.
\label{estimate on cut-off}
\end{align}
From (\ref{rough path estimate1}),
for any $\varepsilon>0$,
$\sup_{0\le t\le 1}|X(t,b)-c_{x_0,y_0}(t)|\le \varepsilon$ holds
if $\kappa$ is sufficiently small and $\eta_{1,\kappa}(\gamma)\ne 0$.
Hence $\eta_{1,\kappa}\in H^{1,2}_0({\cal D})$.
Let $\psi$ be a smooth non-negative function on ${\mathbb R}$
satisfying $\psi(u)=0$ for $u\le \delta_1$
and $\psi(u)=1$ for $u\ge \delta_2$,
where $0<\delta_1<\delta_2$.
Then there exist $C, C'>0$ which depend on $\psi$
such that for large $\lambda$
\begin{align}
E^{\nu^{\lambda}_{x_0,y_0}}\left[\psi\left(\Xi(b-l_{\xi})\right)
\right]\le C e^{-C'\lambda}.\label{rough path exponential decay}
\end{align}
We prove this estimate.
Let $B$ be a standard Brownian motion on ${\mathbb R}^n$.
Since the Wiener functional $B\mapsto
X\left(1,\frac{B}{\sqrt{\lambda}}\right)$
is non-degenerate, by using the integration by parts formula
(see \cite{nualart, shigekawa}),
\begin{align}
\lefteqn{E^{\nu^{\lambda}_{x_0,y_0}}\left[\psi\left(\Xi(b-l_{\xi})\right)
\right]}\nonumber\\
&=\left(c(y_0)p(1/\lambda,x_0,y_0)\right)^{-1}
E\left[\psi\left(\Xi\left(\frac{B}{\sqrt{\lambda}}-l_{\xi}\right)\right)\delta_{y_0}
\left(X\left(1,\frac{B}{\sqrt{\lambda}}\right)\right)\right]\nonumber\\
&=\left(c(y_0)p(1/\lambda,x_0,y_0)\right)^{-1}
E\left[
\tilde{\psi}\left(\Xi\left(\frac{B}{\sqrt{\lambda}}-l_{\xi}\right)\right)
G(\varepsilon,\lambda,B)\phi_{\varepsilon}\left(X\left(1,\frac{B}{\sqrt{\lambda}}\right)-y_0\right)
\right],
\end{align}
where $\tilde{\psi}, \phi_{\varepsilon}$ are bounded continuous functions on
${\mathbb R}$ and ${\mathbb R}^n$ respectively such that
${\rm supp}\, \tilde{\psi}\subset [\delta_1,\infty)$
and ${\rm supp}\, \phi_{\varepsilon}\subset B_{\varepsilon}(0)$.
Also the random variable $G(\varepsilon,\lambda,B)$ satisfies that for any $p>1$
\begin{align}
E\left[|G(\varepsilon,\lambda,B)|^p\right]^{1/p}\le C_{\varepsilon,p}(\lambda),
\end{align}
where $C_{\varepsilon,p}(\lambda)$ is a polynomial function of
$\lambda$.
Let $q=p/(p-1)$.
By the H\"older inequality,
\begin{align}
E^{\nu^{\lambda}_{x_0,y_0}}\left[
\psi\left(\Xi(b-l_{\xi})\right)
\right]\le p(1/\lambda,x_0,y_0)^{-1}
C_{\varepsilon,p}(\lambda)\mu(A_{\varepsilon})^{1/q},
\end{align}
where
\begin{align}
A_{\varepsilon}=\left\{B~\Big |~\Xi\left(\frac{B}{\sqrt{\lambda}}-l_{\xi}\right)\ge
\delta_1,
\, \,
\left|X\left(1,\frac{B}{\sqrt{\lambda}}\right)-y_0\right|\le \varepsilon
\right\}.
\end{align}
By the large deviation estimate for Brownian rough path
(\cite{friz-victoir, inahama, lqz}),
we have
\begin{align}
\limsup_{\lambda\to\infty}
\frac{1}{\lambda}
\log \mu\left(A_{\varepsilon}\right)
\le
-\frac{1}{2}\inf\left\{\|h\|_{\H}^2~|~
\Xi\left(h-l_{\xi}\right)\ge \delta_1,\,\,
\left|X(1,h)-y_0\right|\le\varepsilon
\right\}=:J_{\varepsilon}.
\end{align}
For sufficiently small $\varepsilon$,
it holds that $J_{\varepsilon}<-\frac{1}{2}d(x_0,y_0)^2$,
which can be proved by contradiction.
Suppose, on the contrary, that there exist $h_{\varepsilon}\in \H$ such that
$\limsup_{\varepsilon\to 0}\|h_{\varepsilon}\|_{\H}\le d(x_0,y_0)$,
$\Xi\left(h_{\varepsilon}-l_{\xi}\right)\ge \delta_1$
and $\left|X(1,h_{\varepsilon})-y_0\right|\le\varepsilon$.
Let $h_0$ be a weak limit point of $h_{\varepsilon}$.
Then $\|h_0\|_{\H}\le d(x_0,y_0)$.
By Lemma 7.12 in \cite{aida-semiclassical},
$\Xi\left(h_0-l_{\xi}\right)=\lim_{\varepsilon\to
0}\Xi\left(h_{\varepsilon}-l_{\xi}\right)\ge \delta_1$
and $X(1,h_{0})=\lim_{\varepsilon\to 0}X(1,h_\varepsilon)=y_0$.
By the uniqueness of the minimal geodesic between
$x_0$ and $y_0$,
we have $h_0=l_{\xi}$.
This contradicts $\Xi\left(h_0-l_{\xi}\right)\ge \delta_1$.
Hence there exist $\varepsilon>0$ and $\delta>0$ such that
\begin{align}
E^{\nu^{\lambda}_{x_0,y_0}}\left[
\psi\left(\Xi(b-l_{\xi})\right)
\right]\le C_{\varepsilon,p}(\lambda)p(1/\lambda,x_0,y_0)^{-1}
\exp\left\{-\lambda\left(\frac{d(x_0,y_0)^2+\delta}{2q}\right)\right\}.
\end{align}
Since $\lim_{\lambda\to\infty}\frac{\lambda^{n/2}\exp\left(-\lambda d(x_0,y_0)^2/2\right)}
{p(1/\lambda,x_0,y_0)}$ exists,
taking $p$ sufficiently large proves the desired inequality (\ref{rough path exponential decay}).
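To make this last step explicit, here is a sketch of the computation, under the assumption (consistent with the polynomial bound above) that $C_{\varepsilon,p}(\lambda)\le C\lambda^{m}$ for some $m>0$:

```latex
% Sketch of the final step, assuming C_{\varepsilon,p}(\lambda)\le C\lambda^{m}:
\begin{align*}
C_{\varepsilon,p}(\lambda)\,p(1/\lambda,x_0,y_0)^{-1}
\exp\left\{-\lambda\frac{d(x_0,y_0)^2+\delta}{2q}\right\}
\le C\lambda^{m+n/2}
\exp\left\{\lambda\left(\frac{d(x_0,y_0)^2}{2}
-\frac{d(x_0,y_0)^2+\delta}{2q}\right)\right\}.
\end{align*}
```

For $p$ large, $q=p/(p-1)$ is close to $1$, so the exponent is at most $-\lambda\delta/4$ for large $\lambda$, and the polynomial factor is absorbed into the exponential decay.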
We now apply (\ref{representation of e2}) to prove the upper bound.
Let us fix a positive number $\varepsilon>0$ and
choose $\varphi_{\varepsilon}\in {\rm L^2_0}\cap C^1([0,1],{\mathbb R}^n)$
with $\|\varphi_{\varepsilon}\|=1$
such that
\begin{align}
\sigma_1\le \|S\varphi_{\varepsilon}\|^2\le \|(I+T)\varphi_{\varepsilon}\|\le \sigma_1+\varepsilon.
\label{varphiep}
\end{align}
This is possible because of
Lemma~\ref{S and T1} and Proposition~\ref{S and T2}.
Note that $\|\varphi_{\varepsilon}'\|_{\infty}$ may diverge as
$\varepsilon\to 0$.
Define
\begin{align}
F_{\varepsilon}(\gamma)
&=
\sqrt{\lambda}\left(\int_0^1(\varphi_{\varepsilon}(t),db(t))-
\int_0^1(\varphi_{\varepsilon}(t),\xi)dt\right).
\end{align}
Let $\tilde{F}_{\varepsilon}=F_{\varepsilon}\eta_{1,\kappa}\in H^{1,2}_0({\cal D})$.
We estimate the numerator of the ratio in
(\ref{representation of e2}) for $\tilde{F}_{\varepsilon}$.
Since the Besov norm is stronger than the supremum norm,
we have
\begin{align}
|\tilde{F}_{\varepsilon}(\gamma) |\le C\sqrt{\lambda}M(\varepsilon)C(\kappa).
\end{align}
By (\ref{D0b})
\begin{align}
\left(D_0F_{\varepsilon}(\gamma),h\right)_{{\rm H}_0}
&=\sqrt{\lambda}\int_0^1\left(\varphi_{\varepsilon}(t),h'(t)\right)dt
+\sqrt{\lambda}\int_0^1\left(
\varphi_{\varepsilon}(t),\int_0^t\overline{R(\gamma)}_u(h(u),\circ db(u))\circ
db(t)\right)
\nonumber\\
&=
\sqrt{\lambda}\int_0^1\left(\varphi_{\varepsilon}(t),h'(t)\right)dt\nonumber\\
&\quad +
\sqrt{\lambda}\int_0^1\left(
\int_t^1\overline{R(\gamma)}_s\left(\int_s^1\varphi_{\varepsilon}(u)db^i(u),\varepsilon_i
\right)\circ
db(s), h'(t)
\right)dt
\end{align}
and so we have
\begin{align}
D_0F_{\varepsilon}(\gamma)_t'&=
\sqrt{\lambda}\varphi_{\varepsilon}(t)+
\sqrt{\lambda}\int_t^1\overline{R(\gamma)}_s\left(
\int_s^1\varphi_{\varepsilon}(r)db^i(r),\varepsilon_i\right)\circ db(s)\nonumber\\
& \quad -\sqrt{\lambda}\int_0^1\int_t^1\overline{R(\gamma)}_s\left(
\int_s^1\varphi_{\varepsilon}(r)db^i(r),\varepsilon_i\right)(\circ db(s))dt\nonumber\\
&=\sqrt{\lambda}\varphi_{\varepsilon}(t)-
\sqrt{\lambda}\int_t^1R(s)\left(
\int_0^s\varphi_{\varepsilon}(u)du\right)ds\nonumber\\
&\quad +\sqrt{\lambda}\int_0^1\int_t^1R(s)\left(
\int_0^s\varphi_{\varepsilon}(u)du\right)dsdt
+I(\lambda)_t\nonumber\\
&=\sqrt{\lambda}(I+T)(\varphi_{\varepsilon})(t)+I(\lambda)_t,
\end{align}
where $R(s)=\overline{R(c_{x_0,y_0})}_s(\cdot,\xi)(\xi)$ and
$I(\lambda)_t=(D_0F_{\varepsilon})(X(\cdot,b))_t'-
(D_0F_{\varepsilon})(X(\cdot,l_{\xi}))_t'.
$
Note that we have used $\varphi_{\varepsilon}\in {\rm L^2_0}$ in the above.
By (\ref{rough path estimate3}), we have
\begin{align}
\sup_{0\le t\le 1}|I(\lambda)_t|\le
\sqrt{\lambda}C(\kappa)M(\varepsilon)
\quad \mbox{if $\eta_{1,\kappa}(\gamma)\ne 0.$}
\label{ICkappa}
\end{align}
Thus we have
\begin{align}
|D_0\tilde{F}_{\varepsilon}(\gamma)|^2&=
\lambda|(I+T)\varphi_{\varepsilon}|^2\eta_{1,\kappa}^2+
|I(\lambda)|^2\eta_{1,\kappa}^2+
2\sqrt{\lambda}((I+T)\varphi_{\varepsilon},I(\lambda))\eta_{1,\kappa}^2\nonumber\\
&\quad +
F_{\varepsilon}^2|D_0\eta_{1,\kappa}|^2+
2F_{\varepsilon}(D_0F_{\varepsilon},D_0\eta_{1,\kappa})\eta_{1,\kappa}.
\end{align}
By (\ref{rough path exponential decay}) and (\ref{ICkappa}),
we get
\begin{align}
\|D_0\tilde{F}_{\varepsilon}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
\le \lambda\|(I+T)\varphi_{\varepsilon}\|_{L^2}^2+
\lambda C(\kappa)M(\varepsilon)+\lambda CM(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}.\label{upper bound 1}
\end{align}
Combining this with $\|D_0\Psi_{\lambda}\|\le Ce^{-C'\lambda}$,
we obtain
\begin{align}
\|D_0\tilde{F}_{\varepsilon}-
\left(\tilde{F}_{\varepsilon},\Psi_{\lambda}\right)D_0\Psi_{\lambda}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
\le\lambda\|(I+T)\varphi_{\varepsilon}\|_{L^2}^2+\lambda C(\kappa)M(\varepsilon)+
\lambda M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}.\label{Dirichlet norm}
\end{align}
We next turn to the estimate of the denominator in
(\ref{representation of e2}) for
$\tilde{F}_{\varepsilon}$.
To do so, we use the COH formula.
For large $\lambda>0$, by taking $\kappa$ sufficiently small
and combining Lemma~\ref{gong-ma}, Lemma~\ref{lemma from rough path} (1)
and Lemma~\ref{perturbation of M},
we have
\begin{align}
|(J(\gamma)_{\lambda}-J_0)(D_0\tilde{F}_{\varepsilon})(\gamma)'|_{L^2(0,1)}&\le
\varepsilon |D_0\tilde{F}_{\varepsilon}(\gamma)'|_{L^2(0,1)}.
\end{align}
Therefore,
using $A(\gamma)_{\lambda}=I+J(\gamma)_{\lambda}$,
$(S^{-1})^{\ast}=I+J_0$ and
$(S^{-1})^{\ast}(I+T)=S$,
we have
\begin{align}
& A(\gamma)_{\lambda}(D_0\tilde{F}_{\varepsilon}(\gamma)')_t\nonumber\\
&=
\left(S^{-1}\right)^{\ast}\left(D_0\tilde{F}_{\varepsilon}(\gamma)'\right)_t
+(J(\gamma)_{\lambda}-J_0)(D_0\tilde{F}_{\varepsilon}(\gamma)')_t
\nonumber\\
&=
\sqrt{\lambda}\left(S^{-1}\right)^{\ast}(I+T)\varphi_{\varepsilon}(t)\eta_{1,\kappa}+
(S^{-1})^{\ast}I(\lambda)_t\eta_{1,\kappa}+
F_{\varepsilon}(\gamma)(S^{-1})^{\ast}\left(D_0\eta_{1,\kappa}\right)'_t\nonumber\\
&\qquad +(J(\gamma)_{\lambda}-J_0)(D_0\tilde{F}_{\varepsilon}(\gamma)')_t
\nonumber\\
&=
\sqrt{\lambda}S\varphi_{\varepsilon}(t)+I_2(\lambda),
\label{COHF1}
\end{align}
and
\begin{align}
\|I_2(\lambda)\|_{L^2(\nu^{\lambda}_{x_0,y_0})}&\le
\sqrt{\lambda}M(\varepsilon)e^{-C(\kappa)\lambda}+\sqrt{\lambda}C(\kappa)M(\varepsilon)
+\sqrt{\lambda}M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}\nonumber\\
&\quad +
\varepsilon\sqrt{\lambda}\left(C+C(\kappa)M(\varepsilon)+M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}\right).
\label{COHF2}
\end{align}
Since $S\varphi_{\varepsilon}(t)$ is a non-random function,
from (\ref{COHF1}) and (\ref{COHF2})
and the COH formula (\ref{COH loop 1}), we obtain
\begin{align}
\|\tilde{F}_{\varepsilon}-E^{\nu^{\lambda}_{x_0,y_0}}
[\tilde{F}_{\varepsilon}]\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
&\ge \|S\varphi_{\varepsilon}\|^2-C(\kappa)M(\varepsilon)-
M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}\nonumber\\
&\quad -
\varepsilon\left(C+C(\kappa)M(\varepsilon)+M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}\right).
\label{upper bound 2}
\end{align}
Using Lemma~\ref{ground state},
\begin{align}
\|\tilde{F}_{\varepsilon}-
\left(\tilde{F}_{\varepsilon},\Psi_{\lambda}\right)\Psi_{\lambda}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
&=\|\tilde{F}_{\varepsilon}-(\tilde{F}_{\varepsilon},1)\|_{L^2}^2-2
\left(\tilde{F}_{\varepsilon}-(\tilde{F}_{\varepsilon},\Psi_{\lambda}),
(\tilde{F}_{\varepsilon},\Psi_{\lambda})(\Psi_{\lambda}-1)\right)
\nonumber\\
&\quad +(\tilde{F}_{\varepsilon},1-\Psi_{\lambda})^2+(\tilde{F}_{\varepsilon},\Psi_{\lambda})^2
\|1-\Psi_{\lambda}\|^2\nonumber\\
&\ge\|\tilde{F}_{\varepsilon}-(\tilde{F}_{\varepsilon},1)\|_{L^2}^2
-M(\varepsilon)M(\kappa)e^{-C'\lambda}.
\label{L2 norm}
\end{align}
We now take $\varepsilon$ sufficiently small and then $\kappa$
sufficiently small.
By using the estimates (\ref{Dirichlet norm}), (\ref{upper bound 2}),
(\ref{L2 norm})
and (\ref{varphiep}), we obtain for large $\lambda$,
\begin{align}
\frac{\|D_0\tilde{F}_{\varepsilon}-(\tilde{F}_{\varepsilon},
\Psi_{\lambda})D_0\Psi_{\lambda}\|^2_{L^2(\nu^{\lambda}_{x_0,y_0})}}
{\|\tilde{F}_{\varepsilon}-(\tilde{F}_{\varepsilon},\Psi_{\lambda})\Psi_{\lambda}\|^2
_{L^2(\nu^{\lambda}_{x_0,y_0})}}
&\le\frac{\lambda\|(I+T)\varphi_{\varepsilon}\|_{L^2}^2+\lambda \varepsilon+
\lambda M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}}
{\|S\varphi_{\varepsilon}\|_{L^2}^2-C\varepsilon-M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}}\nonumber\\
&
\le\frac{\lambda(\sigma_1+\varepsilon)^2+\lambda \varepsilon+
\lambda M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}}
{\sigma_1-C\varepsilon-M(\varepsilon)M(\kappa)e^{-C(\kappa)\lambda}}.
\end{align}
This completes the proof of the upper bound.
We next prove the lower bound estimate.
Take $F\in H^{1,2}_0({\mathcal D})$
such that $\|F\|_{L^2(\nu^{\lambda}_{x_0,y_0})}=1$ and
$(F,\eta_{1,\kappa})=0$.
By the IMS localization formula,
\begin{equation}
{\mathcal E}(F,F)=
\sum_{i=1,2}{\mathcal E}(F\eta_{i,\kappa},F\eta_{i,\kappa})
-\sum_{i=1,2}
E^{\nu^{\lambda}_{x_0,y_0}}[|D_0\eta_{i,\kappa}|^2F^2].
\end{equation}
For any $\varepsilon>0$, taking $\kappa$ sufficiently small and $\lambda$ sufficiently large,
Lemma~\ref{coh formula} (1), Lemma~\ref{perturbation of M} and
Lemma~\ref{lemma from rough path} yield
\begin{align}
\lefteqn{
\|F\eta_{1,\kappa}-E^{\nu^{\lambda}_{x_0,y_0}}[F\eta_{1,\kappa}]\|
_{L^2(\nu^{\lambda}_{x_0,y_0})}^2}
\nonumber\\
& \quad \le
\frac{\left(\|(S^{-1})^{\ast}\|_{op}+C\varepsilon\right)^2}{\lambda}
E^{\nu^{\lambda}_{x_0,y_0}}\left[
|D_0(F\eta_{1,\kappa})|^2
\right].\label{COH Feta}
\end{align}
Thus
we have
\begin{align}
\|F\eta_{1,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
\le
\frac{\left(\|(S^{-1})^{\ast}\|_{op}+C\varepsilon\right)^2}{\lambda}
E^{\nu^{\lambda}_{x_0,y_0}}\left[
|D_0(F\eta_{1,\kappa})|^2
\right].\label{Feta1}
\end{align}
Now we estimate the Dirichlet norm of $F\eta_{2,\kappa}$.
The log-Sobolev inequality (\ref{lsi1}) implies that
there exists a positive constant $C$ such that
for any $F\in H^{1,2}_0({\mathcal D})$ and
bounded measurable function $V$ on $P_{x_0,y_0}(M)$,
\begin{equation}
{\mathcal E}(F,F)+E^{\nu^{\lambda}_{x_0,y_0}}
\left[\lambda^2 VF^2\right]
\ge
-\frac{\lambda}{C}\log E^{\nu^{\lambda}_{x_0,y_0}}
\left[e^{-C\lambda V}\right]\|F\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2.
\label{GNS inequality}
\end{equation}
See Theorem 7 in \cite{gross}.
Also see Lemma~\ref{GNS} in the present paper.
Let $\delta$ be a sufficiently small positive number
and define
$
V(\gamma)=\delta 1_{\eta_{2,\kappa}\ne 0}(\gamma),
$
where $1_A$ denotes the indicator function of a set $A$.
By (\ref{GNS inequality}), there exists $\delta'>0$ such that
\begin{align}
&{\mathcal E}(F\eta_{2,\kappa},F\eta_{2,\kappa})\nonumber\\
&\quad =
{\mathcal E}(F\eta_{2,\kappa},F\eta_{2,\kappa})-\lambda^2
E^{{\nu}^{\lambda}_{x_0,y_0}}\left[V(F\eta_{2,\kappa})^2\right]
+\lambda^2
E^{{\nu}^{\lambda}_{x_0,y_0}}\left[V(F\eta_{2,\kappa})^2\right]\nonumber\\
&\quad \ge
-\frac{\lambda}{C} \log E^{{\nu}^{\lambda}_{x_0,y_0}}
\left[e^{C\lambda V}\right]\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
+\lambda^2\delta\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2\nonumber\\
&\quad\ge
-\frac{\lambda}{C}\log\left(1+e^{-\lambda\delta'}\right)
\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
+\lambda^2\delta\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2\nonumber\\
&\quad\ge (\lambda^2\delta-C\lambda e^{-\lambda\delta'})
\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2,\label{Feta2}
\end{align}
where in the second inequality we have used the estimate
(\ref{rough path exponential decay}), and in the third the elementary bound $\log(1+u)\le u$.
By the estimates (\ref{estimate on cut-off}), (\ref{Feta1}),
(\ref{Feta2}) and the fact that
$\|F\eta_{1,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
+\|F\eta_{2,\kappa}\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2=1$,
we get
\begin{equation}
{\mathcal E}^{\lambda}(F,F) \ge
\lambda\min\left(\left(\|(S^{-1})^{\ast}\|_{op}+C\varepsilon\right)^{-2},
\lambda \delta-Ce^{-\lambda\delta'}\right)-M(\kappa).\label{lbd of F}
\end{equation}
By the definition of $e^{\lambda}_{Dir,2,{\mathcal D}}$, this completes the proof.
\end{proof}
\begin{rem}{\rm
Eberle~{\rm \cite{eberle3}} defined a local spectral gap on
${\cal D}$ by
\begin{align}
e^{\lambda}_{E}&=
\inf_{F(\ne 0)\in H^{1,2}_0({\cal D})}
\frac{\int_{{\cal D}}|D_0F|^2d\nu^{\lambda}_{x_0,y_0}}
{\int_{{\cal D}}\left(
F-\frac{1}{\nu^{\lambda}_{x_0,y_0}({\cal D})}\int_{{\cal D}}
Fd\nu^{\lambda}_{x_0,y_0}\right)^2d\nu^{\lambda}_{x_0,y_0}}.
\end{align}
When ${\cal D}$ satisfies
conditions (1), (2) in Theorem~\ref{main theorem 1},
the above proof shows also that
\begin{align}
\lim_{\lambda\to\infty}\frac{e^{\lambda}_E}{\lambda}&=
\sigma_1.
\end{align}
Actually, $e^{\lambda}_E$ is more related to $e^{\lambda}_2$ than
$e^{\lambda}_{Dir,2,{\mathcal D}}$.
}
\end{rem}
\section{A proof of existence of spectral gap}\label{proof of existence
of spectral gap}
We consider the following setting.
Let $(\Omega,{\mathfrak F},\nu)$ be a probability space
and consider a Dirichlet form $({\mathcal E},{\cal F})$ defined on
$L^2(\Omega,\nu)$.
We assume the existence of a square field operator $\Gamma$ such that
$$
{\mathcal E}(F,F)=\int_{\Omega}\Gamma(F,F)d\nu,\qquad F\in {\cal F}.
$$
Also we assume $1\in {\cal F}$ and the diffusion property.
That is, for any $\varphi\in C^1_b({\mathbb R})$ and $F\in {\cal F}$,
it holds that $\varphi(F)\in {\cal F}$ and
\begin{align}
\Gamma(\varphi(F),\varphi(F))=\Gamma(F,F)\varphi'(F)^2.
\end{align}
We write $\Gamma(F)=\Gamma(F,F)$.
We already used the following well-known estimate
(\cite{gross}).
\begin{lem}\label{GNS}
Suppose that for any $F\in {\cal F}$,
\begin{align}
\int_{\Omega}F(w)^2\log(F(w)^2/\|F\|_{L^2(\nu)}^2)d\nu &\le
\alpha{\mathcal E}(F,F).\label{LSI}
\end{align}
Then
for any bounded measurable function $V$, we have
\begin{align}
{\mathcal E}(F,F)+\int_{\Omega}V(w)F(w)^2d\nu(w)&
\ge -\frac{1}{\alpha}\log\left(
\int_{\Omega}e^{-\alpha V(\omega)}d\nu(\omega)\right)
\|F\|_{L^2(\nu)}^2\qquad \mbox{for any $F\in {\cal F}$}.
\end{align}
\end{lem}
Note that in the above lemma,
$({\mathcal E},{\cal F})$ is not necessarily a closed form
and the lemma holds for any bilinear form $({\mathcal E},{\cal F})$
satisfying the logarithmic Sobolev inequality
(\ref{LSI}).
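For the reader's convenience, here is a standard sketch of Lemma~\ref{GNS} via the variational (Gibbs duality) formula for relative entropy: for a probability density $g$ with respect to $\nu$ and a bounded measurable $\phi$, one has $\int \phi g\,d\nu\le \log\int e^{\phi}d\nu+\int g\log g\,d\nu$. Applying this with $\phi=-\alpha V$ and $g=F^2/\|F\|_{L^2(\nu)}^2$ gives

```latex
% Entropy variational inequality with \phi=-\alpha V, g=F^2/\|F\|^2:
\begin{align*}
-\alpha\int_{\Omega} V\,\frac{F^2}{\|F\|_{L^2(\nu)}^2}\,d\nu
&\le \log\int_{\Omega}e^{-\alpha V}\,d\nu
+\int_{\Omega}\frac{F^2}{\|F\|_{L^2(\nu)}^2}
\log\frac{F^2}{\|F\|_{L^2(\nu)}^2}\,d\nu\\
&\le \log\int_{\Omega}e^{-\alpha V}\,d\nu
+\frac{\alpha\,{\mathcal E}(F,F)}{\|F\|_{L^2(\nu)}^2},
\end{align*}
```

where the second step uses (\ref{LSI}); multiplying by $\|F\|_{L^2(\nu)}^2/\alpha$ and rearranging yields the conclusion of the lemma.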
The spectral gap $e_2$ is defined by
$$
e_2=\inf\left\{
{\cal E}(F,F)~\Big |~\|F\|_{L^2(\nu)}=1, \int_{\Omega}F(w)d\nu(w)=0,
F\in {\cal F}
\right\}.
$$
\begin{thm}\label{main theorem 3}
Let ${\cal F}_0$ be a dense linear subset of
${\cal F}$ with respect to ${\cal E}_1$-norm.
Suppose that there exist positive numbers $\alpha,\beta,r_0$
and $\rho\in {\cal F}$ such that
$\Gamma(\rho)(w)\le 1$ $\nu$-a.s. $w$ and
\begin{align}
\int_{\Omega}F(w)^2\log(F(w)^2/\|F\|_{L^2(\nu)}^2)d\nu &\le
\alpha\int_{\Omega}\rho(w)^2\Gamma(F,F)(w)d\nu(w),\quad
\mbox{for all $F\in {\cal F}_0$},\label{LSI general}\\
\nu\left(\rho\ge r\right)&\le e^{-\beta r^2},\qquad \mbox{for all
$r\ge r_0$}.
\end{align}
Then
\begin{align}
e_2&\ge
\frac{1}{4}
\min\left(
\frac{1}{8\alpha R(\alpha,\beta,r_0)^2}, ~\frac{\beta}{36\alpha}
\right),
\end{align}
where
\begin{align}
R(\alpha,\beta,r_0)&=
\max\left(\sqrt{\frac{2}{\beta}},~
\frac{192\alpha}{\sqrt{\beta}},~
48\sqrt{\frac{\alpha}{\beta}},
~r_0\right).
\end{align}
\end{thm}
\begin{proof}
Let $R\ge r_0$.
We consider a partition of unity $\{\chi_k\}_{k\ge 0}$
on $[0,\infty)$
such that
\begin{itemize}
\item[(i)] $\chi_k$ is a $C^1$ function,
\item[(ii)] $\chi_0(u)=1$ for $0\le u\le R$
and $\chi_0(u)=0$ for $u\ge 2R$,
\item[(iii)]
${\rm supp}\, \chi_k\subset [Rk, R(k+2)]$\qquad $(k\ge 1)$,
\item[(iv)]
$\sum_{k=0}^{\infty}\chi_k(u)^2=1$ for all $u\ge 0$,
\item[(v)] $\sup_{k,u}|\chi_k'(u)|\le\frac{2}{R}$.
\end{itemize}
Define $\tilde{\chi}_k(w)=\chi_k(\rho(w))$.
Let $F\in {\cal F}_0$ and assume $\|F\|_{L^2(\nu)}=1$
and $\int_{\Omega}F(w)d\nu(w)=0$.
By the IMS localization formula,
we have
\begin{align}
{\mathcal E}(F,F)&=\sum_{k=0}^{\infty}{\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)-
\sum_{k=0}^{\infty}\int_{\Omega}\Gamma(\tilde{\chi}_k)F^2d\nu
\ge\sum_{k=0}^{\infty}{\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)
-\frac{8}{R^2}\int_{\rho\ge R}F^2d\nu.\label{IMS1}
\end{align}
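The IMS localization formula used here follows from the diffusion property; a short sketch: by the product rule for the square field operator, $\Gamma(F\tilde{\chi}_k)=\tilde{\chi}_k^2\Gamma(F)+2F\tilde{\chi}_k\Gamma(F,\tilde{\chi}_k)+F^2\Gamma(\tilde{\chi}_k)$, and summing over $k$ with property (iv) kills the cross terms:

```latex
% Summing the product rule over k and using \sum_k \tilde{\chi}_k^2=1 (property (iv)):
\begin{align*}
\sum_{k=0}^{\infty}\Gamma(F\tilde{\chi}_k)
&=\Gamma(F)\sum_{k=0}^{\infty}\tilde{\chi}_k^2
+F\,\Gamma\Bigl(F,\sum_{k=0}^{\infty}\tilde{\chi}_k^2\Bigr)
+F^2\sum_{k=0}^{\infty}\Gamma(\tilde{\chi}_k)
=\Gamma(F)+F^2\sum_{k=0}^{\infty}\Gamma(\tilde{\chi}_k).
\end{align*}
```

Integrating against $\nu$ gives the equality in (\ref{IMS1}); the inequality then follows since $\Gamma(\tilde{\chi}_k)=\chi_k'(\rho)^2\Gamma(\rho)\le 4/R^2$ by (v) and $\Gamma(\rho)\le 1$, at most two of the $\tilde{\chi}_k$ are non-zero at each point, and all $\Gamma(\tilde{\chi}_k)$ vanish on $\{\rho<R\}$.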
We estimate each term ${\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)$.
First, we estimate ${\mathcal E}(F\tilde{\chi}_0,F\tilde{\chi}_0)$.
We have
\begin{align}
\left|\int_{\Omega}F(w)\tilde{\chi}_0(w)d\nu(w)\right|&=
\left|\int_{\Omega}F(w)(\tilde{\chi}_0(w)-1)d\nu(w)\right|
\le \nu\left(\rho\ge R\right)^{1/2}\le
e^{-\beta R^2/2}.
\end{align}
The log-Sobolev inequality implies the Poincar\'e inequality and we have
\begin{align}
{\mathcal E}(F\tilde{\chi}_0,F\tilde{\chi}_0)
&\ge \frac{1}{2\alpha R^2}
\left(\|F\tilde{\chi}_0\|_{L^2(\nu)}^2-e^{-\beta
R^2}\right).\label{lower bound E0}
\end{align}
Next we estimate ${\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)$ for $k\ge 1$.
Let $\phi_k(w)=1_{[Rk,R(k+2)]}(\rho(w))$ and $\delta>0$.
Then by (\ref{LSI general}) and Lemma~\ref{GNS},
\begin{align}
{\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)&=
{\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)-
\int_{\Omega}\delta\phi_k(w)
(F\tilde{\chi}_k)^2(w)d\nu(w)+
\int_{\Omega}\delta\phi_k(w)
(F\tilde{\chi}_k)^2(w)d\nu(w)\nonumber\\
&\ge
-\frac{1}{\alpha R^2(k+2)^2}
\log\left(\int_{\Omega}
e^{\alpha\delta R^2(k+2)^2\phi_k(w)}
d\nu(w)\right)
\|F\tilde{\chi}_k\|_{L^2(\nu)}^2\nonumber\\
&\qquad +
\delta\|F\tilde{\chi}_k\|_{L^2(\nu)}^2.
\end{align}
By the tail estimate of $\rho$, we have
\begin{align}
\int_{\Omega}
e^{\alpha\delta R^2(k+2)^2\phi_k(w)}d\nu(w)
&\le
1+e^{\alpha\delta R^2(k+2)^2-\beta (Rk)^2}.
\end{align}
Hence, using $\log(1+u)\le u$,
\begin{align}
{\mathcal E}(F\tilde{\chi}_k,F\tilde{\chi}_k)&\ge
\left(
\delta-
\frac{\exp\left\{\left(\alpha\delta
(k+2)^2-\beta k^2\right)R^2\right\}}
{\alpha R^2(k+2)^2}
\right)\|F\tilde{\chi}_k\|_{L^2(\nu)}^2.
\label{lower bound Ek}
\end{align}
For simplicity, we write
\begin{align}
G(\delta,\alpha,\beta,R)&=
\delta-\sup_{k\ge 1}\frac{\exp\left\{\left(\alpha\delta
(k+2)^2-\beta k^2\right)R^2\right\}}
{\alpha R^2(k+2)^2}.
\end{align}
Summing both sides of the inequalities (\ref{lower bound E0}) and
(\ref{lower bound Ek})
and using property (iv),
we obtain the following inequality
\begin{align}
{\mathcal E}(F,F)&\ge
\min\left(\frac{1}{2\alpha R^2},
G(\delta,\alpha,\beta,R)\right)
-\frac{e^{-\beta R^2}}{2\alpha R^2}
-\frac{8}{R^2}\int_{\rho\ge R}F^2d\nu\label{lower bound Ea}
\end{align}
which we denote by $I(\delta,\alpha,\beta,R)$.
If $\frac{1}{2\alpha}>8$, this inequality
with large $R$ and small $\delta$ already implies
the existence of a spectral gap.
In general, a further argument is needed.
Since $\sum_{k=1}^{\infty}\|F\tilde{\chi}_k\|_{L^2(\nu)}^2\ge
\int_{\rho\ge 2R}F(w)^2d\nu(w)$,
by (\ref{IMS1}) and (\ref{lower bound Ek}),
\begin{align}
{\mathcal E}(F,F)&\ge
G(\delta,\alpha,\beta,R)
\int_{\rho\ge 2R}F^2(w)d\nu(w)-\frac{8}{R^2}.\label{lower bound Eb}
\end{align}
Let $0\le \varepsilon\le 1$.
Multiplying both sides of the inequality
$I(\delta,\alpha,\beta,2R)$ by $1-\varepsilon$,
both sides of (\ref{lower bound Eb}) by $\varepsilon$,
and summing,
we obtain
\begin{align}
{\mathcal E}(F,F)&\ge
(1-\varepsilon)\min\left(\frac{1}{8\alpha R^2},
G(\delta,\alpha,\beta,2R)
\right)\nonumber\\
&-\frac{(1-\varepsilon)e^{-4\beta R^2}}{8\alpha R^2}
-\frac{8\varepsilon}{R^2}
+\left(\varepsilon G(\delta,\alpha,\beta,R)-\frac{2(1-\varepsilon)}{R^2}\right)
\int_{\rho\ge 2R}F^2(w)d\nu(w).
\end{align}
Now let $\delta=\frac{\beta}{18\alpha}$.
Then by an elementary calculation,
\begin{align}
G(\delta,\alpha,\beta,R)\ge
\frac{\beta}{18\alpha}-\frac{e^{-\beta R^2/2}}{9\alpha R^2}.
\end{align}
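The elementary calculation behind this bound, for completeness: for $k\ge 1$ one has $(k+2)^2\le 9k^2$ and $(k+2)^2\ge 9$, so with $\alpha\delta=\beta/18$,

```latex
% For k\ge 1: (k+2)^2\le 9k^2, so the exponent is at most -\beta k^2 R^2/2,
% and (k+2)^2\ge 9 bounds the denominator from below:
\begin{align*}
\frac{\exp\left\{\left(\frac{\beta}{18}(k+2)^2-\beta k^2\right)R^2\right\}}
{\alpha R^2(k+2)^2}
\le\frac{e^{-\beta k^2R^2/2}}{9\alpha R^2}
\le\frac{e^{-\beta R^2/2}}{9\alpha R^2}.
\end{align*}
```

Taking the supremum over $k\ge 1$ gives the displayed lower bound for $G(\delta,\alpha,\beta,R)$.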
Hence,
if
\begin{align}
\frac{\beta}{18\alpha}\ge\frac{e^{-\beta R^2/2}}{9\alpha R^2}
+\frac{2(1-\varepsilon)}{R^2\varepsilon}, \label{tail estimate integral}
\end{align}
then
\begin{align}
{\mathcal E}(F,F)&\ge
(1-\varepsilon)\min\left(\frac{1}{8\alpha R^2},
\frac{\beta}{18\alpha}-\frac{e^{-2\beta R^2}}{36\alpha R^2}
\right)
-\frac{(1-\varepsilon)e^{-4\beta R^2}}{8\alpha R^2}
-\frac{8\varepsilon}{R^2}.\label{lower bound Ec}
\end{align}
By choosing $\varepsilon,R$ appropriately,
we give a lower bound for ${\mathcal E}(F,F)$.
First, let us choose $\varepsilon$ such that
\begin{align}
\varepsilon=\min\left(\frac{1}{2},~\frac{1}{512\alpha}\right).
\end{align}
We next choose $R$ such that
\begin{align}
\max\left(\frac{e^{-\beta R^2/2}}{9\alpha R^2},~
\frac{2}{R^2\varepsilon}\right)
&\le \frac{\beta}{36\alpha}.\label{R1}
\end{align}
This condition is equivalent to
\begin{align}
e^{-\beta R^2/2}\le \frac{1}{4}\beta R^2,\quad
R^2\ge \frac{72\alpha}{\beta \varepsilon}.
\end{align}
Under this condition, the inequality (\ref{tail estimate integral}) holds and
by using (\ref{lower bound Ec}), we have
\begin{align}
{\mathcal E}(F,F)&\ge
\frac{1}{2}\min\left(\frac{1}{8\alpha R^2},~
\frac{\beta}{36\alpha}
\right)
-\frac{e^{-4\beta R^2}}{8\alpha R^2}
-\frac{8\varepsilon}{R^2}.\label{lower bound Ed}
\end{align}
Furthermore, we restrict $R$ so that
\begin{align}
\max\left(
\frac{e^{-4\beta R^2}}{8\alpha R^2},~
\frac{8\varepsilon}{R^2}
\right)\le
\frac{1}{8}\min\left(\frac{1}{8\alpha R^2},~
\frac{\beta}{36\alpha}
\right).\label{R2}
\end{align}
This condition is equivalent to
\begin{align}
e^{-2\beta R^2}\le \frac{1}{8},\quad
e^{-4\beta R^2}\le \frac{\beta R^2}{36},\quad
\varepsilon\le \frac{1}{512\alpha}, \quad
R^2\ge
\frac{48^2\alpha}{\beta}\varepsilon.
\end{align}
Thus,
(\ref{R1}) and (\ref{R2}) hold if
\begin{align}
R\ge\max\left(\sqrt{\frac{2}{\beta}},~
48\sqrt{\frac{\alpha}{\beta}}, ~\sqrt{\frac{72\alpha}{\beta\varepsilon}}\right).
\end{align}
Combining the inequalities (\ref{lower bound Ed})
and (\ref{R2}), we obtain the desired estimate.
\end{proof}
\section{Proof of Theorem~\ref{main theorem 2}}
We prove Theorem~\ref{main theorem 2}
by using the argument in the proof of
Theorem~\ref{main theorem 1} and Theorem~\ref{main theorem 3}.
To this end, we need a tail estimate of
$\rho_{y_0}(\gamma)$.
\begin{lem}\label{main lemma 3}
Let $M$ be an $n$-dimensional rotationally symmetric Riemannian manifold
with a pole $y_0$.
Suppose $\|\varphi'\|_{\infty}<\infty$
and Assumption~{\rm \ref{assumption A}}
is satisfied.
Let $\lambda_0>0$.
Let $\rho_{y_0}(\gamma)=1+\max_{0\le t\le 1}d(y_0,\gamma(t))$.
Then there exists a positive constant
$r_0$ which depends on $\varphi$, $\lambda_0$, $d(x_0,y_0)$ and
the dimension $n$
and a positive constant $C_2$ which depends only on $n$
such that
\begin{align}
\nu^{\lambda}_{x_0,y_0}\left(\rho_{y_0}(\gamma)\ge r\right)&
\le e^{-C_2\lambda r^2}
\quad \mbox{for all $r\ge r_0$ and $\lambda\ge \lambda_0$}.\label{tail estimate}
\end{align}
\end{lem}
\begin{proof}
Let $z_0$ be either $x_0$ or $y_0$.
Let $X_t$ be the Brownian motion starting at $z_0$ on
$M$
whose generator is $\Delta/(2\lambda)$.
First, we give a tail estimate on
$\rho_{y_0}$ with respect to
$\nu^{\lambda}_{z_0}$.
Let $Y_t=d(X_t,y_0)$.
Note that $\Delta_x
d(x,y_0)=(n-1)\left(\frac{1}{d(x,y_0)}+\varphi'(d(x,y_0))\right)$
and $|\nabla_x d(x,y_0)|=1$.
By the It\^o formula, we have
\begin{align}
Y_t=
d(z_0,y_0)+\frac{1}{\sqrt{\lambda}}B_t
+\int_0^t\frac{n-1}{2\lambda}
\left(\frac{1}{Y_s}+\varphi'(Y_s)\right)ds.
\label{sde1}
\end{align}
Here $B_t$ is a $1$-dimensional standard Brownian motion.
We can rewrite this equation as
\begin{align}
\sqrt{\lambda}Y_t&=
\sqrt{\lambda}d(z_0,y_0)+B_t+
\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}t+
\int_0^t\frac{n-1}{2\sqrt{\lambda}Y_s}ds\nonumber\\
&\quad +\int_0^t\frac{n-1}{2\sqrt{\lambda}}
\left(\varphi'(Y_s)-\|\varphi'\|_{\infty}\right)ds.
\label{sde2}
\end{align}
Let $\tilde{Z}_t$ be the strong solution to the
SDE:
\begin{align}
\tilde{Z}_t&=
\sqrt{\lambda}d(z_0,y_0)+
B_t+\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}t+
\int_0^t\frac{n-1}{2\tilde{Z}_s}ds,\label{sde3}
\end{align}
where $B_t$ is the same Brownian motion as in
(\ref{sde2}).
Then by the comparison theorem for $1$-dimensional SDEs
(see \cite{ikeda-watanabe}),
we see
\begin{align}
\sqrt{\lambda}Y_t\le \tilde{Z}_t\qquad t\ge 0.
\end{align}
Let us define
$\hat{Z}_t=\tilde{Z}_t-\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}t$.
Then $\hat{Z}_t$ satisfies the SDE
\begin{align}
\hat{Z}_t=\sqrt{\lambda}d(z_0,y_0)+\int_0^t
\frac{n-1}{2}\frac{1}{\hat{Z}_s+
\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}s}ds
+B_t.
\end{align}
Now consider the $n$-dimensional
Bessel process $Z_t$, the strong solution of the SDE:
\begin{align}
Z_t=\sqrt{\lambda}d(z_0,y_0)+\int_0^t
\frac{n-1}{2}\frac{1}{Z_s}ds
+B_t.
\end{align}
Again by the comparison theorem, we have
\begin{align}
\hat{Z}_t\le Z_t \qquad t\ge 0.
\end{align}
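Combining the two comparison results, for $0\le t\le 1$ we have the chain

```latex
\begin{align*}
\sqrt{\lambda}\,Y_t\le \tilde{Z}_t
=\hat{Z}_t+\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}t
\le Z_t+\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty},
\end{align*}
```

which is what is used in the tail estimate below.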
The law of $\{Z_t\}_{t\ge 0}$ is the same as
the law of $\{|B^{(n)}_t+\sqrt{\lambda}d(z_0,y_0){\bf e}|\}$,
where $B^{(n)}$ is the standard Brownian motion starting at $0$ and
${\bf e}$ is a unit vector in ${\mathbb R}^n$.
Thus, for any $r>0$, we have
\begin{align}
P\left(\max_{0\le t\le 1}Y_t\ge r\right)
&\le P\left(\max_{0\le t\le 1}|B^{(n)}_t+\sqrt{\lambda}
d(z_0,y_0){\bf e}|+\frac{n-1}{2\sqrt{\lambda}}\|\varphi'\|_{\infty}\ge
\sqrt{\lambda}r\right)\nonumber\\
&\le P\left(\max_{0\le t\le 1}|B^{(n)}_t|\ge
\sqrt{\lambda}\left(r-d(z_0,y_0)-
\frac{n-1}{2\lambda}\|\varphi'\|_{\infty}\right)\right).
\end{align}
Let $C_n=E[\max_{0\le t\le 1}|B^{(n)}_t|]$.
Then there exists $C>0$ such that for any $r>C_n$,
\begin{align}
P\left(\max_{0\le t\le 1}|B^{(n)}_t|\ge r\right)
\le C \exp\left(-\frac{1}{2}(r-C_n)^2\right).
\end{align}
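This Gaussian tail bound is an instance of Gaussian concentration (for example, the Borell–TIS inequality): $\max_{0\le t\le 1}|B^{(n)}_t|=\sup\{\langle B^{(n)}_t,e\rangle : 0\le t\le 1,\ |e|=1\}$ is the supremum of a centred Gaussian family whose maximal variance is $1$, hence

```latex
\begin{align*}
P\left(\max_{0\le t\le 1}|B^{(n)}_t|\ge C_n+u\right)\le e^{-u^2/2},
\qquad u>0,
\end{align*}
```

and putting $u=r-C_n$ gives the stated inequality (this route even allows $C=1$; we keep a generic constant $C$ to match the text).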
Hence, if
$r>d(z_0,y_0)+\frac{n-1}{2\lambda}\|\varphi'\|_{\infty}+\frac{C_n}{\sqrt{\lambda}}$,
then
\begin{align}
P\left(\max_{0\le t\le 1}
Y_t\ge r\right)&\le
C\exp\left[
-\frac{\lambda}{2}
\left(r-d(z_0,y_0)-\frac{n-1}{2\lambda}\|\varphi'\|_{\infty}
-\frac{C_n}{\sqrt{\lambda}}\right)^2\right].
\end{align}
This shows that there exist $r_0>0$, which depends only on
$d(z_0,y_0)$, $\lambda_0$, $\|\varphi'\|_{\infty}$ and $n$, and a positive constant $C$
such that
\begin{align}
\nu^{\lambda}_{z_0}\left(\rho_{y_0}(\gamma)\ge r\right)\le
e^{-\lambda C r^2}\quad \mbox{for all $r\ge r_0$}.\label{tail estimate for rho1}
\end{align}
The tail estimate for $\nu^{\lambda}_{x_0,y_0}$
can be proved by using
the absolute continuity
of $\nu^{\lambda}_{x_0,y_0}$
with respect to
$\nu^{\lambda}_{x_0}$
up to time $t<1$.
The density is given by
\begin{align}
\frac{d\nu^{\lambda}_{x_0,y_0}}{d\nu^{\lambda}_{x_0}}(\gamma)
\Big |_{{\mathfrak F}_t}=
\frac{p\left(\frac{1-t}{\lambda},y_0,\gamma(t)\right)}
{p\left(\frac{1}{\lambda},y_0,x_0\right)}=\varphi_{x_0,y_0}(t,\gamma).
\end{align}
Recall that the Gaussian upper bound holds
for all $0<t\le 1$ and $x,y\in M$:
\begin{align}
p(t,x,y)&\le C't^{-n/2}e^{-C''d(x,y)^2/t}.
\end{align}
By Varadhan's heat kernel estimate,
for any $\varepsilon>0$, we have for sufficiently large $\lambda$,
\begin{align}
p(1/\lambda,y_0,x_0)\ge
e^{-\lambda\frac{d(y_0,x_0)^2+\varepsilon}{2}}.
\label{heat kernel estimate}
\end{align}
By using these estimates,
we obtain
\begin{align}
\varphi_{x_0,y_0}\left(\frac{1}{2},\gamma\right)\le
C' \lambda^{n/2}e^{\frac{\lambda}{2}\left(d(x_0,y_0)^2+\varepsilon\right)}.
\label{density estimate}
\end{align}
This estimate and (\ref{tail estimate for rho1}) imply that
\begin{align}
\nu^{\lambda}_{x_0,y_0}\left(1+\max_{0\le t\le 1/2}d(y_0,\gamma(t))\ge r\right)
\le C' \lambda^{n/2}e^{\frac{\lambda}{2}\left(d(x_0,y_0)^2+\varepsilon\right)-\lambda Cr^2}
\quad \mbox{for all $r\ge r_0$}.
\end{align}
Since
\begin{align}
\nu^{\lambda}_{x_0,y_0}\left(1+\max_{1/2\le t\le 1}d(y_0,\gamma(t))\ge
r\right)
=
\nu^{\lambda}_{y_0,x_0}\left(1+\max_{0\le t\le 1/2}d(y_0,\gamma(t))\ge r\right),
\end{align}
using (\ref{tail estimate for rho1}) with $z_0=y_0$,
similarly,
we obtain the desired tail estimate for $\rho_{y_0}$ under
$\nu^{\lambda}_{x_0,y_0}$.
\end{proof}
\begin{proof}[Proof of Theorem~$\ref{main theorem 2}$]
Let $\lambda_0>0$ and consider a positive number
$\lambda\ge \lambda_0$.
By Lemma~\ref{main lemma 3},
the assumptions in Theorem~\ref{main theorem 3} are valid for
$\rho=\rho_{y_0}$, $\alpha=C_1/\lambda, \beta=C_2\lambda$ and $r_0$.
Hence Theorem~\ref{main theorem 3} implies $e^{\lambda}_2>0$
for all $\lambda>0$.
We need to prove the asymptotic behavior
(\ref{asymptotics of ela2}).
We argue as in the proof of Theorem~\ref{main theorem 3};
that is, we use the same functions and the same choice of
$R, \delta, \varepsilon$ defined there.
Let $F\in {\cal FC}^{\infty}_b(P_{x_0,y_0}(M))$
and assume $\|F\|_{L^2(\nu^{\lambda}_{x_0,y_0})}=1$ and
$E^{\nu^{\lambda}_{x_0,y_0}}\left[F\right]=0$.
Then by the IMS localization formula
$(\ref{IMS1})$,
we get
\begin{align}
{\mathcal E}^{\lambda}(F,F)&\ge
{\mathcal E}^{\lambda}(F\tilde{\chi}_0,F\tilde{\chi}_0)+
(C\lambda^2-C'\lambda)\sum_{k=1}^{\infty}\|F\tilde{\chi}_k\|_{L^2}^2
-\frac{8}{R^2}.
\end{align}
Next we estimate ${\mathcal E}^{\lambda}(F\tilde{\chi}_0,F\tilde{\chi}_0)$.
Since this is a local estimate, we may modify the Riemannian metric
so that it is flat outside a certain compact subset.
Take the same functions $\eta_{1,\kappa}, \eta_{2,\kappa}$
as in the proof of the lower bound estimate in
Theorem~\ref{main theorem 1}.
Then by the estimate (\ref{rough path exponential decay}),
$|E^{\nu^{\lambda}_{x_0,y_0}}[F\tilde{\chi}_0\eta_{1,\kappa}]|
\le Ce^{-\lambda C}$.
In a similar way to the proof of the lower bound in
Theorem~\ref{main theorem 1},
we obtain
\begin{align*}
{\mathcal E}^{\lambda}(F\tilde{\chi}_0,F\tilde{\chi}_0)\ge \lambda
\min\left(\left(\|(S^{-1})^{\ast}\|_{op}+C\varepsilon\right)^{-2},
\lambda \delta-Ce^{-\lambda\delta'}\right)
\|F\tilde{\chi}_0\|_{L^2(\nu^{\lambda}_{x_0,y_0})}^2
-Ce^{-\lambda C}-M(\kappa).
\end{align*}
Combining the above, the proof of the lower bound is completed.
The upper bound estimate immediately follows from the
estimate (\ref{upper bound 1}) and (\ref{upper bound 2}).
\end{proof}
\noindent
{\bf Acknowledgement}
\noindent
This research was partially supported by Grant-in-Aid for
Scientific Research (B) No.24340023.
The author would like to thank the referees for their
valuable comments and suggestions, which improved the
quality of the paper.
\section{Introduction}
Let $(M, g)$ be a compact Riemannian manifold of dimension $n\geq3$ with smooth boundary $\partial M$, and set $\overline{M}:=M\cup\partial M$.
In this paper, we study the Dirichlet problem for a class of Hessian quotient equations
\begin{equation}\label{Eq}
\left\{
\begin{aligned}
&\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}(\lambda[U])=\psi(x,u,\nabla u) &&in~
M,\\
&u = \varphi &&on~\partial M,
\end{aligned}
\right.
\end{equation}
where $U=\tau(\Delta u)g-\nabla^2u$ with $\tau\geq1$, $\nabla^2u$ denotes the Hessian of $u$, $\lambda[U] = (\lambda_1,\cdots, \lambda_n)$ are the
eigenvalues of $U$ with respect to the metric $g$ and
$\psi$ is a positive $C^{\infty}$ function with respect to $(x,z,p)\in \overline{M}\times \mathbb{R}\times T_xM$, where $T_xM$ denotes the tangent space of $M$ at $x$.
Our interest on the solvability of equation \eqref{Eq} is motivated from
the complex Monge-Amp\`ere type equations. Recently, Harvey-Lawson \cite{Ha12, Ha11-} introduced a class of functions $u\in C^2(\mathbb{C}^n)$, named $(n-1)$-plurisubharmonic, such that the complex Hessian matrix
\begin{eqnarray}\label{c-1}
\bigg[\Big(\sum_{m=1}^{n}\frac{\partial^2u}{\partial
z_m\partial\overline{z}_m}\Big)\delta_{ij}-\frac{\partial^2u}{\partial
z_i\partial\overline{z}_j}\bigg]_{1\leq i,j\leq n}
\end{eqnarray}
is nonnegative definite. For $(n-1)$-plurisubharmonic functions, one can consider the following complex Monge-Amp\`ere equations
\begin{eqnarray}\label{c}
\mathrm{det}\bigg(\Big(\sum_{m=1}^{n}\frac{\partial^2u}{\partial
z_m\partial\overline{z}_m}\Big)\delta_{ij}-\frac{\partial^2u}{\partial
z_i\partial\overline{z}_j}\bigg)=\psi.
\end{eqnarray}
If $\psi$ does not depend on $\nabla u$,
the Dirichlet problem for \eqref{c} on strict pseudo-convex domains in $\mathbb{C}^n$ was solved by Li \cite{Li04}, who also considered a general class of operators. Tosatti-Weinkove \cite{To17, To19} showed that the associated complex
Monge-Amp\`ere equation can be solved on any compact K\"{a}hler manifold.
Harvey-Lawson \cite{Ha11, Ha12} investigated the corresponding complex Monge-Amp\`ere equation and solved the Dirichlet problem with $\psi=0$ on suitable domains. Then Han-Ma-Wu \cite{Ha09}
considered $k$-convex solutions of the complex Laplace equation.
Moreover, complex Hessian equations involving a gradient term on the left-hand side have attracted the interest of many authors due to their geometric applications, such as the Gauduchon conjecture, which was solved by Sz\'ekelyhidi-Tosatti-Weinkove \cite{Sz17}; see also Guan-Nie \cite{Guan21}.
For more references, we refer the readers to \cite{Ga84,Fu10, Fu15, Sz18} and
references therein.
If the complex Hessian matrix in \eqref{c-1} is replaced by the real Hessian matrix,
a natural question is whether one can establish the regularity and solvability of
the Dirichlet problem for this kind of fully nonlinear equation (such as \eqref{Eq}). This work is a further study of the Dirichlet problem for \eqref{Eq}, with gradient terms on the right-hand side of the equation, following a recent work of Chu-Jiao \cite{Chu20}.
To ensure the ellipticity of \eqref{Eq}, we need $\lambda[U]\in \Gamma_k$. Hence we introduce the following definition.
\begin{definition}\label{def-1}
A function $u\in C^2(M)$ is called admissible (i.e., $(\eta, k)$-convex) if $\lambda[U] \in \Gamma_k$ for any $x\in M$,
where $\Gamma_k$ is the Garding cone
\begin{eqnarray*}\label{cone}
\Gamma_{k}=\{\lambda \in \mathbb{R} ^n: \sigma_{j}(\lambda)>0, \forall ~ 1\leq j \leq k\}.
\end{eqnarray*}
\end{definition}
The main theorem is as follows.
\begin{theorem}\label{main}
Let $l+2\leq k\leq n$, $\varphi \in C^{\infty}(\partial M)$, $\psi\in C^{\infty}(\overline{M}\times \mathbb{R}\times T_xM)$ with $\psi, \psi_z>0$.
Assume that there exists an admissible subsolution $\underline{u}\in C^2(\overline{M})$ satisfying
\begin{equation}\label{Eq-sub}
\left\{
\begin{aligned}
&\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}\left(\lambda[\underline{U}]\right)\geq \psi(x,\underline{u},\nabla\underline{u}) &&in~
M,\\
&\underline{u} = \varphi &&on~\partial M,
\end{aligned}
\right.
\end{equation}
where $\underline{U}=\tau(\Delta \underline{u})g-\nabla^2\underline{u}$. Then the Dirichlet problem \eqref{Eq} is uniquely solvable for $u\in C^{\infty}(\overline{M})$ with $\lambda[U] \in\Gamma_k$.
\end{theorem}
In order to prove Theorem \ref{main}, a major challenge comes from the second order estimates on a domain whose boundary is only assumed to be smooth, with no further restrictions on its shape. We establish the following global second order estimates.
\begin{theorem}\label{main-1}
Let $l+2\leq k\leq n$, $\varphi \in C^{2}(\partial M)$, $\psi\in C^{2}(\overline{M}\times \mathbb{R}\times T_xM)$ with $\psi, \psi_z>0$, and let $u\in C^4(M)\cap C^2(\overline{M})$ be an admissible solution of the Dirichlet problem \eqref{Eq}.
Assume that there exists an admissible subsolution $\underline{u}\in C^2(\overline{M})$ satisfying \eqref{Eq-sub}. Then there exists a constant $C$ depending on $n, k, l, \|u\|_{C^1}, \|\underline{u}\|_{C^2}, \inf \psi, \|\psi\|_{C^2}$ and the curvature
tensor $R$ such that
$$\sup_{ \overline{M}} |\nabla^2 u | \leq C.$$
\end{theorem}
If $U=\tau(\Delta u)g-\nabla^2u$ is replaced by the Hessian matrix $\nabla^2u$, equation \eqref{Eq} becomes the classical Hessian quotient equation
\begin{equation}\label{hq-Eq-1}
\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}\left(\lambda[\nabla^2 u]\right)=\psi(x,u,\nabla u) \quad \mbox{in}~
M,
\end{equation}
which has been widely studied in the past decades for the Euclidean case.
When $\psi=\psi(x)$,
$C^2$ estimates were obtained by Caffarelli-Nirenberg-Spruck \cite{CNS85} for $l=0$, who treated a general class of fully nonlinear equations under conditions on the geometry of $\partial M$, followed by \cite{Guan94,LY90}. Such estimates for \eqref{hq-Eq-1} were then established by Trudinger \cite{Tr95}, by Ivochkina-Trudinger-Wang \cite{ITW04}, who considered the degenerate case, and by Guan \cite{Guan14}, who treated a general class of fully nonlinear equations on Riemannian
manifolds without geometric restrictions on the boundary.
When $\psi=\psi(x, u, \nabla u)$, equation \eqref{hq-Eq-1} falls into the setup of Guan-Jiao \cite{Guan15} (see also \cite{Guan99}), and the $C^2$ estimate was obtained under the concavity assumption of $\psi$ on $\nabla u$. In Theorem \ref{main-1}, we remove this concavity assumption for equation \eqref{Eq}.
It is worth noting that equations of the type \eqref{Eq} arise naturally in many other important geometric problems. Another example is a class of prescribed curvature problems. A $(0, 2)$-tensor on a hypersurface $M\subset \mathbb{R}^{n+1}$ is defined by
\begin{eqnarray*}
\eta_{ij}=Hg_{ij}-h_{ij},
\end{eqnarray*}
where $g_{ij}$ is the induced metric of $M$ from $\mathbb{R}^{n+1}$,
$h_{ij}$ and $H$ are the second fundamental form and the mean
curvature of $M$ respectively. The $(n-1)$-convex
hypersurface (i.e. $\eta_{ij}$ is nonnegative definite) has
been studied intensively by Sha \cite{S86, S87}, Wu \cite{Wu87}, and
Harvey-Lawson \cite{Ha13}. Recently, Chu-Jiao \cite{Chu20}
considered the following prescribed curvature problem
\begin{eqnarray*}
\sigma_{k}(\eta_{ij}(X))=\psi(X, \nu(X)), \quad X \in M,
\end{eqnarray*}
where $\nu$ is the unit outer normal vector of $M$. Later on,
the authors of \cite{Chen20} studied the corresponding Hessian
quotient type prescribed curvature problem. Moreover,
an analogue of equation \eqref{Eq} on compact manifolds also appeared naturally in conformal geometry, see Gursky-Viaclovsky \cite{Ger03}, Li-Sheng \cite{LS11} and Sheng-Zhang \cite{Sheng07}.
The organization of the paper is as follows. In Section 2 we start with some preliminaries.
Our proof of the estimates heavily depends on the
results in Sections 3 and 4. $C^1$ estimates are given in Section 3. In Section 4 we derive the global estimates for second derivatives, and finish the proofs of Theorem \ref{main} and Theorem \ref{main-1}.
\section{Preliminaries}
For $\lambda=(\lambda_1,\dots,\lambda_n)\in\mathbb{R}^n$, we recall
the definition of the elementary symmetric functions: for $1\leq k\leq n$,
\begin{equation*}
\sigma_k(\lambda)= \sum _{1 \le i_1 < i_2 <\cdots<i_k\leq
n}\lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_k}.
\end{equation*}
We also set $\sigma_0=1$ and $\sigma_k=0$ for $k>n$ or $k<0$. The Garding cone is defined by
\begin{equation*}
\Gamma_k = \{ \lambda \in \mathbb{R}^n :\sigma _i (\lambda ) >
0,\forall 1 \le i \le k\}.
\end{equation*}
We denote $\sigma_{k-1}(\lambda|i)=\frac{\partial
\sigma_k}{\partial \lambda_i}$ and
$\sigma_{k-2}(\lambda|ij)=\frac{\partial^2 \sigma_k}{\partial
\lambda_i\partial \lambda_j}$. Next, we list some properties of
$\sigma_k$ which will be used later.
\begin{proposition}\label{sigma}
Let $\lambda=(\lambda_1,\dots,\lambda_n)\in\mathbb{R}^n$ and $1\leq
k\leq n$, then we have
(1) $\Gamma_1\supset \Gamma_2\supset \cdot\cdot\cdot\supset
\Gamma_n$;
(2) $\sigma_{k-1}(\lambda|i)>0$ for $\lambda \in \Gamma_k$ and
$1\leq i\leq n$;
(3) $\sigma_k(\lambda)=\sigma_k(\lambda|i)
+\lambda_i\sigma_{k-1}(\lambda|i)$ for $1\leq i\leq n$;
(4)
$\sum_{i=1}^{n}\frac{\partial[\frac{\sigma_{k}}{\sigma_{l}}]^{\frac{1}{k-l}}}
{\partial \lambda_i}\geq [\frac{C^k_n}{C^l_n}]^{\frac{1}{k-l}}$ for
$\lambda \in \Gamma_{k}$ and $0\leq l<k$;
(5) $\Big[\frac{\sigma_k}{\sigma_l}\Big]^{\frac{1}{k-l}}$ are
concave in $\Gamma_k$ for $0\leq l<k$;
(6) If $\lambda_1\geq \lambda_2\geq \cdot\cdot\cdot\geq \lambda_n$,
then $\sigma_{k-1}(\lambda|1)\leq \sigma_{k-1}(\lambda|2)\leq
\cdot\cdot\cdot\leq \sigma_{k-1}(\lambda|n)$ for $\lambda \in
\Gamma_k$;
(7)
$\sum_{i=1}^{n}\sigma_{k-1}(\lambda|i)=(n-k+1)\sigma_{k-1}(\lambda)$.
\end{proposition}
\begin{proof}
All the properties are well known. For example, see Chapter XV in
\cite{Li96} or \cite{Hui99} for proofs of (1), (2), (3), (6) and
(7); see Lemma 2.2.19 in \cite{Ger06} for the proof of (4); see
\cite{CNS85} and \cite{Li96} for the proof of (5).
\end{proof}
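As an aside, the purely algebraic identities (3) and (7) are easy to confirm numerically; the sketch below does so with a brute-force implementation of $\sigma_k$ at an arbitrary random sample point (the values of $n$, $k$ and $\lambda$ are arbitrary).

```python
from itertools import combinations
from math import prod, isclose
import random

def sigma(k, lam):
    """sigma_k(lam): elementary symmetric polynomial, with sigma_0 = 1."""
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, k))

random.seed(0)
n, k = 6, 3
lam = [random.uniform(0.5, 2.0) for _ in range(n)]   # arbitrary sample point

for i in range(n):
    rest = lam[:i] + lam[i + 1:]                     # "lambda | i"
    # (3): sigma_k(lam) = sigma_k(lam|i) + lam_i * sigma_{k-1}(lam|i)
    assert isclose(sigma(k, lam),
                   sigma(k, rest) + lam[i] * sigma(k - 1, rest))

# (7): sum_i sigma_{k-1}(lam|i) = (n-k+1) * sigma_{k-1}(lam)
lhs = sum(sigma(k - 1, lam[:i] + lam[i + 1:]) for i in range(n))
assert isclose(lhs, (n - k + 1) * sigma(k - 1, lam))
print("identities (3) and (7) hold at the sampled point")
```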
The generalized Newton-MacLaurin inequality is as follows, which
will be used later.
\begin{proposition}\label{NM}
For $\lambda \in \Gamma_m$ and $m > l \geq 0$, $ r > s \geq 0$, $m
\geq r$, $l \geq s$, we have
\begin{align}
\Bigg[\frac{{\sigma _m (\lambda )}/{C_n^m }}{{\sigma _l (\lambda
)}/{C_n^l }}\Bigg]^{\frac{1}{m-l}} \le \Bigg[\frac{{\sigma _r
(\lambda )}/{C_n^r }}{{\sigma _s (\lambda )}/{C_n^s
}}\Bigg]^{\frac{1}{r-s}}. \notag
\end{align}
\end{proposition}
\begin{proof}
See \cite{S05}.
\end{proof}
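The inequality can likewise be spot-checked numerically on random points of $\Gamma_n\subset\Gamma_m$; the index choices $(m,l)$ and $(r,s)$ in the sketch below are arbitrary admissible instances of the hypotheses.

```python
from itertools import combinations
from math import prod, comb
import random

def sigma(k, lam):
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, k))

def quotient(m, l, lam):
    """[(sigma_m / C(n,m)) / (sigma_l / C(n,l))]^(1/(m-l))."""
    n = len(lam)
    return ((sigma(m, lam) / comb(n, m))
            / (sigma(l, lam) / comb(n, l))) ** (1.0 / (m - l))

random.seed(42)
n = 6
for _ in range(100):
    lam = [random.uniform(0.1, 3.0) for _ in range(n)]  # positive => Gamma_n
    # admissible indices: m > l >= 0, r > s >= 0, m >= r, l >= s
    assert quotient(4, 1, lam) <= quotient(3, 1, lam) + 1e-12  # (m,l,r,s)=(4,1,3,1)
    assert quotient(5, 2, lam) <= quotient(3, 0, lam) + 1e-12  # (m,l,r,s)=(5,2,3,0)
print("Newton-MacLaurin inequality verified on 100 random samples")
```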
In this paper, $\nabla$ denotes the Levi-Civita connection on $(M , g)$ and the curvature tensor
is defined by
$$R(X, Y )Z = - \nabla_X \nabla_Y Z + \nabla_Y \nabla_X Z + \nabla_{[X,Y]}Z.$$
Let $e_1,e_2,\cdots,e_n$ be local frames on $M$ and denote $g_{ij}=g(e_i,e_j)$, $\{g^{ij}\}=\{g_{ij}\}^{-1}$,
while the Christoffel symbols $\Gamma^k_{ij}$ and curvature coefficients are given respectively by $\nabla_{e_i}e_j=\Gamma^k_{ij}e_k$ and
$$R_{ijkl}=g(R(e_k,e_l)e_j,e_i),\quad R^i_{jkl}=g^{im}R_{mjkl}.$$
We shall write $\nabla_i=\nabla_{e_i}$, $\nabla_{ij}=\nabla_i\nabla_j-\Gamma^k_{ij}\nabla_k$, etc.
For a differentiable function $u$ defined on $M$, we usually identify $\nabla u$ with its gradient,
and use $\nabla^2 u$ to denote its Hessian which is locally given by $\nabla_{ij} u= \nabla_i(\nabla_j u)
-\Gamma^k_{ij}\nabla_k u$. We note that $\nabla_{ij} u=\nabla_{ji} u$ and
\begin{equation}\label{req1}
\nabla_{ijk} u-\nabla_{jik} u=R^l_{kij}\nabla_lu,
\end{equation}
\begin{equation}\label{req0}
\nabla_{ij}(\nabla_ku) = \nabla_{ijk}u + \Gamma^l_{ik}\nabla_{jl}u +\Gamma^l_{jk}\nabla_{il}u + \nabla_{\nabla_{ij}e_k}u,
\end{equation}
\begin{equation}\label{req2}
\nabla_{ijkl}u-\nabla_{ikjl}u=R^m_{ljk}\nabla_{im}u+\nabla_iR^m_{ljk}\nabla_mu,
\end{equation}
\begin{equation}\label{req3}
\nabla_{ijkl}u-\nabla_{jikl}u=R^m_{kij}\nabla_{ml}u+R^m_{lij}\nabla_{km}u.
\end{equation}
From \eqref{req2} and \eqref{req3}, we obtain
\begin{eqnarray}\label{req4}
\nonumber \nabla_{ijkl}u-\nabla_{klij}u&=& R^m_{ljk}\nabla_{im}u+\nabla_iR^m_{ljk}\nabla_mu+R^m_{lik}\nabla_{jm}u \\
&& +R^m_{jik}\nabla_{lm}u+R^m_{jil}\nabla_{km}u+\nabla_kR^m_{jil}\nabla_mu.
\end{eqnarray}
For convenience, we introduce
the following notations
$$F(U)=\bigg[\frac{\sigma_k(\lambda[U])}{\sigma_l(\lambda[U])}\bigg]^{\frac{1}{k-l}},
\quad F^{ij}=\frac{\partial F}{\partial U_{ij}}, \quad F^{ij, r
s}=\frac{\partial^2 F}{\partial U_{ij}\partial U_{rs}},\quad
Q^{ij}=\frac{\partial F}{\partial u_{ij}}, \quad Q^{ij, r
s}=\frac{\partial^2 F}{\partial u_{ij}\partial u_{rs}}.$$
Let $u\in C^{\infty}(\overline{M})$ be an admissible solution of equation \eqref{Eq}. Under orthonormal local frames $e_1,\cdots,e_n$, equation \eqref{Eq} is expressed in the form
\begin{equation}\label{FU}
F(U):=f(\lambda[U])=\psi.
\end{equation}
For simplicity, we shall still write equation \eqref{Eq} in the form \eqref{FU} even if $e_1,\cdots,e_n$ are not necessarily orthonormal, although more precisely it should be
$$F([\gamma^{ik}U_{kl}\gamma^{lj}])=\psi,$$
where $\gamma^{ij}$ is the square root of $g^{ij}: \gamma^{ik}\gamma^{kj}=g^{ij}$. Whenever we differentiate the equation, it will make no difference as long as we use covariant derivatives.
Assume that $\overline{A}$ is an $n\times n$ matrix and $T:\overline{A}\rightarrow T(\overline{A})$ is defined as $T(\overline{A})=\tau (tr(\overline{A}))I-\overline{A}$. Let $Q=F\small{\circ}T$, then equation \eqref{Eq}
can also be written as
\begin{equation*}
Q(\nabla^2u):=\widetilde{f}(\widetilde{\lambda}[\nabla^2u])=\psi.
\end{equation*}
Hence $Q^{ij}=\frac{\partial Q}{\partial u_{ij}}=\frac{\partial F}{\partial u_{ij}}=\tau\sum_lF^{ll}\delta_{ij}-F^{ij}$ and then
\begin{equation}\label{Quii}
Q^{ij}u_{ij}=F^{ij}U_{ij}=f_i\lambda_i=\widetilde{f}_i\widetilde{\lambda}_i=\psi.
\end{equation}
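The identity $Q^{ij}u_{ij}=F^{ij}U_{ij}$ in \eqref{Quii} rests on the elementary fact that $T(A)=\tau\,\mathrm{tr}(A)I-A$ is self-adjoint for the trace pairing, $\mathrm{tr}(T(A)B)=\mathrm{tr}(A\,T(B))$. A quick numerical check (the value of $\tau$ and the matrix size below are arbitrary):

```python
import numpy as np

# tr(T(A) B) = tau * tr(A) * tr(B) - tr(A B) = tr(A T(B)),
# with A playing the role of [F^{ij}] and B the role of [nabla^2 u].
rng = np.random.default_rng(0)
tau, n = 1.5, 5

def T(A):
    return tau * np.trace(A) * np.eye(A.shape[0]) - A

A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric, like [F^{ij}]
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # symmetric, like [u_{ij}]

lhs = np.trace(T(A) @ B)   # "Q^{ij} u_{ij}"
rhs = np.trace(A @ T(B))   # "F^{ij} U_{ij}"
print(abs(lhs - rhs) < 1e-12)   # True
```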
Differentiating \eqref{FU}, we get
\begin{equation}\label{Quiik}
Q^{ij}\nabla_ku_{ij}=F^{ij}\nabla_kU_{ij}=\psi_k+\psi_zu_k+\psi_{p_i}u_{ik}.
\end{equation}
The following propositions are essential and will be used later; more details can be found in \cite{Chen20}.
\begin{proposition}\label{ellipticconcave}
Let $M$ be a smooth $(\eta, k)$-convex closed hypersurface in $\mathbb{R}^{n+1}$
and $0\leq l< k-1$. Then the operator
\begin{equation*}
F(U)=\left(\frac{\sigma_k(\lambda[U])}{\sigma_{l}(\lambda[U])}\right)^{\frac{1}{k-l}}
\end{equation*}
is elliptic and concave with respect to $U$. Moreover we have
\begin{equation*}
\sum F^{ii} \geq \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}.
\end{equation*}
\end{proposition}
\begin{proposition}\label{th-lem-07}
Let $U$ be a diagonal matrix with $\lambda[U]\in \Gamma_k$, $0\leq l \leq k-2$ and $k\geq 3$. Then
\begin{equation*}
-F^{1i, i1}(U)=\frac{F^{11}-F^{ii}}{U_{ii}-U_{11}},\quad \forall~i\geq2.
\end{equation*}
\end{proposition}
\section{$C^1$ Estimates}
In this section, we consider the lower and upper bounds, gradient estimates for the admissible solution to equation \eqref{Eq}.
\begin{lemma}\label{C0}
Let $u\in C^{\infty}(\overline{M})$ be an admissible solution for equation \eqref{Eq}.
Under the assumptions of Theorem \ref{main}, there exists a positive constant $C$ depending only on $\sup_{\partial M}\varphi$ and the subsolution $\underline{u}$ such that
$$\sup_{x \in \overline{M}} |u(x)|\leq C.$$
\end{lemma}
\begin{proof}
On the one hand, according to Definition \ref{def-1}, it is easy to see that
$\lambda[U]\in\Gamma_k\subset\Gamma_1$, which implies that $tr(\lambda[U])=(\tau n-1)\Delta u>0$.
Combined with the maximum principle, we have
$$\sup_{\overline{M}}u\leq\sup_{\partial M}\varphi.$$
On the other hand, we know that there exists an admissible subsolution $\underline{u}\in C^2(\overline{M})$ satisfying \eqref{Eq-sub}.
By the fact $\psi_z>0$ and the comparison principle,
$$u\geq\underline{u}, \quad \forall~x \in\overline{M}.$$
\end{proof}
\begin{lemma}\label{C1-0}
Let $l+2\leq k\leq n$, $\varphi \in C^{\infty}(\partial M)$, $\psi\in C^{\infty}(\overline{M}\times \mathbb{R}\times T_xM)$ with $\psi, \psi_z>0$. If $u\in C^2(\overline{M})$ is the solution of equation \eqref{Eq},
then
$$\sup_{M}|\nabla u|\leq C(1+\sup_{\partial M}|\nabla u|),$$
where $C$ is a constant depending on $n,k,l,\|u\|_{C^0},\|\psi\|_{C^1}$ and the curvature tensor $R$.
\end{lemma}
\begin{proof}
Consider the auxiliary function
$$P(x)=ve^{\phi(u)},$$
where $v=1+\frac{1}{2}|\nabla u|^2$, $\phi(u): \mathbb{R}\longrightarrow \mathbb{R}$ is a function satisfying
$$\phi'(u)>0, \quad \phi''(u)-(\phi'(u))^2\geq
\varepsilon$$ for some positive constant $\varepsilon$ depending on $\|u\|_{C^0}$.
Suppose that $P$ attains its maximum at $x_0\in M$. By rotating the coordinates, we diagonal the matrix $\nabla^2u$. In the following, we write simply $u_i=\nabla_iu$, $u_{ij}=\nabla_{ij}u$ and $u_{ijk}=\nabla_ku_{ij}$, then at $x_0$,
\begin{equation}\label{Pi}
0=P_i=(u_{ii}u_i+v\phi'u_i)e^{\phi(u)},
\end{equation}
and
\begin{eqnarray}\label{Pii}
0\geq P_{ii}=\left(u_{ii}^2+u_ku_{kii}+2u_i^2u_{ii}\phi'+u_i^2v\left(\phi''+(\phi')^2\right)+v\phi'u_{ii}\right)
e^{\phi (u)}.
\end{eqnarray}
We assume that $v\leq |\nabla u|^2$, i.e., $|\nabla u|^2\geq 2$. Otherwise our result holds.
Let
$$\mathcal{S}=\{i\in (1, \cdots, n) \mid u_i \neq 0\}.$$
Obviously $\mathcal{S}\neq \emptyset$ and we derive
$$u_{ii}=-v\phi^{\prime}<0 , \quad i \in \mathcal{S}$$
by \eqref{Pi}. Since $\tau\geq1$, we have
$$Q^{ii}=\tau\sum_{l}F^{ll}-F^{ii}\geq\sum_{l\neq i} F^{ll}\geq \frac{1}{2} \sum_{l} F^{ll},$$
which implies
\begin{eqnarray}\label{Qii}
\nonumber Q^{ii}u_i^2&=&\sum_{i\in S} Q^{ii}u_i^2\geq \sum_{i\in S} \left(\frac{1}{2} \sum_{l} F^{ll}\right)u_i^2\\
&=& \left(\frac{1}{2} \sum_{l} F^{ll}\right) |\nabla u|^2=\frac{1}{2(\tau n-1)}\left( \sum_{l} Q^{ll}\right) |\nabla u|^2.
\end{eqnarray}
Since $Q^{ii}\geq0$ and, by the Ricci identity, $u_{kij}=u_{ijk}+R^l_{jki}u_l$, we have
\begin{eqnarray}\label{c1eq}
\nonumber 0&\geq& Q^{ii}\left(u_ku_{kii}+2u_i^2u_{ii}\phi'+u_i^2v\left(\phi''+(\phi')^2\right)+v\phi'u_{ii}\right)\\
\nonumber &=& \psi_ku_k+\psi_zu_k^2+\psi_{p_k}u_ku_{kk}+R^l_{iki}Q^{ii}u_ku_l+2\phi'Q^{ii}u_i^2u_{ii}\\
\nonumber&&+v\left(\phi''+(\phi')^2\right)Q^{ii}u_{i}^2
+v\phi'Q^{ii}u_{ii}\\
\nonumber&\geq& \psi_ku_k-v\phi'\psi_{p_k}u_k+R^l_{iki}Q^{ii}u_ku_l+v\left(\phi''-(\phi')^2\right)Q^{ii}u_{i}^2+v\phi'\psi\\
&\geq&\sum_lQ^{ll}\left(\frac{\varepsilon}{4(\tau n-1)}|\nabla u|^4-\overline{C}|\nabla u|^2\right)-
\overline{C}\phi'|\nabla u|^3-\overline{C}\phi'|\nabla u|^2-\overline{C}|\nabla u|,
\end{eqnarray}
where $\overline{C}$ is a constant depending on $\|\psi\|_{C^1}$ and the curvature tensor $R$.
If $\frac{\varepsilon}{4(\tau n-1)}|\nabla u|^4-\overline{C}|\nabla u|^2\leq0$, then $|\nabla u|\leq C$. Otherwise, by \eqref{c1eq} and the fact
$\sum_lQ^{ll}\geq(\tau n-1)\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}$, we derive
$$(\tau n-1)\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}\left(\frac{\varepsilon}{4(\tau n-1)}|\nabla u|^4-\overline{C}|\nabla u|^2\right)-
\overline{C}\phi'|\nabla u|^3-\overline{C}\phi'|\nabla u|^2-\overline{C}|\nabla u|\leq0,$$
then $|\nabla u|\leq C$ and the lemma is proved.
\end{proof}
Next, we derive the global $C^1$ estimates for the solution of equation \eqref{Eq}.
\begin{theorem}\label{C1}
Let $u\in C^{\infty}(\overline{M})$ be an admissible solution for equation \eqref{Eq}.
Under the assumptions of Theorem \ref{main}, we have
$$\sup_{\overline M}|\nabla u|\leq C,$$
where $C$ is a constant depending on $n, k, l$, $\|u\|_{C^0}$, $\|\underline{u}\|_{C^1}$, $\|\varphi\|_{C^1}$, $\|\psi\|_{C^1}$ and the curvature tensor $R$.
\end{theorem}
\begin{proof}
From Lemma \ref{C1-0}, we are left with the task of estimating the exterior normal derivative of $u$ on $\partial M$. Let $h$ be the harmonic function in $M$ which equals $\varphi$ on $\partial M$. Then we have
\begin{equation*}
\left\{
\begin{aligned}
&\Delta (u-h)>0 &&in~
M,\\
&u-h=0 &&on~\partial M.
\end{aligned}
\right.
\end{equation*}
The maximum principle implies $u\leq h$ in $M$. Therefore,
$$\underline{u}\leq u\leq h \quad \mbox{in}~M.$$
Since they are all equal to $\varphi$ on $\partial M$, then
$$\nabla_{\nu}h\leq\nabla_{\nu}u\leq\nabla_{\nu}\underline{u} \quad \mbox{on}~\partial M,$$
where $\nu$ is the exterior unit normal to $\partial M$.
Thus, we have
$$\sup_{\partial M}|\nabla u|\leq C,$$
which completes the proof.
\end{proof}
\section{Global Estimates for second derivatives}
In this section, we prove the global second order estimates and give the proofs of Theorems \ref{main} and \ref{main-1}. Firstly, we need to derive the following theorem.
\begin{theorem}\label{C2-0}
Let $u\in C^{\infty}(M)$ be an admissible solution for equation \eqref{Eq}. Then
there exists a constant $C$ depending only on $n, k, l, \|u\|_{C^1}, \|\underline{u}\|_{C^2}, \|\psi\|_{C^2}$ and the curvature tensor
$R$
such that
$$\sup_{M} |\nabla^2 u | \leq C(1+\sup_{\partial M}|\nabla^2 u |).$$
\end{theorem}
\begin{proof}
Consider the auxiliary function
\begin{equation*}
\widehat{H}=\log \widetilde{\lambda}_{\mbox{max}}(\nabla^2u)+\frac{a}{2}|\nabla u|^2+A(\underline{u}-u),
\end{equation*}
where $\widetilde{\lambda}_{\mbox{max}}(\nabla^2u)$ is the largest eigenvalue of $\nabla^2u$, and $a\leq1$ and $A\geq1$ are constants to be determined later. Let $x_0$ be the maximum point of $\widehat{H}$. We choose a local orthonormal frame $\{e_{1}, e_{2}, \cdots, e_{n}\}$ near $x_0$ such that $\nabla_{e_i}e_j=0$, i.e. $\Gamma_{ij}^k=0$ at $x_0$ for any $1\leq i,j,k\leq n$.
For convenience, we write $u_i=\nabla_iu, u_{ij}=\nabla_{ij}u, u_{ijl}=\nabla_lu_{ij}$ , $u_{ijrs}=\nabla_{rs}u_{ij}$ and $R^m_{ijs;l}=\nabla_lR^m_{ijs}$. Assume that
$$u_{11}\geq u_{22}\geq \cdots \geq u_{nn}$$
at $x_0$. Recalling that $U_{ii}=\tau\Delta u-u_{ii}$, we have
$$U_{11}\leq U_{22}\leq\cdots\leq U_{nn}.$$
It follows that
$$
F^{11}\geq F^{22}\geq\cdots\geq F^{nn}\quad \mbox{and} \quad Q^{11}\leq Q^{22}\leq\dots\leq Q^{nn}.$$
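This ordering of the $F^{ii}$ reflects a general monotonicity of $f_i=\partial f/\partial\lambda_i$ opposite to the ordering of the eigenvalues for the concave quotient operator. A finite-difference spot check at an arbitrary positive sample point (the values of $n$, $k$, $l$ below are illustrative):

```python
from itertools import combinations
from math import prod
import random

def sigma(k, lam):
    if k == 0:
        return 1.0
    return sum(prod(c) for c in combinations(lam, k))

def f(lam, k, l):
    """f(lam) = (sigma_k / sigma_l)^(1/(k-l))."""
    return (sigma(k, lam) / sigma(l, lam)) ** (1.0 / (k - l))

def df(lam, i, k, l, eps=1e-6):
    """Central-difference approximation of f_i = partial f / partial lambda_i."""
    up = list(lam); up[i] += eps
    dn = list(lam); dn[i] -= eps
    return (f(up, k, l) - f(dn, k, l)) / (2 * eps)

random.seed(7)
n, k, l = 5, 3, 1
lam = sorted((random.uniform(0.5, 3.0) for _ in range(n)), reverse=True)
grads = [df(lam, i, k, l) for i in range(n)]
# lambda_1 >= ... >= lambda_n should give f_1 <= f_2 <= ... <= f_n
assert all(grads[i] <= grads[i + 1] + 1e-8 for i in range(n - 1))
print("gradient ordering f_1 <= ... <= f_n verified at the sampled point")
```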
We define a new function $\widetilde{H}$ by
\begin{equation*}
\widetilde{H}=\log u_{11}+\frac{a}{2}|\nabla u|^2+A(\underline{u}-u).
\end{equation*}
Then at $x_0$, we have
\begin{equation}\label{Hi}
0=\widetilde{H}_i=\frac{u_{11i}}{u_{11}}+au_iu_{ii}+A(\underline{u}-u)_i,
\end{equation}
\begin{equation}\label{Hii}
0\geq \widetilde{H}_{ii}=\frac{u_{11ii}u_{11}-u_{11i}^2}{u_{11}^2}+au_{ii}^2+au_ku_{kii}+A(\underline{u}-u)_{ii}.
\end{equation}
We divide our proof into four steps.
\textbf{Step 1}: We show that
\begin{eqnarray}\label{ht-c2-1}
\nonumber0&\geq& - \frac{2}{u_{11}} \sum_{i\geq 2} Q^{1i, i1} u_{1i1}^2 -\frac{Q^{ii}u_{11i}^2}{u_{11}^2}
+\frac{aQ^{ii}u_{ii}^2}{2}+AQ^{ii}(\underline{u}-u)_{ii}-C_0\sum_i Q^{ii}\\
&&-\frac{C_0^2\sum_iQ^{ii}}{2au_{11}^2}-\frac{C_0\sum_iQ^{ii}}{u_{11}}-C_0u_{11}-\frac{C_0}{u_{11}}-AC_0,
\end{eqnarray}
where $C_0$ depends on $\|\psi\|_{C^2}$ , $\|u\|_{C^1}$, $\|\underline{u}\|_{C^2}$ and the curvature tensor $R$.
Since $Q^{ii}\geq0$, then by \eqref{req4} and \eqref{Hii},
\begin{eqnarray}\label{uii11}
\nonumber0 &\geq& \frac{Q^{ii}u_{11ii}}{u_{11}}-\frac{Q^{ii}u_{11i}^2}{u_{11}^2}+aQ^{ii}u_{ii}^2+aQ^{ii}u_ku_{kii}
+AQ^{ii}(\underline{u}-u)_{ii}\\
\nonumber&=&\frac{Q^{ii}u_{ii11}}{u_{11}}+\frac{Q^{ii}}{u_{11}}\left(2R^1_{i1i}u_{11}+2R^i_{11i}u_{ii}+R^m_{i1i;i}u_m+
R^m_{11i;i}u_m\right)\\
\nonumber&&-\frac{Q^{ii}u_{11i}^2}{u_{11}^2}+aQ^{ii}u_{ii}^2+au_kQ^{ii}u_{kii}+AQ^{ii}(\underline{u}-u)_{ii}\\
\nonumber&\geq&\frac{Q^{ii}u_{ii11}}{u_{11}}-\frac{Q^{ii}u_{11i}^2}{u_{11}^2}+aQ^{ii}u_{ii}^2+aQ^{ii}u_ku_{kii}
+AQ^{ii}(\underline{u}-u)_{ii}\\
&&-C_1\sum_iQ^{ii}-\frac{C_1Q^{ii}|u_{ii}|}{u_{11}}-\frac{C_1\sum_iQ^{ii}}{u_{11}},
\end{eqnarray}
where $C_1$ is a constant depending only on $\|u\|_{C^1}$ and the curvature tensor $R$.
Differentiating \eqref{FU} twice, we get
\begin{eqnarray}\label{Ff}
\nonumber Q^{ij,rs}u_{ij1}u_{rs1}+Q^{ii}u_{ii11} &=& \psi_{11}+2\psi_{1z}u_1+2\psi_{1p_1}u_{11}+\psi_{zz}u_1^2+2\psi_{zp_1}u_1u_{11} \\
&& +\psi_zu_{11}+\psi_{p_1p_1}u_{11}^2+\psi_{p_i}u_{i11}.
\end{eqnarray}
Note that
\begin{equation}\label{ijrs}
-Q^{ij,rs}u_{ij1}u_{rs1}\geq-2\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2.
\end{equation}
By \eqref{Hi}, \eqref{Ff} and \eqref{ijrs},
\begin{eqnarray}\label{ij1}
\nonumber \frac{Q^{ii}u_{ii11}}{u_{11}}
&\geq&-\frac{1}{u_{11}}Q^{ij,rs}u_{ij1}u_{rs1}+\psi_{p_i}\frac{u_{i11}}{u_{11}}\\
\nonumber&\geq&-\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2+\psi_{p_i}\left(-au_iu_{ii}-A\underline{u}_i+Au_i+
\frac{R^l_{1i1}u_l}{u_{11}}\right)\\
\nonumber&&-Cu_{11}-\frac{C}{u_{11}}-C\\
&\geq&-\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2-a\psi_{p_i}u_iu_{ii}-C_2u_{11}-\frac{C_2}{u_{11}}-AC_2,
\end{eqnarray}
where $C_2$ is a constant depending only on $\|u\|_{C^1}$ , $\|\underline{u}\|_{C^1}$, $\|\psi\|_{C^2}$ and the curvature tensor $R$.
Using \eqref{req1} and \eqref{Quiik}, we have
\begin{eqnarray}\label{aQii}
\nonumber aQ^{ii}u_ku_{kii}&=&au_kQ^{ii}\left(u_{iik}+R^l_{iki}u_l\right)\\
\nonumber&=&au_k(\psi_k+\psi_zu_k+\psi_{p_k}u_{kk})+aR^l_{iki}Q^{ii}u_ku_l\\
&\geq&a\psi_{p_k}u_ku_{kk}-C_3\sum_iQ^{ii}-C_3,
\end{eqnarray}
where $C_3$ is a constant depending only on $\|\psi\|_{C^1}$ , $\|u\|_{C^1}$ and the curvature tensor $R$.
Then \eqref{ht-c2-1} can be derived by \eqref{uii11}, \eqref{ij1} and \eqref{aQii}.
\textbf{Step 2}:
There exists a positive constant $\delta<\frac{1}{n-2}$ such that
$$\frac{C_{n-1}^{k-1} (\tau-\tau(n-2)\delta)^{k-1} +(\tau-1-\tau(n-1)\delta)C_{n-1}^{k-2} (\tau+\tau(n-2)\delta)^{k-2} }{C_n^l (\tau+\tau(n-2)\delta)^l } >\frac{C_{n-1}^{k-1}}{2C_n^l}.$$
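For concrete parameters, the existence of such a $\delta$ can be located numerically; the sketch below scans $\delta$ over $(0,\frac{1}{n-2})$ for the arbitrary illustrative choice $n=5$, $k=3$, $l=1$, $\tau=1$.

```python
from math import comb

def step2_lhs(delta, n, k, l, tau):
    """Left-hand side of the Step 2 inequality, as a function of delta."""
    num = (comb(n - 1, k - 1) * (tau - tau * (n - 2) * delta) ** (k - 1)
           + (tau - 1 - tau * (n - 1) * delta)
             * comb(n - 1, k - 2) * (tau + tau * (n - 2) * delta) ** (k - 2))
    den = comb(n, l) * (tau + tau * (n - 2) * delta) ** l
    return num / den

n, k, l, tau = 5, 3, 1, 1.0
rhs = comb(n - 1, k - 1) / (2 * comb(n, l))

# scan delta over (0, 1/(n-2)); as delta -> 0+ the LHS exceeds the RHS,
# so small positive delta must work
good = [d / 1000 for d in range(1, 1000 // (n - 2))
        if step2_lhs(d / 1000, n, k, l, tau) > rhs]
print(f"inequality holds for delta up to about {max(good):.3f}")
```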
We shall show that there exists a constant $B_1=\max\left\{1,\frac{\widetilde{R}}{1-\delta(n-2)}, C_0\left(\frac{a\delta^2}{4n}\left(\frac{C_n^k}{C_n^l}\right)^\frac{1}{k-l}\right)^{-1}\right\}$ for given positive constants $\widetilde{R},\theta,\xi$ such that
$$\frac{a}{4}Q^{ii}u_{ii}^2+\frac{A}{2}Q^{ii}\left(\underline{u}-u\right)_{ii}\geq C_0u_{11},$$
if $u_{11}\geq B_1>1$ and
\begin{equation}\label{A1}
A=\|\psi\|_{C^0}^{k-l-1}\frac{4k(\tau n-1)C_n^lC_0}{\theta(n-k+l)C_{n-1}^{k-1}}+\frac{4(\tau n-1)}{\theta}\left(\frac{6C_0^4}{1-\xi}+2C_0+\frac{C_0^2}{2a}\right).
\end{equation}
Case 1: $|u_{ii}|\leq \delta u_{11}$ for all $i\geq 2$.\\
In this case we have
$$\left(\tau-1-\tau(n-1)\delta\right)u_{11}\leq U_{11}\leq \left(\tau-1+\tau(n-1)\delta\right)u_{11},$$ $$\left(\tau-\tau(n-2)\delta\right)u_{11}\leq U_{22}\leq \cdots \leq U_{nn}\leq \left(\tau+\tau(n-2)\delta\right)u_{11}.$$
By Theorem 2.18 in \cite{Guan12}, there exist positive constants $\widetilde{R},\theta$ such that
$$F^{ii}(\underline{U}-U)_{ii}\geq\theta(1+\sum_iF^{ii}),$$
when $|\lambda[U]|\geq\widetilde{R}$. Hence, if $u_{11}\geq B_1\geq\frac{\widetilde{R}}{1-\delta(n-2)}$, then
$$\frac{A}{2}Q^{ii}(\underline{u}-u)_{ii}=\frac{A}{2}F^{ii}(\underline{U}-U)_{ii}\geq\frac{A\theta}{2}
\left(1+\sum_iF^{ii}\right)=\frac{A\theta}{2}
\left(1+\frac{1}{\tau n-1}\sum_iQ^{ii}\right).$$
By the definition of $Q^{ii}$, we obtain
\begin{eqnarray*}
\nonumber\sum_iQ^{ii}&=&(\tau n-1)\sum_iF^{ii} \\
\nonumber &\geq&\frac{1}{k-l}\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}-1}
\frac{(n-k+l)\sigma_{k-1}\sigma_l-(n-l+1)\sigma_k\sigma_{l-1}}{\sigma_l^2}\\
\nonumber&\geq&\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}-1}
\frac{\sigma_{k-1}/C_n^{k-1}}{\sigma_l/C_n^k}\\
\nonumber&=&\frac{C_n^k}{C_n^{k-1}}\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}-1}
\frac{\sigma_{k-1}(U|1)+U_{11}\sigma_{k-2}(U|1)}{\sigma_l}\\
\nonumber &\geq&\frac{C_n^k}{C_n^{k-1}}\psi^{1-k+l}\frac{C_{n-1}^{k-1}\left(\tau-\tau(n-2)\delta\right)^{k-1}+\left(\tau-1
-\tau(n-1)\delta\right)C_{n-1}^{k-2}\left(\tau+\tau(n-2)\delta\right)^{k-2}}{C_n^l\left(\tau+\tau(n-2)
\delta\right)^l}u_{11}\\
&\geq&\psi^{1-k+l}\frac{(n-k+1)C_{n-1}^{k-1}}{2kC_n^l}u_{11},
\end{eqnarray*}
which implies that
$$\frac{A}{2}Q^{ii}(\underline{u}-u)_{ii}\geq C_0u_{11}.$$
Case 2: $u_{22} > \delta u_{11}$ or $u_{nn} <- \delta u_{11}$.\\
In this case, we have
\begin{equation*}
\begin{aligned}
\frac{a Q^{ii} u_{ii}^2}{4}&\geq \frac{a}{4} \left(Q^{22} u_{22}^2+Q^{nn} u_{nn}^2\right)
\geq \frac{a\delta^2}{4} Q^{22} u_{11}^2\\
&\geq \frac{a\delta^2}{4n} \sum_i F^{ii}u^2_{11}\geq \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}} \frac{a\delta^2 u_{11}}{4n}u_{11}. \\
\end{aligned}
\end{equation*}
Then, we have
$$\frac{a}{4} Q^{ii} u_{ii}^2\geq C_0 u_{11},$$
if
$$u_{11} \geq \left(\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}} \frac{a\delta^2}{4n} \right)^{-1}C_0.$$
\textbf{Step 3}: We show that
$$|u_{ii}|\leq C_4 A,\quad \forall~i\geq2,$$
if $u_{11} \geq B_1>1$, where $C_4$ is a constant depending on $n,k,l$, $\|\psi\|_{C^2}$, $\|u\|_{C^1}$ and the curvature tensor $R$.
Combining with Step 1 and Step 2, we obtain
\begin{eqnarray}\label{ht-c2-32}
\nonumber0&\geq& - \frac{2}{u_{11}} \sum_{i\geq 2} Q^{1i, i1} u_{1i1}^2 -\frac{Q^{ii}u_{11i}^2}{u_{11}^2}
+\frac{aQ^{ii} u_{ii}^2}{4}+\frac{A}{2}Q^{ii}(\underline{u}-u)_{ii}\\
&&-C_0\sum_iQ^{ii}-\frac{C_0^2\sum_iQ^{ii}}{2au_{11}^2}-\frac{C_0}{u_{11}}\sum_iQ^{ii}-C_0\frac{1}{u_{11}}-AC_0.
\end{eqnarray}
Using \eqref{Hi} and Cauchy-Schwarz inequality, we have
\begin{equation}\label{cau}
\frac{u_{11i}^2}{u_{11}^2}=(au_iu_{ii}+A(\underline{u}-u)_i)^2\leq2a^2u_i^2u_{ii}^2+2A^2(\underline{u}-u)_i^2.
\end{equation}
By the concavity of $F$ and the definition of $Q^{ii}$,
\begin{equation}\label{FQ}
\sum_{i\geq2}Q^{1i,i1}=\sum_{i\geq2}F^{1i,i1}\leq0.
\end{equation}
Choose $a\leq\min\{\frac{1}{64\sup|\nabla u|^2},1\}$. Then \eqref{ht-c2-32}-\eqref{FQ} imply that
\begin{eqnarray}\label{a2}
\nonumber0 &\geq& \left(\frac{a}{4}-2a^2u_i^2\right)Q^{ii}u_{ii}^2-2A^2Q^{ii}(\underline{u}-u)_i^2-2C_0\sum_iQ^{ii}\\
\nonumber&&-\frac{C_0^2}{2a}\sum_iQ^{ii}-\frac{AC_0}{2}\sum_iQ^{ii}-\frac{AC_0}{2}-(A+1)C_0\\
&\geq&\frac{a}{8}Q^{ii}u_{ii}^2-\left(2C_0+\frac{C_0^2}{2a}+\frac{AC_0}{2}\right)\sum_iQ^{ii}
-\left(2A^2+\frac{3A}{2}+1\right)C_0,
\end{eqnarray}
if $u_{11}\geq B_1>1$.
Note that
$$Q^{ii}\geq Q^{22}\geq\frac{1}{n}\sum_iF^{ii}=\frac{1}{n(\tau n-1)}\sum_iQ^{ii}\geq\frac{1}{n}\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}},\quad\forall~i\geq2.$$
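For the reader's convenience, we record the standard argument behind the last inequality above: since $F=\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}$ is concave and homogeneous of degree one, Euler's identity $\sum_iF^{ii}\lambda_i=F(\lambda)$ together with concavity yields
$$\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}=F(\mathbf{1})\leq F(\lambda)+\sum_iF^{ii}(1-\lambda_i)=\sum_iF^{ii}.$$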
Thus \eqref{a2} gives that
$$\frac{a}{8n(\tau n-1)}\left(\sum_{k\geq2}u_{kk}^2\right)\sum_iQ^{ii}\geq\left(\left(2C_0+\frac{C_0^2}{2a}+\frac{AC_0}{2}\right)
+\frac{C_0\left(2A^2+\frac{3A}{2}+1\right)}{\tau n-1}\left(\frac{C_n^k}{C_n^l}\right)^{-\frac{1}{k-l}}\right)\sum_iQ^{ii},$$
which implies
$$\sum_{k\geq 2} u_{kk}^2 \leq C_4^2 A^2.$$
\textbf{Step 4}:
We shall show that there exists a constant $C$ depending on $n, k, l$, $\|u\|_{C^1}$, $\|\underline{u}\|_{C^2}$, $\|\psi\|_{C^2}$ and the curvature tensor $R$ such that
$$u_{11}\leq C.$$
Without loss of generality, we assume that
\begin{equation}\label{ab}
u_{11} \geq \max \left\{B, \left(\frac{64A^2|\nabla\underline{u}-\nabla u|^2}{a}\right)^{\frac{1}{2}}, \frac{C_4 A}{\xi}\right\},
\end{equation}
where $B=\max\left\{B_1,(n-2)C_4A+\widetilde{R}\right\}$ and $\xi\leq\frac{1}{2}$ is a constant.
By \eqref{Hi}, \eqref{ab} and Cauchy-Schwarz inequality,
\begin{eqnarray}\label{idn}
\nonumber\frac{Q^{11}u_{111}^2}{u_{11}^2}&=& Q^{11}\left(au_1u_{11}+A(\underline{u}_1-u_1)\right)^2 \\
\nonumber&\leq&2a^2|\nabla u|^2Q^{11}u_{11}^2+2A^2Q^{11}(\underline{u}_1-u_1)^2\\
&\leq&\frac{a}{16}Q^{11}u_{11}^2.
\end{eqnarray}
Combining with Step 3 and \eqref{ab}, we know that $|u_{ii}|\leq \xi u_{11}$ for any $i\geq 2$.
Thus
\begin{equation}\label{beta}
\frac{1-\xi}{u_{11}-u_{ii}}\leq\frac{1}{u_{11}}\leq \frac{1+\xi}{u_{11}-u_{ii}}.
\end{equation}
By \eqref{beta} and Proposition \ref{th-lem-07}, we obtain
\begin{eqnarray}\label{ht-c2-42}
\nonumber\sum_{i\geq 2} \frac{Q^{ii}u_{11i}^2}{u_{11}^2}&=&\sum_{i\geq 2} \frac{Q^{ii}-Q^{11}}{u_{11}^2}u_{11i}^2 +\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2}\\
\nonumber&\leq& \frac{1+\xi}{u_{11}} \sum_{i\geq 2} \frac{Q^{ii}-Q^{11}}{u_{11}-u_{ii}}u_{11i}^2+\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2}\\
\nonumber&=&\frac{1+\xi}{u_{11}} \sum_{i\geq 2} \frac{F^{11}-F^{ii}}{U_{ii}-U_{11}}u_{11i}^2+\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2}\\
\nonumber&\leq&-\frac{3}{2u_{11}} \sum_{i\geq 2}F^{1i, i1}u_{11i}^2 +\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2}\\
&=&-\frac{3}{2u_{11}} \sum_{i\geq 2}Q^{1i, i1}u_{11i}^2 +\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2},
\end{eqnarray}
where the last equality comes from the fact that $Q^{1i,i1}=F^{1i,i1}$ for any $i\geq2$.
Using \eqref{Hi}, \eqref{ab} and Cauchy-Schwarz inequality, we get
\begin{eqnarray}\label{q11}
\nonumber\sum_{i\geq 2} \frac{Q^{11}u_{11i}^2}{u_{11}^2}&\leq&
2\sum_{i\geq 2}a^2Q^{11}u_i^2u_{ii}^2+2\sum_{i\geq2}A^2Q^{11}(\underline{u}_i-u_i)^2 \\
\nonumber&\leq& 2a^2\xi^2|\nabla u|^2Q^{11}u_{11}^2+2A^2Q^{11}|\nabla\underline{u}-\nabla u|^2\\
&\leq&\frac{a}{16}Q^{11}u_{11}^2.
\end{eqnarray}
By Cauchy-Schwarz inequality and Ricci identity, we have
\begin{eqnarray}\label{CSR}
\nonumber -\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2 &=& -\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}(u_{11i}+R^l_{1i1}u_l)^2\\
&\geq& -\frac{3}{2u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{11i}^2+\frac{6}{u_{11}}\sum_{i\geq2}Q^{1i,i1}(R^l_{1i1}u_l)^2.
\end{eqnarray}
Then \eqref{FQ}, \eqref{beta}-\eqref{CSR} and Proposition \ref{th-lem-07} imply that
\begin{eqnarray}\label{sum}
\nonumber\sum_{i\geq 2} \frac{Q^{ii}u_{11i}^2}{u_{11}^2} &\leq& -\frac{3}{2u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{11i}^2 +\frac{a}{16}Q^{11}u_{11}^2 \\
\nonumber &\leq& -\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2-\frac{6}{u_{11}}\sum_{i\geq2}Q^{1i,i1}(R^l_{1i1}u_l)^2
+\frac{a}{16}Q^{11}u_{11}^2\\
\nonumber&\leq&-\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2+6C_0\sum_{i\geq2}\frac{Q^{ii}-Q^{11}}{u_{11}-u_{ii}}
+\frac{a}{16}Q^{11}u_{11}^2\\
&\leq&-\frac{2}{u_{11}}\sum_{i\geq2}Q^{1i,i1}u_{1i1}^2+\frac{6C_0}{1-\xi}\sum_{i\geq2}(Q^{ii}-Q^{11})
+\frac{a}{16}Q^{11}u_{11}^2,
\end{eqnarray}
if $u_{11}\geq B>1$. Note that
$$\sum_{i\geq2}(Q^{ii}-Q^{11})=\sum_iQ^{ii}-nQ^{11}\leq\sum_iQ^{ii},$$
then substituting \eqref{idn} and \eqref{sum} into \eqref{ht-c2-32}, we derive
\begin{eqnarray*}
\nonumber 0 &\geq& -\frac{6C_0^4}{1-\xi}\sum_iQ^{ii}+\frac{aQ^{ii}u_{ii}^2}{8}
+\frac{A}{2}Q^{ii}(\underline{u}_{ii}-u_{ii})
-C_0(A+1)-2C_0\sum_iQ^{ii}-\frac{C_0^2}{2a}\sum_iQ^{ii}\\
\nonumber &\geq&\frac{A}{4}Q^{ii}(\underline{u}_{ii}-u_{ii})-\frac{6C_0^4}{1-\xi}\sum_iQ^{ii}-2C_0\sum_iQ^{ii}
-\frac{C_0^2}{2a}\sum_iQ^{ii}+\frac{C_0}{2}u_{11}-C_0(A+1)\\
&\geq&\frac{C_0}{2}u_{11}-C_0(A+1),
\end{eqnarray*}
if $u_{11}\geq B\geq(n-2)C_4A+\widetilde{R}$ and $A$ is defined as in \eqref{A1}.
It follows that
$$u_{11}\leq 2(A+1).$$
\end{proof}
Now we consider the estimates for the second order derivatives on the
boundary $\partial M$. For any fixed $x_0\in\partial M$, we can choose smooth orthonormal local frames $e_1, \cdots,e_n$ around
$x_0$ such that when restricted on $\partial M$, $e_n$ is normal to $\partial M$. For $x\in\overline{M}$,
let $\rho(x)$ and $d(x)$ denote the distances from $x$ to $x_0$ and $\partial M$ respectively,
$$\rho(x)=dist_{M}(x,x_0),\quad d(x)=dist_{M}(x,\partial M),$$
and $M_{\delta}=\{x\in M:\rho(x)<\delta\}$.
Since $\nabla_{ij}\rho^2(x_0) = 2\delta_{ij}$, we may assume $\rho$ is smooth in $M_{\delta_0}$ for fixed $\delta_0>0$ and
$$I\leq\nabla_{ij}\rho^2\leq3I\quad in~M_{\delta_0}.$$
We now establish the following important lemma, which plays a key role in
our boundary estimates.
\begin{lemma}\label{LQ}
Let
$$L=Q^{ij}\nabla_{ij}-\psi_{p_i}\nabla_i, \quad v=u-\underline{u}+td-\frac{N}{2}d^2,$$
then for a positive constant $\varepsilon_0$, there exist some uniform constants $t, \delta$ sufficiently small and $N$ sufficiently large such that
\begin{equation*}
\left\{
\begin{aligned}
&Lv\leq-\frac{\varepsilon_0}{4}(1+\sum_i F^{ii}) &&in~
M_{\delta},\\
&v\geq0 &&on~\partial M_{\delta}.
\end{aligned}
\right.
\end{equation*}
\end{lemma}
\begin{proof}
It is easy to see that $v(x)=0$ for any $x\in\partial M\cap B_{\delta}$. Then we can choose $\delta<\frac{2t}{N}$ such that $v(x)\geq0$ for any $x\in M\cap \partial B_{\delta}$. Therefore
$$v\geq0 \quad \mbox{on}~\partial M_{\delta}.$$
Let $\mu=\lambda[\underline{U}]$ and $\lambda=\lambda[U]$ be the eigenvalues of $\underline{U}$ and $U$ respectively.
As in \cite{Guan14}, denote by $\nu_{\chi}:=\frac{D f(\chi)}{|Df(\chi)|}$ the unit normal vector to the level hypersurface $\partial \Gamma^{f(\chi)}$ for $\chi \in \Gamma$, where $\Gamma$ is a symmetric, open and convex cone in $\mathbb{R}^n$ with $\Gamma_n\subset\Gamma$.
Since $\{\mu(x)\mid x\in \overline{M}\}$ is a compact subset of $\Gamma$, there exists a uniform constant $\beta\in(0, \frac{1}{2\sqrt{n}})$ such that
$$\nu_{\mu(x)} -2 \beta \mathbf{1} \in \Gamma_n, \quad \forall x\in \overline{M}.$$
We divide the estimate of $Lv$ into two cases.
Case 1: $|\nu_{\mu}-\nu_{\lambda}|<\beta$.
Since $\nabla_n d(x_0)=1$, $\nabla_{\alpha}d(x_0)=0$ for all $\alpha <n$, we can choose a
constant $\delta_0$ such that
\begin{eqnarray*}
\frac{1}{2}\leq |\nabla d|\leq 1, \quad -\widetilde{C}_{1}I\leq \nabla^2d \leq \widetilde{C}_{1}I, \quad \forall x\in M_{\delta}
\end{eqnarray*}
for any $\delta < \delta_0$, where $\widetilde{C}_{1}$ depends on the geometry of $\partial M$.
Note that $\nu_{\lambda}-\beta\mathbf{1}\in\Gamma_n$, $\mathbf{1}=(1,\cdots,1)$, then
\begin{eqnarray}\label{DHQ-lem-for-1}
F^{ii}\geq\frac{\beta}{\sqrt{n}}\sum_kF^{kk},\quad \forall~1\leq i\leq n.
\end{eqnarray}
By the definition of $v$, we have
\begin{eqnarray}\label{DHQ-lem-for-11}
\nonumber Lv &=& Q^{ij}\nabla_{ij}v-\psi_{p_i}\nabla_i v \\
\nonumber &=&Q^{ij}\left(\nabla_{ij}(u-\underline{u})+t \nabla_{ij}d - N \nabla_id \nabla_jd -Nd \nabla_{ij}d\right)\\
\nonumber&&-\psi_{p_i}\left(\nabla_i(u-\underline{u}) +t \nabla_id - N d\nabla_id \right)\\
&\leq&Q^{ij}\nabla_{ij}(u-\underline{u})+ (t-Nd)Q^{ij}\nabla_{ij}d-N Q^{ij}\nabla_id \nabla_jd+\widetilde{C}_{2}+\widetilde{C}_{2}t+\widetilde{C}_{2}N \delta,
\end{eqnarray}
where $\widetilde{C}_{2}$ depends on $\|\psi\|_{C^1}$, $\|u\|_{C^1}$ and $\|\underline{u}\|_{C^1}$.
By the concavity of $F$, we have
$$Q^{ij}\nabla_{ij}(u-\underline{u})=F^{ij} (\nabla_{ij}U- \nabla_{ij}\underline{U})\leq 0.$$
Thus, we have
\begin{eqnarray}\label{eq1}
Lv&\leq& (t-Nd)Q^{ij}\nabla_{ij}d-N Q^{ij}\nabla_id \nabla_jd+\widetilde{C}_{2}+\widetilde{C}_{2}t+\widetilde{C}_{2}N \delta\\
\nonumber&\leq& \widetilde{C}_{1} t\sum_i Q^{ii}+\widetilde{C}_{2}+\widetilde{C}_{2} t+ N \widetilde{C}_{1}\delta \sum_i Q^{ii}-N Q^{ij}\nabla_id \nabla_jd+\widetilde{C}_{2}N \delta.
\end{eqnarray}
By \eqref{DHQ-lem-for-1}, we have
\begin{eqnarray}\label{DHQ-lem-for-2}
Q^{ij}\nabla_id \nabla_jd=\tau \sum_l F^{ll} |\nabla d|^2- F^{ij} \nabla_id \nabla_jd\geq \frac{(n-1)\beta}{4\sqrt{n}} \sum_l F^{ll}.
\end{eqnarray}
Note that
\begin{eqnarray}\label{DHQ-lem-for-3}
\sum_l F^{ll} =\frac{1}{\tau n-1}\sum_iQ^{ii}\geq \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}:=\widetilde{C}_{3}.
\end{eqnarray}
Combining with \eqref{eq1}-\eqref{DHQ-lem-for-3}, we get
\begin{eqnarray*}
\nonumber Lv&\leq& \widetilde{C}_{1} t(\tau n-1) \sum_l F^{ll}+\frac{\widetilde{C}_{2}+\widetilde{C}_{2} t}{\widetilde{C}_{3}} \sum_l F^{ll}\\
\nonumber&&+ N \widetilde{C}_{1}(\tau n-1)\delta \sum_l F^{ll}-N \frac{(n-1)\beta}{4\sqrt{n}} \sum_l F^{ll}+\frac{\widetilde{C}_{2}N \delta}{\widetilde{C}_{3}}\sum_l F^{ll}\\
&\leq & -\frac{\widetilde{C}_{2}}{\widetilde{C}_{3}} \sum_{l} F^{ll},
\end{eqnarray*}
if we choose the constants $t, N, \delta$ satisfying
\begin{equation*}
\left\{
\begin{aligned}
&t\leq \frac{\widetilde{C}_{2}}{\widetilde{C}_{1} \widetilde{C}_{3}(\tau n-1)+\widetilde{C}_{2}},\\
&N\geq \frac{20\widetilde{C}_{2} \sqrt{n}}{\widetilde{C}_{3} (n-1) \beta},\\
&\delta\leq \min\{\delta_0, \frac{2t}{N}\}.
\end{aligned}
\right.
\end{equation*}
Case 2: $|\nu_{\mu}-\nu_{\lambda}|\geq\beta$.
From Lemma 2.1 in \cite{Guan14}, we know that for some uniform constant $\varepsilon_0>0$,
$$Q^{ij}\nabla_{ij}(\underline{u}-u)=F^{ij}\nabla_{ij}(\underline{U}-U)\geq \sum_if_i(\mu_i-\lambda_i)\geq\varepsilon_0(1+\sum_iF^{ii}).$$
According to \eqref{DHQ-lem-for-11}, we have
\begin{eqnarray}\label{w1}
\nonumber Lv &\leq&-\frac{\varepsilon_0}{2}(1+\sum_iF^{ii})- \frac{1}{2} \left( Q^{ij}\nabla_{ij}(\underline{u}-u)+2N Q^{ij}\nabla_id \nabla_jd\right)\\
&&+ (t-Nd)\widetilde{C}_{1}(\tau n-1) \sum_l F^{ll}+\widetilde{C}_{2}+\widetilde{C}_{2}t+\widetilde{C}_{2}N \delta.
\end{eqnarray}
By the concavity of $F$,
\begin{eqnarray*}
Q^{ij}\nabla_{ij}(\underline{u}-u)+2NQ^{ij}\nabla_id\nabla_jd &=& F^{ij}\nabla_{ij}(\underline{U}-U)+2NF^{ij}(\tau|Dd|^2\delta_{ij}-\nabla_id\nabla_jd) \\
&\geq&F\left(\nabla_{ij}\underline{U}+2N(\tau|Dd|^2\delta_{ij}-\nabla_id\nabla_jd)\right)-F(\nabla_{ij}U)\\
&\geq& \left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}(\mu+2N\lambda[A])-\psi,
\end{eqnarray*}
where $\mu=(\mu_1,\mu_2,\cdots,\mu_n)$ and $A=\left[\tau|Dd|^2\delta_{ij}-d_id_j\right]_{n\times n}$. Since $\lambda[A]\geq \frac{1}{4}(0,1,\cdots,1)$, we have
\begin{eqnarray}\label{w2}
Q^{ij}\nabla_{ij}(\underline{u}-u)+2NQ^{ij}\nabla_id\nabla_jd \geq \left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}(\mu+ \overline{\lambda})-\widetilde{C}_{2},
\end{eqnarray}
where $\overline{\lambda}= (0, \frac{N}{2}, \cdots, \frac{N}{2})$.
Next, based on the range of $k$, we consider the following two cases.
When $k=n$, since $F(\underline{U})\geq \psi(x, \underline{u}, D\underline{u})>0$, we know that $\sigma_n(\mu)\geq \widetilde{C}_{4}$, then
\begin{equation}\label{w3}
\frac{\sigma_n}{\sigma_l}(\mu+ \overline{\lambda})\geq
\frac{\widetilde{C}_{4} N^{n-1}}{C_n^l(\mu_{\max}+N)^l}\geq \widetilde{C}_{5} N^{n-1-l},
\end{equation}
where $\widetilde{C}_{5}$ depends on $\inf \psi, n, k, l$ and $\|\underline{u}\|_{C^2}$.
When $3\leq k\leq n-1$, we assume that $N>|4\mu|^2+1$, then
\begin{eqnarray}\label{w4}
\frac{\sigma_k}{\sigma_l}(\mu+ \overline{\lambda})&\geq& \frac{\sigma_k}{\sigma_l} \left(\mbox{diag}(\mu_1, \frac{N}{4}, \cdots, \frac{N}{4})\right)\\
\nonumber &=& \frac{C_{n-1}^k (\frac{N}{4})^{k}+\mu_1 C_{n-1}^{k-1} (\frac{N}{4})^{k-1}}{C_{n-1}^l (\frac{N}{4})^{l}+\mu_1 C_{n-1}^{l-1} (\frac{N}{4})^{l-1}}\\
\nonumber&\geq& \frac{C_{n-1}^k (\frac{N}{4})^{k}- \frac{N^{\frac{1}{2}}}{4} C_{n-1}^{k-1} (\frac{N}{4})^{k-1}}{C_{n-1}^l (\frac{N}{4})^{l}+C_{n-1}^{l-1} (\frac{N}{4})^{l}}\\
\nonumber&\geq& \widetilde{C}_{5} N^{k-l-1}.
\end{eqnarray}
By \eqref{w1}-\eqref{w4}, we can choose $t, N, \delta$ satisfying
\begin{equation*}
\left\{
\begin{aligned}
&t\leq \min \left\{\frac{\varepsilon_0}{12 \widetilde{C}_{1}(\tau n-1)}, 1\right\},\\
&N\geq \max\left\{ \left(10\widetilde{C}_{2}\widetilde{C}_{5}^{-\frac{1}{k-l}}\right)^{\frac{k-l}{k-l-1}}, 16n\mu_{\max}^2+1\right\},\\
&\delta\leq \min\left\{\delta_0, \frac{2t}{N}\right\}.
\end{aligned}
\right.
\end{equation*}
Thus
\begin{eqnarray*}
Lv &\leq&-\frac{\varepsilon_0}{2}(1+\sum_iF^{ii})-\frac{1}{2}\left(\widetilde{C}_{5}N^{k-l-1}\right)^{\frac{1}{k-l}}+ 3t\widetilde{C}_{1}(\tau n-1) \sum_l F^{ll}+5\widetilde{C}_{2}\\
&\leq&-\frac{\varepsilon_0}{4}(1+\sum_iF^{ii}) .
\end{eqnarray*}
\end{proof}
\begin{theorem}\label{C2-1}
Let $u\in C^{\infty}(M)$ be an admissible solution for equation \eqref{Eq}. Under the assumptions mentioned in Theorem \ref{main},
there exists a constant $C$ depending only on $n, k, l, \|u\|_{C^1}, \|\underline{u}\|_{C^2}, \inf \psi$ , $\|\psi\|_{C^2}$ and the curvature tensor $R$ such that
$$\sup_{ \overline{M}} |\nabla^2 u | \leq C.$$
\end{theorem}
\begin{proof}
By Theorem \ref{C2-0}, we only need to derive boundary estimates.
For any $ x_0\in\partial M$, we can choose the local frames $e_1,\cdots,e_n$ around $x_0$ such
that $e_n$ is interior normal to $\partial M$.
$\mathbf{Case~1:} $ Estimates of $\nabla_{\alpha\beta}u, \alpha, \beta=1,\cdots,n-1$ on $\partial M$.
Since $u-\underline{u}=0$ on $\partial M$, we have
$$\nabla_{\alpha\beta}(u-\underline{u})=-\nabla_{n}(u-\underline{u})B_{\alpha\beta} \quad \mbox{on} ~\partial M,$$
where $B_{\alpha\beta} = \langle\nabla_{\alpha}e_{\beta},e_n\rangle$ denotes the second fundamental form of $\partial M$. Therefore,
\begin{eqnarray*}
|\nabla_{\alpha\beta}u| \leq C \quad \mbox{on} ~\partial M,
\end{eqnarray*}
where $C$ depends on $\|u\|_{C^1}$ and $\|\underline{u}\|_{C^2}$.
$\mathbf{Case~2:}$ Estimates of $ \nabla_{\alpha n}u$, $\alpha=1,\cdots,n-1$ on $\partial M$.\\
Let
\begin{equation}\label{Phi}
\Phi=A_1v+A_2\rho^2-A_3\sum_{\beta<n}|\nabla_{\beta}(u-\underline{u})|^2,
\end{equation}
then combining with Lemma \ref{LQ}, we claim that
\begin{equation*}
\left\{
\begin{aligned}
& L(\Phi\pm\nabla_{\alpha}(u-\underline{u}))\leq0 &&in~M_{\delta},\\
& \Phi\pm\nabla_{\alpha}(u-\underline{u})\geq0 &&on~\partial M_{\delta},
\end{aligned}
\right.
\end{equation*}
for suitably chosen positive constants $A_1, A_2, A_3$, where $L$ and $v$ are defined in Lemma \ref{LQ}. First, we have, for some uniform constant $\widehat{C}_0$,
\begin{equation*}
L(\rho^2)=Q^{ij}\nabla_{ij}(\rho^2)-\psi_{p_i}\nabla_i(\rho^2)\leq \widehat{C}_0(1+\sum_iQ^{ii}),
\end{equation*}
and by \eqref{req0}, \eqref{Quii} and \eqref{Quiik},
\begin{eqnarray}\label{L2}
\nonumber |L\nabla_{\alpha}(u-\underline{u})|&\leq&2Q^{ij}\Gamma_{i\alpha}^l\nabla_{jl}u+C(1+\sum_iQ^{ii})\\
&\leq& \widehat{C}_1(1+\sum_i\widetilde{f}_i|\widetilde{\lambda}_i|+\sum_i\widetilde{f}_i),
\end{eqnarray}
where $\widetilde{\lambda}_i (i=1, \cdots, n)$ are the eigenvalues of $\nabla^2u$.
Furthermore, we get
\begin{eqnarray}\label{nab}
\nonumber L|\nabla_{\beta}(u-\underline{u})|^2 &=& 2Q^{ij}\nabla_{\beta}(u-\underline{u})\nabla_{ij}\nabla_{\beta}(u-\underline{u})+2Q^{ij}\nabla_i\nabla_{\beta}(u-\underline{u}) \nabla_{j}\nabla_{\beta}(u-\underline{u})\\
\nonumber&& -2\psi_{p_i}\nabla_{\beta}(u-\underline{u})\nabla_i\nabla_{\beta}(u-\underline{u})\\
&\geq& 2Q^{ij}u_{i\beta}u_{j\beta}-\widehat{C}_2\left(1+\sum_i\widetilde{f}_i|\widetilde{\lambda}_i|+\sum_i\widetilde{f}_i\right).
\end{eqnarray}
By Proposition 2.19 in \cite{Guan12}, we know that there exists an index $r$ such that
\begin{equation}\label{gueq1}
\sum_{\beta<n}Q^{ij}u_{i\beta}u_{j\beta}\geq\frac{1}{2}\sum_{i\neq r}\widetilde{f}_i\widetilde{\lambda}_i^2.
\end{equation}
Since $\widetilde{f}$ satisfies $\widetilde{f}_i=\frac{\partial\widetilde{f}}{\partial\widetilde{\lambda}_i}=Q^{ii}>0$, $\sum_i\widetilde{f}_i\widetilde{\lambda}_i=\sum_iQ^{ii}u_{ii}=\psi>0$ and $\widetilde{f}$ is a concave function, by Corollary 2.21
in \cite{Guan12}, for the index $r$ and any $\varepsilon>0$,
\begin{equation}\label{gueq2}
\sum_i\widetilde{f}_i|\widetilde{\lambda}_i|\leq\varepsilon\sum_{i\neq r}\widetilde{f}_i\widetilde{\lambda}_i^2+\frac{C}{\varepsilon}\sum_i\widetilde{f}_i+Q(r),
\end{equation}
where $Q(r)=\widetilde{f}(\widetilde{\lambda})-\widetilde{f}(\mathbf{1})$ if $\widetilde{\lambda}_r\geq0$, $\mathbf{1}=(1,\cdots,1)$, and for some constant $K_0\geq0$,
$$Q(r)=\varepsilon nK_0^2\min_{1\leq i\leq n}\frac{1}{\widetilde{f}_i},\quad \mbox{if} ~\widetilde{\lambda}_r<0.$$
Hence \eqref{L2}-\eqref{gueq2} yield that
\begin{eqnarray*}
\nonumber &&A_3\sum_{\beta<n}L|\nabla_{\beta}(u-\underline{u})|^2\pm L(\nabla_{\alpha}(u-\underline{u}))\\
\nonumber&\geq& 2A_3\sum_{\beta<n}Q^{ij}u_{i\beta}u_{j\beta}
-\widehat{C}_1\left(1+\sum_i\widetilde{f}_i|\widetilde{\lambda}_i|+\sum_i\widetilde{f}_i\right)\\
\nonumber&&-A_3\widehat{C}_2(n-1)\left(1+\sum_i\widetilde{f}_i|\widetilde{\lambda}_i|+\sum_i\widetilde{f}_i\right)\\
\nonumber&\geq&A_3\sum_{i\neq r}\widetilde{f}_i\widetilde{\lambda}_i^2-(A_3\widehat{C}_2(n-1)+\widehat{C}_1)\left(1+\varepsilon\sum_{i\neq r}\widetilde{f}_i\widetilde{\lambda}_i^2+\frac{C}{\varepsilon}\sum_i\widetilde{f}_i+\sum_i\widetilde{f}_i+Q (r)\right)\\
\nonumber&\geq&\left(A_3-A_3\widehat{C}_2(n-1)\varepsilon-\widehat{C}_1\varepsilon\right)\sum_{i\neq r}\widetilde{f}_i\widetilde{\lambda}_i^2-A_3\widehat{C}_3(1+\sum_i\widetilde{f}_i)\\
&\geq&-A_3\widehat{C}_3(1+\sum_i\widetilde{f}_i),
\end{eqnarray*}
by choosing $0<\varepsilon<\min\left\{\frac{1}{\widehat{C}_2(n-1)},1\right\}$ and $A_3>\max\left\{\frac{\widehat{C}_1\varepsilon}{1-\widehat{C}_2(n-1)\varepsilon},1\right\}$.
Combining with Lemma \ref{LQ} and choosing $A_1\gg A_2\gg A_3\gg1$, we obtain
\begin{equation*}
\left\{
\begin{aligned}
&L\left(\Phi\pm\nabla_{\alpha}(u-\underline{u})\right)\leq0\quad &&in~M_{\delta},\\
&\Phi\pm\nabla_{\alpha}(u-\underline{u})\geq0\quad &&on~\partial M_{\delta}.
\end{aligned}
\right.
\end{equation*}
Therefore by the maximum principle, we have
$$\Phi\pm\nabla_{\alpha}(u-\underline{u})\geq0\quad in ~M_{\delta}.$$
Thus we obtain
$$|\nabla_{n\alpha}u(x_0)|\leq\nabla_n\Phi(x_0)+|\nabla_{n\alpha}\underline{u}(x_0)|\leq C, \quad\alpha=1, \cdots, n-1. $$
$\mathbf{Case~3:} $ Estimates of $\nabla_{nn}u$ on $\partial M$.\\
We only need to show the uniform upper bound
$$\nabla_{nn}u(x_0)\leq C, \quad \forall~x_0\in\partial M,$$
since $\Gamma_k\subset \Gamma_1$ implies $\Delta u \geq 0$ and the lower bound for $\nabla_{nn} u$ follows from the estimate of $\nabla_{\alpha \beta} u$ and $\nabla_{\alpha n} u$.
We divide the proof into two cases according to the value of $\tau$. The case $\tau=1$ is more complicated and requires a further case analysis.
When $\tau>1$, we have
\begin{eqnarray*}
\nonumber [U(x_0)]&=&\tau \Delta u(x_0) I- \nabla^2u(x_0)\\
\nonumber &\geq& \mbox{diag}(\tau \nabla_{nn}u(x_0), \cdots, \tau \nabla_{nn}u(x_0), (\tau-1) \nabla_{nn}u(x_0)) -\overline{C}_0I\\
\nonumber &\geq& ((\tau-1) \nabla_{nn}u(x_0)-\overline{C}_0)I,
\end{eqnarray*}
where $\overline{C}_0$ depends on $\|\nabla_{\alpha\beta}u\|_{C^0}$ and $\|\nabla_{\alpha n}u\|_{C^0}$. It is clear that
\begin{eqnarray*}
\nonumber \psi(x_0, u(x_0), \nabla u(x_0))=F(U)(x_0)&=& F^{ij}(x_0) U_{ij}(x_0)\\
&\geq & ((\tau-1) \nabla_{nn}u(x_0)-\overline{C}_0) \sum_l F^{ll}.
\end{eqnarray*}
Thus we obtain the upper bound as desired.
When $\tau=1$, by Lemma 1.2 of \cite{CNS85} and the estimates of $\nabla_{\alpha \beta}u$ and $\nabla_{\alpha n}u$, we can choose $R_1>0$ sufficiently large such that if $\nabla_{nn}u(x_0)> R_1$,
\begin{equation*}
\left\{
\begin{aligned}
\widetilde{\lambda}_i[\nabla_{ij}u(x_0)]&=\widetilde{\lambda}_i^\prime [\nabla_{\alpha \beta}u(x_0)]+ o(1), \quad i=1, \cdots, n-1,\\
\widetilde{\lambda}_n [\nabla_{ij}u(x_0)]&=\nabla_{nn}u(x_0)\left(1+O(\frac{1}{\nabla_{nn}u(x_0)})\right). \end{aligned}
\right.
\end{equation*}
Here $\widetilde{\lambda}[\nabla_{ij}u] =(\widetilde{\lambda}_1[\nabla_{ij}u], \cdots, \widetilde{\lambda}_{n}[\nabla_{ij}u])$ denotes the eigenvalues of the $n\times n$ matrix $\nabla^2u$ and $\widetilde{\lambda}^\prime[\nabla_{\alpha \beta}u] =(\widetilde{\lambda}_1^\prime[\nabla_{\alpha \beta}u], \cdots, \widetilde{\lambda}_{n-1}^\prime[\nabla_{\alpha \beta}u])$ denotes the eigenvalues of the $(n-1)\times (n-1)$ matrix $\left[\nabla_{\alpha\beta}u\right]_{1\leq \alpha, \beta\leq n-1}$. For convenience, we denote
\begin{eqnarray*}
\widetilde{\lambda}_i= \widetilde{\lambda}_i[\nabla_{ij}u], \quad \widetilde{\lambda}_{\alpha}^\prime= \widetilde{\lambda}_{\alpha}^\prime [\nabla_{\alpha \beta}u], \quad \widehat{\lambda}_i= \sum_{l=1}^n \widetilde{\lambda}_{l}-\widetilde{\lambda}_i ,\quad \widehat{\lambda}_{\alpha}^\prime= \sum_{i=1}^{n-1} \widetilde{\lambda}_i^\prime- \widetilde{\lambda}_{\alpha}^\prime.
\end{eqnarray*}
If $k<n$,
\begin{eqnarray*}
&& F^{k-l}(U)(x_0)\\
\nonumber&=& \frac{\sigma_k}{\sigma_l}\left(\sum_i \widetilde{\lambda}_i(x_0) -\widetilde{\lambda}_1(x_0), \cdots, \sum_i \widetilde{\lambda}_i(x_0) -\widetilde{\lambda}_n(x_0)\right)\\
\nonumber&= & \frac{\sigma_k}{\sigma_l}\left(\widetilde{\lambda}_n(x_0)+\widehat{\lambda}_{1}^\prime(x_0)+o(1), \cdots, \widetilde{\lambda}_n(x_0)+\widehat{\lambda}_{n-1}^\prime(x_0)+o(1), \sum_{i=1}^{n-1} \widetilde{\lambda}_i^\prime(x_0) +o(1)\right)\\
\nonumber&\geq & \frac{\widetilde{\lambda}^{k}_n(x_0)+ o\left(\widetilde{\lambda}^{k-1}_n(x_0)\right)}{C_n^l\widetilde{\lambda}^{l}_n(x_0)+O\left(
\widetilde{\lambda}^{l-1}_n(x_0)\right)},
\end{eqnarray*}
which implies the uniform upper bound of $\nabla_{nn}u(x_0)$.
If $k=n$, we show the uniform upper bound of $\nabla_{nn}u(x_0)$ by proving that there exists a uniform constant $\overline{C}_1$ such that
\begin{equation}\label{w89}
\min_{x \in \partial M} tr([\nabla_{\alpha\beta}u])\geq \overline{C}_1>0.
\end{equation}
Suppose we have found such $\overline{C}_1$, then
\begin{eqnarray*}
&& F^{n-l}(U)(x_0)\\
\nonumber&= & \frac{\sigma_n}{\sigma_l}\left(\widetilde{\lambda}_n(x_0)+\widehat{\lambda}_{1}^\prime(x_0)+o(1), \cdots, \widetilde{\lambda}_n(x_0)+\widehat{\lambda}_{n-1}^\prime(x_0)+o(1), \sum_{i=1}^{n-1} \widetilde{\lambda}_i^\prime(x_0) +o(1)\right)\\
\nonumber&\geq & \frac{\overline{C}_1\widetilde{\lambda}^{n-1}_n(x_0)+ o\left(\widetilde{\lambda}^{n-1}_n(x_0)\right)}{C_n^l\widetilde{\lambda}^{l}_n(x_0)+O\left(\widetilde{\lambda}^{l-1}_n(x_0)\right)},
\end{eqnarray*}
which implies the uniform upper bound of $\nabla_{nn}u(x_0)$. Hence we only need to prove \eqref{w89}.
Suppose that $tr([\nabla_{\alpha\beta}u])$ attains its minimum at $x_1 \in \partial M$. To show \eqref{w89}, we may assume $tr([\nabla_{\alpha\beta}u(x_1)])< \frac{1}{2}tr([\nabla_{\alpha\beta}\underline{u}(x_1)])$, since otherwise we are done as $tr([\nabla_{\alpha\beta}\underline{u}(x_1)])=\sum_i \widetilde{\lambda}_i[\nabla_{ij}\underline{u}] -\widetilde{\lambda}_n[\nabla_{ij}\underline{u}]>\overline{C}_2$. Let us compute
\begin{eqnarray*}
\nabla_{\alpha\alpha}u=\nabla_{\alpha\alpha}\underline{u}- \nabla_n(u-\underline{u}) B_{\alpha\alpha} \quad \mbox{on}~\partial M.
\end{eqnarray*}
It follows that
\begin{eqnarray}\label{w42}
\nonumber\nabla_n(u-\underline{u})(x_1) \sum_{\alpha} B_{\alpha\alpha}(x_1)&=&tr([\nabla_{\alpha\beta}\underline{u}(x_1)])-tr([\nabla_{\alpha\beta}u(x_1)])\\
&\geq& \frac{1}{2}tr([\nabla_{\alpha\beta}\underline{u}(x_1)])>\frac{\overline{C}_2}{2}.
\end{eqnarray}
For any $x\in \partial M$ near $x_1$, using that $tr([\nabla_{\alpha\beta}u])\mid_{\partial M}$ attains its minimum at $x_1$, we obtain
$$\nabla_n(u-\underline{u})(x) \sum_{\alpha}B_{\alpha\alpha}(x)\leq tr([\nabla_{\alpha\beta}\underline{u}(x)])-tr([\nabla_{\alpha\beta}\underline{u}(x_1)]) +\nabla_n(u-\underline{u})(x_1) \sum_{\alpha}B_{\alpha\alpha}(x_1).$$
Note that $B_{\alpha\alpha}$ is smooth near $\partial M$ and $0<u-\underline{u}\leq C$; combining with \eqref{w42}, we can choose a constant $\delta$ sufficiently small such that
$$\sum_{\alpha} B_{\alpha\alpha} \geq \overline{C}_3>0 \quad \mbox{in}~ M\cap B_{\delta}(x_1)$$
for some uniform constant $\overline{C}_3>0$. Therefore
\begin{eqnarray*}
\nabla_n(u-\underline{u})(x_1) = \Psi(x_1), \quad \nabla_n(u-\underline{u})(x) \leq \Psi(x)\quad \mbox{on} ~B_{\delta}(x_1)\cap \partial M,
\end{eqnarray*}
where $\Psi= \left(\sum_{\alpha} B_{\alpha\alpha} (x) \right)^{-1} \left(tr([\nabla_{\alpha\beta}\underline{u}(x)])-tr([\nabla_{\alpha\beta}\underline{u}(x_1)]) +\nabla_n(u-\underline{u})(x_1) \sum_{\alpha}B_{\alpha\alpha}(x_1)\right)$ is smooth in $M\cap B_{\delta}(x_1)$.
We now apply the argument used for the $\nabla_{\alpha n}u$ estimates again. For $A_1\gg A_2\gg A_3\gg 1$, it remains to prove that
\begin{equation*}
\left\{
\begin{aligned}
&L\left(\Phi+\Psi-\nabla_n(u-\underline{u})\right)\leq0\quad &&in~ M\cap B_{\delta}(x_1),\\
&\Phi+\Psi-\nabla_n(u-\underline{u})\geq 0\quad &&on~ \partial \left(M\cap B_{\delta}(x_1)\right).
\end{aligned}
\right.
\end{equation*}
According to the maximum principle, we have
$$\Phi+\Psi-\nabla_n(u-\underline{u})\geq 0\quad \mbox{in}~ M\cap B_{\delta}(x_1).$$
Thus $\nabla_n\Psi(x_1)-\nabla_{nn}(u-\underline{u})(x_1)\geq-\nabla_n\Phi(x_1)\geq-C$, which implies that $\nabla_{nn}u(x_1)\leq C$.
Therefore,
$$\widetilde{\lambda}_i(x_1)\leq C, \quad i=1,\cdots,n.$$
It is clear that $\widehat{\lambda}=(\widehat{\lambda}_1,\cdots,\widehat{\lambda}_n)\in \Gamma_n$. Hence
\begin{eqnarray*}
\psi^{n-l}(x_1, u(x_1), \nabla u(x_1))&=&\frac{\sigma_n(\widehat{\lambda})}{\sigma_l(\widehat{\lambda})}\leq \frac{\sigma_n(\widehat{\lambda})}{ C_n^l \sigma^{\frac{l}{n}}_n(\widehat{\lambda})}=\frac{\sigma^{1-\frac{l}{n}}_n(\widehat{\lambda})}{C_n^l}.
\end{eqnarray*}
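The middle inequality in the display above is an instance of the Maclaurin inequality applied to $\widehat{\lambda}\in\Gamma_n$ (recorded here for the reader's convenience):
$$\left(\frac{\sigma_l(\widehat{\lambda})}{C_n^l}\right)^{\frac{1}{l}}\geq\left(\frac{\sigma_n(\widehat{\lambda})}{C_n^n}\right)^{\frac{1}{n}}=\sigma_n^{\frac{1}{n}}(\widehat{\lambda}),\quad\mbox{that is,}\quad \sigma_l(\widehat{\lambda})\geq C_n^l\,\sigma_n^{\frac{l}{n}}(\widehat{\lambda}).$$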
It follows that
$$\widehat{\lambda}_i (x_1) \geq \overline{C}_4.$$
We assume without loss of generality that the eigenvalues $\widehat{\lambda}_i(x_1)$ satisfy $\widehat{\lambda}_1(x_1)\leq \widehat{\lambda}_2(x_1)\leq \cdots \leq\widehat{\lambda}_n(x_1)$. According to the Cauchy interlacing inequalities (see e.g. \cite{Wil63}, pp. 103-104),
$$\widehat{\lambda}_{\alpha} (x_1)\leq \widehat{\lambda}_{\alpha}^\prime(x_1)\leq \widehat{\lambda}_{\alpha+1} (x_1).$$
Hence the claim \eqref{w89} holds and we obtain the upper bound of $\nabla_{nn}u(x_0)$ as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main-1}]
The theorem follows directly from Lemma \ref{C0} and Theorems \ref{C1}, \ref{C2-0} and \ref{C2-1}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main}]
From Theorem \ref{main-1}, we have
uniform estimates in $C^2(\overline{M}
)$ for classical elliptic solutions of the Dirichlet problems:
\begin{equation*}
\left\{
\begin{aligned}
&\left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}(U)=tf(x,u,\nabla u)+(1-t) \left(\frac{\sigma_k}{\sigma_l}\right)^{\frac{1}{k-l}}(\underline{U})&&in~
M,\\
&u = \varphi &&on~\partial M,
\end{aligned}
\right.
\end{equation*}
for $0\leq t\leq 1$. Theorem \ref{main} then follows from the second derivative H\"{o}lder estimates of Evans, Krylov and Caffarelli-Nirenberg-Spruck and the method of continuity; for more details see \cite{Gi98}. The uniqueness assertion is immediate from the maximum principle.
\end{proof}
\section{Introduction}
Effective explainable AI systems are critical to constructive human-robot collaboration in a variety of settings. Humans performing separate roles in distinct contexts require different types of information about their AI teammates in order to effectively perform their tasks \cite{sanneman2020situation}. One context in which explainable AI is particularly important is the value alignment setting in which a human and an autonomous agent work together to maximize some reward, but only the human knows the true reward function \cite{fisac2020pragmatic, hadfield2016cooperative}. Through interaction, the agent infers the reward in order to become a better collaborator. It is ideal for the human to behave ``pedagogically'' in this circumstance, taking actions that best teach the agent about the true reward, but this requires the human to track the agent's beliefs about the reward over the course of the interaction \cite{fisac2020pragmatic}. This might be difficult or intractable in some settings given human cognitive limitations. Therefore, enabling an agent to provide feedback to a human about its current understanding of the reward function, as depicted in Figure \ref{twoway}, could be of value.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{twowaycomm1.png}
\caption{Bidirectional Communication for Value Alignment}
\label{twoway}
\end{figure}
In order to understand how to best explain agent reward functions in a value alignment setting, it is first important to understand which approaches for explaining reward functions are most effective in which contexts. To our knowledge, no comprehensive study comparing reward explanation techniques in different types of domains has previously been performed. In addition, no comprehensive way of assessing human reward understanding has been proposed. In this paper, we first outline two overarching categories of reward explanations and a subset of explanation modalities within each category. We then suggest a suite of assessment techniques and metrics for human reward understanding, define a set of axes characterizing domain complexity, and discuss a planned human subject experiment designed to better understand which modalities of reward explanations are most effective in domains of varying complexity. We scope this work to consider human understanding of linear reward functions in particular.
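Since the reward functions in scope are linear, their structure can be stated in one line: $R(s)=\mathbf{w}^\top\phi(s)$ for a feature map $\phi$ and weight vector $\mathbf{w}$. The toy sketch below is our own illustration of this form (the feature values and weights are hypothetical, not taken from any study design):

```python
def linear_reward(features, weights):
    """Linear reward: R(s) = w . phi(s), the form considered in this paper.

    `features` is phi(s), the feature vector of a state s, and `weights`
    is the vector w whose entries the human must come to understand.
    """
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical example with two features of a state.
phi = [1.0, 0.5]
w = [2.0, -1.0]
print(linear_reward(phi, w))  # 2.0*1.0 + (-1.0)*0.5 = 1.5
```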
\section{Related Work}
\label{related}
Previous works have studied the efficacy of a variety of explanation techniques through human subject experiments in different settings. For example, \citet{chakraborti2019plan} assessed whether human participants could identify optimal versus satisficing plans after receiving explanations intended to reconcile their models with a robot's. The authors also asked post-hoc Likert scale questions about whether the explanations were helpful and easy to understand and whether participants were satisfied with the explanations. In another experiment, \citet{tabrez2019explanation} asked Likert scale-based questions about the helpfulness, sociability, and intelligence of a robot that provided explanations about its reward function to human participants. \citet{wang2016impact} further assessed human-agent team performance, the percentage of correct human decisions, and a human's understanding of an agent's decisions given explanations related to the different components of the agent's POMDP-based representation of a task. Finally, \citet{lage2019evaluation} measured a person's understanding of explanations of various sizes and that provided different types of information using a combination of assessment techniques including simulation of the system's actions, verification of a system's response, and counterfactual reasoning.
Other works have compared multiple explanation techniques in the context of a single domain. For example, \citet{anderson2019explaining} compared a person's understanding of an agent's reward function given both saliency maps and decomposed reward bars provided at each decision point in a simple Real-Time Strategy game. They measured the person's reward understanding through the accuracy of the person's predictions of the agent's actions coupled with an open-ended questionnaire asking participants to describe the agent's approach or method for making decisions. In another study, \citet{huang2019enabling} assessed a person's reward understanding by asking them to identify an agent's optimal trajectory after being provided with demonstrations of the agent's behavior which were generated either assuming that the human will perform exact inference or approximate inference of the agent's objectives. In their experiment, the human is assumed to know the correct set of features that the agent is using to make decisions. So far, we have not identified any work that has compared a broad set of reward explanation techniques provided through multiple explanation modalities in multiple domains in an experimental setting with human subjects. Further, while some preliminary assessment techniques for human reward understanding have been applied in a subset of these studies, no comprehensive way of assessing human reward understanding exists.
\section{Reward Explanation Techniques}
\label{xaitechniques}
We group reward explanation techniques into two categories: feature space techniques and policy space techniques. Feature space techniques explain the reward function in terms of the individual features that comprise the reward function and their relative weights. Policy space techniques explain the reward function through demonstrations of actions in the environment along with how the demonstrated state-action pairs relate to the policy (best/worst actions, important states, etc.). Note that the goal in both cases is to communicate information about the features and their weights and to support understanding of what this means in terms of action in the environment, but the modality of communication differs between the two categories. Feature space techniques may be most applicable when the reward function can be easily represented by a limited number of interpretable features, while policy space techniques may be ideal when reward functions are uninterpretable or otherwise difficult to reason about in terms of translation into actions in the environment. Here we introduce sub-categories of feature and policy space techniques as well as examples from the literature that fall into each category. These categories represent a broad range of common reward explanation modalities. We intend to select one technique from each category for our future human subject experiment.
\subsection{Feature Space Techniques}
\subsubsection{Direct Reward Function}
One straightforward approach to communicating reward information to humans is to show them the reward function, including all features and their weights, directly. While this might be the most direct and complete way of communicating reward information, there are a number of potential shortcomings of this approach. First, if the domain involves a large number of features, it might be difficult for a person to reason over all of these features simultaneously. Second, in cases in which features are uninterpretable to humans (as with deep reinforcement learning), explaining reward information directly may be infeasible. Finally, even if humans are able to understand and reason over all features and their weights, they may not be able to convert this information into an optimal plan or otherwise use the information for their tasks.
\subsubsection{Feature Subset}
Reward information can also be communicated to humans in feature space through subsets of features and their relative weights. This might be a better approach if there are too many features for the human to reason over simultaneously or in order to help the human to focus only on the most important aspects of robot decision-making, for example. Displaying subsets of features has previously been applied in the context of classification tasks, including producing prototypes of different classes \cite{kim2014bayesian} and identifying the optimal feature subset given a budget of information to display \cite{ribeiro2016should}. While these explanation techniques show users subsets of features, they are not directly applied to reward functions, which we study in this work. \citet{tabrez2019explanation} introduce a technique that infers a human's reward function based on their actions and explains expected missing information. While this technique inherently provides humans with a subset of reward features, in this experiment we are interested in cases in which the reward function is explained from scratch. In our assessment, we will apply a similar approach to that introduced by \citet{ribeiro2016should}.
\subsubsection{Reward Abstractions}
Finally, reward functions can be explained to humans in feature space using abstractions of features and their relative weights. For example, multiple features may be combined to create one feature or high-level concepts can be combined to create an alternate representation of the reward function. This approach may be especially beneficial if there are too many features for a human to reason over simultaneously or if the features are uninterpretable to humans in some way. Previous works have leveraged user-defined interpretable concepts to learn human-understandable representations of the reward function \cite{lage2020human, sreedharan2020bridging}. In our assessment of reward explanation techniques, we intend to pre-define a set of concepts such as those introduced by \citet{lage2020human} and \citet{sreedharan2020bridging} and learn the appropriate weights via regression.
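To make the regression step concrete, the following is a minimal sketch of fitting concept weights by ordinary least squares. The concept matrix \texttt{Phi} and reward vector \texttt{r} are entirely hypothetical illustrative data, not taken from any of the cited works.

```python
import numpy as np

# Hypothetical concept features: each row is a state-action pair described
# by three user-defined concepts (e.g. "near goal", "in danger", "fuel used").
Phi = np.array([
    [1.0, 0.0, 0.2],
    [0.0, 1.0, 0.5],
    [1.0, 1.0, 0.1],
    [0.0, 0.0, 0.9],
])
# Hypothetical rewards observed for those state-action pairs.
r = np.array([0.9, -0.4, 0.6, -0.8])

# Fit concept weights by ordinary least squares: Phi @ w ~= r.
w, residuals, rank, _ = np.linalg.lstsq(Phi, r, rcond=None)
print(w.shape)  # (3,) -- one learned weight per concept
```

The learned weights \texttt{w} then serve as the human-interpretable abstraction of the reward function.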
\subsection{Policy Space Techniques}
\subsubsection{Trajectory Demonstrations}
Another way of revealing information about the robot's reward function to human teammates is through trajectory demonstrations. Trajectory demonstrations could include one or multiple sequences of states and actions generated based on the robot's reward function or policy. For example, the robot could demonstrate the optimal trajectory based on its reward function, much as humans are often assumed to do when teaching a robot through learning from demonstration \cite{argall2009survey}. The robot could also provide the most legible trajectory, where a legible trajectory is defined by \citet{dragan2013legibility} as a trajectory that enables an observer to confidently infer the correct robot goal. Finally, the robot could demonstrate the least optimal trajectory based on its reward function in order to illustrate examples of unfavorable state-action pairs. Note that in many cases, there are many optimal or otherwise equivalent trajectories that could be provided to users, and strategies exist to down-select from these multiple possibilities, for example through the selection of maximally informative trajectories \cite{huang2019enabling, lee2021machine, cakmak2012algorithmic}. In our assessment, we will provide users with the most and least optimal trajectories as a simple baseline.
\subsubsection{Policy Summarization}
Policy summarization techniques demonstrate agent behavior in a subset of informative states given different conditions and scenarios \cite{amir2018agent, lage2019exploring}. Such techniques may be beneficial when the state and action space is large and when it might be difficult for a human user to extrapolate important aspects of the agent's behavior based solely on one or a few optimal or legible trajectories. Given a budget of trajectory segments to provide in a summary, the state-action pairs to include in each segment can be determined based on the importance and diversity of states \cite{amir2018highlights}. Multiple definitions of important states have been proposed, including both Q-function-based \cite{amir2018highlights} and entropy-based \cite{huang2018establishing} definitions. \citet{amir2018highlights} define important states as states from which taking a wrong action can lead to a significant decrease in future rewards, as determined by
the agent’s Q-values. In particular, they consider the most important states to be states in which the difference between the Q-values associated with the best and worst actions is maximized. We will leverage this definition and the approach introduced by \citet{amir2018highlights} in providing policy summaries.
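The state-importance criterion above admits a short illustrative implementation. The Q-table below is a hypothetical toy example; only the max-min Q-value gap definition of \citet{amir2018highlights} is taken from the text.

```python
import numpy as np

# Hypothetical Q-table: rows are states, columns are actions.
Q = np.array([
    [1.0, 0.9, 0.8],   # small gap between best/worst action -> unimportant
    [2.0, 0.1, -1.0],  # large gap -> important
    [0.5, 0.5, 0.5],   # zero gap -> action choice does not matter
])

def important_states(Q, budget):
    """Rank states by the gap between the best and worst Q-values
    (the importance notion described above) and keep `budget` states."""
    importance = Q.max(axis=1) - Q.min(axis=1)
    return np.argsort(importance)[::-1][:budget]

print(important_states(Q, budget=2))  # [1 0]: state 1 has the largest gap
```

In a full policy summary, diversity criteria would additionally be applied to the selected states.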
\subsubsection{Factored Policies via Reward Decomposition}
Finally, reward functions can be explained in policy space through factored policies derived from factored rewards \cite{anderson2019explaining, juozapaitis2019explainable}. In these approaches, the reward function is broken into individual components (in a linear reward function, these might correspond to each feature, for example). A decomposed Q-function with components corresponding to each individual reward component is then learned. With this decomposed Q-function, the contribution of each state-action pair to the individual reward components can be displayed. In our assessment, we will display factored Q-function information for different state-action pairs. We will select states for which we will display factored Q-function information by leveraging a similar approach to the state importance-based selection strategy described by \citet{amir2018highlights}.
\section{Reward Understanding Assessments}
In order to assess a person's reward understanding, we suggest the use of a suite of four different assessment techniques: free response, feature sub-selection, preference elicitation, and best demonstration elicitation. Since a variety of assessment techniques have been applied in the past and there is not currently a standard way of assessing a human's reward understanding, using a suite of techniques will allow us to consider multiple types of human input and compare the results from each. The following are descriptions of the assessment techniques along with the associated metric for each. We also introduce three composite metrics that are based on the four individual metrics proposed here.
\subsection{Free Response (FR)}
For the free response assessment, subjects will be asked how they think the robot makes decisions in each domain. They will be able to provide free-form answers about the factors they think the robot uses in decision-making and how important each factor is relative to the others. Their responses will be coded in a similar way as in previous literature \cite{anderson2019explaining, kim2017collaborative, hoffman2018metrics}, and
the coded response will be used to produce the set of features that the human believes are important, $F_H^{fr}$, as well as the set of pairwise comparisons of their relative weights, $W_H^{fr}$ (e.g. $w_A > w_B$, where $w_i$ is the weight of feature $i$). Given the robot's ground truth set of features, $F_R^{fr}$, and pairwise comparisons of its relative weights, $W_R^{fr}$, the metric for the free response assessment, based on the similarity metrics used by \citet{shah2020interactive}, can be defined as the intersection over union of the set of human features and pairwise rankings and the ground truth set of robot features and pairwise rankings: \[FR = \dfrac{|(F_H^{fr} \cup W_H^{fr}) \cap (F_R^{fr} \cup W_R^{fr})|}{|(F_H^{fr} \cup W_H^{fr}) \cup (F_R^{fr} \cup W_R^{fr})|}\]
\subsection{Feature Sub-selection (FS)}
For the feature sub-selection assessment, subjects will be provided with a list of possible features (only a subset of which are actually used by the robot), and they will be asked to select the ones they believe are relevant to the scenario and assign relative weights to each. $F_H^{fs}$, $W_H^{fs}$, $F_R^{fs}$, and $W_R^{fs}$ are defined as in the previous section and yield the following metric for feature sub-selection (also based on the previously-discussed similarity metric): \[FS = \dfrac{|(F_H^{fs} \cup W_H^{fs}) \cap (F_R^{fs} \cup W_R^{fs})|}{|(F_H^{fs} \cup W_H^{fs}) \cup (F_R^{fs} \cup W_R^{fs})|}\]
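Both the $FR$ and $FS$ metrics are intersection-over-union scores over coded feature sets and pairwise weight rankings. A minimal sketch, with hypothetical coded responses, is:

```python
def similarity(features_h, rankings_h, features_r, rankings_r):
    """Intersection-over-union of the human's and robot's feature sets
    and pairwise weight rankings (the FR/FS metric above)."""
    human = set(features_h) | set(rankings_h)
    robot = set(features_r) | set(rankings_r)
    return len(human & robot) / len(human | robot)

# Hypothetical coded responses: the human recovered both features but
# reversed the ranking of their weights.
fr = similarity({"A", "B"}, {("A", ">", "B")},
                {"A", "B"}, {("B", ">", "A")})
print(fr)  # 2 common elements out of 4 total -> 0.5
```

The same function applies to the feature sub-selection data, with the coded free-response sets replaced by the subject's explicit selections.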
\subsection{Preference Elicitation (PE)}
Preference elicitation involves presenting subjects with multiple trajectories and asking them to select the best, similar to an active learning approach (Settles 2012). We generate queries using the maximum information gain strategy proposed by \citet{biyik2019asking}. We define the set of human responses to these queries as $q_H$ and the set of ground truth correct responses from the robot as $q_R$. Given these, the metric for preference elicitation is defined as the percent of correct human responses (i.e. recall): \[PE = \dfrac{|q_H \cap q_R|}{|q_R|}\]
\subsection{Best Demonstration (BD)}
For the best demonstration assessment, subjects will be asked to provide demonstrations that they believe are optimal given what they know about the agent's reward function. This assessment is similar to the ``simulation'' assessment used by \citet{lage2019evaluation}. The metric we consider for the best demonstration is the complement of the normalized regret: \[BD = 1 - \dfrac{R(\xi^{*}) - R(\xi^{H})}{R(\xi^{*})} \] where $R(\xi^{*})$ is the reward for the optimal trajectory $\xi^{*}$ and $R(\xi^{H})$ is the reward for the human's demonstration $\xi^{H}$. We normalize regret because all of the other assessment metrics are normalized, and we take the complement since larger values indicate better understanding for the other assessments. These two steps allow us to combine the four assessment metrics into composite metrics more readily.
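The $BD$ metric can be computed directly from the two trajectory rewards. The sketch below assumes, as the normalization implicitly does, that the optimal reward is positive; the numeric rewards are hypothetical.

```python
def best_demonstration_score(reward_optimal, reward_human):
    """Complement of normalized regret: 1.0 when the human demonstrates
    an optimal trajectory, decreasing as regret grows.
    Assumes reward_optimal > 0, matching the normalization above."""
    regret = (reward_optimal - reward_human) / reward_optimal
    return 1.0 - regret

print(best_demonstration_score(10.0, 10.0))  # 1.0 (optimal demonstration)
print(best_demonstration_score(10.0, 7.5))   # 0.75
```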
\subsection{Composite Metrics}
Finally, we propose three composite metrics based on combinations of the four individual metrics. Just as we divided reward explanation techniques into feature space techniques and policy space techniques, we can similarly divide our assessment metrics into feature space metrics, which ask directly about features and their weights ($FR$ and $FS$), and policy space metrics, which ask about the behaviors that result from reward functions ($PE$ and $BD$). All four individual metrics are normalized, and so we weight them equally in combining them to form the composite metrics. Accordingly, we propose the composite feature space metric: $F = FR + FS$
We also propose the composite policy space metric: $P = PE + BD$
Finally, we propose the overall composite metric: $C = F + P$
\section{Axes of Domain Complexity}
In characterizing domains, we consider four different axes of complexity: reward function complexity, feature complexity, environment complexity, and situational complexity. When considering reward functions that are linear in features, we can vary reward function complexity by considering reward functions with more or fewer features. Feature complexity is related to how complex each individual feature within the linear reward function is. While there might be many ways to characterize feature complexity, we consider the interpretability of individual features as a measure of their complexity. Environment complexity includes factors such as the size of the state and action spaces, whether the state and action spaces are discrete or continuous, or whether a domain is Markovian or non-Markovian. Finally, situational complexity indicates whether a person will need to perform other tasks at the same time as receiving the explanation and the number and difficulty of those tasks.
\section{Proposed Experiment}
We propose an experiment to test each of the different explanation techniques in domains of varying complexity as defined by the four axes outlined in the previous section. We have selected four domains in order to cover a broad range of complexities. Our domains include a simple grid world scenario,
OpenAI Gym's Lunar Lander
\cite{brockman2016openai}, the threats and waypoints domain proposed by \citet{shah2018bayesian},
and the threats and waypoints domain combined with a secondary task in which the human needs to monitor a robot traversing in rocky terrain.
\subsection{Hypotheses}
Our hypotheses for the proposed experiment include those listed below. We intend to assess our hypotheses using the proposed metrics for reward understanding.
\begin{hyp}
Feature space techniques will lead to better reward understanding than policy space techniques in domains of low reward, feature, and environment complexity.
\end{hyp}
\begin{hyp}
Policy space techniques will lead to better reward understanding than feature space techniques in domains of high reward, feature, and environment complexity.
\end{hyp}
\begin{hyp}
The best modality of information (feature versus policy space) will not change between scenarios with low versus high situational complexity in domains of the same reward, feature, and environment complexities.
\end{hyp}
\begin{hyp}
Reward understanding will be worse in scenarios with high versus low situational complexity for both feature space techniques and policy space techniques.
\end{hyp}
\section{Conclusion}
\label{conclusion}
In this paper, we define categories of existing reward explanation techniques representing a broad set of explanation modalities, and we identify a specific approach we plan to implement in the context of a human subject experiment from each category. We also suggest a suite of assessment techniques and metrics for human reward understanding. These techniques and metrics integrate multiple modalities of human information understanding, including both feature-based information and behavior-/policy-based information. Finally, we define four axes of domain complexity and outline a future experiment to better understand which reward explanation techniques are most effective in which contexts. We hope that the proposed characterization of reward explanation techniques along with the assessment techniques and metrics will contribute to a more systematic understanding of which reward explanation techniques are most beneficial in different contexts through future human subject experiments.
\newpage
\section{Introduction}\label{s1}
For a smooth compact Riemannian manifold $M$ without boundary, the Hamiltonian $H$ is usually characterized as a $C^{r\geq 2}-$smooth function on the cotangent bundle $T^*M$, with the associated Hamilton equation defined by
\begin{eqnarray}\label{eq:ham}
{\sf (Conservative)\quad }\left\{
\begin{aligned}
\dot x&=\partial_p H(x,p)\\
\dot p&=-\partial_x H(x,p)
\end{aligned}
\right.
\end{eqnarray}
for each initial point $(x,p)\in T^*M$. From the physical viewpoint, the Hamiltonian equation describes the motion of particles with conserved energy, since the Hamiltonian $H(x,p)$ is a {\sf first integral} of (\ref{eq:ham}). In particular, if
the potential depends periodically on the time $t$ (for systems with periodic propulsion or procession), we can introduce an augmented Hamiltonian
\begin{eqnarray}
\widetilde H(x,p,t,I)=I+H(x,p,t),\quad\quad(x,p,t,I)\in T^*M\times T^*\mathbb{T}
\end{eqnarray}
such that the associated Hamiltonian equation
\begin{eqnarray}\label{eq:ham-aug}
{\sf (Conservative)\quad }\left\{
\begin{aligned}
\dot x&=\partial_p H(x,p,t)\\
\dot p&=-\partial_x H(x,p,t)\\
\dot t&=1\\
\dot I &=-\partial_t H(x,p,t)
\end{aligned}
\right.
\end{eqnarray} still preserves $\widetilde H$. \medskip
However, the realistic motion of the masses inevitably sustains a dissipation of energy, due to friction from the environment, e.g. the wind, the fluid, the interface, etc. This urges us to modify the previous equations accordingly.
In the current paper, the damping is assumed to be time-periodically proportional to the momentum. Precisely, we modify \eqref{eq:ham-aug} into
\begin{eqnarray}\label{eq:dis}
{\sf (Dissipative)\quad }\left\{
\begin{aligned}
\dot x&=\partial_p H(x,p,t)\\
\dot p&=-\partial_x H(x,p,t)-f(t)p\\
\dot t&=1\\
\dot I &=-\partial_t H(x,p,t)-f'(t)u-f(t)I\\
\dot u&=\langle H_p,p\rangle-H+\alpha-f(t)u
\end{aligned}
\right.
\end{eqnarray}
with $\alpha\in\mathbb{R}$ being a constant of initial energy and $f\in C^{r\geq 2}(\mathbb{T}:=\mathbb{R}\slash[0,1], \mathbb{R})$. Notice that the former three equations of (\ref{eq:dis}) are decoupled from the latter two, so we can denote the flow of the former three equations in \eqref{eq:dis} by $\varphi_H^t$ and by $\widehat \varphi_{ H}^t$ the flow of the whole \eqref{eq:dis}. The following individual cases of $f(t)$ will be considered:
\begin{itemize}
\item {\bf (H0$^-$)} $[f]:=\int_0^1f(t)dt>0$
\item {\bf (H0$^+$)} $[f]<0$
\item {\bf (H0$^0$)} $[f]=0$
\end{itemize}
Besides, we
propose the following {\sf standing assumptions} for the Hamiltonian:
\begin{itemize}
\item {\bf (H1)} {\sf [Smoothness]} $H:TM\times\mathbb{T}\rightarrow\mathbb{R}$ is $C^{r\geq 2}$ smooth;
\item {\bf (H2)} {\sf [Convexity]} For any $(x,t)\in M\times\mathbb{T}$, $H(x,\cdot,t)$ is strictly convex on $ T_x^*M$;
\item {\bf (H3)} {\sf [Superlinearity]} For any $(x,t)\in M\times\mathbb{T}$, $\lim_{|p|_x\rightarrow +\infty}H(x,p,t)/|p|_x=+\infty$ where $|\cdot|_x$ is the norm deduced from the Riemannian metric.
\item {\bf (H4)} {\sf [Completeness]} For any $(x,p,\theta)\in T^*M\times\mathbb{T}$, the flow $\varphi_H^t(x,p,\theta)$ exists for all $t\in\mathbb{R}$.
\end{itemize}
\begin{rmk}\label{rmk:pro}
\begin{itemize}
\item[i)] As we can see, the three different cases of {\bf (H0)} respectively lead to a {\sf dissipation, acceleration and periodic conservation} of energy along $\widehat \varphi_H^t$ in forward time, if we take
\begin{eqnarray}\label{eq:ham-main}
\widehat H(x,p,t,I, u)=\widetilde H(x,p,t,I)+f(t)u-\alpha.
\end{eqnarray}
This is because $\dfrac d{dt}\widehat H=-f(t)\widehat H$.
\item[ii)] {\bf (H1-H3)} are usually called {\sf Tonelli conditions}. As for {\bf (H4)}, the completeness of $\varphi_H^t$ is actually equivalent to the completeness of $\widehat \varphi_H^t$. A sufficient condition for {\bf (H4)} is the following:
\[
|H_x|\leq \kappa (1+|p|_x) \text{ for all }(x,p,t)\in T^*M\times\mathbb{T}
\]
for some constant $\kappa$.
\item[iii)] Observe that the time-1 map $\varphi_H^1:\{(x,p,t=0)\}\rightarrow\{(x,p,t=0)\}$ is {\sf conformally symplectic}, i.e.
\[
(\varphi_H^1)^*dp\wedge dx=e^{[f]}dp\wedge dx.
\]
Such maps have wide applications in astronomy \cite{CC}, optimal transport \cite{WL}, biological physics \cite{Ca2} and economics \cite{B} etc (see Sec. \ref{sp} for more details).
\end{itemize}
\end{rmk}
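For the reader's convenience, the identity $\frac{d}{dt}\widehat H=-f(t)\widehat H$ invoked in Remark \ref{rmk:pro} follows from a direct computation along \eqref{eq:dis}:

```latex
\begin{align*}
\frac{d}{dt}\widehat H
&= \dot I + \langle H_x,\dot x\rangle + \langle H_p,\dot p\rangle + H_t
   + f'(t)u + f(t)\dot u\\
&= \big(-H_t - f'(t)u - f(t)I\big) + \langle H_x,H_p\rangle
   - \langle H_p,\, H_x + f(t)p\rangle + H_t + f'(t)u\\
&\qquad + f(t)\big(\langle H_p,p\rangle - H + \alpha - f(t)u\big)\\
&= -f(t)\big(I + H + f(t)u - \alpha\big)
 = -f(t)\,\widehat H.
\end{align*}
```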
In the following, we will explore the dynamics of \eqref{eq:dis} by using an analogue of Aubry-Mather theory \cite{Mat} or weak KAM theory \cite{Fa}. Similar research was carried out in \cite{CCJW,WWY2,WWY3} for generalized $1^{st}$ order PDEs, and the current paper shares many methodological similarities with these works.
\subsection{Variational Principle and Hamilton-Jacobi equation} As the dual of the Hamiltonian, the {\sf Lagrangian} is defined by the following
\begin{eqnarray}\label{eq:led}
L(x,v,t):=\max_{p\in T^*_xM} \langle p,v\rangle-H(x,p,t),\quad (x,v,t)\in TM\times\mathbb{T}.
\end{eqnarray}
where the maximum is achieved at the unique $p$ satisfying $v=H_p(x,p,t)\in T_xM$, once {\bf (H1-H3)} are assumed. Therefore, the {\sf Legendre transformation}
\begin{equation}
\mathcal{L}: T^*M\times\mathbb{T}\rightarrow TM\times\mathbb{T}, \quad\text{via } (x,p,t)\rightarrow (x, H_p(x,p,t),t)
\end{equation}
is a diffeomorphism. Notice that the Lagrangian $L:TM\times\mathbb{T}\rightarrow\mathbb{R}$ is also $C^r-$smooth, and convex and superlinear in $v\in T_x M$, so by a slight abuse of notation we say it satisfies {\bf (H1-H3)} as well.
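As a quick numerical illustration of the duality (\ref{eq:led}), the following toy computation approximates the Legendre transform of the model Hamiltonian $H(p)=p^2/2$ by maximizing over a grid, recovering $L(v)=v^2/2$; the choice of Hamiltonian and grid is illustrative only.

```python
import numpy as np

# Numerical Legendre transform L(v) = max_p (p*v - H(p)), sketched for the
# mechanical Hamiltonian H(p) = p**2 / 2, whose dual is L(v) = v**2 / 2.
p_grid = np.linspace(-10.0, 10.0, 20001)

def H(p):
    return 0.5 * p ** 2

def L(v):
    # Maximize the linear-minus-Hamiltonian expression over the p-grid.
    return np.max(p_grid * v - H(p_grid))

for v in (0.0, 1.0, 2.0):
    print(round(L(v), 3))  # approximately v**2 / 2
```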
As the conjugate of $\varphi^t_H$ under the Legendre transformation, the {\sf Euler-Lagrange flow} $\varphi_L^t$ is defined by
\begin{equation}\label{eq:e-l}\tag{E-L}
\left\{
\begin{aligned}
&\dot x=v,\\
&\frac d{dt}L_v(x,v,t)=L_x(x,v,t)-f(t)L_v(x,v,t),\\
&\dot t=1.
\end{aligned}
\right.
\end{equation}
It is equally effective in exploring the dynamics of (\ref{eq:dis}), and the completeness of $\varphi_L^t$ is equivalent to the completeness of $\varphi_H^t$.
To give \eqref{eq:e-l} a variational characterization, we introduce the following minimal action over absolutely continuous curves with fixed endpoints
\[
h_\alpha^{s,t}(x,y)=\inf_{\substack{\gamma\in C^{ac}([s,t],M) \\ \gamma(s)=x,\gamma(t)=y}}\int^t_se^{F(\tau)}(L(\gamma,\dot{\gamma},\tau)+\alpha)\mbox{d}\tau,
\]
where $F(t)=\int^t_0f(\tau)\mbox{d}\tau$ and $\alpha\in\mathbb{R}$.
It is a classical result in the calculus of variations that the infimum is always attained for all $s<t\in\mathbb{R}$, by a minimizer which is actually $C^r-$smooth and satisfies (\ref{eq:e-l}), once {\bf (H4)} is assumed (due to the {\sf Weierstrass Theorem} in \cite{Mat} or Theorem 3.7.1 of \cite{Fa}).
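The minimization defining $h_\alpha^{s,t}$ can be illustrated numerically. The toy sketch below discretizes the weighted action for $L(x,v,t)=v^2/2$ on $M=\mathbb{R}$ (an illustrative stand-in for a compact manifold), with $f\equiv 1$ (so $F(t)=t$, a case of {\bf (H0$^-$)}) and $\alpha=0$; since this discrete action is quadratic, its first-order conditions reduce to a tridiagonal linear system.

```python
import numpy as np

# Minimize the discretized weighted action
#   sum_k e^{F(t_k)} * 0.5 * ((x_{k+1} - x_k) / dt)**2 * dt
# for L(x, v, t) = v**2 / 2, f(t) = 1 (so F(t) = t), alpha = 0,
# with fixed endpoints x(0) = 0 and x(1) = 1.
n = 50
times = np.linspace(0.0, 1.0, n + 1)
w = np.exp(times[:-1])            # weights e^{F(t_k)} on each subinterval
x0, x1 = 0.0, 1.0

# Stationarity at interior nodes: w_{j-1} x_{j-1} - (w_{j-1}+w_j) x_j + w_j x_{j+1} = 0.
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for j in range(1, n):             # interior nodes x_1 .. x_{n-1}
    A[j - 1, j - 1] = -(w[j - 1] + w[j])
    if j > 1:
        A[j - 1, j - 2] = w[j - 1]
    else:
        b[j - 1] -= w[0] * x0     # boundary term from x(0)
    if j < n - 1:
        A[j - 1, j] = w[j]
    else:
        b[j - 1] -= w[n - 1] * x1 # boundary term from x(1)

x = np.concatenate(([x0], np.linalg.solve(A, b), [x1]))
# The minimizer satisfies d/dt(e^t * xdot) = 0, i.e. xdot proportional to e^{-t}:
print(np.all(np.diff(x) > 0))  # True: the discrete minimizer is monotone here
```

The computed grid values agree with the explicit minimizer $x(t)=(1-e^{-t})/(1-e^{-1})$ of this toy problem.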
\begin{thm}[main 1]\label{thm:1} For $f(t)$ satisfying {\bf (H0$^-$)}, $H(x,p,t)$ satisfying {\bf (H1-H4)} and any $\alpha\in\mathbb{R}$, the following
\[
u_\alpha^-(x,t):=\inf_{\substack{\gamma\in C^{ac}((-\infty,t],M)\\\gamma(t)=x}}\int^t_{-\infty}e^{F(s)-F(t)}(L(\gamma(s),\dot{\gamma}(s),s)+\alpha)\mbox{d}s
\]
is well defined for $(x,t)\in M\times\mathbb{R}$ and satisfies
\begin{enumerate}
\item {\sf (Periodicity)} $u_\alpha^-(x,t+1)=u_\alpha^-(x,t)$ for any $x\in M$ and $t\in\mathbb{R}$. By taking $\bar t\in[0,1)$ with $t\equiv \bar t\ (mod\ 1)$ for any $t\in\mathbb{R}$, we can interpret $u_\alpha^-$ as a function on $M\times\mathbb{T}$.
\item {\sf (Lipschitzness)} $u_\alpha^-:M\times\mathbb{T}\rightarrow \mathbb{R}$ is Lipschitz, with the Lipschitz constant depending on $L$ and $f$;
\item {\sf (Domination\footnote{Any function $\omega\in C(M\times\mathbb{T},\mathbb{R})$ satisfying (\ref{eq:dom}) is called a {\sf (viscosity) subsolution} of (\ref{eq:sta-hj}) and denoted by $\omega\prec_f L+\alpha$.
})} For any absolutely continuous curve $\gamma:[s,t]\rightarrow M$ connecting $(x,\bar s)\in M\times \mathbb{T}$ and $(y,\bar t)\in M\times\mathbb{T}$, we have
\begin{eqnarray}\label{eq:dom}
e^{F(t)}u_\alpha^-(y,\bar t)-e^{F(s)}u_\alpha^-(x,\bar s)\leq \int_s^t e^{F(\tau)}\Big(L(\gamma,\dot\gamma,\tau)+\alpha\Big)d\tau.
\end{eqnarray}
\item {\sf (Calibration)} For any $(x,\theta)\in M\times\mathbb{T}$, there exists a {\sf backward calibrated curve} $\gamma_{x,\theta}^-:(-\infty,\theta]\rightarrow M$, $C^r-$smooth and ending with $\gamma_{x,\theta}^-(\theta)=x$, such that for all $s\leq t\leq\theta$, we have
\begin{eqnarray}\label{eq:cal}
& &e^{F(t)}u_\alpha^-(\gamma_{x,\theta}^-(t),\bar t)-e^{F( s)}u_\alpha^-(\gamma_{x,\theta}^-(s),\bar s)\nonumber\\
&=&\int_s^t e^{F(\tau)}\Big(L(\gamma_{x,\theta}^-,\dot\gamma_{x,\theta}^-,\tau)+\alpha\Big)d\tau.
\end{eqnarray}
\item {\sf (Viscosity)} $u_\alpha^-:M\times\mathbb{T}\rightarrow\mathbb{R}$ is a viscosity solution of the following {\sf Stationary Hamilton-Jacobi equation} (with time periodic damping):
\begin{equation}\label{eq:sta-hj}\tag{HJ$_+$}
\partial_t u+f(t)u+H(x,\partial_x u,t)=\alpha,\quad(x,t)\in M\times\mathbb{T},\ \alpha\in\mathbb{R}.
\end{equation}
\end{enumerate}
\end{thm}
\begin{thm}[main 1']\label{cor:1}
For $f(t)$ satisfying {\bf (H0$^0$)} and $H(x,p,t)$ satisfying {\bf (H1-H4)}, there exists a unique $c(H)\in\mathbb{R}$ {\sf (Ma\~n\'e Critical Value)} such that
\begin{equation}\label{eq:cv}
u^-_{z,\bar{\varsigma}}(x,\bar{t}):=\varliminf_{\substack{\bar{\varsigma}\equiv\varsigma, \bar{t}\equiv t(mod\; 1) \\ t-\varsigma\to+\infty}}\bigg(\inf_{\substack{\gamma\in C^{ac}([\varsigma,t],M) \\\gamma(\varsigma)=z,\gamma(t)=x }}\int^t_{\varsigma}e^{F(\tau)-F(t)}\big(L(\gamma,\dot{\gamma},\tau)+c(H)\big)\mbox{d}\tau\bigg)
\end{equation}
is well defined on $M\times\mathbb{T}$ (for any fixed $(z,\bar{\varsigma})\in
M\times\mathbb{T}$) and satisfies
\begin{enumerate}
\item {\sf (Lipschitzness)} $u_{z,\bar{\varsigma}}^-:M\times\mathbb{T}\rightarrow \mathbb{R}$ is Lipschitz.
\item {\sf (Domination)}
For any Lipschitz continuous curve $\gamma:[s,t]\rightarrow M$ connecting $(x,\bar s)\in M\times \mathbb{T}$ and $(y,\bar t)\in M\times\mathbb{T}$, we have
\begin{eqnarray}\label{eq:dom-c}
e^{F(t)}u_{z,\bar{\varsigma}}^-(y,\bar t)-e^{F(s)}u_{z,\bar{\varsigma}}^-(x,\bar s)
\leq \int_s^t e^{F(\tau)}\Big(L(\gamma,\dot\gamma,\tau)+c(H)\Big)d\tau.
\end{eqnarray}
Namely, $u_{z,\bar{\varsigma}}^-\prec_f L+c(H)$.
\item {\sf (Calibration)} For any $(x,\theta)\in M\times\mathbb{T}$, there exists a $C^r$ curve $\gamma_{x,\theta}^-:(-\infty,\theta]\rightarrow M$ with $\gamma_{x,\theta}^-(\theta)=x$, such that for all $s\leq t\leq\theta$, we have
\begin{eqnarray}\label{eq:cal-c}
& &e^{F(t)}u_{z,\bar{\varsigma}}^-(\gamma_{x,\theta}^-(t),\bar t)-e^{F( s)}u_{z,\bar{\varsigma}}^-(\gamma_{x,\theta}^-(s),\bar s)\nonumber\\
&=&\int_s^t e^{F(\tau)}\Big(L(\gamma_{x,\theta}^-,\dot\gamma_{x,\theta}^-,\tau)+c(H)\Big)d\tau.
\end{eqnarray}
\item {\sf (Viscosity)} $u_{z,\bar{\varsigma}}^-$ is a viscosity solution of
\begin{equation}\label{eq:sta-hj2}\tag{HJ$_0$}
\partial_t u+f(t)u+H(x,\partial_x u,t)=c(H),\quad(x,t)\in M\times\mathbb{T}.
\end{equation}
\end{enumerate}
\end{thm}
Following the terminologies in \cite{Fa,MS}, it is appropriate to call the function given in Theorem \ref{thm:1} (resp. Theorem \ref{cor:1}) a {\sf weak KAM solution}. Such a solution can be used to identify different types of invariant sets of \eqref{eq:dis} with variational meanings:
\begin{thm}[main 2]\label{thm:2}
For $f(t)$ satisfying {\bf (H0$^-$)}, $H(x,p,t)$ satisfying {\bf (H1-H4)} and any $\alpha\in\mathbb{R}$, we can get the following sets: \smallskip
\begin{itemize}
\item {\sf (Aubry Set)} $\gamma:\mathbb{R}\rightarrow M$ is called {\sf globally calibrated}, if for any $s<t\in\mathbb{R}$, (\ref{eq:cal}) holds on $[s,t]$. There exists a $\varphi_L^t-$invariant set defined by
\[
\widetilde \mathcal{A}:=\{(\gamma(t),\dot\gamma(t),\bar t)\in TM\times\mathbb{T}|\gamma \text{ is globally calibrated}\}
\]
with the following properties:
\begin{itemize}
\item $\widetilde \mathcal{A}$
is a Lipschitz graph over the {\sf projected Aubry set} $\mathcal{A}:=\pi\widetilde \mathcal{A}\subset M\times\mathbb{T}$, where $\pi:T^*M\times\mathbb{T}\rightarrow M\times\mathbb{T}$ is the standard projection.
\item $\widetilde \mathcal{A}$
is upper semicontinuous w.r.t. $L:TM\times\mathbb{T}\rightarrow\mathbb{R}$
\item $u_\alpha^-$ is differentiable on $\mathcal{A}$.\medskip
\end{itemize}
\item {\sf (Mather Set)} Suppose $\mathfrak M_{L}$ is the set of all $\varphi_L^t-$invariant probability measures, then $\tilde{\mu}\in\mathfrak M_L$ is called a {\sf Mather measure} if it minimizes
\[
\min_{\tilde{\nu}\in\mathfrak M_L}\int_{TM\times\mathbb{T}}L+\alpha- f(t)u_\alpha^-\mbox{d}\tilde{\nu}.
\]
Let's denote by $\mathfrak M_m$ the set of all Mather measures. Accordingly, the {\sf Mather set} is defined by
\[
\widetilde \mathcal{M}:=\overline{\bigcup\{supp\ \tilde{\mu}|\tilde{\mu}\in\mathfrak M_m\}}
\]
which satisfies
\begin{enumerate}
\item $\widetilde \mathcal{M}\neq\emptyset$ and $\widetilde \mathcal{M}\subset\widetilde \mathcal{A}$.
\item $\widetilde \mathcal{M}$ is a Lipschitz graph over the {\sf projected Mather set} $\mathcal{M}:= \pi\widetilde \mathcal{M} \subset M \times\mathbb{T}$.
\end{enumerate}
\medskip
\item {\sf (Maximal Global Attractor)} Define
\begin{eqnarray*}
\widehat \Sigma_H^-&:=& \big\{(x,p,\bar s,\alpha-f(s)u-H(x,p,s), u)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}\big|\\
& & \quad u> u_\alpha^-(x,s)\big\}
\end{eqnarray*}
and
\begin{eqnarray*}
\widehat \Sigma_H^0&:=&\big\{(x,p,\bar s,\alpha-f(s)u-H(x,p,s), u)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}\big|\\
& &\quad u= u_\alpha^-(x,s)\big\},
\end{eqnarray*}
then $\Omega:=\bigcap_{t\geq 0}\widehat \varphi_{ H}^t( \widehat \Sigma_H^-\cup\widehat \Sigma_H^0)$ is the maximal $\widehat \varphi_{ H}^t-$invariant set, which satisfies:
\begin{enumerate}
\item If the $p-$component of $\Omega$ is bounded, then the $u-$ and $I-$component of $\Omega$ are also bounded.
\item If $\Omega$ is compact, it has to be
a {\sf global attractor} in the sense that for any point $(x,p,\bar{s},I,u)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}$ and any open neighborhood $\mathcal{U}\supseteq\Omega$, there exists a $T_{\Omega}(\mathcal{U})>0$ such that for all $t\geq T_{\Omega}(\mathcal{U})$, $\widehat \varphi_{ H}^t(x,p,\bar{s},I,u)\in \mathcal{U}$.
Besides, the following statements hold:
\begin{itemize}
\item $\Omega$ is a maximal attractor set, i.e. it is not strictly contained in any other global attractor;
\item $\widehat \mathcal{A}$ is the maximal invariant set contained in $\widehat \Sigma_H^0$, where
\begin{eqnarray*}
\widehat \mathcal{A}&:=&\Big\{\Big(\mathcal{L}(x,\partial_x u_\alpha^-(x,s), \bar{s}), \partial_tu_\alpha^-(x,s), u_\alpha^-(x,s)\Big)\in TM\times\\
& &\quad T^*\mathbb{T}\times\mathbb{R}\Big|(x,\bar{s})\in\mathcal{A}\Big\}.
\end{eqnarray*}
\end{itemize}
\end{enumerate}
\end{itemize}
\end{thm}
\begin{thm}[main 2']\label{cor:critical}
For $f(t)$ satisfying {\bf (H0$^0$)} and $H(x,p,t)$ satisfying {\bf (H1-H4)}, the Ma\~n\'e Critical Value $c(H)$ has an alternative expression
\begin{eqnarray}\label{eq:mea-var}
-c(H)=\dfrac{\inf_{\tilde{\mu}\in\mathfrak M_{ L}}\int_{TM\times\mathbb{T}} e^{F(t)}L(x,v,t)\mbox{d}\tilde{\mu}}{\int_0^1e^{F(t)}\mbox{d}t}.
\end{eqnarray}
Moreover, any minimizer achieving the right-hand side of (\ref{eq:mea-var}) has to be a {\sf Mather measure}. Similarly, we can define the {\sf Mather set} $\widetilde \mathcal{M}$ as the union of the support sets of all the Mather measures, which is Lipschitz-graphic over the {\sf projected Mather set} $\mathcal{M}:=\pi\widetilde \mathcal{M}$.
\end{thm}
\subsection{Parametrized viscosity solutions and asymptotic dynamics} In this section we deal with two kinds of parametrized viscosity solutions with practical meanings. The first case corresponds to a Hamiltonian
\begin{eqnarray}\label{eq:ham-par}
\widehat H_{\delta}(x,p,t,I, u):=I+ H(x,p,t)+f_\delta(t)u,
\end{eqnarray}
with $(x,p,\bar{t},I,u)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}$ and $f_\delta\in C^r(\mathbb{T},\mathbb{R})$ depending continuously on $\delta\in\mathbb{R}$. For suitable $\alpha\in\mathbb{R}$,
we can seek the weak KAM solution of
\begin{eqnarray}\label{eq:hj-par}
\partial_tu_{\delta}(x,t)+H(x,\partial_x u_{\delta},t)+f_\delta(t) u_{\delta}=\alpha
\end{eqnarray}
as we did in previous theorems. Consequently, it is natural to explore the convergence of the viscosity solutions w.r.t. the parameter $\delta$:
\begin{thm}[main 3]\label{thm:3}
Suppose $f_\delta$ converges to $f_0$ w.r.t. the uniform norm as $\delta\rightarrow 0_+$ such that $[f_0]=0$ and the right derivative of $f_\delta$ w.r.t. $\delta$ exists at $0$, i.e.
\begin{eqnarray}\label{eq:1-jet}
f_1(t):=\lim_{\delta\rightarrow 0_+}\frac{f_\delta(t)-f_0(t)}{\delta}>0.
\end{eqnarray}
If $H(x,p,t)$ satisfies {\bf (H1-H4)},
then there exists a unique $c(H)\in\mathbb{R}$ given by (\ref{eq:mea-var}) and a $\delta_0>0$, such that the weak KAM solution $u^-_\delta(x,t)$ of (\ref{eq:hj-par}) associated with $f_\delta$ and $\alpha_{\delta}\equiv c(H)$ for all $\delta\in(0,\delta_0]$ converges to a uniquely identified viscosity solution of
\begin{eqnarray}\label{eq:hj-criti}
\partial_tu(x,t)+H(x,\partial_x u,t)+f_0(t) u=c(H),
\end{eqnarray}
which equals
\[
\sup\Big\{u\prec_{f_0}L+c(H)\Big|\int_{TM\times\mathbb{T}}e^{F_0(t)} f_1(t)\cdot u(x,t)d
\tilde{\mu}\leq 0,\ \forall\; \tilde{\mu}\in\mathfrak M_m(\delta=0)\Big\}
\]
with $F_0(t)=\int_0^tf_0(\tau)\mbox{d}\tau$ and $\mathfrak M_m(\delta=0)$ being the set of Mather measures for the system with $\delta=0$.
\end{thm}
\begin{rmk}
The convergence of viscosity solutions of first-order PDEs was discussed earlier in \cite{CCIZ,DFIZ,WYZ,Z}, where the {\sf Comparison principle} was used to guarantee the uniqueness of the viscosity solution of \eqref{eq:hj-par}. However, in our case we do not assume $f_\delta$ to be nonnegative, which invalidates this principle and brings new difficulties in proving the equi-boundedness and equi-Lipschitzness of $\{u_\delta^-\}_{\delta>0}$. Fortunately, by analyzing the properties of the {\sf Lax-Oleinik semigroups} we can still overcome these difficulties,
see Sec. \ref{s4} for more details.
\end{rmk}
The second parametrized problem we consider takes $M=\mathbb{T}$ and a mechanical $H(x,p,t)$. We can introduce a cohomology parameter $c\in H^1(\mathbb{T},\mathbb{R})$ into
\begin{eqnarray}
\widehat H(x,p,t,I,u)=I+\underbrace{\frac1 2(p+c)^2+V(x,t)}_{H(x,p,t)}+f(t)u
\end{eqnarray}
of which $H(x,p,t)$ surely satisfies {\bf (H1-H4)},
then (\ref{eq:dis}) becomes
\begin{eqnarray}\label{eq:ode1}
{\sf (Dissipative)\quad } \left\{
\begin{aligned}
\dot x&=p+c\\
\dot p&=-V_x-f(t)p\\
\dot t&=1\\
\dot I &=-V_t-f'(t)u-f(t)I\\
\dot u&=\frac12(p^2-c^2)-V(x,t)-f(t)u.
\end{aligned}
\right.
\end{eqnarray}
In physical models, the first three equations of (\ref{eq:ode1}) are usually condensed into a single equation
\begin{eqnarray}\label{eq:ode0}
\ddot x+V_x(x,t)+f(t)(\dot x-c)=0,\quad (x,t)\in M\times\mathbb{T}.
\end{eqnarray}
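The condensation is a direct substitution: differentiating the first equation of (\ref{eq:ode1}) and inserting the second gives
\[
\ddot x=\dot p=-V_x(x,t)-f(t)p=-V_x(x,t)-f(t)(\dot x-c),
\]
which is exactly (\ref{eq:ode0}).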
\begin{thm}[main 4]\label{thm:4}
For $f(t)$ satisfying {\bf (H0$^-$)}, the following conclusions hold for equation (\ref{eq:ode0}):
\begin{itemize}
\item For any $c\in H^1(\mathbb{T},\mathbb{R})$, there exists a unified {\sf rotation number} of $\widetilde \mathcal{A}(c)$, which is defined by
\[
\rho(c):=\lim_{T\rightarrow+\infty}\frac1 T\int_0^Td\gamma,\quad\forall\ \text{globally calibrated curve } \gamma.
\]
\item $\rho(c)$ is continuous in $c\in H^1(\mathbb{T},\mathbb{R})$. Moreover, we have
\begin{eqnarray}\label{eq:rot-num-app}
|\rho(c)-c|\leq \varsigma([f])\cdot\|V(x,t)\|_{C^1}
\end{eqnarray}
for some constant $\varsigma$ depending only on $[f]$. Consequently, for any irreducible $p/q\in \mathbb{Q}$, there always exists a $c_{p/q}$ such that $\rho(c_{p/q})=p/q$.
\item There exists a compact maximal global attractor $\Omega\subset T^*\mathbb{T}\times T^*\mathbb{T}\times\mathbb{R}$ of the flow $\widehat \varphi_{ H}^t$.
\end{itemize}
\end{thm}
\vspace{10pt}
\noindent{\bf Organization of the article:} The paper is organized as follows. In Sec. \ref{sp}, we exhibit a list of physical models with time-periodic damping; for these models, we state some notable dynamical phenomena and show how they can be linked to our main conclusions. In Sec. \ref{s2}, we prove Theorem \ref{thm:1} and Theorem \ref{cor:1}. In Sec. \ref{s3}, we establish an analogous Aubry-Mather theory for systems satisfying the {\bf (H0$^-$)} condition and prove Theorem \ref{thm:2}; besides, we also prove Theorem \ref{cor:critical} for systems satisfying the {\bf (H0$^0$)} condition. In Sec. \ref{s4}, we discuss the parametrized viscosity solutions of (\ref{eq:hj-par}) and prove their convergence. In Sec. \ref{s5}, for 1-D mechanical systems with time-periodic damping, we prove Theorem \ref{thm:4}, which is related to the dynamical phenomena of the models in Sec. \ref{sp}. For the consistency of the proof, some preliminary conclusions are postponed to the Appendix.
\vspace{10pt}
\noindent{\bf Acknowledgements:} The first author is supported by Natural Scientific Foundation of China (Grant No.11501437). The second author is supported by National Natural Science Foundation of China (Grant No. 11631006, 11790272) and Shanghai Science and Technology Commission (Grant No. 17XD1400500). The third author is supported by the Natural Scientific Foundation of China (Grant No. 11901560). All the authors are grateful to Prof. Wei Cheng for helpful discussion about the details.
\section{Zoo of practical models}\label{sp}
In this section we present a collection of physical models with time-periodic damping, and introduce some practical problems (related to our main conclusions) around them.
\subsection{Conformally symplectic systems} For $f(t)\equiv \lambda>0$ constant, we get a so-called {\sf conformally symplectic system (or discount system)}. The associated ODE becomes
\begin{eqnarray}
\left\{
\begin{aligned}
\dot x&=\partial_p H(x,p,t),\\
\dot p&=-\partial_x H(x,p,t)-\lambda p.
\end{aligned}
\right.
\end{eqnarray}
This kind of system has been considered in \cite{CCD,DFIZ,MS}, although earlier results on Aubry-Mather sets were obtained by Le Calvez \cite{LC} and Casdagli \cite{Ca} for $M=\mathbb{T}$. Besides, we point out that the Duffing equation with {\sf viscous damping} also conforms to this case, which concerns all kinds of oscillations widely found in electromagnetics \cite{M} and elastomechanics \cite{MH}.
A significant property such systems possess is that
\[
(\varphi_H^1)^*\,dp\wedge d x= e^{-\lambda }\,dp\wedge d x.
\]
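This contraction rate can be checked directly via Cartan's formula: for the (time-dependent) vector field $X=\big(\partial_pH,\,-\partial_xH-\lambda p\big)$ one computes
\[
\mathcal{L}_X(dp\wedge dx)=d\big(\iota_X(dp\wedge dx)\big)=-\lambda\, dp\wedge dx,
\]
so that $(\varphi_H^t)^*(dp\wedge dx)=e^{-\lambda t}\,dp\wedge dx$ for all $t\geq0$.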
When $H(x,p,t)$ is mechanical, the equation usually describes the low-velocity oscillation of a solid in a fluid medium (see Fig. \ref{fig1}), which can be formally expressed as
\begin{eqnarray}
\ddot{x}+\lambda\dot x+\partial_x V(x,t)=0, \quad x\in\mathbb{T},\;\lambda>0.
\end{eqnarray}
Chaos and bifurcation phenomena in this setting were intensively studied in the 1970s \cite{H}.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{pendulum.png}
\caption{A dissipative pendulum with $\lambda=1/5$ and $V(t,x)=1-\cos x$.}
\label{fig1}
\end{center}
\end{figure}
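The dissipative mechanism behind Fig. \ref{fig1} can be illustrated by a minimal numerical sketch. The integrator, step size and initial condition below are illustrative choices, not those used for the figure; with $\lambda=1/5$ and $V(t,x)=1-\cos x$, the energy $\frac12\dot x^2+V(x)$ decays along every trajectory.

```python
import math

LAM = 0.2  # damping coefficient, lambda = 1/5 as in Fig. 1

def field(state):
    # vector field of x'' + LAM * x' + sin(x) = 0, i.e. V(t, x) = 1 - cos(x)
    x, v = state
    return (v, -LAM * v - math.sin(x))

def rk4_step(state, dt):
    # one classical Runge-Kutta step
    x, v = state
    k1 = field(state)
    k2 = field((x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]))
    k3 = field((x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]))
    k4 = field((x + dt * k3[0], v + dt * k3[1]))
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(state):
    x, v = state
    return 0.5 * v * v + 1.0 - math.cos(x)  # kinetic energy + V(x)

state = (2.0, 0.0)        # start at angle 2 rad, at rest
e0 = energy(state)
dt = 0.01
for _ in range(10000):    # integrate up to t = 100
    state = rk4_step(state, dt)
```

By $t=100$ the energy has decayed by many orders of magnitude, consistent with the trajectory being attracted to the stable equilibrium.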
\subsection{Tidal torque model} The {\sf tidal torque model} was first introduced in \cite{P}, describing the motion of a rigid satellite $S$ under the gravitational influence of a point-mass planet $P$. Due to the internal non-rigidity of the body, a tidal torque causes a time-periodic dissipation in the motion of $S$, which can be formalized by
\begin{eqnarray}\label{eq:tidal}
\ddot x +\varepsilon V_x(x,e,t)+\kappa\eta(e,t)(\dot x-c(e))=0,\quad (x,t)\in\mathbb{T}^2,
\end{eqnarray}
where the parameter $e$ is the {\sf eccentricity} of the elliptic motion of $S$ around $P$. According to astronomical observations, $\varepsilon$ is the {\sf equatorial ellipticity} of the satellite and
\[
\kappa\propto \frac 1 {a^3}\cdot\frac{m_P}{m_S},
\]
with $a$ being the {\sf semi-major axis} and $m_P$ (resp. $m_S$) being the mass of $P$ (resp. $S$).\\
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{spin.png}
\caption{A tidal torque model for Moon-Earth and Mercury-Sun.}
\label{fig2}
\end{center}
\end{figure}
Although this model might seem very special, there are several examples in the solar system for which it yields a good description of the motion, at least in a first approximation, and anyhow represents a first step toward the understanding of the problem. For instance, the model applies to the pairs
Moon-Earth, Enceladus-Saturn, Dione-Saturn, Rhea-Saturn and even Mercury-Sun. Besides, we point out that usually $\kappa\ll \varepsilon$ in all these cases.\medskip
A few interesting phenomena have been explained by numerical approaches, e.g. the $1:1$ resonance of the Moon-Earth system, which explains why only one side of the Moon can be seen from the Earth. However, the Mercury-Sun system shows a different $3:2$ resonance because of the large eccentricity, see Fig. \ref{fig2}.\medskip
Due to Theorem \ref{thm:2} and Theorem \ref{thm:3}, such a resonance can be explained from the following aspect: {\bf any trajectory within the global attractor $\Omega$ of (\ref{eq:tidal}) exhibits long-time stability of the velocity, namely, the average velocity is close to a certain rotation number, or even asymptotic to it.} In Sec. \ref{s5} we will show that variationally minimal trajectories indeed match this description.
\begin{rmk}
As a further simplification, a {\sf spin-orbit model} with $\eta(e)$ constant is also widely studied; it is actually a conformally symplectic system. In \cite{CCD} the authors further discussed the existence of KAM tori for this model and proved the local attraction of the KAM torus.
\end{rmk}
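The velocity stability described above can be illustrated numerically on a spin-orbit-type equation with constant damping, $\ddot x+\varepsilon\sin\big(2(x-t)\big)+\kappa(\dot x-c)=0$. This is only a sketch: the potential and all parameter values ($\varepsilon=0.01$, $\kappa=0.1$, $c=1.5$) are hypothetical, chosen to display the effect that the long-time average velocity stays close to $c$, in the spirit of estimate (\ref{eq:rot-num-app}).

```python
import math

EPS, KAPPA, C = 0.01, 0.1, 1.5   # hypothetical: small potential, weak damping

def field(x, v, t):
    # x'' + EPS * sin(2(x - t)) + KAPPA * (x' - C) = 0
    return v, -EPS * math.sin(2.0 * (x - t)) - KAPPA * (v - C)

def rk4(x, v, t, dt):
    # one classical Runge-Kutta step for the nonautonomous field
    k1x, k1v = field(x, v, t)
    k2x, k2v = field(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = field(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = field(x + dt*k3x, v + dt*k3v, t + dt)
    return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

x, v, t, dt = 0.0, 0.0, 0.0, 0.005
x_mid = None
for step in range(100000):           # integrate up to t = 500
    if step == 50000:
        x_mid = x                    # position at t = 250, after the transient
    x, v = rk4(x, v, t, dt)
    t += dt
avg_velocity = (x - x_mid) / 250.0   # time-averaged velocity over [250, 500]
```

Averaging the equation over the attractor gives $\kappa|\langle\dot x\rangle-c|\leq\varepsilon$, so the computed average velocity deviates from $c$ by at most about $\varepsilon/\kappa$.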
\subsection{Pumping of the swing} The pumping of a swing is usually modeled as a rigid object forced to rotate back and forth at the lower ends of supporting ropes. After a series of approximations and reasonable simplifications, the pumping of the swing can be characterized as a harmonic oscillator with driving and parametric terms \cite{Ca2}. Therefore, this model is prototypical for understanding the dynamics of motors.\medskip
As shown in Fig. \ref{fig3}, the length of the ropes supporting the swinger is $l$, and $s$ is the distance between the center of mass of the swinger and the lower ends of the rope. The angle of the supporting rope to the vertical position is denoted by $\phi$, and the angle between the symmetric axis of the swinger and the rope is $\theta$, which varies as $\theta=\theta_0\cos\omega t$. Thus we obtain the equation of motion
\begin{eqnarray}\label{eq:swing}
(l^2-2ls\cos\theta+s^2+R^2)\ddot\phi&=&-gl\sin\phi+gs\sin(\phi+\theta)-ls\sin\theta\dot\theta^2\\
& &+(ls\cos\theta-s^2-R^2)\ddot\theta-2ls\sin\theta\dot\theta\dot\phi,\quad \phi\in\mathbb{T}\nonumber
\end{eqnarray}
where $g$ is the gravitational acceleration and $mR^2$ is the moment of inertia about the center ($m$ is the mass of the swinger). {\bf We can see that, by reasonable adjustment of the parameters $l,s,\omega$, this system can be dissipative, accelerative or critical.}\medskip
Notice that numerical studies of this equation for $|\phi|\ll1$ have been carried out in a number of papers; see \cite{PGDB} for a survey. Those results successfully simulate the swinging at small to moderate amplitudes, but they become less and less accurate as the amplitude grows, which is why we resort to a theoretical analysis in this paper.
\begin{figure}
\begin{center}
\includegraphics[width=2in]{swing.png}
\caption{A simulation of the pumping of the swing}
\label{fig3}
\end{center}
\end{figure}
\section{Weak KAM solution of (\ref{eq:sta-hj})}\label{s2}
Due to the superlinearity of $L(x,v,t)$,
for each $k\geq 0$ there exists $C(k)\geq 0$ such that
$$
L(x,v,t)\geq k|v|-C(k),\quad \forall\,(x,v,t)\in TM\times\mathbb{T}.
$$
Moreover, the compactness of $M$ implies that for each $k>0$, there exists $C_k>0$ such that
$$
\max_{\substack{(x,t)\in M\times\mathbb{T}\\|v|\leq k}}L(x,v,t)\leq C_k.
$$
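For instance, assuming a mechanical Lagrangian $L(x,v,t)=\frac12|v|^2-V(x,t)$, one may take
\[
C(k)=C_k=\frac{k^2}{2}+\max_{M\times\mathbb{T}}|V|,
\]
since $\frac12|v|^2\geq k|v|-\frac{k^2}{2}$ for all $v$.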
\subsection{Weak KAM solution of (\ref{eq:sta-hj}) in the condition \textbf{(H0$^-$)}}
Note that $[f]>0$. The following conclusion can be easily checked.
\begin{lem}\label{Sec3:inequivality}
Assume $t>s$, then
\begin{enumerate}
\item $F(s)-F(t)\leq 2k_0-(t-s-1)[f];$
\item
$
\int^t_se^{F(\tau)-F(t)}\mbox{d}\tau\leq \frac{e^{2k_0+[f]}}{[f]}\big(1-e^{-(t-s)[f]}\big);
$
\item $
\int^t_{-\infty}e^{F(\tau)-F(t)}\mbox{d}\tau\leq \frac{e^{2k_0+[f]}}{[f]},
$
\end{enumerate}
where $k_0
=\max_{s\in[0,2]}\big|\int^s_0f(\tau)\mbox{d}\tau\big|
$.
\end{lem}
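As a quick numerical sanity check of item (2) (not a proof), one may test the inequality for one sample damping $f(t)=[f]+0.3\cos(2\pi t)$ with $[f]=0.5$; the choice of $f$, the quadrature grid and the test intervals below are illustrative only.

```python
import math

MEAN_F = 0.5            # [f], the mean of f over one period (illustrative)

def f(t):
    return MEAN_F + 0.3 * math.cos(2 * math.pi * t)

def F(t):               # F(t) = \int_0^t f, in closed form for this sample f
    return MEAN_F * t + 0.3 * math.sin(2 * math.pi * t) / (2 * math.pi)

# k0 = max_{s in [0,2]} |F(s)|, approximated on a fine grid
k0 = max(abs(F(0.001 * i)) for i in range(2001))

def lhs(s, t, n=4000):  # trapezoid rule for \int_s^t e^{F(tau)-F(t)} dtau
    h = (t - s) / n
    vals = [math.exp(F(s + i * h) - F(t)) for i in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

def rhs(s, t):          # bound from item (2) of the lemma
    return math.exp(2 * k0 + MEAN_F) / MEAN_F * (1 - math.exp(-(t - s) * MEAN_F))

checks = [(0.3, 1.7), (-2.0, 5.5), (0.0, 20.0)]
ok = all(lhs(s, t) <= rhs(s, t) for s, t in checks)
```

For this $f$ one has $k_0=F(2)=1$ (since $f>0$, $F$ is increasing on $[0,2]$), and the bound holds with plenty of room on all tested intervals.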
Now we define a function $u_{\alpha}^-:M\times\mathbb{R}\to\mathbb{R}$ by
\begin{eqnarray}\label{Sec3:solution}
u_\alpha^-(x,t)&:=&\inf\int^t_{-\infty}e^{F(s)-F(t)}(L(\gamma(s),\dot{\gamma}(s),s)+\alpha)\mbox{d}s
\end{eqnarray}
where the infimum is taken over all $\gamma\in C^{ac}((-\infty,t],M)$\footnote{absolutely continuous curves} with $\gamma(t)=x$.
We can easily prove this function is bounded, since
$$
-|C(0)-\alpha|\cdot\frac{e^{2k_0+[f]}}{[f]}\leq u_{\alpha}^-(x,t)\leq |C_{0}+\alpha|\cdot\frac{e^{2k_0+[f]}}{[f]},
$$
where $C(0)$ and $C_0$ have been defined at the beginning of Sec. \ref{s2}.
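Indeed, the lower bound follows by applying $L\geq -C(0)$ and (3) of Lemma \ref{Sec3:inequivality} to the integrand of (\ref{Sec3:solution}), while testing (\ref{Sec3:solution}) with the constant curve $\gamma\equiv x$ and using $L\leq C_0$ gives the upper bound:
\[
u_\alpha^-(x,t)\leq\int^t_{-\infty}e^{F(s)-F(t)}\big(C_0+\alpha\big)\,\mbox{d}s\leq |C_0+\alpha|\cdot\frac{e^{2k_0+[f]}}{[f]}.
\]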
\begin{lem}{\bf [(1) of Theorem \ref{thm:1}]}\label{lem:per}
$u_{\alpha}^-(x,t)$ is 1-periodic with respect to $t$, i.e.,
$$
u_{\alpha}^-(x,t+1)=u_{\alpha}^-(x,t).
$$
\end{lem}
\proof
By the definition of $u_{\alpha}^-$,
\begin{eqnarray*}
& &u_{\alpha}^-(x,t+1)\\
&=&\inf_{\gamma(t+1)=x}\bigg\{\int^{t+1}_{-\infty}e^{F(s)-F(t+1)}\big(L(\gamma(s),\dot{\gamma}(s),s)+\alpha\big)\mbox{d}s\bigg\}\\
&= &\inf_{\gamma(t+1)=x}\bigg\{\int^{t}_{-\infty}e^{F(s+1)-F(t+1)}\big(L(\gamma(s+1),\dot{\gamma}(s+1),s+1)+\alpha\big)\mbox{d}s\bigg\}\\
&=&\inf_{\eta(t)=x}\bigg\{\int^{t}_{-\infty}e^{F(s)-F(t)}\big(L(\eta(s),\dot{\eta}(s),s)+\alpha\big)\mbox{d}s\bigg\}\\
&=&u_{\alpha}^-(x,t)
\end{eqnarray*}
as desired.\qed
\endproof
\begin{lem}{\bf [(3) of Theorem \ref{thm:1}]}\label{Sec3:dominant}
Let $\gamma:[s_1,s_2]\to M$ be an absolutely continuous curve. Then,
\begin{eqnarray}\label{eq:dominant}
& &e^{F(s_2)}u_{\alpha}^-(\gamma(s_2),s_2)-e^{F(s_1)}u_{\alpha}^-(\gamma(s_1),s_1)\\
&\leq&
\int^{s_2}_{s_1}e^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau.\nonumber
\end{eqnarray}
\end{lem}
\proof
Let $\{\gamma_n\}$ be a sequence of absolutely continuous curves from $(-\infty,s_1]$ to $M$ with $\gamma_n(s_1)=\gamma(s_1)$, such that
\[
e^{F(s_1)}u_{\alpha}^-(\gamma(s_1),s_1)=\lim_{n\to \infty}\int_{-\infty}^{s_1}e^{F(\tau)}(L(\gamma_n(\tau),\dot{\gamma}_n(\tau),\tau)+\alpha)\mbox{d}\tau.
\]
Let $\hat{\gamma}_n=\gamma_n*\gamma$ for each $n\in\mathbb{N}$. Hence,
\begin{eqnarray*}
e^{F(s_2)}u_{\alpha}^-(\gamma(s_2),s_2)&\leq &\int^{s_2}_{-\infty}e^{F(\tau)}(L(\hat{\gamma}_n(\tau),\dot{\hat{\gamma}}_n(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\leq& \int^{s_2}_{s_1}e^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau\\
& &+\int^{s_1}_{-\infty}e^{F(\tau)}(L(\gamma_n(\tau),\dot{\gamma}_n(\tau),\tau)+\alpha)\mbox{d}\tau.
\end{eqnarray*}
Taking the limit $n\to\infty$, we derive (\ref{eq:dominant}).\qed
\endproof
\begin{lem}\label{Sec3:minimizer}
For each $(x,t)\in M\times\mathbb{R}$ and $s<t$, it holds
\begin{eqnarray}\label{eq:dominant_min}
& &e^{F(t)}u_{\alpha}^-(x,t)\\
&=&\inf_{\substack{\gamma\in C^{ac}([s,t],M)\\ \gamma(t)=x }}\bigg\{ e^{F(s)}u_{\alpha}^-(\gamma(s),s)+\int^t_se^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau \bigg\}.\nonumber
\end{eqnarray}
Moreover, the infimum in (\ref{eq:dominant_min}) can be achieved by a $C^r$ smooth minimizer.
\end{lem}
\proof
Due to Lemma \ref{Sec3:dominant},
$$
e^{F(t)}u_{\alpha}^-(x,t)\leq\inf_{\substack{\gamma\in C^{ac}([s,t],M)\\ \gamma(t)=x
}} \big\{e^{F(s)}u_{\alpha}^-(\gamma(s),s)+\int^t_se^{F(\tau)}(L(\gamma,\dot{\gamma},\tau)+\alpha)\mbox{d}\tau\big\}.
$$
For each $\epsilon>0$, there exists an absolutely continuous curve $\gamma:(-\infty,t]\to M$ with $\gamma(t)=x$, such that
\begin{eqnarray*}
e^{F(t)}u_{\alpha}^-(x,t)+\epsilon&\geq& \int^t_{-\infty}e^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau \\
&=&\int^t_se^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau\\
& &+\int^s_{-\infty}e^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\geq& e^{F(s)}u_{\alpha}^-(\gamma(s),s)+\int^t_se^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau.
\end{eqnarray*}
Hence, (\ref{eq:dominant_min}) proves to be an equality. Therefore, we can find a sequence of absolutely continuous curves $\{\gamma_n\}$ with $\gamma_n(t)=x$ such that
$$
e^{F(t)}u_{\alpha}^-(x,t)=\lim_{n\to\infty}\bigg\{e^{F(s)}u_{\alpha}^-(\gamma_n(s),s)+\int^t_se^{F(\tau)}(L(\gamma_n,\dot{\gamma}_n,\tau)+\alpha)\mbox{d}\tau\bigg\}.
$$
Hence, there exists a constant $c$ independent of $n$, such that
\begin{equation}\label{eq_lem315_bounded}
\int^t_se^{F(\tau)}(L(\gamma_n(\tau),\dot{\gamma}_n(\tau),\tau)+\alpha)\mbox{d}\tau\leq c.
\end{equation}
Due to the Dunford-Pettis Theorem (Theorem 6.4 in \cite{DFIZ}), there exists a subsequence $\{\gamma_{n_k}\}$ converging to a curve $\gamma_*$
such that
\begin{equation}\label{eq_lem315_dominiant}
\int^t_se^{F(\tau)}(L(\gamma_*,\dot{\gamma}_*,\tau)+\alpha)\mbox{d}\tau\leq\varliminf_{k\to\infty}\int^t_se^{F(\tau)}(L(\gamma_{n_k},\dot{\gamma}_{n_k},\tau)+\alpha)\mbox{d}\tau.
\end{equation}
Hence, the infimum in (\ref{eq:dominant_min}) is achieved at $\gamma_*: [s,t]\to M$, which solves the Euler-Lagrange equation (\ref{eq:e-l}). Due to the Weierstrass Theorem in \cite{Mat}, $\gamma_*$ is $C^r$ smooth.\qed
\endproof
\begin{lem}{\bf [(4) of Theorem \ref{thm:1}]}\label{Sec3:calibrated}
For each $\alpha\in\mathbb{R}$ and $(x,t)\in M\times\mathbb{R}$, there exists a curve $\gamma_{x,t}^-:(-\infty,t]\to M$ with $\gamma_{x,t}^-(t)=x$ such that for each $t_1<t_2\leq t$,
\begin{eqnarray}\label{eq:calibrated_0}
& &e^{F(t_2)}u_{\alpha}^-(\gamma_{x,t}^-(t_2),t_2)-e^{F(t_1)}u_{\alpha}^-(\gamma_{x,t}^-(t_1),t_1)\\
&=&\int^{t_2}_{t_1}e^{F(\tau)}(L(\gamma_{x,t}^-(\tau),\dot{\gamma}_{x,t}^-(\tau),\tau)+\alpha)\mbox{d}\tau.\nonumber
\end{eqnarray}
\end{lem}
\proof
By Lemma \ref{Sec3:minimizer}, for each $n\in\mathbb{N}$ there exists a $C^r$ curve $\gamma_n:[t-n,t]\to M$ with $\gamma_n(t)=x$ such that
$$
e^{F(t)}u_{\alpha}^-(x,t)=e^{F(t-n)}u_{\alpha}^-(\gamma_n(t-n),t-n)+\int^t_{t-n}e^{F(\tau)}(L(\gamma_n(\tau),\dot{\gamma}_n(\tau),\tau)+\alpha)\mbox{d}\tau.
$$
It is easy to see for each interval $[a,b]\subset [t-n,t]$
\begin{eqnarray}\label{eq:ca}
& &e^{F(b)}u_{\alpha}^-(\gamma_n(b),b)-e^{F(a)}u_{\alpha}^-(\gamma_n(a),a)\\
&=&\int^b_ae^{F(\tau)}(L(\gamma_n(\tau),\dot{\gamma}_n(\tau),\tau)+\alpha)\mbox{d}\tau.\nonumber
\end{eqnarray}
By a diagonal argument, there exist a subsequence of $\{\gamma_n\}$, denoted by $\{\gamma_{n_k}\}$, and a curve $\gamma_{x,t}^-:(-\infty,t]\to M$ such that
$\gamma_{n_k}$ converges uniformly to $\gamma_{x,t}^-$ on each finite subinterval of $(-\infty,t]$.
Taking $k\to\infty$ in (\ref{eq:ca}), due to the Dunford-Pettis Theorem,
\begin{align*}
&e^{F(b)}u_{\alpha}^-(\gamma_{x,t}^-(b),b)-e^{F(a)}u_{\alpha}^-(\gamma_{x,t}^-(a),a)\\
&=\varliminf_{k\to\infty}\int^b_ae^{F(\tau)}(L(\gamma_{n_k},\dot{\gamma}_{n_k},\tau)+\alpha)\mbox{d}\tau
\geq \int^b_ae^{F(\tau)}(L(\gamma_{x,t}^-,\dot{\gamma}_{x,t}^-,\tau)+\alpha)\mbox{d}\tau.
\end{align*}
Combining with (\ref{eq:dominant}), we get (\ref{eq:calibrated_0}). Since $\gamma_{x,t}^-|_{[s,t]}$ is a minimizer of (\ref{eq:dominant_min}) for each $s<t$, due to Lemma \ref{Sec3:minimizer}, $\gamma_{x,t}^-$ is $C^r$ and solves (\ref{eq:e-l}).\qed
\endproof
\begin{rmk}
Due to Lemma \ref{Sec3:calibrated dot bounded}, if we take $t_2=t$ and let $t_1\rightarrow-\infty$ in (\ref{eq:calibrated_0}), we immediately get
$$
u_{\alpha}^-(x,t)=\int^t_{-\infty}e^{F(\tau)-F(t)}(L(\gamma_{x,t}^-(\tau),\dot{\gamma}_{x,t}^-(\tau),\tau)+\alpha)\mbox{d}\tau,
$$
i.e. the infimum in (\ref{Sec3:solution}) is achieved at $\gamma_{x,t}^-:(-\infty,t]\rightarrow M$.
\end{rmk}
\begin{lem}\label{Sec3:calibrated dot bounded}
Suppose $\gamma_{x,\theta}^-:(-\infty,\theta]\to M$ is a backward calibrated curve of $u_{\alpha}^-(x,\theta)$ ending at $x$; then
\[
|\dot{\gamma}_{x,\theta}^-(\tau)|\leq \kappa_0,\quad\forall\ (x,\theta)\in M\times\mathbb{T}, \tau<\theta.
\]
for a constant $\kappa_0$ depending only on $L$ and $\alpha$. This implies that $\gamma_{x,\theta}^-$ is actually Lipschitz on $(-\infty,\theta]$.
\end{lem}
\proof
Let $s_1,s_2\leq \theta$ and $s_2-s_1=1$. Due to Lemma \ref{Sec3:calibrated},
\begin{align*}
&e^{F(s_2)}u^-_{\alpha}(\gamma^-_{x,\theta}(s_2),s_2)-e^{F(s_1)}u^-_{\alpha}(\gamma^-_{x,\theta}(s_1),s_1)\\
&=\int^{s_2}_{s_1}e^{F(\tau)}(L(\gamma^-_{x,\theta}(\tau),\dot{\gamma}^-_{x,\theta}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\geq\int^{s_2}_{s_1}e^{F(\tau)}(|\dot{\gamma}^-_{x,\theta}(\tau)|-C(1)+\alpha)\mbox{d}\tau.
\end{align*}
On the other hand, let $\beta:[s_1,s_2]\to M$ be a geodesic satisfying
$\beta(s_1)=\gamma^-_{x,\theta}(s_1)$, $\beta(s_2)=\gamma^-_{x,\theta}(s_2)$, and $|\dot\beta(\tau)|\leq \mbox{diam}(M)=:k_1$. Then
\begin{align*}
&e^{F(s_2)}u^-_{\alpha}(\gamma^-_{x,\theta}(s_2),s_2)-e^{F(s_1)}u^-_{\alpha}(\gamma^-_{x,\theta}(s_1),s_1)\\
&\leq \int^{s_2}_{s_1}e^{F(\tau)}(L(\beta(\tau),\dot{\beta}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\leq \int^{s_2}_{s_1}e^{F(\tau)}(C_{k_1}+\alpha)\mbox{d}\tau.
\end{align*}
Hence,
$$
\int^{s_2}_{s_1}e^{F(\tau)}|\dot{\gamma}^-_{x,\theta}(\tau)|\mbox{d}\tau\leq \int^{s_2}_{s_1}e^{F(\tau)}(C_{k_1}+C(1))\mbox{d}\tau.
$$
Due to the continuity of $\dot{\gamma}^-_{x,\theta}(\tau)$, there exists $s_0\in(s_1,s_2)$ such that
\begin{equation}\label{eq:dot_bd}
|\dot{\gamma}^-_{x,\theta}(s_0)|\leq C_{k_1}+C(1).
\end{equation}
Note that $\gamma^-_{x,\theta}$ solves \eqref{eq:e-l}; since by (\ref{eq:dot_bd}) every unit interval of time contains a moment at which the velocity is bounded by $C_{k_1}+C(1)$, the continuous dependence of solutions of \eqref{eq:e-l} on initial data implies that $|\dot{\gamma}^-_{x,\theta}(\tau)|$ is uniformly bounded for $(x,\theta)\in M\times\mathbb{T}$ and $\tau\in(-\infty,\theta]$.\qed\medskip
\begin{lem}{\bf [(2) of Theorem \ref{thm:1}]}\label{lem:lip-dis}
For each $\alpha\in\mathbb{R}$, $u_{\alpha}^-$ is Lipschitz on $M\times\mathbb{T}$.
\end{lem}
\proof
First of all, we prove $u_\alpha^-(\cdot,\theta):M\rightarrow\mathbb{R}$ is uniformly Lipschitz w.r.t. $\theta\in\mathbb{T}$. Let $x,y\in M$, $\Delta t=d(x,y)$, and $\gamma^-_{x,\theta}:(-\infty,\theta]\to M$ be a minimizer of $u^-_{\alpha}(x,\theta)$. Define $\tilde{\gamma}:(-\infty,\theta]\to M$ by
$$
\tilde{\gamma}(s)=
\begin{cases}
\gamma^-_{x,\theta}(s),s\in(-\infty,\theta-\Delta t),\\
\beta(s),s\in[\theta-\Delta t,\theta],
\end{cases}
$$
where $\beta:[\theta-\Delta t,\theta]\to M$ is a geodesic satisfying $\beta(\theta-\Delta t)=\gamma_{x,\theta}^-(\theta-\Delta t),\beta(\theta)=y$, and
$$
|\dot{\beta}(s)|\equiv \frac{d(\gamma^-_{x,\theta}(\theta-\Delta t),y)}{\Delta t}\leq \frac{d(\gamma_{x,\theta}^-(\theta-\Delta t),x)}{\Delta t}+1\leq \kappa_0+1.
$$
Then,
\begin{align*}
&u^-_{\alpha}(x,\theta)=\int^\theta_{-\infty}e^{F(\tau)-F(\theta)}(L(\gamma^-_{x,\theta}(\tau),\dot{\gamma}^-_{x,\theta}(\tau),\tau)+\alpha)\mbox{d}\tau,\\
&u^-_{\alpha}(y,\theta)\leq \int^{\theta}_{-\infty}e^{F(\tau)-F(\theta)}(L(\tilde{\gamma}(\tau),\dot{\tilde{\gamma}}(\tau),\tau)+\alpha)\mbox{d}\tau,
\end{align*}
which implies
\begin{align*}
u^-_{\alpha}(y,\theta)-u^-_{\alpha}(x,\theta)&\leq \int^\theta_{\theta-\Delta t}e^{F(\tau)-F(\theta)}(L(\beta,\dot{\beta},\tau)-L(\gamma_{x,\theta}^-,\dot{\gamma}_{x,\theta}^-,\tau))\mbox{d}\tau\\
&\leq (C_{\kappa_0+1}+C(0))\int^\theta_{\theta-\Delta t}e^{F(\tau)-F(\theta)}\mbox{d}\tau\\
&\leq (C_{\kappa_0+1}+C(0))e^{2k_0+[f]}\cdot d(x,y).
\end{align*}
By a similar approach, we derive the opposite inequality holds. Hence,
\begin{equation}\label{eq:xlip}
|u^-_{\alpha}(y,\theta)-u^-_{\alpha}(x,\theta)|\leq \rho_*\cdot d(x,y),
\end{equation}
where $\rho_*=(C_{\kappa_0+1}+C(0))e^{2k_0+[f]}$.\medskip
Next, we prove $u^-_{\alpha}(x,\cdot)$ is uniformly Lipschitz continuous for $x\in M$. Let $\bar{t},\bar{t}'\in\mathbb{T}, d(\bar{t},\bar{t}')=t'-t$, and $t\in[0,1)$. Then, $t'\in[0,2]$.
A curve $\eta:(-\infty,t']\to M$ is defined by
$$
\eta(s)=
\begin{cases}
\gamma^-_{x,t}(s),s\in(-\infty,t],\\
x,\ \ s\in (t,t'].
\end{cases}
$$
Then,
\begin{align*}
&\ \ \ \ e^{F(t')}u^-_{\alpha}(x,t')-e^{F(t)}u^-_{\alpha}(x,t)\\
&\leq\int^{t'}_{-\infty}e^{F(\tau)}(L(\eta,\dot{\eta},\tau)+\alpha)\mbox{d}\tau-\int^t_{-\infty}e^{F(\tau)}(L(\gamma^-_{x,t},\dot{\gamma}^-_{x,t},\tau)+\alpha)\mbox{d}\tau\\
&\leq \int^{t'}_te^{F(\tau)}(C_0+\alpha)\mbox{d}\tau\\
&\leq (C_0+\alpha)\max_{\tau\in[0,2]}e^{F(\tau)}\cdot|t'-t|.
\end{align*}
On the other hand,
we write $\Delta t=d(\bar{t}',\bar{t})$ and define $\eta_1\in C^{ac}((-\infty,t],M)$ by
$$
\eta_1(s)=
\begin{cases}
\gamma^-_{x,t'}(s),s\in(-\infty,t-\Delta t],\\
\gamma^-_{x,t'}(2(s-t)+t'),s\in(t-\Delta t,t].
\end{cases}
$$
It is easy to check $\eta_1(t)=x$, and $|\dot{\eta}_1(\tau)|\leq 2\kappa_0$, where $\kappa_0$ is a Lipschitz constant of $\gamma_{x,t'}^-$.
\begin{align*}
e^{F(t)}u^-_{\alpha}(x,t)&\leq \int^{t}_{-\infty}e^{F(\tau)}(L(\eta_1(\tau),\dot{\eta}_1(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\leq \int^{t}_{t-\Delta t}e^{F(\tau)}(L(\eta_1(\tau),\dot{\eta}_1(\tau),\tau)+\alpha)\mbox{d}\tau\\
&+\int^{t-\Delta t}_{-\infty}e^{F(\tau)}(L(\gamma^-_{x,t'}(\tau),\dot{\gamma}^-_{x,t'}(\tau),\tau)+\alpha)\mbox{d}\tau.
\end{align*}
Note that $\gamma^-_{x,t'}$ is a minimizer of $u^-_\alpha(x,t')$. We derive that
\begin{align*}
&\ \ \ e^{F(t)}u^-_{\alpha}(x,t)-e^{F(t')}u^-_\alpha(x,t')\\
&\leq \int^t_{t-\Delta t}e^{F(\tau)}(L(\eta_1(\tau),\dot{\eta}_1(\tau),\tau)+\alpha)\mbox{d}\tau\\
&-\int^{t'}_{t-\Delta t}e^{F(\tau)}(L(\gamma^-_{x,t'}(\tau),\dot{\gamma}^-_{x,t'}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\leq (C_{2\kappa_0}+2C(0)+|\alpha|)\max_{\tau\in[0,2]} e^{F(\tau)}\cdot d(\bar{t}',\bar{t}).
\end{align*}
We have proved that the map $t\longmapsto e^{F(t)}u^-_{\alpha}(x,t)$ is uniformly Lipschitz for $x\in M$, with Lipschitz constant depending only on $L,f$ and $\alpha$.
Note that $F(t)$ is $C^{r+1}$ and $F'(t)=f(t)$ is 1-periodic. We derive that $u^-_\alpha(x,\cdot)$ is uniformly Lipschitz for $x\in M$, with Lipschitz constant $\rho^*_0$ depending on $L,f$ and $\alpha$.
It follows that
\begin{align*}
|u^-_{\alpha}(x',\theta')-u^-_{\alpha}(x,\theta)|&\leq |u^-_{\alpha}(x',\theta')-u^-_{\alpha}(x,\theta')|+|u^-_{\alpha}(x,\theta')-u^-_{\alpha}(x,\theta)|\\
&\leq\rho_*d(x',x)+\rho^*_0d(\theta',\theta)
\end{align*}
which finishes the proof.
\qed
\begin{lem} {\bf [(5) of Theorem \ref{thm:1}]}\label{lem:vis-sol-1}
The function $u_{\alpha}^-(x,t)$ defined by (\ref{Sec3:solution}) is a viscosity solution of (\ref{eq:sta-hj}).
\end{lem}
\proof
Let $\phi^*(x,t)$ be a $C^1$ function such that $u^-_{\alpha}(x,t)-\phi^*(x,t)$ attains its maximum at $(x_0,t_0)$ and $u^-_{\alpha}(x_0,t_0)=\phi^*(x_0,t_0)$.
For each $v\in T_{x_0}M$, there exists a $C^1$ curve $\gamma$ defined on a neighborhood of $t_0$ with $\dot{\gamma}(t_0)=v$ and $\gamma(t_0)=x_0$.
Let $\Delta t<0$.
Then
\begin{eqnarray*}
& &e^{F(t_0)}\phi^*(\gamma(t_0),t_0)-e^{F(t_0)}\phi^*(\gamma(t_0+\Delta t),t_0+\Delta t)\\
&\leq& e^{F(t_0)}u^-_{\alpha}(\gamma(t_0),t_0)-e^{F(t_0)}u^-_{\alpha}(\gamma(t_0+\Delta t),t_0+\Delta t)\\
&=&e^{F(t_0)}u^-_{\alpha}(\gamma(t_0),t_0)-e^{F(t_0+\Delta t)}u^-_{\alpha}(\gamma(t_0+\Delta t),t_0+\Delta t)\\
& &+(e^{F(t_0+\Delta t)}-e^{F(t_0)})u^-_{\alpha}(\gamma(t_0+\Delta t),t_0+\Delta t).
\end{eqnarray*}
By (\ref{eq:dominant}),
we derive that
\begin{align*}
&e^{F(t_0)}\bigg(\frac{\phi^*(\gamma(t_0+\Delta t),t_0+\Delta t)-\phi^*(\gamma(t_0),t_0)}{\Delta t}\bigg)\\
&\leq \frac{1}{\Delta t}\int^{t_0+\Delta t}_{t_0}e^{F(\tau)}(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\ \ \ \ \ \ \ \ \ \ -\bigg(\frac{e^{F(t_0+\Delta t)}-e^{F(t_0)}}{\Delta t}\bigg)u^-_{\alpha}(\gamma(t_0+\Delta t),t_0+\Delta t).
\end{align*}
Taking $\Delta t\to 0^-$, we derive that
$$
\partial_t\phi^*(x_0,t_0)+\partial_x\phi^*(x_0,t_0)\cdot v-L(x_0,v,t_0)+f(t_0)u^-_{\alpha}(x_0,t_0)\leq \alpha.
$$
By the arbitrariness of $v$,
$$
\partial_t\phi^*(x_0,t_0)+H(x_0,\partial_x\phi^*(x_0,t_0),t_0)+f(t_0)u^-_{\alpha}(x_0,t_0)\leq\alpha,
$$
which implies $u^-_{\alpha}(x,t)$ is a viscosity subsolution of (\ref{eq:sta-hj}).
Let $(x_0,t_0)\in M\times\mathbb{R}$, let $\gamma^-_{x_0,t_0}:(-\infty,t_0]\to M$ be a minimizer of $u^-_{\alpha}(x_0,t_0)$, and let $\phi_*(x,t)\in C^1(M\times\mathbb{R},\mathbb{R})$ be such that
$u^-_{\alpha}(x,t)-\phi_*(x,t)$ attains its minimum at $(x_0,t_0)$. Then, for $\Delta t<0$,
\begin{align*}
&\ \ e^{F(t_0)}(\phi_*(\gamma^-_{x_0,t_0}(t_0),t_0)-\phi_*(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t))\\
&\geq e^{F(t_0)}u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0),t_0)-e^{F(t_0+\Delta t)}u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t)\\
& \ \ +e^{F(t_0+\Delta t)}u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t)-e^{F(t_0)}u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t)\\
&=\int^{t_0}_{t_0+\Delta t}e^{F(\tau)}(L(\gamma^-_{x_0,t_0}(\tau),\dot{\gamma}^-_{x_0,t_0}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&\ \ +(e^{F(t_0+\Delta t)}-e^{F(t_0)})u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t).
\end{align*}
Then
\begin{align*}
&e^{F(t_0)}\frac{\phi_*(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t)-\phi_*(\gamma^-_{x_0,t_0}(t_0),t_0)}{\Delta t}\\
&\geq \frac{1}{\Delta t}\int^{t_0+\Delta t}_{t_0}e^{F(\tau)}(L(\gamma^-_{x_0,t_0}(\tau),\dot{\gamma}^-_{x_0,t_0}(\tau),\tau)+\alpha)\mbox{d}\tau\\
&-\bigg(\frac{e^{F(t_0+\Delta t)}-e^{F(t_0)}}{\Delta t}\bigg)u^-_{\alpha}(\gamma^-_{x_0,t_0}(t_0+\Delta t),t_0+\Delta t).
\end{align*}
Taking $\Delta t\to 0^-$, we derive that
\[
\partial_t\phi_*(x_0,t_0)+H(x_0,\partial_x\phi_*(x_0,t_0),t_0)+f(t_0)u^-_{\alpha}(x_0,t_0)\geq \alpha
\]
which implies the assertion.
\qed\medskip
As a complement, the following result, which is analogous to Proposition 6 of \cite{MS}, will be useful in the following sections:
\begin{prop}
\label{Sec3:pro_differentiable}
The weak KAM solution $u_\alpha^-$ of (\ref{eq:sta-hj}) is differentiable at $(\gamma_{x,t}^-(s),\bar s)$ for any $\mathbb{R}\ni s<t$, where $\gamma_{x,t}^-: (-\infty,t]\to M$ is a backward calibrated curve
ending with $x$. In other words, we have
$$
\partial_tu^-_\alpha(\gamma_{x,t}^-(s),s)+H(\gamma_{x,t}^-(s),\partial_xu^-_\alpha(\gamma_{x,t}^-(s),s),s)+f(s)u^-_\alpha(\gamma_{x,t}^-(s),s)=\alpha
$$
and
\begin{eqnarray}
(\gamma_{x,t}^-(s),\dot\gamma_{x,t}^-(s),\bar s)=\mathcal{L}\Big(\gamma_{x,t}^-(s),\partial_xu^-_\alpha(\gamma_{x,t}^-(s),s),\bar s\Big)
\end{eqnarray}
for all $\mathbb{R}\ni s<t$.
\end{prop}
\proof
By Theorem \ref{Sec3:thm_semiconcave2}, we derive $u^-_\alpha(x,t)$ is semiconcave. Let $s\in(-\infty,t)$ and $\tilde{p}=(p_x,p_t)\in D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$.
For $\Delta s>0$,
\begin{eqnarray*}
& &\frac{e^{F(s+\Delta s)}u^-_\alpha(\gamma_{x,t}^-(s+\Delta s),s+\Delta s)-e^{F(s)}u^-_{\alpha}(\gamma^-_{x,t}(s),s)}{\Delta s}\\
&=&\frac{1}{\Delta s}\int^{s+\Delta s}_{s}e^{F(\tau)}(L(\gamma_{x,t}^-(\tau),\dot{\gamma}_{x,t}^-(\tau),\tau)+\alpha)\mbox{d}\tau.
\end{eqnarray*}
Then
\begin{eqnarray*}
& &\lim_{\Delta s\to 0^+}\frac{u^-_{\alpha}(\gamma_{x,t}^-(s+\Delta s),s+\Delta s)-u^-_{\alpha}(\gamma_{x,t}^-(s),s)}{\Delta s}\\
&=&L(\gamma_{x,t}^-(s),\dot{\gamma}_{x,t}^-(s),s)+\alpha-f(s)u^-_{\alpha}(\gamma_{x,t}^-(s),s).
\end{eqnarray*}
By Proposition \ref{Sec3:prop_semiconcave1},
$$
\lim_{\Delta s\to 0^+}\frac{u^-_{\alpha}(\gamma_{x,t}^-(s+\Delta s),s+\Delta s)-u^-_{\alpha}(\gamma_{x,t}^-(s),s)}{\Delta s}\leq p_x\cdot\dot{\gamma}_{x,t}^-(s)+p_t,
$$
which implies
\begin{eqnarray*}
& &p_t+H(\gamma_{x,t}^-(s),p_x,s)+f(s)u^-_{\alpha}(\gamma_{x,t}^-(s),s)\geq \alpha.
\end{eqnarray*}
On the other hand, $u^-_\alpha$ is a viscosity solution of (\ref{eq:sta-hj}). Hence, for each $(p_x,p_t)\in D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$,
\begin{equation}\label{eq:singleton}
p_t+H(\gamma_{x,t}^-(s),p_x,s)+f(s)u^-_{\alpha}(\gamma_{x,t}^-(s),s)=\alpha.
\end{equation}
Note that $H(x,p,t)$ is strictly convex with respect to $p$. By (\ref{eq:singleton}), we derive that $D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$ is a singleton.
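Indeed, the singleton claim can be spelled out directly; the following computation is a sketch using only the convexity of $D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$ and the strict convexity of $H$ in $p$. Suppose $(p_x,p_t),(q_x,q_t)\in D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$ with $p_x\neq q_x$. The midpoint also lies in $D^+u^-_{\alpha}(\gamma_{x,t}^-(s),s)$, so (\ref{eq:singleton}) applied to it yields
\begin{align*}
\alpha&=\frac{p_t+q_t}{2}+H\Big(\gamma_{x,t}^-(s),\frac{p_x+q_x}{2},s\Big)+f(s)u^-_{\alpha}(\gamma_{x,t}^-(s),s)\\
&<\frac{p_t+H(\gamma_{x,t}^-(s),p_x,s)+q_t+H(\gamma_{x,t}^-(s),q_x,s)}{2}+f(s)u^-_{\alpha}(\gamma_{x,t}^-(s),s)=\alpha,
\end{align*}
a contradiction. Hence $p_x=q_x$, and then (\ref{eq:singleton}) forces $p_t=q_t$.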
By Proposition \ref{Sec3:prop_semiconcave1}, $u^-_{\alpha}(x,t)$ is differentiable at $(\gamma_{x,t}^-(s),s)$.\qed
\subsection{Weak KAM solution of (\ref{eq:sta-hj2}) in the condition (\textbf{H0}$^0$)}
Now $[f]=0$, so $F(t):=\int^t_0 f(\tau)\mbox{d}\tau$ is 1-periodic. Let $\ell=\int^1_0e^{F(\tau)}\mbox{d}\tau$, then we define a new Lagrangian $\mathbf{L}:TM\times\mathbb{T}\to\mathbb{R}$ by
$$
\mathbf{L}(x,v,t)=e^{F(t)}L(x,v,t).
$$
For such an $\mathbf{L}$, the {\sf Peierls barrier} $\textbf{h}^\infty_\alpha:M\times\mathbb{T}\times M\times\mathbb{T}\to\mathbb{R}$,
$$
\textbf{h}^\infty_\alpha(x,\bar{s},y,\bar{t})=\liminf_{\substack{t\equiv\bar{t},s\equiv \bar{s}\ (\mathrm{mod}\ 1)\\t-s\to+\infty}}\inf_{\substack{\gamma\in C^{ac}([s,t],M) \\ \gamma(s)=x,\gamma(t)=y }}\int^t_s\big(\mathbf{L}(\gamma,\dot{\gamma},\tau)+\alpha\cdot\ell\big)\mbox{d}\tau,
$$
is well defined once $\alpha$ equals the critical value $c(H)$ uniquely determined by
\begin{equation}\label{eq:def_critical}
c(H)=\inf\{\alpha\in\mathbb{R}|\int^t_s\big(\mathbf{L}(\gamma,\dot{\gamma},\tau)+\alpha\cdot\ell\big)\mbox{d}\tau\geq 0,\ \forall \gamma\in\mathcal{C} \}
\end{equation}
with $\mathcal{C}=\{\gamma\in C^{ac}([s,t],M)|\gamma(s)=\gamma(t)\mbox{ and }t-s\in\mathbb{Z}_+\}$, due to Proposition 2 of \cite{CIM}. Moreover,
the following properties were proved in \cite{CIM}:
\begin{prop}\label{pro3:basic property}
\begin{description}
\item [(i)] If $\alpha<c(H)$, $\textbf{h}^\infty_\alpha\equiv-\infty$.
\item [(ii)] If $\alpha>c(H)$, $\textbf{h}^\infty_\alpha\equiv+\infty$.
\item [(iii)] $\textbf{h}^\infty_{c(H)}$ is finite.
\item [(iv)] $\textbf{h}^\infty_{c(H)}$ is Lipschitz.
\item [(v)] For each $\gamma\in C^{ac}([s,t],M)$ with $\gamma(s)=x,\gamma(t)=y$,
$$\textbf{h}^\infty_{c(H)}(z,\bar{\varsigma},y,\bar{t})-\textbf{h}^\infty_{c(H)}(z,\bar{\varsigma},x,\bar{s})\leq\int^t_s\big(\mathbf{L}(\gamma,\dot{\gamma},\tau)+c(H)\cdot\ell\big)\mbox{d}\tau.
$$
\end{description}
\end{prop}
Consequently, for any $(z,\bar{\varsigma})\in M\times\mathbb{T}$ fixed, we construct a function $u^-_{z,\bar{\varsigma}}:M\times\mathbb{T}\rightarrow \mathbb{R}$ by
\begin{equation}\label{eq3:def_u}
u^-_{z,\bar{\varsigma}}(x,\bar{t})=e^{-F(\bar{t})}\bigg(\textbf{h}^\infty_{c(H)}(z,\bar{\varsigma},x,\bar{t})+c(H)\cdot\int^t_\varsigma \big(e^{F(\tau)}-\ell\big)\mbox{d}\tau\bigg).
\end{equation}
\textbf{Proof of Theorem \ref{cor:1}: }
(1)
Due to (iv) of Proposition \ref{pro3:basic property}, $u^-_{z,\bar{\varsigma}}$ is also Lipschitz.
(2) The domination property of $u^-_{z,\bar{\varsigma}}$ can be achieved immediately by (v) of Proposition \ref{pro3:basic property}.
(3) By the Tonelli Theorem and the definition of $u^-_{z,\bar{\varsigma}}$, there exist a sequence $\{\varsigma_{k}\}$ tending to $-\infty$ and curves $\gamma_k\in C^{ac}([\varsigma_k,\theta],M)$ with $\gamma_k(\varsigma_k)=z,\gamma_k(\theta)=x$, such that each $\gamma_k$ minimizes the action functional
$$
\mathcal{F}(\beta)=\int^\theta_{\varsigma_k}e^{F(\tau)}(L(\beta,\dot{\beta},\tau)+c(H))\mbox{d}\tau
$$
among all $\beta\in C^{ac}([\varsigma_k,\theta],M)$ with $\beta(\varsigma_k)=z,\beta(\theta)=x$,
and
$$
e^{F(\theta)}u^-_{z,\bar{\varsigma}}(x,\theta)=\lim_{k\to+\infty}\int^\theta_{\varsigma_k}e^{F(\tau)}(L(\gamma_k,\dot{\gamma}_k,\tau)+c(H))\mbox{d}\tau.
$$
Each $\gamma_k$ solves (\ref{eq:e-l}), which implies $\gamma_k$ is $C^r$. By a standard argument, there exists $\kappa_0$ independent of the choice of $k$ such that $|\dot{\gamma}_k|\leq \kappa_0$ whenever $\theta-\varsigma_k\geq 1$.
By the Arzel\`a-Ascoli Theorem, there exists a subsequence of $\{\gamma_k\}$ (still denoted by $\gamma_k$) and an absolutely continuous curve $\gamma^-_{x,\theta}:(-\infty,\theta]\to M$ such that $\gamma_k$ converges uniformly to $\gamma^-_{x,\theta}$ on each compact subset of $(-\infty,\theta]$ and $\gamma^-_{x,\theta}(\theta)=x$. Then, for each $s<\theta$,
\begin{eqnarray*}
e^{F(\theta)}u^-_{z,\bar{\varsigma}}(x,\theta)
&=&\lim_{k\to+\infty}\bigg(\int^s_{\varsigma_k}e^{F(\tau)}(L(\gamma_k,\dot{\gamma}_k,\tau)+c(H))\mbox{d}\tau\\
& &+\int^{\theta}_se^{F(\tau)}(L(\gamma_k,\dot{\gamma}_k,\tau)+c(H))\mbox{d}\tau\bigg)\\
&\geq&\liminf_{k\to+\infty}\int^s_{\varsigma_k}e^{F(\tau)}(L(\gamma_k,\dot{\gamma}_k,\tau)+c(H))\mbox{d}\tau\\
& &+\liminf_{k\to+\infty}\int^\theta_se^{F(\tau)}(L(\gamma_k,\dot{\gamma}_k,\tau)+c(H))\mbox{d}\tau\\
&\geq& e^{F(s)}u^-_{z,\bar{\varsigma}}(\gamma^-_{x,\theta}(s),s)+\int^\theta_se^{F(\tau)}(L(\gamma^-_{x,\theta},\dot{\gamma}^-_{x,\theta},\tau)+c(H))\mbox{d}\tau
\end{eqnarray*}
which implies $\gamma^-_{x,\theta}$ is a calibrated curve by $u^-_{z,\bar{\varsigma}}$.
(4) By an argument similar to the proof of Lemma \ref{lem:vis-sol-1}, we derive that $u^-_{z,\bar{\varsigma}}$ is also a viscosity solution of (\ref{eq:sta-hj2}).
\qed
\section{Various properties of variational invariant sets}\label{s3}
\subsection{Aubry set in the condition \textbf{(H0$^-$)}} Due to Theorem \ref{thm:1} and Proposition \ref{Sec3:pro_differentiable}, for any $(x,\bar{s})\in M\times\mathbb{T}$ we can find a backward calibrated curve
\begin{eqnarray}
\widetilde \gamma_{x,s}^-:=\begin{pmatrix}
\gamma_{x,s}^-(t) \\
\bar t
\end{pmatrix}:t\in (-\infty,s]\rightarrow M\times\mathbb{T}
\end{eqnarray}
ending at it, such that the associated backward orbit $\varphi_L^{t-s}(\gamma_{x,s}^-(s),\dot \gamma_{x,s}^-(s), s)$ has an $\alpha-$limit set $\widetilde \mathcal{A}_{x,s}\subset TM\times\mathbb{T}$, which is invariant and a graph over $\mathcal{A}_{x,s}:=\pi\widetilde \mathcal{A}_{x,s}$. Therefore, any critical curve $\widetilde \gamma_{x,s}^\infty$ in $\mathcal{A}_{x,s}$ has to be a globally calibrated curve, namely
\[
\widetilde \mathcal{A}_{x,s}\subset\widetilde \mathcal{A},\quad(\text{resp. } \mathcal{A}_{x,s}\subset\mathcal{A}).
\]
So $\widetilde \mathcal{A}\neq \emptyset$.
Recall that any critical curve in $\mathcal{A}$ is globally calibrated; then, due to Proposition \ref{Sec3:pro_differentiable}, for any $(x,\bar{s})\in\mathcal{A}$ the critical curve $\widetilde \gamma_{x,s}$ passing through it is unique. In other words, $\pi^{-1}:\mathcal{A}\rightarrow\widetilde \mathcal{A}$ is a graph, and
\[
\dot\gamma_{x,s}(t)=\partial_p H(\mbox{d}u^-(\gamma_{x,s}(t),t),t),\quad \forall\ t\in\mathbb{R}.
\]
That indicates that $\mbox{d}u^-:\mathcal{A}\rightarrow TM$ coincides with $\partial_v L\circ (\pi|_{\widetilde \mathcal{A}})^{-1}$.
On the other side,
$\|\dot{\widetilde \gamma}_{x,s}(t)\|\leq A<+\infty$ for all $t\in\mathbb{R}$ due to Lemma \ref{Sec3:calibrated dot bounded}, so $\partial_v L\circ (\pi|_{\widetilde \mathcal{A}})^{-1}$ has to be Lipschitz. Hence
$\widetilde \mathcal{A}$ is a Lipschitz graph over $\mathcal{A}$. This is an analogue of Theorem 4.11.5 of \cite{Fa} and a.4) of \cite{MS}, known as {\sf Mather's graph theorem} in earlier works \cite{Mat} on conservative Hamiltonian systems.
\begin{lem} $\widetilde \mathcal{A}$ has an equivalent expression
\begin{eqnarray}\label{eq:mane-equi}
\widetilde \mathcal{A}:=\{(\gamma(t),\dot\gamma(t),\bar t)\in TM\times\mathbb{T}|\;\forall\; a<b\in\mathbb{R}, \gamma \text{ achieves $h_{\alpha}^{a,b}(\gamma(a),\gamma(b))$}\}.\quad
\end{eqnarray}
\end{lem}
\proof
Let $\gamma:\mathbb{R}\to M$ be a globally calibrated curve by $u^-_\alpha$.
Due to (3) and (4) of Theorem \ref{thm:1}, for $a<b\in\mathbb{R}$,
\begin{align*}
\int^b_ae^{F(\tau)}(L(\gamma,\dot{\gamma},\tau)+\alpha)\mbox{d}\tau&=
e^{F(b)}u_\alpha^-(\gamma(b),b)-e^{F(a)}u_\alpha^-(\gamma(a),a)\\
&\leq h^{a,b}_\alpha(\gamma(a),\gamma(b)).
\end{align*}
Due to the definition of $h^{a,b}_\alpha(\gamma(a),\gamma(b))$,
we derive $\gamma$ achieves $h^{a,b}_\alpha(\gamma(a),\gamma(b))$ for all $a<b\in\mathbb{R}$.
To prove the lemma, it suffices to show that any curve $\gamma:\mathbb{R}\to M$ achieving $h^{a,b}_\alpha(\gamma(a),\gamma(b))$ for all $a<b\in\mathbb{R}$ is a calibrated curve by $u^-_\alpha$.
We claim
\begin{equation}\label{eq:mincal}
\lim_{s\to-\infty}h^{s,t}_\alpha(z,x)=e^{F(t)}u^-_\alpha(x,t), \ \ \forall x,z\in M,t\in\mathbb{R}.
\end{equation}
Due to (3) of Theorem \ref{thm:1}, for $s<t$,
$$
e^{F(t)}u_\alpha^-(x,t)-h^{s,t}_\alpha(z,x)\leq e^{F(s)}u_\alpha^-(z,s)\to 0\ \mbox{ as }\ s\to-\infty.
$$
On the other hand, we assume $\gamma_{x,t}$ is a globally calibrated curve by $u^-_\alpha$ with $\gamma_{x,t}(t)=x$ and $s+1<t$.
Let $\beta:[s,s+1]\to M$ be a geodesic with $\beta(s)=z,\beta(s+1)=\gamma_{x,t}(s+1)$ satisfying $|\dot{\beta}|\leq k_1:=\mbox{diam}(M)$.
Then,
\begin{align*}
h^{s,t}_\alpha(z,x)&\leq\int^{s+1}_se^{F(\tau)}(L(\beta,\dot{\beta},\tau)+\alpha)\mbox{d}\tau+\int^t_{s+1}e^{F(\tau)}(L(\gamma_{x,t},\dot{\gamma}_{x,t},\tau)+\alpha)\mbox{d}\tau\\
&\leq (C_{k_1}+\alpha)e^{\max f+[f][s]}+e^{F(t)}u^-_\alpha(x,t)-e^{F(s+1)}u^-_\alpha(\gamma_{x,t}(s+1),s+1).
\end{align*}
Hence,
$$
h^{s,t}_\alpha(z,x)-e^{F(t)}u^-_\alpha(x,t)\leq (C_{k_1}+\alpha)e^{\max f+[f][s]}-e^{F(s+1)}u^-_\alpha(\gamma_{x,t}(s+1),s+1).
$$
From $[f]>0$, it follows that the right side of the inequality above tends to $0$ as $s\to -\infty$.
Hence, (\ref{eq:mincal}) holds. Actually, the limit in (\ref{eq:mincal}) is uniform for $x,z\in M$ and $t\in\mathbb{R}$.
If $\gamma$ achieves $h^{a,b}_\alpha(\gamma(a),\gamma(b))$ for $a<b\in\mathbb{R}$, then
$$
h^{s,b}_\alpha(\gamma(s),\gamma(b))-h^{s,a}_\alpha(\gamma(s),\gamma(a))=\int^b_ae^{F(\tau)}(L(\gamma,\dot{\gamma},\tau)+\alpha)\mbox{d}\tau,\forall s<a.
$$
Taking $s\to-\infty$, we derive
$\gamma$ is also a calibrated curve by $u^-_\alpha$.
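For completeness, the limit passage can be written out; this is a routine step relying on the uniformity of the limit (\ref{eq:mincal}). Letting $s\to-\infty$ in the identity above and applying (\ref{eq:mincal}) to both terms on the left side, we obtain
\begin{align*}
e^{F(b)}u^-_\alpha(\gamma(b),b)-e^{F(a)}u^-_\alpha(\gamma(a),a)=\int^b_ae^{F(\tau)}(L(\gamma,\dot{\gamma},\tau)+\alpha)\mbox{d}\tau
\end{align*}
for all $a<b\in\mathbb{R}$, which is exactly the calibration of $\gamma$ by $u^-_\alpha$.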
\qed\medskip
With the help of \eqref{eq:mane-equi}, the following Lemma can be proved:
\begin{lem}[Upper Semi-continuity]\label{lem:semi-con}
The set valued function
\[
L\in \underbrace{C^{r\geq 2}(TM\times\mathbb{T},\mathbb{R})}_{\|\cdot\|_{C^r}}\longrightarrow \widetilde \mathcal{A}\subset \underbrace{TM\times\mathbb{T}}_{d_{\mathcal{H}}(\cdot,\cdot)}
\]
is upper semi-continuous. Here $\|\cdot\|_{C^r}$ is the $C^r-$norm and $d_{\mathcal{H}}$ is the {\sf Hausdorff distance}.
\end{lem}
\proof
It suffices to prove that for any $L_n\rightarrow L$ w.r.t. the $\|\cdot\|_{C^r}-$norm, the accumulating curve of any sequence of curves $\widetilde \gamma_n$ in $\widetilde \mathcal{A}(L_n)$ should lie in $\widetilde \mathcal{A}(L)$.
Due to Lemma \ref{Sec3:calibrated dot bounded}, for any $n\in\mathbb{Z}_+$ such that $\|L_n-L\|_{C^r}\leq 1$, $\widetilde \mathcal{A}(L_n)$ is uniformly compact in the phase space. Therefore, for any sequence $\{\widetilde \gamma_n\}$ each of which is globally minimal, the accumulating curve $\widetilde \gamma_*$ satisfies
\begin{eqnarray*}
\int_t^se^{F(\tau)}\big(L(\gamma_*,\dot\gamma_*,\tau)+\alpha\big)\mbox{d}\tau&\leq&\lim_{n\rightarrow+\infty}\int_t^se^{F(\tau)}\big(L_n(\gamma_n,\dot\gamma_n,\tau)+\alpha\big)\mbox{d}\tau\\
&\leq &\lim_{n\rightarrow+\infty}\int_t^se^{F(\tau)}\big(L_n(\eta_n,\dot\eta_n,\tau)+\alpha\big)\mbox{d}\tau
\end{eqnarray*}
for any Lipschitz continuous $\eta_n:[t,s]\rightarrow M$ ending with $\gamma_n(t)$ and $\gamma_n(s)$. Since for any Lipschitz continuous $\eta:[t,s]\rightarrow M$ ending with $\gamma_*(t)$ and $\gamma_*(s)$, we can find such a sequence $\eta_n:[t,s]\rightarrow M$ converging to $\eta$ uniformly, then we get
\[
\int_t^se^{F(\tau)}(L(\gamma_*,\dot\gamma_*,\tau)+\alpha)\mbox{d}\tau\leq \inf_{\substack{\eta\in C^{ac}([t,s],M)\\\eta(t)=\gamma_*(t)\\\eta(s)=\gamma_*(s)}}\int_t^se^{F(\tau)}(L(\eta, \dot\eta,\tau)+\alpha)\mbox{d}\tau
\]
for any $t<s\in\mathbb{R}$, which implies $\gamma_*$ satisfies the Euler-Lagrange equation. Due to Theorem \ref{thm:1}, the weak KAM solution $u_*^-$ associated with $L$ is unique, so $\gamma_*$ is globally minimal, hence globally calibrated by $u_*^-$, i.e. $\widetilde \gamma_*\in\widetilde \mathcal{A}(L)$.\qed
\subsection{Mather set in the condition (\textbf{H0}$^-$)} For any globally calibrated curve $\widetilde \gamma$, we can always find a sequence $T_n\to+\infty$ such that a $\varphi_L^t-$invariant measure $\widetilde \mu$ is defined by
\[
\int_{TM\times\mathbb{T}}f(x,v,t)\mbox{d}\widetilde \mu=\lim_{n\rightarrow+\infty}\frac{1}{T_n}\int_0^{T_n}f(\gamma,\dot\gamma, t)\mbox{d}t,\quad\forall f\in C_c(TM\times\mathbb{T},\mathbb{R}).
\]
So the set of $\varphi_L^t-$invariant measures ${\mathfrak M}_L$ is not empty.
\begin{prop}\label{prop:mat}
For all $\widetilde \nu\in{\mathfrak M}_L$ and $\alpha\in\mathbb{R}$, we have
\[
\int_{TM\times\mathbb{T}}\big(L+\alpha-f(t)u_\alpha^-\big)\mbox{d}\widetilde \nu\geq 0.
\]
Besides,
\[
\int_{TM\times\mathbb{T}}\big(L+\alpha-f(t)u_\alpha^-\big)\mbox{d} \widetilde \nu= 0 \quad\Longleftrightarrow\quad \text{supp}(\widetilde \nu)\subset\widetilde \mathcal{A}.
\]
\end{prop}
\begin{proof} For any Euler-Lagrange curve $\gamma:\mathbb{R}\rightarrow M$ contained in $\pi_x\footnote{Here $\pi_x,\pi_t,\pi_u$ are the standard projections onto the spaces $M,\mathbb{T},\mathbb{R}$ respectively.} \text{supp}(\widetilde \nu)$, we have
\begin{eqnarray*}
& &\int_{TM\times\mathbb{T}} f(t) u_\alpha^-(x,t)\mbox{d}\widetilde \nu\\
&=&\lim_{T\rightarrow+\infty}\frac1T\int_0^Tf(t)u_\alpha^-(\gamma(t),t)\mbox{d}t\\
&\leq& \lim_{T\rightarrow+\infty}\frac1T\int_0^Tf(t)\int_{-\infty}^t e^{F(s)-F(t)} [L(\gamma(s),\dot\gamma(s),s)+\alpha] \mbox{d}s \mbox{d}t\\
&=&\lim_{T\rightarrow+\infty}\frac1T\int_0^Tf(t)e^{-F(t)}\int_{-\infty}^te^{F(s)} [L(\gamma(s),\dot\gamma(s),s) +\alpha]\mbox{d}s\mbox{d}t\\
&=&\lim_{T\rightarrow+\infty}-\frac1T\int_0^T\Big( \int_{-\infty}^te^{F(s)} [L(\gamma(s),\dot\gamma(s),s)+\alpha] \mbox{d}s\Big)\mbox{d} e^{-F(t)}\\
&=&\lim_{T\rightarrow+\infty}-\frac1T\Big(e^{-F(t)}\int_{-\infty}^te^{F(s)} [L(\gamma(s),\dot\gamma(s),s) +\alpha]ds\Big|_0^T\Big)\\
& &+\lim_{T\rightarrow+\infty}\frac1T\int_0^TL(\gamma(t),\dot\gamma(t),t)+\alpha \mbox{d}t\\
&=&\int_{TM\times\mathbb{T}}L(x,v,t)+\alpha \mbox{d} \widetilde \nu,\nonumber
\end{eqnarray*}
which is an equality only when $\gamma$ is a backward calibrated curve on $(-\infty,t]$ for all $t\in\mathbb{R}$, i.e. $\gamma$ is globally calibrated.\qed
\end{proof}
Due to this Proposition we can easily show that $\emptyset\neq \widetilde \mathcal{M}\subset\widetilde \mathcal{A}$. Moreover, as we did for the Aubry set, we can similarly get that $\pi^{-1}:\mathcal{M}\rightarrow\widetilde \mathcal{M}$ is a Lipschitz graph.
\subsection{Maximal global attractor in the condition (\textbf{H0}$^-$)}
Since now $[f]>0$ and
$
\dfrac d{dt}\widehat H(x,p,\bar{s},I,u)=-f(t)\widehat H(x,p,\bar{s},I,u)
$
due to Remark \ref{rmk:pro},
for any initial point $(x,p,s,I,u)$ the $\omega-$limit of the trajectory $\widehat \varphi_{ H}^t(x,p,\bar{s},I,u)$ lies in
\begin{equation}
\widehat \Sigma_{ H}:=\{{\widehat H}(x,p,\bar{s},I,u)=0\}\subset T^*M\times T^*\mathbb{T}\times\mathbb{R}.
\end{equation}
\begin{lem}\label{lem:layer}
For any point $Z:=\big(x,p,\bar s,\alpha-f(s)u-H(x,p,s),u\big)\in\widehat \Sigma_{ H}$ with $u\leq u^-_\alpha(x,s)$, if
\[
\liminf_{t\rightarrow-\infty}e^{F(t)}\big|\pi_u\widehat \varphi_{ H}^t(Z)\big|=0,
\]
then $\pi_x\widehat \varphi_{ H}^t(Z)$ is a backward calibrated curve for $t\leq 0$.
\end{lem}
\proof
From the equation $\dot u=\langle H_p,p\rangle-H+\alpha-f(t)u$, we derive
\begin{eqnarray*}
e^{F(s)}\pi_uZ&=&\int_{-\infty}^0 \frac{d}{dt}e^{F(t+s)}\pi_u\widehat \varphi_{ H}^t(Z)\mbox{d}t\\
&=&\int_{-\infty}^s e^{F(t)}\big(L(\mathcal{L}( \varphi_{ H}^{t-s}(x,p,\bar s)))+\alpha\big)\mbox{d}t\leq e^{F(s)}u_\alpha^-(x,s),
\end{eqnarray*}
then due to the expression of $u^-_\alpha$ in (\ref{Sec3:solution}), $\pi_x\widehat \varphi_{ H}^t(Z)$ is a backward calibrated curve for $t\leq 0$.\qed
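The first equality in the computation above is an instance of the following elementary identity, recorded here as a sketch. Along the flow, the Legendre transform gives $\langle H_p,p\rangle-H=L$, so the $u-$equation $\dot u=\langle H_p,p\rangle-H+\alpha-f(t)u$ can be integrated with the factor $e^{F}$:
\begin{align*}
\frac{\mbox{d}}{\mbox{d}t}\Big(e^{F(t+s)}\pi_u\widehat \varphi_{ H}^t(Z)\Big)&=e^{F(t+s)}\big(\dot u(t)+f(t+s)u(t)\big)\\
&=e^{F(t+s)}\big(L(\mathcal{L}(\varphi_{ H}^{t}(x,p,\bar s)))+\alpha\big),
\end{align*}
while the boundary term at $t=-\infty$ vanishes by the liminf assumption of the Lemma.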
This Lemma inspires us to decompose $\widehat \Sigma_{ H}$ further:
\[
\left\{
\begin{aligned}
\widehat \Sigma_{ H}^-:= \big\{(x,p,\bar{s},\alpha-f(s)u-H(x,p,s), u)\big| u> u^-_\alpha(x,s)\big\},\\
\widehat \Sigma_{ H}^0:=\big\{(x,p,\bar{s},\alpha-f(s)u-H(x,p,s), u)\big| u= u^-_\alpha(x,s)\big\},\\
\widehat \Sigma_{ H}^+:=\big\{(x,p,\bar{s},\alpha-f(s)u-H(x,p,s), u)\big| u< u^-_\alpha(x,s)\big\}.
\end{aligned}
\right.\]
\begin{lem}
For any $Z=\big(x,p,\bar s,\alpha-f(s)u-H(x,p,s),u\big)\in \widehat \Sigma_{ H}$, we have
\begin{eqnarray}\label{eq:+0}
& &\partial_t^+\Big(u_\alpha^-(\pi_{x,t}\widehat \varphi_{ H}^t(Z))-\pi_u\widehat \varphi_{ H}^t(Z)\Big)\\
&\leq& -f(t+s)\Big(u_\alpha^-(\pi_{x,t}\widehat \varphi_{ H}^t(Z))-\pi_u\widehat \varphi_{ H}^t(Z)\Big).\nonumber
\end{eqnarray}
Consequently, $\lim_{t\rightarrow+\infty}\widehat \varphi_{H}^t(Z)\in \widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0$.
\end{lem}
\proof
As $\widehat \varphi_{ H}^t(Z)=\big(x(t),p(t),\overline{t+s},-f(s+t)u(t)-H(x(t),p(t),s+t),u(t)\big)$, we have
\begin{eqnarray*}
& &\partial_t^+\big[u_\alpha^-(x(t),s+t)-u(t)\big]\\
&\leq&\max\big\langle \partial_x^* u_\alpha^-(x(t),s+t),\dot x(t)\big\rangle+\partial_t^*u_\alpha^-(x(t),s+t)-\dot u(t)\\
&\leq& \max H(x(t),\partial_x^* u_\alpha^-(x(t),s+t),s+t)+L(x(t),\dot x(t),s+t)\\
& &+\partial^*_tu_\alpha^-(x(t),s+t)-\langle H_p(x(t),p(t),t+s),p(t)\rangle\\
& &+f(t+s)u(t)+H(x(t),p(t),s+t)-\alpha\\
&=&\max H(x(t),\partial_x^* u_\alpha^-(x(t),s+t),s+t)+\partial^*_tu^-_\alpha(x(t),s+t)\\
& &+f(t+s)u(t)-\alpha\\
&\leq &f(t+s)[u(t)-u_\alpha^-(x(t),t+s)]
\end{eqnarray*}
where the `max' is taken over all elements $(\partial_x^* u_\alpha^-(x(t),s+t), \partial_t^* u_\alpha^-(x(t),s+t))$ of $D^*u_\alpha^-(x(t),s+t)$ (see Theorem \ref{thm:reachable deri} for the definition). So $\lim_{t\rightarrow+\infty}\widehat \varphi_{ H}^{t}(Z)\in\widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0$.\qed
\begin{prop}
$\Omega:=\bigcap_{t\geq 0} \widehat \varphi_{ H}^t(\widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0)$ is the maximal invariant set contained in $\widehat \Sigma_{H}^-\cup\widehat \Sigma_{ H}^0$.
\end{prop}
\begin{proof}
Due to (\ref{eq:+0}), $\widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0$ is forward invariant. Besides, any invariant set in $\widehat \Sigma_{ H}$ has to lie in $\widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0$. So
$
\Omega
$
is the maximal invariant set in $\widehat \Sigma_{ H}^-\cup\widehat \Sigma_{ H}^0$.\qed
\end{proof}
\begin{lem}
If the $p-$component of $\Omega$ is bounded, then the $u-$ and $I-$components of $\Omega$ are also bounded.
\end{lem}
\begin{proof}
It suffices to prove that for any $(x_0,p_0,\bar{t}_0,I_0,u_0)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}$, there exists a time $T(x_0,p_0,\bar{t}_0,I_0,u_0)>0$ such that for any $t\geq T$,
\begin{equation}\label{eq:*}
\big\|\pi_{u,I}\widehat \varphi_{ H}^t(x_0,p_0,\bar{t}_0,I_0,u_0)\big\|\leq C \tag{*}
\end{equation}
for a uniform constant $C=C(\pi_{p}\Omega)$. Since $\pi_{p}\Omega$ is bounded, due to the definition of $\Omega$, for any $(x_0,p_0,\bar{t}_0,I_0,u_0)\in T^*M\times T^*\mathbb{T}\times\mathbb{R}$, there always exists a time $T'(x_0,p_0,\bar{t}_0,I_0,u_0)>0$ such that for any $t\geq T'$,
\[
\big\|\pi_p\widehat \varphi_{ H}^t(x_0,p_0,\bar{t}_0,I_0,u_0)\big\|\leq C'=\frac32 \text{diam}(\pi_{p}\Omega).
\]
On the other side, the $u-$equation of (\ref{eq:dis}) implies that for any $t> 0$,
\begin{eqnarray*}
& &\big\|\pi_u\widehat \varphi_{ H}^{t+T'}(x_0,p_0,\bar{t}_0,I_0,u_0)\big\|\\
&\leq& e^{F(t_0+T')-F(t+T'+t_0)}|\pi_u\widehat \varphi_{ H}^{T'}(x_0,p_0,\bar{t}_0,I_0,u_0)|\\
& &+\int_0^te^{F(s+t_0+T')-F(t+t_0+T')}\Big|\langle H_p,p\rangle-H\Big|_{\widehat \varphi_{ H}^{s+T'}(x_0,p_0,\bar{t}_0,I_0,u_0)}ds
\end{eqnarray*}
where the first term on the right hand side tends to zero as $t\to +\infty$, and the second term has a uniform bound depending only on $[f]$ and $C'$. Therefore, there exist a constant $C''=C''(C',[f])$ and a time $T''(x_0,p_0,\bar{t}_0,I_0,u_0)$ such that for any $t\geq T'+T''$,
\[
\big\|\pi_u\widehat \varphi_{ H}^{t}(x_0,p_0,\bar{t}_0,I_0,u_0)\big\|\leq C''.
\]
Benefiting from the boundedness of the $u-$component, we can repeat the aforementioned scheme for the $I-$equation of (\ref{eq:dis}), which proves (\ref{eq:*}).\qed
\end{proof}
Once $\Omega$ is compact, it has to be the maximal global attractor of $\widehat \varphi_{ H}^t$ in the whole phase space $T^*M\times T^*\mathbb{T}\times\mathbb{R}$.
Then due to Proposition \ref{Sec3:pro_differentiable}, any backward calibrated curve $\gamma_{x,s}^-:(-\infty,s]\rightarrow M$ determines a unique trajectory
\begin{eqnarray*}
\widehat \varphi_{ H}^t&\Big(&\mathcal{L}^{-1}(x,\lim_{\varsigma\rightarrow s_-}\dot\gamma_{x,s}^-(\varsigma),s),\alpha-f(s)u_\alpha^-(x,s)\\
& &-H\big(\mathcal{L}^{-1}(x,\lim_{\varsigma\rightarrow s_-}\dot\gamma_{x,s}^-(\varsigma),s)\big),u_\alpha^-(x,s)\Big)
\end{eqnarray*}
for $t\in\mathbb{R}$, which lies in $\widehat \Sigma_{ H}$. Furthermore,
\[
\widehat \mathcal{A}:=\Big\{\Big(\mathcal{L}^{-1}(x,\partial_x u_\alpha^-(x,t),t),\partial_t u_\alpha^-(x,t),u_\alpha^-(x,t)\Big)\Big|(x,t)\in\mathcal{A}\Big\}\subset\Omega
\]
because $\Omega$ is the maximal invariant set in $\widehat \Sigma_{ H}$.
\begin{lem}
$\widehat \mathcal{A}$ is the maximal invariant set contained in $\widehat \Sigma_{H}^0$.
\end{lem}
\begin{proof}
If $\mathcal{I}$ is an invariant set contained in $\widehat \Sigma_{ H}^0$, then $\pi_u(\widehat \varphi_{ H}^t(\mathcal{I}))$ is always bounded. Due to Lemma \ref{lem:layer}, any trajectory in $\mathcal{I}$ has to be backward calibrated. As $\mathcal{I}$ is invariant, any trajectory in it has to be contained in $\widehat \mathcal{A}$. \qed
\end{proof}
\vspace{20pt}
\noindent{\it Proof of Theorem \ref{cor:critical}:} Let $\tilde{\mu}\in \mathfrak{M}_L$ be ergodic; then we can find $(x_0,v_0,t_0)\in TM\times\mathbb{T}$ such that
\begin{eqnarray*}
& &\int_{TM\times\mathbb{T}}e^{F(t)}(L(x,v,t)+c(H))\mbox{d}\tilde{\mu}\\
&=&\lim_{T\to+\infty}\frac{1}{T}
\int^0_{-T}e^{F(\tau)}(L(\varphi_L^\tau(x_0,v_0,t_0))+c(H))\mbox{d}\tau.
\end{eqnarray*}
Therefore, for any weak KAM solution $u_c^-:M\times\mathbb{T}\rightarrow\mathbb{R}$ of \eqref{eq:sta-hj2}, we have
\begin{eqnarray*}
& &e^{F(0)}u_c^-(x_0,t_0)-e^{F(-T)}u_c^-(\pi_{x,t}\varphi_L^{-T}(x_0,v_0,t_0))\\
&\leq&\int^0_{-T}e^{F(\tau)}(L(\varphi_L^\tau(x_0,v_0,t_0))+c(H))\mbox{d}\tau,
\end{eqnarray*}
which implies
\begin{eqnarray*}
& &\lim_{T\to+\infty}\frac{1}{T}\int^0_{-T}e^{F(\tau)}(L(\varphi_L^\tau(x_0,v_0,t_0))+c(H))\mbox{d}\tau\\
&\geq&\lim_{T\to+\infty}\frac{1}{T}
\big(e^{F(0)}u_c^-(x_0,t_0)-e^{F(-T)}u_c^-(\pi_{x,t}\varphi_L^{-T}(x_0,v_0,t_0))\big)=0.
\end{eqnarray*}
Hence,
$$
\int_{TM\times\mathbb{T}}e^{F(t)}(L(x,v,t)+c(H))\mbox{d}{\tilde{\mu}}\geq 0.
$$
That further implies
$$
\frac{\inf_{\tilde{\mu}\in\mathfrak M_L}\int_{TM\times\mathbb{T}}e^{F(t)}L(x,v,t)\mbox{d}\tilde{\mu}}{
\int_{0}^1e^{F(\tau)}\mbox{d}\tau}\geq-c(H).
$$
On the other side, for any $(x,0)\in M\times\mathbb{T}$ fixed, the backward calibrated curve $\gamma^-_{x,0}:(-\infty,0]\to M$ satisfies
\begin{align*}
e^{F(0)}u_c^-(\gamma^-_{x,0}(0),0)&-e^{F(-n)}u_c^-(\gamma^-_{x,0}(-n),-n)\\
&=\int^{0}_{-n}e^{F(\tau)}(L(\gamma^-_{x,0}(\tau),\dot{\gamma}^-_{x,0}(\tau),\tau)+ c(H))\mbox{d}\tau
\end{align*}
for any $n\in\mathbb{Z}_+$.
By the {\sf Riesz Representation Theorem}, the time average w.r.t. $\gamma^-_{x,0}|_{[-n,0]}:[-n,0]\rightarrow M$ determines a sequence of Borel probability measures $\tilde{\mu}_n$. Due to Lemma \ref{Sec3:calibrated dot bounded}, we can always find a subsequence $\{\tilde{\mu}_{n_k}\}$ converging weakly to a Borel probability measure $\tilde{\mu}^*$, i.e.
\begin{eqnarray*}
\int_{TM\times\mathbb{T}}g(x,v,t)\mbox{d}{\tilde{\mu}}^*
&=&\lim_{k\to\infty}\int_{TM\times\mathbb{T}}g(x,v,t)\mbox{d}{\tilde{\mu}}_{n_k}\\
&=&\lim_{k\to\infty}\frac{1}{n_k}\int^0_{-n_k}g(\gamma^-_{x,0}(\tau),\dot{\gamma}^-_{x,0}(\tau),\bar{\tau})\mbox{d}\tau
\end{eqnarray*}
for any $g\in C_c(TM\times\mathbb{T},\mathbb{R})$. Besides, we can easily prove that $\tilde{\mu}^*\in\mathfrak M_L$ and
\begin{eqnarray*}
& &\int_{TM\times\mathbb{T}}e^{F(t)}(L(x,v,t)+c(H))\mbox{d}{\tilde{\mu}}^*\\
&=&\lim_{k\to\infty}\frac{1}{n_k}\int^{0}_{-n_k}e^{F(\tau)}(L(\gamma^-_{x,0}(\tau),\dot{\gamma}^-_{x,0}(\tau),\tau)+ c(H))\mbox{d}\tau\\
&=&\lim_{k\to\infty}\frac{1}{n_k}\bigg(e^{F(0)}u^-_c(\gamma^-_{x,0}(0),0)-e^{F(-n_k)}u^-_c(\gamma^-_{x,0}(-n_k),-n_k)\bigg)=0.
\end{eqnarray*}
Then,
$$
-c(H)=\frac{\inf_{ \tilde{\mu}\in\mathfrak M_L}\int_{TM\times\mathbb{T}}e^{F(t)}L(x,v,t)\mbox{d}{\tilde{\mu}}}{\int_0^1e^{F(\tau)}\mbox{d}\tau}.
$$
Collecting all the measures attaining the infimum on the right side of the previous equality, we get the set of Mather measures $\mathfrak M_m$. Due to the {\sf Crossing Lemma} in \cite{Mat}, the Mather set
\[
\widetilde \mathcal{M}:=\overline{\bigcup_{\tilde{\mu}\in\mathfrak M_m} \text{supp}(\tilde{\mu})}
\]
is a Lipschitz graph over $\mathcal{M}:=\pi\widetilde \mathcal{M}$.
\qed
\section{Convergence of parameterized viscosity solutions}\label{s4}
In this section we deal with the convergence of the weak KAM solutions $u_\delta^-$ of system (\ref{eq:ham-par}) as $\delta\rightarrow 0_+$.
Recall that $[f_0]=0$ and
\[
f_1(t):=\lim_{\delta\rightarrow 0_+}\dfrac{f_\delta(t)-f_0(t)}{\delta}>0,
\]
so there must exist a $\delta_0>0$ such that
\[
f_{\delta}(t)>f_0(t),\quad\forall\ t\in\mathbb{T}
\]
for all $\delta\in[0,\delta_0]$.
Due to Theorem \ref{cor:1} there exists a unique $c(H)$, such that the weak KAM solutions $u_0^-$ of (\ref{eq:hj-criti}) with $\alpha=c(H)$ exist.
For each $(x,t)\in M\times\mathbb{R}$ and $s<t$,
the {\sf Lax-Oleinik operator}
$$
T_s^{\delta,-}(x,t)=\inf_{\substack{\gamma\in C^{ac}([s,t],M)\\\gamma(t)=x}}\int^t_se^{F_\delta(\tau)-F_\delta(t)}\big(L(\gamma(\tau),\dot{\gamma}(\tau),\tau)+c(H)\big)\mbox{d}\tau
$$
is well defined, and the following Lemma holds:
\begin{lem}
For each $\delta\geq 0$, $T_s^{\delta,-}(x,t)$ converges uniformly to $u^-_{\delta}(x,t)$ on each compact subset of $M\times \mathbb{R}$ as $s\to-\infty$.
\end{lem}
\proof
Let $\gamma^-_{\delta,x,t}:(-\infty,t]\to M$ be a calibrated curve of $u_\delta^-(x,t)$. Then,
$$
e^{F_\delta(t)}u_\delta^-(x,t)=e^{F_\delta(s)}u_\delta^-(\gamma^-_{\delta,x,t}(s),s)+\int^t_se^{F_\delta(\tau)}(L(\gamma^-_{\delta,x,t},\dot{\gamma}^-_{\delta,x,t},\tau)+c(H))\mbox{d}\tau
$$
and
$$
e^{F_\delta(t)}T_s^{\delta,-}(x,t)\leq \int^t_se^{F_\delta(\tau)}(L(\gamma^-_{\delta,x,t},\dot{\gamma}^-_{\delta,x,t},\tau)+c(H))\mbox{d}\tau.
$$
Then,
\begin{equation}\label{eq:5-58}
T_s^{\delta,-}(x,t)-u_\delta^-(x,t)\leq-e^{F_\delta(s)-F_\delta(t)}u^-_\delta(\gamma^-_{\delta,x,t}(s),s).
\end{equation}
On the other hand, let $\gamma_0:[s,t]\to M$ be a minimizer of $T_s^{\delta,-}(x,t)$. Then,
$$
e^{F_\delta(t)}T_s^{\delta,-}(x,t)=\int^t_se^{F_\delta(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H))\mbox{d}\tau
$$
and
$$
e^{F_\delta(t)}u^-_{\delta}(x,t)-e^{F_\delta(s)}u^-_\delta(\gamma_0(s),s)\leq \int^t_se^{F_\delta(\tau)}(L(\gamma_0,\dot{\gamma}_0,\tau)+c(H))\mbox{d}\tau.
$$
Hence,
\begin{equation}\label{eq:5-59}
u^-_\delta(x,t)-T_s^{\delta,-}(x,t)\leq e^{F_\delta(s)-F_\delta(t)}u_\delta^-(\gamma_0(s),s).
\end{equation}
From (\ref{eq:5-58}) and (\ref{eq:5-59}), it follows
$$
|u^-_\delta(x,t)-T_s^{\delta,-}(x,t)|\leq e^{F_\delta(s)-F_\delta(t)}\max_{M\times\mathbb{T}}|u^-_\delta|,
$$
which means $T_s^{\delta,-}(x,t)$ converges uniformly to $u_\delta^-(x,t)$ on each compact subset of $M\times\mathbb{R}$.
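We remark why the right side of the last inequality tends to $0$ as $s\to-\infty$ when $\delta\in(0,\delta_0]$ (a one-line sketch): since $f_\delta>f_0$ on $\mathbb{T}$ and $[f_0]=0$, the mean $[f_\delta]$ is positive, so
\begin{align*}
e^{F_\delta(s)-F_\delta(t)}\leq C_\delta\, e^{[f_\delta](s-t)}\longrightarrow 0\quad \mbox{as } s\to-\infty,
\end{align*}
where $C_\delta$ depends only on the oscillation of the periodic part of $F_\delta$.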
\qed
\begin{lem}\label{lem:equi-lip}
$u_\delta^-:M\times\mathbb{T}\rightarrow\mathbb{R}$ are equi-bounded and equi-Lipschitz w.r.t. $\delta\in(0,\delta_0]$.
\end{lem}
\proof
To show $u_{\delta}^-$ are equi-bounded from below, it suffices to show
\[
\{T_s^{\delta,-}(x,t)|(x,t)\in M\times [0,1],s\leq 0,\delta\in(0,\delta_0]\}
\]
is bounded from below. Let $\gamma_0:[s,t]\to M$ be a minimizer of $T_s^{\delta,-}(x,t)$, $u_\delta(\tau):=T_s^{\delta,-}(\gamma_0(\tau),\tau)$, and $\tilde{u}_\delta(\tau):=e^{F_\delta(\tau)}u_\delta(\tau), \tau\in[s,t]$. Then,
$$
\frac{\mbox{d}\tilde{u}_\delta(\tau)}{\mbox{d}\tau}=e^{F_\delta(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H)).
$$
Hence,
$$
\frac{\mbox{d}u_\delta(\tau)}{\mbox{d}\tau}=L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H)-f_\delta(\tau)u_\delta(\tau).
$$
We may assume $T_s^{\delta,-}(x,t)<0$ for some $\delta\in(0,\delta_0]$, $(x,t)\in M\times [0,1],s\leq 0$; otherwise $0$ is a uniform lower bound of $\{T_s^{\delta,-}(x,t)|(x,t)\in M\times[0,1],s\leq 0,\delta\in(0,\delta_0]\}$.
Note that $u_\delta(\cdot)$ is continuous and $u_\delta(s)=0$. There exists $s_0\in[s,t)$ such that $u_\delta(s_0)=0$ and $u_\delta(\tau)<0,\tau\in(s_0,t]$. From $f_\delta>f_0$, it follows that
$$
\frac{\mbox{d}u_\delta(\tau)}{\mbox{d}\tau}\geq L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H)-f_0(\tau)u_\delta(\tau),\tau\in[s_0,t].
$$
Hence,
$$
\frac{\mbox{d}}{\mbox{d}\tau}\big(e^{F_0(\tau)}u_\delta(\tau)\big)\geq e^{F_0(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H)),
$$
where $F_0(\tau)=\int^\tau_0f_0(\sigma)\mbox{d}\sigma$.
Integrating on $[s_0,t]$, it holds that
\begin{equation}\label{eq:lowerbd}
e^{F_0(t)}\cdot u_\delta(t)\geq\int^t_{s_0}e^{F_0(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H))\mbox{d}\tau.
\end{equation}
Let $\beta:[t,t+2-\overline{t-s_0}]\to M$ be a geodesic with $\beta(t)=\gamma_0(t),\beta(t+2-\overline{t-s_0})=\gamma_0(s_0)$, and
$$
|\dot{\beta}(\tau)|=\frac{d(\gamma_0(s_0),\gamma_0(t))}{2-\overline{t-s_0}}\leq \mbox{diam}(M)=:k_1.
$$
Due to the definition of $c(H)$ in (\ref{eq:def_critical}), we derive
\begin{align*}
&\int^t_{s_0}e^{F_0(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H))\mbox{d}\tau\\
&+\int^{t+2-\overline{t-s_0}}_te^{F_0(\tau)}(L(\beta(\tau),\dot{\beta}(\tau),\tau)+c(H))\mbox{d}\tau\geq 0.
\end{align*}
Note that
\begin{eqnarray*}
& &\int^{t+2-\overline{t-s_0}}_te^{F_0(\tau)}(L(\beta(\tau),\dot{\beta}(\tau),\tau)+c(H))\mbox{d}\tau\\
&\leq&\int^{t+2-\overline{t-s_0}}_te^{F_0(\tau)}(C_{k_1}+c(H))\mbox{d}\tau
\leq 2(C_{k_1}+c(H))e^{\max_{t\in\mathbb{T}} F_0(t)}.
\end{eqnarray*}
Hence,
$$
\int^t_{s_0}e^{F_0(\tau)}(L(\gamma_0(\tau),\dot{\gamma}_0(\tau),\tau)+c(H))\mbox{d}\tau
\geq -2(C_{k_1}+c(H))e^{\max F_0}.
$$
Combining (\ref{eq:lowerbd}), we derive
$$
u_\delta(t)\geq -2 |C_{k_1}+c(H)| e^{\max F_0-\min F_0}.
$$
Next, we prove $u_{\delta}^-(x,t)$ are equi-bounded from above. It suffices to show
$\{T^{\delta,-}_s(x,t)|(x,t)\in M\times [0,1],s\leq 0,\delta\in(0,\delta_0]\}$ is bounded from above. We may assume $T^{\delta,-}_s(x,t)>0$ for some $\delta\in(0,\delta_0]$, $(x,t)\in M\times [0,1],s\leq 0$; otherwise $0$ is a uniform upper bound of $\{T^{\delta,-}_s(x,t)|(x,t)\in M\times[0,1],s\leq 0,\delta\in(0,\delta_0]\}$.
Let $u_0^-(x,t)$ be a weak KAM solution of
$$
\partial_tu+H(x,\partial_xu,t)+f_0(t)u=c(H),
$$
and $\gamma^-_{x,t}:(-\infty,t]\to M$ be a calibrated curve of $u_0^-(x,t)$. Let
$$
v_\delta(\tau):=T^{\delta,-}_s(\gamma^-_{x,t}(\tau),\tau),\tau\in[s,t].
$$
Then
\begin{eqnarray*}
& &\frac{e^{F_\delta(\tau+\Delta \tau)}v_\delta(\tau+\Delta \tau)-e^{F_\delta(\tau)}v_\delta(\tau)}{\Delta \tau}\\
&\leq&\frac{1}{\Delta \tau}\int^{\tau+\Delta \tau}_{\tau}e^{F_\delta(\sigma)}(L(\gamma^-_{x,t}(\sigma),\dot{\gamma}^-_{x,t}(\sigma),\sigma)+c(H))\mbox{d}\sigma.
\end{eqnarray*}
Note that
\begin{align*}
&\varlimsup_{\Delta \tau\to 0}\frac{e^{F_\delta(\tau+\Delta \tau)}v_\delta(\tau+\Delta \tau)-e^{F_\delta(\tau)}v_\delta(\tau)}{\Delta \tau}\\
&=\varlimsup_{\Delta \tau\to 0}\frac{e^{F_\delta(\tau+\Delta\tau)}v_\delta(\tau+\Delta \tau)-e^{F_\delta(\tau+\Delta\tau)}v_\delta(\tau)+e^{F_\delta(\tau+\Delta\tau)}v_\delta(\tau)-e^{F_\delta(\tau)}v_\delta(\tau)}{\Delta \tau}\\
&=e^{F_\delta (\tau)}\varlimsup_{\Delta \tau\to 0}\bigg(\frac{v_\delta(\tau+\Delta\tau)-v_\delta(\tau)}{\Delta \tau}\bigg)+e^{F_\delta(\tau)}f_\delta(\tau)v_\delta(\tau).
\end{align*}
Hence,
$$
\varlimsup_{\Delta \tau\to 0}\bigg(\frac{v_\delta(\tau+\Delta\tau)-v_\delta(\tau)}{\Delta \tau}\bigg)\leq L(\gamma^-_{x,t}(\tau),\dot{\gamma}^-_{x,t}(\tau),\tau)+c(H)-f_\delta(\tau)v_\delta(\tau).
$$
Since $v_\delta(s)=0$ and $v_\delta(\tau)$ is continuous,
there exists $s_1\in[s,t)$ such that $v_\delta(s_1)=0$ and $v_\delta(\tau)>0,\tau\in(s_1,t]$.
For $\tau\in (s_1,t]$,
\begin{align*}
\varlimsup_{\Delta \tau\to 0}\bigg(\frac{v_\delta(\tau+\Delta\tau)-v_\delta(\tau)}{\Delta \tau}\bigg)&\leq L(\gamma^-_{x,t}(\tau),\dot{\gamma}^-_{x,t}(\tau),\tau)+c(H)-f_\delta(\tau)v_\delta(\tau)\\
&\leq L(\gamma^-_{x,t}(\tau),\dot{\gamma}^-_{x,t}(\tau),\tau)+c(H)-f_0(\tau)v_\delta(\tau).
\end{align*}
Then,
\begin{eqnarray*}
& &\varlimsup_{\Delta \tau\to 0}\bigg(\frac{e^{F_0(\tau+\Delta\tau)}v_\delta(\tau+\Delta\tau)-e^{F_0(\tau)}v_\delta(\tau)}{\Delta \tau}\bigg)\\
&\leq& e^{F_0(\tau)}(L(\gamma^-_{x,t}(\tau),\dot{\gamma}^-_{x,t}(\tau),\tau)+c(H)).
\end{eqnarray*}
From $v_\delta(s_1)=0$, it follows that
\begin{align*}
e^{F_0(t)}v_\delta(t)&\leq\int^t_{s_1}e^{F_0(\tau)}(L(\gamma^-_{x,t}(\tau),\dot{\gamma}^-_{x,t}(\tau),\tau)+c(H))\mbox{d}\tau\\
&=e^{F_0(t)}u_0^-(x,t)-e^{F_0(s_1)}u_0^-(\gamma^-_{x,t}(s_1),s_1).
\end{align*}
Then,
$$
v_\delta(t)\leq 2\max|u_0^-|\cdot e^{\max F_0-\min F_0}.
$$
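In summary, combining the lower bound on $u_\delta$ with the upper bound on $v_\delta$ obtained above, the proof yields the uniform two-sided estimate (with the constants as above):

```latex
-2|C_{k_1}+c(H)|\,e^{\max F_0-\min F_0}
\;\leq\; u_\delta^-(x,t)\;\leq\;
2\max|u_0^-|\cdot e^{\max F_0-\min F_0},
\qquad (x,t)\in M\times[0,1],\ \delta\in(0,\delta_0].
```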
Note that $u_\delta^-(x,t)$ is equi-bounded. By an approach similar to the proof of Lemma \ref{lem:lip-dis}, we derive that $u_\delta^-$ is equi-Lipschitz.
\qed
\begin{lem}\label{lem:com-cal}
For any $\delta\in(0,\delta_0]$ and any $(x,\bar{s})\in M\times\mathbb{T}$, the backward calibrated curve $\gamma_{\delta,x,s}^-:(-\infty,s]\rightarrow M$ associated with $u_\delta^-$ has a uniformly bounded velocity, i.e. there exists a constant $K>0$, such that
$$
|\dot\gamma_{\delta,x,s}^-(t)|\leq K, \quad \forall \delta\in(0,\delta_0]\mbox{ and } t\in(-\infty,s).
$$
\end{lem}
\proof
By an argument similar to the proof of Lemma \ref{Sec3:calibrated dot bounded}, there exists $s_0$ in each interval of length 1, such that
$$
|\dot{\gamma}^-_{\delta,x,s}(s_0)|\leq C_{k_1}+C(1),
$$
where $k_1=\mbox{diam}(M)$.
Note that $f_{\delta}$ depends continuously on $\delta$ and is 1-periodic. We derive that the Lagrangian flow $(\gamma^-_{\delta,x,s}(\tau),\dot{\gamma}^-_{\delta,x,s}(\tau),\tau)$ is 1-periodic and depends continuously on the parameter $\delta$. Hence, there exists $K>0$ depending only on $L, k_1$, and $\delta_0$, such that $|\dot{\gamma}^-_{\delta,x,s}|<K$.\qed
\begin{prop}\label{prop:geq}
For any ergodic measure $\tilde{\mu}\in\mathfrak M_m(0)$ and any $0<\delta\leq \delta_0$, we have
\begin{eqnarray}
\int_{TM\times\mathbb{T}}e^{F_0(t)}\frac{f_\delta(t)-f_0(t)}{\delta}u_\delta^-(x,t)\mbox{d}\tilde{\mu}(x,v,t)\leq 0.
\end{eqnarray}
\end{prop}
\begin{proof}
Since $\{u_\delta^-\}_{\delta\in(0,\delta_0]}$ is uniformly bounded and $[f_0]=0$, we have
\[
\lim_{T\rightarrow+\infty}\frac 1T\int_0^Tu_\delta^-(\gamma(t),t)\mbox{d} e^{F_0(t)}=\int_{TM\times\mathbb{T}}u_\delta^-(x,t)f_0(t)e^{F_0(t)}\mbox{d}\tilde{\mu}(x,v,t)
\]
for any regular curve $\widetilde \gamma(t)=(\gamma(t),\bar{t}):t\in\mathbb{R}\rightarrow M\times\mathbb{T}$ contained in $\mathcal{M}(\delta)$. Due to Proposition \ref{Sec3:pro_differentiable},
\begin{eqnarray*}
& &\frac 1T\int_0^Tu_\delta^-(\gamma(t),t)\mbox{d} e^{F_0(t)}\\
&=&\frac 1Tu_\delta^-(\gamma(t),t) e^{F_0(t)}\Big|_0^T-\frac 1T\int_0^T e^{F_0(t)}\big[\partial_t u_\delta^-(\gamma(t),t)+\langle \dot\gamma(t),\partial_xu_\delta^-(\gamma(t),t)\rangle\big]\mbox{d}t
\end{eqnarray*}
and
\begin{eqnarray*}
& &\frac 1T\int_0^T e^{F_0(t)}\big[\partial_t u_\delta^-(\gamma(t),t)+\langle \dot\gamma(t),\partial_xu_\delta^-(\gamma(t),t)\rangle\big]\mbox{d}t\\
&\leq&\frac 1T\int_0^T e^{F_0(t)}\big[L(\gamma,\dot\gamma,t)+H(\gamma(t),\partial_x u_\delta^-(\gamma(t),t),t)+\partial_t u_\delta^-(\gamma(t),t)\big]\mbox{d}t\\
&\leq&\frac 1T\int_0^T e^{F_0(t)}\big[L(\gamma,\dot\gamma,t)+c(H)-f_\delta(t)u_\delta^-(\gamma(t),t) \big]\mbox{d}t,
\end{eqnarray*}
by taking $T\rightarrow +\infty$ and dividing both sides by $\delta$, we get the conclusion.\qed
\end{proof}
\begin{defn}
We denote by $\mathcal{F}_-$ the set of all viscosity subsolutions $\omega:M\times\mathbb{T}\rightarrow \mathbb{R}$ of (\ref{eq:hj-par}) with $\delta=0$ such that
\begin{eqnarray}
\int_{TM\times\mathbb{T}} f_1(t)\omega(x,t)e^{F_0(t)}\mbox{d}\tilde{\mu}\leq 0,\quad\forall\ \tilde{\mu}\in\mathfrak M_m(0).
\end{eqnarray}
\end{defn}
\begin{lem}
The set $\mathcal{F}_-$ is uniformly bounded from above, i.e.
\[
\sup\{u(x,t)\,|\ (x,t)\in M\times\mathbb{T},\ u\in\mathcal{F}_-\}<+\infty.
\]
\end{lem}
\begin{proof}
By analogy with Lemma 10 of \cite{CIM}, all the functions in the set
\[
\Big\{ e^{F_0(t)}\omega:M\times\mathbb{T}\rightarrow \mathbb{R}\Big|\omega\prec_{f_0} L+c(H)\Big\}
\]
are uniformly Lipschitz with a Lipschitz constant $\kappa>0$.
For any $\tilde{\mu}\in\mathfrak{M}_m(0)$ and $u\in\mathcal{F}_-$
\begin{eqnarray*}
\min_{(x,t)\in M\times\mathbb{T}}u(x,t)e^{F_0(t)}
&=&\frac{\int_{TM\times\mathbb{T}} f_1(t)\min_{(x,t)\in M\times\mathbb{T}} u(x,t) e^{F_0(t)}\mbox{d}\tilde{\mu}}{\int_{TM\times\mathbb{T}}f_1(t) \mbox{d}\tilde{\mu} }\\
&=&\frac{\int_{TM\times\mathbb{T}} f_1(t) \min_{(x,t)\in M\times\mathbb{T}} u(x,t)e^{F_0(t)} \mbox{d}\tilde{\mu}}{\int_0^1f_1(t)\mbox{d}t }\\
&\leq& \dfrac{\int_{M\times\mathbb{T}}f_1(t) u(x,t)e^{F_0(t)} \mbox{d}\tilde{\mu}}{\int_0^1 f_1(t) \mbox{d}t }\leq 0.
\end{eqnarray*}
Then,
\begin{eqnarray*}
\max_{(x,t)\in M\times\mathbb{T}}u(x,t) e^{F_0(t)}&\leq& \max_{(x,t)\in M\times\mathbb{T}}u(x,t) e^{F_0(t)}-\min_{(x,t)\in M\times\mathbb{T}}u(x,t) e^{F_0(t)}\\
&\leq& \kappa\ \text{diam}(M\times\mathbb{T})<+\infty.
\end{eqnarray*}
As a result,
\[
\max_{(x,t)\in M\times\mathbb{T}} u(x,t)\leq \frac{\max_{(x,t)\in M\times\mathbb{T}}u(x,t) e^{F_0(t)}}{\min_{t\in\mathbb{T}} e^{F_0(t)}}<+\infty
\]
so we finish the proof.\qed
\end{proof}
As $\mathcal{F}_-$ is now bounded from above, we can define the supremal subsolution by
\begin{eqnarray}\label{eq:def-1}
u_0^*:=\sup_{u\in \mathcal{F}_-} u.
\end{eqnarray}
Later we will see that this is indeed a viscosity solution of (\ref{eq:hj-par}) for $\delta=0,\alpha=c(H)$ and is the unique accumulating function of $u_\delta^-$ as $\delta\rightarrow 0_+$.
\begin{prop}\label{prop:leq}
For any $\delta>0$, any viscosity subsolution $\omega:M\times\mathbb{T}\rightarrow\mathbb{R}$ of (\ref{eq:hj-par}) with $\delta=0,\alpha=c(H)$ and any point $(x,s)\in M\times\mathbb{T}$, there exists a $\varphi_{L}^t-$backward invariant
finite measure $\tilde{\mu}_{x,s}^\delta:TM\times\mathbb{T}\rightarrow\mathbb{R}$ such that
\begin{eqnarray}
u_\delta^-(x,s)\geq \omega(x,s)-\int_{TM\times\mathbb{T}}\omega(y,t)e^{F_0(t)}f_1(t)d\tilde{\mu}_{x,s}^\delta(y,v_y,t)
\end{eqnarray}
where
\begin{eqnarray*}
& &\int_{TM\times\mathbb{T}}g(y,t)d\tilde{\mu}_{x,s}^\delta(y,v_y,t)\\
&:=&\int_{-\infty}^s\frac{g(\gamma_{\delta,x,s}^-(t),t)\cdot \frac{\mbox{d}}{\mbox{d}t}(e^{F_\delta(t)}-e^{F_0(t)})}{f_1(t)}\mbox{d}t,\ \forall g\in C(M\times\mathbb{T},\mathbb{R}).
\end{eqnarray*}
\end{prop}
\begin{proof}
For any $(x,\bar{s})\in M\times\mathbb{T}$ and any $\delta\in(0,\delta_0]$, there exists a backward calibrated curve $\gamma_{\delta,x,s}^-:(-\infty,s]\rightarrow M$ ending at $x$, such that the viscosity solution $u_\delta^-$ is differentiable along $(\gamma_{\delta,x,s}^-(t),\overline{t})$ for all $t\in(-\infty,s)$ due to Proposition \ref{Sec3:pro_differentiable}. Precisely, for all $t\in(-\infty,s)$
$$
\frac{\mbox{d}}{\mbox{d}t}\big(e^{F_\delta(t)}u_\delta^-(\gamma^-_{\delta,x,s}(t),t)\big)=e^{F_\delta(t)}\big(L(\gamma_{\delta,x,s}^-(t),\dot{\gamma}_{\delta,x,s}^-(t),t)+c(H)\big).
$$
Integrating on $[-T,s]$,
\begin{eqnarray*}
& &e^{F_\delta(s)}u_\delta^-(x,s)-e^{F_\delta(-T)}u_\delta^-(\gamma_{\delta,x,s}^-(-T),-T)\\
&=&\int_{-T}^se^{F_\delta(t)}\Big[L\Big(\gamma_{\delta,x,s}^-(t),\dot\gamma_{\delta,x,s}^-(t),t\Big)+c(H)\Big]dt
\end{eqnarray*}
for any $T>0$, where $F_\delta(t):=\int_0^tf_\delta(\tau)d\tau$. On the other hand,
\[
\partial_t\omega (x,t)+H(x,\partial_x\omega(x,t),t)+f_0(t)\omega(x,t)\leq c(H),\quad a.e.\ (x,\bar{t})\in M\times\mathbb{T}
\]
since $\omega$ is also a subsolution of (\ref{eq:hj-par}) (with $\delta=0$), then
\begin{eqnarray*}
& &e^{F_\delta(s)}u_\delta^-(x,s)-e^{F_\delta(-T)}u_\delta^-(\gamma_{\delta,x,s}^-(-T),-T)\\
&\geq&\int_{-T}^se^{F_\delta(t)}\Big[L\Big(\gamma_{\delta,x,s}^-(t),\dot\gamma_{\delta,x,s}^-(t),t\Big)+H\Big(\gamma_{\delta,x,s}^-(t),\partial_x\omega(\gamma_{\delta,x,s}^-(t),
t),t\Big)\\& &+\partial_t\omega(\gamma_{\delta,x,s}^-(t),t)+f_0(t)\omega(\gamma_{\delta,x,s}^-(t),t)\Big]\mbox{d}t\\
&\geq&\int_{-T}^se^{F_\delta(t)}\Big[
\frac{d}{dt}\omega(\gamma_{\delta,x,s}^-(t),t)+f_0(t)\omega(\gamma_{\delta,x,s}^-(t),t)\Big]\mbox{d}t\\
&\geq&e^{F_\delta(s)} \omega(x,s)-e^{F_\delta(-T)}\omega(\gamma_{\delta,x,s}^-(-T),-T)\\
& &-\int_{-T}^s\omega(\gamma_{\delta,x,s}^-(t),t) e^{F_\delta(t)}\Big(f_\delta(t)-f_0(t)\Big)\mbox{d}t.
\end{eqnarray*}
By taking $T\rightarrow+\infty$ we finally get
\begin{eqnarray*}
e^{F_\delta(s)}u_\delta^-(x,s)-e^{F_\delta(s)}\omega(x,s)\geq-\int_{-\infty}^s\omega(\gamma_{\delta,x,s}^-(t),t) e^{F_\delta(t)}\Big(f_\delta(t)-f_0(t)\Big)\mbox{d}t.
\end{eqnarray*}
By a suitable transformation,
\begin{eqnarray*}
& &u_\delta^-(x,s)\\
&\geq&\omega(x,s)-\int_{-\infty}^s\omega(\gamma_{\delta,x,s}^-(t),t)e^{F_0(t)} e^{F_\delta(t)-F_0(t)}\Big(f_\delta(t)-f_0(t)\Big)\mbox{d}t\\
&=&\omega(x,s)-\int_{-\infty}^s\omega(\gamma_{\delta,x,s}^-(t),t) e^{F_0(t)}\mbox{d}e^{F_\delta(t)-F_0(t)}\\
&=&\omega(x,s)-\int_{-\infty}^s\omega(\gamma_{\delta,x,s}^-(t),t)e^{F_0(t)}f_1(t)\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}.
\end{eqnarray*}
Then for any $g\in C(M\times\mathbb{T},\mathbb{R})$, the measure $\tilde{\mu}_{x,s}^\delta$ defined by
\[
\int_{TM\times\mathbb{T}}g(y,\tau)\mbox{d}\tilde{\mu}_{x,s}^\delta(y,v_y,\tau):=\int_{-\infty}^sg(\gamma_{\delta,x,s}^-(t),t) \frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}
\]
is just the desired one.\qed
\end{proof}
\begin{lem}\label{lem:mat-mea}
Any weak limit of the normalized measure
\begin{eqnarray}
\widehat \mu_{x,s}^\delta:=\frac{\tilde{\mu}_{x,s}^\delta}{\int_{TM\times\mathbb{T}}\mbox{d}\tilde{\mu}_{x,s}^\delta}
\end{eqnarray}
as $\delta\to0_+$ is contained in $\mathfrak M_m(0)$, i.e. a Mather measure.
\end{lem}
\begin{proof}
As is proved in Proposition \ref{prop:leq}, $\tilde{\mu}_{x,s}^\delta$ are uniformly bounded w.r.t. $\delta\in(0,\delta_0]$. Therefore, it suffices to prove that any weak limit $\tilde{\mu}_{x,s}$ of $\tilde{\mu}_{x,s}^{\delta}$ as $\delta\rightarrow 0_+$ satisfies the following two conclusions:\medskip
First, we show $\tilde{\mu}_{x,s}$ is a closed measure. It is equivalent to show that for any $\phi(\cdot)\in C^1(M\times\mathbb{T},\mathbb{R})$,
\[
\lim_{\delta\rightarrow 0_+}\int_{-\infty}^s\frac{\mbox{d}}{\mbox{d}t}\phi(\gamma_{\delta,x,s}^-(t),t)\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}=0.
\]
Indeed, we have
\begin{eqnarray*}
& &\lim_{\delta\rightarrow 0_+}\int_{-\infty}^s\frac{\mbox{d}}{\mbox{d}t}\phi(\gamma_{\delta,x,s}^-(t),t)\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}\\
&=&\lim_{\delta\rightarrow 0_+}\int_{-\infty}^s e^{F_\delta(t)-F_0(t)} \frac{f_\delta(t)-f_0(t)}{f_1(t)}\mbox{d}\phi(\gamma_{\delta,x,s}^-(t),t)\\
&=&\lim_{\delta\rightarrow 0_+}\frac{f_\delta(t)-f_0(t)}{f_1(t)}e^{F_\delta(t)-F_0(t)}\phi(\gamma_{\delta,x,s}^-(t),t)\Bigg|_{-\infty}^s\\
& &-\lim_{\delta\rightarrow 0_+}\int_{-\infty}^s\phi(\gamma_{\delta,x,s}^-(t),t)\cdot \mbox{d}\Big(\frac{f_\delta(t)-f_0(t)}{f_1(t)}e^{F_\delta(t)-F_0(t)}\Big)=0
\end{eqnarray*}
because $f_\delta\rightarrow f_0$ uniformly as $\delta\rightarrow 0_+$.\medskip
Next, we can show that
\begin{eqnarray*}
\lim_{\delta\rightarrow 0_+}\int_{-\infty}^se^{F_\delta(t)}\Big[L\Big(\gamma_{\delta,x,s}^-(t),\dot\gamma_{\delta,x,s}^-(t),t\Big)+c(H)\Big]\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}=0.
\end{eqnarray*}
Note that
$$
\frac{\mbox{d}}{\mbox{d}t}\big(e^{F_\delta(t)}u^-_\delta(\gamma^-_{\delta,x,s}(t),t)\big)=
e^{F_\delta(t)}\big(L(\gamma^-_{\delta,x,s}(t),\dot{\gamma}^-_{\delta,x,s}(t),t)+c(H)\big).
$$
We derive
\begin{eqnarray*}
& &\lim_{\delta\rightarrow 0_+}\int_{-\infty}^se^{F_\delta(t)}\Big[L\Big(\gamma_{\delta,x,s}^-(t),\dot\gamma_{\delta,x,s}^-(t),t\Big)+c(H)\Big]\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}\\
&=&\lim_{\delta\rightarrow 0_+}\int_{-\infty}^s\frac {\mbox{d}}{\mbox{d}t}\Big( e^{F_\delta(t)}u_\delta^-(\gamma_{\delta,x,s}^-(t),t)\Big)\frac{\mbox{d}e^{F_\delta(t)-F_0(t)}}{f_1(t)}=0,
\end{eqnarray*}
since $u_\delta^-$ is differentiable along $(\gamma_{\delta,x,s}^-(t),\bar t)$ for all $t\in(-\infty,s)$ and $\tilde{\mu}_{x,s}$ is closed.
So we finish the proof.\qed
\end{proof}
\medskip
\noindent{\it Proof of Theorem \ref{thm:3}:} Due to the stability of viscosity solutions (see Theorem 1.4 in \cite{CEL}), any accumulating function $u_0^-$ of $u_\delta^-$ as $\delta\rightarrow 0_+$ is a viscosity solution of (\ref{eq:hj-par}) with $\delta=0$. Therefore, Proposition \ref{prop:geq} indicates $u_0^-\in\mathcal{F}_-$, so $u_0^-\leq u_0^*$. On the other hand, Proposition \ref{prop:leq} implies $u_0^-\geq \omega$ for any $\omega\in\mathcal{F}_-$ as $\delta\rightarrow 0_+$, since any weak limit of $\widehat \mu_{x,s}^\delta$ as $\delta\rightarrow 0_+$ is shown to be a Mather measure in Lemma \ref{lem:mat-mea}. So we have $u_0^-\geq u_0^*$.\qed
\section{Asymptotic behaviors of trajectories of 1-D mechanical systems}\label{s5}
\begin{lem}\label{lem:conti}
For system (\ref{eq:ode0}), $\rho(c)$ is continuous in $c\in H^1(\mathbb{T},\mathbb{R})$.
\end{lem}
\proof
Firstly, all the orbits in $\widetilde \mathcal{A}(c)$ have the same rotation number. This is because $\pi^{-1}:\mathcal{A}(c)\rightarrow \widetilde \mathcal{A}(c)$ is a Lipschitz graph and $\dim(M)=1$. Secondly, $\varlimsup_{c'\rightarrow c}\widetilde \mathcal{A}(c')\subset \widetilde \mathcal{A}(c)$ due to Lemma \ref{lem:semi-con}. That further indicates $\lim_{c'\rightarrow c}\rho(c')=\rho(c)$. \qed
\begin{lem}\label{lem:om-high}
For system (\ref{eq:ode0}), the rotation number $\rho(c)$ satisfies
\begin{eqnarray}
-\|V\|_{C^1}\cdot\varsigma-c\leq \rho(c)\leq \|V\|_{C^1}\cdot\varsigma-c
\end{eqnarray}
where $\varsigma=\varsigma([f])>0$ tends to infinity as $[f]\rightarrow 0_+$.
\end{lem}
\proof
Recall that
\[
\dot p=-V_x(x,t)-f(t)p,
\]
then starting from any point $(x_0,p_0,\bar{t}_0)\in T^*M\times\mathbb{T}$, we get
\[
p(t)=e^{-F(t)}p_0-e^{-F(t)}\int_0^te^{F(s)}V_x(x(s),s)\mbox{d}s, \quad t>0.
\]
As $t\rightarrow +\infty$, we have
\begin{eqnarray}
\lim_{t\rightarrow+\infty}|p(t)|&\leq& \|V\|_{C^1}\cdot \limsup_{t\rightarrow+\infty}e^{-F(t)}\int_0^te^{F(s)}\mbox{d}s\nonumber\\
&\leq & \varsigma(\|f\|) \cdot \|V\|_{C^1}
\end{eqnarray}
for a constant $\varsigma(\|f\|)>0$ depending only on $f$. As a consequence,
\begin{eqnarray}\label{eq:ineq-momen}
-\|V\|_{C^1}\cdot \varsigma \leq\pi_p\widetilde \mathcal{A}(c)\leq\|V\|_{C^1}\cdot\varsigma
\end{eqnarray}
bounds the $p-$component of $\widetilde \mathcal{A}(c)$. \qed\bigskip
\noindent{\it Proof of Theorem \ref{thm:4}:}
The first two items have been proved in previous Lemma \ref{lem:conti} and Lemma \ref{lem:om-high}.
As for the third item, Lemma \ref{lem:om-high} has shown the boundedness of $p-$component of $\Omega$, then due to Theorem \ref{thm:2}, we get the compactness of $\Omega$.\qed
\section{Background}
\label{sec:background}
\begin{figure}[t!]
\includegraphics[scale = 0.5]{pics/response.pdf}
\caption{Service time of GET operations on items of different sizes on our platform (y axis in log scale). The service time measures the interval from the reception of the client request on the server to the transmission of the reply message. To avoid queueing effects, only a single client performs operations in a closed loop. The time to process a large item can be up to almost four orders of magnitude higher than what is needed to serve a small one.}
\label{fig:background:service}
\end{figure}
\subsection{Item Sizes in Production KV Workloads}
\label{sec:background:production}
The sizes of the items stored and manipulated by KV stores in production environments can span orders of magnitude. For instance, large variations in item size have been reported in several deployments of the popular \texttt{memcached} KV store~\cite{memcached}. The Facebook \texttt{ETC} \texttt{memcached} pool stores items that vary in size from a handful of bytes to 1 Mbyte~\cite{Atikoglu:2012}. The size distribution is heavy-tailed: the 5th percentile in the \texttt{regional} pool is 231 bytes, while the 99th percentile is 381KB~\cite{Nishtala:2013}. A similar degree of variability in item size has also been reported for other KV deployments such as Wikipedia~\cite{Lim:2013} and Flickr~\cite{Blott:2015}, where item sizes span up to 4 orders of magnitude, from 500B to 1 MB.
Moreover, Atikoglu et al. report that in the \texttt{ETC} \texttt{memcached} pool at Facebook requests for large items, despite being rare, consume a large share of the computational resources, because service times are closely related to item size, and account for a significant fraction of the transferred data~\cite{Atikoglu:2012}.
This dynamic is consistent with observations from similar application domains, such as, e.g., web servers~\cite{Arlitt:1997,Crovella:1997} and large-scale clusters~\cite{Wang04}.
\subsection{Variations in Item Size and Tail Latencies}
\label{sec:background:hob}
\begin{figure*}[t!]
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[scale = 0.5]{pics/mica_simul_red-crop.pdf}\caption{nxM/G/1.}\label{fig:background:99p:mica}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[scale = 0.5]{pics/ramcloud_simul_red-crop}\caption{M/G/n.}\label{fig:background:99p:ramcloud}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[scale = 0.5]{pics/zygos_simul_red-crop}\caption{nxM/G/1 + work stealing.}\label{fig:background:99p:zygos}
\end{subfigure}
\caption{Throughput vs. 99th percentile of response times for different types of queues (y axis in log scale). The service time distribution is bimodal: 0.125\% of requests are for large items; the rest are for small ones. A large request has a service time K times larger than a small one. K is varied from 1 to 1,000. A small (<1\%) fraction of large requests suffices to greatly degrade the 99th percentile of response times, and to considerably reduce the achievable throughput.}
\label{fig:background:99p}
\end{figure*}
Variations in item size have profound implications for tail latencies. As anecdotal evidence, Nishtala et al. report that in the Facebook \texttt{memcached} servers the median response time is 333 microseconds, while the 95th percentile is 1.135 milliseconds~\cite{Nishtala:2013}. In this section we show that this finding goes beyond the anecdotal, and that all common size-unaware sharding techniques exhibit high tail latencies for workloads in which even only a small fraction of requests targets large items. In particular, we show that, even under moderate loads, the (100-N)-th percentile is affected dramatically by a fraction, much smaller than N\%, of requests for large items. In the following we report on the 99th percentile, commonly used in Service Level Objective (SLO) definitions, but the results apply
to other high percentiles as well.
We simulate the operation of three common size-unaware sharding techniques on a server with n cores:
\begin{itemize}[leftmargin=*]
\item {\bf Multiple queues (nxM/G/1)},
where requests are dispatched immediately (early binding) to a queue for a particular core, often based on a keyhash, similar to what is used, for instance, in the EREW version of MICA~\cite{Lim:2014}.
\item {\bf Single-queue (M/G/n)}, in which requests are kept in a single queue and dispatched to a core when it becomes idle (late binding), similar to what is used, for instance, in RAMCloud~\cite{Ousterhout:2015}.
\item {\bf Multiple queues augmented with work stealing}, where requests are handled as in nxM/G/1, but in addition idle cores steal requests from the queues of other cores, similar to what is used, for instance, in ZygOS~\cite{Prekas:2017}.
\end{itemize}
For simplicity, we use a workload with a bimodal size distribution.
Small requests form 99.875\% of the workload, and have a service time of 1 time unit. Large requests form the remaining 0.125\%. We run different simulations in which the service time of these large requests is, respectively, K = 10, 100 and 1,000 time units. These values are in line with the order-of-magnitude differences in service time between small and large items observed on our platform (See Figure~\ref{fig:background:service} for a graph that depicts service time as a function of item size).
Inter-arrival times follow an exponential distribution.
We furthermore assume an idealized scenario with zero overhead for dispatching requests to cores, no need for synchronization, and no adverse effects from lack of locality.
We stress that our goal with this simulation is {\em not} to predict differences between these strategies in any real implementation, as their
performance in practice is greatly affected by various considerations such as locality, cost of synchronization, and cost of dispatching, which are not modeled in this simulation.
Instead, our goal is to demonstrate, for all three methods, the substantial increase in tail latency as a result of the presence of a small fraction of requests for large items.
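To make the setup concrete, the first strategy can be sketched as a toy simulator in a few lines of Python. The function below is our own sketch (names and defaults are ours) and mirrors the idealized assumptions stated above: zero dispatch cost, FIFO per-core queues, exponential inter-arrival times, and a bimodal service time distribution.

```python
import random

def simulate_nxmg1(n_cores=4, n_reqs=200_000, load=0.5,
                   large_frac=0.00125, k=100, seed=1):
    """Toy nxM/G/1 model: each request is bound on arrival to one of
    n_cores FIFO queues (early binding). Small requests take 1 time
    unit of service, large ones take k. Returns the 99th-percentile
    sojourn (queueing + service) time."""
    random.seed(seed)
    mean_service = (1 - large_frac) * 1.0 + large_frac * k
    # Arrival rate chosen so that per-core utilization equals `load`.
    arrival_rate = load * n_cores / mean_service
    t = 0.0
    free_at = [0.0] * n_cores  # time at which each core next becomes idle
    sojourn = []
    for _ in range(n_reqs):
        t += random.expovariate(arrival_rate)
        service = k if random.random() < large_frac else 1.0
        core = random.randrange(n_cores)  # stand-in for a key-hash dispatch
        start = max(t, free_at[core])     # FIFO within each core's queue
        free_at[core] = start + service
        sojourn.append(free_at[core] - t)
    return sorted(sojourn)[int(0.99 * len(sojourn))]
```

Comparing `simulate_nxmg1(k=1)` against `simulate_nxmg1(k=100)` at the same offered load reproduces, qualitatively, the tail inflation discussed in this section.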
Figure~\ref{fig:background:99p} shows the 99th percentiles for the three sharding strategies under this workload compared to a workload with an identical offered load, but with only requests for small items.
Even though the fraction of large items requested is much smaller than 1 percent, Figure~\ref{fig:background:99p} shows a very considerable increase in the 99th percentile latency for all three strategies. {\color{black}For K = 100 and K = 1,000, at only 10\% utilization, the 99th percentile for nxM/G/1 is one to two orders of magnitude higher than the 99th percentile in the workload composed only of small requests. Stealing and late binding are more resilient to service time variability at low load. As the load grows, however, they also suffer from one or two orders of magnitude degradation of the 99th percentile, with respect to the workload with only small requests.}
While all strategies produce increases in the 99th percentile, the reasons for these increases are somewhat different from one strategy to the next.
The nxM/G/1 strategy suffers from head-of-line blocking when a request for a small item ends up in a queue behind a request for a large item or behind a request for a large item being executed by this core.
The late binding of requests to cores makes M/G/n more resilient against head-of-line blocking than nxM/G/1, a well known result from queueing theory~\cite{Harchol-Balter:2013:book}, but it does not completely avoid it. The M/G/n strategy is vulnerable to cases in which the arrival of many large requests in a short period of time leads many (or even all) cores to be busy serving large requests. Such an event temporarily reduces the amount of resources available to serve small requests, which impacts tail latency.
Stealing improves the tail latency of nxM/G/1, as it steals some of the requests that would otherwise experience head-of-line blocking, but it cannot completely avoid such blocking. First, stealing only occurs when a core is idle, and the likelihood of a core being idle decreases as the load increases. Second, by the time a core becomes idle,
a request that it steals is likely to have already experienced some head-of-line blocking in the queue from which it is stolen.
\vspace{-8pt}
~\\
In light of these results, Minos{} processes requests for small and large items on disjoint sets of cores, a technique we call {\em size-aware sharding}. This addresses the shortcomings of existing approaches by preventing a request for a small item from waiting for the completion of a request for a large one.
\section{Size-aware sharding in Minos{}}
\label{sec:design}
\noindent{\bf Preliminaries.} We consider a server with $n$ cores.
The server has a multi-queue NIC, with multiple receive (RX) and transmit (TX) queues.
We configure the NIC with $n$ RX queues and $n$ TX queues.
At any time,
there are $n_l$ cores handling requests for large items and $n_s$ cores handling requests for small items ($n_l$ + $n_s$ = $n$).
With a slight abuse of language, we say that a request for a small (large) item is a small (large) request, and that a core handling small (large) requests is a small (large) core.
In addition to an RX and a TX queue, each large core maintains a software queue.
In the following, we assume all $n$ cores are within the same NUMA domain, so that KV item accesses and inter-core communication happen within the same NUMA domain. Minos{} can seamlessly scale to multiple NUMA domains by running an independent set of small and large cores within each NUMA domain, and by having clients send requests to the NUMA domain that stores the target key~\cite{Lim:2014}.
We consider a KV store with the usual CRUD (Create, Read, Update, Delete) semantics.
A client can perform a GET(key) and a PUT(key, value).
Create and delete are considered special versions of PUT, and not discussed any further.
When a client issues GET and PUT operations, the client software puts in the request the id of the RX queue
in which the corresponding packets are deposited when they arrive at the server. The target RX queue is chosen at random for GET operations, and depends on the keyhash for PUT operations (as we describe in Section~\ref{sec:impl:kvs}).
A PUT request also includes the size of the item that is being written.
The client does not know the size of an item to be read.
Furthermore, the client does not need to know which or how many cores on the server handle small or large requests.
In the following discussion we initially assume that we know the threshold on the item size that separates small and large items. We explain later how the actual threshold is determined. We first explain size-aware sharding with a given number of small cores and one large core. We explain later how the number of small and large cores is determined, and how the system operates with a number of large cores different from one.
\vspace{-8pt}
~\\\noindent{\bf Receiving incoming requests.}
Only the small cores read incoming requests from the RX queues.
They do so in batches, to amortize the cost of communicating with the NIC.
Each small core repeats the following sequence of actions w.r.t. the RX queues: First, it reads a batch of B requests from its own RX queue.
Then it reads a batch of B/$n_s$ requests from the RX queue of the large core.
In this way, all RX queues are drained at approximately the same rate.
The reason a large core never reads incoming requests from its RX queue is that, if it were to receive a small request, this request could experience head-of-line blocking behind large requests.
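A simplified sketch of one round of this receive loop is given below (names are ours, and Python lists stand in for the hardware RX queues).

```python
def drain_rx(own_rx, large_rx, n_small, batch=32):
    """One round of a small core's receive loop: drain a batch of B
    requests from the core's own RX queue, then B/n_s requests from the
    RX queue of each large core, so that all RX queues are emptied at
    approximately the same rate."""
    reqs = [own_rx.pop(0) for _ in range(min(batch, len(own_rx)))]
    share = max(1, batch // n_small)  # this core's share of a large core's queue
    for rx in large_rx:
        reqs.extend(rx.pop(0) for _ in range(min(share, len(rx))))
    return reqs
```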
We start by explaining how GET operations are handled.
\vspace{-8pt}
~\\\noindent{\bf Operation of the small cores.} For each request, a small core looks up the item associated with the requested key. If its size is below the threshold, the small core continues the GET operation and replies to the client with the requested item (by putting the corresponding reply packet(s) on its TX queue).
If the size is above the threshold, the small core puts the request in the software queue of the large core.
\vspace{-8pt}
~\\\noindent{\bf Operation of large core.} A large core looks at its software queue. If it finds an incoming request, it finds the corresponding item, and replies to the client by putting the item in its TX queue.
\vspace{-8pt}
~\\
The operation of a PUT is mostly similar, except that the size is known to the client and present in the request. There is therefore no need to do a lookup to find the size, and, depending on the new size, the request is handled either immediately by a small core or passed on by a small core to the large core and handled there.
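The dispatch decision of a small core for both operation types can be summarized as follows. This is a sketch with hypothetical names: `store` maps keys to values and `large_swq` stands for the software queue of the large core.

```python
def handle_request(req, store, threshold, large_swq):
    """Small-core dispatch: for a PUT the item size is carried in the
    request; for a GET it is only known after the key lookup. Requests
    at or below the threshold are served locally; larger ones are
    handed to the software queue of the large core."""
    size = req["size"] if req["op"] == "PUT" else len(store[req["key"]])
    if size <= threshold:
        return "served-locally"  # reply goes out on this core's TX queue
    large_swq.append(req)
    return "forwarded"
```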
\vspace{-8pt}
~\\\noindent{\bf How to find the threshold between large and small.} Each small core maintains a histogram of the number of requests that correspond to item sizes in certain ranges. This histogram is updated on the receipt of every request according to the size of the target item. Periodically, core 0 aggregates these histograms, finds the size corresponding to the 99th percentile, declares that size to be the threshold for the next epoch, and resets the histograms to zero.
To be resilient to transient workload oscillations, core 0 smooths the values in the aggregated histogram (noted $H$) according to a moving average that uses the histogram obtained in the previous epoch (noted $H_{curr}$). That is, for each entry $i$, core 0 computes $H_{curr}[i] = (1-\alpha) H_{curr}[i] + \alpha H[i]$, and uses the new $H_{curr}$ to determine the 99th percentile. $\alpha$ is a discount factor in the range $[0,1]$, and determines the weight of the new measurements over previous ones. Because Minos{} targets high throughput workloads, many item sizes are sampled during an epoch. Hence, $H$ is highly representative of the current workload, and is assigned a weight equal to 0.9~\cite{Zhang:2005}.
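The periodic threshold computation performed by core 0 can be sketched as follows (hypothetical names; each histogram is a list of request counts per size bucket, and `bucket_bounds[i]` is the upper size bound of bucket $i$).

```python
def update_threshold(per_core_hists, h_curr, bucket_bounds, alpha=0.9, pct=0.99):
    """Periodic threshold computation on core 0: aggregate the per-core
    size histograms, smooth them with a moving average weighted by
    alpha, and return the item size at the pct percentile of the
    smoothed distribution (the threshold for the next epoch)."""
    agg = [sum(col) for col in zip(*per_core_hists)]  # aggregated histogram H
    h_curr = [(1 - alpha) * old + alpha * new         # exponential moving average
              for old, new in zip(h_curr, agg)]
    target = pct * sum(h_curr)
    cum = 0.0
    for i, count in enumerate(h_curr):
        cum += count
        if cum >= target:
            return bucket_bounds[i], h_curr
    return bucket_bounds[-1], h_curr
```

After the call, the per-core histograms would be reset to zero and the returned smoothed histogram kept for the next epoch.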
\vspace{-8pt}
~\\\noindent{\bf How to choose the number of small cores.} We maintain a cost function that gives us for a request of a given size a certain processing cost. Minos{} can use various cost functions, but currently uses the number of network packets handled to serve the request as cost, either the number of packets in an incoming PUT request or the number of packets in an outgoing GET reply. Alternatives would be the number of bytes or a constant plus the number of bytes. In any case, the fraction of cores that serve as small cores is set to the ceiling of the fraction of the total processing cost incurred by small requests times the total number of cores. The remaining cores are used as large cores.
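A minimal sketch of this allocation rule (function and parameter names are ours):

```python
import math

def split_cores(n_cores, small_cost, large_cost):
    """The number of small cores is the ceiling of the fraction of
    total processing cost (e.g. packets handled) incurred by small
    requests, times the total number of cores; the remaining cores
    serve large requests. If every core ends up small, one of them
    acts as a standby large core (not modeled here)."""
    frac_small = small_cost / (small_cost + large_cost)
    n_small = min(n_cores, math.ceil(frac_small * n_cores))
    return n_small, n_cores - n_small
```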
\vspace{-8pt}
~\\\noindent{\bf Operating with a number of large cores different from one.} If, as a result of the above calculation, there is more than one large core, then Minos{} distributes the large requests over the large cores such that each large core handles a non-overlapping contiguous size range of requests, and such that the processing cost of requests assigned to each large core is the same. By doing so, not only does Minos balance the load on large cores, but it also shards large requests in a size-aware fashion. That is, the smallest among the large requests are assigned to the first large core, and larger requests are progressively assigned to other cores. Each large core has a software queue, and a small core that receives a large request puts the request in the software queue of the large core that is handling the size of the requested item.
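Under the stated cost model, the equal-cost contiguous partition can be sketched as follows. This is a hypothetical greedy cut over the size histogram of large requests; the actual system may compute the split differently.

```python
def split_points(large_hist, bucket_bounds, costs, n_large):
    """Walk the size histogram of large requests in increasing size
    order and cut a contiguous range each time the accumulated cost
    reaches an equal share, yielding n_large - 1 split sizes.
    costs[i] is the per-request cost of bucket i (e.g. its number of
    packets)."""
    weights = [h * c for h, c in zip(large_hist, costs)]
    share = sum(weights) / n_large
    cuts, acc = [], 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= share and len(cuts) < n_large - 1:
            cuts.append(bucket_bounds[i])  # upper size bound of a core's range
            acc = 0.0
    return cuts  # large core j handles sizes in (cuts[j-1], cuts[j]]
```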
If all cores are deemed to be small cores, then one core is designated a standby large core. In other words, it handles small requests, but if a large request arrives, it is sent to this core, which then becomes a large core.
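Putting the two rules above together, the dispatch decision taken by a small core can be sketched as follows (an illustration with hypothetical names, not the Minos{} code):

```c
#include <assert.h>
#include <stddef.h>

#define LOCAL (-1)

/* Illustrative dispatch sketch (names and signature are our own). A
 * small core handles a request locally if its item size is at or
 * below the small/large cutoff; otherwise it selects the large core
 * whose contiguous size range covers the size. upper_bound[i] is the
 * largest size assigned to large core i; boundaries are chosen so
 * that each large core receives roughly equal processing cost. */
static int dispatch(size_t size, size_t cutoff,
                    const size_t *upper_bound, int n_large) {
    if (size <= cutoff)
        return LOCAL;               /* small: run to completion here */
    for (int i = 0; i < n_large; i++)
        if (size <= upper_bound[i])
            return i;               /* enqueue on large core i's queue */
    return n_large - 1;             /* last range takes the remainder */
}
```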
\vspace{-8pt}
~\\\noindent{\bf Design rationale.} The goal of Minos{} is to improve the 99th percentile. To that end we identify the smallest 99 percent of the requests. We isolate the processing of these requests from the processing of larger requests, such that no head-of-line blocking occurs. Furthermore, we assign a sufficient number of cores to handle these requests such that no long request queues can materialize for these cores.
The use of randomization and of the hashed value of the key to decide the target RX queue for a request leads to reasonable load balance among the RX queues. A similar observation was made in the context of MICA~\cite{Lim:2014}.
Since the small cores handle the requests that arrive in their own RX queue, and an equal portion of the requests that arrive in the RX queues of the large cores, overall the load is balanced among the small cores.
By using purely hardware dispatch for the small requests we eliminate any unnecessary overhead in their processing, such as, for instance, software dispatches.
We achieve these results while never dropping large requests, since there is always at least one core available for handling large requests.
The only overheads compared to a purely hardware dispatch solution such as MICA are then:
1) software dispatch for the very small number of large requests,
2) synchronization on the RX queue and the software queue of the large cores, for which we found contention to be low, and
3) some minor loss in locality for the small requests that arrive in the RX queues of large cores.
\section{Evaluation}
\label{sec:eval}
~\noindent{\bf Summary.} The highlights of our evaluation are as follows.
\begin{itemize}[leftmargin=*]
\item Minos{} achieves both low latency and high throughput. Compared to its closest competitor, Minos{}
achieves a 99th percentile latency that is one to two orders of magnitude lower (\S~\ref{sec:eval:default}, \S~\ref{sec:eval:dynamic}).
With a 99th percentile specified to be 10 times the mean service time, its throughput is up to 7.4 times higher than the second best approach (\S~\ref{sec:eval:variability}).
\item Minos{} achieves good performance under both read-intensive and write-intensive workloads (\S~\ref{sec:eval:write}).
\item Minos{} scales with the amount of available network bandwidth (\S~\ref{sec:eval:bandwidth}).
\item Minos{} achieves load balancing across cores (\S~\ref{sec:eval:lb}).
\item Minos{} can adapt to changing workload conditions (\S~\ref{sec:eval:dynamic}).
\end{itemize}
\subsection{Default workload}
\label{sec:eval:default}
\noindent{\bf Throughput vs. 99th percentile latency.} Figure~\ref{fig:eval:default} shows the 99th percentile latency (99p) as a function of the throughput with the default workload.
Minos{} achieves the best peak throughput (6.2 Mops) and the lowest latency ($\leq 50 \mu sec$ up to 90\% of peak throughput).
Minos{} achieves peak throughput similar to HKH and HKH+WS, reflecting the fact that all three systems rely mostly or entirely on hardware handoff for request distribution (at very high load, stealing in HKH+WS rarely happens).
SHO achieves 10\% less peak throughput, because it is bottlenecked by the software handoff. In terms of 99th percentile, Minos{} does better than HKH at any load, with improvements reaching an order of magnitude as soon as the load exceeds 1 Mops.
HKH+WS and SHO start out with similar 99th percentile latencies as Minos{} under loads below 1 Mops, but under high load their 99th percentile latencies
rapidly deteriorate to reach values similar to HKH.
For an SLO on the 99th percentile latency of 50 $\mu$sec, i.e., 10 times the mean service time of a request, Minos{} can perform 5.6 Mops, 2.4 times the throughput of its best competitor (HKH+WS). For an SLO of 100 $\mu$sec, Minos{} still achieves 1.75 times the throughput of its best competitor.
Minos{} achieves the best performance by overcoming the limitations of existing designs when dealing with variable-size items.
Early binding in HKH causes head-of-line blocking even at relatively low loads. At low or medium loads, work stealing mitigates head-of-line blocking in HKH, and brings HKH+WS latencies close to those of late binding in SHO. As the load increases, however, stealing occurs more rarely, and the performance of HKH+WS degrades to that of HKH. Late binding in SHO largely avoids head-of-line blocking, but sudden spikes of large requests hurt the high-percentile latencies of small requests. Further, the maximum throughput of SHO is bottlenecked by the maximum handoff rate sustainable by the handoff cores.
\vspace{-8pt}
~\\\noindent{\bf Latency of large requests.}
Minos{} leverages the insight that the latency of the slowest 1\% of the requests does not impact the 99th percentile.
Minos{} restricts the 1\% largest requests to a subset of the cores, which may result in increased latencies for such requests.
We now evaluate the performance penalty incurred by large requests in Minos{} as a consequence of size-aware sharding between small and large requests. Figure~\ref{fig:eval:large} reports the 99th percentile latency of large requests in Minos{} and HKH+WS (the best alternative).
Inevitably, Minos{} imposes some penalty on the performance of large requests under high load, reaching up to a factor of 2 for the 99th percentile latency of large requests before the system goes into saturation.
In this workload, large requests account for 0.125\% of the total, so the 99th percentile of large requests corresponds to 0.00125\% of the overall number of requests. We argue that moderately penalizing the very tail of the latency distribution is a reasonable price to pay for the order-of-magnitude improvement for the 99th percentile.
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.55]{pics/default_large}
\caption{Throughput vs. 99th percentile latency of large requests with the default workload (y axis in log scale). Minos{} trades its large benefits in terms of the overall 99th percentile for a moderate penalty on a minority of large requests, which already represent a small fraction of the workload.}
\label{fig:eval:large}
\end{figure}
Minos{} can improve the latency of large requests by allocating more cores to them. Minos{} currently determines the number of small cores by taking the ceiling of the total number of cores times the percentage of load generated by small requests.
For this particular workload, it allocates only one core to the large requests.
This represents an over-allocation to small requests to completely isolate them from large requests, and hence an under-allocation for large requests.
An alternative strategy is to allocate one more core to large requests, and let large cores steal from the RX queues of small ones to fully use any extra capacity.
To avoid re-introducing head-of-line blocking, stealing can be done one request at a time, so that there is never a small request queued behind a large request.
We are currently experimenting with this alternative design, which would improve performance for large requests, while only introducing a small degradation for small requests.
\subsection{Write-intensive workload}
\label{sec:eval:write}
We now investigate the effect of write intensity on Minos{}.
Figure~\ref{fig:eval:write} reports the 99th percentile of response times with all four systems and a 50:50 GET:PUT workload.
Minos{} continues to deliver a 99th percentile latency 1 order of magnitude lower than alternative approaches, up to the saturation point at 6.3 Mops, but overall achieves a lower (by 10\%) throughput than HKH and HKH+WS.
Throughput values are in general higher than with the 95:5 workload, because replying to a PUT requires less network bandwidth, since the response message does not contain any item value payload. This behavior is consistent with that observed by previous work~\cite{Lim:2014}. SHO is the only exception, as handoff cores represent the bottleneck.
A write-intensive workload shifts the bottleneck from the NIC to the CPU. Minos{} saturates the CPU earlier than HKH and HKH+WS because of the overhead stemming from profiling item sizes and periodically aggregating the per-core histograms on core 0 to compute the 99th percentile of the item sizes. We are currently investigating techniques to reduce this overhead, e.g., sampling only a subset of the requests.
Alternatively, if traces of the target workload are available for off-line analysis (as typical in production workloads~\cite{Atikoglu:2012,Nishtala:2013,Reda:2017}), the threshold between large and small requests can be set statically. With this variant, Minos{} is able to match the throughput of HKH and HKH+WS.
\subsection{Sensitivity to item size distribution}
\label{sec:eval:variability}
We vary the percentage of large requests in the workload ($p_L$) and the maximum size of large requests ($s_L$). When changing the value of one, the other parameter keeps the default value. We then measure the maximum throughput achievable under different SLOs on the 99th percentile latency of 10 and 20 times the mean service time, i.e., $50 \mu sec$, and $100 \mu sec$.
Figure~\ref{fig:eval:pl} and Figure~\ref{fig:eval:sl} report the increase in throughput achieved by Minos{} compared to the other designs (y axis in log scale).
Figure~\ref{fig:eval:pl} shows the results of the experiments in which we change $p_L$. Figure~\ref{fig:eval:sl} refers to changing $s_L$.
The graph on the left uses an SLO of 50 $\mu$sec, the one on the right 100 $\mu$sec.
When varying $p_L$, the maximum throughput achieved by Minos{} within the $50 \mu sec$ ($100\mu sec$) SLO ranges from 6.2 to 1.7 Mops (6.9 to 2.3 Mops), corresponding to $p_L = 0.0625$ and $p_L = 0.75$.
When varying $s_L$, the maximum throughput achieved by Minos{} within the $50 \mu sec$ ($100\mu sec$) SLO ranges from 6.2 to 4.7 Mops (6.9 to 4.7 Mops), corresponding to $s_L = 250 KB$ and $s_L = 1000KB$.
\begin{figure}[t]
\centering
\includegraphics[scale = 0.55]{pics/write}\caption{Throughput vs. 99th percentile latency for Minos{} vs. existing designs with the 50:50 GET:PUT workload (y axis in log scale).}
\label{fig:eval:write}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}
{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/norm-LP50}\caption{99p $\leq 50 \mu$sec.}\label{fig:eval:pl:50}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/norm-LP100}\caption{99p $\leq 100 \mu$sec.}\label{fig:eval:pl:100}
\end{subfigure}
\caption{Maximum throughput achievable for a given 99th percentile latency SLO with different percentages of large requests (y axis in log scale). Each bar represents the speedup of Minos{} over an alternative design (higher is better).}\label{fig:eval:pl}
\end{figure*}
\begin{figure*}[t]
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/norm-LS50}\caption{99p $\leq 50 \mu$sec.}\label{fig:eval:sl:50}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/norm-LS100}\caption{99p $\leq 100 \mu$sec.}\label{fig:eval:sl:100}
\end{subfigure}
\caption{Maximum throughput achievable for a given 99th percentile latency SLO with different maximum sizes of large requests (y axis in log scale). Each bar represents the speedup of Minos{} over an alternative design (higher is better).} \label{fig:eval:sl}
\end{figure*}
\begin{figure*}[t]
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[scale = 0.55]{pics/scal_resp}\caption{Throughput vs. 99th percentile latency.}\label{fig:eval:sampling:resp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[scale = 0.55]{pics/scal_nic}\caption{Throughput vs. NIC utilization.}\label{fig:eval:sampling:nic}
\end{subfigure}
\caption{Scalability of Minos{} with more network bandwidth ($p_L = 0.75$). $S$ is the sampling percentage used to simulate more network bandwidth available. Minos{} processes and replies to a percentage $S$ of the requests. The remainder is processed, but the reply is dropped. Minos{} scales with more bandwidth (a) and saturates the NIC (b), except when the CPU becomes the bottleneck ((b), S = 25).} \label{fig:eval:sampling}
\end{figure*}
\begin{figure*}[t!]
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/lb_ops-crop}\caption{{\bf Operations} per second.}\label{fig:eval:lb:xput}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[scale = 0.5]{pics/frames-crop}\caption{{\bf Packets} per second.}\label{fig:eval:lb:frames}
\end{subfigure}
\caption{Breakdown of the load per core in Minos{} (y axis in log scale). Large cores process fewer requests per second than small cores (a), but the number of packets processed per second is uniformly distributed across cores (b).}\label{fig:eval:lb}
\end{figure*}
Minos{} outperforms existing designs, achieving consistently higher throughput for a given workload and a given SLO. The throughput speedup grows with $p_L$ and $s_L$, because
the increased presence of large(r) requests negatively affects the latency of small requests, and hence the 99th percentile.
As expected, the throughput gains are higher with the stricter SLO: the looser the performance target, the smaller the impact of Minos{}' design.
For the stricter SLO, Minos{} achieves a speedup of up to 7.4 w.r.t. HKH+WS (corresponding to the $p_L = 0.75$ case), i.e., the second best design. For the looser SLO, the speedup ranges from 1.34 ($s_L = 250$KB) to 3.9 ($p_L = 0.75$).
\subsection{Higher network bandwidth}
\label{sec:eval:bandwidth}
With the default workload, the NIC is 93\% utilized.
With higher percentages of large requests, the system becomes network-bound.
In this section we investigate whether Minos{} can take advantage of larger network bandwidths.
Because we cannot provision our machines with more bandwidth, we shift the bottleneck from the NIC to the CPU by sampling the number of replies that the server sends back to clients.
That is, the server processes requests as before, up to the time at which it would otherwise send the reply to the client. Then, instead, it only sends replies to a percentage $S\%$ of the total requests, and drops the remaining ones.
We vary $S$ from 100 to 25, and we measure the achieved performance (throughput and 99th percentile latency), as well as the utilization of the NIC. We choose the read-intensive workload with $p_L = 0.75$, as it quickly saturates the NIC when Minos{} replies to all requests.
Figure~\ref{fig:eval:sampling} reports the results of the experiment. The left plot shows the throughput vs. 99th percentile latency (y axis in log scale). The right one shows the utilization of the NIC as a function of the throughput. As $S$ decreases, Minos{} can sustain higher loads, because the bottleneck is increasingly shifted towards the CPU. Minos{} is able to fully utilize the available resources, by always reaching throughput values that bring either the NIC (S = 100,75,50) or the CPU (S = 25) close to saturation.
\subsection{Load balancing}
\label{sec:eval:lb}
\begin{figure}[b!]
\centering
\includegraphics[scale = 0.55]{pics/dyn-crop}\caption{Evolution over time of the 99th percentile latency of Minos{} and HKH+WS with a dynamic workload (top, with y axis in log scale) and evolution over time of number of large cores in Minos{} (bottom). Every 20 seconds the percentage of large requests changes, first growing from 0.125 to 0.75 and then shrinking back. Minos{} adapts to changing workload conditions and delivers up to two orders of magnitude lower 99th percentile latencies.}\label{fig:eval:dyn}
\end{figure}
We now evaluate the ability of Minos{} to distribute the load evenly across cores, for a variety of workloads.
To this end, we measure the load sustained by each core with $p_L$ = 0.0625, 0.25, 0.75, corresponding to low, medium and high load posed by large requests.
Figure~\ref{fig:eval:lb:xput} reports the percentage of requests performed, and Figure~\ref{fig:eval:lb:frames} reports the percentage of packets processed by each core (y axis in log scale).
Two conclusions can be drawn.
First, all cores process roughly the same number of packets, and hence roughly perform the same amount of work.
Small cores obviously process more requests per second, as these requests involve less work. Individual large cores process different numbers of requests per second, as a consequence of the size-aware sharding that Minos{} implements also among large requests.
Second, Minos{} varies the number of small and large cores as a function of the workload, such that the work is balanced among all cores.
\subsection{Dynamic workload}
\label{sec:eval:dynamic}
We finally demonstrate the capability of Minos{} to adapt to changing workloads. To this end, we run a workload in which the percentage of large operations $p_L$ varies every 20 seconds. It first grows gradually from 0.125 to 0.75, and then shrinks back to 0.125. We keep the request arrival rate fixed at 2.25 Mops, corresponding to high load for $p_L = 0.75$.
Figure~\ref{fig:eval:dyn}(top) compares the performance achieved by Minos{} and HKH+WS, i.e., the second best design. Each point represents the 99th percentile latency as measured over a 1 second window (y axis in log scale).
Figure~\ref{fig:eval:dyn}(bottom) shows how many cores Minos{} assigns to large requests over time.
Minos{} achieves latencies up to 2 orders of magnitude lower than HKH+WS ($\approx 70 \mu$sec vs $\approx 1 msec$ with $p_L = 0.75$). Minos{} achieves this result by programmatically allocating cores to small and large requests proportionally to their corresponding loads.
\section{Implementation}
\label{sec:impl}
\subsection{Network stack}
\label{sec:impl:net}
Minos{} relies on the availability of a multi-queue NIC with support for redirecting, in hardware, a packet to a specific queue on the NIC (e.g., RSS~\cite{rss} or Flow Director~\cite{Fdir:intel}).
This feature is now commonplace in commodity NICs.
To reduce packet processing overhead, Minos{} uses the Intel DPDK library~\cite{dpdk} to implement a user-level zero-copy network stack.
All memory for the DPDK library is statically allocated and accessible by all cores.
Packets are received directly in memory, thus enabling zero-copy packet processing.
Furthermore, Minos{} uses DPDK-provided lockless software rings to dispatch large requests from small to large cores
without any copies~\cite{dpdk-rings}.
Small cores check for incoming requests by means of polling, to avoid costly interrupts~\cite{Ousterhout:2015}.
Similarly, large cores use polling to check for incoming requests on their software queue.
Requests are moved in batches to further limit overhead.
Communication between clients and servers uses UDP, implemented on top of Ethernet and IP.
Clients use the UDP header to specify the target RX queue for a given packet.
Requests that span multiple frames (large PUT requests and large GET replies)
are fragmented and defragmented at the UDP level.
Retransmission is handled by the client. Similar to previous work~\cite{Lim:2014}, Minos{} does not support exactly-once semantics and assumes idempotent operations. Exactly-once semantics could be provided by means of request identifiers.
\subsection{KV store and memory management}
\label{sec:impl:kvs}
\noindent{\bf Data structures.} Minos{} employs the KV data structures used in MICA~\cite{Lim:2014}.
Keys are split in {\em partitions}.
Each partition is a hash table, each entry of which points to a bucket, equal in size to a cache line.
Each bucket contains a number of slots, each of which contains a tag and a pointer to a key-value item.
A first portion of the keyhash is used to determine the partition, a second portion to map a key to a bucket
within a partition, and a third portion forms the tag~\cite{Fan:2013}, which is used to reduce the number of random memory accesses when performing a key lookup~\cite{Lim:2014}.
Overflow buckets are dynamically assigned to a bucket when it has reached its maximum capacity.
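The resulting keyhash split can be sketched as follows (the bit widths below are our own assumption for illustration, not values taken from Minos{} or MICA):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the keyhash split described above; the bit widths are an
 * assumption for illustration. High bits select the partition,
 * middle bits the bucket within the partition, and low bits form the
 * tag that filters slots without dereferencing item pointers. */
#define PART_BITS   4
#define TAG_BITS    16
#define BUCKET_BITS 20

static uint32_t hash_partition(uint64_t keyhash) {
    return (uint32_t)(keyhash >> (64 - PART_BITS));
}
static uint32_t hash_bucket(uint64_t keyhash) {
    return (uint32_t)((keyhash >> TAG_BITS) & ((1u << BUCKET_BITS) - 1));
}
static uint32_t hash_tag(uint64_t keyhash) {
    return (uint32_t)(keyhash & ((1u << TAG_BITS) - 1));
}
```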
\vspace{-8pt}
~\\\noindent{\bf Memory management.} The current prototype of Minos{} employs the memory manager of the DPDK library to handle allocation of memory regions for key-value entries. Minos{} can be extended to integrate more efficient memory allocators, such as the one based on segregated fits of MICA, or a dynamic one as in Facebook's \texttt{memcached} deployment~\cite{Nishtala:2013}.
\vspace{-8pt}
~\\\noindent{\bf Concurrency control scheme.} Minos{} uses a concurrency control scheme that is similar to Concurrent Read Exclusive Write (CREW)~\cite{Lim:2014}. In CREW, each core is the {\em master} of one partition, and a given key can be written only by the master core of the corresponding partition. This naturally serializes write operations on a key.
The concurrency control scheme in Minos{} differs slightly from CREW, as a result of the distinction between small and large cores.
PUTs on keys whose master core is a small core proceed along the lines of CREW.
PUTs on keys whose master core is a large core may be served by any core (either because the request is small, or because it is dispatched to a large core different from the one which receives the request). Hence, PUTs are guarded by a spinlock.
We argue (and we experimentally show) that the corresponding overhead is largely outweighed by the benefits of size-aware
sharding, especially for the read-dominated workloads that are prevalent in production environments~\cite{Atikoglu:2012,Nishtala:2013,Bronson:2013,Noghabi:2016}.
First, in such workloads PUTs are rare.
Second, PUTs on large cores proceed without contention, because large cores serve non-overlapping size ranges, so requests for the same large item are sent to the same core.
Third, PUTs on small cores mostly proceed without contention because of the CREW nature of the concurrency protocol
for keys whose master is a small core.
GETs can be served by any core, and are
served by means of an optimistic scheme~\cite{Lim:2014}.
Each bucket has a 64-bit epoch, which is incremented when starting and ending a write on a key stored in that bucket.
Upon reading, a core looks at the epoch.
If it is odd, then there is an ongoing write on a key of the bucket, and the read is stalled until the epoch becomes even.
If (or when) the epoch is even, the core saves the current epoch value and performs the read.
After the read, the core re-reads the epoch of the bucket.
If the value is the same as when the read started, the read is successful.
Else, a conflicting write might have taken place, and the read is restarted.
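The scheme can be sketched as follows (a single-threaded C illustration; a real implementation requires atomic accesses and memory barriers, which we omit here):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Single-threaded sketch of the optimistic read scheme above. The
 * writer makes the bucket epoch odd while a write is in flight; a
 * reader retries if the epoch was odd, or changed while it was
 * copying the value. */
struct bucket { uint64_t epoch; char value[16]; };

static void bucket_write(struct bucket *b, const char *v) {
    b->epoch++;                        /* odd: write in progress */
    strncpy(b->value, v, sizeof b->value - 1);
    b->value[sizeof b->value - 1] = '\0';
    b->epoch++;                        /* even: write completed */
}

static void bucket_read(const struct bucket *b, char *out) {
    for (;;) {
        uint64_t start = b->epoch;
        if (start & 1) continue;       /* ongoing write: wait */
        memcpy(out, b->value, sizeof b->value);
        if (b->epoch == start) return; /* epoch unchanged: success */
    }
}
```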
\section{Introduction}
\label{sec:intro}
Many distributed applications use in-memory key-value (KV) stores as caches or as (non-persistent) data repositories~\cite{Nishtala:2013,Atikoglu:2012,Bronson:2013,memcached,Li:2017,Jin:2017}.
Their performance, both in terms of throughput and latency,
is often critical to overall system performance.
Many of these applications exhibit a high fan-out pattern, i.e., they issue a large number of requests in parallel~\cite{Nishtala:2013}.
From the application's standpoint, the overall response time is then determined by the slowest of the responses to these requests, hence the crucial importance of tail latency for KV stores~\cite{Dean:2013}.
Given their importance, the performance of KV stores has been the subject of much recent work, both in terms of software and hardware.
Software optimizations include, among others, zero-copy user-level networking stacks, polling, run-to-completion processing, and sharding of requests between cores~\cite{Lim:2014,Ousterhout:2015,Kapoor:2012}.
Hardware optimizations primarily revolve around the use of RDMA~\cite{Kalia:2014,Kalia:2016}, programmable NICs~\cite{Kaufmann:2016,Li:2017} or GPUs~\cite{Zhang:2015,Hetherington:2015}.
The work reported in this paper does not require any particular hardware support.
Instead, we assume commodity NICs with multiple queues and some mechanism to direct requests to a particular queue.
\vspace{-8pt}
~\\\noindent{\bf Variable item sizes and tail latency.}
The workload observed for many KV stores consists of a very large number of requests for small items and a much smaller number of requests for large items~\cite{Atikoglu:2012,Nishtala:2013,Blott:2015}.
Because of their higher service times, however, handling the requests for larger items consumes a significant share of the available resources.
Processing these large items therefore increases the probability of head-of-line blocking, a situation in which a request for a small item ends up waiting while a large item is being processed.
As a result of the wait, that request experiences additional latency, which in turn may increase the tail latency of the KV store.
Even a very small number of requests for large items can significantly drive up tail latencies.
More specifically, a percentage of large requests much smaller than N percent can lead to a considerable increase of the (100-N)th percentile.
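To make this effect concrete, consider a toy FIFO simulation (our own illustration, not an experiment from this paper): requests arrive every 2 time units, small requests take 1 unit of service, and 1 request in 200 (0.5\%) takes 150 units. Although large requests are far fewer than 1\% of the total, the 99th percentile of the waiting time explodes compared to the same workload without large requests:

```c
#include <assert.h>
#include <stdlib.h>

#define N 200000

static int cmp_d(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Single FIFO server: requests arrive every 2 time units; small
 * requests cost 1 unit of service. If large_every > 0, one request
 * in large_every costs large_cost instead. Returns the 99th
 * percentile of the waiting times. */
static double p99_wait(int large_every, double large_cost) {
    static double wait[N];
    double t_free = 0.0;                 /* server busy until here */
    for (int i = 0; i < N; i++) {
        double arrive = 2.0 * i;
        double start = arrive > t_free ? arrive : t_free;
        wait[i] = start - arrive;
        double cost = (large_every && i % large_every == 0)
                          ? large_cost : 1.0;
        t_free = start + cost;
    }
    qsort(wait, N, sizeof wait[0], cmp_d);
    return wait[(int)(0.99 * N)];
}
```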
\vspace{-8pt}
~\\\noindent{\bf Size-aware sharding.}
This paper introduces the notion of size-aware sharding to address this issue.
In general, size-aware sharding means that requests for items of different sizes go to different cores.
In its simplest form, it means that, for some cutoff value between small and large, small and large items are served by disjoint sets of cores.
The intuition behind size-aware sharding is that by isolating the requests for small items, they do not experience any head-of-line blocking, and, given that they account for a very large percentage of requests, the corresponding percentile of the latency distribution is improved.
The implementation of size-aware sharding poses several challenges.
A first challenge is how to continue to use hardware dispatch of an incoming request to the right core. In general, a client of the KV store does not know the size of an item to be read, and moreover it does not know which cores are responsible for small or large items. Therefore, size-aware sharding would seem to necessitate a software handoff in which an I/O core reads incoming requests and dispatches them to the proper core.
Instead, we demonstrate a method by which software dispatch is required only for the very small number of requests for large items.
Second, cutoff values between large and small items must be chosen and the proper number of cores must be allocated for handling small and large items. We show that, even in the presence of a workload that varies over time, this can be done by a simple control loop.
\vspace{-8pt}
~\\\noindent{\bf Minos{}.} This paper describes the Minos{} in-memory KV store that implements size-aware sharding using the above techniques. We compare Minos{} to alternative size-unaware designs based on keyhash-based request sharding, software handoff and work stealing, implemented by state-of-the-art systems such as MICA~\cite{Lim:2014}, RAMCloud~\cite{Ousterhout:2015} and ZygOS~\cite{Prekas:2017}.
We show that Minos{} achieves a 99th percentile latency that is up to two orders of magnitude lower than the second best approach. Put differently, for a given value for the 99th percentile latency equal to 10 times the mean service time, Minos{} achieves a throughput that is up to 7.4 times higher.
\vspace{-8pt}
~\\\noindent{\bf Contributions.} The contributions of this paper are:
\begin{itemize}[leftmargin=*]
\item the introduction of the notion of size-aware sharding for in-memory KV stores,
\item the design and implementation of the Minos{} KV store that implements size-aware sharding efficiently, and
\item the evaluation of Minos{} against state-of-the-art size-unaware designs.
\end{itemize}
\vspace{-8pt}
~\\\noindent{\bf Roadmap.} The outline of the rest of this paper is as follows. Section~\ref{sec:background} provides background on KV store workloads and discusses the shortcomings of existing approaches in achieving low tail latency. Section~\ref{sec:design} presents Minos{}' size-aware sharding approach. Section~\ref{sec:impl} discusses implementation details. Section~\ref{sec:testbed} describes the experimental environment. Section~\ref{sec:eval} presents experimental results. Section~\ref{sec:rw} discusses related work. Section~\ref{sec:conclusion} concludes the paper.
\section{Conclusion}
\label{sec:conclusion}
This paper presents Minos{}, an in-memory key-value store designed to deliver $\mu$sec-scale tail latency with workloads characterized by highly variable item sizes, as is common in production environments.
Minos{} implements size-aware sharding, a new technique that assigns small and large requests to disjoint sets of cores. This ensures that small requests never wait behind a large request. Minos{} identifies at runtime the size threshold between small and large requests, and the number of cores to allocate to each, so as to achieve low 99th percentile latency.
We compare Minos{} to three state-of-the-art designs and we show that, compared to its closest competitor, Minos{} achieves a 99th percentile latency that is up to two orders of magnitude lower. Put differently, for a given value for the 99th percentile latency equal to 10 times the mean service time, Minos{} achieves a throughput that is up to 7.4 times higher.
\bibliographystyle{ACM-Reference-Format}
\section{Related Work}
\label{sec:rw}
To the best of our knowledge, Minos{} is the first KV store to introduce the concept of size-aware sharding to address the challenges of delivering $\mu$sec-scale tail latency in the presence of item size variability. We now discuss related systems.
\vspace{-8pt}
~\\\noindent{\bf In-memory KV stores.} A plethora of in-memory KV stores have been proposed in recent years. These systems propose different designs based on new data structures (CPHash~\cite{Metreveli:2012}, Masstree~\cite{Mao:2012}, MemC3~\cite{Fan:2013}) and lightweight network stacks (Chronos~\cite{Kapoor:2012}, MICA~\cite{Lim:2014,Li:2015}, RAMCloud~\cite{Ousterhout:2015}, RockSteady~\cite{Kulkarni:2017}), or on the use of RDMA (Pilaf~\cite{Mitchell:2013}, Herd~\cite{Kalia:2014}, FaRM~\cite{Dragojevic:2014}, RFF~\cite{Su:2017}, FaSST~\cite{Kalia:2016}), FPGAs (KV-Direct~\cite{Li:2017}), GPUs (Mega-KV~\cite{Zhang:2015}, MemcacheGPU~\cite{Hetherington:2015}), HTMs (DrTM~\cite{Wei:2015,Chen:2016}), or other specialized hardware~\cite{Kaufmann:2016,Blott:2015}.
None of these systems addresses the problem of achieving low tail latency in the presence of item size variability, which is the primary focus of Minos{}. In addition, Minos{} only assumes the availability of commodity hardware. Investigating the synergies between the design of Minos{} and specialized hardware is an interesting avenue for future work.
\vspace{-8pt}
~\\\noindent{\bf Size-aware data-stores.} We are aware of a few data stores that take into account the size of items or requests to improve performance. Rein~\cite{Reda:2017} supports multi-key get requests and processes them taking into account the number of keys involved in a request. Rein relies on the assumption that there is only a weak correlation between the size of an item and the service time of a request for that item. Minos{}, instead, targets workloads with high item size variability, for which the service time of a request strongly depends on the size of the corresponding item (see Figure~\ref{fig:background:service}).
AdaptSize~\cite{Berger:2017} is a caching system for content delivery networks that reduces the probability of caching large objects, so as to increase the hit rate of smaller, more frequently accessed ones. AdaptSize targets a problem that is orthogonal to Minos{}, which assumes the presence in memory of both small and large items.
Other data stores for non-homogeneous requests~\cite{Harchol-Balter:2003,Zhang:2005,Ciardo:2001} target static content and leverage the {\em a priori} presence of a central component (the Linux kernel on a single-core architecture~\cite{Harchol-Balter:2003} or a scheduler in a distributed system~\cite{Ciardo:2001,Zhang:2005}) to implement request scheduling. By contrast, Minos{} deals with mixed read/write workloads and is suited for multi-core architectures with multi-queue NICs.
\vspace{-8pt}
~\\\noindent{\bf Operating systems.} IX~\cite{Belay:2016} and ZygOS~\cite{Prekas:2017} use lightweight network stacks to support applications with $\mu$sec-scale SLOs. ZygOS implements work stealing to avoid core idleness and reduce head-of-line blocking. As we show by means of simulation (\S~\ref{sec:background:hob}) and experimental data (\S~\ref{sec:eval}), this approach cannot fully avoid head-of-line blocking as done by Minos{}, because work stealing $i)$ is agnostic of the CPU time corresponding to serving a request; and $ii)$ is only triggered by idle cores, whose presence becomes less likely as the load increases.
\vspace{-8pt}
~\\\noindent{\bf Job schedulers.} There is a vast literature on scheduling techniques for cluster and data center jobs of heterogeneous size~\cite{Harchol-Balter:2013:book}. Proposed approaches include workload partitioning~\cite{Crovella:1998,Harchol-Balter:2003,Delgado:2016}, pre-empting~\cite{Bansal:2001} or migrating large jobs~\cite{Harchol-Balter:2002,Haque:2017}, and stealing~\cite{Delgado:2015,Li:2016}. Similar techniques have been applied also in the context of network flow scheduling~\cite{Guo:2001,Hong:2012}.
Minos{} draws from these techniques to efficiently implement size-aware request sharding in an in-memory key-value store, so as to avoid head-of-line blocking and achieve load balancing.
\section{Experimental Platform}
\label{sec:testbed}
\subsection{Hardware}
\label{sec:testbed:platform}
Our experimental platform is composed of 8 identical machines equipped with an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz with 8 physical cores and 64 GB of main memory. The machines run Ubuntu 16.04.2 with a 4.4.0-72-generic kernel. One machine acts as server and the other 7 run the client processes. We disable hyperthreading and power-saving modes on all the machines. All the machines are equipped with a 40Gbit Mellanox MT27520 NIC (ConnectX-3 Pro), are located in the same physical rack, and are connected via a top-of-rack switch. The network stack for both client and server machines relies on the Intel DPDK library (version 17.02.1), to which we allocate 50 1GB huge pages.
Our NIC supports only RSS to implement hardware packet-to-RX queue redirection~\cite{mlx4:rss}. RSS determines the RX queue for an incoming packet by performing the hash of the quintuplet composed of source and destination IP, source and destination port and the transport layer protocol. To allow the clients and the server to send packets to specific RX queues, we ran a set of preliminary experiments to determine to which port to send a packet so that it is received by a specific RX queue. More flexible hardware packet redirection methods can be used on NICs that support them. For example Minos{} can use Flow Director~\cite{Fdir:intel,Fdir:mellanox} to set the target RX queue as UDP destination port of a packet.
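The port-probing step described above can be sketched as follows (a toy model: the hash below is an arbitrary stand-in for the NIC's Toeplitz-based RSS hash, and the helper names are hypothetical; the queue count matches our 8-core server):

```python
import hashlib

NUM_RX_QUEUES = 8  # one RX queue per core, as in our testbed

def rx_queue(src_ip, dst_ip, src_port, dst_port, proto, num_queues=NUM_RX_QUEUES):
    """Toy stand-in for RSS: hash the 5-tuple and map it to an RX queue.
    Real NICs use a Toeplitz hash with a configurable key; the mapping
    logic is the same."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return h % num_queues

def find_src_port(src_ip, dst_ip, dst_port, proto, target_queue):
    """Mimic the preliminary experiments: probe source ports until one
    hashes to the desired RX queue."""
    for port in range(1024, 65536):
        if rx_queue(src_ip, dst_ip, port, dst_port, proto) == target_queue:
            return port
    return None
```

With Flow Director-capable NICs this search is unnecessary, since the target RX queue can be encoded directly in the UDP destination port.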
\subsection{Systems used in comparison}
We compare Minos{} with three systems that implement state-of-the-art designs of KV store, and that are based on the queueing models that we have described in Section~\ref{sec:background}.
\begin{itemize}[leftmargin=*]
\item {\bf Hardware Keyhash-based sharding (HKH).} This system implements the nxM/G/1 queueing model, as done in MICA~\cite{Lim:2014}. Requests are redirected in hardware to the target core, according to the CREW policy.
This policy performs best on skewed read-dominated workloads~\cite{Lim:2014}, such as our default workload.
\item {\bf Software hand-off (SHO).} This system implements the M/G/n queueing model, as in RAMCloud~\cite{Ousterhout:2015}. SHO uses disjoint sets of handoff and worker cores. Each handoff core has a software queue, in which it deposits the requests taken from its RX queue. Worker cores pull one request at a time from the handoff queues (in round robin if there is more than one), process the corresponding KV request, and reply to the client. The number of handoff cores is fixed and known {\em a priori} by the clients, which only send requests to the corresponding RX queues. The throughput of SHO is bounded by the dispatch rate of handoff cores. The best number of handoff cores depends on whether the workload is CPU or network bound. We have experimented with 1, 2, and 3 handoff cores. We report experimental results corresponding to the best configuration for each workload.
\item {\bf HKH + work stealing (HKH+WS).} This system implements request stealing on top of HKH, as in ZygOS~\cite{Prekas:2017}. Each core has a software queue in which it places the requests taken from its own RX queue. When a core is idle, it steals requests from the software queues of other cores. When all software queues are empty, an idle core steals requests from another core's RX queue. Between stealing attempts, a core checks whether it has received any new request. If it has, it stops stealing and processes its own requests.
Cores steal requests from the software queues of other cores one at a time. Batching could introduce head-of-line blocking if the batch contains a large request followed by a short one, and is therefore not used. However, packets are stolen from other RX queues in batches, to increase resource efficiency. Requests stolen from another core's RX queue are put in the stealing core's software queue, so they can be stolen in turn.
\end{itemize}
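The stealing policy of HKH+WS can be summarized in a few lines (a hypothetical, single-threaded sketch of one core's scheduling step, not our actual DPDK implementation):

```python
from collections import deque

def hkh_ws_step(core_id, rx_queues, sw_queues, process, batch=32):
    """One scheduling step of an HKH+WS core (simplified sketch).

    Order of preference: (1) serve requests from the core's own RX and
    software queues; (2) steal ONE request at a time from other software
    queues (no batching, to avoid head-of-line blocking); (3) if all
    software queues are empty, steal a BATCH of packets from another
    core's RX queue into the local software queue, so they can be stolen
    in turn."""
    my_rx, my_sw = rx_queues[core_id], sw_queues[core_id]
    while my_rx:                              # drain own RX queue first
        my_sw.append(my_rx.popleft())
    if my_sw:                                 # serve own requests
        process(my_sw.popleft())
        return "own"
    for victim, q in enumerate(sw_queues):    # steal one request at a time
        if victim != core_id and q:
            process(q.popleft())
            return "stolen-sw"
    for victim, q in enumerate(rx_queues):    # steal a batch of packets
        if victim != core_id and q:
            for _ in range(min(batch, len(q))):
                my_sw.append(q.popleft())
            return "stolen-rx"
    return "idle"
```

Note how, in this sketch as in the real system, stealing is only ever attempted by an otherwise idle core, which is exactly why it cannot fully prevent head-of-line blocking at high load.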
\remove{
We further implement a system that implements our size-aware sharding on top of SHO (SHO+SA).
Requests of different sizes are dispatched to worker cores according to the same load balancing technique used by Minos{} to distribute large requests (Section~\ref{}). To obtain the size of a GET operation, a handoff core performs a lookup operation from the KV store. We use SHO+SA to show that our size-aware sharding technique can be beneficial also for systems that, by design, rely on software handoff (e.g., RamCloud~\cite{Ousterhout:2015} and Rocksteady~\cite{Kulkarni:2017}).
}
\remove{
SHO and SHO+SA can be configured to use a different number of handoff cores. In general, when the load posed by large requests is high, one handoff core is enough to keep up with the requests arrival rate, and all other cores can process KV operations. As the bulk of the load shifts to small requests, more handoff cores are needed to keep up with the higher arrival rate.
We have experimented with 1, 2 and 3 handoff cores. Using 3 handoff cores always resulted in worse performance because it results in using only 5 cores to serve KV operations.
In the next section, we report experimental results corresponding to the best configuration for each workload and target SLO.
}
For a fair comparison, all the designs we consider are implemented in the same codebase. In particular, they all use the same KV data structure (\S~\ref{sec:impl:kvs}) and lightweight network stack (\S~\ref{sec:impl:net}).
The internal parameters of Minos{} are set as follows. Workload statistics are collected by core 0 every second. The size of a batch of requests read from an RX queue is 32, and the same batch size is used for the other systems as well.
\subsection{Workloads}
\label{sec:testbed:wkld}
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{|c|c|c|}
\cline{1-3}
{\bf \% large reqs ($p_L$)} & {\bf Max size ($s_L$)} & {\bf \% data for large reqs} \\ \cline{1-3}
\multirow{3}{*}{0.125} & 250 KB & 25 \\ \cline{2-3}
& 500 KB & 40 \\ \cline{2-3}
& 1000 KB & 60 \\ \hline
0.0625 & \multirow{4}{*}{500 KB} & 25 \\ \cline{1-1} \cline{3-3}
0.25 & & 60 \\ \cline{1-1} \cline{3-3}
0.5 & & 75 \\ \cline{1-1} \cline{3-3}
0.75 & & 80 \\ \hline
\end{tabular}
\caption{Item size variability profiles.}
\label{tab:wkld}
\end{table}
We use workloads characterized by different degrees of item size variability and GET:PUT ratios.
\vspace{-8pt}
~\\\noindent{\bf Item size variability.} We use, as a starting point, the characterization of the ETC workload at Facebook~\cite{Atikoglu:2012}. Specifically, we consider a trimodal item size distribution, according to which an item can be tiny (1-13 bytes), small (14-1400 bytes) or large (1500-maximum size). The size of a specific item within each class is drawn uniformly at random. To generate workloads with different degrees of item size variability, we vary both the percentage of large requests (noted $p_L$) and the size of items corresponding to large requests, by changing the maximum size of large items (noted $s_L$).
We let $s_L$ range from 250KB to 1MB. These values are consistent with the production workloads we discussed in Section~\ref{sec:background:production}. Because we focus on 99th percentile response times, we set $p_L < 1\%$, so that the 99th percentile of the requests service times corresponds to small and tiny items only. Specifically, we vary $p_L$ from 0.0625 to 0.75.
Table~\ref{tab:wkld} reports the combinations of $p_L$ and $s_L$ we consider. It also reports the corresponding percentage of bytes that are exchanged because of large requests.
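A generator for this trimodal size distribution can be sketched as follows (hypothetical helper; for simplicity it applies the dataset's 40/60 tiny/small split directly to requests, whereas in our workload the request mix additionally depends on key popularity):

```python
import random

def item_size(p_large, s_large, rng=random):
    """Draw one item size in bytes from the trimodal distribution:
    tiny (1-13 B), small (14-1400 B), or large (1500 B - s_large).
    p_large is the percentage of large requests, e.g. 0.125."""
    if rng.random() * 100.0 < p_large:
        return rng.randint(1500, s_large)      # large
    if rng.random() < 0.4:
        return rng.randint(1, 13)              # tiny
    return rng.randint(14, 1400)               # small
```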
\vspace{-8pt}
~\\\noindent{\bf Key popularity.}
We consider a skewed workload that follows a zipfian distribution with parameter 0.99. This represents the default value in YCSB~\cite{Cooper:2010}, is widely used in the evaluation of several KV stores~\cite{Lim:2014,Kalia:2014}, and is representative of the strong skew of many production workloads~\cite{Atikoglu:2012,Balmau:2017}.
We use the zipfian distribution on the sets of tiny and small items, because they are many and they exhibit small variability in size. Large items, instead, are much fewer and exhibit much higher variability, and are therefore chosen uniformly at random. This avoids pathological cases in which the most accessed large item is the biggest or the smallest item,
thereby skewing the results.
We consider a dataset of 16M key-value pairs, out of which 10K are large elements. Of the remaining key-value pairs, 40\% correspond to tiny items, and 60\% to small ones. This setting is consistent with the item size distribution and the low access probability of individual large keys that characterize the ETC workload. Each large item has, in fact, a probability $p_L/100 \cdot 10K/16M$ of being accessed.
For simplicity, we keep the size of the keys constant at 8 bytes.
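A YCSB-style zipfian sampler over key ranks can be sketched as follows (hypothetical inverse-CDF implementation; shown for a small key space, while our dataset uses 16M keys):

```python
import bisect
import itertools
import random

def zipf_sampler(n_keys, theta=0.99, seed=0):
    """Return a sampler of key ranks 0..n_keys-1 with
    P(rank i) proportional to 1/(i+1)**theta."""
    weights = [1.0 / (i + 1) ** theta for i in range(n_keys)]
    cum = list(itertools.accumulate(weights))
    rng = random.Random(seed)
    def sample():
        # inverse-CDF lookup over the precomputed cumulative weights
        return bisect.bisect_left(cum, rng.random() * cum[-1])
    return sample
```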
\vspace{-8pt}
~\\\noindent{\bf Write intensity.} We consider a read-dominated and a write-intensive workload, corresponding, respectively, to a 95:5 and a 50:50 GET:PUT ratio. These values are used as default values in YCSB and KV store evaluations~\cite{Lim:2014,Kalia:2014}. Moreover, in the ETC workload, 97\% of requests are GET operations.
\vspace{-8pt}
~\\\noindent{\bf Default workload.} We define one default value for each experiment parameter. We generate additional workloads by changing the value of one parameter at a time while keeping the other ones to their default values.
The default workload we consider is skewed with a 95:5 GET:PUT ratio, a percentage of large requests equal to 0.125\% and a maximum large item size of 500 KB.
\subsection{Benchmarking methodology}
\noindent{\bf Load generation.} We spawn 8 threads per client machine, each pinned to a separate physical core and to an RX queue. Client threads simulate an open system by generating requests at a given rate, which varies depending on the target arrival rate. The time between two consecutive requests of a thread is exponentially distributed.
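The open-loop arrival process used by each client thread can be sketched as follows (hypothetical helper: inter-arrival gaps are drawn from an exponential distribution with mean $1/\lambda$):

```python
import random

def arrival_times(rate_per_sec, duration_sec, seed=0):
    """Timestamps (in seconds) of an open-loop Poisson arrival process:
    inter-arrival gaps are exponential with mean 1/rate_per_sec."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)
        if t >= duration_sec:
            return times
        times.append(t)
```

Because the process is open-loop, a slow server does not throttle the clients, which is essential when measuring tail latency under a target arrival rate.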
\vspace{-8pt}
~\\\noindent{\bf Measurements.}
Each request is timestamped with the send time at the client, which is piggybacked by the server on the reply message. Client threads constantly check their own RX queues for replies, and compute the end-to-end latency of a request using the
timestamp in the reply message.
{\color{black}A client thread can have multiple requests in flight, so for simplicity packet retransmission is not enabled. For this reason, we only report performance values corresponding to scenarios in which the packet loss rate is equal to 0.
Each workload runs for 60 seconds. The first and last 10 seconds are not included in the reported results.
\vspace{-8pt}
~\\\noindent{\bf Performance metrics.} We focus on maximum achievable throughput and 99th percentile of end-to-end latencies. We also measure the utilization of the server NIC to evaluate whether Minos{} is able to fully use the available bandwidth.
We consider SLOs in the form ``The 99th percentile of latencies must be within X times the mean request service time''. On our platform and for the default workload, the mean service time is 5 $\mu$sec. We set X to 10 and 20, i.e., the target 99th percentile latency values to 50 and 100 $\mu$sec. X = 10 corresponds to a strict SLO, and is the same value used in the evaluation of ZygOS~\cite{Prekas:2017}. X = 20 corresponds to a looser SLO, and we use it to evaluate the sensitivity of Minos{} performance gains as a function of the strictness of the SLO.
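The SLO check used throughout the evaluation can be made concrete (hypothetical helpers using a simple nearest-rank percentile):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value such that at least p%
    of the samples are at or below it."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[k]

def meets_slo(latencies_usec, mean_service_usec=5.0, x=10):
    """SLO of the form: p99 latency must be within X times the mean
    request service time (5 usec for our default workload)."""
    return percentile(latencies_usec, 99) <= x * mean_service_usec
```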
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.55]{pics/default}\caption{Throughput vs. 99th percentile latency (y axis in log scale) with the default workload. By efficiently separating small and large requests, Minos{} is able to deliver the highest throughput and the lowest tail latency. Minos{} matches the throughput of the purely hardware-based design and achieves tail latencies lower than the software handoff design.}\label{fig:eval:default}
\end{figure}
The TMT project is currently an equal partnership between Caltech, the University of
California, and the Association of Canadian Universities for Research in Astronomy
(ACURA) to construct a 30m telescope. TMT will have a 30-m f/1 primary mirror
composed of 738 1.2m segments. The final focal ratio of the telescope will be f/15, and
the field of view will be 20 arcminutes. Sites are being tested in northern Chile, Hawaii
and Mexico.
The instruments and their associated adaptive-optics (AO) systems will be located on two
large Nasmyth platforms, and each instrument station will be addressed by the articulated
tertiary mirror. Although both seeing-limited and AO observing modes will be supported
at TMT, it is clear that AO will be key in realizing the full scientific potential of the ``D$^4$
advantage'' offered by such a large aperture. The telescope, enclosure, AO subsystems
and instruments are therefore being designed simultaneously as an end-to-end system
under stringent requirements imposed by AO-based science.
Some key system features
that will benefit GRB observations include:
\begin{itemize}
\item Rapid response: TMT is designed, as a system, to slew and acquire targets, set up
active and adaptive optics systems, and be ready to observe with any instrument in
less than 10 minutes.
\item Adaptive optics: TMT's AO systems will deliver high strehl images in the NIR
and MIR, resulting in a D$^4$ advantage. Laser tomography adaptive optics will
substantially improve the image quality in the visible. Since GRBs are initially
point sources, they will benefit the most from full AO correction, unlike distant
galaxies.
\end{itemize}
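The origin of the D$^4$ advantage quoted above can be made explicit with a standard back-of-the-envelope argument (a generic estimate, not a TMT-specific calculation): for a background-limited point source observed at the diffraction limit, the collected signal grows as $D^2$, while the PSF solid angle shrinks as $(\lambda/D)^2$, so the background under the PSF is independent of $D$ and
\begin{displaymath}
\frac{S}{N} \propto \frac{D^2\,t}{\sqrt{t}} = D^2\sqrt{t}
\qquad\Longrightarrow\qquad
t_{\mathrm{fixed}\;S/N} \propto D^{-4}.
\end{displaymath}
Relative to a 10-m telescope, TMT would thus reach a given S/N on an AO-corrected point source roughly $(30/10)^4 \approx 80$ times faster.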
\section{Instrumentation}
Most of the proposed instruments will capitalize on the D$^4$ efficiency gain and exquisite
spatial resolution (7 milliarcsec images in J band) offered by diffraction-limited images.
TMT instruments will be able to address a broad range of GRB science topics, including:
a) Identification of optical counterparts:
\begin{itemize}
\item
WFOS (Wide Field Optical Spectrograph): Very efficient imaging and low
resolution spectroscopy simultaneously in two wavelength bands, $0.34 - 0.6\mu$
and $0.6 - 1.0\mu$
\item
IRIS: Low spectral resolution (R = 4000) integral field spectroscopy and imaging
from $0.8 - 2.5\mu$, assisted by high strehl adaptive optics.
\end{itemize}
b) IGM, ISM, Chemical Evolution of the Universe, Fundamental Physics
\begin{itemize}
\item
HROS: Very efficient high (R = 50,000 - 100,000) resolution spectroscopy from $0.34
- 1.0\mu$. S/N = 100 at m$_{AB}$ $\sim 20$ for R = 50,000
\item
bNIRES: High resolution (R = 50,000) spectroscopy from $0.8 - 2.5\mu$. Assisted
by high strehl adaptive optics (D$^4$ advantage). Continuum sensitivity (1hr, $100\sigma$):
Y, J or H $\sim17.0$. For z = 7, NIRES spectra will cover Ly$\alpha$, Si II, Si IV, C IV, Ni
II, Al III, Cr II and Zn II.
\item
rNIRES: R = 100,000 3-5$\mu$ spectroscopy, fed by a mid IR AO system or by an
adaptive secondary. Continuum sensitivity (1hr, 100$\sigma$): L = 13.5, M = 11.5
\end{itemize}
c) Properties of Host Galaxies
\begin{itemize}
\item
IRIS: Integral field spectroscopy with spatial resolutions of better than 100pc for all z $>$ 1.
In direct imaging, IRIS will reach point sources as faint as K = 28 ($K_{AB} = 30$)
($3\sigma$) in 3 hours.
\end{itemize}
\section{Summary}
The design of the TMT observatory offers huge potential to exploit the benefits of GRBs.
More details of TMT and its instruments can be found in \cite{ref:ce, ref:cs}.
At medium energies, our present understanding of QCD is still very
limited. Here, in the energy regime of meson and baryon
resonances the strong coupling constant is large and
perturbative methods can no longer be applied.
One of the key issues in this energy regime is to identify
the relevant degrees-of-freedom and the effective forces between them.
A necessary step towards this aim is undoubtedly a
precise knowledge of the experimental spectrum of baryon resonances
and of their properties.
Their comparison with different models may lead to a deeper
understanding of QCD in the non-perturbative regime.
Quark models are in general amazingly successful in
describing the spectrum of existing states.
However, constituent quark models usually predict many more resonances
than have been observed so far.
Different explanations have been suggested to explain this observation: \\
1) The ``missing'' states may not exist.
Nucleon resonances could e.g. have a
quark-diquark structure~\cite{lichtenberg}. This reduces the number of internal
degrees-of-freedom, and therefore, the number of existing states.
Of course, one might also think of other hidden symmetries.
At first glance, this explanation seems rather exotic, but
it is interesting to notice that the Regge trajectories
for mesons and baryons are parallel.
The similar dependence of the mass squared on the angular momentum
seems to indicate that also the acting force is similar. This
behavior could be easily understood in terms of a quark-diquark
picture, with a diquark in a baryon replacing the antiquark in the
meson (see also~\cite{klempt_massformula}). \\
2) The ``missing'' states may not have been
observed so far because of a lack of high quality data in channels
different from $\pi N$. Most available experimental data stem from
$\pi N$ scattering experiments. If the missing states decouple from $\pi N$
they would not have been observed so far. This conjecture seems reasonable
following quark model predictions~\cite{capstick9394}.
Many of these unobserved states are expected to couple significantly
to channels like $N\eta$, $N\eta^{\prime}$, $N\omega$, $\Delta \pi$, $N\rho$ or
$K\Lambda$ and also to $\gamma
p$~\cite{capstick9394,capstick92}. Therefore photoproduction
experiments investigating these channels have a great discovery
potential if these resonances really exist. \\
Experiments with electromagnetic probes are not only interesting to search for
unknown states but also to determine the properties of resonances like
photo-couplings and partial widths. These provide additional
information which can be compared to model predictions.
The properties of a resonance are also of big importance for an
interpretation of its nature. One immediate debate in the light of the
possible observation of a pentaquark is e.g. whether the $\rm P_{11}$(1710) and
the $\rm P_{11}$(1440) might be pentaquarks rather than 3-quark states. A good
understanding of their production and decay properties may help to elucidate
their nature.
In the following, different final states, where interesting resonance
structures have been observed, will be discussed. \\[-3ex]
\subsection{The \boldmath$\gamma p \to p \eta$\unboldmath-channel}
Recently new data on $\eta$-photoproduction has been taken by the
CB-ELSA experiment in Bonn
\cite{eta_pap}.
Due to its electromagnetic calorimeter consisting of 1380 CsI(Tl)
crystals covering 98$\%$ of the 4$\pi$ solid angle, the CB-ELSA
detector is very well suited to measure
photons. The $\eta$ is observed either in its $\gamma\gamma$- or 3$\pi^0$-
decay. The two or six photons are
detected in the calorimeter and the proton is
identified in a 3-layer scintillating fiber detector.
The invariant masses show a clear $\eta$ signal over
an almost negligible background (Fig.~\ref{fig_eta}).
The differential as well as the total cross section is shown in
Fig.~\ref{fig_eta} in comparison to the TAPS~\cite{Krusche:nv},
GRAAL~\cite{Renard:2000iv} and CLAS~\cite{Dugger:ft} data. The new CB-ELSA data
extends the covered angular and energy range significantly compared to previous
measurements. The total cross section was obtained by integrating the
differential cross sections. The extrapolation to forward and
backward angles uses the result of the partial wave analysis (PWA) discussed below.
The PWA is necessary to extract the contributing resonances
from the data. Its result is shown as solid line in
Fig.~\ref{fig_eta}.
\begin{figure}[h!]
\begin{tabular}{rl}
{\includegraphics[width=.425\textwidth,angle=0]{thoma_fig1a.eps}} & \\[-40ex]
& {\includegraphics[width=.425\textwidth,angle=0]{thoma_fig1b.eps}} \\[-2ex]
{\includegraphics[width=.47\textwidth]{thoma_fig1c.eps}}
& \hspace*{+0.1cm} {\includegraphics[width=.42\textwidth]{thoma_fig1d.eps}} \\
\end{tabular}
\caption{Upper plots: Differential cross sections for $\gamma\,\rm{p}
\rightarrow \rm{p}\,\eta$, for $E_\gamma = 750$\,MeV to
3000\,MeV: CB-ELSA(black squares)~\protect\cite{eta_pap}, TAPS~\protect\cite{Krusche:nv},
GRAAL~\protect\cite{Renard:2000iv} and CLAS~\protect\cite{Dugger:ft} data (in
light gray). The solid line represents the result of our fit.
Lower left: Invariant $\gamma\gamma$ and $3\pi^0$ invariant mass.
Lower right: Total cross section (logarithmic scale) for the reaction
$\gamma\,\rm{p}\rightarrow\rm{p}\,\eta$.
For further details see~\protect{\cite{eta_pap}}.
}
\label{fig_eta}
\end{figure}
In the fit the following data sets were included in addition to the CB-ELSA
data on $\rm \gamma p \rightarrow p\eta $: The CB-ELSA data on $\rm \gamma p
\rightarrow
p\pi^0 $ \cite{Bartholomy:04}, the TAPS data on $\rm \gamma p
\rightarrow p\eta $ \cite{Krusche:nv}, the beam asymmetries
$\rm\Sigma(\gamma p \rightarrow p\pi^0)$ and $\rm\Sigma(\gamma p
\rightarrow p\eta)$ from GRAAL \cite{GRAAL2},
and $\rm\Sigma(\gamma p \rightarrow p\pi^0)$ and
$\rm \gamma p \rightarrow n\pi^+ $ from SAID.
Apart from known resonances, a new state was found: a $\rm
D_{15}(2070)$
with a mass of ($2068\pm 22$)\,MeV and a width of ($295\pm 40$)\,MeV. Its
rather strong contribution to the data set is also shown in
Fig.\ref{fig_eta}. In addition an indication for a possible new $\rm
P_{13}(2200)$ was found.
No evidence was found for a third S$_{11}$ for which claims have been
reported at masses of 1780\,MeV~\cite{Saghai:2003ch} and
1846\,MeV~\cite{Chen:2002mn}. \\[-3ex]
\subsection{The \boldmath$\gamma p \to K^+ \Lambda$\unboldmath-channel }
Another interesting channel is the $K^+\Lambda$ channel, where a
structure around 1900~MeV was first observed in the SAPHIR
data~\cite{saphir_old}. The total cross section shows two bumps, at about 1700~MeV, and 1900~MeV (Fig.\ref{klambda} upper left).
Describing the data within different models it was
found that the first peak is mainly due to the $\rm S_{11}(1650)$,
$\rm P_{11}(1710)$, and $\rm P_{13}(1720)$. Within a tree-level
isobar model based on an effective Lagrangian approach, Mart and Bennhold
found that a new resonance is needed to describe the second bump in the cross
section~\cite{mart_bennhold}. This resonance was identified with a
$\rm D_{13}$(1895). Fig.\ref{klambda}, upper left shows their best description of
the data with and without this new state.
Even though this picture looks quite convincing, the existence of the
state remains controversial.
Saghai was e.g. able to describe the data within a chiral quark
model without any new resonance~\cite{klambda_saghai}. In his model the enhancement is
explained by hyperon exchange in the u-channel.
Hyperon exchanges are also included in the model of Jannsen et
al.~\cite{klambda_jannsen} but still an additional state around 1900
MeV was needed. A similar improvement of the fit was obtained
by resonances of different quantum numbers.
In the Giessen coupled-channel model a negligible $K
\Lambda$ coupling was found for a $\rm D_{13}$ state
which was introduced around 1950 MeV~\cite{klambda_giessen}.
Recently new high statistics data on this final state became available.
SAPHIR~\cite{klambda_saphir} and CLAS~\cite{klambda_clas} provided new data on cross
sections and on the $\Lambda$ recoil polarization, and
LEPS~\cite{klambda_leps} on the
beam asymmetry.
\begin{figure}[t!]
\begin{tabular}{cc}
\vspace*{-0.0cm}
{\includegraphics[height=.255\textwidth]{thoma_fig2a.eps}} & \\[-20ex]
& \hspace*{-.cm}{\includegraphics[height=.57\textwidth]{thoma_fig2b.eps}} \\[-28ex]
\hspace*{-0.2cm}{\includegraphics[height=.235\textheight]{thoma_fig2c.eps}}
& \\
\end{tabular}
\caption{
Left, upper plot: Total cross section for $\gamma p \to K^+ \Lambda$ from SAPHIR
\protect{\cite{saphir_old}}. Description of the data within the model~\protect{\cite{mart_bennhold}} with and without including
a new resonance around 1900~MeV.
Right: Energy dependence of the $\gamma p \to K^+ \Lambda$-cross
sections for different $K^+$-angles (CLAS-data\protect{~\cite{klambda_clas}}).
Left, lower plot: Beam asymmetry for $\gamma p \to K^+ \Lambda$ measured by
LEPS~\protect{\cite{klambda_leps}}. }
\vspace*{-0.3cm}
\label{klambda}
\end{figure}
The differential cross sections as a function of $\sqrt{s}$ for
different K-angles are shown in Fig.\ref{klambda}, right for the CLAS data.
Again a structure around 1900\,MeV is observed. It varies in width and
position with the $K^+$-angle. This suggests an interference phenomenon between
several resonant states, rather than a single
resonance. This behaviour is not yet reproduced by the models shown, but the
model parameters have not yet been adjusted to the new data.
Some first preliminary results on an interpretation of the new data
have been shown by Mart and Bennhold~\cite{mart_bennhold_conf}.
Fitting the new SAPHIR data together with the beam asymmetry data from
LEPS (Fig.\ref{klambda}, lower left) they find that more than one resonance is needed to
describe the mass region around 1900~MeV. This work is still in progress, so
no definite statement on the existence of new resonances in this data could be
made yet. \\[-3ex]
\subsection{The \boldmath$\gamma p \to p \pi^0\pi^0$\unboldmath-channel }
The $\gamma p \to p \pi^0\pi^0$ cross section was measured by TAPS~\cite{taps}
in the low energy range and by GRAAL~\cite{graal} up to an incoming photon
energy of about 1500~MeV; two peak-like structures are observed~\cite{taps,graal}.
The data has been interpreted within the Laget-~\cite{laget_graal} and Valencia
model~\cite{oset_graal_pap},
resulting in very different interpretations. In the Valencia-model, which is
limited to the low
energy region, the $\rm D_{13}$(1520) decaying into $\Delta (1232)\pi$
dominates the lower energy peak, while
in the Laget-model the $\rm P_{11}$(1440) decaying into $\sigma p$ is clearly
the dominant contribution. Even though both models lead to a
reasonable description of the total cross section, their interpretations of the
data are rather different; in fact, they contradict each other.
Recently data on $\rm \gamma p\to p \pi^0\pi^0$ has also been taken by the
CB-ELSA experiment in Bonn extending the covered energy range up
the $E_{\gamma}$=3.0$\,$GeV~\cite{cbelsa_ppi0pi0}.
To extract the contributing resonances, their quantum numbers and their
properties from the data, a PWA has been done.
The formalism used is summarized in~\cite{formalism}. The fit uses Breit-Wigner
resonances and includes $s$- and $t$-channel amplitudes.
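For orientation, the resonance amplitudes can be written in a generic relativistic Breit-Wigner form (a schematic expression; the full parametrization, including the photo-couplings and vertex functions, is given in~\cite{formalism}):
\begin{displaymath}
A_{\mathrm{BW}}(s) \propto \frac{M\,\Gamma}{M^2 - s - i\,M\,\Gamma}\,,
\end{displaymath}
with $\sqrt{s}$ the invariant mass of the final state and $M$, $\Gamma$ the mass and width of the resonance.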
An unbinned maximum-likelihood fit was performed which has
the big advantage of being event-based; it takes all the
correlations between the five independent variables correctly into account.
The fits include the preliminary TAPS data~\cite{kottu} in the low energy region
in addition to the CB-ELSA data.
Resonances with different quantum numbers were
introduced in various combinations allowing, so far, for the following decay
modes: $\Delta(1232)\pi$, $\rm N(\pi\pi)_s$, $\rm P_{11}(1440)\pi$, $\rm
D_{13}(1520)\pi$ and $\rm X(1660)\pi$.
For a good description of the data resonances like e.g. the $\rm
P_{11}(1440)$, the $\rm D_{13}(1520)$, the $\rm D_{13}/D_{33}(1700)$, the $\rm
P_{13}(1720)$, the $\rm F_{15}$(1680) as well as several additional states are
needed.
One preliminary result of the PWA is a dominant
contribution of the $\rm D_{13}(1520)\to \Delta \pi$
amplitude in the energy range, where the first peak in the cross
section occurs.
Fig.~\ref{xsec_high} shows the total cross section obtained by fitting
the CB-ELSA and the TAPS data and by integrating the result of
the combined fit over phase space. \\
\begin{figure}[t!]
{\includegraphics[height=.2\textheight]{thoma_fig3a.eps}}
{\includegraphics[height=.2\textheight]{thoma_fig3b.eps}}
\caption{Left: Total cross section as obtained by integrating the result of
the partial wave analysis over phase space (solid line), in comparison
to the preliminary TAPS\protect\cite{kottu} and GRAAL\protect\cite{graal}
data. Right: $p\pi^0$ invariant mass for $E_{\gamma}$=0.8-3.0~GeV
in comparison to the result of the PWA.
The plots shows the experimental data (points with error bars), the
result of the PWA (solid gray curve), the contribution of the $\rm
D_{13}$(1520) (dashed black curve) and the phase
space distribution (thin black line), preliminary.}
\label{xsec_high}
\end{figure}
In the CB-ELSA data, baryon resonances decaying not only into $\Delta \pi$ but
also via $\rm D_{13}(1520)\pi$ and $X(1660)\pi$ are observed for the first time.
The enhancements at the corresponding $p\pi$ invariant masses
are clearly visible in Fig.~\ref{xsec_high}, right. The observation of baryon cascades is
also interesting with respect to
the search for states which might not couple to $\pi N$ and $\gamma p$; they
still could be produced in such baryon cascades. \\[-3ex]
\subsection{The \boldmath$e p \to e^{\prime} p \pi^+\pi^-$\unboldmath -channel }
Recently, $2\pi$-electroproduction was investigated by CLAS.
The total cross section for different bins in
momentum transfer $Q^2$ is shown in Fig~\ref{clas1}.
\begin{figure}[t]
{\includegraphics[height=.22\textheight,width=.33\textheight]{thoma_fig4a.eps}}
\caption{Left: Total cross section for $\gamma^* p \to p
\pi^+\pi^-$ for
$Q^2$=0.5-0.8 (full points), 0.8-1.1 (open squares), 1.1-1-5 (GeV/c)$^2$
(open triangles)~\protect\cite{ripani}. Right: Differential cross
sections for W=1.7-1.75 GeV, $Q^2$= 0.8-1.1 (GeV/c)$^2$.
The line corresponds to the fit which includes the known
information on the resonances described in the text. For further
details see~\protect\cite{ripani}. }
\label{clas1}
\end{figure}
The cross section changes with $Q^2$ as one would expect since the
helicity couplings of the resonances depend on $Q^2$.
The data were investigated within an isobar model. The fit takes the
$\Delta\pi$ and the $\rm N\rho$ subchannels into account; in addition,
non-resonant contributions are allowed for.
The fit included 12 known baryon resonances together with the non-resonant
background amplitudes. If all known information on the resonances is included
in the fit, the mass region around 1700~MeV is rather badly described, while the fit agrees
fairly well with the data at low W (Fig.~\ref{clas1}).
At the same time, the fit clearly overshoots the data in the $\rho$ region of
the $\pi^+\pi^-$ invariant mass in the W-bin around 1700~MeV.
This behavior can be traced back to the $\rm P_{13}$(1720)
which has, following the PDG, a rather large $\rm N\rho$-decay
width of 70 to 85$\%$.
A better description of the data is reached by either changing the decay properties
of the known $\rm P_{13}$ completely or by keeping the PDG-state and adding a new $3/2^+$
state of slightly smaller width and a stronger $\Delta\pi$ decay mode. For further
details see~\cite{ripani}.
A recent combined analysis of these data together with the CLAS
$p\pi^+\pi^-$-photoproduction data seems to indicate that two states are needed to
describe the two data sets consistently~\cite{victor}. This analysis is,
however, still in progress. \\[-3ex]
\subsection{Summary}
Indications for new resonances are found in several finals states.
A $\rm D_{15}$(2070) and possible indications for an
$\rm P_{13}$(2200)-state were
found in a combined PWA of the new CB-ELSA $\eta$-photoproduction
together with other data sets.
In the old $\rm K^+\Lambda$-SAPHIR data possible indications for a new
state around 1900~MeV were observed, while the new higher-statistics
data on this final state from SAPHIR, CLAS and LEPS seem to indicate that
even more than one resonance might contribute to the mass region around 1900~MeV.
2$\pi^0$-photoproduction data has been taken by TAPS, GRAAL and CB-ELSA.
The total cross section shows two clear peaks around $E_{\gamma}$=700~MeV and
1100~MeV. The interpretation of the lower-energy peak has been controversial:
it has been attributed either mainly to the $\rm P_{11}(1440)\to p\sigma$
amplitude or to the $\rm D_{13}(1520) \to \Delta \pi$ amplitude. A preliminary
combined PWA of the CB-ELSA and TAPS
data indicates a dominant contribution of the latter amplitude in this energy
range. At higher energies, in addition to the $\Delta\pi$ decay of
baryon resonances, their decays via higher-mass resonances such as the
$\rm D_{13}$(1520) are observed for the first time.
This observation opens up a new opportunity to search for baryon resonances
which may decouple from $\rm N\pi$ and $\rm \gamma N$; they still might be
produced in baryon cascades.
The possible existence of an additional $\rm P_{13}$ state around 1700~MeV in
the CLAS $\pi^+\pi^-$-electroproduction data has also been
discussed recently. \\
So the question is, whether the missing resonances are finally
appearing. The $\rm D_{15}$(2070) would nicely fit to one of the missing states
and the same would also be true for the $\rm P_{13}$(2200) and the
$\rm D_{13}$(1895) state, while a
second $\rm P_{13}$ state in the mass range around 1700~MeV is not expected by the
quark model. \\
Before the above question can be answered, a better
understanding of the spectrum is obviously needed.
One very important step towards this aim is the measurement of
polarization observables. They provide additional constraints for models or PWA
used to extract resonance information from the data. This increases
the sensitivity on smaller contributions and helps to distinguish
between ambiguous PWA solutions.
Such polarization data have recently been taken by various experiments, e.g.\ by
GRAAL, at MAMI, by CLAS, by LEPS, and by CB-ELSA/TAPS. Additional new
measurements will also take place, e.g.\ the double-polarization experiments
planned at ELSA.
\subsection{Acknowledgments}
The author acknowledges an Emmy Noether grant from the Deutsche
Forschungsgemeinschaft.